CNET Is Testing an AI Engine. Here's What We've Learned, Mistakes and All

New tools are accelerating change in the publishing industry. We're going to help shape that change.

Connie Guglielmo SVP, AI Edit Strategy

Over the past 25 years, CNET has built its expertise in testing and assessing new technology to separate the hype from reality and help drive conversations about how those advancements can solve real-world problems. That same approach applies to how we do our work, which is guided by two key principles: We stand by the integrity and quality of the information we provide our readers, and we believe you can create a better future when you embrace new ideas.

The case for AI-drafted stories and next-generation storytelling tools is compelling, especially as the tech evolves with new tools like ChatGPT. These tools can help media companies like ours create useful stories that offer readers the expert advice they need, deliver more personalized content, and give writers and editors more time to test, evaluate, research and report in their areas of expertise.

In November, one of our editorial teams, CNET Money, launched a test using an internally designed AI engine – not ChatGPT – to help editors create a set of basic explainers around financial services topics. We started small and published 77 short stories using the tool, about 1% of the total content published on our site during the same period. Editors generated the outlines for the stories first, then expanded, added to and edited the AI drafts before publishing. After one of the AI-assisted stories was cited, rightly, for factual errors, the CNET Money editorial team did a full audit. 

Here's what we've learned.

AI engines, like humans, make mistakes 

We identified additional stories that required correction: a small number needed substantial changes, and several had minor issues such as incomplete company names, transposed numbers or language that our senior editors viewed as vague. Trust with our readers is essential. As always when we find errors, we've corrected these stories and added an editors' note explaining what was changed. We've paused use of the AI tool and will restart it when we're confident the tool and our editorial processes will prevent both human and AI errors.

Bylines and disclosures should be as visible as possible

When you read a story on CNET, you should know how it was created. We changed the byline for articles compiled with the AI engine to "CNET Money" and moved the disclosure so you don't need to hover over the byline to see it. The disclosure clearly says the story was created in part with our AI engine. Because every one of our articles is reviewed and modified by a human editor, the editor also shares a co-byline. To offer even more transparency, CNET started adding a note in AI-related stories written by our beat reporters letting readers know that we're a publisher using the tech we're writing about. 

New citations will help us – and the industry 

In a handful of stories, our plagiarism checker tool either wasn't used properly by the editor or failed to catch sentences or partial sentences that closely resembled the original language. We're developing additional ways to flag exact or close matches to other published content identified by the AI tool, including automatic citations and external links for proprietary information such as data points or direct quotes. We're also adding steps to flag potential misinformation.

Moving forward

We know firsthand that new ideas and change can be unsettling, as we've seen from the interest in CNET's early steps in this space and the speculation about our motives, how we work and what we're doing. There's still a lot that media companies, publishers and content creators need to discover, learn and understand about automated storytelling tools, and we'll be at the forefront of this work. We're committed to improving the AI engine with feedback and input from our editorial teams so that we – and our readers – can trust the work it contributes to.

In the meantime, expect CNET to continue exploring and testing how AI can be used to help our teams as they go about their work testing, researching and crafting the unbiased advice and fact-based reporting we're known for. The process may not always be easy or pretty, but we're going to continue embracing it – and any new tech that we believe makes life better. 

Thanks for reading.