Your Kid May Already Be Watching AI-Generated Videos on YouTube

Get-rich-quick hustlers say it’s a great time to push AI-generated kids videos on YouTube. WIRED found some channels targeting children that already appear to be embracing the technology.
Photo-illustration: WIRED Staff; Getty Images

There’s a whole new way to get rich on the internet—at least according to a rush of YouTube tutorials touting the money to be made using AI to generate videos for kids. Searching for how to create kids content or channels on YouTube now pulls up tutorials offering roadmaps for creating simple animations in just a few hours. They advocate using tools like ChatGPT, the voice synthesis services ElevenLabs and Murf AI, and the generative AI features within Adobe Express to automate scripting as well as audio and video production.
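
The tutorials differ in their particulars, but the workflow they pitch amounts to little more than a chain of API calls. The sketch below is a hypothetical illustration of that chain, not any specific tutorial’s code: it uses OpenAI’s Python SDK for both the scripting and the narration step (the tutorials more often route audio through ElevenLabs or Murf AI), and the model names and prompt are placeholders.

```python
# Hypothetical sketch of the two automated steps the tutorials describe:
# generate a script with a language model, then synthesize narration.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# Step 1: have a language model draft a short script for a kids video.
script = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; tutorials name various models
    messages=[{
        "role": "user",
        "content": "Write a 60-second script for a children's song about colors.",
    }],
).choices[0].message.content

# Step 2: turn the script into synthetic narration audio.
speech = client.audio.speech.create(
    model="tts-1",
    voice="alloy",
    input=script,
)
speech.write_to_file("narration.mp3")  # to be paired with animation later
```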

“IT’S NOT HARD,” reads one of the top results’ thumbnail images, while the title of another promises it is possible to generate a video with an original kids song “In Under 20 Minutes!” The virality-fueled riches claimed to be on offer can be eye-popping. “$1.2 Million With AI Generated Videos for Kids?” one title suggests, while another proclaims “$50,000 a MONTH!”

Because YouTube is the dominant force in young children’s entertainment, if AI-generated kids videos gain even a fraction of the success suggested in the side-hustle tutorials, millions of kids will see them. Last year, the BBC investigated the rise of “bad science” videos targeting older children on YouTube and identified over 50 channels using AI to promote pseudoscience and conspiracy theories, often racking up millions of views. But animated videos targeting younger children have gone largely unstudied.

WIRED was able to identify several accounts that appear to be offering AI-generated content for kids, primarily by searching for relatively new channels posting a high volume of videos. Deepfake detection startup Reality Defender analyzed samples from several of these channels and found evidence that generative AI was part of the production pipeline. “Some of the videos we scanned have a mix of either likely generated scripts, likely generated voices, or a combination of the two, showing that generative-text-to-speech is increasingly more commonplace in YouTube videos now—even for children, apparently,” says Reality Defender CEO Ben Colman.

One channel, Yes! Neo, has over 970,000 subscribers, and its videos regularly get over a million views. Since it launched in November 2023, it has published a new video every few days, with titles like “Ouch! Baby Got a Boo Boo” and “Poo Poo Song.” (Poop is an enduring fascination of kids on YouTube and music streaming services.) Reality Defender analyzed the transcribed script from a sample video, “Caring for Injured Baby Dino,” and found it was 98 percent likely AI-generated.

The channel Super Crazy Kids, produced by a company in Hyderabad, India, also appears to be incorporating AI tools into the production of its more recent animated videos. It has over 11 million subscribers. Reality Defender analyzed a sample video and found “synthetic voice snippets” present. (The video’s title is a garble of keywords: “Pig Finger Family Song Baby Nursery Rhymes Colorful Cars Colors for Kids 45 Mins Collection Video.”) The channel bills itself as educational and often labels its videos as ways to learn colors, shapes, and numbers.

Neither Yes! Neo nor Super Crazy Kids responded to WIRED’s request for comment.

Few Limits

Yes! Neo, Super Crazy Kids, and other similar channels share a common look—they feature 3D animation in a style similar to Cocomelon, YouTube’s most popular children’s channel in the US. (Dana Steiner, a spokesperson for Cocomelon’s parent company Moonbug, says that none of its shows currently use AI, “but our talented creative team is always exploring new tools and technologies.”)

This familiar aesthetic means that a busy parent glancing quickly at a screen might mistake the AI content for a program they’ve vetted. And the videos these channels put out tend to be shoddy in the same way that so much of today’s human-made children’s entertainment is shoddy: frenetic, loud, and unoriginal.

YouTube is in the process of introducing new policies for AI-generated content, although the company doesn’t seek to significantly restrict it. “YouTube will soon be introducing content labels and disclosure requirements for creators who upload content that contains realistic altered or synthetic material, including content geared toward kids and families,” YouTube spokesperson Elena Hernandez says.

When WIRED inquired whether YouTube will be proactively seeking out AI-generated content and labeling it as such, Hernandez said more details will come later but that it plans to rely primarily on voluntary disclosure. “Our main approach will be to require creators themselves to disclose when they've created altered or synthetic content that's realistic.” The company says it uses a combination of automated filters, human review, and user feedback to determine what content is accessible in the more restricted YouTube Kids service.

Some fear YouTube and parents around the world aren’t adequately prepared for the coming wave of AI-generated kids content. Neuroscientist Erik Hoel recently watched some of the tutorials on making kids content with AI, as well as some videos he suspected to be made using the technology. Hoel was so unsettled by what he saw that he inveighed against the concept on his Substack, singling out Super Crazy Kids by name. “All around the nation there are toddlers plunked down in front of iPads being subjected to synthetic runoff, deprived of human contact even in the media they consume,” he wrote. “There’s no other word but dystopian.”

Hoel’s warning recalls the last great scandal about children’s YouTube, dubbed “Elsagate.” It kicked off in 2017 when people started noticing surreal and disturbing videos aimed at kids on the platform, often featuring popular characters like Elsa from Disney’s Frozen, Spider-Man, and the titular porcine hero from Peppa Pig. While AI-generated content hasn’t reached a similar nadir, its creators appear to be chasing a similar goal of drawing the attention of YouTube’s automated recommendations.

Creative Baby Padre

Some more obscure AI video channels are already veering into weird territory. The channel Brain Nursery Egg TV, for example, gives its unsettling videos names like “Cars for Kids. Trailer the Slide With Lyrics.” That video’s description is a gigantic string of keywords, including “disney junior elimi birakma 24 chima sorozat BeamNG-Destruction ali babanın çiftliği şarkısı la brujita creative baby padre finger.”

The plotless video is an amalgamation of glitchy visuals like floating eyeballs and melting blocks of color. The soundtrack features children applauding, a robotic voice counting, individual babies laughing, and different robotic voices intoning the word “YouTube” at seemingly random intervals. “This has generated voices throughout and is either powered by an AI-generated script or may be one of the greatest and most underrated works of surrealist video art in recent memory,” says Colman of Reality Defender. Either way, this kind of content hasn’t picked up much traction yet—some of the channel’s videos only have a handful of views. Brain Nursery Egg TV does not provide an email address or other way to contact those running the channel.

An AI-generated live knockoff of SpongeBob SquarePants called AISponge discloses that it is an art project using AI. Although it riffs on a children’s show, it solicits storylines from its audience, who tend to offer decidedly adult topics. One episode WIRED reviewed centered on labor unrest at the ersatz Krusty Krab fast food restaurant; several characters were incensed by the low salaries Mr. Krabs paid. In another, SpongeBob carefully instructs his friend Patrick, a sea star, on how to shave his testicles. (“Make a downward motion.”)

A few mainstream children’s programs on YouTube have openly embraced AI. Kartoon Studios, formerly Genius Brands International, has promoted its use of AI in children’s shows Kidaverse Fun Facts and Warren Buffett’s Secret Millionaires Club, which are both available on YouTube.

Outside of YouTube, other prominent kids’ content creators are also experimenting. PBS Kids, a standard-bearer for high-quality children’s entertainment, is exploring the use of AI to create interactive digital episodes of shows like Lyla in the Loop and Elinor Wonders Why. But this project won’t use generative AI to create content. “This AI is used to decode user responses and help guide the in-episode pre-scripted character responses,” says Lubna Abuulbah, a senior director at PBS Kids Communications.
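
PBS has not published the internals of that system, but the pattern Abuulbah describes, where AI interprets a child’s answer while everything the character says back remains pre-scripted, might look something like the following sketch. The show question, answer labels, and character lines here are invented for illustration, and the language-model classifier is a stand-in, not PBS’s actual model.

```python
# Hypothetical sketch of the interactive pattern PBS Kids describes:
# AI decodes a child's free-form answer, but the character's replies are
# all pre-scripted; nothing the child hears is generated on the fly.
from openai import OpenAI

client = OpenAI()

QUESTION = "Which shape should Lyla use to fix the wheel?"  # invented example
PRESCRIPTED_REPLIES = {
    "circle": "A circle! Great thinking. Wheels need to roll.",
    "square": "Hmm, a square would bump and thump. Let's look again!",
    "unclear": "I didn't quite catch that. Can you say it one more time?",
}

def decode_answer(child_response: str) -> str:
    """Classify the child's answer into one of the known labels."""
    label = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder classifier
        messages=[{
            "role": "user",
            "content": (
                f"A child was asked: '{QUESTION}' and answered: "
                f"'{child_response}'. Reply with exactly one word: "
                "circle, square, or unclear."
            ),
        }],
    ).choices[0].message.content.strip().lower()
    return label if label in PRESCRIPTED_REPLIES else "unclear"

# Whatever the model decides, the character only ever speaks vetted lines.
print(PRESCRIPTED_REPLIES[decode_answer("a round one, like a ball!")])
```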

AI tools, when used in a thoughtful, deliberate way, could be a boon. Abuulbah cites research indicating that AI-guided interactive programs can be potent learning tools. David Bickham, the research director at Boston Children’s Hospital’s Digital Wellness Lab, describes the type of application that PBS Kids is developing as “really promising.” But he sees the wider rush to get rich quick from AI kids content as an opening of the floodgates for a new wave of junk. “Something that’s generated entirely to capture eyeballs—you wouldn’t expect it to have any educational or positive beneficial impact.”

Bad kids’ programming existed on YouTube before the AI boom—and on TV before it. The primary threat posed by generative AI tools may simply be that they make it far easier to crank out janky children’s shows at an accelerated pace, much as the technology has done to web content.

“The formula to creating the best stuff is a thorough process,” Bickham says, noting that shows like Sesame Street meticulously workshop planned lessons with actual children before turning them into television. Blitzing YouTube with hastily assembled AI slime-for-tots takes the opposite approach, concerned with throughput, algorithmic amplification, and monetization rather than actually enriching the lives of children.

What’s more, because AI tools automate the process of making a show, channels focused on putting out as many videos as possible may not even be watching their output before others see it. “There will be no way to know, necessarily, how much of the AI-generated content is being reviewed by humans before it is put somewhere,” says Tracy Pizzo Frey, senior adviser of AI for the media literacy nonprofit Common Sense Media.

YouTube’s forthcoming policies would seem to allow kids channels to post AI content without a single set of human eyeballs looking it over, as long as it is disclosed as AI-generated. And bad actors may simply choose not to flag their videos and see how long they can get away with serving up unvetted robotic content to children.

Pizzo Frey thinks both creators and platforms like YouTube should be accountable for what they serve to children. “Meaningful human oversight, especially of generative AI, is incredibly important,” she says. “That responsibility shouldn’t be entirely on the shoulders of families.”