A new wave of disturbing videos is flooding YouTube: AI-generated cartoons filled with gore, child abuse themes, and sexualized content.
Many target young children. Some channels use familiar characters like Minions, Elsa, and kittens to lure unsuspecting viewers.
But the content is far from innocent.
A Familiar Threat
This phenomenon is being called the second coming of “Elsagate.” The original Elsagate scandal emerged in 2017.
Thousands of videos, often disguised as child-friendly, showed animated characters in abusive or sexual situations.
YouTube responded by removing millions of ads and thousands of videos. Several channels were also terminated.
Now, AI has made this problem worse.
Today, creating disturbing videos is faster, cheaper, and easier. Generative AI tools let anyone produce content in minutes, and no artistic skill is required.
With the right prompts, people can make highly realistic animations that mimic real cartoons.
Examples That Shock the Viewer
One video shows a Minion falling into a sewer and mutating into a monster. A child screams as the creature sneaks up behind them. The voiceover sings a haunting rhyme while the animated child gets eaten.
Another channel, called Go Cat, features cute kittens in horrifying scenarios. In one clip, a kitten is beaten by its mother with a frying pan.
Then, sad music plays while the parent tearfully apologizes. The background looks like something from a popular kids’ show, but the scenes are violent and upsetting.
Still, the channel’s description says, “Every episode is filled with imagination and colorful animation for kids to enjoy!”
AI Amplifies the Problem
AI has changed everything. Traditional animation takes time and skill, but AI-generated videos can be made quickly and in bulk.
Thanks to the rise of simple tutorials and trending prompts, creators can now push out dozens of videos a day.
The result? More harmful content, faster.
Search terms as simple as “minions” or “cute cats” now lead to videos with disturbing messages. Some even feature pregnant versions of Elsa in lingerie. Others showcase cartoon animals undergoing cruel experiments.
Many of these videos also try to game YouTube’s algorithm. They use hashtags like #funnycat, #familyfun, or #animalrescue to appear next to genuine kids’ or educational videos.
YouTube’s Response
In response to these concerns, YouTube said it removed several flagged channels. It also disabled monetization on a few others.
According to a spokesperson, “A number of videos have also been removed for violating our Child Safety policy.”
The spokesperson added that YouTube uses a mix of human moderators and technology to enforce its rules and remove harmful content. However, the problem remains.
Go Cat is still live, and so are other channels with reposted videos. Some change their names slightly, while others just open fresh accounts.
Content Creators Speak Out
Many YouTubers have started raising alarms. One, known as BitterSnake, began highlighting these videos in January. He shared screenshots that show workers in an office, possibly overseas, creating the violent AI content.
In one photo, a young man sits at a desk with headphones on. His screen shows a cat lying in a pool of blood while its kitten watches, visibly scared.
The image is chilling.