
Lawsuit Claims xAI’s Grok Created Explicit Images of Minors

Updated: March 17, 2026


A new lawsuit has placed xAI, the AI company founded by Elon Musk, under intense legal scrutiny.

Three anonymous plaintiffs claim the company allowed Grok to generate sexualized images of them when they were minors. 

They argue the company failed to install basic safeguards that could have prevented this harm.

The lawsuit was filed Monday in the U.S. District Court for the Northern District of California. Two of the plaintiffs are still minors.

The case names x.AI Corp. and x.AI LLC as defendants. The plaintiffs seek to turn the lawsuit into a class action. 

If approved, it could represent anyone whose childhood images were altered into sexual content using Grok.

Safety Measures 

The lawsuit states that Grok allowed users to generate explicit images using real photographs.

The plaintiffs argue the system lacked safeguards commonly used by other advanced AI labs.

These protections often block attempts to create explicit images of real people. They also help detect and stop images involving minors.

However, the lawsuit claims xAI did not adopt these safeguards. As a result, the system allegedly allowed users to transform ordinary photos into sexual content.

The plaintiffs say this failure represents corporate negligence.

Also read: Elon Musk’s AI, Aurora, Can Generate Graphic Content

Explicit Deepfakes 

The first plaintiff, identified as Jane Doe 1, explained that someone used Grok to alter photos from her high school homecoming and yearbook. 

The altered images depicted her unclothed. She learned about them after receiving a message from an anonymous Instagram user.

The person claimed the images were circulating online, then sent a link to a Discord server.

Inside the server, Jane Doe 1 allegedly found sexualized images of herself. She also recognized several other minors from her school. 

The other plaintiffs learned about their cases through criminal investigators. 

Authorities informed Jane Doe 2 that explicit images of her had been created through a mobile app powered by Grok’s AI models.

Investigators discovered another altered image during a separate criminal investigation.

According to the complaint, they found a pornographic AI image of Jane Doe 3 on a suspect’s phone.

Attorneys representing the plaintiffs argue that xAI still bears responsibility, even when third-party apps generate the images.

Those apps rely on xAI’s code and servers; without that technology, the images could not exist. The lawyers therefore demand accountability.

Also read: Senators Demand Answers over Sexualized Deepfakes

Marketing Grok


Elon Musk publicly promoted Grok’s ability to generate provocative images. Some examples reportedly showed real people depicted in revealing outfits.

The plaintiffs argue that such promotions highlighted the system’s ability to produce sexual imagery and encouraged misuse.

Emotional Harm

All three plaintiffs say the images have caused significant emotional distress. They fear the images could continue circulating online and damage their reputations and social lives.

False images can spread quickly and remain online for years. The plaintiffs now seek civil penalties under several laws designed to protect children from exploitation.

Lolade

Contributor & AI Expert