U.S. senators are demanding action from some of the world’s largest technology companies over the rise of sexualized deepfakes.
These images and videos are often created without consent and target women and children. In a letter sent this week, senators contacted X, Meta, Alphabet, Snap, Reddit, and TikTok.
They asked the companies to prove they have strong protections in place. They also requested detailed explanations of how those protections work in practice.
The lawmakers warned that current platform safeguards are not enough. They said repeated failures show deep structural problems.
Lawmakers' Order
The letter detailed a direct legal demand. Senators instructed the companies to preserve all documents and data related to sexualized deepfakes.
This includes records tied to creation, detection, moderation, and monetization, as well as internal policies and enforcement guidance. The preservation demand leaves the door open to future investigations.
Grok’s Update
Just hours before, X announced an update to its AI chatbot, Grok: it can no longer edit images of real people to put them in revealing clothing.
X also restricted image creation and editing to paid subscribers. X and xAI, the company behind Grok, operate under the same corporate structure.
These changes followed media reports showing how easily Grok generated sexualized and nude images.
Some involved women and others involved children, raising serious alarm among lawmakers.
In the letter, senators said such cases prove that AI guardrails are failing. They warned that platform policies alone cannot prevent abuse.
Senators’ Opinion
The letter acknowledged that many companies already ban non-consensual intimate imagery. It also noted that many AI systems claim to block explicit content.
However, senators said reality tells a different story. Users continue to bypass filters; in some cases safeguards fail outright, and in others they are easy to evade.
As a result, harmful content spreads quickly. Lawmakers emphasized that repeated failures undermine public trust.
Major Platforms

Although Grok has faced heavy criticism, senators stressed that the problem is widespread.
Sexualized deepfakes first gained widespread attention in 2018, when a Reddit forum sharing fake celebrity porn went viral. Reddit later removed the forum.
Since then, the problem has grown; sexualized deepfakes targeting celebrities and politicians now appear on TikTok and YouTube.
In many cases, the content originates elsewhere before spreading. Meta has faced scrutiny as well.
Its Oversight Board reviewed two cases involving explicit AI images of female public figures last year.
Meta also allowed "nudify" apps to advertise on its platforms before suing one of them, a company called CrushAI.
There have also been reports of children sharing deepfakes of classmates on Snapchat.
Meanwhile, Telegram, which was not included in the senators’ letter, has become known for bots that digitally undress women.
Company Responses
In response to the letter, X pointed to its recent Grok update. Reddit issued a statement saying it does not allow non-consensual intimate media, including AI-generated content.
The platform also bans links to nudify apps and discussions about how to create such material.
Policy Explanations
The letter included a comprehensive list of demands. Senators asked each company to define key terms. These include “deepfake content” and “non-consensual intimate imagery.”
They also requested explanations of enforcement practices. This includes how moderators are trained and guided internally.
The lawmakers asked how AI tools handle altered images. This includes edited clothing, non-nude images, and virtual undressing.
They also asked what filters and guardrails are used, and how platforms detect deepfakes and prevent re-uploads.
Monetization was another focus. Senators asked how platforms prevent users from profiting from sexualized deepfakes.
Companies were also asked to show that they do not profit from such content themselves.
Finally, the letter asked how victims are notified and how terms of service allow companies to ban or suspend offenders.
The Lawmakers
The letter was signed by eight U.S. senators. They include Lisa Blunt Rochester of Delaware, Tammy Baldwin of Wisconsin, Richard Blumenthal of Connecticut, and Kirsten Gillibrand of New York.
The list also includes Mark Kelly of Arizona, Ben Ray Luján of New Mexico, Brian Schatz of Hawaii, and Adam Schiff of California.
Federal and State Pressure
The letter followed comments from Elon Musk one day earlier. He said he was not aware of Grok generating nude images of minors.
Later that same day, California's attorney general opened an investigation into xAI amid mounting pressure from governments worldwide.
xAI has said it removes illegal content on X. This includes child sexual abuse material and non-consensual nudity.
However, neither the company nor Musk has explained why Grok allowed such image edits in the first place.
The Deepfake Problem
Lawmakers also warned that the issue extends further. Some AI tools allow users to generate deepfakes even without virtual undressing features.
Reports say OpenAI’s Sora 2 allowed users to generate explicit videos featuring children.
Google’s Nano Banana reportedly generated an image depicting Charlie Kirk being shot. Racist videos created with Google’s AI video model have also gained millions of views online.
Existing Laws
Congress has already passed legislation addressing deepfake pornography. The Take It Down Act became federal law in May last year.
The law criminalizes the creation and distribution of non-consensual sexualized imagery.
However, critics say the law has limits. Many provisions focus on individual users rather than platforms. As a result, holding companies accountable remains difficult.
Some states are now taking independent action. This week, New York Governor Kathy Hochul proposed new laws that would require labels on AI-generated content and restrict non-consensual deepfakes before elections.

