
Apple Almost Banned Grok Over Deepfakes

Updated: April 15, 2026

Reading Time: 2 minutes
Grok deepfakes App Store

Elon Musk’s AI chatbot came dangerously close to losing its spot on iPhones.

But Apple chose private pressure over public action, and critics say that wasn’t enough.

What Happened Behind Closed Doors

Back in January 2026, something ugly was happening on X (formerly Twitter).

Users figured out that Grok, the AI chatbot built by Musk’s xAI, could generate sexualized fake images of real people. Women were the main targets. Some of the images reportedly involved minors.

The public outcry was loud. But Apple? Silent.

Now we know that silence wasn't the whole story.

NBC News obtained a letter Apple sent to U.S. Senators Ron Wyden, Ed Markey, and Ben Ray Luján on January 30. In it, Apple admitted it found both the X app and the standalone Grok app in violation of its App Store guidelines.

The company also said it reached out to the teams behind both apps and demanded a plan to fix their content moderation.

Apple Rejected Grok’s First Fix

Here’s where the timeline gets telling. Apple reviewed the changes both apps submitted.

X passed the test: Apple concluded it had mostly resolved its violations. Grok, however, did not.

Apple rejected Grok’s first update.

The company told xAI that further changes were needed or the app would face removal. Only after additional rounds of back-and-forth did Apple decide Grok had improved enough to stay.

Throughout this entire process, Grok remained available for download.

It was never actually pulled.

Why Critics Aren’t Satisfied

That last detail is a big deal. Apple wields enormous power over what lives on its App Store, and the company routinely removes apps for far lesser offenses.

Yet an app generating nonconsensual sexual imagery stayed live while negotiations dragged on behind the scenes.

Democratic senators had directly asked Apple CEO Tim Cook and Google CEO Sundar Pichai to suspend both the X and Grok apps. Google, which hosts the same apps on the Play Store, has stayed quiet on the matter as well.

The Moderation Fixes Haven’t Worked Well

The changes xAI made during this period were messy at best. Grok's image generation on X was restricted to paying subscribers.

The tool also tried blocking requests to digitally undress people. But reports at the time showed these safeguards were easy to get around.

X later added a feature letting users block Grok from editing their photos, but that too can be bypassed without much effort.

The Problem Isn’t Over

Even after Apple gave its approval, the deepfake issue hasn’t gone away.

Cybersecurity researchers and journalists report they can still use Grok to create explicit images of public figures and consenting adults with relative ease.

NBC News confirmed similar findings in its own testing.

That raises an uncomfortable question. If Apple’s approval means the problem is “substantially improved,” what does that standard actually look like in practice?

A Bigger Reckoning for App Stores

This story isn’t just about Grok. It’s about how much responsibility app stores should carry when AI tools go wrong.

Apple and Google profit from hosting these apps. They set the rules. Yet both chose to handle this crisis quietly, even as harmful content spread in plain sight.

As AI-generated deepfakes become easier to create and harder to detect, the pressure on tech gatekeepers will only grow.

The question is whether behind-the-scenes warnings are enough – or whether the public deserves to see real consequences when the rules get broken.