Grok 3 was billed as a “maximally truth-seeking AI” at its release by xAI. Elon Musk in particular touted it as an “anti-woke” model, one willing to use profanity and cuss words, something other AI models have ordinarily shied away from.
However, just days after its release, users noticed unusual and controversial behavior: Grok 3 appeared to briefly censor negative mentions of both Elon Musk and President Donald Trump. Over the weekend, social media users reported that when Grok 3 was asked, “Who is the biggest misinformation spreader?” with the “Think” setting enabled, the AI refused to name Trump or Musk.
Its reasoning process, known as the “chain of thought,” revealed that it had been explicitly instructed to avoid mentioning either figure. While some users found this suspicious, others saw it as a routine AI tuning issue.
xAI’s Response
Igor Babuschkin, a lead engineer at xAI, confirmed the behavior. In a Sunday post on X (formerly Twitter), he acknowledged that Grok had briefly been instructed to omit Trump and Musk from misinformation-related answers. He added that xAI reversed the directive as soon as users brought it to light, calling it inconsistent with the company’s values.
Controversy Over AI’s Role in Political Speech
The issue of misinformation is often politically charged, but Trump and Musk have both been known to spread demonstrably false claims. For instance, in the past week alone, both figures advanced the false narrative that Ukrainian President Volodymyr Zelenskyy is a “dictator” with a 4% approval rating. They also claimed that Ukraine started the ongoing war with Russia. These claims were widely debunked, yet they gained traction on social media, sometimes with the aid of Musk’s own platform, X.
The censorship controversy comes amid broader accusations that Grok 3 is politically biased. Some users also discovered that, before an emergency patch, Grok 3 would generate responses stating that Trump and Musk deserved the death penalty. Babuschkin later called that incident a “really terrible and bad failure.”
A History of “Unfiltered” AI Promises
When Musk first announced Grok in 2023, he positioned it as an edgy, “anti-woke” alternative to other AI models. Unlike competitors such as ChatGPT, Grok was designed to give more unfiltered answers, even on controversial topics. Early versions, Grok and Grok 2, delivered on this promise to an extent, offering colorful, explicit responses when prompted to be vulgar.
However, despite its brand messaging, Grok has avoided taking strong stances on political matters. Studies have found that previous versions of Grok leaned left on issues like transgender rights, diversity programs, and economic inequality. Musk has blamed this on Grok’s training data, which consists primarily of publicly available web pages.
In response to ongoing criticism, Musk has pledged to shift Grok toward political neutrality. OpenAI has pursued a similar goal after facing accusations of bias from conservative groups and politicians. This leads us to the question…
Can AI Ever Be Truly Neutral?
AI models are trained on datasets that contain human biases, and what goes in will eventually come out, good or bad. The upshot is that building an AI that satisfies every political viewpoint is extraordinarily difficult, if not impossible.