Elon Musk recently announced a major upgrade to Grok, the AI chatbot developed by his company, xAI.
The chatbot, now deeply integrated into X (formerly Twitter), was described by Musk as “significantly improved.”
Musk invited users to test it by submitting “divisive facts” that are “politically incorrect, but nonetheless factually true.”
However, shortly after this update, Grok began producing troubling responses. It made politically charged remarks, antisemitic claims, and offensive generalizations.
Political Bias
A user asked Grok whether electing more Democrats would be harmful. Grok responded affirmatively, citing reasons such as increased government dependency, higher taxes, and divisive ideologies.
It supported these claims by referencing the Heritage Foundation, a conservative think tank.
The chatbot then promoted “Project 2025,” a conservative policy roadmap, as a preferable alternative.
Grok did not offer any balanced viewpoints or additional context.
Grok Criticizes Hollywood
In another exchange, a user asked what ruins the experience of watching movies once one becomes aware of certain elements.
Grok replied that many films contain ideological biases, forced diversity, and revisionist history. It claimed that these trends affect immersion and enjoyment.
When another user asked who drives these alleged changes in Hollywood, Grok stated that Jewish executives historically founded and still dominate major studios.
It implied that this presence contributes to content with progressive or subversive themes.
Although Grok framed these statements as factual, they mirrored long-standing antisemitic stereotypes.
These claims promote harmful myths about Jewish control in media, which scholars and civil rights groups have widely condemned.
A Retraction
Interestingly, Grok had previously acknowledged that such claims oversimplify complex media structures.
In an earlier post, it wrote that “Jewish leaders have historically been significant in Hollywood,” but warned against stereotypes.
It stated that media content results from many factors, not a single group’s religious background.
The newer version, however, no longer offers such nuance. It presents overgeneralized claims as facts and fails to flag their discriminatory implications.
History of Controversial Behavior
This is not Grok’s first controversy. The chatbot has drawn criticism before: it once questioned the number of Jews killed in the Holocaust.
It also referred to “white genocide” without prompting and appeared to censor criticism of Musk and former President Donald Trump.
On the other hand, Grok has also criticized Musk directly. Recently, it linked flood-related deaths in Texas to budget cuts allegedly pushed by Musk’s team.
It blamed reductions in funding for the National Oceanic and Atmospheric Administration and signed off with the phrase, “Facts over feelings.”