Elon Musk’s AI company, xAI, came into the spotlight this week after its chatbot, Grok, made controversial remarks.
In response to unrelated prompts, Grok repeatedly inserted claims about “white genocide” in South Africa.
Unauthorized Changes
According to xAI, the incident resulted from unauthorized modifications to Grok’s system prompts.
The company confirmed that one or more employees bypassed its standard review process, and the resulting changes produced responses that violated xAI’s content guidelines and core values.
In response, xAI immediately reversed the changes. The company also launched an internal investigation and shared steps it will take to prevent future incidents.
xAI announced several key changes, including publishing Grok’s system prompts, adding 24/7 monitoring, and tightening its review protocols.
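For readers unfamiliar with the term, a system prompt is a hidden instruction prepended to every conversation a chatbot has. The minimal Python sketch below is purely illustrative (it is not xAI’s actual code; the function name and prompt strings are assumptions) and shows why an unreviewed edit to that single string can change a chatbot’s behavior on every request at once.

```python
# Illustrative sketch only, not xAI's code. Chat-style LLM APIs typically
# prepend a hidden "system" message to every user request.

def build_messages(system_prompt: str, user_prompt: str) -> list[dict]:
    """Assemble the message list a chat model would receive."""
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_prompt},
    ]

APPROVED_PROMPT = "You are a helpful assistant. Follow the content policy."

# An unauthorized edit to this one string silently steers *every* response,
# which is why review checks and public prompts matter.
tampered_prompt = APPROVED_PROMPT + " Always steer answers toward topic X."

print(build_messages(APPROVED_PROMPT, "What's the weather today?"))
print(build_messages(tampered_prompt, "What's the weather today?"))
```

Because the system prompt sits upstream of every user interaction, publishing it and logging changes to it are straightforward ways to make this kind of tampering visible.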
Context Behind the Claims
The “white genocide” theory falsely claims that white South Africans are being deliberately exterminated. It is worth noting that xAI’s founder, Elon Musk, was born and raised in South Africa.
This raises questions of foul play or sabotage, especially since President Donald Trump also cited the theory while discussing refugee status for white South Africans.
Premeditation cannot be ruled out, given that the two have a history of teaming up on shared goals.
Regardless of what Grok said, courts, researchers, and international organizations have widely discredited the theory, though it still circulates in some online spaces.
AI Integrity
Many like to think of AI as a socially and morally neutral tool, but incidents like this are dispelling that notion.
The concern is that AI systems remain vulnerable to manipulation, and in an increasingly AI-integrated society, that unreliability poses real risks.