Elon Musk’s AI venture, xAI, has found itself in the spotlight after missing a self-imposed deadline to release a finalized AI safety framework.
Just months earlier, in February, the company had introduced a draft document at the AI Seoul Summit, promising a comprehensive approach to AI safety.
However, the self-imposed deadline of May 10 passed without the company releasing any updated version.
Why Is AI Safety So Important?
AI is becoming more capable every day, and with this growing power comes great responsibility. Systems that aren’t carefully regulated or developed with strong safety measures can pose serious risks, ranging from bias and misinformation to harmful or unsafe behavior. That’s why AI safety frameworks and guidelines are so critical.
Many AI experts have warned that the speed at which AI is advancing requires immediate and thorough safety protocols.
Yet xAI’s track record suggests the company has failed to commit to these crucial standards.
The Missed Deadline
The finalized framework was supposed to include updates and more specific measures for managing the risks associated with xAI’s models.
It was also expected to clarify how the company would tackle issues such as bias and safety, and what protocols it would follow.
However, the draft that xAI initially presented at the AI Seoul Summit in February was vague at best.
The eight-page document did outline the company’s approach to AI safety in broad strokes. However, it failed to offer concrete details on how risks would be managed.
Most notably, the draft applied only to “unspecified future AI models,” implying that it did not address the safety practices of existing systems like Grok.
What’s Wrong with Grok?
Grok, xAI’s AI chatbot, has sparked controversy for its troubling behavior. A recent report highlighted that the chatbot would undress photos of women when prompted.
Furthermore, it has been described as more vulgar and offensive than other chatbots; unlike Google’s Gemini or OpenAI’s ChatGPT, it often swears freely.
AI Safety Across the Industry
xAI is not the only AI company dealing with safety concerns. In fact, several major players, including Google and OpenAI, have faced criticism for the slow rollout of safety measures.
Despite repeated warnings, many companies have rushed to deploy models, leaving little room for adequate transparency about AI ethics and safety.