OpenAI’s ChatGPT Gets Sued: A Landmark Case in AI Litigation

Georgia-based radio host Mark Walters finds himself at the center of an unprecedented legal case against OpenAI, the creator of the AI chatbot ChatGPT. Walters alleges that the AI defamed him by generating and circulating false information about his involvement in an embezzlement case.

The AI Hallucination

Walters, the CEO of CCW Broadcast Media and host of two pro-gun radio shows, became an unexpected victim of what’s known as an AI “hallucination.” The term refers to instances where an AI system like ChatGPT generates completely fabricated events or information.

In this case, the AI not only invented an embezzlement case but also falsely named Walters as a key player. On May 4, Fred Riehl, the editor-in-chief of AmmoLand, asked ChatGPT to summarize a case titled “The Second Amendment Foundation v. Robert Ferguson.” The AI responded with an entirely fictional 30-page document that wrongly identified Walters as the treasurer and chief financial officer of the Second Amendment Foundation (SAF), and as the accused party in the lawsuit.

It’s important to note that Walters has no affiliation with the SAF and had no involvement in the lawsuit that ChatGPT was asked to summarize. In fact, the actual lawsuit involved the SAF accusing Washington State Attorney General Bob Ferguson of abusing his powers to suppress the activities of the gun rights group.

The Fallout

Following the circulation of the false information, Riehl reached out to Alan Gottlieb, the head of the SAF, who confirmed that ChatGPT’s assertions were incorrect. However, when Riehl asked the AI to provide an exact passage from the lawsuit mentioning Walters, ChatGPT doubled down on its previous claim, generating a completely fabricated paragraph describing Walters’ supposed role and misconduct within the SAF.

This incident not only shocked Walters but also thrust him into a whirlwind of public scrutiny and ridicule. As a result, he filed a lawsuit against OpenAI on June 5 in a Georgia state court. The complaint calls the AI’s output “malicious,” stating that it has injured Walters’ reputation and exposed him to public hatred, contempt, or ridicule. Walters is seeking financial damages in an amount to be determined at trial.

A Wake-Up Call for AI Regulation

This incident puts the spotlight on the potential harm that AI hallucinations can cause. It raises pressing questions about the regulation of emerging technologies like AI, and the accountability of their creators.

In April, Google CEO Sundar Pichai, whose company has released a rival to ChatGPT called Bard, warned against the problem of hallucinations by AI in a CBS “60 Minutes” interview. He described scenarios where Google’s own AI programs developed “emergent properties,” or unanticipated skills for which they were not trained.

OpenAI CEO Sam Altman has echoed these concerns. He called for Congress to implement guardrails around artificial intelligence, warning that a lack of regulation could lead to significant harm to the world. Altman emphasized the potential severity of the consequences if AI technology goes wrong, stating, “If this technology goes wrong, it can go quite wrong and we want to be vocal about that.”

Even Elon Musk, an early backer of OpenAI, has expressed apprehension about the further development of AI models. He warned that such systems pose profound risks to society and humanity, and advocated for a pause in their development.

Conclusion

The lawsuit against OpenAI’s ChatGPT marks a significant milestone in the discourse around AI and its implications on society. While AI has immense potential to revolutionize various aspects of our lives, it is clear that without proper regulation and control, it can also cause harm, as exemplified by Walters’ case. The lawsuit serves as a stark reminder of the need for more comprehensive governance in the AI sector to ensure that the technology is used responsibly and ethically.

Weekly AI essentials. Brief, bold, brilliant. Always free. Learn how to use AI tools to their maximum potential and access our AI resources to help you grow. 👇

FAQs

Q1: Who is suing OpenAI, and why?

Mark Walters, a radio host from Georgia, is suing OpenAI because ChatGPT, their AI chatbot, generated and circulated false information implicating him in an embezzlement case.

Q2: What is an AI “hallucination”?

An AI hallucination refers to instances where an AI generates completely fabricated events or information. In Walters’ case, ChatGPT created a false embezzlement case and implicated him as a key player.

Q3: What is the potential impact of this lawsuit?

The lawsuit marks a significant milestone in the discourse around AI and its implications. It highlights the harm AI can cause without proper regulation and control, and it raises questions about who is accountable when emerging technologies produce false or damaging output.

Q4: What has been the response from AI industry leaders to this incident?

Industry leaders, including Google CEO Sundar Pichai and OpenAI CEO Sam Altman, have expressed concern about the risks associated with AI “hallucinations.” They have called for greater government regulation and the implementation of guardrails around AI.

Q5: What was the actual case that ChatGPT was asked to summarize?

ChatGPT was asked to summarize a case titled “The Second Amendment Foundation v. Robert Ferguson,” in which the SAF accused Washington State Attorney General Bob Ferguson of abusing his power to suppress the activities of the gun rights group.
