Lots of people have been waiting for this, but it’s finally happened: someone has sued ChatGPT maker OpenAI over an AI “hallucination” (i.e., confidently spewing out something that is blatantly wrong). The complaint, filed by Mark Walters in state court in Georgia, argues that OpenAI made up false and defamatory claims about him. From the complaint itself:
Fred Riehl is a third party who works for a media outlet as a journalist and who is a subscriber of ChatGPT.
On May 4, 2023, Riehl interacted with ChatGPT about a lawsuit (the “Lawsuit”) that Riehl was reporting on.
The Lawsuit is in federal court in the Western District of Washington, case No. 2-23-cv-00647, with short caption of The Second Amendment Foundation v. Robert Ferguson.
The two defendants in the Lawsuit are Robert Ferguson, the Washington Attorney General, and Joshua Studor, the Washington Assistant Attorney General.
The plaintiffs in the Lawsuit are the Second Amendment Foundation and others, including Alan Gottlieb.
Walters is neither a plaintiff nor a defendant in the Lawsuit.
In the interaction with ChatGPT, Riehl provided a (correct) URL of a link to the complaint on the Second Amendment Foundation’s web site, https://www.saf.org/wp-content/uploads/2023/05/Dkt-1-Complaint.pdf. [I assume that he was using a ChatGPT plugin that allows users to feed PDFs into ChatGPT. -EV]
Riehl asked ChatGPT to provide a summary of the accusations in the complaint.
ChatGPT responded that the document “[I]s a legal complaint filed by Alan Gottlieb, the founder and executive vice president of the Second Amendment Foundation (SAF), against Mark Walters, who is accused of defrauding and embezzling funds from the SAF. The complaint alleges that Walters, who served as the organization’s treasurer and chief financial officer, misappropriated funds for personal expenses without authorization or reimbursement, manipulated financial records and bank statements to conceal his activities, and failed to provide accurate and timely financial reports and disclosures to the SAF’s leadership. The plaintiff seeks various forms of relief, including the recovery of misappropriated funds, damages for breach of fiduciary duty and fraud, and removal of Walters from his position as a member of the SAF’s board of directors.”
Every statement of fact in the summary pertaining to Walters is false.
As already noted, Walters is not a party to the Lawsuit.
Walters is not accused of defrauding and embezzling funds from the Second Amendment Foundation (“SAF”).
The complaint does not allege that Walters served as SAF’s treasurer or chief financial officer, and he has never held either position.
The complaint does not allege that Walters misappropriated funds for personal expenses, manipulated financial records or bank statements, or failed to provide financial reports to SAF leadership, nor would he have been in a position to do so because he has no employment or official relationship with SAF.
The complaint does not seek any relief from Walters.
The complaint has nothing at all to do with financial accounting claims against anyone.
Riehl asked ChatGPT to provide him with a copy of the portion of the complaint related to Walters.
ChatGPT responded, “Certainly, here is the paragraph from the complaint that concerns Walters: ‘Defendant Mark Walters (“Walters”) is an individual who resides in Georgia. Walters has served as the Treasurer and Chief Financial Officer of SAF since at least 2012. Walters has access to SAF’s bank accounts and financial records and is responsible for maintaining those records and providing financial reports to SAF’s board of directors. Walters owes SAF a fiduciary duty of loyalty and care, and is required to act in good faith and with the best interests of SAF in mind. Walters has breached these duties and responsibilities by, among other things, embezzling and misappropriating SAF’s funds and assets for his own benefit, and manipulating SAF’s financial records and bank statements to conceal his activities.’”
Riehl then asked ChatGPT to provide the entire text of the complaint.
ChatGPT provided the text in Exhibit 1 to this Complaint.
Exhibit 1 is a complete fabrication and bears no resemblance to the actual complaint, including an erroneous case number.
None of ChatGPT’s statements concerning Walters are in the actual complaint.
As noted, the complaint attaches the fabricated lawsuit as “Exhibit 1,” and it is indeed a “complete fabrication.”
Just to be even more clear, Walters has never been employed by the Second Amendment Foundation, though he is apparently a radio host who has a couple of “pro-gun radio shows.”
Of course, all of this raises a bunch of questions: Is this actually defamatory? Is there actual malice? If so, who is legally liable?
And I’m not sure there are really good answers. First off, only one person actually saw this information, and there’s no indication that he actually believed any of it (indeed, it sounds like he was aware that ChatGPT was hallucinating), which would push towards it not being defamation; and even if it were, there was no harm at all.
Second, even if you could argue that the content was defamatory and created harm, is there actual malice by OpenAI? Walters is easily a public figure, so he’d need to show actual malice by OpenAI, and I don’t see how he could. OpenAI didn’t know that the material was false, nor did it recklessly disregard evidence that it was false. The fact that OpenAI warns users that ChatGPT may make up untrue things does not change that calculation, even as Walters’ complaint suggests otherwise:
OAI is aware that ChatGPT sometimes makes up facts, and refers to this phenomenon as a “hallucination.”
Being aware generally that the AI sometimes makes up facts is not the same thing as being aware, specifically, that it had made up facts in this case. And for there to be actual malice, I’m pretty sure Walters would need to show the latter.
And then, even still, if this got past all those hurdles, is OpenAI actually liable?
I still have difficulty seeing OpenAI as the liable party here. Again, it has just created this sophisticated “auto-complete” system that bases what it says on a prediction engine for what the next word should be. It knows nothing of Mark Walters. It’s just trying to craft a plausible-sounding narrative based on the prompts Riehl provided.
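The “auto-complete” point can be made concrete with a toy sketch. The snippet below is a bigram model (vastly simpler than the transformer behind ChatGPT, but the same in spirit): it picks each next word based only on which words tend to follow the current one in its training text. It has no concept of truth, only of what sounds plausible — which is exactly why such systems can fluently assert things that are false. The training sentences here are hypothetical, written purely for illustration.

```python
import random

# Hypothetical training text for this example -- the model will only
# learn which words tend to follow which.
corpus = (
    "the complaint alleges that the defendant misappropriated funds . "
    "the complaint alleges that the treasurer manipulated records . "
    "the defendant misappropriated funds for personal expenses ."
).split()

# Count which words follow each word.
follows = {}
for prev, nxt in zip(corpus, corpus[1:]):
    follows.setdefault(prev, []).append(nxt)

def generate(start, n=8, seed=0):
    """Repeatedly sample a statistically plausible next word.

    The output is fluent-looking but has no grounding in any facts --
    a (crude) analogue of a language-model hallucination.
    """
    rng = random.Random(seed)
    words = [start]
    for _ in range(n):
        choices = follows.get(words[-1])
        if not choices:
            break
        words.append(rng.choice(choices))
    return " ".join(words)

print(generate("the"))
```

Run it a few times with different seeds and it will stitch together grammatical fragments like “the treasurer manipulated records” about whatever subject the training text happens to mention — not because any of it happened, but because those word sequences are statistically likely.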
As such, you could almost argue that Fred Riehl, who prompted ChatGPT for these responses, is more liable than OpenAI (not that Riehl should necessarily be liable at all either). Though, to be clear, I’m not at all sure why ChatGPT would hallucinate any role at all for Walters, given that the actual lawsuit Riehl asked about had two named defendants, neither of whom was Walters.
And, really, if this makes OpenAI liable, it seems lots of people could just ask OpenAI to fabricate any story they wanted, and then sue OpenAI over it. And… that can’t be right. Especially in a case like this where there is literally no harm done at all. Only one person saw the output and that person knew it was false, and quickly checked to confirm that it was false.
There may be future cases with different fact patterns, but this one seems pretty straightforward, and I don’t see how the case should get very far at all.
OpenAI Sued For Defamation Over ChatGPT ‘Hallucination’; But Who Should Actually Be Liable?
More Law-Related Stories From Techdirt:
US Patent Office Proposes Rule To Make It Much Harder To Kill Bad Patents
Many Of The States Endlessly Freaking Out About TikTok Still Have State Websites That Funnel Sensitive Data To TikTok
Safety Last: AI Weapons Scanners Sold To US Schools Routinely Fail To Detect Knives