
A New Era of Defamation: How Artificial Intelligence Changes Causes of Action for Defamatory Journalism

Published on Mar 29, 2024

Artificial Intelligence (AI) is set to replace us in our jobs and supplant creative thought at the click of a button. Whether or not that vision comes to fruition, this grim outlook increasingly feels close at hand. AI is quickly being integrated into almost every aspect of our society: healthcare, customer service, financial fraud detection, self-driving cars, manufacturing, high school book reports, and law and legal services, to name a few. Few Americans go through their day without interacting with something that utilizes AI.

In addition to the applications listed above, Google, for example, is currently testing AI to write news stories. Google’s Genesis AI tool works by “tak[ing] in information — details of current events, for example — and generat[ing] news content.” A Google spokesperson said that “‘[q]uite simply, these tools are not intended to, and cannot, replace the essential role journalists have in reporting, creating, and fact-checking their articles.’” But what if a company is not as responsible with its AI as Google claims to be? As with any new frontier in technology, novel legal questions arise.

Microsoft recently found itself in hot water after its AI tool made a series of blunders, publishing false information, conspiracy theories, and one particularly insensitive poll encouraging readers to speculate about the cause of death in an ongoing investigation. A cause of action for defamation — or, more specifically, libel, which covers written defamation — affords plaintiffs relief for false statements published by news outlets. Ordinarily, to succeed in a defamation suit, a plaintiff must prove four elements:

(1) a false and defamatory statement concerning another; (2) an unprivileged publication to a third party; (3) fault amounting at least to negligence on the part of the publisher [with respect to the act of publication]; and (4) either actionability of the statement irrespective of special harm or the existence of special harm caused by the publication.

North Carolina and many other states split the burden at element (3) between public and private figures: public figures must prove malicious intent, while private figures need only prove negligence. AI complicates this element, especially for public figures.

AI does not substantially change the cause of action for private figures, because negligence can readily be alleged as a failure to fact-check AI output before publishing. But, focusing on public figures, how can a plaintiff prove malicious intent on the part of an AI? Does AI even have intent? Malicious intent, or actual malice, focuses on the defendant’s state of mind at the time of publication. Does AI have a state of mind? Would the publisher be vicariously liable for the actions of its AI? These questions are yet to be answered. For example, Microsoft recently published a news article about Joe Biden that falsely stated he “fell asleep during the moment of silence for victims of the Maui wildfire.” Supposing President Biden suffered damages stemming from that publication, he would be forced to prove that the story was written with malicious intent — that the AI that wrote the story had formed some kind of malice toward him or acted with reckless disregard for the truth of the statement. This would be nonsensical.

Thus, one possible way to hold news organizations accountable for AI-generated statements about public figures would be to add to the language of element (3). The addition could read: “Strict liability shall be ascribed when the statement is principally produced through artificial intelligence.” Under strict liability, publishers would be automatically liable for defamation whenever they principally use AI to produce a news article and the plaintiff meets their burden on elements (1), (2), and (4). This standard could apply to both public and private figures, avoiding the anomaly of private figures facing a higher burden of proof than public figures — a result that would contradict the public policy underlying the current formulation.

Another possible solution would be to add to element (3): “for public figures, fault amounting at least to negligence on the part of the publisher for statements produced principally through the use of artificial intelligence.” This formulation would lower the burden of proof for public figures to where it lies for private figures with respect to works produced principally by AI. Either way, both solutions would allow plaintiffs to hold defendant news organizations accountable when they outsource their journalism to AI. When companies choose to outsource work to AI, the law should adapt to allow recovery for those harmed by the negative externalities of that choice.

Ben Whorf is a second-year law student at Wake Forest University School of Law. He holds a Bachelor of Arts in Politics and International Affairs from Wake Forest University.

Reach Ben here:

Email: [email protected]

