
Who Pays When A.I. Is Wrong?

November 12, 2025

Sales representatives for Wolf River Electric, a solar contractor in Minnesota, noticed an unusual uptick in canceled contracts late last year. When they pressed the former customers for an explanation, the answers left them floored.

The clients said they had bailed after learning from Google searches that the company settled a lawsuit with the state attorney general over deceptive sales practices. But the company had never been sued by the government, let alone settled a case involving such claims.

Confusion became concern when Wolf River executives checked for themselves. Search results that Gemini, Google’s artificial intelligence technology, delivered at the top of the page included the falsehoods. And mentions of a legal settlement populated automatically when they typed “Wolf River Electric” in the search box.

With cancellations piling up and their attempts to use Google’s tools to correct the issues proving fruitless, Wolf River executives decided they had no choice but to sue the tech giant for defamation.

“We put a lot of time and energy into building up a good name,” said Justin Nielsen, who founded Wolf River with three of his best friends in 2014 and helped it grow into the state’s largest solar contractor. “When customers see a red flag like that, it’s damn near impossible to win them back.”

Theirs is one of at least six defamation cases filed in the United States in the past two years over content produced by A.I. tools that generate text and images. They argue that the cutting-edge technology not only created and published false, damaging information about individuals or groups but, in many cases, continued putting it out even after the companies that built and profit from the A.I. models were made aware of the problem.

Unlike other libel or slander suits, these cases seek to define content that was not created by human beings as defamatory — a novel concept that has captivated some legal experts.

“There’s no question that these models can publish damaging assertions,” said Eugene Volokh, a leading First Amendment scholar at the University of California, Los Angeles. In 2023, he dedicated an entire issue of his publication, the Journal of Free Speech Law, to the question of A.I. defamation.

“The question,” Mr. Volokh said, “is who is responsible for that?”

One of the first A.I. defamation cases was filed in Georgia in 2023. The plaintiff, Mark Walters, a talk radio host and Second Amendment advocate, argued that the ChatGPT chatbot had hurt his reputation when it responded to a query from a journalist about gun rights by falsely stating that Mr. Walters had been accused of embezzlement.

A fundamental task in many defamation cases in the United States is proving intent. But since it’s impossible to know what’s going on inside the algorithms that drive A.I. models like ChatGPT, the suit — and others like it — tried to pin blame on the company that wrote the code, in this case OpenAI.

“Frankenstein can’t make a monster that runs around murdering people and then claim he had nothing to do with it,” said John Monroe, Mr. Walters’s lawyer.

Mr. Walters’s case was dismissed in May, well before it could reach trial. Among other reasons, the court noted that the journalist had conceded that he did not trust the claim made by ChatGPT and had quickly verified that it wasn’t true. That question — whether third parties are likely to be convinced that the allegedly defamatory content is true — is crucial in such cases.

“If the individual who reads a challenged statement does not subjectively believe it to be factual, then the statement is not defamatory,” Judge Tracie Cason wrote in her ruling. OpenAI did not respond to a request for comment.

To date, no A.I. defamation case in the United States appears to have made it to a jury. But at least one has been settled: a suit filed in April against Meta by Robby Starbuck, a right-wing influencer known for his campaigns against diversity, equity and inclusion programs.

Mr. Starbuck claimed that while scrolling on X, he had found an image containing false information about him that had been generated by Llama, one of Meta’s A.I. chatbots. The text in the image stated that Mr. Starbuck was in the U.S. Capitol during the riots on Jan. 6, 2021, and that he had ties to the QAnon conspiracy theory. Mr. Starbuck said he was at home in Tennessee on that date and had nothing to do with QAnon.

Meta settled in August without ever formally responding to the complaint. As part of the settlement, the company brought Mr. Starbuck on as an adviser focusing on policing Meta’s A.I.

“Since engaging on these important issues with Robby, Meta has made tremendous strides to improve the accuracy of Meta A.I. and mitigate ideological and political bias,” Meta said in a statement at the time. The company declined to disclose additional terms of the settlement or whether Mr. Starbuck was paid for his advisory work.

Last month, Mr. Starbuck also sued Google over its A.I. In the defamation suit, which seeks $15 million in damages, he claimed that Google’s large language models — the technology that helps power the chatbots — had made patently false statements about him. This time, however, he argued that the errors were the deliberate “result of political animus baked into its algorithm.”

Google has not yet formally responded to the complaint, but Jose Castañeda, a company spokesman, said the majority of Mr. Starbuck’s claims dated to 2023 and were addressed at the time. He added, “As everyone knows, if you’re creative enough, you can prompt a chatbot to say something misleading.”

No prompts were required for Dave Fanning, a popular Irish D.J. and talk show host, to discover what he said was defamatory material about him on the internet. The content, featured on Microsoft’s MSN web portal, was an article with his photograph on top and the headline “Prominent Irish broadcaster faces trial over alleged sexual misconduct.”

Mr. Fanning, who has not been charged with sexual misconduct, learned about it after people reached out to ask about the allegations. Eventually, he discovered that a news site based in India had used an A.I. chatbot to produce the article and had added his photo alongside the text. Microsoft then posted the article, which was briefly visible to anyone in Ireland who logged on to MSN or used the Microsoft Edge browser.

Early last year, Mr. Fanning sued both Microsoft and the Indian news outlet in Irish court, one of a handful of A.I. defamation suits that have been filed outside the United States.

Microsoft declined to comment about the pending case, which Mr. Fanning said he wanted to take to trial. “What Microsoft did was traumatizing, and the trauma turned to anger,” he said.

(The New York Times has sued Microsoft and OpenAI, claiming copyright infringement of news content related to A.I. systems. The two companies have denied the suit’s claims.)

Nina Brown, a professor of communications at Syracuse University who specializes in media law, said she expected that few if any of these cases would ever make it to trial. A verdict finding a company liable for the output of its A.I. model, she said, could invite a flood of litigation from others who discover falsehoods about themselves.

“I suspect that if there is an A.I. defamation lawsuit where the defendant is vulnerable, it’s going to go away — the companies will settle that,” Ms. Brown said. “They don’t want the risk.”

She, Mr. Volokh and several other legal experts said the Wolf River case appeared particularly strong, in part because the company has been able to document specific losses due to the falsehood.

In its complaint, it cited $388,000 in terminated contracts. Vladimir Marchenko, the chief executive, said in an interview that the company had lost out on new customers, too.

“We’ve found out that some competitors bring up the fake attorney general claims in consultations with potential clients to convince them not to use us,” he said. He noted that he had also discovered posts on Reddit citing the false Google results. One called Wolf River a “possible devil corp.”

The company, in correspondence with Google, said it lost nearly $25 million in sales in 2024 and was seeking total damages of at least $110 million. The suit is on hold while a federal judge weighs whether to keep the matter or send it back to state court, where it was filed in March.

Potentially working in Wolf River’s favor is that it is not likely to be categorized as a public figure. That is an important distinction because in the United States, private figures have a lower bar to prove that defamation occurred: They have to show only that Google acted negligently, rather than with what is known as “reckless disregard” for the truth.

Mr. Castañeda, the Google spokesman, acknowledged in a statement that “with any new technology, mistakes can happen,” and noted that “as soon as we found out about the problem, we acted quickly to fix it.”

Yet as recently as Monday, a Google search of “wolf river electric complaint” produced a result saying that “the company is also facing a lawsuit from the Minnesota attorney general related to its sales practices” and suggesting that customers “file a complaint with the Minnesota attorney general’s office, as they are already involved with the company.”

Mr. Marchenko, who immigrated to Minnesota from Ukraine as a child and played junior hockey with Mr. Nielsen, said he worried that the company could go out of business if the A.I. results didn’t change.

“There’s no Plan B for us,” he said. “We started this from the ground up. We have our reputation, and that’s it.”

Ken Bensinger covers media and politics for The Times.

