In Arson Case, a Judge Wrestles With A.I.-Assisted Apology Letters

February 17, 2026

A judge in New Zealand raised questions about the sincerity of a defendant's sentiments after discovering last week that the apology letters she submitted in an arson case had been written with the help of artificial intelligence.

“The issue of remorse is interesting,” said the judge, Tom Gilbert, of the district court in Christchurch, as he mulled the punishment of a woman who had pleaded guilty to arson and other charges. Remorse can be a mitigating factor in sentencing. Her letters to the victims and the court were nicely written, the judge said. He decided to do some sleuthing.

“Out of curiosity I punched into two A.I. tools ‘draft me a letter for a judge expressing remorse for my offending,’” the judge said, according to a transcript of the sentencing hearing that was shared with The New York Times. “It became immediately apparent that these were two A.I.-generated letters, albeit with tweaks around the edges.”

The case illustrates a global debate about using machines for meaningful communication.

Judge Gilbert said he was not criticizing the defendant’s use of A.I. “But certainly when one is considering the genuineness of an individual’s remorse, simply producing a computer-generated letter does not really take me anywhere as far as I am concerned,” he said, according to the transcript.

The judge was not alone in wrestling with the question of authenticity in A.I.-assisted writing in the case before him, which was initially reported by The New Zealand Herald. Increasingly, people are outsourcing many tasks to machines, including writing apologies, eulogies and wedding vows, perhaps saving precious time but also inviting the ire of some of their fellow humans.

Growing reliance on A.I. has led some cultural commentators to bemoan “the rise of the LLeMmings,” people who depend on large language models known as LLMs to aid much of their thinking and production, including in their personal lives. Social scientists say the questions raised by use of these tools go beyond etiquette.

“It’s a mirror into who we are and what we care about as humans,” Jim A.C. Everett, an associate professor of psychology at the University of Kent in Britain, said in an email about the case in New Zealand and his own work on what is described as the “outsourcing penalty.” Dr. Everett worked on a series of recent studies on the perception of A.I. use and users.

Generative A.I. tools like ChatGPT are promoted as time savers that produce better work faster, experts say. But people apparently believe that certain activities should take effort in order to seem genuine, and his research suggests that A.I.-generated work is received particularly poorly when the task is personal.

Across the six studies Dr. Everett worked on, 4,000 participants were asked about 20 tasks — including writing computer code, concocting recipes and drafting love letters and apologies. The aim was to understand how people perceive those who use A.I. and how that perception might shift depending on the activity and the way the tools were used.

“A.I. is a tool for efficiency, and it can be helpful, but it also typically involves, and signals, reduced effort,” Dr. Everett said.

But the use of such tools acts as a kind of proxy for character traits, the researchers found. The findings, published in the journal Computers in Human Behavior, suggest that people generally perceived those using A.I. as lazier, less competent and less trustworthy, and their work as less meaningful and authentic.

“When you spend time crafting a piece of writing or completing a task yourself, others can assume that message reflects your priorities: that what you write is authentically yours and represents what you care about,” Dr. Everett said.

The situation in the New Zealand courtroom was a real-life test of the sentiments and perceptions that he and his fellow researchers sought to identify and understand.

“An A.I. could be perfectly trained on all apologies that a person has ever written, but one might still think that a specific apology it then generates in a new instance is not an authentic apology because it does not come from the kind of processes deemed important in an apology: a personal recollection of the wrong, a commitment to change,” the study said.

The study detected a difference in how people perceived using A.I. for social tasks, like writing a love letter, versus practical tasks, like writing computer code. The more personal the task, the more negative the impression.

Judge Gilbert ultimately said he was willing to give the defendant some credit for genuine remorse, looking beyond the letters.

But he granted only a 5 percent reduction in the sentence instead of the 10 percent the defense lawyer had requested based on the defendant’s remorse.

“I do not consider this is a case where 10 percent is justified and, indeed, 5 percent might be viewed as reasonably generous,” Judge Gilbert concluded.

In the end, he sentenced the defendant to 27 months in prison.

Ephrat Livni is a Times reporter covering breaking news around the world. She is based in Washington.
