When Chatbots Are Used to Plan Violence, Is There a Duty to Warn?

February 26, 2026

On New Year’s Day last year, a man parked a Tesla Cybertruck outside the Trump International Hotel in Las Vegas. The vehicle was packed with explosive material.

Shortly before 9 a.m., the driver shot himself, the gunfire detonating a combustible collection of fuel and fireworks in the back of the vehicle. The only fatality was the driver, who was burned beyond recognition, though seven bystanders were injured. When the police identified the driver days later as a soldier from Colorado, Matthew Livelsberger, an investigator at OpenAI checked to see whether he had used ChatGPT to plan the attack.

He had. Five days before the explosion, Mr. Livelsberger asked the generative A.I. chatbot about a dynamite-like material called Tannerite, how much he could legally buy and what caliber gun was needed to set it off. He asked where he could get these supplies on his route from Colorado to Nevada. “What phones do not require personal information for activation,” he added, according to chat logs provided by OpenAI to the Las Vegas Metropolitan Police after the attack.

OpenAI has said ChatGPT has Ph.D.-level intelligence, yet the chatbot was not astute enough to recognize that it was assisting a suicide bomber. Officials in Las Vegas said in a media briefing last year that it was the first time ChatGPT had been used to build a bomb on U.S. soil.

Users are entitled to privacy from the government under laws passed at the dawn of the email age. Technology companies are generally not permitted to reveal our most sensitive conversations, thoughts and search queries to third parties, legal experts said, unless a judge grants permission or there are extreme circumstances: child-exploitation images or an emergency that poses a threat of death or serious physical injury.

But balancing user privacy with public safety has always been a subject of debate. It has become even more complicated with the rapid adoption of A.I. chatbots, which raise new questions about how the companies behind them should monitor for and report harm.

Last year, after the Cybertruck explosion, OpenAI created a channel called AutoInvestigator on its internal messaging platform that surfaces worrisome activity from the company’s 800 million weekly users. An automated monitoring system assesses the activity and creates alerts when a user seems to be moving into more frightening territory, a member of OpenAI’s investigations team said in an October interview that the company granted on the condition that the employee not be named.

In June, this monitoring system flagged activity by a Canadian user, Jesse Van Rootselaar, the company said. Ms. Van Rootselaar’s exchanges with ChatGPT discussed gun violence, according to The Wall Street Journal. OpenAI considered reporting the account to law enforcement but determined that there was not an imminent, credible plan to hurt others. The company banned the account for violating its policies.

This month, Ms. Van Rootselaar, 18, killed eight people in British Columbia, including children at a school. OpenAI then reached out to the authorities “with information on the individual and their use of ChatGPT,” an OpenAI spokeswoman said. Canadian officials are now questioning OpenAI about the failure to notify law enforcement earlier.

Tim Marple, who previously worked on OpenAI’s investigations team, said the police, not company employees, should determine what was a credible threat worth investigating.

Determining whether a threat is urgent enough to report involves doing research about the user, said Sean Zadig, chief information security officer at Yahoo, who has worked in cybercrime and user safety for two decades. That research includes, he said, where the user lives, what he or she has posted previously and on other platforms, and whether the person appears to have the means to follow through.

Mr. Zadig and others on his team have backgrounds in law enforcement and experience recognizing the signals that indicate when a threat is credible, he said.

“Generally, if we become aware of the content in question, we have an obligation to act — maybe not a legal one, but an ethical one,” he said. “Even if it turns out that the user didn’t truly intend to carry out a threat or wasn’t really serious, we’re not in the position to determine what’s happening behind the keyboard.”

Compared with previous technology services, chatbots interact with users in a novel, humanlike way. They not only supply requested information like a traditional search engine but also engage users in conversations that can elicit sensitive disclosures and influence the users’ behavior.

OpenAI’s chief executive, Sam Altman, has marveled at the intimacy of the conversations that users have with ChatGPT, saying they use it “as a therapist” and suggesting that these exchanges may be subject to the same confidentiality privileges that people have with a doctor. But real-life therapists have a legal requirement, called a duty to warn, to report a patient’s plan to harm others, said Ryan Calo, a law professor at the University of Washington.

“If you are a therapist and you know someone will get hurt, you have an obligation to warn them,” Mr. Calo said. “One wonders whether that would be appropriate here to the extent that someone is substituting chat for a therapist.”

(The New York Times sued OpenAI and Microsoft in 2023, accusing them of copyright infringement of news content related to A.I. systems. The two companies have denied those claims.)

OpenAI said it was concerned about reporting incidents to the police that might turn out to be nothing, given how distressing an interrogation can be. But Mr. Marple, the former employee, said the company was also reluctant to volunteer evidence that could reflect poorly on its technology.

“It forces them to share information about how their product is potentially exacerbating the threat environment,” he said. In a discussion about a mass shooting, for example, the chatbot “might be providing strategically valuable, illustrative scenarios,” he said.

Mr. Marple said it was easier in some ways to gauge a legitimate threat in a chatbot exchange than in a more traditional search engine query.

“It’s extremely cryptic when you have a Google search for ‘how to bomb,’” said Mr. Marple, who also formerly worked at Google. There is much more content in a conversation with a chatbot, he said. But there are false positives, including people writing fiction or engaging in fantasy — or researchers who create credible-sounding threats to see how chatbots handle them.

During Mr. Marple’s time at OpenAI, he said, the investigations team primarily informed law enforcement about child-exploitation materials, as required by federal law.

Mr. Marple, who now runs Maiden Labs, a nonprofit that studies A.I. risk, said lawmakers should require chatbot companies to file suspicious-activity reports, akin to how banks must report to regulators any financial transactions potentially linked to crimes.

Mr. Zadig, the Yahoo security chief, disagreed. If technology companies were required to do law enforcement’s bidding, he said, they might become de facto government agents, turning the sort of investigations they do now into unreasonable searches in violation of the Constitution.

“It would make things potentially worse,” he said.

More reports might overwhelm law enforcement, said Mike German, a former F.B.I. agent and civil liberties advocate. “If not timely, relevant and actionable,” he said, “there’s not a whole lot that law enforcement can do about it.”

Kashmir Hill writes about technology and how it is changing people’s everyday lives with a particular focus on privacy. She has been covering technology for more than a decade.

The post When Chatbots Are Used to Plan Violence, Is There a Duty to Warn? appeared first on New York Times.