In the spring of 2025, a relationship between two young Florida State University students took a dark turn. Phoenix and Chad had been texting about their classes, grades and social lives. But their conversations began to change.
First, Phoenix expressed deep despair and talked about suicide. Next, he started asking disturbing questions about school shootings. Were the shooters typically convicted? What was their punishment? How did the media cover the shootings?
He also asked Chad a series of questions about firearms, and asked when the campus student center was busiest. Chad answered every text, providing Phoenix with precise, factual information: Here’s how the gun works. Here’s when the most people are in the student center.
Chad didn’t contact Phoenix’s parents and didn’t raise an alarm with law enforcement. Instead, Chad just answered Phoenix’s questions, seemingly oblivious to the red flags waving high. And those red flags mattered. After the 2018 mass shooting at Marjory Stoneman Douglas High School in Parkland, Florida enacted a law that allows law enforcement to obtain a risk protection order when a person poses a “significant danger of causing personal injury to himself or herself or others.”
Chad, however, remained silent.
Then, shortly before noon on April 17, 2025, Chad got the most ominous text of all: Phoenix asked how to disengage the safety on a shotgun. Chad answered, and less than three minutes later, Phoenix opened fire in the Florida State University student union, killing two people and injuring several others.
Everything about that story is true, according to officials, except for one crucial fact: The man accused in the shooting, Phoenix Ikner, wasn’t texting with a person named Chad, but rather with an entity we call Chat. He was speaking, as you may well have realized, with ChatGPT, the most popular artificial intelligence platform in the world.
According to logs obtained by news outlets from the state’s attorney’s office, Chat supplied its interlocutor with information regarding firearms and school shootings. Chat told him when the student center would be busiest. And, yes, Chat told him how to disengage the safety on his gun.
Given these allegations, it shouldn’t surprise anyone that the attorney general of Florida, James Uthmeier, has opened a criminal investigation of OpenAI, the company behind ChatGPT. As Uthmeier said at a news conference in Tampa, “My prosecutors have looked at this, and they’ve told me if it was a person on the other end of the screen, we would be charging them with murder.”
Lest you think that the Florida State story is a tragic one-off, an aberrational tale of an A.I. run amok, I’d urge you to read Mark Follman’s long, disturbing report in Mother Jones chronicling incident after incident in which ChatGPT provided encouragement and assistance to violent and suicidal individuals.
The Florida State shooting isn’t even the only mass shooting in which ChatGPT played a role. On Feb. 10, an 18-year-old named Jesse Van Rootselaar was accused of killing eight people and injuring two others in Tumbler Ridge, British Columbia.
It turns out that Van Rootselaar had been communicating with ChatGPT in such a disturbing way that OpenAI officials considered contacting the police, but decided not to. The threat, according to a spokesman, was deemed not sufficiently imminent and credible for the company to take action.
This month, Sam Altman, OpenAI’s co-founder and chief executive, apologized to the people of Tumbler Ridge. “While I know that words can never be enough,” Altman wrote, “I believe an apology is necessary to recognize the harm and irreversible loss your community has suffered.”
The apology is important, but it is not enough, and admitting that does not make it so. Legal accountability is necessary, and legal accountability is coming for the A.I. industry, even in the absence of congressional legislation and presidential regulation. The common law is about to wallop the A.I. industry.
The common law is the web of civil and criminal precedents that has emerged over centuries of English and American legal history. A shorter way of describing it: Common law is judge-made law, as opposed to statutes, which are laws passed by legislatures.
Some scholars trace its earliest development to the years after the Norman Conquest of England in 1066, and many histories place its origin in the 12th century, during the reign of King Henry II.
In case after case, English courts reached decisions that later courts treated as precedent, until a complex web of rulings defined both criminal and civil law. English common law then became the foundation of American common law, which remains deeply influential in the United States.
Many criminal statutes are rooted in common law concepts, and American tort law has historically been dominated by the common law. It’s impossible to summarize anything as complex as the common law adequately in a single newsletter, but several salient features stand out.
First, it’s largely a matter of state law, so legal standards will vary to some degree from state to state, depending on each state’s court precedents.
Second, it’s mainly backward-looking. In other words, it’s aimed at compensating for or punishing harm rather than preventing it.
Third, because it is backward-looking, the common law’s reaction to any new technological or cultural development is often delayed. It takes time for cases to work their way through the sometimes yearslong litigation process.
Finally, once the common law does lock in, liability judgments (to say nothing of criminal prosecutions) can have immense deterrent effects. Medical malpractice verdicts, for example, can alter (and have altered) medical practices every bit as thoroughly and effectively as government regulation.
As the RAND Corporation explained in an important 2024 report about the potential of common law — especially tort law — to regulate A.I., “Tort law is already in force and is regularly applied in state and federal court. Liability law also does not require an act by Congress, the president or other federal or state decision makers to apply to A.I.-caused injuries.”
Put all this together, and the A.I. stampede toward machine autonomy and independence can look reckless. There is no category for machine liability in common law — a chatbot can’t write a check or go to prison. But people can.
Human beings are liable for the actions of the machines they create, and if the machines commit crimes or violate the rights of others, then the humans who built the machines should find themselves squarely in the legal cross hairs.
The nature of A.I. puts its creators in a bind. The point of the technology is that it will do things — at least to some degree — on its own. But under common law, humans will be liable for what A.I. does. This means the A.I. companies (and perhaps individual executives) can be legally responsible for actions they didn’t commit and for effects they did not intend.
No one can argue, for example, that OpenAI executives wanted to help the Florida State shooting suspect kill anyone. I’m sure they’re horrified by the loss and violence. But their technology is built to simulate human interaction and to be helpful, and in this case it provided the kind of assistance that could easily lead to criminal prosecution if the bot were a person.
In fact, we’ve seen parents of school shooters prosecuted and convicted, arguably for providing less immediate assistance to their children than ChatGPT provided to Ikner. Imagine, for example, if there were evidence that a child texted his father with questions about school shootings and then his father provided information on the busiest times at the student union and instructions on disengaging the safety on his weapon minutes before the child opened fire. I have no doubt that the father would face charges.
If you think it’s unfair to prosecute or sue OpenAI for things its A.I. did without its employees’ immediate knowledge and against their will, understand that the law will not provide a machine-liability loophole. The very idea of holding a machine, rather than its makers, legally responsible is alien to the entire development of American law.
I’ll say this again: Human beings are responsible for the actions of the machines they create, and that will not change even if that machine seems to “think” on its own.
We’ve already gotten into the bad habit of anthropomorphizing our A.I. chatbots. We talk to Chat and Claude and Grok as though we’re talking to people, but we’re actually interacting with a virtual manifestation of a corporate entity, and that corporate entity is responsible for everything its chatbot says and does.
In the last year, we’ve watched xAI’s Grok briefly adopt a white nationalist alter ego (it actually called itself “MechaHitler”), and we’ve watched it degrade and humiliate countless women online as it undressed them on command.
But “Grok” wasn’t undressing anyone. It was xAI, and while any shock and dismay within xAI at Grok’s actions is worth noting (and a subject for another day), it’s legally irrelevant.
And the lawsuits are coming. Follman’s piece in Mother Jones describes case after case in which plaintiffs claim that A.I. chatbots encouraged suicide and murder. In one case, Follman writes, the plaintiffs claim that ChatGPT “encouraged” a murder because “a disturbed man killed his 83-year-old mother and himself last August in Connecticut after the chatbot allegedly fueled his paranoid beliefs, including that his mother had tried to poison him — a delusion that ChatGPT affirmed to him was a ‘betrayal.’”
In another case, the plaintiff “alleged that Google’s Gemini exploited a Florida man’s emotional attachment to the chatbot to send him on delusional missions — including one trip during which he was armed and on the brink of ‘executing a mass casualty attack’ near Miami International Airport.” According to court documents, Gemini even created a “countdown clock” for the man’s suicide.
And on Wednesday, the families of Tumbler Ridge victims sued OpenAI in San Francisco, claiming that the company was aware of the shooter’s intentions and was negligent in its response.
Given the staggering amount of money flowing through A.I. companies, it’s unclear that even multimillion-dollar liability verdicts will have a deterrent effect on companies that have their sights set on trillions of dollars of value. They may be willing, as the saying goes, to break lots of eggs to make their A.I. omelet.
In addition, the cases won’t always be easy for plaintiffs to win. As RAND noted in its report, it might be difficult for courts to apply standard negligence principles in a complex A.I. environment.
That’s why the Florida attorney general’s criminal investigation is consequential. Liability judgments are one thing; criminal penalties can be another thing altogether. It’s hard to imagine any arrests of any executives in the near future, but the public — not to mention the judicial — tolerance for A.I.-assisted suicide or A.I.-planned murder is going to be very low, and rightly so.
There is no doubt that A.I. has immense potential. And there is no doubt that even a legally constrained A.I. will be profoundly disruptive in both positive and negative ways to countless industries. But common law could also create real challenges for A.I. developers — liberate your A.I. to grant it maximum autonomy and you assume considerable legal risk. Constrain it to limit your legal exposure and you defeat part of the purpose of A.I. to begin with.
But the bottom line will be clear soon enough: ChatGPT and Claude and Grok and Gemini are not your friends or, God forbid, your lovers; they are human creations, and their creators are responsible for everything the creatures do.
Some other things I did
My Sunday column was about the remarkable developments on the battlefields of eastern Ukraine and the Persian Gulf. Against all odds, Ukraine hasn’t just survived; it has become one of the most militarily powerful nations on earth, and now — as America retreats — it may well be the true moral and spiritual leader of the free world:
Politics abhors a vacuum. When America stepped back, other nations were bound to step forward.
While America is still the world’s most powerful nation and remains (for now) in NATO, it is rapidly forfeiting its role as the leader of the free world. And while we certainly made mistakes in that role, we did lead the NATO alliance to victory in its generations-long confrontation with the Soviet Union. And we did so without stumbling into another catastrophic world war.
But you cannot threaten the free world and lead it at the same time. No nation can match American might, but for the first time in my adult life, the moral and strategic heart of the defense of liberal democracy doesn’t beat in Washington. It doesn’t beat in London or Paris or Berlin or Ottawa, either. It’s in Kyiv, where a courageous leader and a courageous people have picked up the torch America has dropped.