SAN FRANCISCO — Florida’s attorney general announced a criminal investigation of ChatGPT-maker OpenAI, alleging the company’s chatbot advised the man accused of killing two people in a shooting at Florida State University last year on which ammunition to use and where and when to strike.
“The chatbot advised the shooter on what type of gun to use, on which ammo went with which gun, on whether or not a gun would be useful at short range,” Florida Attorney General James Uthmeier said at a news conference Tuesday. “If it was a person on the other end of that screen, we would be charging them with murder.”
Uthmeier’s office sent subpoenas to OpenAI on Tuesday, asking for the artificial intelligence company’s policies on how to respond when its users make threats to harm others during conversations with ChatGPT, according to a statement. The criminal investigation announced Tuesday follows a civil inquiry Uthmeier announced this month.
“Last year’s mass shooting at Florida State University was a tragedy, but ChatGPT is not responsible for this terrible crime,” said OpenAI spokesperson Kate Waters. “After learning of the incident, we identified a ChatGPT account believed to be associated with the suspect and proactively shared this information with law enforcement.”
ChatGPT provided “factual responses to questions with information that could be found broadly across public sources on the internet, and it did not encourage or promote illegal or harmful activity,” Waters said. (The Washington Post has a content partnership with OpenAI.)
Two people were killed and six others injured in the shooting at Florida State in Tallahassee last April after a college student opened fire on campus, authorities said at the time. The suspected shooter, Phoenix Ikner, was shot by police who had swarmed to the campus and was later hospitalized. Ikner has been charged with multiple counts of murder and attempted murder.
“ChatGPT advised the shooter on what time of day would be appropriate for the shooting to interact with more people and where on campus would be the place to encounter a higher population,” Uthmeier said at the Tuesday news conference.
OpenAI faces intense scrutiny from law enforcement and elected officials after police alleged that mass shooters in Florida and Canada discussed their intentions to harm others in conversations with ChatGPT, and after several families of people who died by suicide filed lawsuits claiming the chatbot contributed to their deaths.
The tragic incidents have fueled a debate about what responsibilities AI companies have to monitor user conversations and flag concerning ones to law enforcement.
OpenAI has said it has improved how ChatGPT responds to discussions suggesting a person is at risk of harming themselves or others. The company is working to implement policies that would alert law enforcement to high-risk conversations in certain cases.
Concerns about AI’s impact on people and on the economy are becoming political issues, and Florida’s attorney general and its governor, Ron DeSantis, have expressed their own skepticism about the AI industry.
The state has also become a battleground in a growing split inside the Republican Party over how to regulate AI. DeSantis pushed the state’s legislature to pass an “AI bill of rights” that would have instituted a series of limits on how companies could use AI in consumer products, but after opposition from President Donald Trump, legislators did not pass the bill.