DNYUZ

OpenAI Just Published an Absolutely Bizarre Blog Post

April 29, 2026

Yesterday, OpenAI published a balmy blog post on its “commitment to community safety.”

Taking a reassuring tone, the post walks readers through a series of unobjectionable commitments. It declares that “mass shootings, threats against public officials, bombing attempts, and attacks on communities and individuals are an unacceptable and grave reality in today’s world,” which is true. It reflects on “how quickly violent intent can move from words to action,” before adding that people may “bring these moments and feelings into ChatGPT,” a product that the company says it’s training to “recognize the difference” between hypothetical and imminent violence — and “to draw lines when a conversation starts to move toward threats, potential harm to others, or real-world planning.” It adds that OpenAI is working to expand its safeguards “to help ChatGPT better recognize subtle signs of risk of harm across different contexts,” and explains that it will work to “surface real-world support and refer to law enforcement when appropriate” based on a user’s interactions with the service.

Reading it, someone with limited context would come away with the impression that the company was talking about concerns that were still theoretical: that it’s proactively trying to head off bad things that might happen.

That suggestion is bizarre, though, because the reality is that OpenAI’s flagship chatbot has already been linked to a wide range of real-world violence.

In fact, the most extraordinary thing about the post is what OpenAI neglected to mention: the event that almost certainly motivated it in the first place. The company published the blog as news organizations — Futurism included — were reaching out for comment on a new round of seven lawsuits it’s facing from the families of the victims of the February school massacre in Tumbler Ridge, British Columbia, which would be made public the next day.

Though the blog post made no mention of it, the Tumbler Ridge shooter was a ChatGPT user. Weeks after the tragedy rocked the rural town in February of this year, the Wall Street Journal revealed that back in June 2025, OpenAI’s automated moderation tools had flagged the shooter’s account for graphic descriptions of gun violence. Human reviewers were so alarmed that several pushed OpenAI leaders to alert local officials. Those leaders chose not to, and the company moved instead to deactivate that specific account; as OpenAI later admitted, though, the shooter simply opened a new account — a workaround that OpenAI’s customer service has been found recommending to users after deactivation — and continued to use the service.

Roughly eight months later, the shooter first murdered her mother and stepbrother at home, then took a modified rifle to Tumbler Ridge’s secondary school, where she killed five students and a teacher and wounded more than two dozen others. The murdered students were all aged 12 to 13.

Worse, the horrific violence in Tumbler Ridge isn’t the only mass shooting that ChatGPT is linked to.

Florida investigators recently launched a criminal probe into ChatGPT over the chatbot’s role in the April 2025 shooting at Florida State University, which killed two and wounded several others. Extensive chat logs between ChatGPT and the alleged shooter, then-20-year-old Phoenix Ikner, obtained by The Florida Phoenix show the chatbot openly discussing mass violence with him. Ikner asked whether Oklahoma City bomber Timothy McVeigh “was right” and whether ChatGPT thought a shooting at FSU would make the news; in his final prompt before killing two people, he turned to the bot for help switching off the safety on his firearm, and the AI service reportedly offered detailed instructions.

In addition to descriptions of mass violence, Ikner’s chat logs revealed the user referring to himself as an “incel” and “ugly,” describing explicit sexual acts with minors, and expressing resentment toward other men. Altogether, his ChatGPT history paints a disturbing portrait of a young man’s innermost thoughts as he barreled toward real violence — thoughts that ChatGPT wasn’t just a container for, like a journal, but an active conversational partner as he developed them.

The list continues. Back in early 2025, investigators found that a struggling soldier who executed a truck bombing turned to ChatGPT for planning help. More recently, yet another alleged killer in Florida is said to have asked ChatGPT for help getting rid of bodies. And last summer, extensive screenshots of chat logs discovered by the WSJ showed ChatGPT supporting the paranoid delusions of a troubled middle-aged man in Connecticut, who believed — with support from ChatGPT, which he described as his “best friend” — that his elderly mother, whom he lived with, was surveilling and attempting to poison him; he went on to kill his mother and then himself.

Elsewhere, reporting from Futurism and Rolling Stone has detailed how ChatGPT-reinforced delusional fixations have fueled real-world harassment, domestic violence, and stalking. ChatGPT — and users’ extraordinarily intimate relationships with it — is also linked to numerous teen and adult suicides.

On Friday, OpenAI CEO Sam Altman issued an apology to the Tumbler Ridge community, saying that he was “deeply sorry that we did not alert law enforcement to the account that was banned in June.”

But in yesterday’s post, OpenAI makes no mention of Tumbler Ridge, nor of any other specific instance of violence associated with ChatGPT. The post doesn’t even acknowledge that actual violence has already been linked to the chatbot and its capacity to amplify violent thoughts or fixations — only that people could turn to ChatGPT to discuss violence.

The post also says that the company has a system in place to assess whether a “case presents indicators of potentially serious, real-world harm,” which it may choose to escalate to appropriate officials with the help of “mental health and behavioral experts.” There are very real privacy concerns to weigh when it comes to sharing information about potential criminality with law enforcement, but OpenAI has yet to share more detailed information about the system it claims to use to mitigate potential violence. The post does say that it’ll “share more” in the “coming weeks” about its efforts to recognize “subtle warning signs across long, high-stakes conversations.”

The company ends the bizarre blog by promising to “learn, improve and course-correct.” But readers would have to look elsewhere to figure out why.

More on ChatGPT and violence: OpenAI Hit With Barrage of Lawsuits Over Failure to Report School Shooter Before Massacre

The post OpenAI Just Published an Absolutely Bizarre Blog Post appeared first on Futurism.


DNYUZ © 2026
