DNYUZ

Character.AI Still Hasn’t Fixed Its School Shooter Problem We Identified in 2024

March 11, 2026

Character.AI continues to host chatbots that are explicitly modeled after real-world mass shooters.

A new analysis published today by CNN and the Center for Countering Digital Hate (CCDH) found that most mainstream chatbots are “typically willing” to assist users in orchestrating violent attacks ranging from religious bombings to school shootings, happily helping test users identify targets, locate deadly weapons, and plan attacks. Per the CCDH, nine out of ten mainstream chatbots — which included general-use bots like OpenAI’s ChatGPT, Google’s Gemini, and Meta AI, plus companion-style bots like those hosted by Replika — failed to “reliably discourage would-be attackers,” with the Chinese model DeepSeek even wishing testers a “happy (and safe) shooting!”

Given that people around the world are already accused of planning and executing deadly crimes with help from chatbots, the report is disturbing. And of all the mainstream chatbots tested by CNN and CCDH, the worst offender was none other than Character.AI, a controversial chatbot platform known to be popular with young people that hosts thousands of large language model-powered “characters.”

According to CNN’s report, Character.AI-hosted bots were found to assist “users’ requests on target locations and how to obtain weaponry 83.3 percent of the time.” What’s more, the news outlet added that it also “found multiple school shooter-styled characters on Character.AI, including one based on Uvalde school shooting perpetrator Salvador Ramos that used a real-life mirror selfie he had taken.”

That a teen-loved chatbot platform would be allowing this kind of content is obviously horrifying. Worse: Futurism identified this specific Character.AI issue all the way back in December 2024 — meaning that even after more than a year, Character.AI has yet to resolve an absolutely glaring gap in platform moderation.

At the time, we reported that the platform, which has close ties to Google, was host to dozens of popular chatbots modeled after real perpetrators of mass violence, in addition to roleplay scenarios centering on school shootings — some of them modeled after real shootings in which children and teachers died — and even bots impersonating the slain victims of real school shootings. Some of these bots had racked up hundreds of thousands of views. The bots based on young murderers, we found, tended to be created as a form of incredibly dark fan fiction, with many presented in the context of a romantic roleplay or as a user's imagined friend at school.

The impersonations we found included Ramos; Sandy Hook Elementary School shooter Adam Lanza; Columbine High School killers Eric Harris and Dylan Klebold; Kerch Polytechnic College shooting perpetrator Vladislav Roslyakov; and Elliot Rodger, the 22-year-old heavily associated with incel culture who went on a murderous rampage in California in 2014, among others. These bots frequently featured killers' full names and images, meaning their creators made no attempt to hide their existence from the platform.

As we noted at the time, the platform's terms of use outlaw content that's "excessively violent" or "promoting terrorism or violent extremism," two categories that would presumably cover content glorifying mass violence like school shootings. Even so, Character.AI never replied to our questions when we reached out about the issue back in 2024; instead, it quietly deleted the specific bots we'd flagged in our email as examples of the problem.

Fast forward to today, and the creators of these Character.AI bots still aren’t hiding what they are: upon a quick keyword search, we found bots modeled after Lanza, Rodger, Harris, Klebold, as well as Chardon High School shooter Thomas “TJ” Lane, Frontier Middle School shooting perpetrator Barry Loukaitis, Westside Middle School killer Andrew Golden, Thurston High School killer Kipland “Kip” Kinkel, Westroads Mall shooter Robert Hawkins, Eaton Township Weis Markets shooter Randy “Andrew Blaze” Stair, and Rickard Andersson, the perpetrator of the recent mass shooting at an adult school in Sweden.

One account we found hosted a staggering 24 different chatbots based on real mass killers — from well-known perpetrators of school violence to the notorious serial killer Jeffrey Dahmer — all boasting their names and pictures. Most had an air of fan fiction; a version of Klebold notes that it’s “full of love,” while a Loukaitis impersonation is listed as “caring, sweet and violent.” Some show thousands of user interactions.

A screenshot of a Character.AI creator profile full of chatbots designed to embody real mass murderers, particularly young school shooters.

We can’t stress enough how easy it is to find this stuff. These bots aren’t the result of complex attempts to “jailbreak” AI models or confuse platforms. The platform’s text filters failed to prevent them from being created, and we found them with simple keyword searches.

The CNN and CCDH analysis follows a tumultuous period for Character.AI. In October 2024, it was hit with a first-of-its-kind lawsuit alleging that its chatbots were responsible for the death of a Florida teen named Sewell Setzer III, who died by suicide after extensive, deeply intimate interactions with the platform. Several similar suits against the company have followed (the original lawsuit is being settled out of court; others are ongoing). In response to lawsuits and reporting about clear moderation lapses, Character.AI promised to make sweeping safety changes. By October 2025, as litigation piled up, it moved to limit minor users' ability to carry out long-form chats with bots.

And yet, AI versions of romanticized mass murderers are still freely available on the site. We reached out to Character.AI to ask what's preventing it from moderating these bots off of its platform. The company didn't immediately respond to a request for comment.

The CNN and CCDH report also comes weeks after a bombshell report by The Wall Street Journal revealed that OpenAI had banned the Canadian mass killer Jesse Van Rootselaar from ChatGPT in June 2025 after she was found to have had extensive, violent conversations with the chatbot. After human review, nearly a dozen employees argued over whether to report her chat logs to local officials. The company decided against it; in January of this year, Van Rootselaar killed eight people in Tumbler Ridge, British Columbia. A mother of one of the victims of the attack has since sued OpenAI.

More on Character.AI: Did Google Test an Experimental AI on Kids, With Tragic Results?

The post Character.AI Still Hasn’t Fixed Its School Shooter Problem We Identified in 2024 appeared first on Futurism.

DNYUZ © 2026
