Even though she was toughened by years spent working in internet activism, Caitlin Roper found herself traumatized by the online threats she received this year.
There was the picture of herself hanging from a noose, dead. And another of herself ablaze, screaming.
The posts were part of a surge of vitriol directed at Ms. Roper and her colleagues at Collective Shout, an Australian activist group, on X and other social media platforms. Some of it, including images of the women flayed, decapitated or fed into a wood chipper, was seemingly enabled — and given a visceral realism — by generative artificial intelligence. In some of the videos, Ms. Roper was wearing a blue floral dress that she does, in fact, own.
“It’s these weird little details that make it feel more real and, somehow, a different kind of violation,” she said. “These things can go from fantasy to more than fantasy.”
Artificial intelligence is already raising concerns for its ability to mimic real voices in service of scams or to produce deepfake pornography without a subject’s permission. Now, the technology is also being used for violent threats — priming them to maximize fear by making them far more personalized, more convincing and more easily delivered.
“Two things will always happen when technology like this gets developed: We will find clever and creative and exciting ways to use it, and we will find horrific and awful ways to abuse it,” said Hany Farid, a professor of computer science at the University of California, Berkeley. “What’s frustrating is that this is not a surprise.”
Digitally generated threats have been possible for at least a few years. A judge in Florida was sent a video in 2023, most likely made using a character customization tool in the Grand Theft Auto V video game, that featured an avatar who looked and walked like her being hacked and shot to death.
But threatening images are rapidly becoming easier to make, and more persuasive. One YouTube page had more than 40 realistic videos — most likely made using A.I., according to experts who reviewed the channel — each showing a woman being shot. (YouTube, after The New York Times contacted it, said it had terminated the channel for “multiple violations” of its guidelines.) A deepfake video of a student carrying a gun sent a high school into lockdown this spring. In July, a lawyer in Minneapolis said xAI’s Grok chatbot had provided an anonymous social media user with detailed instructions on breaking into his house, sexually assaulting him and disposing of his body.
Until recently, artificial intelligence could replicate real people only if they had a huge online presence, such as film stars with throngs of publicly accessible photos. Now, a single profile image will suffice, said Dr. Farid, who co-founded GetReal Security, a service that identifies malicious digital content. (Ms. Roper said she had worn the blue floral dress in a photo published a few years ago in an Australian newspaper.)
The same is true of voices — what once took hours of example data to clone now requires less than a minute.
“The concern is that now, almost anyone with no skills but with motive or lack of scruples can easily use these tools to do damage,” said Jane Bambauer, a professor who teaches about A.I. and the law at the University of Florida.
Worries about A.I.-assisted threats and extortion intensified with the introduction this month of Sora, a text-to-video app from OpenAI. The app, which allows users to upload images of themselves to be incorporated into hyper-realistic scenes, was quickly used to depict actual people in frightening situations.
The Times tested Sora and produced videos that appeared to show a gunman in a bloody classroom and a hooded man stalking a young girl. Grok also readily added a bloody gunshot wound to a photo of a real person.
“From the perspective of identity, everyone’s vulnerable,” Dr. Farid said.
An OpenAI spokeswoman said the company relied on multiple defenses, including guardrails to block unsafe content from being created, experiments to uncover previously unknown weaknesses and automated content moderation systems. (The Times sued OpenAI in 2023, claiming copyright infringement of news content related to A.I. systems, an assertion that OpenAI has denied.)
Experts in A.I. safety, however, said companies had not done nearly enough. Alice Marwick, director of research at Data & Society, a nonprofit organization, described most guardrails as “more like a lazy traffic cop than a firm barrier — you can get a model to ignore them and work around them.”
Ms. Roper said the torrent of online abuse starting this summer — including hundreds of harassing posts sent specifically to her — was linked to her work on a campaign to shut down violent video games glorifying rape, incest and sexual torture. On X, where most of the abuse appeared, she said, some harassing images and accounts were taken down. But the company also told her repeatedly that other posts depicting her violent death did not violate the platform’s terms of service. In fact, X once included one of her harassers on a list of recommended accounts for her to follow.
Some of the harassers also claimed to have used Grok not just to create the images but to research how to find the women at home and at local cafes.
Fed up, Ms. Roper decided to post some examples. Soon after, according to screenshots, X told her that she was in breach of its safety policies against gratuitous gore and temporarily locked her account.
Neither X nor xAI, the company behind Grok, responded to requests for comment.
A.I. is also making other kinds of threats more convincing. For example: swatting, the practice of placing false emergency calls with the aim of inciting a large response from the police and emergency personnel. A.I. “has significantly intensified the scale, precision and anonymity” of such attacks, the National Association of Attorneys General said this summer. On a lesser scale, a spate of A.I.-generated videos showing supposed home invasions has prompted targeted residents around the country to call the police.
Now, perpetrators of swatting can compile convincing false reports by cloning voices and manipulating images. One serial swatter used simulated gunfire to suggest that a shooter was in the parking lot of a Washington State high school. The campus was locked down for 20 minutes; police officers and federal agents showed up.
A.I. was already complicating schools’ efforts to protect students, raising concerns about personalized sexual images or rumors spread via fake videos, said Brian Asmus, a former police chief who was working as the senior manager of safety and security for the school district when the swatter called. Now, the technology is adding an extra security challenge, making false alarms harder to distinguish from true emergency calls.
“How does law enforcement respond to something that’s not real?” Mr. Asmus asked. “I don’t think we’ve really gotten ahead of it yet.”
Stuart A. Thompson contributed reporting.
Tiffany Hsu reports on the information ecosystem, including foreign influence, political speech and disinformation.