There’s a pretty sizable list of things an AI assistant should refuse to help you with. Is engineering a doomsday pathogen one of them? Evidently, not every AI company thinks so.
According to new reporting by the New York Times, at least one frontier AI model gave a scientist viable instructions for engineering a deadly pathogen and weaponizing it in a massive bioterror attack.
Luckily for us, the scientist, David Relman, isn't actually trying to follow those directions. The Stanford University biosecurity expert was hired by an unnamed AI company to poke holes in its chatbot before the system was released to the public, he told the NYT.
Relman was apparently so shaken by the results of his conversation with the chatbot that he refused to name either the specific pathogen or the company involved, for fear of inspiring someone to take it for a spin. The chatbot's suggestions were reportedly gruesome: it offered ways to modify the pathogen to maximize casualties, minimize the user's chances of getting caught, and make the pathogen resistant to known treatments.
“It was answering questions that I hadn’t thought to ask it, with this level of deviousness and cunning that I just found chilling,” Relman said. While the anonymous company made a few safety tweaks to the chatbot at the researcher’s suggestion, he told the NYT they were insufficient.
Frontier AI companies OpenAI and Anthropic both downplayed these expert concerns.
“There is an enormous difference between a model producing plausible-sounding text and giving someone what they’d need to act,” Alex Sanderford, head of trust, safety policy, and enforcement at Anthropic, told the NYT.
An OpenAI spokesperson, meanwhile, argued that this kind of expert stress testing does not “meaningfully increase someone’s ability to cause real-world harm.”
The bioterror risk isn’t limited to future AI models, either. According to a 2025 report by the US government-backed RAND Corporation, frontier AI models released in 2024 “can meaningfully contribute to biological weapons development” by guiding laymen through the fabrication and attack process “across various viruses.”
Overall, while AI-facilitated, cataclysmic bioterror events seem highly unlikely, it’s horrifying to know that motivated bioterrorists don’t have to go far to find relevant information.
More on chatbots: Certain Chatbots Vastly Worse For AI Psychosis, Study Finds