SAN FRANCISCO — In the early morning hours on Friday, a 20-year-old man threw a firebomb at the home of OpenAI CEO Sam Altman, then tried to set fire to the company’s headquarters, according to federal charges filed Monday.
No one was hurt, the suspect was arrested and much is still unknown about the incident. But the attack fueled concerns in Silicon Valley that violence inspired by fear or dislike of artificial intelligence may become more common.
The attack also triggered a wider, heated discussion over who is responsible for the sometimes apocalyptic tone of debates over the future that AI development may bring. Silicon Valley figures, industry critics and law enforcement all called for arguments over AI’s impact to become less divisive and extreme.
But the growing influence of the technology across society, AI’s emergence as a political issue and the steady increase in political polarization in the United States over recent decades suggest that outcome may be unlikely.
Industry leaders including Altman himself have stated that AI development could have extreme outcomes, including mass unemployment or mass destruction. Similar scenarios are cited by critics and protest groups who argue the industry must slow down or halt AI development.
“This should … be a moment where our nation reflects on the often incendiary rhetoric that is being used in discussions about artificial intelligence and its future impacts on our society,” Brooke Jenkins, San Francisco’s district attorney, said at a news conference Monday about the attack on Altman’s home.
“In no way should we be at the point where a man could have lost his life over differences of opinion.”
Polls show that a majority of Americans are pessimistic about the future impact of AI, generally due to more immediate concerns such as automation in the workplace or undermining of human relationships. Stanford University’s annual AI Index report released Monday said that experts in the technology and the U.S. public have diverging views of AI’s potential.
San Francisco police arrested Daniel Moreno-Gama at OpenAI’s headquarters Friday morning, after he was confronted by company security guards. In his possession were a jug of kerosene, a lighter and an “anti-AI document,” federal officials alleged in a criminal complaint filed in court Monday.
Jenkins said at the news conference that the document could be described as a manifesto. Moreno-Gama wrote that he was planning to kill Altman, referred to humanity’s “impending extinction” by artificial intelligence, and appeared to list the names of other AI CEOs, according to the federal complaint. Jenkins said he would also face state charges, including the attempted murder of Altman.
A Substack account using Moreno-Gama’s name has published posts in recent months that at times outline fears about the risks of AI, including the potential for human extinction. In response to a question about the account at Monday’s news conference, Matt Cobo, acting special agent in charge at the FBI’s San Francisco office, said investigators had work to do on that material.
Moreno-Gama and people who may be his relatives couldn’t be reached Monday at phone numbers linked to them in public records. It could not be learned if Moreno-Gama has a lawyer.
After news of the attack first broke late Friday, some supporters of the tech industry in the White House and Silicon Valley blamed detractors of AI for sowing a climate of fear.
“The doomers need to take a serious look at what they have helped incite,” White House AI adviser and tech investor Sriram Krishnan wrote on X on Sunday, referring to an influential tech community that researches and issues warnings about extreme outcomes super-intelligent AI might cause.
Some X users responded by pointing out that AI executives have also often suggested the technology could cause disaster if not developed properly, or huge economic disruption by causing mass unemployment. The White House and Krishnan did not immediately respond to requests for comment.
Agustín Covarrubias, co-director of the nonprofit Kairos, which encourages people to work on risks from advanced AI, said in an interview Monday that it was time to think about how to stop people from developing extremist ideas inspired by debates about AI safety.
“The attacks on Sam Altman’s home are horrifying, and violence is wrong regardless of what cause someone believes they’re serving,” Covarrubias said. “While I don’t think it’s appropriate to blame the AI safety community for these incidents, we know that arguments about stakes this high can be misinterpreted as justification for extreme action — which is precisely why preventing radicalization needs to be an active, ongoing effort.”
In a blog post Friday about the incident at his home, Altman said he “empathize[d] with anti-technology sentiments and clearly technology isn’t always good for everyone” but that “we should de-escalate the rhetoric and tactics.”
“There is no place in our democracy for violence against anyone, regardless of the AI lab they work at or side of the debate they belong to. We are grateful to law enforcement for their quick response and that no one was hurt,” an OpenAI spokesperson said Monday. (The Washington Post has a content partnership with OpenAI.)
Friday’s attack on Altman’s home happened days after shots were fired into the home of Indianapolis city councilor Ron Gibson, who this month backed a plan to build a data center in his district. The data center is unpopular with some residents.
Gibson said a note reading “No data centers” was tucked under the doormat, but the Indianapolis Metropolitan Police Department did not share details about a suspect or motive. The FBI is assisting the investigation, the department said.
The Bridging Divides Initiative, a nonpartisan group based at Princeton University that tracks political violence, catalogued what it said were at least six threat incidents across the country in the past year related to policy decisions about AI or the construction of AI data centers.
The facilities have met a wave of pushback across the country from local residents concerned about increased power bills, noise and other disruption.
Robert Pape, a University of Chicago political scientist who researches political violence, said he wouldn’t be surprised by violence related to AI. His upcoming book is titled “Our Own Worst Enemies: America in the Age of Violent Populism.”
Pape has tracked a growing number of Americans who tolerate violence against public officials or corporations in support of their beliefs. “That’s the slippery slope of violence,” he said.
Aaron Schaffer contributed to this report.