After the conservative activist Charlie Kirk was fatally shot at a university rally in Utah last week, Spencer Cox, the state’s Republican governor, called social media companies a “cancer.”
Senator Chris Coons, a Democrat of Delaware, blamed the internet for “driving extremism in our country.”
President Trump, who helped found the Truth Social platform, also pointed fingers at social media on Monday and said the accused gunman had become “radicalized on the internet.”
The response from social media companies?
Most major platforms have stayed quiet and ducked the spotlight. There has been only one notable exception — Elon Musk, the billionaire owner of the social media site X, who has spread divisive content about Mr. Kirk’s assassination by blaming the left and calling for retribution against those who criticized the right-wing activist or celebrated his death.
Their reactions are starkly different from a decade ago, when executives at Google, Facebook, Twitter and other sites repeatedly vowed to work together to limit hate speech, remove violent content and root out disinformation from their platforms. The aftermath of Mr. Kirk’s killing shows how inflammatory and hateful online content has become so intractable that social media companies are no longer promising to solve it.
“We have regressed,” said Graham Brookie, the senior director of the Atlantic Council’s Digital Forensic Research Labs, which studies online speech. “We are in a worse place now than when countries and companies came together to say never again to violence and hate speech online.”
For years, social media companies pledged to cleanse their sites of toxic content. In 2018, Mark Zuckerberg, the founder of Facebook, said he planned to hire 10,000 people to work on safety and security issues after the social network was blasted for allowing election interference on its platforms. Twitter and Google’s YouTube formed similar safety teams. And at a 2022 White House summit to address hate and extremism, tech companies promised to expand their policies to handle the noxious content.
But in the years since, misinformation and toxic speech have remained unrestrained online. After many of the platforms’ restrictions were criticized as censorship, some began disbanding their teams that worked on trust and safety and left international forums dedicated to taking action against violent content. In January, Meta, the parent company of Facebook, said it was ending its longtime fact-checking program, which had been instituted to curtail the spread of misinformation on its apps.
Immediately after Mr. Kirk was killed last week, the issue of violent online content resurfaced as videos of the assassination spread rapidly. Many videos were graphic, with high-resolution footage captured by audience members at Mr. Kirk’s rally that showed him being shot and collapsing. Within hours, the videos jumped from X to other platforms like Instagram and YouTube, gaining millions of views.
Criticism of tech platforms deepened after details emerged about the life of Tyler Robinson, whom the authorities have identified as the gunman and whom friends described as “terminally online.” Mr. Robinson was a gamer who took part in online communities and participated in chats on the messaging service Discord. The authorities have suggested that Mr. Robinson fell down internet rabbit holes that led him to assassinate Mr. Kirk.
Social media companies, whose algorithms serve users more of the content they engage with in order to keep them on the platform, have not offered new answers or solutions for how to stop people from being radicalized online. In response to Mr. Kirk’s shooting, the companies have instead emphasized that they already have rules for dealing with hate speech, violent groups and extremism.
Meta, Google, TikTok, Snap, Reddit and others said they were labeling videos of Mr. Kirk’s shooting as sensitive, removing the content or curbing its spread. Discord and X did not respond to requests for comment. Social media experts said that because violent videos can beget more violence, removing or labeling the material is important.
Yet many videos of Mr. Kirk’s shooting remain online with no warnings. On X on Monday, unlabeled posts showing the video continued to proliferate on the platform, according to a New York Times review. The company said last week that users can “disable sensitive media” on their accounts to prevent violent content from appearing in their feeds.
Many lawmakers said what the tech companies had done was not enough. While Mr. Trump and conservative lawmakers previously pushed back against what they saw as censorship by social media companies, some have called in recent days for the platforms to suppress negative posts about Mr. Kirk.
Representative Clay Higgins, a Republican of Louisiana, said last week that tech companies should ban “every post or commenter that belittled the assassination of Charlie Kirk” from “ALL PLATFORMS EVER.”
Mr. Musk, who bought Twitter in 2022 and renamed it X, has steadfastly ignored the criticism. The billionaire, who supports free speech and is the most followed person on X, has instead called for the punishment of those who criticized Mr. Kirk. In one post, Mr. Musk singled out Satya Nadella, the chief executive of Microsoft, urging him to take action against Microsoft employees who had made critical public comments about Mr. Kirk.
“Unity is impossible with evil fanatics who celebrate murder,” Mr. Musk wrote in an X post on Monday, referring to Vice President JD Vance’s speech about “radical leftists” who had criticized Mr. Kirk.
Mr. Musk has also shared posts that tried to draw connections between Mr. Robinson and the transgender community, after it was revealed that the suspect lived with a roommate who is transgender. In one post, Mr. Musk referred to the transgender community as a “terrorist cell.”
Mr. Kirk’s death has fueled X’s popularity. After the shooting videos circulated on the platform last week and users expressed their emotions about the assassination, people flocked to download the app. On Friday, Nikita Bier, X’s head of product, posted that “on both Wednesday and Thursday, X had more first-time downloads in the United States than on any single day in its history — including during Twitter.”
Mr. Brookie said that as long as X allowed violent content and hate speech, it was unlikely that other companies could stem the spread of such material.
“Content is collective across platforms,” he said. If even one of them permits toxic content, it will “virally spread to other platforms.”
Sheera Frenkel is a reporter based in the San Francisco Bay Area, covering the ways technology impacts everyday lives with a focus on social media companies, including Facebook, Instagram, Twitter, TikTok, YouTube, Telegram and WhatsApp.
Eli Tan covers the technology industry for The Times from San Francisco.
Kate Conger is a technology reporter based in San Francisco. She can be reached at [email protected].