Resurgent hate and extremist groups emboldened by tech companies’ relaxed moderation efforts are raking in cash across the internet, according to reports from research groups and experts.
A July report from the Foundation to Combat Antisemitism (FCAS) found “hate not only spreading across the internet but becoming more profitable.” FCAS, a nonprofit organization started by New England Patriots owner Robert Kraft, said monetization efforts that once existed on the edges of the internet could now be found in more mainstream spaces, including cryptocurrencies, crowdfunding, livestreaming and merchandising.
“These operations are no longer confined to fringe corners of the internet,” FCAS said in the report. “Now, the pipeline of monetized hate runs directly through mainstream platforms — reaching wider audiences and creating financial incentives for others to join in.”
Those efforts have proved lucrative even on platforms that have policies against hate speech.
A June report from the Center for Countering Digital Hate, a nonprofit group that studies online extremism, found that even after YouTube banned Andrew Tate, a self-described misogynist, many videos of him remained live on the platform with ads running against them; 100 of the most-viewed clips had racked up almost 54 million views.
And reports from the Global Project Against Hate and Extremism said neo-Nazis had been monetizing their messages on Instagram and on Roblox, a popular online game with millions of young users.
Beyond hate speech and smaller extremist organizations, concerns about the use of the internet to facilitate violence are growing alongside the complexity and effectiveness of terrorists’ financing operations. A July report from the Financial Action Task Force, an intergovernmental antiterrorism group, warned of “serious and evolving terrorist financing risks,” as well as “gaps in countries’ abilities to fully understand terrorism financing trends and thus respond effectively.”
Taken together, the reports highlight how hate has proved resilient online after some temporary setbacks, reinvigorated by technologies meant to connect people and bring financial services to underserved populations. No longer confined to darker corners of the internet, extremism now often appears as just another piece of content, able to reach the most impressionable minds. And extremists use the internet’s now-robust money machine — from meme coins and T-shirts to tips and crowdfunding — with impunity.
“Right now, I think that some of the biggest concerns for us have been things like the content and the monetization that’s happening as a result of targeting of youth,” said Rachel Carroll Rivas, interim director of the Intelligence Project at the Southern Poverty Law Center. “And that is especially the case with things like video content, YouTube, but also the way that social media is used to monetize and make money, including Twitter.”
Extremist groups are generally credited with having been early adopters of the internet and among the first to use it to start bringing in money. As the internet evolved out of its early years of websites and message boards, centralized platforms powered by recommendation algorithms provided a fresh opportunity for extremists looking to spread their messages. That shift coincided with more and more money flowing online.
That cash now serves as the lifeblood of many fringe groups.
“It’s crucial,” said Megan Squire, who has done extensive research on political extremism. “It’s like breathing air to them. They would be nowhere without this technology.”
Meanwhile, many of the most recent innovations and trends around how money flows online have also been put to use by extremists and people pushing hate.
FCAS noted that crowdfunding, typically associated with efforts like medical bills and memorial funds, has been used to solicit funds for people accused of racism. A fundraiser for a woman caught on video yelling a slur at a Black child brought in more than $675,000.
Cryptocurrencies and especially meme coins have been particularly lucrative. FCAS found that beyond meme coins like swasticoin and fundraising coins like the Proud Boys’ $Proud, other coins have sprung up seeking to profit off particular acts of violence. A recent example was $Elias, a coin launched under the name of the man charged with killing two Israeli Embassy staffers in May. Another coin was tied to the June killing of a Minnesota lawmaker and her husband.
Adam Katz, president of FCAS, said that with few barriers, little regulation and potentially international reach, cryptocurrencies have become a growing topic of concern.
“We’re seeing significantly more activity today than, let’s say, a year ago or even three, six months ago,” he said, echoing what FCAS found in its report. “We’re also seeing proliferation in coin creation, more and more groups, because they saw that somebody else did it and somebody else monetized it, and so they copycat.”
At one point, extremist groups’ operations online faced pushback. Companies from Google and Facebook to YouTube and Twitter, many of which had sought to take limited roles in moderating their platforms, began to crack down on hate speech and extremism in the late 2010s as political, popular and commercial pressures grew. Advertiser boycotts drew some concessions, while politicians — mostly Democrats — hauled tech executives in front of panels to talk about consumer safety.
Those winds have shifted significantly. Though some tech companies still retain policies against hate speech, tech industry experts broadly see moderation efforts as having been significantly reduced. Political pressure has almost entirely disappeared, fueled in part by the rise of President Donald Trump and Republican efforts to paint tech companies as unfair censors.
On at least one platform, X, formerly Twitter, not only have extremists been welcomed back, but the company has also gone after critics who once looked to hold it accountable for running ads alongside hate speech. In 2023, X sued Media Matters, a liberal investigative nonprofit organization, for reports stating that X showed ads next to Nazi content. Media Matters has countersued, and the two sides remain in litigation. Last year, an advertising industry group called the Global Alliance for Responsible Media shut down after X sued it.
Elon Musk’s takeover of Twitter in 2022 is generally seen as a turning point in how major tech companies approached moderation, and as an opening for extremist groups to make a roaring comeback. Now, more recently developed technologies, most notably generative artificial intelligence, have added to extremists’ ability to spread their messages and make money.
Extremists have been found to use generative AI to create a range of propaganda and hate content, from memes to calls for violence.
“I think that the whole rollout of AI is kind of exacerbating an already existing problem online, which is that online hatred and antisemitism and extremism is getting monetized,” said Tal-Or Cohen Montemayor, founder and CEO of CyberWell, a nonprofit organization dedicated to tracking antisemitism online.
“It definitely is getting more monetized ever since a lot of infrastructural changes at X after the Elon Musk takeover,” she said. “But now AI tools are essentially exacerbating that problem by creating content at scale that’s very visual, very convincing, that’s evading detection models and resources that exist with platforms.”