For years, YouTube has removed videos with derogatory slurs, misinformation about Covid vaccines and election falsehoods, saying the content violated the platform’s rules.
But since President Trump’s return to the White House, YouTube has encouraged its content moderators to leave up videos with content that may break the platform’s rules rather than remove them, as long as the videos are considered to be in the public interest. Those would include discussions of political, social and cultural issues.
The policy shift, which hasn’t been publicly disclosed, made YouTube the latest social media platform to back off efforts to police online speech in the wake of Republican pressure to stop moderating content. In January, Meta made a similar move, ending a fact-checking program on social media posts. Meta, which owns Facebook and Instagram, followed in the footsteps of X, Elon Musk’s social media platform, and turned responsibility for policing content over to users.
But unlike Meta and X, YouTube has not made public statements about relaxing its content moderation. The online video service introduced its new policy in mid-December in training material that was reviewed by The New York Times.
For videos considered to be in the public interest, YouTube raised the threshold for how much of a video's content can break the rules and still stay up, to half of a video from a quarter. The platform also encouraged moderators to leave up those videos, which would include City Council meetings, campaign rallies and political conversations. The policy distances the platform from some of its pandemic practices, such as when it removed videos of local council meetings and a discussion between Florida’s governor, Ron DeSantis, and a panel of scientists, citing medical misinformation.
The expanded exemptions could benefit political commentators whose lengthy videos blend news coverage with opinions and claims on a variety of topics, particularly as YouTube takes on a more prominent role as a leading distributor of podcasts. The policy also helps the video platform avoid attacks by politicians and activists frustrated by its treatment of content about the origins of Covid, the 2020 election and Hunter Biden, former President Joseph R. Biden Jr.’s son.
YouTube continuously updates its guidance for content moderators on topics surfacing in the public discourse, said Nicole Bell, a company spokeswoman. It retires policies that no longer make sense, as it did in 2023 for some Covid misinformation, and strengthens policies when warranted, as it did this year to prohibit content directing people to gambling websites, according to Ms. Bell.
In the first three months of this year, YouTube removed 192,586 videos because of hateful and abusive content, a 22 percent increase from a year earlier.
“Recognizing that the definition of ‘public interest’ is always evolving, we update our guidance for these exceptions to reflect the new types of discussion we see on the platform today,” Ms. Bell said in a statement. She added, “Our goal remains the same: to protect free expression on YouTube while mitigating egregious harm.”
Critics say the changes by social media platforms have contributed to the rapid spread of false assertions and have the potential to increase digital hate speech. Last year on X, a post inaccurately said, “Welfare offices in 49 states are handing out voter registration applications to illegal aliens,” according to the Center for Countering Digital Hate, which studies misinformation and hate speech. The post, which would have been removed before recent policy changes, was seen 74.8 million times.
For years, Meta has removed about 277 million pieces of content annually, but under the new policies, much of that content could stay up, including comments like “Black people are more violent than Whites,” Imran Ahmed, the center’s chief executive, said.
“What we’re seeing is a rapid race to the bottom,” he said. The changes benefit the companies by reducing the costs of content moderation, while keeping more content online for user engagement, he added. “This is not about free speech. It’s about advertising, amplification and ultimately profits.”
YouTube has in the past put a priority on policing content to keep the platform safe for advertisers. It has long forbidden nudity, graphic violence and hate speech. But the company has always given itself latitude in interpreting the rules. Its policies allow a generally small set of rule-breaking videos to remain on the platform if they have sufficient educational, documentary, scientific or artistic merit.
The new policies, which were outlined in the training materials, are an expansion of YouTube’s exceptions. They build on changes made before the 2024 election, when the company began permitting clips of electoral candidates on the platform even if the candidates violated its policies, the training material said.
Previously, YouTube removed a so-called public interest video if a quarter of the content broke the platform’s rules. As of Dec. 18, YouTube’s trust and safety officials told content moderators that half a video could break YouTube’s rules and stay online.
Other content that mentions political, social and cultural issues has also been exempted from YouTube’s usual content guidelines. The platform determined that videos are in the public interest if creators discuss or debate elections, ideologies, movements, race, gender, sexuality, abortion, immigration, censorship and other issues.
Megan A. Brown, a doctoral student at the University of Michigan who researches the online information ecosystem, said YouTube’s looser policies were a reversal from a time when it and other platforms “decided people could share political speech but they would maintain some decorum.” She fears that YouTube’s new policy “is not a way to achieve that.”
During training on the new policy, the trust and safety team said content moderators should err on the side of leaving content up when “freedom of expression value may outweigh harm risk.” If employees had doubts about a video’s suitability, they were encouraged to take it to their superiors rather than remove it.
YouTube employees were presented with real examples of how the new policies had already been applied. The platform gave a pass to a user-created video titled, “RFK Jr. Delivers SLEDGEHAMMER Blows to Gene-Altering JABS,” which violated YouTube’s policy against medical misinformation by incorrectly claiming that Covid vaccines alter people’s genes.
The company’s trust and safety team decided that the video shouldn’t be removed because public interest in the video “outweighs the harm risk,” the training material said. The video was deemed newsworthy because it presented contemporary news coverage of recent actions on Covid vaccines by the secretary of the Department of Health and Human Services, Robert F. Kennedy Jr. The video also mentioned political figures such as Vice President JD Vance, Elon Musk and Megyn Kelly, boosting its “newsworthiness.”
The video’s creator also discussed a university medical study and presented news headlines about people experiencing adverse effects from Covid vaccines, “signaling this is a highly debated topic (and a sensitive political topic),” according to the materials. Because the creator didn’t explicitly recommend against vaccination, YouTube decided that the video had a low risk of harm.
The video is no longer available on YouTube. It is unclear why.
Another video shared with the staff contained a slur about a transgender person. YouTube’s trust and safety team said the 43-minute video, which discussed hearings for Trump administration cabinet appointees, should stay online because the description had only a single violation of the platform’s harassment rule forbidding a “malicious expression against an identifiable individual.”
A video from South Korea featured two commentators talking about the country’s former president, Yoon Suk Yeol. About halfway through the more-than-three-hour video, one of the commentators said he imagined seeing Mr. Yoon turned upside down in a guillotine so that the politician “can see the knife is going down.”
The video was approved because most of it discussed Mr. Yoon’s impeachment and arrest. In its training material, YouTube said it had also considered the risk for harm low because “the wish for execution by guillotine is not feasible.”
Nico Grant reports on Alphabet’s Google and YouTube as well as the corporate culture of Silicon Valley. He is based in San Francisco.
Tripp Mickle reports on Apple and Silicon Valley for The Times and is based in San Francisco. His focus on Apple includes product launches, manufacturing issues and political challenges. He also writes about trends across the tech industry, including layoffs, generative A.I. and robot taxis.