A global trade union on Thursday urged the world’s major tech companies to adopt what it says is the first set of global safety protocols for content moderators.
Content moderators are workers who remove disturbing content from social media—material that is often later used to train AI systems like ChatGPT or Facebook’s algorithms. The job comes with a high risk of trauma. According to a report released alongside the protocols, 81% of content moderators believe their employer does not do enough to support their mental health.
The eight protocols were shared exclusively with TIME ahead of their publication on Thursday by the Global Trade Union Alliance for Content Moderators. They include limiting workers’ daily exposure to traumatic content, the elimination of “unrealistic” quotas and productivity targets, and 24/7 mental health support for at least two years after moderators leave their jobs. The protocols also call for living wages, workplace democracy, mental health training for moderators and their supervisors, migrant worker protections, and the right to start or join a union.
“Exposure to distressing content may be inherent to moderation, but trauma does not have to be,” says Christy Hoffman, the general secretary of the UNI Global Union, which helped workers compile the protocols. “Other frontline sectors”—like paramedics, police officers, and war reporters—“have long implemented proven mental health protections, and there is no justification for tech companies not similarly safeguarding workers in their supply chains.”
It’s unclear whether any tech companies will voluntarily sign up to implement the new protocols. Content moderators have struggled for years to obtain even basic safety standards in their line of work. Tech companies like Meta, OpenAI, Google, and TikTok uniformly rely on outsourcing companies to employ content moderators—an arrangement that “allows platforms to maintain operational control while distancing themselves from direct responsibility for working conditions,” the report says. Even high-profile media scrutiny has not led to much improvement. For example, after TIME revealed poor conditions at a Facebook content moderation facility in Kenya, prompting two lawsuits against the company, Meta switched to a different outsourcing company based in Ghana, where pay and working conditions are reportedly worse.
Meta and TikTok did not immediately respond to requests for comment.
For this story, TIME spoke to three workers who moderate content for Meta and TikTok via the outsourcing companies Telus, Covalen, and Accenture. Each of them said they struggle with low wages, high quotas, and insufficient trauma protections in their jobs. One of them, a Meta worker based in the Philippines employed via Accenture, who asked to remain anonymous, described being traumatized by a recent uptick in videos of injured and dying children in Gaza, and the grisly aftermath of the Air India crash in Ahmedabad.
Many workers said the protocol calling for living wages would be among the most welcome. “I speak with my colleagues every day—everybody owes money to somebody,” says Berfin Sirin Tunc, a content moderator for TikTok based in Turkey, employed via Telus, who says she earns a little over $4 per hour. “If I go to my employer and say I want a decent salary, nothing will change.”
In statements to TIME, both Accenture and Telus said that the wellbeing of their employees is a top priority. A spokesperson for Telus added the company believes it already complies with all eight content moderation protocols. Covalen did not immediately respond to a request for comment.
“Content moderators are the invisible frontline [workers], sifting through this material to keep online spaces safe,” says a report released by the UNI Global Union alongside the protocols. “This work demands constant judgement, emotional resilience and cultural sensitivity, all performed under intense time pressure and strict performance targets.”
The mental health challenges of moderating for TikTok are especially profound, Tunc says. In her job she regularly watches TikTok videos for eight hours a day—a task she says has decimated her attention span. “I loved reading books, but can’t read a book now,” she says. The work would be draining enough if every video were funny or benign—but every now and again, a video of graphic violence or sexual assault appears in the mix. Her job is to flag and categorize it before quickly moving on. She says she is allotted 150 minutes of “wellness breaks” per week to cool off after seeing traumatizing content—but whenever she takes one, a ticking clock in her head tells her she should get back to work.
In a statement, a spokesperson for Telus estimated that disturbing material represents less than 5% of total TikTok content reviewed by moderators. “Breaks are mandatory for our content moderators, with a minimum of 90 minutes of break time per eight-hour shift,” the spokesperson added.
Still, “even short-term exposure to explicit content can cause tremendous damage,” says Dr. Annie Sparrow, a public health expert and an associate professor at the Icahn School of Medicine at Mount Sinai, who is affiliated with UNI Global Union. “The disconnection that then follows is the beginning of the road to profound depression and even suicide.”
“We know what best practice looks like,” Sparrow says. “Now is the time to build in those best practices.”