Facebook today published its latest Community Standards Enforcement Report, a series it began in April 2018. As in previous editions, the Menlo Park company tracked metrics across a number of policies (ten in all) for the second and third quarters of 2019, focusing on the prevalence of prohibited content that made its way onto Facebook and the volume of that content it successfully removed.
For the first time, Facebook detailed how it’s taking action on suicide and self-injury content and provided prevalence metrics regarding regulated goods content — i.e., illicit sales of firearms and drugs. Additionally, it shared data on how it’s enforcing its policies on Instagram, specifically in the areas of child nudity, child sexual exploitation, regulated goods, suicide and self-injury, and terrorist propaganda.
“We’ll continue to refine the processes we use to measure our actions and build a robust system to ensure the metrics we provide are accurate,” wrote Facebook VP of integrity Guy Rosen in a blog post.
On the subject of suicide and self-injury content, Facebook says that it improved its technology to find and remove a higher volume of violating content. As a result, the network took down 2 million pieces of suicide and self-injury content in Q2 2019, of which 96.1% it says it detected proactively. In Q3, that number hit 2.5 million pieces, of which 97.3% were detected proactively. On Instagram, 835,000 pieces of content were removed in Q2 2019 (of which 77.8% it detected proactively) and about 845,000 pieces were removed in Q3 2019 (79.1% of which were detected proactively).
Facebook claims that for every 10,000 views on Facebook or Instagram in Q3 2019, no more than four contained content that violated its policies on suicide and self-injury and regulated goods.
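That "per 10,000 views" figure is a prevalence rate, which can be made concrete with a little arithmetic. The helper below is purely illustrative (the function name and sample counts are hypothetical, not from Facebook's methodology); it just shows how such a ceiling translates into a rate.

```python
# Illustrative only: how a "views per 10,000" prevalence metric works.
# The sample counts are hypothetical; only the 4-per-10,000 ceiling
# comes from Facebook's report.

def prevalence_per_10k(violating_views: int, total_views: int) -> float:
    """Return the number of policy-violating views per 10,000 content views."""
    return violating_views / total_views * 10_000

# Hypothetical sample: 40 violating views observed among 100,000 sampled views.
rate = prevalence_per_10k(40, 100_000)
print(rate)  # 4.0 -- the upper bound Facebook cites for these policy areas
```

In other words, Facebook's claim is that at most 0.04% of views in these policy areas landed on violating content.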
With respect to terrorist propaganda, Facebook expanded this edition of its community report to include actions taken against all terrorist organizations, not just al Qaeda, ISIS, and their affiliates. The rate at which it proactively detected and removed this content was 98.5% on Facebook and 92.2% on Instagram, a discrepancy it attributed to the changing tactics of known bad actors.
Separately, Facebook said improvements to its internal child nudity and exploitation database enabled it to better detect and remove instances of the same content shared on both Facebook and Instagram. In Q3 2019, it removed about 11.6 million pieces of content, up from Q1 2019 when it removed about 5.8 million. And over the last four quarters, it proactively detected over 99% of child nudity and exploitation content it removed. On Instagram in Q2 2019, it removed about 512,000 pieces of content, of which 92.5% it detected proactively. And in Q3, it removed 754,000 pieces of content from Instagram, of which 94.6% it detected proactively.
Facebook says that “continued investments” in its detection systems and “advancements” in its enforcement techniques allowed it to build on the progress from the last report where regulated goods are concerned. In Q3 2019, it removed roughly 4.4 million pieces of drug sale content, of which 97.6% it detected proactively — an increase from Q1 2019 when it removed about 841,000 pieces of drug sale content, of which 84.4% it detected proactively. Also in Q3 2019, it removed about 2.3 million pieces of firearm sales content, of which 93.8% it detected proactively — an increase from Q1 2019 when it removed about 609,000 pieces of firearm sale content, of which 69.9% it detected proactively.
On Instagram in Q3 2019, Facebook says it removed about 1.5 million pieces of drug sale content, of which 95.3% it detected proactively. In the same quarter, it removed about 58,600 pieces of firearm sales content, of which 91.3% it detected proactively.
Facebook also said it’s made gains in hate speech content detection and removal, thanks in part to improved text and image matching techniques (which identify images and identical strings of text that have already been removed as hate speech) and machine-learning classifiers trained on thousands to millions of data samples. Starting in Q2 2019, it began removing some posts automatically when content was identical or near-identical to text or images previously removed by its content review team or where content “very closely matched” common policy-violating attacks. Facebook notes that it only did this in select instances, and that in all other cases when its systems proactively detected potential hate speech, the content was sent to its review teams to make a final determination.
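The "near-identical text" matching Facebook describes can be sketched at a toy scale. The snippet below is a minimal illustration of the idea, not Facebook's implementation: it uses Python's standard-library `difflib` as a stand-in similarity measure, where a production system would rely on hashing and learned embeddings, and the corpus and threshold are hypothetical.

```python
# Minimal sketch of near-duplicate text matching against previously
# removed posts. difflib is a stand-in; real systems at Facebook's scale
# use hashing and ML classifiers, and thresholds are tuned, not guessed.
from difflib import SequenceMatcher

# Hypothetical corpus of text already removed by human reviewers.
REMOVED_TEXTS = ["example of a previously removed violating post"]

def is_near_duplicate(candidate: str, threshold: float = 0.9) -> bool:
    """Flag text whose similarity to any removed post meets the threshold."""
    # Normalize case and whitespace before comparing.
    norm = " ".join(candidate.lower().split())
    return any(
        SequenceMatcher(None, norm, removed).ratio() >= threshold
        for removed in REMOVED_TEXTS
    )
```

A candidate that differs only in capitalization or punctuation would score near 1.0 and be flagged, while unrelated text would fall well below the threshold and be routed to human review instead, which mirrors the split Facebook describes between automatic removal and reviewer determination.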
Facebook says that thanks to these improvements in its detection systems, its proactive detection rate for hate speech has climbed from 68% in its last report to 80%, coinciding with an increase in the volume of content it found and removed for violating its hate speech policy.
Yet another domain where Facebook’s AI is making a difference is fake accounts. At the company’s annual F8 developer conference in San Jose, CTO Mike Schroepfer said that in the course of a single quarter, Facebook takes down over a billion spammy accounts, over 700 million fake accounts, and tens of millions of pieces of content containing nudity and violence. AI is a top source of reporting across all of those categories, he said.
Concretely, Facebook disabled 1.2 billion fake accounts in Q4 2018 and 2.19 billion in Q1 2019.
The post Facebook now removes 80% of hate speech proactively appeared first on VentureBeat.