Facebook takes action on 26.9m pieces of hate speech content in Q4
By Mathew Ibiyemi
Facebook has released its Community Standards Enforcement Report for October through December 2020, showing that the company took action against 26.9 million pieces of hate speech content.
The Community Standards Enforcement Report tracks the company's progress and commitment to making Facebook and Instagram safe and inclusive.
The quarterly report shares metrics on how the company is doing at preventing and taking action on content that goes against its Community Standards while protecting the community’s safety, privacy, dignity and authenticity.
The latest report shows positive strides in reducing the prevalence of harmful content and provides greater transparency and accountability around content moderation operations across Facebook’s products.
It includes metrics across 12 policies on Facebook and 10 policies on Instagram.
During the fourth quarter of 2020, Facebook took action on: “6.3 million pieces of bullying and harassment content, up from 3.5 million in Q3, due in part to updates in our technology to detect comments.
“6.4 million pieces of organized hate content, up from 4 million in Q3; 26.9 million pieces of hate speech content, up from 22.1 million in Q3, due in part to updates in our technology in Arabic, Spanish and Portuguese.
“2.5 million pieces of suicide and self-injury content, up from 1.3 million in Q3, due to increased reviewer capacity”.
During the fourth quarter of 2020, the company took action on Instagram on: “5 million pieces of bullying and harassment content, up from 2.6 million in Q3, due in part to updates in our technology to detect comments”.
“308,000 pieces of organized hate content, up from 224,000 in Q3; 6.6 million pieces of hate speech content, up from 6.5 million in Q3; 3.4 million pieces of suicide and self-injury content, up from 1.3 million in Q3, due to increased reviewer capacity”.
“Our goal is to get better and more efficient at enforcing our Community Standards. We do this by increasing our use of Artificial Intelligence (AI), by prioritizing the content that could cause the most immediate, widespread, and real-world harm, and by coordinating and collaborating with outside experts,” said Kojo Boakye, Director of Public Policy, Africa.
Facebook plans to share additional metrics on Instagram and add new policy categories on Facebook.
“Efforts are also being made to externally audit the metrics of these reports while making the data more interactive so people can understand it better.
“We will continue to improve our technology and enforcement efforts to keep harmful content off of our apps,” the company said.