September 20, 2020

Facebook says coronavirus impacted content moderation



The social network started relying more on technology than human reviewers.


Image by Pixabay/Illustration by CNET

Facebook said Tuesday that the novel coronavirus affected how many people could review content that violated its rules against suicide and self-injury on the main social network, along with child nudity and sexual exploitation on Instagram.

From April to June, the company said that it took action on fewer pieces of that type of offensive content because it sent its content reviewers home. Users also couldn’t always appeal a content moderation decision. 

Facebook relies on a mix of human reviewers and technology to flag offensive content. But some content, including material involving suicide and sexual exploitation, is trickier to moderate, so Facebook relies more on people for those decisions. The company has faced criticism and a lawsuit from content moderators who alleged they suffered from symptoms of post-traumatic stress disorder after repeatedly reviewing violent images.

“Despite these decreases, we prioritized and took action on the most harmful content within these categories. Our focus remains on finding and removing this content while increasing reviewer capacity as quickly and as safely as possible,” Facebook said in a blog post.

The company said it was unable to determine how prevalent violent and graphic content, as well as adult nudity and sexual activity, was on its platforms in the second quarter because of the impact of the coronavirus. Facebook routinely publishes a quarterly report on how it enforces its community standards.

Facebook has also been under fire for not doing enough to combat hate speech, an issue that prompted an ad boycott in July. On Monday, NBC News reported that an internal investigation at the company found thousands of groups and pages on the social network supporting QAnon, a conspiracy theory alleging there’s a “deep state” plot against President Donald Trump and his supporters.

Facebook said it took action on 22.5 million pieces of content for violating its rules against hate speech in the second quarter, up from 9.6 million pieces in the first quarter. Facebook attributed the jump to the use of automated technology, which helped the company proactively detect hate speech. The proactive detection rate for hate speech on Facebook increased from 89% to 95% from the first quarter to the second.

The proactive detection rate for hate speech on Instagram rose from 45% to 84% during the same period. Instagram took action against 808,900 pieces of content for violating its hate speech rules in the first quarter, and that number jumped to 3.3 million in the second quarter.

The company also took action on 8.7 million pieces of content for violating its rules against terrorism in the second quarter, up from 6.3 million in the first quarter. 


