Facebook admits 4% of accounts were fake

Facebook removed more than 20 million pieces of adult nudity or pornography in three months
It also said Facebook "disabled" about 583 million fake accounts in Q1 - "most of which were disabled within minutes of registration". The company credited better detection, even as it said computer programs have trouble understanding context and tone of language.

The company removed or put a warning screen for graphic violence in front of 3.4 million pieces of content in the first quarter, almost triple the 1.2 million a quarter earlier, according to the report.

The report was Facebook's first breakdown on how much material it removes for violating its policies.

This, the company says, is because there is little of it in the first place and because most is removed before it is seen.

The company previously enforced community standards by having users report violations, which trained staff would then review.

The prevalence of graphic violence was higher: an estimated 22 to 27 of every 10,000 pieces of content viewed contained it - an increase from the previous quarter that suggests more Facebook users are sharing violent content on the platform, the company said.

The response to extreme content on Facebook is particularly important given that it has come under intense scrutiny amid reports of governments and private organizations using the platform for disinformation campaigns and propaganda.

Facebook also increased the amount of content taken down using new AI-based tools, which find and flag material for moderation without needing individual users to report it as suspicious.

The first of what will be quarterly reports on standards enforcement should be as notable to investors as the company's quarterly earnings reports.

The company said in the first quarter it took action on 837 million pieces of content for spam, 21 million pieces of content for adult nudity or sexual activity and 1.9 million for promoting terrorism.

Of course, the authors note, while such AI systems are promising, it will take years before they are effective at removing all objectionable content.

Facebook is struggling to block hate speech posts, conceding its detection technology "still doesn't work that well" and that flagged posts need to be checked by human moderators.

Facebook also disclosed that it disabled almost 1.3 billion fake accounts in the six months ending in March.

Several categories of violating content outlined in Facebook's moderation guidelines - including child sexual exploitation imagery, revenge porn, credible violence, suicidal posts, bullying, harassment, privacy breaches and copyright infringement - are not included in the report.

While artificial intelligence is able to sort through nearly all spam and content glorifying al-Qaeda and ISIS and most violent and sexually explicit content, it is not yet able to do the same for attacks on people based on personal attributes like race, ethnicity, religion, or sexual and gender identity, the company said in its first ever Community Standards Enforcement Report.

"We use a combination of technology, reviews by our teams and reports from our community to identify content that might violate our standards", the report says.

"My top priorities this year are keeping people safe and developing new ways for our community to participate in governance and holding us accountable", wrote Facebook CEO Mark Zuckerberg in a post, adding: "We have a lot more work to do".
