Facebook said it recently developed metrics to review the content shared on its platform, and the transparency report covers content posted from October 2017 through March 2018. Most of the actions taken involved removing spam content and the fake accounts used to distribute that spam.
Facebook today revealed for the first time how much sex, violence and terrorist propaganda has infiltrated the platform, and whether the company has successfully taken the content down. It said it disabled 583 million fake accounts.
In a post on the company's website, vice president of product Guy Rosen said Facebook still has a lot of work to do to prevent abuse.
"We estimate that fake accounts represented approximately 3-4 per cent of monthly active users on Facebook during Q1 2018 and Q4 2017", the report said.
Facebook said in a written report that of every 10,000 pieces of content viewed in the first quarter, an estimated 22 to 27 pieces contained graphic violence, up from an estimate of 16 to 19 late last year.
Facebook's vice president of product management, Guy Rosen, said in a blog post Tuesday about the newly released report that nearly all of the 837 million spam posts Facebook took down in the first quarter of 2018 were found by Facebook before anyone had reported them.
The last stat that Facebook highlighted was hate speech; the company admitted its technology wasn't very good at picking it up, so such content is still checked by human review teams.
"It's why we're investing heavily in more people and better technology to make Facebook safer for everyone". Facebook noted that while its artificial intelligence technology found and flagged many standard violations, more progress needed to be made. In April, Facebook published its internal guidelines on how it decides to remove posts that include hate speech, violence, nudity, terrorism and more. The company has evaluated thousands of apps to see if they had access to large amounts of data, and will now investigate those it has identified as potentially misusing that data, it said in a blog post.
As for graphically violent content, Facebook said more than 3.4 million posts were either taken down or given warning labels, 86% of which were spotted by its detection tools.
"It's also why we are publishing this information". This means the social media company still relies on its users and reviewers to flag hate speech, and it will take some time for its AI to learn to recognize sarcasm and detect abusive speech.
"In addition, in many areas - whether it's spam, porn or fake accounts - we're up against sophisticated adversaries who continually change tactics to circumvent our controls, which means we must continuously build and adapt our efforts".