Facebook disclosed earlier this week its efforts to fight back against fake and offensive content on its platform, revealing that in the first quarter it took action on 3.4 million pieces of content containing graphic violence, up from 1.2 million in the fourth quarter of last year.
In a blog post, the social media company said the increase was due in large part to enhancements to its detection technology, including the use of photo matching to add warnings to photos that match ones previously marked as disturbing. Facebook said photo matching accounted for about 70 percent of the first-quarter increase.
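Facebook has not detailed how its photo matching works, but the general technique is to fingerprint each upload and compare it against fingerprints of previously flagged images, so that resized or re-encoded copies still match. Below is a minimal Python sketch using a simple "average hash"; the hash function, the distance threshold, and the flagged_hashes store are illustrative assumptions, not Facebook's actual system.

```python
# A minimal sketch of hash-based photo matching. Facebook has not
# published its algorithm; this only illustrates the general idea of
# comparing new uploads against previously flagged images.
from PIL import Image

def average_hash(path, size=8):
    """Downscale to a tiny grayscale image and compare each pixel
    against the mean brightness, yielding a 64-bit fingerprint."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming_distance(a, b):
    """Count the bits that differ between two fingerprints."""
    return bin(a ^ b).count("1")

# Hypothetical store of fingerprints for images previously
# marked as disturbing by reviewers.
flagged_hashes = {average_hash("known_disturbing.jpg")}

def needs_warning(path, threshold=5):
    """Flag an upload whose fingerprint is close to any flagged one;
    a small threshold tolerates resizing and re-compression."""
    h = average_hash(path)
    return any(hamming_distance(h, f) <= threshold for f in flagged_hashes)
```

Because near-duplicate copies hash to nearly identical fingerprints, a single reviewer decision can automatically cover every re-upload of the same image, which is consistent with the scale of the increase Facebook attributed to this technique.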
The social media company also said that during the first quarter, it found and flagged about 86 percent of the content it took action on before any user reported it, and addressed the remaining 14 percent only after it was reported. In the fourth quarter of 2017, Facebook flagged around 72 percent of such content without having to be alerted.
As for fake accounts, Facebook said it disabled close to 1.3 billion of them over the past two quarters, many of them bots created to spread spam or carry out other illicit activity, such as scams. Of those, 583 million accounts were disabled during the first quarter, down from 694 million in the fourth quarter, and most were disabled within minutes of registering on the social media platform.
The disclosure of this data is part of Facebook’s efforts to increase transparency and to appease upset users, lawmakers and regulators. Facebook has been in “damage control” mode since mid-March, when news broke that the now-defunct political consulting firm Cambridge Analytica had accessed the data of 87 million Facebook users without their consent.
“We want to protect and respect both expression and personal safety on Facebook. Our goal is to create a safe and welcoming community for the more than two billion people who use Facebook around the world, across cultures and perspectives,” Facebook wrote.