Facebook, the social media giant, announced on Wednesday (May 23) that it has improved two-factor authentication, an industry best practice for providing additional account security.
In a blog post, Scott Dickens, a product manager at Facebook, said the site is making it easier to enable two-factor authentication with a streamlined setup flow that walks users through the process. The company is also expanding the ways in which users can secure their accounts with a second factor without needing to register a phone number.
“We previously required a phone number in order to set up two-factor authentication, to help prevent account lock-outs. Now that we have redesigned the feature to make the process easier to use third-party authentication apps like Google Authenticator and Duo Security on both desktop and mobile, we are no longer making the phone number mandatory,” wrote Dickens.
He added that this approach is an industry best practice for providing additional account security. “We continue to encourage enabling two-factor authentication to add an extra layer of protection to your Facebook account,” he wrote.
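Facebook’s post does not spell out how those authenticator apps generate their codes, but apps such as Google Authenticator typically implement the time-based one-time password (TOTP) scheme standardized in RFC 6238: the site and the app share a secret, and each independently derives a short numeric code from that secret and the current time. The following is a minimal Python sketch of that scheme, assuming the common 6-digit, 30-second parameters; the secret shown is a demo value, not tied to any real account.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, digits: int = 6, period: int = 30) -> str:
    """Generate a time-based one-time password (RFC 6238 sketch).

    secret_b32 is the base32-encoded shared secret that a site shows
    (usually as a QR code) when two-factor authentication is enabled.
    """
    key = base64.b32decode(secret_b32.upper())
    # Number of 30-second intervals elapsed since the Unix epoch.
    counter = int(time.time()) // period
    msg = struct.pack(">Q", counter)
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    # Dynamic truncation (RFC 4226): read 4 bytes at an offset taken
    # from the last nibble of the HMAC digest.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# Demo secret for illustration only.
print(totp("JBSWY3DPEHPK3PXP"))
```

Because the code depends only on the shared secret and the clock, no phone number or SMS delivery is involved, which is what allows a site to offer this kind of second factor without requiring a registered phone number.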
This move comes as the social media company takes steps to protect users’ data and privacy amid a scandal in which Cambridge Analytica, the now-defunct political consulting firm, accessed the data of 87 million Facebook users without their consent. The company has been making its policies more transparent, working to rid its platform of fake news and misinformation, and conducting an “apology tour” to win back users’ trust.
Earlier this month, Facebook disclosed that in the first quarter it took action on 3.4 million pieces of content, up from 1.2 million in the fourth quarter of last year. In a blog post, the social media company said the increase was due in large part to enhancements to its detection technology, including the use of photo matching to add warnings to photos that match ones previously marked as disturbing; Facebook said that technique was responsible for about 70 percent of the first-quarter increase.
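Facebook has not published the details of its photo-matching system, but the general idea of catching re-uploads of previously flagged images can be illustrated with a simple perceptual hash: shrink the image, compare each pixel to the mean brightness, and treat two images whose hashes differ in only a few bits as near-duplicates. The sketch below uses the third-party Pillow library and hypothetical file names; production systems rely on far more robust hashing, so this is only an illustration of the matching step, not Facebook’s actual method.

```python
from PIL import Image  # third-party: pip install Pillow

def average_hash(path: str, size: int = 8) -> int:
    """Compute a simple perceptual hash: grayscale, shrink to size x size,
    then set one bit per pixel depending on whether it is above the mean."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming_distance(a: int, b: int) -> int:
    """Count differing bits; a small distance suggests near-duplicate images."""
    return bin(a ^ b).count("1")

# Hypothetical files: a previously flagged photo and a new upload.
# A threshold of ~5 bits out of 64 is a common rule of thumb for a match.
if hamming_distance(average_hash("known_flagged.jpg"),
                    average_hash("new_upload.jpg")) <= 5:
    print("near-duplicate of previously flagged photo")
```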
Facebook also said that during the first quarter it found and flagged about 86 percent of the content it took action on before a user reported it, and addressed the remaining 14 percent after it was reported. In the fourth quarter of 2017, Facebook flagged around 72 percent of such content without being alerted by users.
As for fake accounts, Facebook said it disabled close to 1.3 billion of them over the past two quarters, many of them bots created to spread spam or engage in other illegal activity, such as scams. Of those, 583 million were disabled during the first quarter, down from 694 million in the fourth quarter, and most were shut down within minutes of registering on the platform.