
Yes, Fake Accounts On Facebook Are Still A Problem, But Changes Are In The Works

by Hannah Golden

Following a year of damning reports about Facebook's role in letting users and content that violated its policies slip through the cracks, founder and CEO Mark Zuckerberg made clear on Thursday, Nov. 15, that the company was committed to taking responsibility for its mistakes. After several months of work to redeem itself, has anything changed? According to a new company report, yes, fake accounts on Facebook are still a problem, but the company appears to be getting better at detecting and removing them.

According to Facebook's transparency report, the company disabled a cumulative 1.5 billion fake accounts over the last six months, but the problem of fake news isn't going to be solved that easily. While Facebook might be taking the accounts down as fast as it can find them, "the prevalence of fake accounts on Facebook remained steady at 3 to 4 percent of monthly active users," per the report. With some 2.27 billion monthly active users as of the third quarter of 2018, that still works out to around 68 million potentially fake accounts at the low end of the scale, and roughly 91 million at the high end.
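For readers who want to check the math, here's a quick back-of-envelope sketch in Python, using only the user count and prevalence range reported by the company:

```python
# Back-of-envelope check of the figures above (a sketch, assuming the
# reported 2.27 billion monthly active users and 3-4 percent prevalence).
monthly_active_users = 2.27e9

low_end = monthly_active_users * 0.03   # 3 percent -> ~68.1 million
high_end = monthly_active_users * 0.04  # 4 percent -> ~90.8 million

print(f"Potentially fake accounts: {low_end / 1e6:.0f}M to {high_end / 1e6:.0f}M")
# Potentially fake accounts: 68M to 91M
```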

Over the last year-plus, the company has been under fire for, among other things, allowing the rise of fake accounts and the spread of disinformation on its platform, which were found to be linked in part to Russian operatives seeking to sway elections in the U.S.

"We were too slow to spot Russian interference," Zuckerberg said in a call to reporters on Thursday, Nov. 15, acknowledging the Times report from Wednesday. "But to suggest that we weren't interested in knowing the truth ... is simply untrue. People have been working on this nonstop for over a year. We're in a much stronger place today than we were in 2016, but we have more work to do as well."

The company has been tracking several broad categories of content that violate the platform's community standards, including bullying, hate speech, terrorist propaganda, fake accounts, spam, and child and adult nudity. The most recent Community Standards Enforcement Report shows that over the last year, the company has gotten more efficient at identifying and removing bad actors and content that violates those standards. A content appeals process is in place to correct any mistakenly removed content.

Zuckerberg says a team of 30,000 people is working to combat the problem. Since the company began tracking this content last year, it has made some progress, but it's not clear whether the platform has fared any better or worse at preventing this content from reaching the site in the first place.

"The majority of those [fake accounts] are very short-lived," Guy Rosen, Facebook's Vice President of Product Management, told reporters on the press call. "We take down millions of fake accounts every day; the actual prevalence has remained steady." It's not clear how many users were and continue to be exposed to these fake accounts despite the short duration.

And their potential effect on the midterm elections in the U.S.? No news is good news, apparently. "In terms of the election, we didn't see a specific spike around that," he added.

As for "fake news," Zuckerberg Thursday added that the company's goal was to lower the overall prevalence of such content and reduce "the spread of sensational and provocative content," saying that users tend to engage with it more than other content. He pointed out so-called "borderline content," which may not overtly violate community standards but may, for example, spread disinformation. The aim, he says, is that it "gets less distribution and not more."

The report comes at a time when the company has been on the defensive for more than a year over its associations and practices regarding election interference as well as breaches of user data. But increased transparency about what the company is doing about bad actors and violating content, Zuckerberg stressed, would be part of its ongoing approach to those concerns.

In the meantime? Keep flagging those Russian bots, guys.