Facebook added warning labels to more than 180 million posts containing election misinformation between March and Nov. 3, offering the first accounting of the scale of the problem the company faces in countering lies about the democratic process on its service.
The labels gave users additional context about voting and the election’s projected winners, along with links to information from credible sources.
Although Facebook patted itself on the back for flagging misinformation, it offered no numbers for how many election-related lies it missed. So it’s unclear whether those 180 million posts represented the majority or just a sliver of the misinformation that flooded the service.
Facebook released the data on Thursday as lawmakers from both parties pummel it over its handling of the election. Democrats argue that Facebook’s labels don’t do enough to stop the spread of misinformation, while Republicans say the company unfairly polices posts by conservatives.
In addition to flagging election misinformation, Facebook said it removed 265,000 posts on Instagram and Facebook for violating its policy against voter suppression.
The company also rejected U.S. political ads 3.3 million times after ad publishers failed to complete the authorization process, which was made more strict leading up to the election. One week before the vote, Facebook temporarily halted all political ads and has yet to say when it will reinstate them.
“All of these efforts were part of our goals of … protecting the integrity of our election by fighting interference, misinformation, voter suppression,” Guy Rosen, Facebook’s vice president of integrity, said on a call with the media on Thursday.
The report also comes as Facebook tries to prove it has a better handle on elections following the Cambridge Analytica scandal, in which Facebook allowed a data analytics firm to harvest user information in an attempt to influence the 2016 election.
Last week, Twitter released a similar update about its elections work, saying it had labeled nearly 300,000 tweets for disputed and potentially misleading content from Oct. 27 to Nov. 11. That represented 0.2% of U.S. election-related tweets posted during that period.
Facebook on Thursday also touted the work it did to help people access reliable election information and register to vote. The company said it had helped nearly 4.5 million people register to vote via Facebook, Instagram, and Messenger, and that it had helped 100,000 people sign up as election poll workers.
Facebook said that since the debut of its voter information center in June, about 140 million people have visited the hub, which connects users to information from credible political organizations, media groups, and governmental authorities. On Election Day alone, 33 million people visited the hub.
As it continues a crackdown on hate speech, Facebook also provided a new metric as part of its quarterly content moderation report released on Thursday: the prevalence of hate speech. Between July and September, hate speech appeared in 0.10% to 0.11% of content viewed, or 10 to 11 of every 10,000 views.
“From the beginning, we’ve mentioned how prevalence we believe is really the correct north star metric for work like this,” Rosen said. “It represents not just what we caught but what we missed and what the actual experiences are of the people on this platform.”