A whistleblower’s power: Key takeaways from the Facebook Papers

Interviews with dozens of current and former employees and a trove of internal documents show how the social media company inflamed real-world harms

October 26, 2021 at 7:00 a.m. EDT

A personal decision by Facebook CEO Mark Zuckerberg leads to a crackdown on dissent in Vietnam. Measures to suppress hateful, deceptive content are lifted after the American presidential election in 2020, as pro-Trump groups disputing the legitimacy of the election experience “meteoric” growth. A dummy test account on Facebook in India is flooded with violent anti-Muslim propaganda — which remains visible for weeks on the real account of a frightened Muslim college student in northern India.

A trove of internal Facebook documents reveals that the social media giant has privately and meticulously tracked real-world harms exacerbated by its platforms, ignored warnings from its employees about the risks of their design decisions and exposed vulnerable communities around the world to a cocktail of dangerous content.

Disclosed to the U.S. Securities and Exchange Commission by whistleblower Frances Haugen, the Facebook Papers were provided to Congress in redacted form by Haugen’s legal counsel. The redacted versions were reviewed by a consortium of news organizations, including The Washington Post, which obtained additional internal documents and conducted interviews with dozens of current and former Facebook employees.

A mix of presentations, research studies, discussion threads and strategy memos, the Facebook Papers provide an unprecedented view into how executives at the social media giant weigh trade-offs between public safety and their own bottom line. Some of the documents were first reported by the Wall Street Journal.

Here are key takeaways from The Post’s investigation:

Zuckerberg’s public claims often conflict with internal research

Haugen references Zuckerberg’s public statements at least 20 times in her SEC complaints, asserting that the CEO’s unique degree of control over Facebook forces him to bear ultimate responsibility for a litany of societal harms caused by the company’s relentless pursuit of growth.

The documents also show that Zuckerberg’s public statements are often at odds with internal company findings.

For example, Zuckerberg testified last year before Congress that the company removes 94 percent of the hate speech it finds before a human reports it. But in internal documents, researchers estimated that the company was removing less than 5 percent of all hate speech on Facebook.

Facebook spokeswoman Dani Lever denied that Zuckerberg “makes decisions that cause harm” and dismissed the findings, saying they are “based on selected documents that are mischaracterized and devoid of any context.”

The case against Mark Zuckerberg: How Facebook’s CEO chose growth and free speech over safety

Facebook dropped its guard before the Jan. 6 insurrection

During the run-up to the 2020 U.S. presidential election, the social media giant dialed up efforts to police content that promoted violence, misinformation and hate speech. But after Nov. 6, Facebook rolled back many of the dozens of measures aimed at safeguarding U.S. users. A ban on the main Stop the Steal group didn’t apply to the dozens of look-alike groups that popped up in what the company later concluded was a “coordinated” campaign, documents show.

By the time Facebook tried to reimpose its “break the glass” measures, it was too late: A pro-Trump mob was storming the U.S. Capitol.

Facebook officials said they planned exhaustively for the election and its aftermath, anticipated the potential for post-election violence, and always expected the challenges to last through the inauguration of President Biden on Jan. 20.

Inside Facebook, Jan. 6 violence fueled anger, regret over missed warning signs

Facebook fails to effectively police content in much of the world

For all Facebook’s troubles in North America, its problems with hate speech and misinformation are dramatically worse in the developing world. Documents show that Facebook has meticulously studied its approach abroad, and is well aware that weaker moderation in non-English-speaking countries leaves the platform vulnerable to abuse by bad actors and authoritarian regimes.

One 2020 summary shows that the vast majority of the company's efforts against misinformation, 84 percent, went toward the United States, with just 16 percent going to the "Rest of World," including India, France and Italy.

Though Facebook considers India a top priority, activating large teams to engage with civil society groups and protect elections, the documents show that Indian users experience Facebook without critical guardrails common in English-speaking countries.

Facebook’s Lever said the company has made “progress,” with “global teams with native speakers reviewing content in over 70 languages along with experts in humanitarian and human rights issues.”

“We’ve hired more people with language, country and topic expertise,” Lever said, adding that Facebook has “also increased the number of team members with work experience in Myanmar and Ethiopia to include former humanitarian aid workers, crisis responders and policy specialists.”

How Facebook neglected the rest of the world, fueling hate speech and violence in India

Facebook chooses maximum engagement over user safety

Zuckerberg has said the company does not design its products to persuade people to spend more time on them. But dozens of documents suggest the opposite.

The company exhaustively studies potential policy changes for their effects on user engagement and other factors key to corporate profits. Amid this push for user attention, Facebook abandoned or delayed initiatives to reduce misinformation and radicalization.

One 2019 report tracking a dummy account set up to represent a conservative mother in North Carolina found that Facebook’s recommendation algorithms led her to QAnon, an extremist ideology that the FBI has deemed a domestic terrorism threat, in just five days. Still, Facebook allowed QAnon to operate on its site largely unchecked for another 13 months.

“We have no commercial or moral incentive to do anything other than give the maximum number of people as much of a positive experience as possible,” Facebook’s Lever said, adding that the company is “constantly making difficult decisions.”

Facebook took years to implement a simple fix for anger and misinformation

Starting in 2017, Facebook’s algorithm gave emoji reactions such as “angry” five times the weight of “likes,” boosting posts that drew those reactions in users’ feeds. The theory was simple: Posts that prompted lots of reaction emoji tended to keep users more engaged, and keeping users engaged was the key to Facebook’s business.

The company’s data scientists eventually confirmed that the “angry” reaction, along with “wow” and “haha,” occurred more frequently on “toxic” content and misinformation.

Last year, when Facebook finally set the weight on the angry reaction to zero, users began to get less misinformation, less “disturbing” content and less “graphic violence,” company data scientists found. Lever said the company continues to refine its understanding of negative experiences on the platform in order to reduce their spread.

Five points for anger, one for a ‘like’: How Facebook’s formula fostered rage and misinformation

Elizabeth Dwoskin, Shibani Mahtani, Cat Zakrzewski, Craig Timberg, Will Oremus and Jeremy Merrill contributed to this report.

Correction

A previous version of this article incorrectly described the content of congressional testimony by Facebook's CEO, Mark Zuckerberg. Zuckerberg testified that the company removes 94 percent of the hate speech it finds before a human reports it, not just that it removes 94 percent of the hate speech it finds. The article has been corrected.

Read the series: Facebook under fire

The Facebook Papers are a set of internal documents that were provided to Congress in redacted form by Frances Haugen’s legal counsel. The redacted versions were reviewed by a consortium of news organizations, including The Washington Post.

The trove of documents shows how Facebook CEO Mark Zuckerberg has, at times, contradicted, downplayed or failed to disclose company findings on the impact of its products and platforms.

The documents also provided new details of the social media platform’s role in fomenting the storming of the U.S. Capitol. An investigation by ProPublica and The Washington Post found that Facebook groups swelled with at least 650,000 posts attacking the legitimacy of Joe Biden’s victory between Election Day and Jan. 6.

Facebook engineers gave extra value to emoji reactions, including ‘angry,’ pushing more emotional and provocative content into users’ news feeds.
