Facebook’s America-centrism Is Now Plain for All to See

The vast majority of Facebook’s users live outside the United States. Yet most of the company’s content moderation efforts — fully 87 percent — are devoted to American posts.

October 4, 2021
Facebook CEO Mark Zuckerberg testifies via videoconference as U.S. Senator Thom Tillis (R-NC) listens during a Senate Judiciary Committee hearing about Facebook and Twitter's content moderation, in Washington DC, November 17, 2020. (Hannah McKay/Sipa USA)

More than 90 percent of Facebook’s monthly active users live outside the United States and Canada. You’d think the company would, therefore, invest more than 90 percent of its time on content moderation outside North America. But the opposite is true. Drawing on internal Facebook materials, The Wall Street Journal uncovered that in 2020, Facebook employees and contractors spent more than 3.2 million hours finding, labelling and removing false or misleading content. Fully 87 percent of that time was spent on posts in the United States.

Think about that. More than 90 percent of the users receive just 13 percent of the company’s effort on the crucial issue of content quality control. That fact has gone under-remarked compared with the other revelations in The Wall Street Journal’s investigation, but surely it’s the most important finding for the 96 percent of the world’s population who reside outside the United States. The company devoted almost three times as many hours to the problem of “brand safety” as to moderating posts. This tells every country where Facebook’s priorities lie.

And these statistics raise all sorts of questions. To what extent is Facebook using artificial intelligence (AI) to moderate content outside the United States? Content moderation is time-consuming. Despite the inflated promises around AI systems, much of this work still needs to be done by humans. What does the comparative lack of moderation mean for Facebook content elsewhere? Is more content deleted erroneously? How different is Facebook in another English-speaking country, such as Australia or New Zealand, from the platform in the United States? And what about in countries where the content appears in languages for which Facebook does not employ any content moderators at all?

In some cases, countries are fighting back. In several recent rulings the German Federal Court of Justice has laid out how Facebook’s Community Standards interact with Germany’s constitutional requirements. One crucial element is known as the Drittwirkung doctrine, generally translated as “third-party effect.” As Matthias Kettemann and Torben Klausa explained in a piece for Lawfare, “because Germany’s constitution is not value-free, these values — embedded for example in fundamental rights — radiate into nonpublic fields of law and private law relationships, like contracts. Therefore, courts have to consider rights in their reading, potentially resulting in an indirect application of fundamental rights to private actors.”

This has now happened to Facebook. The German court has ruled that Facebook must implement new procedural safeguards around content deletion, including telling users when their content is deleted and why, as well as giving them an opportunity to respond and appeal. All this will require considerably more financial investment in Germany, and it may lay the groundwork for other countries to follow suit. Kettemann and Klausa summarized the court’s decisions best: “Content moderation is messy, difficult, and costs a lot of money — and Facebook has to pay for it and get better at it.”

Meanwhile, The Gambia is using international law to understand Facebook better. The smallest country in mainland Africa is pursuing a genocide case before the International Court of Justice over the expulsion of the Rohingya from Myanmar. Over a year ago, The Gambia requested data from Facebook about Myanmar officials’ deleted posts that promoted human rights abuses against the Rohingya. Although Facebook provided information to the United Nations Independent Investigative Mechanism for Myanmar, the company refused to supply any deleted posts to The Gambia’s legal team, citing privacy reasons. The Gambia then took Facebook to court in the United States.

In September 2021, a District of Columbia judge sided with The Gambia. The judge, Zia Faruqui, found that “Facebook taking up the mantle of privacy rights is rich with irony. News sites have entire sections dedicated to Facebook’s sordid history of privacy scandals.” The case may reveal more about how Facebook’s algorithms spread Myanmar officials’ posts and, perhaps, why Facebook deleted those posts when it did.

Such cases may be the start of a new international approach to social media companies. The consequences of under-investment may become graver than public opprobrium. The legal scholar Rebecca Hamilton has suggested that scholars and practitioners might consider a new category of “platform-enabled crimes,” which could include, for example, the expulsion of the Rohingya people from Myanmar.

All of this raises an even broader question. Do the countries of the world want companies like Facebook to do more moderation, or do we want to think carefully about new models of content moderation altogether, ones that do not depend on more Facebook? I have suggested elsewhere that it is worth considering e-courts as another adjudicatory mechanism, for example. Now is just the right time for more out-of-the-box ideas.

Finally, the revelations remind us, once again, of the broader nature of transparency. Even Facebook’s own Oversight Board has now called for greater transparency from the company. Transparency on budgets is as important as, if not more important than, transparency on content detection and deletion. Budgets reveal priorities. Facebook’s US-centrism is now plain for all to see.

The opinions expressed in this article/multimedia are those of the author(s) and do not necessarily reflect the views of CIGI or its Board of Directors.

About the Author

Heidi Tworek is a CIGI senior fellow and an expert on platform governance, the history of media technologies, and health communications. She is a Canada Research Chair, associate professor of history and public policy, and director of the Centre for the Study of Democratic Institutions at the University of British Columbia, Vancouver campus.