In the lead-up to the US presidential impeachment hearings, platforms such as YouTube, Facebook, Instagram, Twitter and Reddit were faced with a question: should they stop the publication and circulation of the whistle-blower’s name?

On the surface, the answer seems clear. Publishing the name, while not illegal, is clearly harmful to both the security of the individual and the integrity of the whistle-blowing process; who would come forward in the future if they believed their name would be shared online? In the name of free speech (perhaps a warped view of free speech), Twitter and Reddit decided to allow the name to circulate. But Facebook, Instagram and YouTube banned users from publishing the name.

In the world of online speech moderation, this is the low-hanging fruit: a clear policy determination made about an issue taking place in the United States with posts authored in English. And still, the whistle-blower’s name was shared widely. Now imagine if this issue had arisen in a more challenging grey zone — say it took place in a country where the platform has no physical presence or jurisdiction and where the common language isn’t spoken by anyone who works for the platform. Imagine the protected information was spreading at a much larger scale — millions of posts. Those affected by the content in question wouldn’t stand a chance.

This is the problem of content moderation, and it is a wicked one. As I have written in The Case for Platform Governance, there are easy and hard platform governance issues. Making ads more transparent, running digital literacy campaigns and even coming up with competition policies are — in comparison — easy issues. But content moderation is hard. It involves sensitive issues, such as weighing free speech against the protection of the rights of those harmed by speech, and bumps into national legal interpretations that are steeped in historical and cultural contexts.

Despite the difficulty, it is urgent that we figure out how to effectively and fairly moderate content. If we don’t, the spread of harmful content — be it mass shooting videos, misleading information about vaccines, hate speech or even genocidal incitement — will continue to go viral and cause real-world damage.

Enter the Facebook oversight board. The idea was first proposed by Harvard law professor Noah Feldman in early 2018 and then suggested by Mark Zuckerberg on Ezra Klein’s podcast in April 2018; it has now been formalized in a charter. The board will be a 40-person international committee tasked with making binding and precedent-setting decisions about what content is allowed on the site. This is a big idea to address a big problem. And it represents a fundamental shift for the company — part of a wider shift in policy to address election interference, disinformation campaigns and abuse of ad targeting with far more urgency and seriousness.

But it is also indicative of the very Silicon Valley notion that problems, however wicked, can be solved with tweaks to an operating system. Unfortunately, content moderation is a challenging issue due to more than the behaviour of bad actors or tough take-down decisions — the design and function of the platform itself present roadblocks too.

More than a billion pieces of content are posted to Facebook every day, and that content spreads with immense speed, propelled by a platform designed to maximize reach and engagement. Those billions of posts can all be microtargeted to individuals in a manner designed to nudge their behaviour. As a result, subtle content (which is far harder to moderate) can be just as powerful as egregious lies. Because hiring humans to review content at this scale would break the business model of these companies, automated systems are increasingly relied on to do the work of finding and flagging harmful and incorrect content. But these systems have to sift through content posted in hundreds of languages, each grounded in unique cultural contexts that are not easily written into an artificial intelligence system.

None of these problems will be fixed by an oversight board.

For me, the nature of this problem signals a need for democratic governance. The very need for Facebook to create this board is a clear sign of the failure of democratic states to govern the digital public sphere. In democratic societies, it is the responsibility of governments to set the rules for speech and to enforce them. Governments have done this for the broadcast world. But they have utterly failed to either evolve or enforce the existing rules for the digital world. They have abdicated their responsibility, creating a governance void slowly being filled by a private company making key decisions about speech.

Many argue that it is preferable to let private companies, rather than the state, adjudicate speech. And, at the level of individual decisions, I agree. We do not want governments making case-by-case speech determinations on millions or billions of posts. But democratic governments should be setting the rules, and those rules must be specific and nuanced about the precise kind of speech allowed or not allowed within a jurisdiction. This notion of government involvement gets us into uncomfortable territory — governments have imperfect systems of accountability and there is a real risk of government overstep. But the alternative is to cede governance decisions to publicly traded, private companies sitting outside of our jurisdiction, driven by commercial interests and based on an American-style First Amendment absolutism that is not, for example, reflected in the Canadian Charter of Rights and Freedoms. Of course, even having these options is the luxury of living in a democracy. Those in illiberal or autocratic regimes would likely prefer Facebook’s version of free speech over their governments’.

While platform company design may favour a global standard for free speech that can be applied equally to everyone in the world, that is not how speech laws and governance work. The world is messier than that. Ultimately, the challenge of content moderation is structural; it is a function of the design of the platform itself bumping up against the nature of our governance systems. And this challenge is unlikely to be solved by an oversight board. It is time for democratic governments to step into this debate and to lead the difficult but urgent conversations ahead.

The opinions expressed in this article/multimedia are those of the author(s) and do not necessarily reflect the views of CIGI or its Board of Directors.
  • Taylor Owen is a CIGI senior fellow and the editor of Models for Platform Governance. He is an expert on the governance of emerging technologies, journalism and media studies, and on the international relations of digital technology. 
