In the past few years, several catchwords have emerged to describe the technology policy issues surrounding the intersection of user-generated content, online intermediaries for that content, and the host of legal and political frameworks that shape the relationships between them. Policy makers and journalists working in the area have had to sift through a cluster of similar-sounding concepts that are often used interchangeably — three common ones being platform governance, platform responsibility and platform regulation.
It’s not always easy to navigate this rocky and shifting conceptual and theoretical territory. But at this moment, when representatives from more than a dozen countries are about to meet (as the International Grand Committee), it’s worth unpacking the first of these terms: what exactly is platform governance?
Platform Governance as a Holistic Perspective on Content Moderation
Content moderation has become a matter of significant public interest and policy-oriented attention of late, and the decisions made by companies about whether to uphold, remove or downrank the tweets or posts made by certain high-profile users are now seen as political and have drawn considerable scrutiny. But moderation entails more than just setting policies around speech and enforcing them with a mix of human labour and automated decision making at scale. As the technology law scholar James Grimmelmann wrote in an early and influential article, moderation should be understood more broadly as consisting of the set of rules, architectures and norms that “structure behaviour in a community to facilitate cooperation and prevent abuse.”
Channeling this kind of approach, some researchers have increasingly used “platform governance” to explicitly foreground a broader notion of content moderation — one that also includes, for example, the host of design decisions shaping the way content is filtered, presented, and can be interacted with — so as to capture all of the many facets of how a social network governs the activity of its participants.
As Sarah Roberts — a leading scholar who has done invaluable work on the human labour of moderation — recently pointed out, a major limitation of many policy discussions around content moderation is a lack of precision in delimiting exactly what qualifies as “content.” After all, companies moderate advertisements by commercial entities, the pages of businesses and public figures, and the authenticity of accounts, in addition to the text, images, video and audio posted by ordinary users.
Platform governance, as a term, can thus capture not only all of those activities (and the varying political and commercial incentives underlying them), but also the broader philosophy, norms and governance dynamics that an online space may have. For instance, Nathan Matias and Merry Mou have produced fascinating work examining alternative, community-driven models of platform governance, in which moderators and participants have far more of a say in shaping the policy decisions made by their communities than their counterparts have in the top-down, “industrial” moderation enacted on mega-platforms such as YouTube or Facebook. Nic Suzor, Tess Van Geelen and Sarah Myers West have proposed helpful, human rights-oriented categories that can be used to research and evaluate the various policies and practices companies apply in governing their online spaces.
Platform Governance as a Set of Legal and Political Relations
Another body of scholarship uses “platform governance” to refer more broadly to how platform companies — in particular, major intermediaries for user-generated content, such as Facebook, Twitter, WhatsApp and YouTube, but also “gig economy” firms, such as Uber and Airbnb — are enmeshed in international regulatory dynamics. This usage of the term is closer in meaning to the term “global governance,” as found in political science, or to the term “platform regulation,” which has caught on in European policy circles.
This scholarship focuses on how platform companies are currently regulated or governed at the national and global level, especially in relation to three policy areas: intermediary liability (online content regulation); data protection regulation; and competition policy. Often, governance gaps are highlighted. How do platform intermediaries challenge existing notions of market power (and thus competition enforcement)? Which markets and firms should be targeted by the limited resources of data protection authorities? How have interest groups pushed for various regulatory strategies, and how have these been supported or opposed by various governmental, industry and civil society actors?
In many ways, platform companies are political actors making important political decisions and engineering what has become the global infrastructure of free expression. Most platform users reside outside North America, yet the choices made in California — choices largely shaped in response to Western demands, debates and discourse — can affect the day-to-day lives of millions around the globe. While we’re increasingly talking about the internal politics of these “new governors,” we should not forget that these companies are constantly subject to governance on all fronts, and that their conduct of governance is directly shaped by a multitude of local, national and supranational political and regulatory factors.
Global platform governance thus consists of the interlinkages between the micro and macro levels described above. It may best be understood as the interplay between platform policies and a complex mesh of regulatory instruments, ranging from traditional hard legislation to soft, private regulation, informal codes of conduct, and various forms of transnational governance operating at the national, regional and global levels.