Does Deplatforming Trump Set a New Precedent for Content Moderation?

January 18, 2021
President Donald Trump holds a rally to contest the certification of the 2020 US presidential election results in Washington DC on January 6, 2021. (Reuters/Jim Bourg)

Following the January 6 riot at the US Capitol, US President Donald Trump was blocked or banned by a number of platforms. When Facebook decided to lock Trump’s account, chief executive Mark Zuckerberg said that “the risks of allowing the president to continue to use our service during this period are simply too great.” The decision, although not unwarranted, was unprecedented. Other leaders who have contributed to the spread of disinformation or incited violence still have access to social media platforms. And Trump’s earlier posts — which also contained varying degrees of harmful content — prompted little more than temporary content removal or warning labels.

We asked five experts two questions: Why did platforms take such a hard line this time? And does the widespread decision to deplatform Trump set a new precedent for content moderation? This article compiles their responses.


Jameel Jaffer, Columbia University

Twitter had no choice but to suspend Trump’s account last week, and the same is true of the other major platform companies. Before last week, the companies repeatedly bent and adjusted their rules to allow Trump’s speech to stay on their platforms. I think they were generally right to do this — not because they owed it to Trump, but because they owed it to the public. The public needed to know what Trump was saying, even when — and perhaps especially when — what he was saying was wrong or outrageous. But there are limits to this principle.

When the platforms concluded that Trump was using his account to encourage or incite imminent violence, they appropriately decided that leaving Trump’s speech up would make them complicit in real-world harms that couldn’t be addressed through the “marketplace of ideas” and couldn’t be undone by a future election. (I don’t doubt that the imminent Democratic control of the presidency and both houses of Congress also factored into their thinking.) All of this said, I think that even those who agree with me that the companies were right to deplatform Trump last week should be concerned about the immense power that a small number of platform companies have acquired as gatekeepers to public discourse. That power is a real challenge for democracy, even if it was used appropriately last week.

Shannon McGregor, the University of North Carolina

The violent insurrection, incited at least in part by Trump’s rhetoric in tweets and posts, was sadly a long-overdue wake-up call for social media platforms. Researchers and journalists examining far-right extremism and mis- and disinformation have been sounding the alarm for years. Many of Trump’s previous posts violated the platforms’ own content moderation policies but were mostly given a pass because he was the president. What last week’s events showed is that policies aimed at supporting democracy must be enforced especially against those with great power and influence, like the president. Context and timing matter as well. After years of hedging (given Trump’s repeated violations of various platforms’ policies), the platforms’ actions now — only after white lives were threatened and lost, and with mere days left in his presidency — invite a heap of cynicism. All that being said, the decision to deplatform was right for this terribly dangerous moment. At the same time, it sets a terribly dangerous precedent.

Jonathan Corpus Ong, the University of Massachusetts

For global scholars like me and my human rights activist and journalist colleagues, the questions of the moment are: Could a “great deplatforming” happen in other countries too? Will normative standards be set top-down in Silicon Valley, or will they emerge bottom-up in local contexts? Will platforms face more pressure from autocratic governments that might use last week’s events to justify social media regulation designed simply to silence their opponents? Or will platforms realize they need to build new collaborative spaces where they can work with local researchers and activists to review a dizzying range of “incitement to discrimination and violence”? Such incitement finds expression in local languages and meme cultures, is shaped by individual countries’ complex racial hierarchies, and is hosted not just by Silicon Valley-based platforms but also by Russian-, Japanese- and Chinese-owned sites that are less invested in US liberal democratic principles and thus less likely to follow one another’s lead.

One positive global consequence of deplatforming Trump, as well as his legion of QAnon supporters and extreme “conspiritualists,” is that it hurts the global supply chain of hateful propaganda. Populist publics from India to Turkey to the Philippines, and even activists in Hong Kong, found affirmation in the anti-elite and anti-mainstream media rhetoric peddled by Trump, as well as by the YouTube conspiracists and Instagram influencers who support him. The Great Deplatforming disrupts the transnational flows of racist, sexist and transphobic content that, for example, directly fed the organization and amplification of the white supremacist terror attacks in New Zealand.

Moving forward, we will need to pay attention to whether production economies for hateful propaganda become more local, increasing the urgency for more robust platform governance at the country level. We will also need to watch how countries with much weaker institutions could pass government regulations that might deal even greater harm than platforms’ lax policies.

Taylor Owen, the Centre for International Governance Innovation

The furor over Twitter’s and Facebook’s suspensions of Trump’s accounts, and the understandable free-speech debate that followed, are a distraction. They are a distraction from the failures of platform content moderation policies, which did not rein in a year of harmful speech circulating widely on their sites and which operate in a largely ad hoc manner, with platforms applying policies differently to their billions of users around the world. They are a distraction from the more consequential platform power on display last week — Google’s, Apple’s and Amazon Web Services’ deplatforming not of an account but of another platform. They are a distraction from the structural platform problem: the ways in which the design of the platforms themselves (their business models, scale and market concentration) is the root cause of many of the harms so clearly apparent. And most importantly, a focus on Trump’s accounts distracts us from the solution to these problems. If you believe platforms have too much unaccountable power, then the solution is democratic governance.

Heidi Tworek, the Centre for International Governance Innovation

Political scientists, historians, scholars of race, scholars of genocide, scholars of rhetoric and many others have long warned about the dangers of Trump’s administration. But there is also a major problem with focusing on the United States, as many, even within the United States, such as David Kaye and Rebecca MacKinnon, have long pointed out. Platform governance is a global question that has for too long taken its cues from American events.

Trump is not the first leader to use social media to support, stoke and defend violence. Why wasn’t Myanmar already enough to illustrate these dangers? After all, an independent commission found several years ago that Facebook had failed to react swiftly enough to prevent anti-Rohingya messages from circulating. In November 2018, a Facebook product policy manager wrote in response to that report: “The report concludes that, prior to this year, we weren’t doing enough to help prevent our platform from being used to foment division and incite offline violence. We agree that we can and should do more.” While Facebook acted within hours of the January 6 attack, it took years to acknowledge fault in Myanmar. That non-US history and context remind us of the inconsistency of platforms’ principles and of their problematic focus on the countries where the vast majority of their employees reside.

The opinions expressed in this article/multimedia are those of the author(s) and do not necessarily reflect the views of CIGI or its Board of Directors.

About the Authors

Jameel Jaffer is the director of the Knight First Amendment Institute at Columbia University.

Shannon McGregor is an assistant professor at the Hussman School of Journalism and Media, and a senior researcher with the Center for Information, Technology, and Public Life, both at the University of North Carolina, Chapel Hill.

Jonathan Corpus Ong is an associate professor of global digital media at the University of Massachusetts, the co-editor-in-chief of Television & New Media and a research fellow at the Shorenstein Center on Media, Politics and Public Policy, located at the Harvard Kennedy School.

Taylor Owen is a CIGI senior fellow and the host of the Big Tech podcast. He is an expert on the governance of emerging technologies, journalism and media studies, and on the international relations of digital technology. 

Heidi Tworek is a CIGI senior fellow and an expert on platform governance, the history of media technologies, and health communications. She is a Canada Research Chair, associate professor of history and public policy, and director of the Centre for the Study of Democratic Institutions at the University of British Columbia, Vancouver campus.