Should Big Tech Be Setting the Terms of Political Speech?

October 5, 2020
(Reuters/Jack Gruber, USA Today network via Imagn Content Services, LLC)
Following the January 6 riot at the US Capitol, President Donald Trump was blocked or banned by a number of platforms. When Facebook decided to lock Trump's account, chief executive Mark Zuckerberg said that "the risks of allowing the president to continue to use our service during this period are simply too great." Prior to the election (and the riot), content moderation policies looked significantly different.

In the run-up to the US presidential election on November 3, digital platforms are releasing a number of new or updated policies on disinformation, election advertising and content moderation. There is precedent for this onslaught of new decisions: misinformation related to the election is already prominent, and platforms have faced both public scrutiny and fines for their user privacy and advertisement targeting practices. Unfortunately, some of the content moderation policies to date have been reactive. They have also been developed platform by platform. What can't be shared on Twitter might be acceptable on YouTube, and what Google says is okay might be published on Facebook but labelled as false information. These inconsistent (or insufficient) policies have led some groups to call on platforms to develop a framework for identifying and responding to election disinformation, and to formalize those standards across the board.

We asked five experts whether big tech should be setting the terms of political speech and, if it does, how this ad hoc and disjointed approach to platform governance might impact democracy. This article compiles their responses.

Samantha Bradshaw, the Centre for International Governance Innovation

Platforms are taking a number of steps to protect the integrity of the 2020 elections in the United States, such as labelling disinformation and moderating political advertisements. Since social media has become such an important part of political campaigning, platforms must play a role in governing the content that politicians, candidates and parties share in the run-up to the vote. But without appropriate transparency into practices and accountability for erroneous decisions, the moderation of our elections by private platform companies can erode, rather than protect, our democracies. And without coordination across the platforms as to what content or advertisements should or should not be labelled or taken down, the information environment might become skewed, leaving some voters misinformed.

evelyn douek, the Berkman Klein Center

We are now firmly in a world of second, third or fourth bests. No one's ideal plan is the current patchwork of hurriedly drafted policies written and enforced by unaccountable private actors with very little transparency or oversight. Nevertheless, here we are. So platforms should be as clear and open as possible about what they will do in the coming weeks and tie themselves to the mast. Comprehensive and detailed policies should be not only the basis for platform action but also a shield for it, when inevitable charges of bias arise. Platforms have been talking tough on the need to remove misinformation about election integrity, and rightly so: it's an area where relying on democratic accountability for false claims is especially inadequate, because the misinformation itself interferes with those accountability mechanisms. You can't vote someone out if you're scared or misled out of voting at all. But there are still gaps in platform policies (thanks to the Election Integrity Partnership for tracking and bringing attention to them!). YouTube, for example, still has not released a policy for what it will do about premature claims of election victory. If having uniform standards across the industry emboldens platforms to actually commit to policies or helps with enforcement, then at this stage I'm in favour.

Dipayan Ghosh, the Mossavar-Rahmani Center for Business and Government

Political discourse is increasingly moving online, and particularly to dominant digital platforms like Facebook and YouTube — we know that. Internet companies have variously enforced new policies, such as Facebook's new restrictions against certain hateful ads and Google's limitations on the micro-targeting of political ads. These are half-measures: they are not enough. Dominant digital platforms should be liable for facilitating the dissemination of political advertising to segmented voting audiences. In the absence of such a policy, we will never diminish the disinformation problem — let alone the slate of related negative externalities generated by the business models at the core of the consumer internet. We need to start thinking about targeted amendments to Section 230 that carve out exceptions to the blanket liability shield the law currently offers internet firms, so that those firms are inherently incentivized to protect the public against harmful content like misinformation and hateful conduct; those standards must be applied consistently across the digital sector. Markets must take a back seat to our democracy.

Daniel Kreiss, the University of North Carolina

Technology companies must set the terms of political speech to protect the integrity of the US elections. The US president is engaging in unprecedented efforts to interfere in the election, from systematic attempts to undermine public confidence in its legitimacy and dissuade people from voting to thinly veiled attempts to encourage vigilantism among his supporters at polling locations. The Republican Party has largely abdicated its responsibility to protect democracy by failing to condemn the president's words and actions and to secure democratic institutions. The Federal Election Commission has failed to set effective rules for paid and other political speech on platforms. The threat to democracy comes from the president and the ruling party, not technology platforms. As such, platforms must do all that they can to set political speech policies that protect the integrity of the vote, fight voter suppression and intimidation, and secure both the outcome of the election and the peaceful transfer of power, whatever the result. As much as possible, platforms should coordinate their efforts and develop shared frameworks for governing political speech so that they act with one voice to serve as democratic gatekeepers.

Heidi Tworek, the University of British Columbia

Beyond the question of who should set the terms of political speech lies another important question: when should that someone set the terms? Over the past few weeks, platforms like Facebook have continued to clarify how they will address electoral integrity. Regardless of whether the policies make sense, they come far too late. The rules have changed in the middle of the campaign. This is troubling for several reasons. First, it shows that platforms did not reckon fast enough with the problems created by the Brexit vote and the American election in 2016, among many other examples. Second, it shows the importance of governments in pushing for that recognition. The Canadian Elections Modernization Act required ad transparency from social media platforms, and such acts could go further by mandating that changes be completed a certain amount of time before election campaigning begins (this is admittedly simpler in jurisdictions with defined election periods). Third, the late changes have created a fundamental problem of inconsistency. More than one million Americans have already voted. Many millions will now vote under potentially different rules on social media platforms for advertising and discussions of voting. Some voters have made their choices under one regime of political speech; others will make their decisions under other policies. The when of setting the terms of political speech matters as much as the who.

The opinions expressed in this article are those of the authors and do not necessarily reflect the views of CIGI or its Board of Directors.

About the Authors

Samantha Bradshaw is a CIGI fellow and assistant professor in new technology and security at American University.

evelyn douek is a lecturer on law and an S.J.D. candidate at Harvard Law School, and an affiliate at the Berkman Klein Center for Internet & Society.

Dipayan Ghosh is a research fellow at the Mossavar-Rahmani Center for Business and Government at the Harvard Kennedy School and is co-director of the Digital Platforms & Democracy Project at the Shorenstein Center on Media, Politics and Public Policy at the Harvard Kennedy School.

Daniel Kreiss is an associate professor in the Hussman School of Journalism and Media at the University of North Carolina at Chapel Hill.

Heidi Tworek is a CIGI senior fellow and an expert on platform governance, the history of media technologies, and health communications. She is a Canada Research Chair, associate professor of history and public policy, and director of the Centre for the Study of Democratic Institutions at the University of British Columbia, Vancouver campus.