Purging the Web of Lies Will Not, of Itself, Repair Our Politics

Policy makers, social media companies and activists have focused too much on technocratic algorithmic adjustments as our salvation from political trauma.

October 26, 2022

In recent months the academic community has produced some substantial and long-overdue new thinking on disinformation. The Knight First Amendment Institute and the International Communication Association both held major conferences that gave disinformation researchers and scholars a chance to raise doubts about the assumptions of the field and propose new directions.

This new thinking challenges many of the presuppositions of disinformation initiatives undertaken thus far by governments and social media companies. As participants at these conferences demonstrated, too often these efforts treat citizens as ignorant or irrational, or even as foreign enemies. They seem to be built on the assumption that disinformation began in 2016 with social media companies, although it has long been a feature of political life, persisting through changes in media technology and business models. The new thinking emphasizes the limitations of content regulation, fact-checking and media literacy efforts as remedies for our current political crises. Critically, it reminds us that the key work of political governance should not be technocratic efforts to police discourse for accuracy, but rather the construction of attractive and achievable political visions of collective life.

To some degree, this is a welcome breath of fresh air. Policy makers, social media companies and activists have focused too much on technocratic algorithmic adjustments as our salvation from political trauma.

But there is also risk in this new direction. Its emphasis on the limits of disinformation measures could lead social media companies to abandon or reduce their efforts to purge lies and disinformation from their platforms. It could likewise lead policy makers to stop encouraging social media companies to keep their systems as free as possible from falsehoods and propaganda. Throwing out the baby with the bathwater would be a mistake. Blatantly false political or public health narratives may not be the major cause of our political challenges, and better information may not restore the “good old days” of moderation and political cooperation. But a steady flow of accurate information is an absolute necessity for any effective system of political governance.

Writing in The Atlantic more than a century ago, Walter Lippmann, America’s premier public intellectual of the early twentieth century, said, “The cardinal fact always is the loss of contact with objective information. Public as well as private reason depends upon it. Not what somebody says, not what somebody wishes were true, but what is so beyond all our opining, constitutes the touchstone of our sanity. And a society which lives at second-hand will commit incredible follies and countenance inconceivable brutalities if that contact is intermittent and untrustworthy.”

Lippmann was worried about the flood of propaganda, public health misinformation, racism and lies in the newspapers that dominated the media world of 1919. This should sound familiar to contemporary readers. The old system of gatekeeper control was not a paradise of truth and accuracy. The systemic flaws that allow lies and disinformation to flourish are not new. But they are flaws nonetheless and require public and private countermeasures. Efforts to ensure truth in political discussions must continue, in full awareness of their limitations.

The new direction in disinformation studies recommends the creation of attractive political alternatives rather than technocratic fact-checking as the way to counter lies and disinformation. But it should not be a question of either/or. Even if efforts to counter disinformation are not enough in and of themselves, they are still worthwhile. Social media platforms and policy makers need to purge lies and distortions from social platforms even as political leaders are pushed to provide an attractive political alternative to surging right-wing authoritarianism.

The new emphasis on the limits of disinformation efforts has another weakness. Besides the risk that it will be read as licence to relax public and corporate efforts against disinformation, this emphasis may also distract from the need to construct transparency safeguards around efforts to debunk false narratives. Transparency is vital to ensure that the fight against disinformation does not turn into censorship disguised as fact-checking. Agencies of government charged with identifying and discrediting political disinformation are especially vulnerable to political misuse and must be subject to the disinfectant of sunlight.

The New Thinking

In April 2022, the Knight First Amendment Institute organized a symposium for scholars on lies, free speech and the law. A paper presented there by George Mason University historian Sam Lebovic entitled “Fake News, Lies, and Other Familiar Problems” captured the overall theme of the conference. Professor Lebovic kindly provided me with a prepublication final draft, from which the following quotations are drawn.

In his insightful paper, Lebovic attacks the idea of treating disinformation as a cancerous growth that needs to be cut “out of the body politic.” The lies of the present moment, he says, are not “an unprecedented epistemic crisis” but rather “an expression of a conservative political formation in American political life.”

He reminds us that there was ample lying in the media and public life before the social media debacles in 2016 — including the misinformation spread by Senator Joseph McCarthy in the 1950s and regular half-truths fed to the American people about foreign policy, from the Bay of Pigs to the Iran–Contra scandal and beyond. Among the most important “fake news” stories of our time, he says, was “public misapprehension of climate change, where a self-conscious conservative political formation adopted a strategy in the 1990s of casting doubt on the science, of creating debate instead of consensus.”

Lebovic rejects the tendency in current disinformation studies and initiatives to “pathologize” many Americans, to treat them as beyond the reach of argument. He attacks as well the condescending assumption that people could be led back to the path of reason if only political lies were “expunged from the body politic.” He thinks that to focus on the spread of lies misses the point. The critical issue is not the “epiphenomenon” of lies, he argues, nor an “unprecedented epistemic crisis.” Lies and disinformation are political, he says, so the solutions cannot be mere technical adjustments. They must also be political.

Lebovic further warns against the “temptation to regulate lies,” rejecting both content regulation for accuracy and fairness and media literacy efforts as remedies for disinformation. Such measures, he argues, “assume that there is a relatively small class of ‘lies’ that can be identified in some procedurally neutral way and then expunged from the body politic through technocratic means.”

He calls instead for “counter-mobilization” through the development of plausible and attractive alternative visions of collective life. In the presence of these political possibilities, he thinks, a politics based on lies would be less appealing. What is needed most urgently, he concludes, is not adjustments to freedom of expression or media policy, but a revitalized movement for political reform.

In May 2022, the International Communication Association held a conference entitled “What Comes After Disinformation Studies?”

Heidi Tworek, a professor of history and public policy at the University of British Columbia, and a CIGI senior fellow, summarized her contribution to this conference in a recent CIGI article. Like Lebovic, she reminds us that disinformation is nothing new. She refers to work by Alice Marwick and other scholars who’ve examined communications in American society around subjects such as the incarceration of Japanese Americans, the “welfare queen” myth, the Central Park Five and HIV/AIDS to demonstrate that governments and traditional media regularly used disinformation in the past. She joined other scholars at the conference in questioning whether the current disinformation frame still makes sense, or whether it creates more difficulties than it solves.

As described in a summary of the conference by Théophile Lenoir, a Ph.D. student at the University of Leeds, other invited researchers made many of the same points as Tworek and Lebovic. The pre-conference opening remarks advised attendees that “we can’t fact check our way out of global regimes of white supremacy and racial hierarchies in their myriad of forms.” According to Lenoir, those who struggle against disinformation see it not as a perennial problem in a democracy but as a “fundamental threat to democracy itself.” How can citizens make up their minds on political issues, he asks, “if the information on which they base their arguments is false?”

In this way, the standard disinformation narrative, according to Lenoir, sees the present as a dramatic change from the past: “Truth used to surround us: in the press, in government communications, and on television. But in the social media age, with gatekeepers displaced, we’ve lost it.”

Lenoir points out that those who focus on disinformation as the key problem share certain stereotypes and metaphors about information “warfare” and the so-called infodemic. The problem conference participants had with these narratives, according to Lenoir’s report, is that they “frame large parts of the population as either enemies or irrational beings that can be easily manipulated.” Moreover, they lead to technical, politically neutral solutions such as monitoring algorithms, sharing data or building alert systems. But such fixes are hopelessly inadequate to the political challenge, Lenoir argues, adding that it would be unrealistic to expect racism, populism, discontent, polarization and anger to disappear “once the big information machinery is fixed and disinformation is kept to a minimum.”

Echoing Lebovic and Tworek, Lenoir writes that falsehoods have always existed in democracies. Lies and disinformation are not a product of social media technology. And they will not simply go away if we tweak this technology. There is no way forward without political conflict and the clash of rival political visions. Lenoir concludes that “fact-checking our way out of politics will not work.” Technical solutions cannot fix political problems. Political leaders must lead by “framing narratives that make people want to live in the world they seek to build.”

Where Do We Go from Here?

According to A. Adam Glenn, a former journalist and now a writer at the Knight First Amendment Institute, the key takeaway from its symposium is that “laws that regulate lies” are not the way forward. False speech, harmful though it is, is best addressed through social and institutional change rather than speech laws and regulation.

Lebovic and others at these two conferences worry that the restrictions on expression needed to purge lies from our information systems might simply create free-speech martyrs and so be counterproductive.

But there are smart ways to counter disinformation, and less effective ones. In his recent book Liars: Falsehoods and Free Speech in an Age of Deception, US legal scholar Cass R. Sunstein argues that it would be constitutional under the US First Amendment to regulate lies through punishments, censorship, warning labels and so on when the lies are harmful and there is no less-speech-restrictive way to mitigate the harm. The United States already regulates defamation and deceptive consumer advertising. Why not disinformation? Other countries have gone in that direction: Taiwan, for instance, outlaws disinformation, and so does Singapore.

But even if it is constitutional, banning disinformation would be unwise, for precisely the reasons Lebovic and others provide. Rather, social media companies should be encouraged to take strong countermeasures against disinformation. The European Commission’s updated 2022 voluntary Code of Practice on Disinformation is a good example of this. The European Union’s Digital Services Act also moves in the right direction by requiring social media companies to describe how they deal with disinformation risks.

But social media companies should not be required to take any specific measures, and governments should not have the power to require them to take down or demote disinformation. Many countries are beginning to understand the limits of compulsion in this area. For instance, despite its law outlawing disinformation, Taiwan recently rejected a law that would have required social media companies to ban or downplay disinformation.

In that case, what sort of institutional change other than new speech laws would help combat lies and disinformation? Lebovic outlines one such change: the construction of what Walter Lippmann, in his 1920 book Liberty and the News, called “political observatories.” These would be non-profit institutions, separate from commercial media, dedicated to the production of information, a “stream of news” that could be used as the basis for opinion, commentary, political organizing and the construction of an alternative, more appealing political vision.

Lebovic does not think a greater flow of accurate information will stop the spread of lies. “You cannot drag a citizen to the ‘stream of news,’” he says, “and you can’t make them drink only what you’d like.” But policy makers can encourage the production of truthful information. Today’s challenge is no different, Lebovic writes, from that which confronted Lippmann in 1919 — to “revitalize the flow of information in the American public sphere.”

Political observatories could be attached to universities or established as administrative agencies or non-profit media outlets. They would not simply record random facts about the world, but curate them in some fashion to provide meaningful political information to decision makers. “What is needed,” Lebovic writes, “is the presence of political observers on local beats who can keep an eye on the ongoing, daily work of politics.” They must be “ready to provide context and background” to guide public understanding of important political developments.

Lebovic correctly diagnoses the crisis in the production of news, even if he seems overly optimistic in thinking that his posited political observatories could themselves escape the political polarization and distortion we face. Others, including media scholar Victor Pickard, have called for public funding of organizations dedicated to covering local news, and such funding would certainly improve the flow of accurate information. But funding local news adequately is a solution to the problem of ignorance, not lies. It is not something to do instead of suppressing lies but something that needs to be done in addition to measures to address lies and disinformation.

Lenoir is more accommodating toward current disinformation efforts. He rejects the idea that his arguments could justify “a lax attitude towards truth in public debates, or that governments, universities and foundations should stand down in their efforts to combat disinformation.” In the end, he thinks that such efforts should continue, but with a clearer understanding of their limits and the recognition that they will not by themselves produce needed political change. His focus, however, is on the construction of alternative political visions, not on technocratic fact-checking.

This seems right. Even if efforts to counter disinformation are not enough, they are still worth doing. Political leaders should not caricature their opponents as deplorable, foolish or evil. But neither should they allow public information systems to be used by foreign or domestic actors willing to distort reality to gain political advantage. Political victory for liberals requires both accuracy and an appealing political vision.

Safeguards against Political Censorship

This brings us to the question of censorship. Even if regulatory agencies cannot require social media companies to take down or demote disinformation, what is the role of an administrative agency of government in combatting disinformation? This question was raised perhaps inadvertently by the recent controversy in the United States regarding the proposed Disinformation Governance Board. On August 24, 2022, the Department of Homeland Security (DHS) officially terminated the board, thus ending a self-created political firestorm that embroiled the DHS in charges of attempting to use an administrative agency as a tool of political censorship.

The danger of administrative state censorship did not die, however, with the demise of the Disinformation Governance Board. In terminating the board, the DHS reaffirmed that “countering disinformation that threatens the homeland, and providing the public with accurate information in response,” is part of its mission. Indeed, as the DHS inspector general noted on August 10, the DHS Cybersecurity and Infrastructure Security Agency (CISA) makes social media platforms aware of posts it thinks constitute disinformation and has been doing so since 2018, focused especially on disinformation about election infrastructure.

Is this an appropriate role for an administrative agency? The new thinking on disinformation I’ve summarized here distracts us from thinking about this key question, but it is vital that both the public and policy makers focus on it. It might seem that more information about disinformation from government agencies can only help social media companies do a better job of keeping their systems free of lies. But the potential for abuse is considerable, since disinformation is neither a precisely defined term nor immune from the same misuse as the term fake news, where often the allegation of fake news is itself fake news.

The insertion of shadowy government agencies with secret ties to media companies is the stuff of right-wing nightmares. But it’s not hard to imagine what an authoritarian government would do once it had control of such mechanisms. Government agency disinformation efforts can punch left as well as right.

The bottom line? If such an agency has a mandate to flag what it considers disinformation to social media companies, it must be transparent. Any such material must be made public, ideally in a file accessible to researchers, journalists and civil liberties groups. Such an agency would need to issue regular reports listing the material it flagged to the platforms and what the social media companies did as a result.

On the other side, social media companies would need to acknowledge when they removed or flagged content on the basis of a government agency referral and continue to allow users the full opportunity to appeal whatever action is taken. The companies would also need to publish regular reports recording the flags they received and the actions they took in response.
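To make the shape of such a reporting regime concrete, here is a minimal sketch of what a public referral log and a platform’s matching disclosure might look like. This is purely illustrative: every field name, identifier and value below is hypothetical and drawn from no existing regulation or code of practice.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class ReferralRecord:
    """One government referral of suspected disinformation, published for public audit."""
    referral_id: str       # hypothetical identifier linking agency and platform records
    agency: str            # agency that made the referral
    date: str              # ISO 8601 date of the referral
    platform: str          # platform the content was flagged to
    content_summary: str   # public description of the flagged material
    rationale: str         # stated reason the agency considers it disinformation

@dataclass
class PlatformDisclosure:
    """The platform's public record of what it did with a referral."""
    referral_id: str       # links back to the agency's record
    action: str            # e.g., "removed", "labelled", "demoted" or "no action"
    user_notified: bool    # whether the affected user was told of the referral
    appeal_available: bool # whether the user can appeal the action

def transparency_report(referrals, disclosures):
    """Join agency referrals with platform responses into one auditable report.

    A referral with no matching disclosure appears with response None,
    which is itself useful information for oversight bodies."""
    by_id = {d.referral_id: d for d in disclosures}
    return [
        {"referral": asdict(r),
         "response": asdict(by_id[r.referral_id]) if r.referral_id in by_id else None}
        for r in referrals
    ]

# Illustrative example with invented data
referral = ReferralRecord("2022-0001", "ExampleAgency", "2022-09-15", "ExamplePlatform",
                          "Post alleging polling-place closures",
                          "Election infrastructure disinformation")
disclosure = PlatformDisclosure("2022-0001", "labelled", True, True)
report = transparency_report([referral], [disclosure])
print(json.dumps(report, indent=2))
```

The design choice worth noting is the shared `referral_id`: publishing both sides of the exchange under one identifier is what lets researchers, journalists and civil liberties groups check whether every government flag is accounted for and what the platform actually did with it.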

None of this is being done now. Existing codes of practice and current laws and regulations do not require this transparency, and so social media companies and government agencies do not provide it. The need for further government action to impose sunshine laws on these referral activities of government agencies is urgent.

Social media companies should continue their fact-checking efforts, acknowledging that they will sometimes make mistakes, as Twitter recently did concerning some posts about COVID-19 from researchers that it wrongly labelled as misinformation. They should continue deploying their politically neutral techniques for media literacy, such as the “pre-bunking” techniques pioneered by Google. They should continue to expose efforts at disinformation and misinformation, regardless of the political perspective or the geopolitical allegiance of these campaigns. But the companies and policy makers should acknowledge that these activities are not likely to restore political harmony.

The most vital public protection in all these approaches is transparency. The public, watchdog groups and legislative oversight bodies need to know what social media companies and government agencies are doing in the fight against disinformation. Only then will they be able to assess whether enough is being done and whether disinformation efforts mask illegitimate efforts to censor disfavoured political perspectives.

Recent thinking on combatting disinformation has focused attention on what is undoubtedly the most urgent step of all in the current political climate. That is the need for political leaders, and thought leaders more broadly, to formulate visions of the collective good and to seek to persuade citizens that such visions are attractive and achievable. Technocratic efforts to purge our information systems of lies and disinformation can only clear the way. Debunking false narratives is not a replacement for this most vital task of political governance.

The opinions expressed in this article/multimedia are those of the author(s) and do not necessarily reflect the views of CIGI or its Board of Directors.

About the Author

Mark MacCarthy is an adjunct faculty member in the Communication, Culture & Technology Program in the Graduate School at Georgetown University, where he teaches courses in technology policy.