Internet Infrastructure as an Emerging Terrain of Disinformation

July 4, 2022

This essay is part of The Four Domains of Global Platform Governance, an essay series that examines platform governance from four distinct policy angles: content, data, competition and infrastructure.

One of the most consequential and intractable internet governance problems of the twenty-first century is the spread of disinformation online. But it is about to get more complicated as the target of disinformation moves from social media into technical architecture. Trust in internet infrastructure is a prerequisite for a stable, functioning and free society. Although this infrastructure has long been co-opted as a proxy for content control, such as the use of the Domain Name System (DNS) for blocking access to websites (Bradshaw and DeNardis 2016), the big shift is that infrastructure is now itself a target of disinformation. Disinformation is moving from ideas to infrastructure, requiring a society-wide shift to view cybersecurity — not only content moderation — as the key disinformation problem to solve.

From foreign influence operations disrupting democratic elections to conspiracy theories about the safety of the COVID-19 vaccines, growing anxieties over the threat of disinformation have led policy makers to focus myopically on the role of social media platforms, not only as a source of disinformation, but also as a mechanism for addressing it. In response, social media companies have taken a number of actions against content and accounts spreading disinformation on their platforms. Examples include deplatforming or removing accounts, labelling false or inaccurate information, and limiting the kinds of content algorithmically recommended to users.

However, there are more powerful mechanisms for moderating disinformation. Beneath the level of content are internet intermediaries that have traditionally played a role in moderating access to content online (DeNardis 2012). But unlike social media platforms that host highly visible human-facing content, internet intermediaries are more hidden and much less accountable, and their moderation practices often involve greater overreach, such as blocking access to an entire website, deleting an app or shutting down all mobile and internet traffic in an entire region.

For example, when America’s Frontline Doctors, a far-right organization, began spreading false information about COVID-19, the web-hosting company Squarespace took its entire website off the internet (Passantino and Darcy 2020). Similarly, Apple removed an anti-vaxx dating application from its App Store (Hill 2021), preventing mobile users from accessing the service. The power and sweep of these infrastructure points of control are immense. Internet shutdowns in India are a regular occurrence (#KeepItOn 2021), with governments ordering network operators to block access — often under the guise of stopping the spread of disinformation (Burgess 2018).


The turn to internet infrastructure for content moderation and blocking has a very long history. The Global War for Internet Governance (written by one of this essay’s co-authors, Laura DeNardis [2014]) explained this institutional ecosystem of control points: DNS registries and registrars have, for decades, seized domain names or blocked access to sites that infringe upon copyright (Kravets 2012); financial intermediaries such as PayPal cut off financial transactions to WikiLeaks (Poulsen 2010) and Amazon ceased hosting the servers of WikiLeaks in 2010 (MacAskill 2010); app intermediaries such as Apple removed a Hezbollah-related application from its online store in 2012 (Cooper 2012); and the Egyptian government imposed a country-wide internet outage in 2011 via network providers (Singel 2011). Countless historical examples demonstrate how governments and the private sector long ago recognized that they could turn to infrastructure companies — rather than social media companies — to block and control content, to great effect and with collateral implications for freedom of expression and a free and open internet.

While disinformation spread primarily on social media has created an existential crisis over truth and the stability of open societies, the internet’s technical architecture is also becoming an emerging terrain of disinformation. This technical architecture includes the systems of routing, cybersecurity and public-key cryptography; addressing; standardization; and interconnection that keeps the internet operational.

The internet is based on trust architectures. Someone visiting a website, such as an online banking site, can be assured that it is authentic because trusted organizations called certificate authorities validate websites by issuing cryptographic certificates for each site. Disinformation injected into this certification process breaks this trust. There are many examples of such manipulation, including when Iranian hackers breached a Dutch certificate authority, DigiNotar, in order to create fake Google certificates that would allow them to spy on Iranian Gmail accounts (Charette 2011). Trust in a website’s authenticity is only as sound as trust in the third-party companies — which most people have not heard of — that vouch for the site.
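The certificate trust model described above can be sketched in a few lines of code. This is a toy model, not real cryptography: the root-store names and certificate fields are illustrative, but the logic mirrors how browsers decide — a certificate is accepted if it chains to any authority already in the trusted root store, which is why a single compromised authority such as DigiNotar could vouch for forged Google certificates until browsers removed it from their root stores.

```python
# Toy model (not real cryptography) of the web's certificate trust chain.
# Root-store and certificate values here are illustrative.

TRUSTED_ROOTS = {"GlobalTrust CA", "DigiNotar"}  # hypothetical browser root store

def verify(cert: dict) -> bool:
    """A certificate is accepted if its issuer is in the trusted root store."""
    return cert["issuer"] in TRUSTED_ROOTS

genuine = {"subject": "bank.example", "issuer": "GlobalTrust CA"}
forged = {"subject": "mail.google.com", "issuer": "DigiNotar"}  # forged after a CA breach

assert verify(genuine)
assert verify(forged)  # the forgery verifies, because the CA itself is trusted

TRUSTED_ROOTS.discard("DigiNotar")  # browsers revoked trust after the 2011 breach
assert not verify(forged)
```

The sketch makes the essay's point concrete: verification never asks whether the site is really who it claims to be, only whether a trusted third party said so.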

Routing is another threat vector for disinformation that has remained under the radar of policy makers. When a video violating Pakistan’s blasphemy laws appeared on YouTube, the Pakistan government directed Pakistan Telecom to block access to YouTube by announcing false routes that sent traffic destined for YouTube’s Internet Protocol addresses to nowhere (Walsh 2010). However, these false routes leaked to adjacent networks and continued to propagate, eventually resulting in a global outage of YouTube. The disinformation (i.e., false routes injected into systems of routing and addressing) had globally cascading effects. A more extensive incident that calls into question the integrity of the internet’s routing system would sow chaos in the financial and political systems that rely upon the stability and security of this ecosystem.
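The reason the false route won out can be shown with a short sketch of longest-prefix-match forwarding, the rule routers use to choose among overlapping routes. This is a toy model, not real Border Gateway Protocol code: the two prefixes are the ones widely reported from the 2008 incident (YouTube’s 208.65.152.0/22 and the hijacked, more specific 208.65.153.0/24), while the next-hop labels are illustrative.

```python
from ipaddress import ip_address, ip_network

# Toy model of longest-prefix-match forwarding. Routers prefer the most
# specific matching prefix, so a bogus /24 announcement beats the
# legitimate, less specific /22 covering the same addresses.
routing_table = {
    ip_network("208.65.152.0/22"): "YouTube (legitimate route)",
    ip_network("208.65.153.0/24"): "Pakistan Telecom (hijacked route)",
}

def next_hop(destination: str) -> str:
    """Return the next hop for a destination address via longest-prefix match."""
    addr = ip_address(destination)
    matches = [net for net in routing_table if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)  # most specific wins
    return routing_table[best]

# An address inside the hijacked /24 is captured by the bogus route,
# even though the legitimate /22 also covers it.
assert next_hop("208.65.153.10") == "Pakistan Telecom (hijacked route)"
# Addresses in the /22 but outside the /24 still follow the legitimate route.
assert next_hop("208.65.152.10") == "YouTube (legitimate route)"
```

Because longest-prefix match is applied automatically by every router that hears the announcement, a single false route, absent filtering, can redirect traffic far beyond the network that injected it.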

No less than the global economy, health care, food supply, transportation, political institutions and energy infrastructure depend upon this system of internet architecture and governance, often taken for granted because it is not visible in the same way content platforms are, and because it has, arguably, worked so well. A concerted effort to create disinformation in this core architecture would have catastrophic effects on society, the economy and human security.

The most socially consequential arena of disinformation threats to technical architecture arises from the internet’s latest major transformation. The internet is directly embedded in the real, material world (DeNardis 2020). More things than people are connected to the internet, including Wi-Fi-connected insulin pumps; cyber-physical systems in industrial settings; smart city infrastructure around water and energy; and the consumer-based Internet of Things (IoT), such as Wi-Fi-connected alarm systems, smart watches and security cameras.

Disinformation in the IoT ecosystem is not about the manipulation of ideas and political deception but about potentially harming people or crippling critical systems. Hacking into internet-connected medical devices to create health disinformation is a clear threat, and one of particular concern because vulnerabilities in these devices are constantly being identified (Criss 2019). Instead of manipulating voters with social media disinformation, attacks on the IoT could stop people from voting altogether, by generating false sensor readings for weather reports, distracting people from voting by hacking into home systems in swing districts, or creating false traffic jam alerts that dissuade voters from going to the polls.

The unique challenges raised by disinformation in infrastructure require a transformation of policy attention. The solution is not content moderation — it is cybersecurity. As the internet’s architecture increasingly embodies the material world, the new front for battles over truth and freedom will not only be fought at the level of ideas, but also around cyber-physical technical transactions that lay the foundations for a stable and functioning society.

Trust is the basis of the internet’s architecture and the reason financial, social and industrial systems work. However, trust in these technical systems is being compromised by disinformation in the same way human trust in information and news is deteriorating. Whether technical disinformation is inadvertent or intentional, it can have serious consequences. National security, the global economy, the political sphere and human safety are all at stake, making attention to this problem the great cyber-policy issue of our time.

Works Cited

Bradshaw, Samantha and Laura DeNardis. 2016. “The politicization of the Internet’s Domain Name System: Implications for Internet security, universality, and freedom.” New Media & Society 20 (1): 332–50. https://journals.sagepub.com/doi/abs/10.1177/1461444816662932.

Burgess, Matt. 2018. “To fight fake news on WhatsApp, India is turning off the internet.” Wired, October 18. www.wired.co.uk/article/whatsapp-web-internet-shutdown-india-turn-off.

Charette, Robert N. 2011. “DigiNotar Certificate Authority Breach Crashes e-Government in the Netherlands: A taste of what is to routinely come?” IEEE Spectrum, September 9. https://spectrum.ieee.org/diginotar-certificate-authority-breach-crashes-egovernment-in-the-netherlands.

Cooper, Charles. 2012. “Apple, Google remove Hezbollah TV app.” CNET, July 31. www.cnet.com/tech/tech-industry/apple-google-remove-hezbollah-tv-app/.

Criss, Doug. 2019. “Software vulnerabilities in some medical devices could leave them susceptible to hackers, FDA warns.” CNN, October 2. www.cnn.com/2019/10/02/health/fda-medical-devices-hackers-trnd/index.html.

DeNardis, Laura. 2012. “Hidden levers of Internet control: An infrastructure-based theory of Internet governance.” Information, Communication & Society 15 (5): 720–38.

———. 2014. The Global War for Internet Governance. New Haven, CT: Yale University Press.

———. 2020. The Internet in Everything: Freedom and Security in a World with No Off Switch. New Haven, CT: Yale University Press.

Hill, Clara. 2021. “Apple removes anti-vaxx dating app from app store.” Independent, August 2. www.independent.co.uk/news/world/americas/apple-anti-vaxx-dating-app-unejected-b1895067.html.

#KeepItOn. 2021. Shattered Dreams and Lost Opportunities: A year in the fight to #KeepItOn. Access Now, March. www.accessnow.org/cms/assets/uploads/2021/03/KeepItOn-report-on-the-2020-data_Mar-2021_3.pdf.

Kravets, David. 2012. “Uncle Sam: If It Ends in .Com, It’s .Seizable.” Wired, March 6. www.wired.com/2012/03/feds-seize-foreign-sites/.

MacAskill, Ewen. 2010. “WikiLeaks website pulled by Amazon after US political pressure.” The Guardian, December 2. www.theguardian.com/media/2010/dec/01/wikileaks-website-cables-servers-amazon.

Passantino, Jon and Oliver Darcy. 2020. “Social media giants remove viral video with false coronavirus claims that Trump retweeted.” CNN Business, July 28. www.cnn.com/2020/07/28/tech/facebook-youtube-coronavirus/index.html.

Poulsen, Kevin. 2010. “PayPal Freezes WikiLeaks Account.” Wired, December 4. www.wired.com/2010/12/paypal-wikileaks/.

Singel, Ryan. 2011. “Report: Egypt Shut Down Net With Big Switch, Not Phone Calls.” Wired, February 10. www.wired.com/2011/02/egypt-off-switch/.

Walsh, Declan. 2010. “Pakistan blocks YouTube access over Muhammad depictions.” The Guardian, May 20. www.theguardian.com/world/2010/may/20/pakistan-blocks-youtube-sacrilegious.

The opinions expressed in this article/multimedia are those of the author(s) and do not necessarily reflect the views of CIGI or its Board of Directors.

About the Authors

Samantha Bradshaw is a CIGI fellow and assistant professor in new technology and security at American University.

Laura DeNardis is a CIGI senior fellow and professor and endowed Chair in Technology, Ethics, and Society at Georgetown University.

The Four Domains of Global Platform Governance

In the span of 15 years, the online public sphere has been largely privatized and is now dominated by a small number of platform companies. This has allowed the interests of publicly traded companies to determine the quality of our civic discourse, the character of our digital economy and, ultimately, the integrity of our democracies. This essay series brings together a global group of scholars working in four distinct domains of the platform governance policy discourse: content, data, competition and infrastructure.