Consumers today are confronted by a host of digital threats to their rights, safety and information environment: fake news; disinformation and the bots and automated networks amplifying it; online hate and harassment; mass data breaches; election interference by hostile foreign states; and algorithmic and big-data-driven targeting and manipulation, to name just a few. And the social media platforms where most users encounter these challenges — those with global reach such as Google, Facebook, Twitter, Instagram and YouTube — have utterly failed to take adequate steps to address them.
A central part of the challenge is that most of these platforms are based in the United States and thus enjoy broad First Amendment protection — which limits content restrictions and other forms of speech regulation — as well as blanket immunity from tort liability under Section 230 of the Communications Decency Act. This framework provides little incentive for them to act. With public opinion now turning and tougher regulation threatened, social media companies are finally beginning to act (Hirsh 2019) — but slowly, unevenly and still with a tendency toward paralysis when competing claims arise. As recent examples, Facebook reduced the distribution of the widely shared fake video of House Speaker Nancy Pelosi but refused to remove it, citing the need to balance authenticity concerns with freedom of expression (Waterson 2019). And Pinterest aggressively pursued the takedown of harmful vaccine conspiracies (Sky News 2019) while ignoring other kinds of politically focused conspiracies and disinformation.
In short, the present law and policy framework governing these platforms is wholly inadequate. But what should replace it? This essay proposes a new comprehensive regulatory framework — one outlining information consumer protection — to hold these companies accountable and to better address the threats and challenges of the information and digital age.
The Evolution of Consumer Protection
Consumer protection is ancient. There are elements of it in Hammurabi’s Code and evidence of consumer-protective ordinances in ancient Greece and the Bible (Geis and Edelhertz 1973). To meet each era’s different social, economic and technological challenges, consumer protection has evolved. The English common law’s tort of deceit and doctrine of caveat emptor — let the buyer beware — suited consumers who mostly dealt with small merchants face to face (Pistis 2018). The consumer of the late eighteenth and nineteenth centuries, however, required greater protection from new manufacturing processes developed during the Industrial Revolution, such as food adulteration — the use of harmful preservatives in food — and from the lack of safety standards in increasingly large and impersonal industries. These changes led to new product liability laws. And Upton Sinclair’s 1906 novel The Jungle, which chronicled the unsavoury conditions of Chicago’s meat-packing industry at the time, famously helped foster the passage of the Pure Food and Drug Act in 1906.1 Following World War II, the public’s perception that industry had become too impersonal and powerful led to a strong mid-century consumer protection movement. This was exemplified in the United States by the expansion of federal consumer agencies and by President John F. Kennedy’s declaration of a Consumer Bill of Rights2 in 1962, based on consumer rights to safety, to be informed and not deceived, to have choices among competitive options and to be heard and represented in policy making and related administrative processes. It would later be expanded in 1985 to include rights to basic consumer needs, to redress against businesses for wrongs, to consumer education and to a healthy environment.3
In one sense, information consumer protection is simply a continuation of this evolution: a new consumer protection regulatory framework largely based on these same values — quality and safety, transparency, anti-deception, antitrust/consumer choice and accountability — but updated and refocused on today’s data-driven information sphere. However, this framework requires an essential caveat: consumers of information, as well as the environment in which they are embedded, are fundamentally different from consumers of previous eras.
The Information Consumer
The digital age is driven by data and information, and so a renewed consumer protection movement should focus predominantly on information consumers — people who generate, share and consume news, information and content in the social media and digital spheres, not just for commerce, but also socially and democratically. The information consumer faces several unique threats and challenges.
First, the consumer is also the product. A traditional definition of a consumer is someone who engages in a transaction for a product or service. In the age of surveillance capitalism, people are both consumers and the consumed — information and data, predominantly about consumers themselves, are collected, analyzed and monetized to drive the digital economy (Srnicek 2016; Zuboff 2015). Consumers have always been the targets of deception and manipulation, but a combination of big data, powerful analytics and targeting driven by artificial intelligence (AI), platforms with entrenched market monopolies, and multiple public and private sector actors seeking to influence consumers distorts the information environment and creates profound new possibilities for digital manipulation and “systematic consumer vulnerability” (Calo 2014; Zuboff 2019).
Second, the digital sphere in which the information consumer exists is not just a business or consumer environment, but a democratic one. Social media platforms are businesses with corporate aims, and people certainly use them to do business or to obtain goods or services. But that is not the primary reason people use them. Most do so for news and information and to connect with friends, family and others in their community. The platforms are also now important sites for civic and democratic engagement. People obtain and share news and information on these platforms, and debate and deliberate on politics. Social media platforms are the new “quasi-public sphere” (York 2010) or, to use danah boyd’s (2010) term, “networked publics,” defined by a blurring of public and private. But they are also defined by a unique combination of digital consumerism and democracy — where the most important democratic spaces for the information consumer are owned, operated, shaped and controlled by private sector interests.
Third, the information consumer exists in an era of unparalleled distrust. Corporate and governmental failures to address many of the earlier noted threats — fake news, mass data breaches, online hate and abuse, and digital manipulation — have created a corrosive information environment. The quasi-public sphere is now an “information-industrial complex” (Powers and Jablonski 2015), where people have little choice but to endure these harms in order to engage socially or democratically.
Not surprisingly, these failures have deeply eroded public trust in social media, governments and the integrity of the broader information environment. In a recent Pew Research Center study, more than half of Americans cited false news and misinformation as a greater threat than terrorism, with majorities indicating the issue has reduced their trust in both government (68 percent) and other citizens (54 percent) (Siddiqui 2019). Another recent poll found a majority believed social media does more to “spread lies and falsehoods” (55 percent) and “divide the country” (57 percent) than it does to spread actual news (Murray 2019). Most also distrusted social media companies — 60 percent did not trust Facebook “at all” to protect their information — yet seven in 10 reported using social media daily (ibid.).
Americans are not alone. CIGI’s recent poll of 25,229 internet users in 26 countries found an average of 86 percent of people internationally reported falling for fake news (quoted in Thompson 2019). Canadians were fooled at an even higher rate (90 percent) and also cited social media as their top source of distrust online (89 percent), more than even cybercriminals (85 percent). Yet Canadians do not stay away. According to the Canadian Internet Registration Authority’s 2019 Internet Factbook, an annual online survey of Canadian internet use, 60 percent of the 2,050 Canadians polled in March 2019 indicated that they use social media daily.
Any new consumer protection paradigm designed for the digital era must address these realities, beyond traditional consumer concerns — ensuring protection for consumers whose information environment, and the democratic activities within it, are targeted, surveilled, manipulated and distorted by public and private sector forces in ways no previous era has experienced. It must also be driven by an agenda that rebuilds trust in the information environment and is sensitive to its importance to — and impact on — the democratic activities of citizens.
Why Information Consumer Protection?
These are complex challenges without simple solutions. Why might information consumer protection offer the right regulatory framework to address them?
First, social media platforms’ capacity to manipulate, deceive and mistreat users derives in no small part from their powerful monopolies in the information environment. Countering the power, leverage and abusive practices of such entrenched industries has been a core driver of consumer protection law and policy for a half century. The right to consumer choice, heralded by Kennedy in 1962, provides the link between antitrust and consumer protection law. The former aims to ensure competition so that consumers have a range of choices, while the latter aims to enable consumers to make their choices effectively (Lande and Averitt 1997). An information consumer protection paradigm, attuned to and reoriented for the digital era, offers a solid regulatory framework with which to tackle the market monopolies and abuses of unresponsive platforms.
Second, another central focus of consumer protection has been addressing and mitigating information asymmetries — another threat to effective consumer choice, and a pervasive and entrenched dimension of the data-driven economy (Ciuriak 2018). Information asymmetry refers to the uneven balance of information between parties in a transaction, giving the better-informed party leverage over the other (Akerlof 1970). Such imbalances underpin many digital era harms and democratic challenges. They exist, for example, between platforms and information consumers, with platforms possessing vast resources, technical expertise and power to experiment on users and shape the information environment for corporate aims with little transparency. But these are far from the only information asymmetries in the digital sphere. Others include information imbalances between any state or corporate interests and the regular users they seek to influence online. These state and corporate actors increasingly tailor, amplify and spread their targeted messages or disinformation through mechanisms such as data-driven profiling, promoted or paid content, automated accounts/botnets and coordinated troll networks. And with the emergence of big data, algorithms and AI, these imbalances may only deepen. An information consumer protection framework, informed by countless historical successes in overcoming such asymmetries — including in the technology context — is well positioned to offer a path forward (Morgner, Freiling and Benenson 2018).
Third, other core consumer protection principles — such as quality and safety, transparency, accountability and consumer representation — are broad enough to encompass the wide range of market-based and democratic threats in the digital information environment. Consumer protection laws have long provided people with assurance of quality and safety (Klein, n.d.) and thus can provide a framework to regulate typical user concerns about information and content quality, such as fake news, disinformation and content moderation, or health and safety concerns relating to cyberbullying, harassment and online child safety. Their transparency and accountability principles likewise provide a regulatory foundation for concerns about algorithmic accountability and information and data protection audits. Consumer protection laws against food adulteration also provide a regulatory framework to prevent what might be called “information adulteration” — the addition of harmful additives to a person’s information environment, such as fake news, that reduce its quality and integrity. An information consumer protection paradigm is thus both normatively and practically broad enough to cover a wide range of information consumer interests.
Finally, a new information consumer protection movement can be global in scope and can hit the ground running by taking advantage of existing legal, regulatory and governmental infrastructure around the world. Consumer protection is international in scope and origins, with historical precedents in the United States, Europe and Asia (Hilton 2012). And international consumer protection regimes and dedicated governmental agencies now exist (Corradi 2015; Micklitz and Saumier 2018). This legal and governmental infrastructure at both the national and the international levels can be immediately built upon, reshaped or repurposed, and then deployed. This is important, as information consumer protection cannot be left solely to users and consumers to enforce — one lesson learned from European data protection. Powerful agencies, such as the Federal Trade Commission (FTC), will be essential to success in exercising new antitrust and information consumer protection powers. And, in the United States, building information consumer protection on the existing consumer protection framework — given its long legal history and application — has a good chance of withstanding First Amendment scrutiny.
Putting Information Consumer Protection into Practice
Although a comprehensive treatment is beyond the scope of this essay, this section offers some concrete ideas for reform under a new information consumer protection framework.
Information Quality and Adulteration
An information consumer protection regulatory framework would empower users to better judge the quality and integrity of information on platforms. First, following past consumer protection quality and safety measures such as food labelling or energy product disclosure laws, social media platforms could be subject to mandatory “account labelling.” Labels could include, for example, verification of the account’s identity and location, whether the account is real or automated, and whether the account or its content is now or has previously been sponsored or promoted. These labels would immediately allow users to better judge the quality and integrity of an information source. Second, regulatory measures can be taken to prevent what I call “information adulteration,” that is, the reduction in quality of the information environment through harmful additives such as fake news. Here, platforms could be required by law, on notice, to remove or reduce the visibility and/or distribution of patently false information or deceptive media (such as the faked Nancy Pelosi video). They could also be mandated to issue standardized disinformation corrections, recalls or warnings, like safety warnings on consumer packaging. A combination of strict liability, reasonable due diligence requirements and conditional safe harbour from penalties and legal liability would ensure compliance.4 During election periods, these duties could be heightened, with additional transparency — as with political ads — required.
Content Transparency, Accountability and Representation
An information consumer protection framework would also seek to improve existing platform content moderation practices through greater transparency, by requiring disclosure of content or account removals (by notice or placeholders), with information as to the basis for removal and the specific law or term of service violated. Following consumer protection safety inspection regimes in other contexts, platforms could be subject to information and content moderation audits — for example, inspection of algorithms and moderator teams. Stronger consumer or user representation would also be warranted in any new procedural content moderation solutions proposed by platforms, such as Facebook’s internal review panel — presumably based in part on the institutional review boards found at universities — established in 2014 to ethically review internal studies (Facebook 2014), or its forthcoming “Oversight Board for Content Decisions” to review the platform’s content moderation practices (Facebook 2019).
Information Environment Safety
Safety for information consumers would be another central focus. One regulatory model here would be preventative protection measures that aim to stop unsafe content or other harmful activities before they are introduced on platforms. Such preventative measures have been central to consumer protection in the past — for example, drug testing and approval laws that require companies to establish the safety of drugs before introducing them to market. One such protection for information consumers would be mandated content warnings, so that users are warned, before they share harmful content, that what they seek to post violates terms of service. New research suggests such warnings reduce polarizing behaviour and promote engagement (Matias 2019). Another example would be a “cooling off” period for new users, so that new accounts on platforms would be restricted in functionality until they had “proven” themselves safe through good behaviour. This period would help prevent spammers, trolls and harassers from circumventing bans or propagating spam and false information through new or multiple “sock puppet” (fictional) account identities.
Information Consumer Rights Enforcement
A new information consumer protection framework would empower governmental consumer protection agencies such as the FTC with new powers to enforce information consumer rights. Efforts to enforce these rights would include working to rebuild trust in the information environment by aggressively pursuing new forms of deceptive and unfair platform practices — for example, distorted content moderation, information consumer manipulation, and data mishandling and misappropriation. Another measure that would build on traditional consumer protections against deceptive and unfair business practices — but be redefined for information consumer challenges such as distorted content moderation and fake news — would be information consumer audits, carried out by the FTC or equivalent agencies, to investigate forms of user experimentation and manipulation on platforms, such as the Facebook contagion study (McNeal 2014).
The proposed new regulatory framework to protect the unique vulnerabilities of the information consumer may be an appropriate solution to meet challenges we are aware of today, but the public and private threats and democratic challenges of the digital age are complex and constantly evolving. There is no silver bullet. And platform and governmental inaction have made things worse in this space, creating a corrosive information environment, rife with distrust. This is a problem: extensive literature speaks to the importance of trust (Rainie and Anderson 2017). It is a fundamental social, political and economic “binding agent,” key to social capital, commerce, democracy and overall public satisfaction. Consumer protection laws have historically been employed to rebuild and sustain public trust in business, government and social institutions (Klein, n.d.). A strong information consumer protection framework would be an important step in this worthwhile direction.
- See https://history.house.gov/Historical-Highlights/1901-1950/Pure-Food-and-Drug-Act/.
- See https://hoofnagle.berkeley.edu/2015/05/07/president-kennedy-consumer-bill-of-rights-march-15-1962/.
- See www.encyclopedia.com/finance/encyclopedias-almanacs-transcripts-and-maps/consumer-bill-rights.
- In the United States, such measures would be contrary to the blanket immunity for platforms under Section 230 of the Communications Decency Act. However, Section 230 has come under increasing criticism in recent years, with experts calling for reforms to its legal immunity. See, for example, Citron and Wittes (2017). Protection for information consumers could constitute an exception to its blanket protections.
Akerlof, George A. 1970. “The Market for ‘Lemons’: Quality Uncertainty and the Market Mechanism.” The Quarterly Journal of Economics 84 (3): 488–500. https://doi.org/10.2307/1879431.
boyd, danah. 2010. “Social Network Sites as Networked Publics: Affordances, Dynamics, and Implications.” In Networked Self: Identity, Community, and Culture on Social Network Sites, edited by Zizi Papacharissi, 39–58. New York, NY: Routledge.
Calo, Ryan. 2014. “Digital Market Manipulation.” The George Washington Law Review 82 (4): 995–1051. www.gwlr.org/wp-content/uploads/2014/10/Calo_82_41.pdf.
Citron, Danielle Keats and Benjamin Wittes. 2017. “The Internet Will Not Break: Denying Bad Samaritans §230 Immunity.” Fordham Law Review 86 (2): 401–23. https://ir.lawnet.fordham.edu/flr/vol86/iss2/3/.
Ciuriak, Dan. 2018. “The Economics of Data: Implications for the Data-driven Economy.” Cigionline.org, March 5. www.cigionline.org/articles/economics-data-implications-data-driven-economy.
Corradi, Antonella. 2015. “International Law and Consumer Protection: The history of consumer protection.” Hauser Global Law School Program. www.nyulawglobal.org/globalex/International_Law_Consumer_Protection.html.
Facebook. 2014. “Research at Facebook.” FB Newsroom, October 2. https://newsroom.fb.com/news/2014/10/research-at-facebook/.
———. 2019. “Draft Charter: An Oversight Board for Content Decisions.” FB Newsroom, January 28. https://fbnewsroomus.files.wordpress.com/2019/01/draft-charter-oversight-board-for-content-decisions-2.pdf.
Geis, Gilbert and Herbert Edelhertz. 1973. “Criminal Law and Consumer Fraud: A Sociolegal View.” American Criminal Law Review 11: 989–1010.
Hilton, Matthew. 2012. “Consumer Movements.” In The Oxford Handbook of the History of Consumption, edited by Frank Trentmann, 505–20. Oxford, UK: Oxford University Press.
Hirsh, Jesse. 2019. “Why Social Platforms Are Taking Some Responsibility for Content.” Cigionline.org, September 11. www.cigionline.org/articles/why-social-platforms-are-taking-some-responsibility-content.
Klein, Daniel B. n.d. “Consumer Protection.” The Library of Economics and Liberty. www.econlib.org/library/Enc/ConsumerProtection.html.
Lande, Robert H. and Neil W. Averitt. 1997. “Consumer Sovereignty: A Unified Theory of Antitrust and Consumer Protection Law.” Antitrust Law Journal 65: 713–45.
Matias, J. Nathan. 2019. “Preventing harassment and increasing group participation through social norms in 2,190 online science discussions.” PNAS 116 (20): 9785–89. https://doi.org/10.1073/pnas.1813486116.
McNeal, Gregory S. 2014. “Facebook Manipulated User News Feeds To Create Emotional Responses.” Forbes, June 28. www.forbes.com/sites/gregorymcneal/2014/06/28/facebook-manipulated-user-news-feeds-to-create-emotional-contagion/#5291200139dc.
Micklitz, Hans-W. and Geneviève Saumier, eds. 2018. Enforcement and Effectiveness of Consumer Law. New York, NY: Springer.
Morgner, Philipp, Felix Freiling and Zinaida Benenson. 2018. “Opinion: Security Lifetime Labels — Overcoming Information Asymmetry in Security of IoT Consumer Products.” In Proceedings of the 11th ACM Conference on Security & Privacy in Wireless and Mobile Networks, 2018–11. Stockholm, Sweden: Association for Computing Machinery.
Murray, Mark. 2019. “Polls: Americans give social media a clear thumbs-down.” NBC News, April 5. www.nbcnews.com/politics/meet-the-press/poll-americans-give-social-media-clear-thumbs-down-n991086.
Pistis, Marco. 2018. “Italy: From Caveat Emptor to Caveat Venditor — a Brief History of English Sale of Goods Law.” Mondaq, January 26.
Powers, Shawn M. and Michael Jablonski. 2015. The Real Cyber War: The Political Economy of Internet Freedom. Urbana, IL: University of Illinois Press.
Rainie, Lee and Janna Anderson. 2017. “The Fate of Online Trust in the Next Decade.” Pew Research Center, August 10. www.pewinternet.org/2017/08/10/the-fate-of-online-trust-in-the-next-decade/.
Siddiqui, Sabrina. 2019. “Half of Americans see fake news as bigger threat than terrorism, study finds.” The Guardian, June 7. www.theguardian.com/us-news/2019/jun/06/fake-news-how-misinformation-became-the-new-front-in-us-political-warfare.
Sky News. 2019. “Pinterest teams up with health bodies to tackle false vaccine information.” Sky News, August 29. https://news.sky.com/story/pinterest-teams-up-with-health-bodies-to-tackle-false-vaccine-information-11796591.
Srnicek, Nick. 2016. Platform Capitalism. Hoboken, NJ: Wiley.
Thompson, Elizabeth. 2019. “Poll finds 90% of Canadians have fallen for fake news.” CBC, June 11. www.cbc.ca/news/politics/fake-news-facebook-twitter-poll-1.5169916.
Waterson, Jim. 2019. “Facebook refuses to delete fake Pelosi video spread by Trump supporters.” The Guardian, May 24. www.theguardian.com/technology/2019/may/24/facebook-leaves-fake-nancy-pelosi-video-on-site.
York, Jillian C. 2010. “Policing Content in the Quasi-Public Sphere.” OpenNet Initiative bulletin, September. https://opennet.net/policing-content-quasi-public-sphere.
Zuboff, Shoshana. 2015. “Big Other: Surveillance Capitalism and the Prospects of an Information Civilization.” Journal of Information Technology 30 (1): 75–89. https://doi.org/10.1057%2Fjit.2015.5.
———. 2019. The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. New York, NY: PublicAffairs.