We Can Protect Freedom of Thought by Deciding What We Feed Our Brains

Because this is a socio-technical problem, technology alone can’t solve it.

January 17, 2024
(Photo illustration by Beata Zawrzel/NurPhoto via REUTERS)

Battles rage over social media platforms and how they limit freedom of expression — or fail to. How do we handle “lawful but awful” speech, when awful is often in the eye of the beholder, and highly dependent on context and community? As society moves further online, the flow of speech among individuals and the ecosystems of society becomes encoded and mediated by software and decision algorithms ill-equipped to decipher the tangled web of human truth, value, individuality and community. That complexity presents a huge socio-technical dilemma, now mired in conflict, and driving controversial cures that may be worse than the disease.

Cutting through this dilemma requires rethinking the social component of online discourse, not just the technology. That leads to a generally unrecognized social aspect of freedom of thought — freedom of impression — that is to say, the freedom to choose to whom we listen — and with whom we “virtually” assemble and associate. Consider how thought is a very social process. Thoughts develop in our minds, then are expressed out to others; but that is just the start. Expressed thoughts then flow through a rich social mediation ecosystem in which listeners participate and offer feedback, individually and as communities; thoughts are then impressed back into our heads, leading to continuing cycles of thought.

These patterns of thought, speech and the ensuing social mediation evolved over centuries, until the twentieth century — dominated by mass media — refocused society on a restrictive logic for controlling the broadcasting of speech. The logic of mass media emphasized controls that limited expression because broadcast channels amplified expressions from a speaker directly to all listeners — acting as a megaphone. That technology of scarce channels with broad, direct amplification made curation of expression by channel-owning publishers the only workable point of control over the power of mass-scale reach. Even so, individuals retained reasonable freedom of impression, that is, the freedom to choose which channels they listened to.

That logic of restricting expression is no longer appropriate to the emerging world of massively open online discourse. Social media platforms have created a new kind of global public sphere in which the cycles of expression, social mediation and impression are far tighter, faster and wider-reaching, and less under our own control.

Unlike with broadcast media, these cycles are highly reflexive in ways that drive viral cascades that can become harmful in the real world. That hyper-reflexivity results from how computer mediation works — not as direct one-to-many amplification, but as small, fast, multi-stage cycles of propagation from person to person — spreading much like word-of-mouth rumour, but at unprecedented speed and scale. That is best managed not by new limits on expression, but by productively supporting the social mediation process and how that feeds into impression. Unfortunately, the platforms are neither motivated nor able to manage that social process effectively and with legitimacy.

Instead, social media platforms have exploited technology and network effects to replace the traditional mediation ecosystem, so that our cycles of thought are mediated not via human speakers, listeners and their communities, but through software decision algorithms (rules) the platforms control. We no longer actively choose among many publishers who broadcast to all who tune in. Instead, we passively receive, from each of a very few platforms, a collected feed of expression from a universe of speakers, personalized to feed us what the platforms predict will engage each of us. They feed us junk food so we will keep clicking the ads they sell. The health effects of what is fed into our heads are a very secondary concern.

Because this is a socio-technical problem, technology alone can’t solve it. Algorithms control and compose our feeds, but they do so based on the behaviour of listeners in each of a series of cycles, as they “like,” share and comment on content. Those feedback loops cause the cycles to cascade or peter out in ways that depend on complex, multi-stage interactions of both algorithms and these reactive human behaviours. That complexity is compounded by how the current practice of “moderation as removal,” a form of censorship, runs into knotty questions of legitimacy — what should or should not be removed, in what context, and for what community?
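
To make that dynamic concrete, consider a toy model: treat propagation as a branching process in which each stage's reach is the product of human re-share behaviour and algorithmic amplification. The sketch below is purely illustrative; the model and its parameters (share_rate, algo_boost) are my simplification, not any platform's actual ranking code.

```python
import random

def simulate_cascade(seed_audience: int,
                     share_rate: float,   # probability a listener re-shares (human behaviour)
                     algo_boost: float,   # new listeners the algorithm adds per share
                     max_stages: int = 10) -> list[int]:
    """Return the audience reached at each propagation stage."""
    reach = [seed_audience]
    audience = seed_audience
    for _ in range(max_stages):
        # Human behaviour: each listener independently decides whether to re-share.
        shares = sum(random.random() < share_rate for _ in range(audience))
        # Algorithmic amplification: each share is fanned out to new listeners.
        audience = int(shares * algo_boost)
        if audience == 0:
            break
        reach.append(audience)
    return reach

# Effective reproduction number R = share_rate * algo_boost:
# R > 1 tends to cascade virally; R < 1 tends to peter out.
print(simulate_cascade(100, share_rate=0.05, algo_boost=30))  # R = 1.5: cascade
print(simulate_cascade(100, share_rate=0.05, algo_boost=10))  # R = 0.5: peters out
```

The point of the toy model is that neither factor alone determines the outcome: the same human share rate cascades or dies depending on the algorithmic boost, and vice versa, which is why interventions aimed at only one side of the loop tend to disappoint.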

But there is a way to step back to a more organically nuanced solution — and that is to restore the listener’s freedom of impression, which has worked well for centuries. As social creatures, humans evolved a dynamic ecosystem of communities and institutions (gathering places, social and civic associations, the press, academia and churches) that helped shape their thoughts, creating “social trust.” This social mediation ecosystem was federated — a multi-level system given legitimacy in accordance with principles of subsidiarity, by which people opted into social mediation communities from the bottom up, at levels that reflected contexts and norms local to their community, while select institutions offered balance, from the top down.

Conceptually, the lesson is simple: rather than allowing a few private global platforms to co-opt freedom of impression in composing each listener’s information feeds (and related recommendations), let’s return agency to each listener over who composes their feeds, using which algorithms.

Of course, most users lack the skill and patience to manage their feeds by themselves. The solution is not to abdicate that user agency to platform control, but rather to enable users to delegate that agency to a new kind of attention-agent service, now emerging in Gobo, Block Party and, most prominently, Bluesky.

These intermediaries (sometimes called “middleware”) would be chosen by users to fit between them and the platform, composing their feeds in ways that match their personal interests and values. Such services could develop and draw on the full richness of a social mediation ecosystem. New entities of all kinds would emerge to offer these attention-agent services, but existing real-world communities and institutions could also be re-empowered to mediate for their participants electronically, much as they did traditionally. That ecosystem is essential to re-establish the social trust that society is now losing as life moves increasingly online.
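
To illustrate the architecture in a deliberately simplified form, the sketch below shows the middleware pattern: the platform exposes the raw firehose, and a user-chosen attention agent scores and composes the feed. All of the names here (FeedItem, AttentionAgent, community_agent, compose_feed) are hypothetical, not the API of Bluesky, Gobo, Block Party or any real service.

```python
from dataclasses import dataclass
from typing import Callable, Iterable

@dataclass
class FeedItem:
    author: str
    text: str
    engagement: int  # raw platform signal (likes/shares)

# An attention agent is any user-chosen service that scores items
# according to that user's (or their community's) interests and values.
AttentionAgent = Callable[[FeedItem], float]

def community_agent(trusted_authors: set[str]) -> AttentionAgent:
    """Example agent: favour voices the user's community vouches for."""
    def score(item: FeedItem) -> float:
        trust = 2.0 if item.author in trusted_authors else 1.0
        return trust * (1 + item.engagement) ** 0.5  # dampen raw engagement
    return score

def compose_feed(firehose: Iterable[FeedItem],
                 agent: AttentionAgent, limit: int = 50) -> list[FeedItem]:
    """The platform supplies the firehose; the user's agent ranks it."""
    return sorted(firehose, key=agent, reverse=True)[:limit]
```

The key design choice is a separation of powers: the platform keeps hosting and delivery, while ranking, the locus of impression, moves to an agent the user can choose, audit or replace.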

Returning agency to the listener would shift direction away from “platform law” — a regime of corporate governance of discourse — back to a state in which that governance is again dispersed organically among users and the ecosystem of communities and institutions with which they choose to associate, to help mediate their thinking.

This could revive the vision of computers as empowering and responsive “bicycles for our minds,” a vision grounded in the primacy of personal agency. (Would anyone want to ride a bicycle that is steered externally by some corporation or a government?)

This is not to suggest there should be no top-down controls: the subsidiarity of federation can include them as needed, to control harmful content that is truly illegal and to bridge open the online feedback loops that, if poorly managed, can feed the narrowing insularity, intolerance, partisan sorting and polarization of so-called echo chambers and filter bubbles.

An emerging movement to restore user agency in this way has begun to gather momentum, including not only scholars (as above) but also governments and businesses (as below), as I describe in depth on my blog.

Legislatively, the preamble of the much-debated Section 230, enacted in 1996, enshrines “user control over what information is received by individuals…who use the Internet” as “the policy of the United States.” Provisions that would require platforms to enable that control to be delegated to user-chosen agents appear in the pending Senate ACCESS Act, in the European Union’s recently enacted Digital Markets Act and, still more deeply, in pending New York State Senate Bill S6686. The pendulum of centralization may also be reversing direction in the market, as platforms struggle and users turn to more distributed solutions that offer subsidiarity.

As for implementation, there is already the thriving “fediverse” (federated universe) of Mastodon communities, which Meta’s new Threads has pledged to join, and there are more dimensions of subsidiarity in Bluesky and similar projects that promise delegated and composable user control over how feeds work: not as a monolithic community/platform, but as a “pluriverse.” These new services would enable users to choose which algorithmic services they delegate to uprank what is “desirable” and which services they empower to downrank or hide “undesirable” content, while other users may choose very differently.
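
As a sketch of what “composable” control might mean in practice, the hypothetical Python below stacks several user-delegated services: some uprank, some downrank, and a narrow top-down layer hides illegal content. None of these names reflect Bluesky’s actual labeler or feed-generator APIs; they are illustrations of the pattern only.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Post:
    text: str
    topic: str
    score: float = 1.0
    hidden: bool = False

# Each service rescores or hides a post; the user chooses which to stack.
Service = Callable[[Post], Post]

def uprank_local_news(post: Post) -> Post:
    if post.topic == "local-news":
        post.score *= 2.0        # user-delegated upranking of the "desirable"
    return post

def downrank_outrage(post: Post) -> Post:
    if "outrage" in post.text.lower():
        post.score *= 0.2        # demote the "undesirable" rather than delete it
    return post

def hide_illegal(post: Post) -> Post:
    if post.topic == "illegal":
        post.hidden = True       # the narrow, truly top-down layer
    return post

def apply_stack(posts: list[Post], stack: list[Service]) -> list[Post]:
    for service in stack:
        posts = [service(p) for p in posts]
    return sorted((p for p in posts if not p.hidden),
                  key=lambda p: p.score, reverse=True)

# One user's stack; another user may compose a very different one.
my_feed = apply_stack(
    [Post("Outrage du jour!", "politics"),
     Post("Council meeting tonight", "local-news")],
    [hide_illegal, uprank_local_news, downrank_outrage],
)
```

Note that downranking demotes rather than deletes, so a different user, running a different stack over the same posts, sees a different feed; that plurality is the “pluriverse.”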

The digitization of human discourse is an immense undertaking that will radically reshape humanity and the nature of human thought over many decades. It will take rich and flexible support for social mediation by other users and communities to enable each user to control what they drink from the firehose of global speech without being overwhelmed.

The time-tested way for users and open societies to mediate that is not by censoring expression through removal, but by letting listeners control what they consume, enabling freedom of impression. This is achievable with minimal government restriction of freedom of expression (as US First Amendment law requires), while building on explicitly protected rights of association and assembly.

By restoring agency to thinkers as listeners, not just as speakers, our marketplace of ideas can be largely self-regulated by citizens in concert with the communities with which they choose to associate. Just as financial markets maintain an open but balanced “orderly market” through a marketplace of mediating services (and regulation), the marketplace of ideas can maintain an open but balanced orderly market through a marketplace of mediators.

A version of this piece has been published by Tech Policy Press.

The opinions expressed in this article/multimedia are those of the author(s) and do not necessarily reflect the views of CIGI or its Board of Directors.

About the Author

Richard Reisman is a non-resident senior fellow at the Foundation for American Innovation and an expert on tech policy.