The EU’s Digital Services Act Doesn’t Go Far Enough

The European Union has reached an agreement about the Digital Services Act, landmark legislation that would impose stricter regulation on digital platforms.

May 16, 2022
European Commission Executive VP Margrethe Vestager and Commissioner for Internal Market Thierry Breton speak to the press about the first of three new digital policies — the Data Governance Act, the Digital Markets Act and the Digital Services Act — in Brussels, November 2020. (Stephanie Lecocq/via REUTERS)

A couple of weeks ago the European Union came to an agreement about the Digital Services Act, a landmark piece of new legislation that would impose stricter regulation on digital platforms. The scope of the act is impressive, covering a wide range of policy areas from content moderation, to online advertising and algorithmic transparency, to disinformation.

The final text is not yet public, and it’s unclear how the new mechanism will relate to other pending EU legislation, such as the Digital Markets Act or the Artificial Intelligence Act. But despite the uncertainties, many agree that the Digital Services Act is a once-in-a-generation overhaul with the potential to create more transparency around how digital platforms operate.

And yet, privacy activists say the agreement doesn't do nearly enough to protect our rights online, or to create a safer online experience for many of us.

Allow me to join the choir of skeptics. My fear is that while it introduces important new protections, this legislation misses the opportunity to reframe how we think about sensitive data. As a consequence, it also falls short of pushing meaningful restrictions on extractive, intrusive and otherwise problematic data practices — most notably, behaviourally targeted advertising.

The Evolution of Behavioural Targeting

Back in 2017, techno-sociologist Zeynep Tufekci gave a TED talk on the evolution of algorithms in online advertising. She explained in vivid detail how persuasion architectures (the organization of products and services in ways that attract consumer attention) became increasingly sophisticated due to ever-larger troves of information about our thoughts and behaviour. Tufekci called this a “digital dystopia,” built for the sake of profit.

Tufekci offered the disturbing hypothetical example of an algorithm promoting Las Vegas tickets on social media. Suppose, she says, that the algorithm has figured out when people with bipolar disorder — a condition marked by dramatic shifts in mood and energy, from extreme highs (the manic phase) to extreme lows (the depressive phase) — are about to enter the manic phase, and targets its marketing to them. During this phase, people can be at increased risk of compulsive behaviour, such as gambling. And, since the onset of mania is now alarmingly easy to detect — by algorithms monitoring online behaviour, or through wearable technology, such as smart watches — it is not far-fetched to assume people who are more prone to overspending and addiction could become easy targets for highly manipulative advertising.

While Tufekci’s example may seem extreme, online behavioural targeting has become widespread in the consumer space, with some occasionally humorous results (as John Oliver opens with in this Last Week Tonight take on data brokers) amid a more ominous sea of dubious, discriminatory and unethical solutions.

In the broader public sphere, targeted ads famously paved the way for the spread of politically charged mis- and disinformation, posing major security concerns and shifting socio-political dynamics across the world.

But despite its wide span and far-reaching implications, the world of data-driven personalized ads has remained largely under-regulated.

What Is Sensitive Data Anyway?

In its defence, the European Union’s new Digital Services Act will introduce some important protections. Even though it stops short of prohibiting surveillance advertising across the board, the legislation would forbid any targeted advertising directed at minors. Further, it would ban all targeting based on religion, sexual orientation, health, ethnicity or political affiliation.

But while that is certainly laudable, the act doesn’t provide anything new to further protect sensitive personal data — nothing beyond what’s already been established by the European Union’s key legal tool for data protection, the General Data Protection Regulation, adopted fully six years ago.

Since then, the landscape of data extraction has changed significantly. We now know not only how easy it is to make increasingly accurate predictions about our behavioural characteristics and preferences from just about anything we share online, but also how hard it is to meaningfully enforce even the most robust legislation when it comes to our digital rights.

That is why privacy groups argue that unless tech companies are completely banned from making such predictions in the first place, the industry will find a way to profit from consumers’ vulnerabilities. Meta may no longer be allowed to openly sell information about our addictive tendencies to an online gambling platform (which feels like a low bar anyway), but the company will still be able to make complex predictions about our behaviour. And we must expect that it will find new ways to share its findings with third-party advertisers.

We Need More Radical Reform

The solutions are not immediately obvious. Introducing more transparency on the algorithmic level may help us gain better insight into the machine-learning models that fuel behavioural targeting, but these models are often too complex to be interpretable to humans. Even when the models are easier to decode, tech companies can cherry-pick which algorithms to showcase and which to keep in the dark.

The truth is that reform can and should be much more radical, and should include broader definitions around sensitive data, meaningful consent mechanisms, and restrictions on digital experimentation.

First, since it is so easy for algorithms to derive complex conclusions from even the most seemingly innocent data points, our approach to what constitutes sensitive personal data must change drastically. Elsewhere I have argued that we should stop obsessing over the type of data that people share online and refocus our attention on the broader impact of digital systems on our physical integrity and dignity. Eventually, this refocus may lead to an overall ban on behaviourally targeted advertising and much stricter rules around data minimization.

Second, regarding consent, opting out of any data collection process, without repercussions, should be a genuine option, and consent mechanisms should be more meaningful in instances where data collection is justified and necessary. Having the ability to say no to non-essential cookies is a step in the right direction, but giving consumers access to a completely tracking-free advertising option could further help the move away from the industry’s intrusive business model.

Finally, we need a new generation of laws to define and regulate “digital experimentations” — online interactions in which advertisers acquire information about our innermost thoughts, moods and feelings, and analyze and monetize it. Some of the rules and norms that guide medical research and clinical trials could easily apply to such mechanisms. And while enforcement would probably remain difficult, remedies for end-users may become slightly easier to secure with the help of robust legislation.

To conclude, we have a long way to go when it comes to creating an online environment that is safe, healthy and enjoyable for everyone. For that to happen, a larger paradigm shift may be necessary. Proposed new legislation at the EU level is a step in the right direction, but reform can and must be much more radical in the future.

The opinions expressed in this article/multimedia are those of the author(s) and do not necessarily reflect the views of CIGI or its Board of Directors.

About the Author

Julia Keseru is a Budapest-based writer and activist who studies connections between technology, data, rights and justice.