Illustration by Abhilasha Dewan

Just over a year ago, on May 25, 2018, the European General Data Protection Regulation (GDPR) came into effect. The first-of-its-kind policy showed great promise during development; it was intended to harmonize privacy and data protection laws across Europe while helping EU citizens to better understand how their personal information was being used, and encouraging them to file a complaint if their rights were violated. As a new regulatory framework, the GDPR was an acknowledgement that the digital economy — fuelled by (personal) information — should operate with the informed consent of users and clear rules for companies that seek to do business in the European Union.

Implementing the policy, however, is illustrating just how much more work must be done before the GDPR is fully functional. European citizens, corporations and data governance frameworks still face a number of issues that the GDPR was intended to mitigate, as well as a handful of new problems. Stronger fines, greater collaboration and an acknowledgment of some of the policy’s blind spots are sorely needed for the GDPR to be more effective in the months and years to come.

A Global Concern for Citizen Data Protection

The political will behind and mandate of the GDPR were driven by the concern that individuals’ personal information was being exploited in ways that undermined privacy and, by extension, democracy.

Austrian lawyer and data rights activist Max Schrems played an important role in developing both the awareness of and the eventual legal response to the exploitation of Europeans’ personal information. After studying in the United States in 2011, Schrems returned to Europe and filed a request with Facebook for all the information the company had on him. Shocked by the 1,200-page response, Schrems then started the group “Europe v Facebook,” which, until 2017, helped build the popular case and support for expanded privacy and data rights, as articulated in the GDPR.

While existing legislation already provided a fairly high level of privacy protection, the GDPR extended the scope of this standard to non-EU organizations that process Europeans’ personal data. In anticipation of the passage of the GDPR, Schrems then founded noyb (short for “none of your business”), a European privacy enforcement non-governmental organization (NGO). Schrems also filed the first complaints, mere minutes after the GDPR came into effect.

Illustration by Abhilasha Dewan

A similar French NGO, La Quadrature du Net, also launched some of the earliest complaints, against what they dubbed GAFAM (Google, Apple, Facebook, Amazon and Microsoft). Crowdsourced from 12,000 French citizens, these complaints were subsequently made available as templates for others in the European Union to reuse.

Given the GDPR’s citizen-focused origins, the regulation’s impact on individuals — in Europe and elsewhere — is an important benchmark for understanding its successes and shortcomings.

Informed consent: Success, however, must first be defined. Since the implementation of the GDPR, more people have clicked “I agree” and “I accept” than in any previous year. In fact, for most individuals, pop-up buttons and persistent emails asking for consent were their primary interactions with the new legislation; providing a privacy notice and soliciting user consent were the dominant approaches to compliance taken by most organizations. However, the act of quickly clicking a button is fairly incongruent with the concept of offering meaningful consent, particularly when “consent fatigue” arises in the face of an endless stream of vaguely worded, often unreadable, notifications. For this reason, allowing organizations to use this form of individual consent to signal compliance may not be the most effective means of reducing the use of individuals’ data without their knowledge.

If anything, the GDPR has exposed just how low the bar was for transparency and obtaining consent. As a direct result of the complaints jointly filed by noyb and La Quadrature du Net, Google was fined €50 million by the French data protection authority CNIL, for forcing consent by giving users only one real choice: consent in full to non-specific, poorly explained uses of their data or do not proceed at all. In a similar case that is currently active, the European Union’s highest court is considering lottery organizer Planet49’s use of pre-ticked boxes to obtain consent to the use of cookies. And, highlighting the insufficiency of digital consent practices, 25 of 28 official EU websites were revealed to have been infected with advertising scripts that tracked visitors without consent.

Breach notification: In contrast, the GDPR framework has actually been a resounding success as a model for breach notification policy. This policy is built on the idea that when a breach happens, the “supervisory authority” needs to be notified within 72 hours, with the ultimate goal being the notification of affected users so they can take action to protect themselves (and their information). There has been a massive increase in the reporting of breaches (including self-reporting). According to the International Association of Privacy Professionals (IAPP), more than 89,000 incidents have been reported — roughly double the previous rate. The obligation to directly notify individuals of potentially damaging breaches in a timely fashion is an example of the unambiguously positive impact of a unified regulation that expands the definition of personal data and the protocols around its use.
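The 72-hour clock that drives this obligation is simple enough to sketch in code. The function names below are invented for illustration; the only rule taken from the regulation itself is the 72-hour window that starts when a controller becomes aware of a breach:

```python
from datetime import datetime, timedelta, timezone

# GDPR Art. 33: the supervisory authority must be notified within
# 72 hours of the controller becoming aware of the breach.
NOTIFICATION_WINDOW = timedelta(hours=72)

def notification_deadline(aware_at: datetime) -> datetime:
    """Latest moment at which the supervisory authority may be notified."""
    return aware_at + NOTIFICATION_WINDOW

def is_overdue(aware_at: datetime, now: datetime) -> bool:
    """True once the 72-hour reporting window has lapsed."""
    return now > notification_deadline(aware_at)

# Example: a breach detected at 09:00 UTC on May 1 must be
# reported no later than 09:00 UTC on May 4.
aware = datetime(2019, 5, 1, 9, 0, tzinfo=timezone.utc)
print(notification_deadline(aware))  # 2019-05-04 09:00:00+00:00
```

In practice the harder questions are legal, not computational: when exactly a controller “became aware,” and whether the breach is serious enough to require notifying the affected individuals as well.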

Automated decision making: Provisions exist that address individuals’ right to keep their data from being subject to solely automated decision making that has legal or other significant impacts, such as profiling. However, the overall lack of precision in how the rights of data subjects are defined with regard to artificially intelligent algorithmic systems makes the GDPR a bit “toothless” in this area.

The problem is that automated decision making is still relatively new, and accountability systems to audit and oversee it are generally not yet in place. How would a citizen first know that such a decision is taking place when all they may receive is the result? While the GDPR does increase the likelihood of being informed of the general logic used and the possible consequences (for example, how a low credit rating would affect available payment options), it does not guarantee an explanation of how the algorithm made a specific decision.

This is particularly concerning if automated decision making is regarded as more “objective” when it is actually perpetuating historical biases. Researchers have documented how risk-assessment algorithms used in the US justice system are racist (not that different from human-led criminal justice systems, which are also prone to bias), and yet their results are accepted as authoritative guides to sentencing without being tested or any real understanding of how they work.

The GDPR’s provision against profiling in automated decision making also states that profiling may be allowed if the user consents, or if the profiling is necessary for entering into or the performance of a contract. It could be argued that this describes all algorithmic media: the system creates a profile of the user to customize content, and that profile is necessary for the operation of the algorithm. Profiling is, in fact, where the failure of the individual consent model is most apparent, because profiles created by using aggregated, de-anonymized data, or inferred or predicted data, can be generated without the knowledge of the individual.

Citizen awareness: While user interactions with GDPR-sparked initiatives (such as the reporting process or consent buttons) are on the rise, citizens’ attitudes about and expectations of data governance are not keeping pace. Certainly, Europeans’ awareness of data protection and data privacy has increased; according to an EU survey, Eurobarometer, 73 percent of Europeans have heard about at least one of their new rights. Unfortunately, only three in 10 Europeans are aware of all of their rights.

On a more positive note, there has been a huge increase in people exercising their rights, with 144,000 individual complaints (concerning access requests, unwanted marketing, employee privacy and deletion requests). The GDPR also seems to have brought to the fore a new awareness of the many potential flaws or shortcomings regarding data protection in many smart city plans, and has given the entire notion of privacy as a human right a currency it did not possess before.

An interesting, if foreseeable, turn of events is the apathy evident among EU citizens in their attitudes toward data protection. The 2019 CIGI-Ipsos Global Survey on Internet Security and Trust found that Europeans are the least concerned about online privacy among those surveyed (who, overall, are more concerned than they were before the GDPR).

Another survey reveals that one year after the GDPR’s implementation, the population’s high general awareness of the GDPR (67 percent) is paired with a decreased sense among individuals that the regulation will improve their interactions with organizations. For example, respondents had lower expectations that companies would stop selling their data. Still, 63 percent of respondents believe that the GDPR improved data privacy and five percent fewer people are now opting out of data collection; as well, there has been an 11 percent decrease in the number of people asking for data deletion. If a certain cynicism about corporate behaviour prevails, so does an implicit faith that the mere existence of the new regulation precludes the need for individuals to take further action to protect their data.

Yet, unknown to most citizens, potential loopholes persist in the GDPR. For instance, an exemption from obtaining explicit, prior consent is available to those who can argue a “legitimate interest” as a business in processing personal data in a manner users might “reasonably expect,” such as on the basis of an existing relationship. A customer of a store, for example, can be sent marketing emails as long as an opt-out option remains. These provisions have been subject to abuse by data brokers and others whose activities remain opaque to the general public.

While lack of consumer awareness of rights probably contributes to corporate non-compliance (why change when no one is reporting you?), a greater number of fines and actions could dramatically reduce such practices.

Regulating Cash-rich Corporations

Concern for citizen data sparked the development of the GDPR, and the technology companies that deal primarily in data were the cause for that concern. Corporate data collection practices were reaching a level that left many citizens uneasy, and as the value of data grows, so does the industry competition to gather more personal information.

The GDPR’s role in addressing industry practices included not-so-radical policy changes; the new framework is more of an enhancement of the laws that were already in place. Ideally, future revisions to the GDPR would go beyond penalizing a handful of companies for their operations. The GDPR has potential to help change the data collection ecosystem as a whole — whether or not it has done so yet is up for debate.

Corporate bureaucracy: The first year of the implementation of the GDPR seems to have had a negative impact on the funding of EU-based tech companies (and in particular, start-ups), which saw a downturn in venture capital investment and a similar decrease in advertising budgets. Further, there is a widespread perception that the GDPR has not changed corporate practices but instead added a layer of bureaucracy that is especially onerous to smaller enterprises.

UK Information Commissioner Elizabeth Denham argues that data protection officers (DPOs) have an important role to play in helping companies shift from baseline compliance to real accountability. And, in fact, there are an estimated 500,000 organizations that have registered DPOs across Europe, according to IAPP. However, 52 percent of the organizations that had done so by the end of 2018 said they had only done it for compliance with the law (although 48 percent felt it served a valuable purpose within the company).

Illustration by Abhilasha Dewan

Because the regulation leans toward a self-policing, self-reporting model, companies have been focusing on adding personnel in order to achieve compliance rather than actually changing what they do and why. For example, Facebook has not changed its business model; rather, it has hired more lawyers to defend that model and adopted language that makes it easier to obtain consent, disregarding the fact that most Facebook users have little choice but to consent if they want to communicate with friends and family. Similarly, Google has not changed its business model around search or YouTube but instead has added language so that people better understand why they receive customized results (and why the service will be inferior if they don’t opt in and provide their personal information for customization). Combine this with poor board-level awareness and superficial efforts at compliance, and it is no wonder that the GDPR instigated new bureaucracy rather than a culture change in corporate practices.

Industry competition and consolidation: Author and data rights advocate Cory Doctorow argues that the complexity and costs of implementation are driving industry consolidation; compliance is complicated and expensive. The current regulatory model, Doctorow says, favours American giants who have figured out how to make the system work. Others have similarly argued that larger companies are able to game the system by giving the appearance of compliance by changing user interfaces, for example, but not their practice.

While the GDPR’s self-reporting model has some upsides, it must be coupled with stronger enforcement mechanisms. Antitrust actions that acknowledge the power dynamics at play (in particular, those between data and technology giants) and the GDPR’s impact on competition are strong first steps.

Inconsequential fines: While the GDPR’s promised fine structure had everyone’s attention initially, some flaws and inconsistencies are emerging. For example, Knuddels.de, a German chat site, was fined a modest €20,000 for a self-reported data breach, while the Portuguese Hospital do Barreiro was fined €400,000 for a seeming lack of regard for security around access to patient records.

Further, most price tags pose little threat to the cash-rich companies likely to face the largest fines. Although fines were imposed on 91 different companies in the GDPR’s first year of implementation, most were relatively minor; a single fine accounted for 89 percent of the total €56 million in fines issued. And even this €50 million fine levied against Google is far from the maximum allowable fine of €3.7 billion (which would be four percent of Google’s entire global revenue).
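The GDPR’s upper-tier cap is the greater of €20 million or four percent of total worldwide annual turnover (Article 83(5)). A minimal sketch of that arithmetic follows; the implied turnover figure is back-derived from the article’s €3.7 billion maximum purely for illustration:

```python
def gdpr_fine_cap(annual_turnover_eur: float) -> float:
    """Upper-tier cap under GDPR Art. 83(5): the greater of EUR 20 million
    or four percent of total worldwide annual turnover."""
    return max(20_000_000.0, 0.04 * annual_turnover_eur)

# The article's EUR 3.7 billion maximum implies global turnover of
# roughly EUR 92.5 billion (3.7e9 / 0.04) -- an illustrative back-derivation,
# not a reported figure.
implied_turnover = 3.7e9 / 0.04
cap = gdpr_fine_cap(implied_turnover)
print(round(cap))        # 3700000000
print(round(50e6 / cap, 4))  # the actual EUR 50 million fine is ~1.35% of the cap
```

Seen this way, even the headline-grabbing CNIL fine consumed only a tiny fraction of the regulators’ available firepower.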

On a positive note, some organizations are now openly discussing the changes needed to reduce the data they require or to be less intrusive. And there have been hints (pretty explicit ones from France, Germany and Ireland) that this grace period is now coming to an end. Ireland — which hosts the EU headquarters of every major digital player — has 19 statutory inquiries in progress right now against big tech. Across the European Union, a ramping up of staffing in data protection agencies is under way.

Recently, the UK Information Commissioner’s Office (ICO) fined British Airways £183.39 million for a major data breach resulting from poor security, roughly four times the amount of the largest previous fine under the GDPR (CNIL’s €50 million against Google). Commissioner Denham’s accompanying statement that personal data loss is “more than an inconvenience” and that organizations are obligated to “protect fundamental privacy rights” seems to indicate a willingness to push companies through accountability measures to embrace more than just the letter of the law. As the GDPR moves into its second year, the role of fines in changing corporate behaviour will undoubtedly come back into the spotlight.

A Step Forward for Data Governance

Arguably, the GDPR made a greater impact on national and international governance than it did on citizen data or industry practice.

Countries around the world are now debating or passing new privacy legislation, as well as entertaining greater regulatory action against growing global technology giants. The GDPR has been regarded as a new standard that many countries are aspiring to align with. While this does not mean that the GDPR is the ultimate regulatory goal, it has presented a target or milestone that other countries are now moving toward.

A global conversation on data protection and privacy is expanding, and the impact on non-EU countries is evident. This is true both inside Europe (Switzerland, Norway, Iceland, Liechtenstein) and out: California’s upcoming Consumer Privacy Act, India’s soon-to-be-tabled Personal Data Protection Act and South Korea’s updating of its Personal Information Protection Act are among the standouts globally.

The GDPR’s most commonly reproduced characteristics are likely its provisions around data breaches, data subject rights and accountability. Its omnibus-law approach to data protection across all industries and contexts is also proving popular, as countries engage in widespread upgrades of their laws to reflect the challenges posed by the digital economy. In part this approach reflects the European Union’s linkage of the adoption of its privacy standards with its free trade agreements, through “adequacy decisions”: countries such as New Zealand, Israel, Argentina, Japan, Colombia, South Korea and Bermuda have sought to mirror the GDPR’s standards in their own reforms.

But, as the main principles of data protection enshrined in the GDPR are being fleshed out in practice, a fragmented system of data governance is still apparent. Although the framework’s explicit goal was unification of disparate existing legislation, embedding the GDPR into national law and creating agencies to execute it has not happened uniformly across Europe.

Not only are there variations in approach to enforcement (for example, many countries haven’t issued a single fine yet), but a number of member states have been late in adopting legislation necessary to roll out the GDPR, or have interpreted the guidelines on derogations (rules specific to that country), exceptions and restrictions quite differently. A small number of nations have actually adopted measures that contradict the GDPR; for example, Romania lifted restrictions on the processing of personal data by political parties. As the EU Justice Commissioner Věra Jourová observes, implementation has been especially weak in states that didn’t take concerns about their citizens’ data rights very seriously in the first place.

Outside of Europe, the GDPR faces more challenges still. The framework recognizes that the collection and processing of data is not typically confined within national borders, extending its protection of EU citizens’ data outward, but enforcement is typically jurisdiction-bound. However, several recent high court decisions have added momentum to a more pan-European approach. Recently, Facebook saw the Irish Supreme Court dismiss its attempt to quash the referral of questions around US-EU data transfers to the Court of Justice of the European Union for a determination, and saw the Austrian Supreme Court rule that its attempt to block a noyb lawsuit by invoking lack of jurisdiction was invalid, since Austrian law cannot be used to restrict the GDPR.

Cross-border processing of cases by EU supervisory authorities is also on the rise, and the continuing evolution of mechanisms for cooperation, such as procedures for mutual assistance, joint operations and the “one-stop shop” (which designates a lead for cross-border cases based on where the company is headquartered), will be critical to the success of GDPR implementation.

Conclusion

The GDPR is an ambitious and pioneering attempt to create a comprehensive, unified standard for digital privacy and data protection. The problems it addresses are complex, and as an enforcement mechanism it will continue to mature over time. Right now, its mandate is primarily educational, demanding transparency in the name of keeping citizens informed about the use of their data. And it has been pretty successful at shining a spotlight on shady practices that, before its implementation, were familiar mainly to tech experts and academics. Secondarily, the GDPR can be a useful tool for policing and curbing the worst excesses and exploitation (such as dark patterns, data mining and so on).

However, it is important to note that for all its virtues, the GDPR does little to question existing models. The emerging unintended side effects are the wholly foreseeable consequences of treating data as a commodity rather than as a collective good; the GDPR could certainly boost the power of big tech or reinforce the very data use practices that inspired it to begin with. One year in, it seems as if the GDPR has failed to mitigate the de facto monopoly technology giants have on the collection and use of data. And if dismantling that monopoly is the goal, more than the GDPR will be required. Bureaucratic control will never be as effective as a mobilized and vigilant citizenry that uses democratic voices to demand new rules and a different society. That citizenry is beginning to demand — and deserves — better governance of technology, data collection and automated decision making.

The opinions expressed in this article/multimedia are those of the author(s) and do not necessarily reflect the views of CIGI or its Board of Directors.
  • Jeanette Herrle, PhD, is a researcher, teacher and public speaker based in Lanark County, Ontario. Jeanette investigates the production and mobilization of knowledge, technology and innovation and is currently researching the intersection of agro-ecology, health, technology and lifelong learning.

  • Jesse Hirsh is a researcher, artist and public speaker based in Lanark County, Ontario. His research interests focus largely on the intersection of technology and politics, in particular artificial intelligence and democracy. He recently completed an M.A. at Ryerson University on algorithmic media.