Europe at a Crossroads over Planned Use of Biometrics

Due to go live in May 2023, the European Union’s Entry/Exit System could cost half a billion euros in the first few years, but the potential cost to the European Union’s moral authority is much higher.

November 2, 2022
Ukrainian biometric passport IDs allowing travel to Europe without visas are displayed on a table. (Igor Golovniov/Sipa USA via REUTERS)

The European Union is facing a mounting identity crisis, and a genuine test of its purported values, when it comes to its use of biometric identification technologies.

Next year, it will deploy a new information technology system known as the Entry/Exit System (EES) for registering entry, exit and refusal-of-entry information of non-EU nationals visiting the passport-free Schengen area for short stays (defined as a maximum of 90 days in any 180-day period). A consortium formed by IBM, Atos Belgium NV and Leonardo S.p.A. was awarded a contract worth 140 million euros for the underlying software, while IDEMIA and Sopra Steria will provide the corresponding facial recognition system for an estimated price tag of 300 million euros. The EES is expected to cost nearly half a billion euros in the first few years alone. But the cost to the European Union’s moral authority is potentially much higher.
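To make the short-stay rule concrete: compliance must hold on every day of presence, measured over a rolling 180-day window rather than per calendar half-year, which is why overstay detection lends itself to automation. The Python sketch below is a minimal illustration of the rule as stated above, not the European Union’s official calculator; the brute-force approach and the function names are assumptions for demonstration only.

```python
from datetime import date, timedelta

def presence_days(stays):
    """Expand (entry, exit) date pairs into the set of individual
    days spent in the Schengen area (both endpoints inclusive)."""
    days = set()
    for entry, exit_ in stays:
        d = entry
        while d <= exit_:
            days.add(d)
            d += timedelta(days=1)
    return days

def complies_with_90_180(stays):
    """Return True if, on every day of presence, the traveller has
    spent at most 90 days inside the trailing 180-day window."""
    days = presence_days(stays)
    for d in days:
        window_start = d - timedelta(days=179)  # 180-day window ending on d
        if sum(1 for x in days if window_start <= x <= d) > 90:
            return False
    return True

# Two 60-day visits separated by a 30-day absence violate the rule,
# because all 120 presence-days fall inside a single 180-day window.
first = (date(2023, 1, 1), date(2023, 3, 1))    # 60 days
second = (date(2023, 4, 1), date(2023, 5, 30))  # 60 days
print(complies_with_90_180([first]))            # True
print(complies_with_90_180([first, second]))    # False
```

As the example shows, neither visit alone exceeds 90 days; it is the rolling window that makes the combination non-compliant, a subtlety that manual passport stamping leaves border guards to work out by hand.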

First proposed in 2013, a regulation authorizing the EES was formally adopted in 2017. At issue are not necessarily the merits of the new system itself. Rather, as the deficiencies in the process to implement the EES highlight, the problem is the growing precarity of fundamental rights in Europe’s approach to digital governance, as well as the competing values at stake with respect to biometrics. The system is due to go live in May 2023 after significant delays, notwithstanding that attitudes toward facial recognition and other biometrics have changed significantly since the project’s inception nearly a decade ago.

Just last year, the EU Parliament adopted a resolution calling for a ban on certain uses of facial recognition technologies, and specifically cautioned that “the use and collection of any biometric data for remote identification purposes, for example by conducting facial recognition in public places, as well as at automatic border control gates used for border checks at airports, may pose specific risks to fundamental rights.” Individual member states have also called for limits and even outright moratoria on certain uses of the controversial technology by the public and private sectors, and have levied hefty fines against companies such as Clearview AI for commercial deployments.

More recently, the EU Commission’s proposal for a new regulation on artificial intelligence (AI) — the AI Act — identified facial recognition and other biometric AI tools as “high-risk” use cases, with several drafts even calling for a prohibition on “the use of ‘real time’ remote biometrics identification systems in publicly accessible spaces for the purpose of law enforcement.” Although the AI Act is part of a broader EU-wide legislative overhaul designed to preserve fundamental rights with respect to digital technologies, the treatment of biometrics under the act remains one of the primary sticking points holding up a final draft of the proposed legislation. This foreshadows the European Union’s looming identity crisis when it comes to facial recognition and other biometric identification tools. Although the EES is a prime example of this internal-values clash playing out in real time, it has received limited scrutiny — despite what it may suggest about the way forward, including on the AI Act.

Inspired in part by a series of terror attacks across Europe at the time of its introduction, and forming part of a broader “Smart Borders” package, the EES is claimed to support the “identification of terrorists, criminals, suspects, and victims of crime” by providing a centralized database intended to facilitate EU-wide cooperation on border management. It will replace manual document checks and passport stamping by border guards with automated biometric checks, similar to the self-service kiosks or “eGates” found in many airports; enable automated alerts of suspected overstays; and provide access to Europol for law enforcement purposes.

The system will collect biometrics in the form of facial images and fingerprints for purposes of one-to-one identity verification (comparing individuals to their own stored biometrics) and one-to-many identification (comparing an individual against all stored entries in the database), as well as to reduce duplication in personal data processing. A proposal to include iris scanning was abandoned due to logistical challenges. Individuals refusing to provide their biometrics will be denied entry to the European Union.
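The distinction between those two matching modes is worth pausing on. The sketch below contrasts them schematically in Python; it is a toy illustration and not the EES design: in a real system the feature vectors would come from a face recognition or fingerprint model, and the cosine-similarity comparison, the 0.8 threshold and all function names here are assumptions for demonstration only.

```python
import numpy as np

def similarity(a, b):
    """Cosine similarity between two biometric feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(probe, enrolled, threshold=0.8):
    """One-to-one verification: does the live capture match the single
    template stored for the identity the traveller claims?"""
    return similarity(probe, enrolled) >= threshold

def identify(probe, database, threshold=0.8):
    """One-to-many identification: search the probe against every stored
    template and return the IDs of all entries scoring above threshold."""
    return [person_id for person_id, template in database.items()
            if similarity(probe, template) >= threshold]
```

The asymmetry matters: verification compares a probe against one template, while identification compares it against every enrolled template, so the chance of a false match grows with the size of the database. That scaling is one reason remote identification attracts sharper regulatory scrutiny than simple verification.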

The EES and Smart Borders are also in line with a broader trend of the EU Commission’s funding of an increasingly high-tech suite of digital tools and technologies in the name of streamlining immigration enforcement and border control, such as the use of drones and thermal cameras at refugee camps in Greece, in a phenomenon some dub “Fortress Europe.”

The primary motivations articulated for the border management “upgrade” are mostly technocratic. According to the European Union, “the main advantage of the EES is saving time” by “modernising border management,” making travel “easier” and “border checks more efficient” (although many fear that it will actually introduce significant delays and inefficiencies for travellers, a particular concern for those in a post-Brexit United Kingdom). But faster, easier and more efficient for whom? Certainly not for those meant to render their faces and fingerprints at the border. What’s more, the underlying values driving these systems — namely, speed, ease and efficiency — are more corporate than democratic, imbuing core functions of the state with the values of companies providing the underlying technologies.

Worse yet, despite press releases emphasizing “fundamental rights,” a deep dive into the EES documentation reveals only superficial engagement with the fundamental rights of those impacted, beyond the privacy and security of their data. For example, a lengthy impact assessment undertaken by the Commission promises the system “will be developed in full respect of the privacy by design principles,” without further explanation of what that design entails. Even within the narrow lens of data protection, it concludes that the system is “proportionate” because a few key principles are satisfied — namely, that the system “does not require the collection and storage of more data for a longer period than is absolutely necessary to allow [it] to function and meet its objectives.” There is no explication of what it means for the system to “function and meet its objectives.”

Shorter sections of the impact assessment, conspicuously captioned “Other Fundamental Rights,” represent a mere fraction of the attention paid to data protection, and an even smaller fraction compared to the analysis of economic impacts — a clear signal of the relative weight of these concerns for European lawmakers.

Despite the direct impacts the EES is likely to have on the freedom of movement and association, as well as on a host of economic, social and cultural rights, these impacts appear to have been overlooked or unacknowledged in the planning process. Rights to dignity, liberty and security and prohibitions on slavery, forced labour and discrimination are offered cursory treatment — and, oddly, that brief discussion emphasizes how the EES measures may potentially bolster these rights for Europeans (by playing a role in reducing crime or terrorism, for example), rather than how the system threatens the rights of those individuals whose biometrics will be harvested. It’s tantamount to a reverse impact assessment of fundamental rights.

Indeed, the assessment asserts that it’s been established that such individuals will suffer no violation of the right to dignity, on the basis of a survey querying “1,234 randomly selected third-country nationals” at border control posts about their comfort with the use of biometrics — a methodologically curious approach by any measure. And despite substantial and growing evidence to the contrary, respondents were found to share a “widely held view that automated systems could cause less discrimination compared to checks carried out in person by border guards … based on the assumption that machines entail a lower risk of discriminatory profiling.” Thus, the assessment effectively finds that because people believe or assume their rights to be safeguarded, their rights are, in fact, safeguarded — a kind of jurisprudential gaslighting.

Finally, on law enforcement access to EES biometric data, the Commission’s impact assessment concludes that “it is difficult to justify that no access is granted to data that can be helpful in preventing terrorist attacks and stopping criminal activities, both having a negative impact on the fundamental rights of their victims.” In other words, the impact on fundamental rights of people not subject to the EES (from presumed but unsubstantiated benefits of the planned technological improvements) is privileged above the rights of non-EU nationals who will be subjected to the biometrics checks. In this way, the approach to the EES also demonstrates how an emphasis on European-centric rights such as the right to data protection can displace the fundamental human rights of people more generally.

The process around the EES raises a host of important questions about the European Union’s approach to technology governance. Specifically, what becomes of fundamental rights when lawmakers can choose to override or sidestep them with such ease? What does it mean when corporate values displace democratic ones vis-à-vis digital technologies? What does it mean to protect and secure data more than we protect and safeguard the people who are most directly impacted by technologies harvesting that data? And what happens when the fundamental rights of one group are privileged over those of another?

The European Union is often considered an exemplar on fundamental rights in the digital age. Its approach to data protection has been replicated the world over in what has been termed the Brussels Effect. And, at present, the region is rolling out a host of new laws and regulations focused on digital markets, platforms and data. But a closer look at the bloc’s approach to biometrics, such as in the case of the EES, raises serious questions about its approach to the governance of digital technologies more generally. It also points to a deeper conflict between the European Union’s stated democratic values rooted in human rights, on the one hand, and a technocratic impulse to digitalize at all costs, on the other.

With new regulations on the table, European lawmakers have a choice — confront and reconcile these contradictions or lose their moral authority and exemplary status when it comes to technology governance. The world is watching.

The opinions expressed in this article/multimedia are those of the author(s) and do not necessarily reflect the views of CIGI or its Board of Directors.

About the Author

Elizabeth M. Renieris is a CIGI senior fellow, lawyer, researcher and author focused on the ethical and rights implications of new technologies. Her latest book is Beyond Data: Reclaiming Human Rights at the Dawn of the Metaverse.