Let’s Face the Facts: To Ensure Our Digital Rights, We Must Hit Pause on Facial Recognition Technology

February 16, 2020

In January, a new pro-democracy group called Alliance Canada Hong Kong hosted a lecture series. Like a growing number of political activists and citizens, the organizers were concerned about autocratic governments’ increasing use of facial-recognition technology, so volunteers covered their faces and used pseudonyms. The organization’s executive director still received a threatening call to her hotel room.

The event, however, did not take place in an autocratic country. It happened at the Vancouver Public Library.

Facial-recognition technology uses a form of artificial intelligence (AI) called neural nets to match biometric features of faces to images in photos or videos. To work at scale, it needs to be trained on data sets of millions or ideally billions of images — something the Chinese government has ready access to. Drawing on the data produced by Chinese citizens on the technology platforms Beijing controls, the government uses this system to monitor populations, most notably the Uyghurs, and to manipulate the behaviour of 1.4 billion people through a social credit system that ranks people via an aggregate of their movements and activities.
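To make the mechanics concrete, here is a minimal sketch of the matching step, in Python. A trained neural network reduces each face image to a fixed-length numeric "embedding," and two faces are declared a match when their embeddings are sufficiently similar. The embed_face function is a hypothetical stand-in for such a trained model, and the 0.6 threshold is illustrative, not any real system’s setting.

```python
# A minimal sketch of the matching step in a facial-recognition pipeline.
# embed_face() is a hypothetical stand-in for a trained neural network that
# maps a face image to a fixed-length feature vector; real systems learn
# this mapping from millions of labelled face images.
import numpy as np

def embed_face(image: np.ndarray) -> np.ndarray:
    # Placeholder: derive a deterministic dummy 128-d embedding from the
    # image contents, then unit-normalize it. A real model replaces this.
    rng = np.random.default_rng(int(image.sum()) % (2**32))
    vector = rng.standard_normal(128)
    return vector / np.linalg.norm(vector)

def is_match(probe: np.ndarray, candidate: np.ndarray, threshold: float = 0.6) -> bool:
    # Cosine similarity of two unit vectors is just their dot product;
    # declare a match when it clears a tuned threshold.
    return float(np.dot(probe, candidate)) >= threshold
```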

So the democracy activists in Vancouver were right to be concerned and cover their faces. But the invasive use of surveillance data, AI and facial recognition is not just a problem when used by autocratic governments. These very same tools are increasingly being used in democracies as well.

The New York Times recently revealed new details about a company called Clearview AI, which scrapes the internet for photos of people; so far, it has collected more than three billion images from sites such as Facebook with the stated intention of using facial-recognition technology to match a photo of a face with all the information available online about that person. This clearly has widespread potential for harm — it threatens to become a tool for stalkers — but its primary client base is currently US police forces, with more than 600 already working with Clearview AI.
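In technical terms, a service like this is essentially a searchable index of face embeddings built from scraped photos, queried by nearest-neighbour search. The sketch below illustrates only that general technique; the index structure, the search function and the embed_face placeholder it reuses from the sketch above are assumptions for illustration, not a description of Clearview AI’s actual system.

```python
# A sketch of the general technique: index embeddings of scraped photos,
# then answer a probe photo with the closest matches and their source URLs.
# An illustration only, not Clearview AI's implementation.
import numpy as np

# Each entry pairs a unit-normalized face embedding with the URL the photo
# was scraped from.
index: list[tuple[np.ndarray, str]] = []

def add_to_index(embedding: np.ndarray, source_url: str) -> None:
    index.append((embedding, source_url))

def search(probe: np.ndarray, top_k: int = 5) -> list[tuple[float, str]]:
    # Brute-force nearest-neighbour search by cosine similarity; at the
    # scale of billions of images, real systems use approximate indexes.
    scored = [(float(np.dot(probe, emb)), url) for emb, url in index]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return scored[:top_k]
```

Whatever is linked to the returned URLs — names, profiles, locations — is then only a click away, which is what makes the matching step so consequential.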

The use of facial recognition for policing has also come to Canada. Clearview AI reportedly has Canadian contracts, and the company’s technology was being tested by members of the Toronto Police Service until its chief ordered them to stop after the New York Times story was published. The surveillance technology company Palantir has begun a push into Canada, with David MacNaughton, our former ambassador to the United States, at its head. And police forces in Edmonton and Calgary, as well as the Ontario Provincial Police, have said that they use some kind of facial-recognition technology, though not through Clearview AI.

The problem of large-scale surveillance-data collection and the deployment of AI-based facial-recognition technology will only get more challenging as more of our public spaces become governed by private companies. Amazon’s Ring doorbell has effectively turned millions of front doors in the US into surveillance devices, in many cases providing near-complete coverage of neighbourhoods. Amazon has partnered with both police departments and US Immigration and Customs Enforcement, and it boasts one of the world’s most powerful facial-recognition systems, which could be deployed on Ring’s video data.

Facial-recognition technology has worked its way into our everyday lives, far beyond policing purposes. From Instagram filters to identification services, it has emerged as an important tool for how we engage with social media and government services. But the potential harms have become increasingly clear: The underlying algorithms have been shown to be prone to false positives, to function poorly on darker skin tones and to misidentify women, particularly women of colour, far more often than white men. Efforts to correct these biases in training data sets can also be problematic: Google allegedly offered $5 Starbucks gift cards to homeless African-Americans for 3-D images of their faces, and a Chinese facial-recognition company has signed an agreement with the government of Zimbabwe that would give it access to millions of dark-skinned faces.
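How do researchers know the algorithms are biased? Typically by auditing error rates group by group. The toy audit below uses entirely synthetic similarity scores (no real model or data) to show how a single global matching threshold can yield sharply different false-match rates for different demographic groups.

```python
# A toy bias audit: at one global threshold, measure the false-match rate
# (pairs of *different* people wrongly declared a match) for each group.
# The scores are synthetic, assuming a hypothetical model whose impostor
# scores run higher and noisier for group B; real audits use real models
# and labelled image pairs.
import numpy as np

rng = np.random.default_rng(0)
threshold = 0.6

impostor_scores = {
    "group A": rng.normal(loc=0.30, scale=0.10, size=10_000),
    "group B": rng.normal(loc=0.45, scale=0.12, size=10_000),
}

for group, scores in impostor_scores.items():
    false_match_rate = float(np.mean(scores >= threshold))
    print(f"{group}: false-match rate at threshold {threshold}: {false_match_rate:.3%}")
```

Under these assumed score distributions, group A’s false-match rate is a fraction of a percent while group B’s is roughly ten percent, even though both face the same threshold; that asymmetry is what audits of deployed systems keep finding.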

Meredith Whittaker, co-founder of the AI Now Institute, recently testified to the US House of Representatives Committee on Oversight and Reform: “Facial recognition poses an existential threat to democracy and liberty, and fundamentally shifts the balance of power between those using facial recognition and the populations on whom it’s applied.”

In response to the scale and speed of these developments and the clear potential for harm, a movement has emerged to ban facial recognition. San Francisco, Oakland, Calif., and Somerville, Mass., have all banned their employees and departments from buying or using facial-recognition technology within the public service. All these efforts have focused on police departments, where the technology is already being deployed and the risks are known, but the scope is likely to expand as the technology extends into schools, governments, airports and private businesses — not to mention broader, illicit uses.

Others are mulling temporary measures. The science and technology committee of the British House of Commons has recommended a moratorium until a broader tech-policy agenda can be developed. Similarly, the European Commission is considering a temporary ban on the use of facial recognition in public spaces to give regulators time to catch up. Democratic presidential contender Bernie Sanders has called for a moratorium on police forces using the technology, and even Alphabet chief executive Sundar Pichai has called for a pause, arguing that the technology is “fraught with risk” and urging regulators to pay attention. This is likely the right approach — and it is not radical.

Here in Canada, our data-privacy and digital-governance laws are so lax that governments and the private sector operate in a Wild West, leaving Canadians vulnerable. Ottawa is exploring the notion of trust within its proposed Digital Charter, which creates a perfect opportunity to reflect on how we’ll respond to such emerging technologies. As OpenMedia calls for a ban — its petition has garnered more than 10,000 signatures — the only rational approach to this issue is to press pause.

First, the federal government should adopt Whittaker’s recommendation to impose a moratorium on “governmental and commercial use of facial recognition in sensitive social and political contexts until the risks are fully studied and adequate regulations are in place.” This should include its use in surveillance, policing, education and employment.

Second, the government should simultaneously launch a high-level, multisectoral panel with the mandate of reviewing the likely progression and use of the technology. This process should detail the risks and benefits associated with facial recognition, biometric technology and AI; identify gaps in existing public policy; and propose new regulatory requirements that would need to be met to lift a moratorium.

Third, the government should continue to invest in digital literacy programs that create opportunities for communities to navigate the impact of technology on their lives. Consumers are too often unaware of the ways emerging technologies are being used in their everyday experiences. We need to build digital-literacy content and programming to ensure that regulatory frameworks are built collaboratively.

This is a tale as old as time: Technology moves fast, while governments move slowly and cautiously — and rightly so. But in the case of rapid advances in a technology with vast social consequences, the need to move ahead with caution and deliberation could not be clearer. A moratorium on the use of facial-recognition technology will give all of us the space and time we need to ensure it aligns with our values, rather than threatens them, and to ensure that we don’t blindly take such a complicated technology at face value. Let’s move at the speed of trust — whatever that speed may be.

This article first appeared in The Globe and Mail.

The opinions expressed in this article/multimedia are those of the author(s) and do not necessarily reflect the views of CIGI or its Board of Directors.

About the Authors

Nasma Ahmed is the director of the Digital Justice Lab, which is based in Toronto.

Taylor Owen is a CIGI senior fellow and the host of the Big Tech podcast. He is an expert on the governance of emerging technologies, journalism and media studies, and on the international relations of digital technology.