How Facial Recognition Technology Permeated Everyday Life

Facial recognition technology has found its way into shopping malls, law enforcement and border security. Are consent, privacy and bias standards being upheld?

Published: September 19, 2018

Author: Nikki Gladstone

Facial recognition technology conjures up stereotypical scenes from futuristic movies and shows like Minority Report or Black Mirror. It is tempting to let popular culture colour our vision of what that technology looks like: a lone person sits in front of a wall of screens in a dark room, navigating an expansive surveillance system.

In reality, facial recognition technology has already permeated our day-to-day activities. It unlocks phones, tags friends on Facebook and secures homes. In part, these small integrations fulfill the overarching promise of technology by providing a simplified, expedited and, in some cases, more secure way to proceed with daily life. These convenient, personal and consensual interactions with facial recognition technology might reduce or even dispel common clichés about surveillance. But personal engagement with a technology doesn’t always translate into a full understanding of how that technology collects and uses data. This is exacerbated when the technology’s use isn’t limited to the interactions that we can see and catalogue.

This reality became evident for Canadians in late July, when a customer shopping at a mall in south Calgary came across a digital directory that mistakenly exposed facial recognition software running in the background, reportedly classifying shoppers by age and gender. The case prompted the privacy commissioners of Canada and Alberta to open investigations and reinvigorated public debate on the use of the technology and the efficacy of related privacy protection laws.

This story isn’t unique to Canada. Globally, numerous cases have emerged highlighting the use — and abuse — of facial recognition technology, raising key questions about its scope, the sensitivity of the associated biometric data and the need for improved governance.

Automating Facial Recognition

Face detection and facial recognition techniques are not as novel as the technologies now facilitating them. The measurement, mapping and analysis of our biological information began in the late nineteenth century, with the advent of cameras and photographs.

It’s the innovations in computer science and machine learning that have automated these processes, facilitating the real-time location, authentication, verification and categorization of an individual based on their facial features. Now, facial recognition technology can obtain an image and instantly compare the distinct geometric characteristics of a person’s facial features — data points on the distance between their eyes, or the shape of their nose — to a whole database of photographs, a process referred to as a one-to-many search. In some tests, software is outperforming humans at recognizing faces.
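
The core matching step can be sketched in a few lines of code. The example below is purely illustrative: it assumes a separate face-analysis model has already reduced each photograph to a numeric feature vector, and the function name, distance measure and threshold are hypothetical stand-ins, not any vendor's actual implementation.

```python
# Illustrative sketch of a one-to-many search (hypothetical, simplified).
# Assumes a face-analysis model has already converted each photo into a
# numeric feature vector; real systems are far more complex.
import numpy as np

def one_to_many_search(probe, database, threshold=0.6):
    """Compare one face vector against every enrolled vector and return
    the identity of the closest match, if it is close enough to trust."""
    best_id, best_dist = None, float("inf")
    for identity, enrolled in database.items():
        dist = np.linalg.norm(probe - enrolled)  # geometric distance
        if dist < best_dist:
            best_id, best_dist = identity, dist
    # No identity is returned unless the best match clears the threshold
    return best_id if best_dist < threshold else None
```

Where that threshold is set determines the trade-off between false matches and missed matches, a trade-off that recurs in the accuracy debates discussed later in this piece.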

Technological advancements in the software have facilitated a growing global market, valued at US$3.85 billion in 2017, with projections indicating it will triple by 2023. With this economic growth comes a simultaneous growth in applications.

A Myriad of Uses

You don’t have to dig far in the daily news to find headlines outlining the many ways in which facial recognition technology is being used by governments and the private sector alike. Perhaps one of the most notorious examples is China’s rapidly evolving surveillance state. Over the last few years, the country has embraced innovations in facial recognition technology and artificial intelligence to build a national surveillance system that aims to track and identify its citizens in just a few seconds. According to The New York Times, a network of over 200 million surveillance cameras is used to catch criminals and publicly shame citizens for small infractions, like jaywalking or failing to pay debts. In one instance reported by the BBC, the system identified one man in a crowd of 60,000 concert attendees.

This example is an extreme one in comparison to the more elementary uses of facial recognition technology — unlocking a phone, for example — but it is, in many ways, a cautionary tale about the consequences of technologies that require more thoughtful evaluation and improved governance.

For governments, facial recognition technology has many uses related to security and identity management. It’s used by law enforcement around the world to identify, monitor and catch criminals. Police services can acquire private sector technology to aid in facial recognition, and in some cases, the technology can be connected with existing video surveillance systems — such as closed-circuit television cameras (CCTV) — that monitor public places. This was reportedly first explored in 2015 by the United Kingdom, where a “smart CCTV” system attempted to use facial recognition technology to conduct “person tracking.” This type of integration has facilitated the technology’s use for security purposes at large events, like soccer games or concerts.

Government-issued identification processes, such as driver's licences and passports, also employ facial recognition technology as a tool to prevent identity fraud. Last year, some of Canada’s airports began implementing one-to-one facial recognition — which, in contrast to the one-to-many matching described above, compares an individual’s face to the digital image stored in their ID — to authenticate the identity of travellers. US Customs and Border Protection announced in late August that new facial recognition technology, implemented just a few days earlier at Washington Dulles International Airport, was instrumental in identifying a man using false documents.
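
For contrast, a one-to-one check is a single comparison rather than a search. The sketch below follows the same hypothetical conventions as the earlier example: faces are assumed to already be numeric feature vectors, and the names and threshold are illustrative only.

```python
# Illustrative sketch of one-to-one verification (hypothetical, simplified).
# The traveller's live face vector is compared only against the single
# vector derived from the photo stored in their government-issued ID.
import numpy as np

def verify_identity(live_face, id_photo_face, threshold=0.6):
    """Accept the traveller only if the two face vectors are close enough."""
    return float(np.linalg.norm(live_face - id_photo_face)) < threshold
```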

Facial recognition technology is used at an intersection to identify jaywalkers in Shenzhen, China. Offenders' faces are often displayed on the screen. (Shutterstock)

For some governments, facial recognition technology is just one of a suite of biometric identification tools, such as fingerprints and iris scans, used to create national digital identity programs. The programs typically seek to create a centralized database with a unique digital identity for every citizen. The citizen’s digital identity then becomes their key to accessing basic services from the government. India’s Aadhaar program is the world’s largest of these, and its implementation has drawn sustained criticism from human rights groups, who remain concerned about the collection, use and security of a centralized digital identity database.

For the private sector, facial recognition technology can serve similar security, identity and fraud detection purposes. Uber uses the technology as a safeguard against fraud and to confirm the identity of its drivers. A number of smartphone companies allow users to unlock their phone using facial feature recognition. Mexia One, a Canadian company, provided facial recognition technology to the international Mobile World Congress conference as a way to check in attendees. Some companies are even introducing systems where customers can pay for items by scanning their face. There is also a growing trend in its use for marketing and ad purposes, as was the case with the Calgary mall implementation, where customers can be categorized and targeted based on identifiers like their age or gender.

This is certainly not an exhaustive list of uses. As the technology advances and the industry grows, new ways of employing these tools will emerge, as will potential applications by individuals. In 2016, a Russian photographer used a facial recognition app to identify strangers he photographed on the streets of St. Petersburg, searching a database built from open source data gathered on social media sites. The app is no longer available, but the incident sparked widespread coverage and concern about the technology’s potential role in the public’s loss of anonymity.

Data, Data and More Data

Making facial recognition systems work — and work effectively — requires troves of information, not only as a data set to compare facial images against, but also in the training of algorithms. In order to obtain a sufficient amount of data, facial recognition software can be connected to existing databases, traffic cameras or video surveillance systems. There have even been instances where open source social media data collected by private sector companies has been used to feed into government databases. In some cases, governments share information collected between agencies and departments — often with insufficient transparency.

The need for clearly defining how the data can be used and for what purposes — whether by the government, the private sector or the public — is urgent. Data points about facial features contain personally identifying biometric information, and the average person isn’t likely to alter their facial appearance or hide their face entirely, whether online or offline. While facial recognition technology is physically non-invasive, the type of data collected requires special consideration.

A visitor from Denmark stands in front of a face recognition camera after arriving at customs at Orlando International Airport on June 21, 2018. Florida's busiest airport is becoming the first in the nation to require a face scan of passengers on all arriving and departing international flights. (AP Photo/John Raoux)

Ripe for Abuse: Invisible Technology, Consent and Privacy

Facial recognition technology’s undetectable nature makes it easy to abuse. To acquire a fingerprint or conduct an iris scan, there’s a degree of involvement required from the person whose information is being collected. Facial features, on the other hand, can be collected from a distance without direct participation, meaning the whole process can occur without the individual’s knowledge — and, importantly, without their consent.

Consent is a critical step in protecting personal information. In Canada, it’s a key tenet of the Personal Information Protection and Electronic Documents Act (PIPEDA), which governs private companies. Companies require “meaningful consent” in order to collect, use and disclose personal information — which means they must be clear with individuals about how their personal information will be used.

However, the concept of meaningful consent has been blurred by the digital age, with new technologies challenging the way in which consent can be communicated and obtained. For privacy advocates in Canada, the Calgary mall case is a textbook example of how old laws are not up to par when it comes to protecting the public from facial recognition technology.

Cadillac Fairview, the company that owned the mall and ran the facial recognition software, did not obtain consent from shoppers or notify them of the technology’s presence and use. The company claimed it didn’t require consent because it wasn’t “capturing or retaining” photos or videos, and said instead that the technology in the directory was just “counting and predicting” the age and gender of customers.

Yet, many experts, including Kris Klein, a Canadian privacy lawyer, called into question how any facial recognition software could be classifying people without capturing or collecting any information about them, even if it wasn’t retained. “It’s hard to think of a scenario where the facial recognition [technology] could be used without the collection of personal information,” he noted.

Concerns surrounding privacy relate not only to whether or not consent is given but also to how the personal information, once collected, is analyzed, retained, stored and shared. This applies to both companies and governments. Do potential security concerns sufficiently justify the use of privacy-invading technology? Are there other, less invasive options? If the data is not properly secured, who can misuse or obtain access to it?

When the use of the technology is not adequately governed, those practices can be exported, too. One Chinese company, CloudWalk Technology, plans to share its mass facial recognition software with the Zimbabwean government. According to reporting in Foreign Policy, if the software is implemented in Zimbabwe it will “send millions of black faces to the Chinese company to help train the technology toward darker skin tones.” The high risk of intentional misuse by countries where human rights are already under threat cannot be overstated, and it is further exacerbated when the data the technology collects can be transferred and used without restriction across borders.

In many places around the world, questions of consent or the right to privacy aren’t even a consideration in the implementation of facial recognition technology. Instead, the technology can be used as a tactic to suppress dissidents or undermine other fundamental human rights, such as freedom of expression or assembly, by facilitating the identification of those who join a protest or attend a meeting in public spaces. This has been seen closer to home, too; in the United States, there have been cases of facial recognition technology being used to identify protestors.

Questions of Accuracy and Bias

Even if legitimate uses for facial recognition technology are clearly defined, experts continue to express concern over the technology’s accuracy. Existing systems have been criticized for misidentifying or failing to identify individuals from certain regions of the world. Studies have also indicated that algorithms are most accurate at identifying the ethnicity dominant in the place where the code was created, reflecting ongoing criticism of the ways in which bias is unintentionally coded into algorithms.

In a public campaign against Amazon’s face surveillance technology (called Rekognition), the American Civil Liberties Union conducted tests of the software on 535 members of US Congress — 28 were incorrectly identified as people who had been arrested for a crime. In the tests, nearly 40 percent of the false matches were people of colour. In a near-comical demonstration of the technology’s potential for error, 2,000 people at a soccer match in the United Kingdom were falsely identified as potential criminals.

Another study found that darker-skinned women are misclassified far more often than lighter-skinned men (with error rates of 34.7 percent and 0.8 percent, respectively). This problem is particularly grave when the technology is used for predictive measures, because these errors can further entrench the discrimination against marginalized groups already prevalent in the criminal justice system. False accusations, even if proven as such, can have devastating consequences.

In some cases, technology has been developed to allow systems to sift through a database of videos and photos and identify people on the basis of categories of appearance, rather than on their individual characteristics, potentially facilitating racial profiling. The Intercept reported this month that IBM had refined its systems to “allow other police departments to search camera footage for images of people by hair colour, facial hair, and skin tone.”

Where it is clear that facial recognition technology is being used by government, there isn’t always clarity or transparency around what types of standards are required of the systems before they’re implemented. The National Institute of Standards and Technology conducts rigorous testing of facial recognition software vendors, but submitting to the tests is voluntary, at the discretion of the company developing the technology.

Inaccuracy can be intentional, too. For years, technologists have been working to prove facial recognition software is susceptible to being tricked. Attempts to spoof Apple’s iPhone facial identification software by duplicating a person’s face through 3D printing are plentiful online, and there are even efforts underway in Canada to create an application to defeat facial recognition systems. University of Toronto researchers developed an algorithm that can dynamically disrupt accurate identification by using light transformation on the images.

It seems that each possible application of facial recognition technology comes with the risk of misuse and abuse. The complexity of mapping out and understanding these challenges underscores the difficulty in creating adequate regulation.

The Need for Improved Governance

There is certainly a need for governance strategies to mitigate the technology’s potentially negative consequences. Policies and legislation must be kept up to date and adequate for protecting individuals, but — as with any fast-paced, innovative and risky technology — governments are outpaced by its development.

Within the last few months, the United Nations special rapporteur on privacy criticized the accuracy and proportionality (whether the justification for use outweighed the harm caused) of facial recognition technology use in the United Kingdom. Many have also written recently about how the technology’s shortcomings indicate that it’s still not ready for use by law enforcement. Others have found that there are too few laws governing the use of the technology.

The European Union’s introduction of the General Data Protection Regulation (GDPR) may change this. The GDPR, which came into effect in May, outlines private sector obligations when it comes to personal data and privacy. It’s been championed as the global standard on data protection and privacy safeguards, and has already impacted the data practices of companies around the world.

Even private sector companies have voiced concerns about the potential perils of the technology. Microsoft President Brad Smith released a blog post recently, calling for “thoughtful government regulation and for the development of norms around acceptable uses” of facial recognition technology. Advocates are hopeful that other companies will follow suit and recognize the role that the private sector plays in both minimizing misuse and addressing bias.

Outside of the Amazon headquarters in Seattle, volunteers carry boxes of petitions urging the tech giant to stop selling its face surveillance system, Rekognition, to the government. (AP Photo/Elaine Thompson)

Facial Recognition Technology in Canada

What should regulation look like in Canada? It depends on who you ask. Most agree it is unrealistic to halt the development of facial recognition technology altogether. There’s also strong consensus that Canadian laws at the federal and provincial levels must change to reflect modern society. In a blog post written this July, Teresa Scassa, a CIGI fellow and the Canada Research Chair in Information Law at the University of Ottawa, called for a “comprehensive rewrite” of PIPEDA.

There appears to be an appetite for change in the government as well. In the last two years, the Office of the Privacy Commissioner of Canada, whose job it is to advise on and oversee the handling of personal information, has launched a consultation and issued a report on consent under PIPEDA. The office has also called for reform of the Privacy Act, which governs how federal government institutions collect, use and disclose personal information. The House of Commons Standing Committee on Access to Information, Privacy and Ethics also issued a report this year called Towards Privacy by Design: Review of the Personal Information Protection and Electronic Documents Act, which explicitly acknowledged a need for the law to change. Many are waiting to see whether the investigations launched by the Canadian and Alberta privacy commissioners will create even greater impetus for revising privacy laws.

Ann Cavoukian, former privacy commissioner of Ontario and creator of the Privacy by Design framework, argues that Canadian regulation around consent and collection should reflect that of the GDPR, which requires companies to build data protection into their systems by design and by default. She adds, “Privacy is all about control. Personal control related to the uses of your personal information.”

Part of maintaining that control is through consent. In the GDPR, consent is defined as a “clear affirmative act establishing a freely given, specific, informed and unambiguous indication” of how one’s data may be used and processed. If the processing has multiple purposes, consent must be sought and received for every one of them.

According to Brenda McPhail, director of the Privacy, Technology and Surveillance Project at the Canadian Civil Liberties Association, privacy by design is a good start, but it’s not a panacea. Sometimes, the way a technology looks a few years down the road can be drastically different from its original design, she explains, offering Facebook as an obvious example. McPhail says there have to be other safeguards in place. At the end of the day, the technology is a surveillance tool, meaning that questions surrounding its necessity, proportionality and risk must be properly weighed and considered. The principles leading to its use should then also be clearly communicated to the public.

As investment in artificial intelligence and machine learning grows, Canadian companies can lead change too. The Toronto Declaration on protecting the right to equality and non-discrimination in machine learning systems, launched this May, provides a starting point for building rights-respecting technology.

The public can also take on a role in instigating change. According to McPhail, “There’s the law and then there’s public expectation — the social licence to engage in this kind of process.” Public awareness and consultations on the subject are helpful in driving change, she says, but there has to be an environment for these conversations to occur. Given the negative reaction to the Calgary mall case, it’s safe to say that society is not ready to allow its public spaces to stage scenes straight out of sci-fi movies. Let’s hope that our laws share this conviction.

The opinions expressed in this article/multimedia are those of the author(s) and do not necessarily reflect the views of CIGI or its Board of Directors.

About the Author

Nikki Gladstone is a RightsCon Program and Community Manager and a Master of Global Affairs (MGA) graduate from the Munk School of Global Affairs at the University of Toronto, where she focused on the intersection of technology, innovation, and human rights.