Governing the Datafication of Black Lives

June 20, 2022

This essay is part of The Four Domains of Global Platform Governance, an essay series that examines platform governance from four distinct policy angles: content, data, competition and infrastructure.

As the use of algorithmic decision-making technologies increases in the public sector, so too does the volume of questions about which populations this sector serves. One of the areas in which these technologies are most widely used is the disbursement of unemployment benefits. COVID-19-related shutdowns in 2020 led to the largest quarterly drop in GDP since records began (Bauer et al. 2020), which sharply increased the proportion of the population applying for unemployment benefits.

While this downturn was felt across the United States, a report by the Economic Policy Institute, a think tank based in Washington, DC, outlined the impact of these shutdowns on racialized communities. The report found that in the first quarter of 2021, the unemployment rate among Black people in California was 10.6 percent, compared to 7.2 percent for white people; 6.9 percent within the Asian-American and Pacific Islander population; and 10.3 percent among the Hispanic community (Moore 2021). This means that Black and Hispanic people living in California were the most likely recipients of unemployment benefits, which are disbursed by the California Employment Development Department (EDD) through a system that uses algorithmic decision making to decide who receives benefits.

As of this writing, the EDD is contracting with ID.me, a company that uses facial recognition technology (FRT) to verify unemployment claims. According to the company’s website, ID.me verifies the identity of claimants by asking them to upload a picture of a form of government-issued identification (ID) and then upload a selfie. The system works by matching the selfie to the ID and then cross-referencing this data with other forms of identification held by the state. This sounds like a great idea, but it assumes two things: first, that pictures taken by the Department of Motor Vehicles, passport agencies and other government offices are going to match the pictures we take of ourselves as we age, put on weight and, in some cases, have facial surgery; and, second, that facial recognition, the underlying technology that will match the user-generated selfies with the pictures on ID documents, has the capacity to accurately identify all human faces. These assumptions are false.
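To make the matching step concrete, the sketch below shows the general shape of such a verification flow. ID.me’s actual pipeline is proprietary, so every name here (extract_embedding, MATCH_THRESHOLD, verify_claim) and the similarity rule are illustrative assumptions, not the vendor’s implementation; a trained face-recognition model would replace the stand-in embedding function.

```python
# A minimal sketch, assuming a generic embedding-and-threshold design.
# None of these names come from ID.me; they exist only to illustrate the flow.
import numpy as np

MATCH_THRESHOLD = 0.6  # hypothetical cut-off; production systems tune this value


def extract_embedding(image: np.ndarray) -> np.ndarray:
    """Stand-in for a trained face-recognition model that maps a photo to a
    fixed-length feature vector; here we simply flatten and normalize pixels
    so the example runs without a model."""
    vec = image.astype(float).ravel()
    return vec / (np.linalg.norm(vec) + 1e-9)


def faces_match(photo_a: np.ndarray, photo_b: np.ndarray) -> bool:
    """Declare a match when the cosine similarity of the two embeddings
    clears the threshold."""
    similarity = float(np.dot(extract_embedding(photo_a), extract_embedding(photo_b)))
    return similarity >= MATCH_THRESHOLD


def verify_claim(id_photo: np.ndarray, selfie: np.ndarray,
                 state_record_photo: np.ndarray) -> bool:
    """Approve only if the selfie matches both the uploaded ID photo and the
    photo already held by the state (the cross-referencing step)."""
    return faces_match(id_photo, selfie) and faces_match(state_record_photo, selfie)


# Toy usage: random arrays stand in for an ID photo and a slightly different selfie.
rng = np.random.default_rng(0)
id_photo = rng.random((64, 64))
selfie = id_photo + rng.normal(0.0, 0.05, (64, 64))
print(verify_claim(id_photo, selfie, state_record_photo=id_photo))
```

The weak link in any such design is the embedding model itself: if it represents some faces less reliably than others, the same threshold produces unequal match rates.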

In 2018, computer scientists Joy Buolamwini and Timnit Gebru (2018) published a paper exploring how accurately commercially available FRTs classified faces by gender across different skin tones. The systems were accurate 99 percent of the time when identifying lighter-skinned men, but the darker the skin of the person, the less accurate the facial recognition systems became, something that was further compounded by gender (ibid.). In fact, Buolamwini and Gebru found that dark-skinned women had a misidentification rate of up to 35 percent (Lohr 2018).

The problem stems from how FRTs are designed. They use a process called machine learning, during which FRTs are fed a large number of digital images of human faces (say, one million). The system then measures facial architecture (for example, the distance between the eyes, the distance between the cheekbones and chin, and the positioning of the ears) and creates a statistical model of “the” human face (Corcoran and Iancu 2011, 8). Buolamwini and Gebru (2018) theorized that the vast majority of the images used to train the models they were analyzing were images of white men, and that this is why the models had a 99 percent accuracy rate within this population segment. The authors had to infer this because algorithmic decision-making systems are given intellectual property protections under US commercial law (Wexler 2018).
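A toy example of what “measuring facial architecture” can mean is sketched below: each face is reduced to a vector of distances between named landmarks, and the training images are averaged into a single statistical template. The landmark list and the averaging step are simplifying assumptions made for illustration (modern systems learn their features with deep neural networks), but they make the training-data problem visible: whichever group dominates the training set defines the template.

```python
# Illustrative only: a hand-picked landmark set and a simple average stand in
# for the learned statistical model an FRT system would actually build.
from itertools import combinations
import numpy as np

LANDMARKS = ["left_eye", "right_eye", "left_cheekbone",
             "right_cheekbone", "chin", "left_ear", "right_ear"]


def distance_features(landmarks):
    """Turn one face's landmark coordinates into a vector of pairwise
    distances (eye to eye, cheekbone to chin, and so on)."""
    points = [np.asarray(landmarks[name], dtype=float) for name in LANDMARKS]
    return np.array([np.linalg.norm(a - b) for a, b in combinations(points, 2)])


def build_template(training_faces):
    """Average the feature vectors of all training images into a statistical
    model of 'the' human face. If one demographic group supplies most of the
    images, the resulting template fits that group best."""
    return np.mean([distance_features(face) for face in training_faces], axis=0)


# Toy usage with made-up pixel coordinates for a single face.
face = {"left_eye": (30, 40), "right_eye": (70, 40), "left_cheekbone": (25, 70),
        "right_cheekbone": (75, 70), "chin": (50, 110), "left_ear": (15, 55),
        "right_ear": (85, 55)}
print(build_template([face, face]))  # identical inputs return that face's own vector
```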


The inability of FRT systems to accurately recognize dark-skinned faces is a phenomenon called algorithmic bias, which is best described as a situation in which technological systems exhibit the same biases against members of protected groups (in this case, the site of discrimination lies at the intersection of race and gender) as those they face in the analogue world. Algorithmic bias has a disparate impact on Black and Hispanic people, who made up 20.9 percent of California’s unemployment claims in the first quarter of 2021. They are also the groups most likely to be wrongly denied unemployment claims because of technical misidentification.
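A back-of-the-envelope calculation shows why unequal error rates become disparate impact. The error rates below are the ones reported by Buolamwini and Gebru (2018) for gender classification; carrying them over to benefit verification, and the 10,000-claims-per-group figure, are illustrative assumptions, not measurements of the EDD system.

```python
# Hypothetical scenario: equal numbers of claims per group, but the error
# rates cited above. Same rules, very different numbers of wrongful flags.
claims_per_group = 10_000  # assumed figure, for illustration only

misidentification_rate = {
    "lighter-skinned men": 0.01,    # roughly 99 percent accuracy
    "darker-skinned women": 0.35,   # roughly 35 percent misidentification
}

for group, error_rate in misidentification_rate.items():
    wrongly_flagged = round(claims_per_group * error_rate)
    print(f"{group}: {wrongly_flagged:,} of {claims_per_group:,} claims wrongly flagged")
```

On these assumptions, darker-skinned women would see 35 times as many wrongful flags as lighter-skinned men, even though every claim is processed under the same rules.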

As of this writing, there have been no claims of wrongful denial in California, but the question becomes how state and local governments can create policies to bring the outputs of these systems in line with existing civil rights protections.

This was the question the author tackled as part of the Massachusetts Special Commission on Facial Recognition (Special Commission on Government Use of Facial Recognition Technology in the Commonwealth 2022). Although the use case is different, the issues were the same, and up until February 2022, ID.me software was being used by the Massachusetts Department of Labor (Wood 2022).

The author joined the commission as a designee of the National Association for the Advancement of Colored People (NAACP), a US-based civil rights organization started by W. E. B. Du Bois, among others, in 1909. For 12 months, the author met with a commission made up of 19 people representing academia, law enforcement, advocacy and the criminal legal system. Discussions focused on weighing FRT’s use against its impact on people living in the commonwealth. Despite objections from law enforcement, the commission overwhelmingly voted to limit police use of the technology and published its recommendations. Although unsuccessful in securing a ban, the NAACP was satisfied with the following recommendations:

  • The pending legislation specified that after a defendant is charged with a crime, the attorney general or district attorney must notify the defendant, pursuant to Rule 14 of the Massachusetts Rules of Criminal Procedure, that they were identified using FRT.
  • The commission prohibited local police forces from using or developing facial recognition systems, citing problems with inaccuracy.
  • There would be a clause to exclude any information obtained in violation of facial recognition regulations from any criminal, civil, administrative or other proceedings.
  • The legislature should prohibit law enforcement from using emotion recognition, surveillance and tracking, which are nascent, overreaching technologies with low reliability.
  • The legislature should create a state-level facial recognition operations group within the state police, charged with receiving and evaluating law enforcement requests for facial recognition searches, performing these searches, reporting results and recording relevant data.

However, there were carve-outs allowing the state police and the Federal Bureau of Investigation to petition a judge for permission to use FRT when identifying the deceased. It was not clear that the state-level facial recognition operations group had the expertise needed to identify cases of algorithmic bias, and so the commission would have liked to see a more senior governing body oversee use. Despite this conclusion, the NAACP voted in favour of adopting the report because of the notification requirements, the prohibitions on local police use and the exclusion of FRT-derived evidence from criminal, civil or administrative proceedings, and, in doing so, created rights-affirming norms for public sector use of FRT.

Works Cited

Bauer, Lauren, Kristen Broady, Wendy Edelberg and Jimmy O’Donnell. 2020. Ten Facts about COVID-19 and the U.S. Economy. Washington, DC: Brookings. September. www.brookings.edu/wp-content/uploads/2020/09/FutureShutdowns_Facts_LO_Final.pdf.

Buolamwini, Joy and Timnit Gebru. 2018. “Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification.” Proceedings of Machine Learning Research 81: 1–15. http://proceedings.mlr.press/v81/buolamwini18a/buolamwini18a.pdf.

Corcoran, Peter M. and Claudia Iancu. 2011. “Automatic Face Recognition System for Hidden Markov Model Techniques.” In New Approaches to Characterization and Recognition of Faces, edited by Peter M. Corcoran, 3–28. Rijeka, Croatia: InTech.

Lohr, Steve. 2018. “Facial Recognition Is Accurate, if You’re a White Guy.” The New York Times, February 9. www.nytimes.com/2018/02/09/technology/facial-recognition-race-artificial-intelligence.html.

Moore, Kyle K. 2021. “Racial disparities in unemployment rates persist, despite claims of a ‘labor shortage.’” Economic Policy Institute, November. www.epi.org/indicators/state-unemployment-race-ethnicity-2021q3/.

Special Commission on Government Use of Facial Recognition Technology in the Commonwealth. 2022. Special Commission to Evaluate Government Use of Facial Recognition Technology in the Commonwealth: Final Report. March 14. https://frcommissionma.files.wordpress.com/2022/03/fr-com-final-report-appendices-3.14.22.pdf.

Wexler, Rebecca. 2018. “Life, Liberty, and Trade Secrets: Intellectual Property in the Criminal Justice System.” Stanford Law Review 70 (5): 1343–429. www.stanfordlawreview.org/print/article/life-liberty-and-trade-secrets/.

Wood, Colin. 2022. “Massachusetts to stop using facial recognition in identity verification.” StateScoop, February 24. https://statescoop.com/massachusetts-idme-identity-facial-recognition/.

The opinions expressed in this article/multimedia are those of the author(s) and do not necessarily reflect the views of CIGI or its Board of Directors.

About the Author

Mutale Nkonde is the founding CEO of AI For the People, a non-profit communications agency.

The Four Domains of Global Platform Governance

In the span of 15 years, the online public sphere has been largely privatized and is now dominated by a small number of platform companies. This has allowed the interests of publicly traded companies to determine the quality of our civic discourse, the character of our digital economy and, ultimately, the integrity of our democracies. This essay series brings together a global group of scholars working in four distinct domains of the platform governance policy discourse: content, data, competition and infrastructure.