Mutale Nkonde On How Biased Tech Design and Racial Disparity Intersect

Season 3 Episode 9

Tech entrepreneurs are predominantly white males. Because of this, much of our technology has inherent biases that align with those of its creators and disadvantage under-represented groups.

S3E9 / March 18, 2021


Episode Description

In this episode of Big Tech, Taylor Owen speaks with Mutale Nkonde, founder of AI for the People (AFP). She shares her experiences of discrimination and bias working in journalism and at tech companies in Silicon Valley. Moving into government, academia and activism, Nkonde has been able to bring light to the ways in which biases baked into technology’s design disproportionately affect racialized communities. For instance, during the 2020 US presidential campaign, her communications team was able to detect and counter groups who were weaponizing misinformation to target Black voters in social media groups with the specific message to not vote. In her role with AFP, she works to produce content that empowers people to combat racial bias in tech. One example is the “ban the scan” advocacy campaign with Amnesty International, which seeks to ban the use of facial recognition technology by government agencies.

In their conversation, Mutale and Taylor discuss the many ways in which technology reflects and amplifies bias. Many of the issues begin when software tools are designed by development teams that lack diversity or actively practise forms of institutional racism, excluding or discouraging decision-making participation by minority ethnic group members. Another problem is the data sets included in training the systems; as Nkonde explains, “Here in the United States, if you’re a white person, 70 percent of white people don’t actually know a Black person. So, if I were to ask one of those people to bring me a hundred pictures from their social media, it’s going to be a bunch of white people.” When algorithms that are built with this biased data make it into products — for use in, say, law enforcement, health care and financial services — they begin to have serious impacts on people’s lives, most severely when law enforcement misidentifies a suspect. Among the cases coming to light, “in New Jersey, Nijeer Parks was not only misidentified by a facial recognition system and arrested, but could prove that he was 30 miles away at the time,” Nkonde recounts. “But, because of poverty, [Parks] ended up spending 10 days in jail, because he couldn’t make bail. And that story really shows how facial recognition kind of reinforces other elements of racialized violence by kind of doubling up these systems.” Which is why Nkonde is working to ban facial recognition technology from use, as well as fighting for other legislation in the United States that will go beyond protecting individual rights to improving core systems for the good of all.


This transcript was completed with the aid of computer voice recognition software. If you notice an error in this transcript, please let us know by contacting us here.

Mutale Nkonde: Until relatively recently, people were talking about rights. They were talking about the right to privacy, they were talking about the right to free expression, both of which are important, but they're very individual. Whereas when we talk about race, we're actually talking about justice.

Taylor Owen: Hi. I'm Taylor Owen, and this is Big Tech. In my line of work, I get asked to give a lot of talks and lectures and media interviews, and there are three questions I get asked all the time. One is, do I use Facebook? Not really. The second is, what's the policy solution to all these problems you're describing? Well, unfortunately, there's no silver bullet. Then more recently, the one I get asked again and again, is what do you think of the Netflix documentary, The Social Dilemma?


Tristan Harris: A lot of people think Google is a search box, and Facebook is just a place to see what my friends are doing. What they don't realize is there's entire teams of engineers whose job is to use your psychology against you.

SOURCE: Netflix YouTube Channel

“The Social Dilemma | Official Trailer | Netflix”

August 27, 2020

Taylor Owen: The Social Dilemma is to Big Tech what An Inconvenient Truth was to climate change. I can't think of another book or film about the harms of modern technology that has had a wider reach than this movie, and that's undoubtedly a good thing. But The Social Dilemma also has its detractors. Some say it buys into the narrative that Big Tech is pushing, that these platforms and algorithms are all powerful. Others say the documentary proposes that we fix technology with, well, more technology, which is problematic to say the least. But the biggest issue with The Social Dilemma isn't about what's in the film, it's about what or really who was left out. Most of the interviews in the documentary are with former tech employees, the very people at Google and Facebook who caused the problems they're now concerned about. Because the valley is largely white and male, The Social Dilemma is also largely white and male. But in focusing on that subset of people, the film ignores more important voices. Voices that have been calling attention to the harms of technology for years, many of whom are women of colour, who saw firsthand the impact these technologies were having in their communities. People like Safiya Noble, Ruha Benjamin and Mutale Nkonde. Mutale is a former journalist who went on to do policy work in D.C., where she developed legislation on algorithmic biases, deepfakes, and facial recognition. The through line with all of these pieces of legislation was that the underlying technology disproportionately affected racialized communities. Now, Mutale is the founder and CEO of AI for the People, a nonprofit that imagines a world in which technology is used to mend racial inequities, not amplify them. Mutale has a fascinating perspective on the intersection of race and technology, and why if we want to solve some of these big tech problems, we might want to start by looking at systemic racism. Here's Mutale Nkonde. Welcome to the podcast. 
Thank you so much for doing this.

Mutale Nkonde: Taylor, I'm so excited to be welcomed. I feel like I'm in rarefied company.

Taylor Owen: I feel the same way. It's a good place to start. I mean, I want to start with a discussion about journalism, because you began your career in journalism, worked for a number of outlets, the BBC, CNN, ABC. Why did you leave journalism?

Mutale Nkonde: Why did I leave? Oh, my God. Me and my journalism career of letters, as I call it, all three-letter companies. I went in very, very idealistically. I am old, so, in 2000, I was a bright thing in the UK and got on to what was called a journalism scheme in the United Kingdom, and it was really in the wake of the murder of a black boy called Stephen Lawrence in East London. Before his murder, a gang of white youths used the N word, and then basically kicked him to death. The UK, like Canada, isn't a gun-carrying country. But we do have quite a mean knife game and boot game, as it were. In response to this, the BBC had wanted to tell that story but realized that they didn't have black journalists, and so went out into the country and asked ethnic minorities, which is what they're called in the UK, to apply, and I was one of the people that got through that process. So going in, I had assumed that I would be telling stories about race, telling stories about injustice and working with white reporters who wanted to do the same, and unfortunately for me, that wasn't really the case. It was still a very white institution. From an editorial standpoint, it was extremely difficult to get stories that I was interested in commissioned. I worked for almost a year with a senior producer trying to see whether the BBC News and Current Affairs department would commission a series on the trials for slavery. I was really interested in the role that Lloyds of London had played, underwriting those ships to the New World. That was the very first time that I saw racial prejudice in my career. I was labeled a troublemaker, and I'm still labeled a troublemaker. It was really that labeling throughout my career. By the time 2008 rolls around, I'm tired. I'm absolutely exhausted. I haven't been able to gain any particular traction in my career. I left as an associate producer, which is really low down in the food chain. 
Barack Hussein Obama was running for president, did not think a black person could win, certainly not one with the name Hussein, but it had to be better than where I was.

Taylor Owen: I want to talk about that transition into the policy world. But first, one more reflection on the state of journalism at the time, and in particular how it covered technology over that period. I'm wondering if the lack of representation in journalism is related to the fact that they didn't cover technology critically. That whole period was embedded with this boosterism and gadget-review type journalism. Was that because they just weren't being affected by technology in any negative way, so they didn't cover it?

Mutale Nkonde: Well, the ironic thing is, I was covering the science beat. The one piece I looked at was — reading these press releases that were coming in from companies, turning them over to senior editors, and we would just turn them around. There was one story that came in around 2002 that talked about this really weird thing, facial recognition at a football stadium. The way that we looked at that story back then was just like, "Oh, my God, this is going to be so great. You're going to be able to see the faces in the crowd," and I didn't even think about that critically back then. Because with technology, we just never seemed to think that it impacted people, and then I was having such problems identifying as a black person, identifying as an injured party, identifying whiteness, that I didn't have the institutional power to even raise that.

Taylor Owen: Because they didn't even see any of the negative repercussions of it.

Mutale Nkonde: Nope. Those newsrooms back then, and I would argue today, because the Society of Editors in the United Kingdom yesterday, in response to the Meghan Markle interview, said that there is no racism in journalism in the United Kingdom. And that's March 2021, and I'm speaking to you about pretty much 2000.

Taylor Owen: Wow. That's remarkable. A 20-year period and very little evolution there.

Mutale Nkonde: Yep.

Taylor Owen: Barack Hussein Obama is running for president, and you make the leap into the policy world in some capacity. I mean, can you explain how that happened and where you ended up and how you made that transition?

Mutale Nkonde: Yes. I had moved to Brooklyn, New York, and had just fallen into community block associations and other kinds of neighbourhood associations because I wanted to meet people, and I was really interested in African American culture, and all I really knew was The Cosby Show. I was living in a brownstone community, which very much looked like The Cosby Show. I figured that this is what I would do. If I were Clair Huxtable or one of their children, this is probably what I would do. Within this group, I had really been — They were like, "We're going to go to Pennsylvania." I live in New York State, and it's very blue. But Pennsylvania is what we call a swing state, so you never know which way it's going to go. We're going to petition, and so I go to Pennsylvania and I see some of the work that I would have been doing if I was a journalist. Knocking on doors, speaking to people, speaking to the candidate. But because I didn't have to be objective or I didn't have to be fair, I could just say, "vote." I had never been able to do that. This is eight years into a career, and I can finally say that I have an opinion about politics. Eventually, the volunteers were — It was racist and wrong, but they were like, "You're so articulate. We're looking for people on the communication side. Do you know anything about communications?" I was like, "Actually, I'm a journalist. I know a little bit about how to be [inaudible]." They asked me to write something, I did, started volunteering. This is before virtual, so leaving New York to go to Pennsylvania, and eventually a job came up in their Twitter team. They were like, "There's this thing called Twitter, and we think we're going to use it. There's this other thing called Facebook," and my boss was always like, "I was at Harvard and there was this thing called Facebook where we could look at each other with our faces." I was like, "This is crazy. Why would we want to do this? This is really weird." He was like, "No, no, no, no. 
you really want to do this. But there's this other thing called Twitter, and we don't really know what it is. But you can write, can you use it?" In my head, I said, "No." But in my mouth, I said, "Yes, absolutely." Before I knew it, I was on this Twitter team. It was through personal relationships that when that campaign ended and I'd gone back to my real life, which at that point was political communications and policy advocacy, about a year later I got asked if I wanted to join a project with Google in New York that was looking at community engagement. I never thought it would end up like this.

Taylor Owen: Before you went to work in D.C., you had this experience with Google and in some ways on their policy side? Right?

Mutale Nkonde: Yeah.

Taylor Owen: How did that evolve? My sense is you left a little bit disenchanted with the company itself? Or how did that play out?

Mutale Nkonde: I loved it initially. I loved everything from the badge, to the building, to just being associated. It was External Affairs, so what they were doing was reaching out to local politicians, reaching out to local interest groups to show that Google could really be a friend to New York City because, at that time, they were buying real estate and they were really looking to, I mean colonize is really the best term I can think of, from about 8th Avenue to — I know you guys are in Canada, but to just give you an idea, it's about a 30-block radius, which is a neighbourhood.

Taylor Owen: I mean, I used to live in New York, and that Chelsea Market square block is insanely big. The amount of real estate they have there is astounding.

Mutale Nkonde: Right. I was in Chelsea Markets, but think of it from 8th Avenue down.

Taylor Owen: All the way across. Yeah.

Mutale Nkonde: And thousands of jobs, and they had to justify that. What that meant was I was speaking to Congresswoman Yvette Clarke, whom I eventually went to advise in D.C., and others, about Google hiring practices. I was talking to them about the technology and how good it would be, and I was also talking to them about really the need for Google to do this real estate grab, and it just wasn't true. There is no need to grab real estate, particularly when we have a homeless crisis, right? The conversations that were critical in any way were discouraged. Cathy O'Neil's book came out, Weapons of Math Destruction. I remember very angrily going into a meeting and saying, "We can't do this. We cannot build a business off the backs of, not just poor people and black people, but people generally. We can't exclude people in this way." That was really unpopular. I remember Congresswoman Clarke at one point, because I'd met Congresswoman Clarke in my Obama days, so even prior to getting into tech, these people knew who I was. She just leaned into me on a site visit, and she just looked at me and said, "Tell me the truth, what's really happening?" We went for lunch, and I told her the truth. I knew that after I had told her, that it wasn't just hiring that was an issue but it was the architecture of the technology itself and design, that I knew that I was leaving.

Taylor Owen: It's amazing how I think that's such a common story inside these companies, that engagement and critique is accepted up until the point in which it gets to a certain nerve, and that nerve is the design of the technology itself. I mean, you hear this time and time again, with people inside the system.

Mutale Nkonde: It's the business plan. The minute that it got to the business model, it was absolutely off the table.

Taylor Owen: On that, what do you make of this Timnit Gebru and Margaret Mitchell episode at Google?


Back in December, Timnit Gebru was fired as the co-leader of ethical AI at Google. The reasons behind the firing have been controversial and much debated. More than 1,200 Google employees and 1,500 AI leaders in academia and industry signed a petition condemning the termination of Gebru. Now, Margaret Mitchell, the other co-lead of Google's ethical AI team said she had been fired, quoting Axios.

SOURCE: Techmeme Podcast YouTube Channel

“Margaret Mitchell, co-lead of Google's Ethical AI team, says she's been fired”

February 22, 2021

Taylor Owen: How do you reflect on that? I mean, again, that's a decade later, right? Or 15 years later, and they still seem to be having some of these internal problems.

Mutale Nkonde: It's so triggering because I didn't work in that department, so I can't speak to the individuals. But even in my own experience, I had assignments taken from me. Increasingly, I wasn't on emails. It was much more difficult to get meetings with people, and specifically with the way that they did it with Mitchell, which seemed to be — The way that I remember it, it just seems that this is the way that Google operates. With Timnit, it was slightly different because she was on vacation. But the telling her direct reports that she wouldn't be coming back before her, the subtle undermining, the gaslighting, and this is where we have to really question — We have to look at whiteness as a driving force. What I mean by whiteness isn't the people. I'm talking about this ideology of purity, where white is the company because it's always right, it's always pure, it's always the way to go, and everything else by comparison has to be dark, has to be black. The one thing I would take is seeing Mitchell behave in the way that she did. Watching the support and advocacy for Timnit makes me feel so good. Because 15 years ago white employees would not have done that. Also, it gets to the point that, even back then, the type of people that worked at Google were idealistic.

Taylor Owen: You think that's no longer the case?

Mutale Nkonde: I feel like they're being pushed out. I feel like, whether it's Google, Amazon, Pinterest, those people are being pushed out. When you go into their backgrounds, you realize we were probably at the company at the same time.

Taylor Owen: Yeah. I mean, that brings me to another thing to talk about, which is that I think when you look broadly at the tech policy conversation or the tech governance conversation now, there are all these different entry points into that discourse, into this policy conversation and the need for it. In many ways, you've come at it through a lens of race, and it feels like that's what drove a lot of the initial policy initiatives you were a part of. I'm wondering if you think that is the — Is that an entry point into the governance conversation, or is it the most revealing and important entry point into that conversation?

Mutale Nkonde: Oh, my God, that is such a good question. I think it's an entry point, and the reason I would say it's an entry point was, until relatively recently, people were talking about rights. They were talking about the right to privacy, they were talking about the right to free expression, both of which are important, but they're very individual. Whereas when we talk about race, we're actually talking about justice. We're talking about the communal rights of a group. When we're talking about justice, we're also really considering that perhaps the rights, the laws that already exist, are not adequate. COVID really revealed to many white audiences that race isn't just this — It's not just about skin colour. It actually impacts the quality of your life, the opportunities that you're given or not, how you're advocated for or not. It becomes something that people are ready and willing to accept. We saw that post George Floyd, where having — The thing I love about being a journalist and the thing I love about being in the media is that I feel like we set the table for national and international discussion, and all those stories, that great reporting about racial disparities and COVID because of institutional racism, people not having access to health care, people not having access to housing, people not having access to adequate nutrition. When George Floyd died, many white people who decided that that was not fair and that was not a world that they wanted to perpetuate stood up and made their voices heard. Suddenly, instead of race being this big divisive conversation, race became a conversation by which we could finally talk across difference. I like and appreciate this lens, because it enables me to talk about the very real ways that technology just interrupts the most mundane parts of everyday life.

Taylor Owen: I think that's such a powerful way of framing this. I wonder if the acts that you ended up working on, on algorithmic accountability, on the design of deepfake technology, on biometric data, by entering into them via a lens of racial justice, you were able to zero in on that design element that is so core, whereas if you come in through other ways, you might end up in more sort of — I don't want to say superficial, but outcome spaces. I often feel the content moderation debate, for example, is never focused on design. It's always focused on the output of that design. But because you were focused on that justice element, you zeroed right in on those core design flaws, and how those could have this negative impact on society. Is that a fair framing?

Mutale Nkonde: Yeah. Also, and this is why, Taylor, I love your work and I'm so glad we met at the Toronto Public Library, is I also believe that policy is the delivery system for ideology. If you want to see how a society thinks or feels, look at their policy outputs. In my mind, policy is the design element for how we're going to relate to each other. That's the design piece. Because I was always coming from that, I would never be happy with any conversation about technology that didn't start right at the beginning. Yes, I'm concerned about deployment, but the design itself is flawed. Yes, I'm concerned about governance, but you cannot provide governance structures around poor design.

Taylor Owen: So, let's just dive into that for a moment just to get some clarity. Can you give some examples of how you see design of certain, either technologies or technological systems, perpetuating racial biases or discrimination? What are you most concerned about?

Mutale Nkonde: Sure. I'm pretty obsessed right now with how algorithmic recommendation systems on social media spread disinformation, specifically racially targeted disinformation, and I'm really obsessed with biometrics. When I'm thinking about social media, and why it so interests me, Pew Research Center here in the States did a survey in 2019 that found that 55% of American adults get their news from social media. That is their primary news source, and we were living under Trump and the news was under attack. So if you are somebody that in fact believed that the news was fake, you were looking to these platforms, who'd already decided what you were going to see. The way that social media algorithms work in terms of race is if you engage with content, that's click, share or comment, right? Those are three types of engagement, you're going to be served up that content again because the algorithm is a pattern-matching mechanism. What it does is to look to match you with similar content. What the Mueller Report found in 2016 was people that were clicking on content that was promoting Black Lives Matter messaging were then also being fed content that was encouraging them not to vote. One place that this ended up being really, really successful was Detroit, Michigan. Detroit, Michigan is an 89% black city, and in 2016, 70,000 people in that city decided not to vote at the top of the ticket. That swung that state to the right from Obama to Trump. In April 2020, there was a militia plot to kidnap the governor. The state of the political communication in that particular place had become so heightened, again through these social media algorithms, because people that were clicking on white supremacist content were being fed more white supremacist content through this pattern-matching process, and had decided that wearing a mask was a political issue. 
Fast forward to [Jan] 6, they are now finding that some of the people that were in the chat rooms in Michigan also took part in the insurgency of [Jan] 6. That's really interesting to me because it's not just about election outcomes, it's about racialized violence and us not being able to see the other side, us not being able to go through that transformation like Mitchell at Google, not as an individual, but just generally. This white woman very bravely standing up and saying, "You cannot harass my black employee," right? That type of growing towards a shared vision for a beloved community can't take place under that type of algorithmic interruption. The second way that systems can become really racist is a lot easier to understand. I'm going to use facial recognition.

Taylor Owen: Can I just add one follow-up on the election stuff, because I want to talk about facial recognition because I think that — I agree, there's this super acute challenge there, and I want to talk about that. But one thing on elections for me, the example you just gave is the one that always strikes me as one of the most revealing about the vulnerability that actually existed in 2016. Another similar one that has always struck me is how that Beyoncé group on Facebook was set up, built for a year and then weaponized as this sort of vehicle to dissuade voting the day before the election. Race was almost used as a vulnerability by people who wanted to affect the election. I'm wondering if that means we ended up having — The conversation became entirely about election integrity and protecting cybersecurity and about Russia. If we had actually had a conversation in 2016, after that election, about the role race played, would we have had a different conversation about election interference and platforms and all of that?

Mutale Nkonde: That's the project that we took on in 2020. We knew that race was going to be a vulnerability, because if you look into the history of disinformation, and specifically, the history of Russian interference and the KGB, who created it, they had been using race as a tool as far back as the 1930s. I started to do a lot of talks about the way Stalin was using the anti-lynching movement to recruit black people in the south to the Soviet Union, and really pointing out that Angela Davis' mother had been a communist, and as we know, Angela Davis is a communist. But that came from the fact that where she grew up in Alabama is where the church was bombed by the KKK in the '40s. Coming in after that were the Soviets, who organized local women on blocks, just like the block association I worked with and the Obama thing, black women being busybodies, as we are, and had really pointed out that communism could be a fix for racism. Fast forward again, into the 1980s, and Soviet forces planted a story in India around the AIDS epidemic, and they said that it was made by the CIA to murder blacks and to murder gay people. Fast forward to the early 2000s, and I'm listening to Kanye West. As I'm listening to Kanye West, one of the lyrics that he says on one of his songs is the CIA invented AIDS. That using of race to turn black people away from the democratic project and black people away from the nation state is something that is so insidious. So, of course, by the time 2016 comes around, we're so primed for it. In 2020, we decided that we were going to insert ourselves in the conversation, and we looked at black groups that were telling black people not to vote in the election, because '16 had been about Russian groups doing this work. 
We identified one network where they were telling black people to vote down ballot, and we created an alternative social media campaign to interrupt that, in which we positioned COVID as a racial justice issue, and encouraged black audiences on social media to vote for who they thought would bring us out of this crisis. And when we measured engagement, we found that not only did our campaign work in that particular case study, but the campaign itself looked like it had been set up by Russian operatives, except that it was being executed by black people. But what made it — The question I love to ask people like you, Taylor, is, from a policy perspective, what do we do here? Because race is going to continue to be used, just how it's used is going to differ.

Taylor Owen: Yeah. The policy debate often just falls into these binaries that if one makes the case that race was the underlying issue, then de facto, the issue is no longer a technology issue. It's a society issue. Therefore, don't look at technology. This is about people. I don't think that binary is true either, because there's elements of the technology, as you point out so clearly, that exacerbated or potentially incentivized that division and contributed to it. I think that's what's so powerful about how you frame this, is it brings those two together and doesn't create these two alternatives of social issue versus technology issue. The two are actually intertwined.

Mutale Nkonde: When we were looking at it in the DEEPFAKES Accountability Act, which was the closest one that — Because we were really speaking to platform companies. We talked about it in terms of race becoming a national security issue. But back in 2018, '19 when we were saying this, because it was perceived to be black people who were talking about race, people were like, "Oh, my God, you're so crazy. You're so alarmist." Around January 12, we went back to the same people that said that we were alarmist, and we said, "You have just had a white supremacist insurrection of the US Capitol." Race is a national security issue. Suddenly, people had the ears to hear it.

Taylor Owen: Yeah. In all of those acts that you worked on, it seems to me that coming at a tech governance conversation via a lens of racial justice led to policy outcomes that now benefit all of us.

Mutale Nkonde: I love that you pointed that out because I think one of the most misunderstood elements, at least of our work, is our underlying guiding principles. And we look to the Combahee River Collective, which issued a feminist statement in the '70s. One of the things that they say in that statement is that to free ourselves as black women, we have to free everybody, because until everybody's free, nobody's free. I think in the imagination of the hostile right is this idea that black power then means white subordination. It's like no, no. It just means everybody can live in dignity, and we can live in shared prosperity. But some people will lose. If you're a billionaire, yes we want you to be taxed in a way that means that the poorest amongst us can live healthy and dignified lives. If you are a man, yes we want you to make space for women and other feminized people, non-binary people, right? That's not well communicated.

Taylor Owen: I wonder if some of that view is crystallizing, coming back to the facial recognition conversation, in that particular technology, because the risks are so acute, not just to racialized populations, but to everybody. Yet the driver for it, the change, is coming from that particular perspective. Can you speak a little bit about how you think about FRT, and whether it is this crystallizing moment?

Mutale Nkonde: Yes. I think the interesting thing about facial recognition technology from a design perspective is how the technology is often fed countless images of white men, and then graphs are taken of people's faces. What that means is the measurement between your eyes, eyebrow circumference, the measurement from cheek to chin, and that then creates a statistical model which is labeled as the human face. Part of that is skin colour, hairline, hair colour. One of the things that always interested me was, in the building of that technology, research scientists are often just looking at people who are in the lab, or people like them, or people whose digital images they can access, and we live segregated lives. Here in the US, 70 percent of white people don't actually know a Black person. If I were to ask one of those people to bring me 100 pictures from their social media, it's going to be a bunch of white people. The same is true in reverse; there are Black people who live in an all-Black world. I wouldn't have an issue with that if it weren't for the way these technologies have been deployed, specifically in policing. So, my new work is really looking at instances where facial recognition systems are being used as probable cause for arrest, and then people are being incarcerated and having to prove their innocence. This is happening. There are two cases, again, in Michigan (wow, I keep talking about Michigan) and one in New Jersey: Nijeer Parks, who was not only misidentified by a facial recognition system and arrested, but could prove that he was 30 miles away at the time. Interestingly, through his social media, he had been going live somewhere else, and that evidence was entered. But, because of poverty, he ended up spending 10 days in jail because he couldn't make bail. That story really shows how facial recognition reinforces other elements of racialized violence by doubling up these systems.
When we were looking at no biometric barriers to housing, we couldn't believe that these systems would not only decide whether you could get into your house, but take a picture of every single person coming into the building, tracking where they were going. Then, in the case that we followed, those pictures were just going to be freely shared with the local police force, and the NYPD, to their credit, weren't even asking for them. That's the case for LinkNYC kiosks, which we have here, which are like smart towers on our streets. Had you guys allowed for the development of the smart city in Toronto, I'm sure it would have been full of them. Self-checkout also uses facial recognition. We're using it at our borders: if you come through JFK, you have to scan your face. We're also using it in recruitment software. And it's that third-party question, what happens to the data once it's been captured, because the way IP laws work in the US, it's the person who takes the picture who holds the rights, not the subject. If I take a picture of you, Taylor, and I send it to the police, that's my right to do so.

Taylor Owen: Or if I take a picture, put it on Instagram, and I've signed up for the terms of service agreement, they can then train their facial recognition system on that picture?

Mutale Nkonde: Right. Right. To your point, I really appreciate the privacy community coming in and saying this is a privacy issue, because that's really where most white people are entering this. They're entering it through the lens of privacy, and then, once they learn about the racialized lens, they adopt that also.

Taylor Owen: Yeah, I'm interested to hear you say that, because I actually think the racial justice entry point is, to a certain degree, more powerful than the privacy lens. I mean, you may be saying the same thing, but people come at things from different perspectives, and that's fine. But to me, what's so striking about the FRT debate in the policy lens is that certain harms are so acute, clear and unambiguous that they raise flags about the technology being generalized and normalized.

Mutale Nkonde: Yes.

Taylor Owen: The last thing I want to ask you about is the work you're doing with AI for the People, and how I think it is, in many ways, deliberately appropriating, to a certain degree, the AI discourse in a way that lets you reframe it. I'm curious to hear more about what that looks like. I mean, a lot of this policy conversation focuses on harms. How would this conversation look different if we focused on the potential benefits of these technologies?

Mutale Nkonde: So, AI for the People really is the baby of every single part of my career. We use journalism, art and culture as translating tools for policy. The way we do that is that we try to think of a world in which technology exists, we use it alongside humans, and we have an alternative way forward for the future. A very concrete example: we're in a partnership with Amnesty International around their "ban the scan" campaign, and we created a five-minute film where we just told stories. Because, to your point, Taylor, the racial justice lens on the facial recognition debate really allows you to get into those human interest stories. We profile a group of activists in a housing setting, and then an activist who believes he was arrested because of facial recognition use. The reason we found that to be so powerful is that, as we show this film to policymakers and advocates in the United States, they pause and they start to ask questions: How does this happen? Where does this happen? What can I do about this? We're using that film strategically, showing it in places that are thinking about facial recognition bans or moratoriums. We find that to be particularly powerful, and it's such a beautiful way for me to get back to the start of my career, when I was a documentary filmmaker making films about science and society. Somebody asked me recently, "If AI for the People is successful, what will the world look like in five years?" I said, "Well, all racial justice work will have a technological analysis, and all policymakers will have a place where they can learn, where the barrier to entry is so low that they can just play a video and be told, in the words of their constituents, what is happening to them in the face of these technologies."

Taylor Owen: In closing, you've been working in the space for a long time, and it feels like we're in a moment where the conversation about race and racial justice is in a different place. The conversation about technology and tech governance is in a different place. The political moment maybe is different. Are you broadly optimistic about where we're headed?

Mutale Nkonde: Oh, my God, I'm always optimistic. Believe it or not, I was optimistic that I would get a three-part documentary commissioned by the BBC while blaming them for racism. I am not known for my realism, but I think now, more than ever, there is the appetite for the evidence that has to lead us in a different direction. I mean, Taylor, you and I have talked about this, too. I feel like we've been in the trenches together. These are conversations that I started 20 years ago, but generations before me, these conversations were being had. I just see so much interest: AI for the People, Data for Black Lives, the Algorithmic Justice League, Encode Justice. There's now this ecosystem of organizations that all want to do their bit. They are speaking to — you know, Stanford has their race and tech fellowship, and there are these other institutional spaces that want to hear and engage and lean in. Even the work that you're doing at McGill, you're giving us the space to be heard and to be validated, and that's really important, because we are typically on the margins.

Taylor Owen: Well, I'm glad that's no longer the case. Thank you for everything you're doing in this conversation. It's inspiring.

Mutale Nkonde: Yes, let's do it together.

Taylor Owen: Done. That was Mutale Nkonde. For more on her work, you can go to AI for the People's website. Big Tech is presented by the Centre for International Governance Innovation and produced by Antica Productions. Please consider subscribing on Apple Podcasts, Spotify or wherever you get your podcasts. We release new episodes on Thursdays, every other week.

For media inquiries, usage rights or other questions please contact CIGI.

The opinions expressed in this article/multimedia are those of the author(s) and do not necessarily reflect the views of CIGI or its Board of Directors.