Gender Bias in Technology: How Far Have We Come and What Comes Next?

March 19, 2020
Google employees in Dublin, Ireland, join others from around the world walking out of their offices in protest over claims of sexual harassment, gender inequality and systemic racism at the tech giant. (Niall Carson/Reuters)

An artificial intelligence (AI) system operates on — and learns from — the data it’s given. And when that data is generated by and collected from humans, it carries all the biases that we do, including bias about women. The result: firms develop technologies that reinforce inequality. For example, Facebook was sued for withholding financial services advertising from older and female users, facial recognition technologies have been called out for disproportionately misidentifying women (and in particular, women of colour), and researchers have found that hiring technologies unfairly screen out women in the job application process.

Given these major missteps in technology development, we asked four experts to answer a few questions about the bias that’s often built into digital products and services:

  • We’ve been talking about gender bias in algorithms (and the technology those algorithms support) for years. What — if any — notable policy or regulatory changes have been made to mitigate algorithmic bias?
  • What are some of the most pressing gaps in technology governance that still enable gender bias and reinforce inequality? 

The following is a lightly edited version of their responses.
 


Joanna J. Bryson, the Hertie School of Governance

My main expertise with respect to bias is that I wrote an article about it in the context of semantics and cognitive science. Also, I’m enough of an AI expert to know that AI neither magically eliminates nor introduces bias. Bias is intrinsic in our society; the way to ensure it is not introduced deliberately is [to] establish norms of auditing the development process when any bias is discovered. The main change I’ve noticed in the three years since my article came out is that people now understand that a lot of bias comes from simply uploading it along with everything else when we use machine learning, and of course — less because of my work than because of others’ — that it can be exaggerated by uploading unrepresentative data sets. Most law, certainly all common law, is altered by what is general and public knowledge. It is no longer excusable not to be aware of these possible sources of bias in a system.

Susan Etlinger, the Centre for International Governance Innovation

The biggest gaps in technology governance right now are framing, representation and tractability. First, the way we frame the issue of algorithmic bias is critical. If we view it as a technology problem, we tend to oversimplify potential solutions and can exacerbate inequality while creating a false sense of security. Yet, if we frame algorithmic bias as the inevitable outcome of social inequality, it seems utterly intractable: how can technology address challenges that are so nuanced and so deeply embedded in society, especially in the highly polarized world we’re living in?

The next important challenge is one of representation. We must insist that the people who are most affected by governance decisions are in the room, on the team, and are part of the decision-making process. Fixing other gaps without addressing the question of representation can only result in more and better fig leaves — not real progress.

Finally, we also need to ensure that governance structures are both expansive and tractable. This means considering the “supply chain” of algorithms, from data sets to data models, applications, use cases, policies and outcomes. Nuance and flexibility are not traditionally the friends of governance structures, but they are crucial inputs to systems that purport to represent and affect humanity.

Os Keyes, the University of Washington

To be perfectly frank, I’ve seen little meaningful action on a policy and regulatory front as a result of campaigning [for equality]. This is not to say that campaigns have been pointless — far from it; raising consciousness and attention is its own kind of victory, one that persists despite regulatory sluggishness. And there have certainly been examples of general technological reform and regulation at the local level for issues that disproportionately impact women, particularly trans women and/or trans women of colour, such as public facial recognition. But generally speaking, such victories have been few and far between.

I suspect that one major reason for this is that the gap lies not only in which technology problems are treated as gender-biased, but in where that bias is assumed to come from: what people assume about society. Far too often, work in this area looks for examples of explicit gender bias, operating from the implicit assumption that absent that bias, technology (and society) are “neutral.” But neither is anything of the sort; we start from a default position in which society is (violently) gender-biased, minimizes the nature of that bias and (when challenged) regularly works to blunt any course correction. Correspondingly, the biggest issue is not any particular area of technology, but the myriad implicit and/or less obvious harms that go uninterrogated and under-considered in work that looks for obvious, explicit bias in data. Such an issue is not new; people who investigate the impact of anti-discrimination laws in practice find it as well. So, what I would like to see addressed is not any particular technology, but how we frame “gender bias”: what we look for, where we look for it, and how we draw connections between different types and sources of oppression.

Joy Rankin, the AI Now Institute

Although many researchers point to the developments around algorithmic fairness or AI ethics as steps to mitigate algorithmic bias, these very limited and computationally focused efforts completely avoid the larger structural and systemic forms of bias and discrimination that algorithmic technologies perpetuate and often amplify. For example, during the 1960s in the United States, efforts were made to make computing and networking more broadly accessible to [students from kindergarten through high school] — boys and girls alike. However, due to how the program was set up, private school students received nearly twice as much computing time as public school students. Coupled with the fact that all of the private schools involved were (at the time) boys-only, this meant that boys received far more computing and networking access than girls. The inequity in computing access reflected — and amplified — existing structural inequities. As we (AI Now) have demonstrated in our Discriminating Systems report, there is a harmful feedback loop between the stunning lack of diversity in the communities building algorithmic technologies and the ways that these technologies demonstrably harm women, transgender people, nonbinary people and people of colour.

Many of these technologies remain “black boxes” because of corporate secrecy and intellectual property legal protections. We, the public, do not know how these algorithms work, nor do we know how they are being deployed on and against us. Last year, we saw that women were being offered significantly lower credit lines on the Apple Card than men, despite having better credit histories, because the algorithm used by Apple was biased against women. And we recently learned that Clearview AI’s client list includes thousands of organizations, ranging from local police departments to universities to Walmart and Best Buy. Couple that with what we know from researchers such as Joy Buolamwini [founder of the Algorithmic Justice League], that facial recognition technology is demonstrably less accurate for women, especially women of colour, and it becomes clear why facial recognition, and associated technologies such as affect recognition, must be banned.

AI Now also calls for radically increased transparency in the tech industry around hiring practices, how harassment and discrimination reports are addressed, and compensation levels. Moreover, the AI industry must make — and the public and our governments must demand — sweeping changes to address the industry’s systemic racism, sexism and misogyny. The work of technology governance cannot be accomplished within the industry itself. Rather, we need “non-technical” disciplines ranging from science and technology studies to gender studies, critical race studies, and disability studies; these fields have developed deep and rigorous expertise in analyzing how technology bolsters racist, sexist and ableist structures.

The opinions expressed in this article/multimedia are those of the author(s) and do not necessarily reflect the views of CIGI or its Board of Directors.

About the Authors

Joanna J. Bryson is a professor of ethics and technology at the Hertie School of Governance in Berlin.

Susan Etlinger is a senior fellow at CIGI and an expert on artificial intelligence and big data.

Os Keyes is a PhD student at the University of Washington’s Department of Human Centred Design & Engineering and the inaugural winner of the Ada Lovelace Fellowship.

Joy Lisi Rankin is the research lead on gender, race and power in artificial intelligence at the AI Now Institute.