Artificial Intelligence Could Magnify Social Inequality

May 7, 2018
(AP Photo/Charles Rex Arbogast)

Advancements in machine learning and the increasing use of algorithms for pattern recognition are generating a lot of attention — and funding — for artificial intelligence (AI).

In 2017, the Canadian government announced that it would invest $125 million in a Pan-Canadian Artificial Intelligence Strategy in order to cement its place as a global leader in AI. The 2018 budget reaffirmed the government’s commitment to investing in AI. Meanwhile, the government has repeatedly made explicit its intention to pursue “advancing gender equality and women’s empowerment” as a “central theme of its [Group of Seven] Presidency.”

As it currently exists, the Pan-Canadian Artificial Intelligence Strategy does not explicitly mention mainstreaming gender considerations in the investment in and development of AI. And while the United States’ National Artificial Intelligence Research and Development Strategic Plan explicitly mentions “gender, age, racial, or economic” considerations, it does not outline what those considerations are or how they should be integrated into the design and development of AI.

In order to truly advance gender equality and women’s empowerment, gender considerations and issues need to be mainstreamed across all disciplines and sectors, including AI.

Reinforcing Existing Inequalities and Biases

AI is only as robust as the inputs we provide it. Very simply put, an algorithm works by recognizing a pattern that exists in a data set and then making decisions based on that pattern. In developing these data sets, researchers make conscious decisions about which data to include and which to exclude as “irrelevant.” AI’s outputs are therefore limited by the quality and quantity of its inputs, which are in turn limited by human judgments about what is and is not relevant and by the availability of data in the real world.
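To make this concrete, consider a minimal sketch, using entirely synthetic data and a hypothetical hiring scenario: a standard classifier trained on historical decisions that favoured one group will reproduce that preference for new, equally qualified applicants.

```python
# A minimal sketch, with synthetic data and a hypothetical "hiring" scenario,
# of how a model trained on biased historical decisions replicates that bias.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

group = rng.integers(0, 2, n)      # protected attribute: 0 or 1
skill = rng.normal(0, 1, n)        # both groups are equally skilled

# Historical labels: past gatekeepers gave group 0 a boost regardless of skill.
hired = (skill + 1.0 * (group == 0) + rng.normal(0, 0.5, n) > 0.5).astype(int)

model = LogisticRegression().fit(np.column_stack([skill, group]), hired)

# Two new applicants with identical skill, differing only in group membership.
applicants = np.array([[0.0, 0], [0.0, 1]])
print(model.predict_proba(applicants)[:, 1])  # group 1 receives a lower score
```

Nothing in the code singles anyone out; the disparity enters entirely through the training labels, which is why the provenance of the data matters as much as the algorithm itself.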

Arguments that increasing the number of women in science, technology, engineering and math (STEM) will overcome some of the gendered biases emerging from AI technologies have been shown to be flawed and simplistic. Even as women enter the field, the existing culture is one that was shaped almost exclusively by men, and often still is. As a result, the algorithms and AI technologies born in such an environment fail to recognize a diversity of perspectives and considerations, and eventually reflect those biases and inequalities. Unless the culture itself changes and systemic reforms are implemented, it is unclear how AI will help to fix gender inequality. After all, it is only replicating existing patterns.

Inequality in Practice

The issue of perpetuating bias and inequality is likely to be more pronounced in fields where gender disparity is especially prominent — security and military environments fit the bill.

Research and work on AI in the security field have focused on its military applications, as opposed to peacebuilding. For example, China’s State Council released its “Next Generation Artificial Intelligence Development Plan” in July 2017. An assessment of a translated version of the plan makes clear that its security applications of AI do not include peacebuilding, mediation or negotiations. These terms make no appearance in the plan, while the term “military” appears 12 times and “defence” appears 10 times. France’s AI strategy, For a Meaningful Artificial Intelligence: Towards a French and European Strategy, makes only one mention of peace, in relation to the impact of AI exports on “regional peace and security”; it mentions “defence” 24 times, “military” nine times, “security” 25 times and “weapons” 17 times. Beyond an overview, the lack of transparency surrounding the Pan-Canadian Artificial Intelligence Strategy makes it difficult to examine which applications of AI are being prioritized in the security field.

The emphasis on military applications of AI and the gender disparity in the security field cannot be resolved simply by encouraging developers and researchers to focus on peacebuilding applications of AI. A study of 31 major peace processes between 1992 and 2011 revealed that only four percent of signatories, two percent of chief mediators and nine percent of negotiators were women. As it stands, the majority of peacebuilding or peacekeeping data would emphasize the decision-making processes of a particular group of people: men in positions of power. The resulting tools could fail to capture considerations that women may bring to peace negotiations or mediation. They may also reinforce the stereotype that women are either solely victims of war or new entrants in militaries, because the existing data fails to capture the myriad roles women play in conflict and peacebuilding.
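A second minimal sketch, again with synthetic data and a hypothetical feature, illustrates the representation problem: when a group makes up only about four percent of the records, a pattern-recognition model effectively learns the majority’s behaviour and misses the minority’s almost entirely.

```python
# A minimal sketch, with synthetic data, of how a ~4 percent minority in a
# data set barely influences what a pattern-recognition model learns.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 2000
is_woman = rng.random(n) < 0.04          # ~4% of recorded negotiators

# Hypothetical: the two groups respond to the same feature in opposite ways.
x = rng.normal(0, 1, (n, 1))
outcome = np.where(is_woman, x[:, 0] < 0, x[:, 0] > 0).astype(int)

model = LogisticRegression().fit(x, outcome)
pred = model.predict(x)

print("accuracy, men:  ", (pred == outcome)[~is_woman].mean())
print("accuracy, women:", (pred == outcome)[is_woman].mean())
# The model fits the 96% majority; the minority's pattern is lost.
```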

Bias is built into much more than algorithms; the very security structures and systems in which AI tools operate have been conceptualized by men.

AI itself prizes a certain type of knowledge: traditional, rationalist epistemologies or logical propositions. This narrow definition of knowledge may disadvantage women (and other minorities) who, whether as a result of socialization, biology or historical experiences, may develop embodied knowledge more than propositional knowledge. If this is the case, relying solely on propositional knowledge means AI may not accurately capture women’s decision-making processes.

As AI is increasingly used in everyday commercial applications, users may become accustomed to algorithms making decisions for them, whether or not they are aware that these decisions are being made. We may be limiting our own decision-making capacities and skills when the responsibility to make decisions is delegated to AI. And if AI is unable to capture embodied knowledge, there are sure to be gender implications that developers and regulators alike have not yet accounted for.

Calling for Interdisciplinary Solutions

Society cannot rely on AI to fix its challenges and make existing inequalities disappear, particularly when AI relies on data derived from a real world where these inequalities persist. Increased investments in AI should be accompanied by increased investments in research, programming and social innovation to better understand and address real-world inequalities.

Attempts to combat gender inequality must look beyond addressing the “pipeline” problem of women in STEM fields and empower more women to shape and influence the very systems and industries in which AI operates. This needs to happen not only in the technology field, but also in politics, economics, security and law.

Inter- and cross-disciplinary research and education are critical in recognizing blind spots and understanding the societal implications of AI. It is a mistake to cut funding to or eliminate humanities and social sciences programs, or to emphasize STEM education over other disciplines, in the misguided belief that these subjects are no longer relevant. 

Without that kind of interdisciplinary approach, AI could replicate and potentially magnify societal inequalities, injustice, and tribalism.

The opinions expressed in this article/multimedia are those of the author(s) and do not necessarily reflect the views of CIGI or its Board of Directors.

About the Author

Bushra Ebadi is a social innovator focused on designing sustainable, innovative solutions to complex global challenges using her multidisciplinary background and skills in design and systems thinking, policy analysis and mixed methods research.