Carnegie Mellon researchers found that Google was showing ads for high-paying jobs six times as often to men as it showed them to women. (Shutterstock)

Women looking for a job as an accountant, engineer, economist or another well-compensated position have been at a measurable disadvantage in recent years. Researchers at Carnegie Mellon University found in 2015 that Google’s ad-serving algorithm was showing ads for high-paying jobs six times as often to men as to women.

Search tools are powered by artificial intelligence (AI) systems that are plagued by racial and gender bias, among other biases. This all-too-human element of AI can have a very real and significant impact on people’s lives.

“The bias may impact a person’s ability to get a loan or credit,” said Nicol Turner-Lee, a fellow in the Brookings Institution’s Center for Technology Innovation. “The bias of the algorithm may keep them in jail longer because of predictive analytics in sentencing, or it may steer them away from getting into a good school.”

French President Emmanuel Macron pointed to the potential systemic impacts when he told Wired magazine earlier this year, “AI could totally jeopardize democracy.” He addressed concerns about bias directly in that conversation. Leaders “have to guarantee there is no bias in terms of gender, age, or other individual characteristics,” he said. “This is a huge issue that needs to be addressed. If you don’t deal with it from the very beginning, if you don’t consider it is as important as developing innovation, you will miss something and at a point in time, it will block everything. Because people will eventually reject this innovation.”

So far, addressing bias in artificial intelligence has been left largely to the tech companies themselves. IBM said in a forecast released in March that it expects bias in AI to soar in the next five years. At the same time, the technology giant said it had come up with a methodology to minimize the bias in data sets that feed AI systems. In May, Microsoft announced it was working on a tool to automatically identify bias in AI algorithms, and Facebook unveiled a bias detection tool. Of course, building detection tools and methods that are themselves free of bias is a tall order, and whether these efforts will succeed remains to be seen.
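The companies mentioned have not published the internals of these tools, but a minimal sketch can show what automated bias detection means in practice. One common check is the “four-fifths rule” used in US employment law: compare the rate of favorable outcomes across groups and flag a ratio below 0.8. The data and function names below are hypothetical, not drawn from any of the tools named above.

```python
# A minimal, illustrative bias check: compare favorable-outcome rates
# across groups (the "disparate impact" ratio). Hypothetical data only.
from collections import defaultdict

def selection_rates(records):
    """records: iterable of (group, favorable_outcome) pairs."""
    totals, favorable = defaultdict(int), defaultdict(int)
    for group, outcome in records:
        totals[group] += 1
        if outcome:
            favorable[group] += 1
    return {g: favorable[g] / totals[g] for g in totals}

def disparate_impact(records, protected, reference):
    """Ratio of the protected group's favorable-outcome rate to the
    reference group's; below 0.8 is a conventional warning sign."""
    rates = selection_rates(records)
    return rates[protected] / rates[reference]

# Hypothetical ad-serving log: (group, was_shown_high_paying_ad)
log = (
    [("men", True)] * 60 + [("men", False)] * 40 +
    [("women", True)] * 10 + [("women", False)] * 90
)
ratio = disparate_impact(log, protected="women", reference="men")
print(f"disparate impact ratio: {ratio:.2f}")  # well below the 0.8 threshold
```

A check like this only surfaces a disparity; deciding whether the disparity is unjustified, and what to do about it, remains a human judgment, which is part of why the tooling alone is not a complete answer.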

Governments Waking Up to AI Urgency

New technologies with structural, economic and ethical implications as momentous as those posed by AI have typically drawn close attention from policy makers around the world. In the case of AI, however, governments have been slow to wake up to the issues and to their role in crafting remedies.

“They’re probably the farthest behind,” said Jeremy Gillula, tech policy director at the Electronic Frontier Foundation, a nonprofit data privacy and civil liberties group. “At least in the US, we haven’t seen a lot of real regulations about how AI should be used or what sort of safety checks should be in place to prevent unwanted bias or discrimination.”

AI technologies and capabilities are advancing so fast that it’s hard for many institutions to keep up, pushing governments — with their massive bureaucracies and slow-moving decision-making processes — to the back end of the rapidly advancing technological curve.

“Governments realize they need to deal with these issues and they need to do it urgently,” said Karine Perset, an economist with the intergovernmental Organisation for Economic Co-operation and Development (OECD). Perset says members started pushing the OECD to address AI concerns about two-and-a-half years ago. “They are aware they require public policy involvement; the technology and business community alone are not going to be sufficient to address these questions [of bias].”

The OECD is working with its 35 members as well as with partner countries, including China, to address public policy concerns raised by the accelerating development and deployment of AI technologies. “In Europe, there’s more focus on privacy,” said Perset. “In the US and Canada, there’s more focus on discrimination risks.”

Billions at Stake

The race to address the downsides of AI is being run directly alongside the race to capitalize on the game-changing technology. In January, the United Kingdom announced it would invest US$12 million in a new Centre for Data Ethics and Innovation to help guide its governance around AI. Two months later, France said it would pour US$1.8 billion into AI research as it fights to take leadership in a space dominated by the United States and China. Last year, Canada announced it planned to spend CDN$125 million (about US$96 million today) to fund a Pan-Canadian Artificial Intelligence Strategy.

Of course, the biggest companies in the AI world are putting much larger sums to work. Last fall, Chinese e-commerce giant Alibaba announced plans to spend US$15 billion on global research and development efforts centered around AI, quantum computing and financial technology. The US private sector is estimated to invest more than US$70 billion a year in AI.

Given the massive amount of spending on AI and its huge potential for societal transformation, how should governments be tackling questions and challenges around AI? Although it’s unclear how well policy making can keep up with such a fast-changing technology, there are steps governments can take to begin identifying and mitigating bias and potential discrimination.

What Governments Can Do

“The first thing governments should do is look at their own use of AI,” said Gillula. This kind of assessment would go beyond taking a vendor’s word for how its product works. Governments should have products and tools audited, he said, preferably by a third party who can verify error rates and performance data. Public officials need to ask whether the AI product they’re considering is really needed, how it compares to what they’re currently doing, and how they would measure its performance.
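Verifying error rates, as Gillula suggests, is something an auditor can do without access to a vendor’s source code: run the tool on a labeled sample and compare its mistakes across demographic groups. The sketch below illustrates one such comparison, a per-group false-positive rate; all group names and figures are hypothetical.

```python
# Illustrative audit check: does a tool wrongly flag one group more
# often than another? Hypothetical sample data only.
from collections import defaultdict

def false_positive_rates(examples):
    """examples: iterable of (group, predicted, actual) tuples,
    where predicted/actual are booleans."""
    fp, negatives = defaultdict(int), defaultdict(int)
    for group, predicted, actual in examples:
        if not actual:            # ground-truth negative case
            negatives[group] += 1
            if predicted:         # ...that the tool flagged anyway
                fp[group] += 1
    return {g: fp[g] / negatives[g] for g in negatives}

# Hypothetical audit sample: (group, tool_flagged, truly_positive)
sample = (
    [("group_a", True, False)] * 5 + [("group_a", False, False)] * 45 +
    [("group_b", True, False)] * 20 + [("group_b", False, False)] * 30
)
rates = false_positive_rates(sample)
print(rates)  # group_b's false-positive rate is four times group_a's
```

An audit built on checks like this gives officials a concrete performance baseline to hold a vendor to, rather than taking the vendor’s word for how the product behaves.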

Public agencies should also implement algorithmic impact assessments, looking at fairness, bias, justice and other concerns, according to an April report from the AI Now Institute at New York University.

“If governments deploy systems on human populations without frameworks for accountability, they risk losing touch with how decisions have been made, thus making it difficult for them to identify or respond to bias, errors, or other problems,” the report said.

At the national and international level, some say, governments need to involve industry and civil society groups in a multi-stakeholder process to establish clear AI governance standards or principles. At an OECD conference last year, participants agreed that “governments and business should cooperate to create policies to ensure that AI does not widen the economic divides between people, companies of different sizes, countries and continents,” according to a draft report.

If governments and business do not succeed in developing frameworks that address emerging issues of bias and discrimination, we could see a future in which these technologies deepen social and economic divides rather than narrow them.

The opinions expressed in this article/multimedia are those of the author(s) and do not necessarily reflect the views of CIGI or its Board of Directors.
  • Liz Enochs is an economic, financial and legal journalist with more than 15 years of experience at outlets such as Bloomberg News. She has contributed to The New York Times, Los Angeles Times and Boston Globe.