How Artificial Intelligence Is Reshaping Global Power and Canadian Foreign Policy

May 9, 2019
Group of Seven foreign ministers meet in Dinard, France on April 6, 2019. (Stephane Mahe/Pool photo via AP)

Artificial intelligence has a reputation for being a buzzword dangled in front of venture capitalists. A recent UK study found that 40 percent of European ‘AI startups’ did not actually use AI in a “material” way, an error sometimes caused by incorrect labelling on third-party analytics websites, but one that businesses were in no rush to correct. According to the study, AI companies attract between 15 and 50 percent more funding than non-AI startups.

Perhaps complicating the problem is that no single definition of artificial intelligence exists.

When an Australian news outlet described the Boeing 737 Max’s sensor malfunction as a “‘confused’ AI,” technical professionals on Twitter protested that the term is now misapplied to seemingly any technology that uses an algorithm.

But what does AI really mean and when should we use the term? AI is better understood as a disciplinary ecosystem populated by various subfields that use (often big) data to train goal-seeking technologies and simulate human intelligence. A few of these subfields include machine learning, machine vision, and natural language processing. These technologies are often predictive, designed to anticipate social, political or economic risk and transfer the burden of human decision-making onto a model. In fact, the Treasury Board of Canada, tasked with drafting the directive that will guide AI integration into the federal civil service, prefers the term “automated decision-making” to describe how AI will operate within the Canadian government once the directive takes effect.
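To make “automated decision-making” concrete, the following is a minimal, hypothetical sketch (in Python, with invented data and an invented decision threshold, not drawn from any actual government system) of how a predictive model can take on a decision that a person would otherwise make:

```python
# A minimal, hypothetical illustration of "automated decision-making":
# a model is trained on historical outcomes and then used to triage new
# cases, shifting the judgment call from a person onto the model.
# The data, features and threshold are invented for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic "historical" data: two made-up risk indicators per case,
# plus a recorded outcome (1 = adverse event occurred, 0 = it did not).
X_history = rng.normal(size=(500, 2))
y_history = (
    X_history[:, 0] + 0.5 * X_history[:, 1] + rng.normal(scale=0.5, size=500) > 0
).astype(int)

# Train a simple classifier on the historical record.
model = LogisticRegression().fit(X_history, y_history)

# A new case arrives; the model, not a person, decides whether to flag it.
new_case = np.array([[0.8, -0.2]])
risk = model.predict_proba(new_case)[0, 1]
decision = "flag for review" if risk > 0.5 else "approve automatically"
print(f"predicted risk: {risk:.2f} -> {decision}")
```

The point of the sketch is not the model but the hand-off: once the threshold is set, the judgment call about each new case rests with the model rather than with a person.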

Due to its scientific veneer, emerging technology has historically been guarded from social scrutiny. Yet AI applications belong to a class of physical and digital objects that are used to project Canadian influence abroad and at home. AI applications have a number of foreign policy uses, ranging from trade to defence to development work. In no particular order, AI: allocates commercial resources, enables large-scale surveillance of often vulnerable populations, radicalizes extremists, fights extremism, and predicts and reduces climate change vulnerability.

Despite its widespread use, we are only just beginning to decide how AI’s social impact should be regulated. Canada’s most visible commitments to AI have been through the G7, a group whose members possess close to 60 percent of global wealth and who use the platform to cultivate shared norms on topics ranging from security to economics. Less visible, though equally important, are the intersections between AI and Canadian national security. So far, Canadian legislation has focused on the standards that govern data collection, a move that directly, if not obviously, impacts AI’s relationship to security. Because algorithms (and yes, sometimes AI) are enmeshed in political decision-making, these technologies also offer a vision of ‘social good’ that can compete with liberal democratic commitments.

In Ottawa, decision-makers sprinkle the evidence of AI’s socio-technical impact across political speeches and reports. Foreign Minister Chrystia Freeland’s 2017 address on Canada’s foreign policy priorities points to the transformative impact that automation and the digital revolution have had on the workforce to explain rising populist disaffection towards free trade and globalization (though Freeland maintains that free trade remains overwhelmingly beneficial). Similarly, the Department of National Defence’s position, outlined in its Strong, Secure, Engaged policy, acknowledges that western military forces have a strategic and tactical advantage because their operations use space-enabled systems to process and manipulate big data. (Drones and metadata harvesting are probably the most frequently cited examples here, though many other common uses exist that don’t incite the same level of public concern. For instance, the navy is developing voice-enabled assistants for Canadian warships.) And in the aftermath of the New Zealand mosque shooting, Public Safety Minister Ralph Goodale called on digital platform companies to better recognize the ways their platforms propagate right-wing extremism and terrorism. (Curiously, right-wing extremism and terrorism appear as separate categories in his speech, even though the New Zealand shooting was the deadliest terrorist attack in that nation’s history.) Goodale went further, telling his G7 colleagues that platforms that could not temper their algorithms “should expect public regulation...if they fail to protect the public interest.”

The Regulatory Gap

Automated systems audit our digital environment by sorting between allegedly worthy and unworthy information. Because AI shapes our digital environment by choosing and automating our exposure to friends, politics and commerce, these applications are also responsible for our socialization. In part for this reason, the national security community takes the spread of misinformation campaigns seriously and has begun to investigate the ways digital platforms can radicalize users. What is Canada’s approach to governing this challenge? Despite Goodale’s threat, Canada has so far refused to support global initiatives that seek to regulate AI’s negative impact on the security landscape. For example, Canada’s tone on lethal autonomous weapons systems, today’s archetypal ‘AI security’ issue, could be charitably described as placid. In a statement to the UN Convention on Certain Conventional Weapons, Canada asserted that “International Humanitarian Law is sufficiently robust to regulate emerging technologies.”

This is a departure from the way Canada treats AI at the domestic level, where even federal departments are mandated to use new policy instruments to assess the impact of automated decision-making. Within Canada’s federal service, the directive requires federal branches to conduct algorithmic impact assessments and to publicize the results. In a positive move, the directive also instructs federal branches to make their source code public on the Open Resource Exchange, the government’s public source code repository. At the domestic level and in comparison to other states, Canada is arguably a leader in AI. In 2017, the Government of Canada introduced a five-year pan-Canadian AI strategy, a $125-million initiative that was the first of its kind globally. The strategy’s development and implementation were awarded to the Canadian Institute for Advanced Research (CIFAR). CIFAR’s latest annual report does not mention foreign policy or national security, though the organization did provide funding for a 2018 workshop titled ‘AI and Future Arctic Conflicts’ and a 2019 UN workshop on arms control governance and AI. Clearly, Canada is invested in AI, but its approach to governing AI reflects a global landscape torn by new forms of instability.

International relations scholars have long argued that liberal democratic countries are responsible for setting the human rights agenda, acting as ‘norm entrepreneurs’ that other countries then emulate in order to gain entry into international society. Some scholars have even argued that the G20, a forum that includes countries like Russia, China and Saudi Arabia, functions as a space where states are more willing to adopt liberal values. (The G20 is tackling AI disruption in the workplace.) But the recent annual report released by Canada’s National Security and Intelligence Committee of Parliamentarians claims that, when it comes to AI at least, China’s and Russia’s ambitions pose risks to Canadian and global security. And the latest G20 meetings have also been marred by an illiberal turn, thanks to President Donald Trump’s attraction to authoritarian politics and his refusal to exert traditional US leadership.

G7 members are still ideologically aligned (the US notwithstanding), so the group remains Canada’s preferred forum for advancing the development of “human-centric” AI. Ahead of last year’s summit in Charlevoix, Canada and France issued a joint statement on artificial intelligence, reaffirmed an earlier G7 Innovation Ministers’ Statement that linked artificial intelligence to social context, and called for an international study group of government experts that would promote the development of “human-centric artificial intelligence grounded in human rights, inclusion, diversity, innovation and economic growth.” Yet, by further confining the AI human rights agenda to the G7, we risk excluding some of the world’s most marginalized groups from acting as stakeholders on issues that directly impact them. For example, the Rohingya in Myanmar have seen their own government use Facebook to amplify genocidal violence; so far, they must rely on Facebook’s internal corporate policy on hate speech to remove pages associated with the Myanmar military. What we are currently seeing is an AI human rights agenda whose protections are accessible only to some people, depending on their political identity.

Moving Forward

With this context in mind, there is a need for a baseline level of data literacy so that we can unravel AI’s impact on foreign policy. In comparison to other policy domains, foreign policy is notoriously unresponsive to democratic influence. AI further exacerbates this challenge when political decision-making becomes the target of algorithmic intervention.

AI reshapes global power in at least two fundamental ways. First, AI redistributes the physical infrastructure needed to exert influence. A number of ‘big player’ tech companies were founded in, and continue to steer their operations from, liberal democratic countries. Both state and non-state actors use this infrastructure, whether positively or negatively, and benefit from economic policies that promote AI innovation in liberal democratic countries. The impact of AI cannot be isolated to a subset of global players. Second, AI redistributes power itself. Individuals and groups who would traditionally have had sparse access to a global audience, whether for the purposes of extremist recruitment, mobilization or ‘likes,’ are now global players. For this reason, Canada’s approach to AI, which has focused on domestic innovation in coordination with its G7 partners, is insufficient for a country that claims loyalty to a liberal international order. Some citizens of liberal democratic countries may see the benefits of regulation, but as the literature on algorithmic discrimination illustrates, individuals from marginalized groups already bear the burden of AI’s worst excesses.

In response to the social challenges posed by AI, the data science community has begun developing applications connected to social good; fairness, accountability and transparency in algorithms is now a field of study with its own annual conference. There is also greater recognition that data scientists, who largely use AI to address social challenges, are political actors who share responsibility for the trajectory of global affairs. Employees at Google, Microsoft and Clarifai have protested the use of their work in military systems. In the case of Google, the employees were at least partially successful: Google’s contract with the US Department of Defense’s Project Maven (which plans to use AI to identify potential drone targets) was not renewed, and Google’s plan to develop a censorship-friendly search engine for China appears to be on hold.
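As a concrete illustration of the kind of question this fairness literature formalizes, here is a minimal sketch (with invented decisions and group labels) of one common check, demographic parity: whether an automated decision selects members of two groups at similar rates.

```python
# A minimal, hypothetical sketch of one fairness check studied in the
# fairness/accountability/transparency literature: demographic parity,
# i.e., whether an automated decision favours two groups at similar rates.
# The decisions and group labels below are invented for illustration.
import numpy as np

# 1 = favourable decision (e.g., content approved), 0 = unfavourable.
decisions = np.array([1, 1, 1, 0, 1, 0, 0, 1, 0, 1])
# Group membership for each case (two made-up demographic groups, "A" and "B").
groups = np.array(["A", "A", "A", "B", "B", "B", "B", "A", "B", "A"])

rate_a = decisions[groups == "A"].mean()
rate_b = decisions[groups == "B"].mean()

print(f"favourable-decision rate, group A: {rate_a:.2f}")
print(f"favourable-decision rate, group B: {rate_b:.2f}")
print(f"demographic parity gap: {abs(rate_a - rate_b):.2f}")
```

A large gap does not by itself prove discrimination, but checks of this sort are one way the field turns a contested social question into something a system's auditors can measure and publicize.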

When successfully mobilized, and because their technical expertise sustains AI’s role in foreign policy practice, technical professionals can expect their influence on the global landscape to grow alongside AI adoption. The challenge is that technical professionals possess a level of technical literacy that is not widely shared by the general public or, if the recent US congressional hearings with Facebook and Google are any indication, by lawmakers. Automated decision-making impacts everyone, but without greater data literacy, political decision-making becomes increasingly reserved for cadres of specialized professionals who may or may not have sufficient leverage to steer decision-makers away from unethical AI.

This article first appeared on OpenCanada.org.

The opinions expressed in this article/multimedia are those of the author(s) and do not necessarily reflect the views of CIGI or its Board of Directors.

About the Author

Sarah Shoker is the founder and CEO of Glassbox, a consultancy firm that trains software development and legal teams to identify how choices made along the technical pipeline can translate into bias.