People are increasingly reliant on artificial intelligence (AI) — that is, the machines, systems or applications that are capable of performing tasks that, until recently, could only be performed by a human.
Think of your morning routine: maybe a Google Assistant checks your calendar and reminds you of your meetings. Then you survey Twitter, which uses algorithms to curate what you see — the latest about Trump, trade and technology rise to the top. From there, your smartwatch coaches you through your workout, telling you about both your progress and potential. And at the end of it all, when you settle in for some Netflix, your profile suggests a few thrillers you’re likely to binge-watch.
Just as we depend on AI, many applications and devices powered by AI depend on cross-border data flows to fuel and train them. Every day, an incredible amount of data flows through the internet, over borders, and between individuals, firms and governments to power everything from Siri to a Google search. Because all of this movement is directly or indirectly associated with a commercial transaction, such data flows are essentially traded.
Unfortunately, the average AI user — the one who relies on the machine learning behind Google Assistant, or Twitter curation, or a smartwatch — probably doesn’t know that trade agreements govern AI. If they did, polling data reveals that they might call for stronger privacy requirements, better disclosure and a fuller national debate about how firms use algorithms and publicly generated data.
The public needs such information to assess if these algorithms are being used unethically, used in a discriminatory manner (to favor certain types of people), or used to manipulate people — as was the case in recent elections.
To date, data flows related to AI have been governed by World Trade Organization rules drafted before the invention of the internet. Because this language was originally written to govern software and telecommunications services, it addresses data flows only implicitly and is out of date. Today, trade policymakers in Europe and North America are working to link AI to trade with explicit language in bilateral and regional trade agreements. They hope this union will yield three outputs: the free flow of information across borders, large markets to help train AI systems, and the ability to limit cross-border data flows in order to protect citizens from potential harm.
As of December 2017, only one trade agreement, the Comprehensive and Progressive Agreement for Trans-Pacific Partnership — the CPTPP, formerly the TPP — includes explicit and binding language to govern the cross-border data flows that fuel AI. Specifically, the CPTPP (which is still being negotiated) includes provisions that make the free flow of data the default, require nations to establish rules protecting the privacy of individuals and firms providing data (a privacy floor), ban data localization (requirements that data be produced or stored on local servers), and bar all parties from requiring firms to disclose source code. These rules reflect a shared view among the 11 parties: nations should not be allowed to demand proprietary information as a condition of facilitating cross-border data flows.
The United States (which withdrew from the TPP) wants even more explicit language related to AI; its trade diplomats recently proposed that NAFTA, which is also currently under renegotiation, include language banning mandated disclosure of algorithms as well as source code. The United States wants to ensure that its firms will not be required to divulge their source code or algorithms even if the other NAFTA parties see such disclosure as legitimate and necessary to prevent discrimination or disinformation, or to protect their citizens' ability to make decisions regarding their personal information (autonomy).
Like most trade agreements, the CPTPP and NAFTA also include exceptions, under which governments can deviate from the rules delineated in these agreements to achieve legitimate domestic policy objectives. These objectives include rules to protect public morals, public order, public health, public safety, and privacy related to data processing and dissemination. However, governments can invoke these exceptions only if the measures are necessary, implemented in the least trade-distorting manner possible, and do not restrict the transfer of information more than is needed to achieve that government's objectives. Policymakers will need greater clarity about how and when they can take these steps to protect their citizens against misuse of algorithms.
On that front, the European Union is leading the way.
The 28 (soon to be 27) member states of the European Union have agreed to pool resources and share governance for a digital single market that builds financial support for innovation, including AI, and establishes clear and explicit rules protecting personal data under the General Data Protection Regulation.
The European Union has also introduced limits on the use of algorithms as a human right in the regulations underpinning the digital single market. Article 21 grants anyone the right to opt out of ads tailored by algorithm, and Article 22 allows citizens to contest legal or similarly significant decisions made by algorithms and to appeal for human intervention.
Unfortunately, these regulations are not perfect. Such measures raise the costs of obtaining data and, as a result, could stifle innovation and competition. Moreover, some European courts have interpreted these rules as applying to the global internet and not just within the European Union. Some companies see that as a form of extraterritoriality.
While the European Union has taken a momentous first step in building a digital single market and encouraging public debate, the member states have not yet agreed to rules that fully govern data flows and AI in trade agreements with other countries. Policymakers are concerned that other nations will seek to dilute the European approach to protecting privacy and empowering citizens to challenge the use of AI.
Clearly, the European Union’s approach does not reflect North American norms when it comes to regulation. But NAFTA renegotiations, assuming they aren’t halted by United States President Donald Trump, provide an opportunity to begin a discussion in North America on how to encourage the data flows that power AI while simultaneously protecting citizens from misuse or unethical use of algorithms. Canada should lead the way, given its commitment to human rights and its comparative advantage in machine learning.