Canada has established itself as a leader in the research and development of artificial intelligence (AI). But not all AI is created equal — or with equal application of ethics — and with that in mind, Quebec is aiming for international leadership on ethical AI and digital technologies.
Quebec’s provincial government recently announced the formation and funding of the International Observatory on the Societal Impacts of Artificial Intelligence and Digital Technologies — known by its French acronym OIISIAN — to be based in Montreal. The observatory has a few “axes” or areas of focus: health, work, security, human rights, the arts, the environment and Quebec’s North.
Rémi Quirion, Quebec’s chief scientist, launched the observatory by asserting that the “project demonstrates the leadership role that Québec intends to take on in the socially responsible development of AI.”
This initiative builds on the Montreal Declaration for a Responsible Development of Artificial Intelligence. The declaration emerged from the work of academics, researchers and citizens who participated in the now annual Forum on the Socially Responsible Development of AI, as well as citizen roundtables and workshops organized in Montreal and elsewhere.
The province of Quebec is funding this observatory in the hopes that it will not only act as a provincial centre of excellence but eventually as a global one, eligible for funding from multilateral bodies such as the United Nations or the Organisation for Economic Co-operation and Development. With this in mind, the observatory has already established ties with related research institutions in the United States and Europe.
In an interview with Forbes, AI researcher Yoshua Bengio — one of the authors of the Montreal declaration — argued that Canada was well positioned to take the lead on ethical AI: “We bring not just the science but also humanist values that I think are really important because AI is going to change society and we want it to change for good.”
Marie-Josée Hébert, the vice-director of research, discovery, creation and innovation at Université de Montréal and a member of the steering committee to create a research cluster in AI, substantiates this sentiment. She emphasizes the importance of interdisciplinarity: “To ensure that the development of artificial intelligence makes a positive contribution to society, it must necessarily be supported by the reflections of researchers in the humanities and social sciences. In Québec, this sector is as rich as the field of artificial intelligence, and the observatory project will bring the two talent pools together to generate knowledge and know-how that will instill confidence in the future.”
The observatory is seeking the holy grail of research — bringing different perspectives together to collaborate. Of course, this pursuit has never been easy in an academic environment; the humanities and social sciences have been facing cuts, while AI and computer science have received a dramatic increase in funding.
This disparity between disciplines will make collaboration difficult, but the observatory has an opportunity to lessen the deficit of AI policy that Canada — and the world — faces.
Jonathan Roberge, the Canada Research Chair in New Digital Environments and Cultural Intermediation at Université du Québec, is skeptical that the observatory can achieve this kind of interdisciplinary collaboration.
In an email interview, he argued that an observatory on the social impact of AI implies a distant and objective perspective on the phenomenon it wants to comprehend.
“The fundamental flaw of the observatory launched last week by Québec's chief scientist is that it achieves [neither distance nor objectivity]. The consortium led by the University of Montréal's Institute for Learning Algorithms is meant to have a superficial, if not self-indulgent look at itself by and for computer scientists.”
Roberge’s concern raises an important point: instead of being critical of the social impacts of AI, the observatory could instead legitimize AI by suggesting that these social impacts can be effectively managed and resolved.
Julia Powles, a research fellow in the Information Law Institute at New York University, and Helen Nissenbaum, a professor at Cornell Tech, touch upon these issues in their recent essay “The Seductive Diversion of ‘Solving’ Bias in Artificial Intelligence”: “The preoccupation with narrow computational puzzles distracts us from the far more important issue of the colossal asymmetry between societal cost and private gain in the rollout of automated systems. It also denies us the possibility of asking: Should we be building these systems at all?”
This point suggests what may be the ultimate blind spot for the observatory: can it actually be critical of AI if it is run by those actively creating AI?
When the province of Quebec initially made its call for proposals to create the observatory in May 2018, a consortium of universities and researchers across the province came together in the hopes of creating a single united bid. This unity did not last, however, as a clear division emerged between two groups: one desired research autonomy and independence, with limited cooperation with industry, while the other saw public-private partnerships as the path forward.
The latter group was ultimately successful and received the funding to establish the observatory. Jonathan Roberge was part of the dissenting group and is highly critical of the role of the private sector and the impact it will have on post-secondary education and research. “The idea that the Observatory will ignite new efforts toward better governance is misleading at best. The Montreal hub revolves around a very small group of public and private actors that redesign universities around public-private partnerships. The consequence is that higher education will become more like Silicon Valley, something Chris Slater at Concordia has called the ‘Stanfordization’ of Québec’s knowledge economy.”
The observatory says it will do everything possible to preserve its research autonomy and independence, while mitigating any risks arising from conflicts of interest.
In an email interview, Réjean Roy, a special adviser to the observatory’s scientific director, noted that while AI and related companies will take part in the observatory’s projects, they will play “NO role in the governance of the observatory,” as they will not be members, and will not take part in selecting the observatory’s board of directors.
Roy also indicated that the observatory will establish an external committee on ethical governance and conflicts of interest, which will define an ethical charter that every organization must agree to in order to partner with the observatory.
Another critique of this group (and others) is that government funding of AI research will only reinforce existing divisions and inequalities within Canada. As it stands, Montreal, Toronto, Edmonton and — to a lesser extent — Vancouver have been the primary beneficiaries of AI funding. The observatory could further enrich these urban centres at the expense of the rest of Canada.
Roy noted that seven of the observatory’s 18 founding post-secondary institutions are based outside of Montreal, and that more will be added once the institution is up and running.
Funding aside, the concern that the observatory — and similar attempts to understand the impact of AI — is being created in an echo chamber is a valid one, and it’s not new to academia or to the technology sector.
Powles and Nissenbaum articulate this concern. “In accepting the existing narratives about A.I., vast zones of contest and imagination are relinquished. What is achieved is resignation — the normalization of massive data capture, a one-way transfer to technology companies, and the application of automated, predictive solutions to each and every societal problem.”
It seems clear that a new pot of provincial funding is not a panacea, and neither is AI. While AI technology can be an effective tool, sometimes the most responsible use is no use at all. According to Bengio, those discussions must happen both outside of and alongside the work underway at Montreal’s new observatory.
“I think ethics should be part of the training not just of the grad students that I’m working with. It should be taught at undergrad to everyone, not just people doing technology,” Bengio told Forbes. “Philosophy should be taught in primary school because we need the next generation to understand how to think by themselves, how to reason. Step away from their immediate fears and immediate desires so that society over the long run can move in the right direction.”
These words suggest that the individuals involved in the observatory value this kind of critical thinking. But if the Quebec-based observatory is to become a truly global one, it must move beyond technocentric perspectives to include philosophers and dissidents, who raise important points about the challenges ahead in understanding ethical AI — and in writing the related policy.