The Canadian access to information (ATI) system has a laudable purpose. By creating rights of access to records belonging to, or under the control of, federal institutions, it seeks to enhance democracy and foster debate about the conduct of those institutions.
However, the reality is quite different. Information Commissioner Caroline Maynard, whose office investigates complaints from individuals and organizations about access to information, affirmed in March that she has “no confidence that bolstering Canadians’ right of access to information will figure prominently in the government’s financial priorities.”
Artificial intelligence (AI) only promises to make the system a lot worse. As more and more federal institutions turn to AI to deliver goods and services such as law enforcement, border control and immigration — either by using AI systems of their own design or, more likely, those procured from third parties — accountability and transparency are proving elusive. For example, when the Citizen Lab sought to investigate the screening of temporary resident visa applications using AI and automated decision making, the government did not answer a single one of the Lab’s 27 separate ATI requests.
Several things must change.
First, Canada should bolster the public interest override when it comes to access to records.
The current system allows the federal government to disclose third-party records — such as a particular contractor’s internal source code, algorithms, data or due diligence reports — where disclosure is in the public interest as it relates to public health, public safety or the protection of the environment. But when I asked a dozen ministries working in these areas whether they had ever used the override, all confirmed they had not.
This is also true of ministries working closely with AI. Public Services and Procurement Canada (PSPC), the central purchasing agent of government that maintains the AI source list for government purchasing, has never used the public interest override. Neither has Innovation, Science and Economic Development (ISED), the department responsible for the Pan-Canadian AI Strategy.
Indeed, it is not clear any federal institution has ever used the public interest override. All of the major federal institutions I have asked have told me they have never used it.
Second, we need to take more seriously the problem of trade secrecy. As the Canadian Intellectual Property Office advises, trade secrecy — protecting information that is not widely known, has economic value, and is subject to reasonable steps to maintain its secrecy — is the main form of intellectual property protection for AI’s essential components.
But Canada’s ATI framework is unnecessarily protective of third-party trade secrets, such as those embedded in the AI systems the government procures from private parties to deliver or operate services. Even if the public interest override were put to use, the ATI framework completely exempts trade secrets from its reach. It does not do that for any other subject matter. The government’s newly proposed Artificial Intelligence and Data Act is also overly protective of confidential business information.
This protection allows for little accountability or transparency in the federal government’s use of AI systems procured from third parties.
For example, the Treasury Board of Canada Secretariat — which oversees the administration of the ATI system — is itself using third-party AI to try “to improve user experience.” To do so, it has paid $4 million — the equivalent of about five percent of the system’s current annual budget — to GC Strategies, which bills itself as an “IT staffing firm,” to subcontract this work to other private actors.
This type of subcontracting gives rise to another problem. As PSPC recently told Parliament: “For confidentiality reasons, the Government of Canada doesn’t disclose the names of companies that have worked as subcontractors for one of its suppliers, as it is considered third party information.” In other words, Canadians are not even allowed to know the names of private actors to whom critical governance decisions are increasingly being outsourced via their technologies’ automated and algorithmic functions.
This is not what AI governance should look like.
Finally, we should take non-binding and advisory instruments like the Directive on Automated Decision-Making, which “requires” federal institutions to conduct algorithmic impact assessments, and make them actually binding. The government’s newly proposed Artificial Intelligence and Data Act should also be amended to apply to federal institutions.
Canada’s ATI system has been desperately in need of repair for years. But as Canadians are increasingly governed using AI technologies in ways that are neither accountable nor transparent, the current state of the ATI framework does not promise to make things better.
We must hurry to change that.
This piece first appeared in The Globe and Mail.