Voluntary Codes of Practice for AI Lack Accountability

There should be a mechanism to investigate complaints and a means of punishing violators.

February 12, 2024
Governments should work toward informed legislation and international agreements to effectively govern AI, while promoting innovation and opportunity, the author argues. (Photo illustration/REUTERS)

Canada’s recent implementation of the Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems has been celebrated by some and pilloried by others, who deem it an anchor around the neck of innovation.

Rhetoric aside, this much is clear: To prevent unethical practices in artificial intelligence (AI) development and use, voluntary codes of practice (including Canada’s) must have built-in accountability, a mechanism to investigate complaints and a means of punishing violators. The draft code has none of these.

This was intended as a stopgap while legislation, such as the Artificial Intelligence and Data Act within Bill C-27, wends its way through the House of Commons. The bill is unlikely to become law until 2025 at the earliest, if it sees the light of day at all, given the possibility of a change of government before then. The Biden administration has also secured voluntary commitments on AI development from the largest AI developers in the United States.

But although the core principles of Canada’s draft code are sound, the fact that it is voluntary means there is no way of holding companies accountable to those principles. This could conceivably lead to an effect not unlike “greenwashing” around environmental issues, whereby companies present an image of ethical development, with their signature on the code proudly displayed, while engaging in unethical or unsafe practices.

And although Canada’s code is temporary, it’s not at all unimportant: It stands to be the main governing document on AI at a key moment of transition into the technology’s widespread use. Moreover, the model of a voluntary code for AI governance is likely to be implemented in other contexts where legislation is not possible or not favourable, including in higher education. The latter has been omitted from proposed and existing AI legislation, on the grounds of preserving academic freedom. If voluntary codes are to be used in any efforts to govern AI, those codes must include built-in accountability.


Consider this parallel: In Australia, a grocers’ code of conduct intended to “improve standards” has been widely criticized for lacking a proper mechanism for resolving disputes among retailers, suppliers and wholesalers. A 2018 report from Australia’s Treasury evaluating the entire code echoed this criticism, finding the code’s dispute resolution mechanism ineffective. The report also offers several suggestions for improving the mechanism, each of which is transferable to the AI governance discussion.

In the tech realm, the social media platform X (formerly Twitter) drew headlines in May 2023 when its CEO, Elon Musk, withdrew it from the European Union’s voluntary code against disinformation after the platform was called out for its lack of measures to combat disinformation. Since August 2023, however, EU legislation to curb disinformation on social media platforms has been in force. In that vein, X has been warned in the wake of allegations that the platform allowed disinformation to proliferate about the October 2023 attack on Israel by Hamas and the ensuing war in Gaza. As might be expected, the legislation offers far greater accountability than the voluntary code did.

The fact is that voluntary codes are not laws, and that is their main allure for governments that are unable to legislate or need an interim measure before legislation. The need is understandable, to a point. But whether a code is temporary or permanent, accountability is essential if it is to be effective. It’s worth noting here that unethical practices in AI, sophisticated and powerful as the technology is, are potentially far more damaging than those in the grocery industry. Clear and impactful guardrails are required.

For any voluntary code of practice for AI to be effective, therefore, the measures must include a mechanism for making confidential complaints that are investigated and lead to actionable remedies. Additionally, internal security testing and reporting must be verified and audited by an external body. And, finally, to give force to these investigation and reporting processes, companies’ status as signatories should be conditional on their demonstrated adherence and compliance.

These relatively simple steps would ensure that companies continue to engage actively with the principles of ethical AI and remain accountable to consumers, other businesses and the public at large.

Governments should work toward informed legislation and international agreements to effectively govern AI, while promoting innovation and opportunity. But where legislation is not possible or favourable, voluntary codes must be far more robust and accountable than they currently are — including in Canada.

Some might argue that adding teeth to such accords would scare off innovation and investment. The truth is that accountability is a small price to pay for confidence in any system, particularly one so new and potentially powerful.

The opinions expressed in this article/multimedia are those of the author(s) and do not necessarily reflect the views of CIGI or its Board of Directors.

About the Author

Matthew da Mota is a post-doctoral fellow at CIGI’s Digital Policy Hub, where he researches the uses and governance of artificial intelligence and large language models within universities and public research institutions.