Europe Has Taken the Lead in Regulating AI: Now the World Must Step Up

Now is the time for everyone, everywhere, to get educated about the risks and benefits of the technology.

December 21, 2023
Ensuring that AI is developed in ways that serve the public interest will require participation from citizens and governments around the world, the author argues. (Photo illustration/REUTERS)

The European Union’s new artificial intelligence (AI) laws are on track to entirely overshadow Britain’s Bletchley Declaration on AI, completed just six weeks earlier. The text of the agreement on a suite of comprehensive laws to regulate AI is not finalized, and many devils will be found in the details, but its impending arrival signals a sea change in how democracy can steer AI toward the public interest.

The Bletchley Declaration was a huge achievement, especially for bringing countries such as China, Saudi Arabia and the United Arab Emirates to agree on a formal statement about AI regulation. The problem is that it was just that: a statement, with no legal power or enforcement mechanism. Now that the European Union is taking action to impose firm legal requirements on the developers of AI, it’s up to other countries to step up and complete the puzzle.

The final hurdle that negotiators cleared was over the question of which uses of AI would be banned outright. Prohibited practices include:

  • “cognitive behavioural manipulation,” a broad term for technologies that interpret behaviours and preferences with the intent of influencing our decisions;
  • the “untargeted scraping of facial images from the internet or CCTV footage,” a practice that is already in use by some companies that sell databases used for surveillance;
  • “emotion recognition in the workplace and educational institutions,” which could be used by companies to discipline, rank or micromanage employees;
  • “social scoring,” a dystopian surveillance tool used in China to rate individuals on everyday activities and allocate (or withhold) “social credit”;
  • “biometric categorisation,” a practice where characteristics such as skin tone or facial structure are used to make inferences about gender, sexual orientation or even the likelihood of committing a crime; and
  • “some cases of predictive policing for individuals,” which has already been shown to have racially discriminatory impacts.

But don’t breathe a sigh of relief just yet. In the same way that the climate crisis is a global problem and can only be solved if all countries reduce emissions, AI is global in nature and can only be kept in check by many nations working together. Powerful “general-purpose AI” (GPAI) systems, such as the one underlying ChatGPT, can churn out personalized misinformation and manipulation campaigns, non-consensual intimate imagery (NCII, sometimes known as deepfake pornography) and even designs for biological weapons.

If one part of the world regulates these tools but another releases unsecured, “open-source” versions that bad actors can weaponize at will, the whole world can still suffer the consequences. These bad actors could include Russia’s military intelligence agency, the GRU, or digital mercenaries (troll farms for hire), which may not have the funds or technology to build their own world-class models but could get hold of powerful AI tools built without safeguards and use them to try to manipulate elections around the world.

The planned EU AI Act is, unfortunately, not perfect. While it places laudably strong regulations on GPAI, including open-source systems, there are still gaps. If AI tools such as “undressing” apps are used to create NCII, it appears liability could fall only on the individual user creating this content, not on the developer of the AI system used to create it, according to one European Commission official I spoke to. I would prefer that developers be prohibited from distributing tools capable of causing such potentially irreparable harm, especially when children could be both perpetrators and victims.

Another worry is that the EU AI Act won’t come fully into force until at least 2026. Some parts of it will phase in sooner, and it is designed to be “future proof,” but AI tech is improving so quickly that there’s a strong possibility the technology will outrun legislation. This is an even bigger risk if the European Union stands alone on legislating AI.

The Bletchley Declaration, which came out of the first AI Safety Summit, was an important part of a series of parallel efforts taking place within the G7, G20, United Nations and Organisation for Economic Co-operation and Development. Follow-on AI Safety Summits are planned for South Korea and France in 2024.

Here are the most important binding regulations that these summits and parallel governance processes need to put in place:

  • Affirm the prohibitions on the uses described above.
  • Firmly regulate high-risk AI systems including GPAI, requiring thorough risk assessments, testing and mitigations.
  • Require companies to secure their high-risk GPAI systems and not release them under open-source licences unless they are determined by independent experts to be safe.
  • Clearly place liability for the harms that GPAI systems cause on their developers as well as their deployers.
  • Require that AI-generated content be “watermarked” in a way that can be easily detected by lay consumers as well as experts.
  • Respect the copyright of creators such as authors and artists when training AI systems.
  • And, finally, tax AI companies and use the revenue to protect society from any harms caused by AI, from misinformation to job losses.

Ensuring that AI is developed in ways that serve the public interest is a gargantuan task that will require participation from citizens and governments around the world. Now is the time for everyone, everywhere, to get educated about the risks and benefits of AI, and demand that your elected representatives take its threats seriously. The European Union has made a good start; now, the rest of the world needs to enact binding legislation to make AI serve you and your community.

This piece first appeared in The Guardian.

The opinions expressed in this article/multimedia are those of the author(s) and do not necessarily reflect the views of CIGI or its Board of Directors.

About the Author

David Evan Harris is a CIGI senior fellow, Chancellor’s Public Scholar at UC Berkeley and faculty member at the Haas School of Business.