The World Needs an Intergovernmental Panel on Digital Change

We badly need a global forum in which to share sound research about the positive and negative impacts of digital technologies.

March 30, 2023
The Finnish technology company Nokia announced the rebranding of its logo at the Mobile World Congress 2023 on March 2, 2023, in Barcelona, Spain. (Joan Cros/NurPhoto via REUTERS)

Digital technology has created immense opportunities for humanity. But its abuse can also threaten peace on a global scale. That great modern paradox is at the heart of the growing case for a new intergovernmental panel to assess the science of global digital change, akin to the United Nations’ Intergovernmental Panel on Climate Change: an Intergovernmental Panel on Digital Change (IPDC).

Before policy makers and multilateral organizations can make responsible decisions to safeguard global peace, foster equitable economic growth and respect human rights, they must first be able to agree on the basic facts about the impacts that digital technologies are having on our societies. We badly need a global forum in which to share sound research and insights based on rigorous analysis of the positive and negative effects of digital technologies. From that foundation, effective and coherent global policy can grow.

In a peaceful context, technological innovations can help us overcome the great challenges of human society — including poverty, inequality, discrimination, social and political exclusion, and the degradation of our natural environment. In a violent or militarized context, these same new technologies are at risk of being used by authoritarian regimes, or even democratic ones, to wage war, as well as to suppress their own people’s demands for justice, freedom and democracy.

Digital technologies, enhanced by artificial intelligence (AI), are already on their way to becoming tools of violence. For example, lethal autonomous weapon systems are being developed for national defence purposes by countries including Australia, China, India, Iran, Israel, Russia, South Korea, Türkiye, the United Kingdom and the United States. And killer robots may potentially be employed not only by the military but also by police, as was seen recently in the city of San Francisco.

Mass surveillance and modern tools of espionage are employed by many countries to monitor and control their citizens. China, for example, uses AI-powered surveillance technology — including facial recognition, social credit scoring and online activity tracking — to this end. The US government has likewise been criticized for the mass surveillance practices exposed by Edward Snowden in 2013, which involved collecting and storing massive amounts of data on American citizens, including phone records and internet activity. Other countries accused of deploying modern tools of espionage and mass surveillance against their citizens, with assistance from China, include Bolivia, Ecuador and Venezuela.

As revealed by the Forbidden Stories consortium, digital technologies are being used by private companies — such as Cambridge Analytica and the Israeli hacker team “Jorge” — and by the politicians and state actors who pay for their services, to spread misinformation, political propaganda and hate speech; manipulate elections; fuel political and social tensions; facilitate cyberattacks; and deploy other new forms of digital violence that can disrupt democracies.

At the same time, the “accountability gaps” of states and the private sector — situations in which no individualized remedy is available under international or national law, so that nobody can be held accountable for damage done to victims’ property, bodies or lives — are being exacerbated by the use of new technologies.

As legal scholar Rebecca Crootof observed in a recent essay for a CIGI series on the ethics of automated warfare and AI, the rapid development of sophisticated military technologies and AI is likely to increase the number of civilian deaths for which no one is held accountable. And Bonnie Docherty, a researcher for Human Rights Watch, points out that militaries and law enforcement agencies around the world are developing lethal autonomous weapon systems but as yet have been unable to agree on putting them under international control or restrictions as defined by a legally binding intergovernmental treaty.

Indeed, technology is evolving more quickly than policy makers can regulate it.

Climate change provides a powerful example of how certain human-made dynamics can create adverse impacts that are difficult to foresee and almost impossible to reverse once they have passed a certain tipping point.

Likewise, the development of new technologies, including AI, promises huge benefits to private corporations. Those potential benefits provide these companies with an incentive to lobby political decision makers.

Once technological tools to enhance security and surveillance are in place, it is unlikely that they would ever be abandoned; what politician would wish to take responsibility for a terrorist attack that could have been avoided had surveillance remained in place? Here again, we cannot escape the fact that the very technologies that promise more security also pose a significant threat to peace.

All of that makes an urgent case for an IPDC.

Similar to the Intergovernmental Panel on Climate Change, an IPDC could help advance scientific knowledge about the impacts of digital change on our societies.

Its overarching task would be to help decision makers make better choices on both a national and a global level. As a first milestone, the IPDC could work toward drafting a UN Framework Convention on Digital Change (again, in the mould of the United Nations’ Framework Convention on Climate Change).

We must take a precautionary approach and act now to ensure that digital technologies serve humanity rather than harm it. As noted by the UN Secretary-General’s High-level Panel on Digital Cooperation, it is time for the international community to make a “global commitment for digital cooperation.”

CIGI has often advocated for a similar idea — a Digital Stability Board.

The UN Summit of the Future coming up in September 2024, which aims at a New Agenda for Peace (and where a new Global Digital Compact is expected to be negotiated), could provide a once-in-a-generation opportunity to reinvigorate global action on this front. The summit also presents a political window of opportunity for establishing an IPDC.

Let’s come together to call for the creation of this body. It can help us better understand both the opportunities and the risks of these powerful technologies, and in turn help bring about a more peaceful, just and prosperous world.

The opinions expressed in this article/multimedia are those of the author(s) and do not necessarily reflect the views of CIGI or its Board of Directors.

About the Author

Evelyne A. Tauchnitz is a CIGI senior fellow and senior researcher at the Institute of Social Ethics ISE, University of Lucerne, Switzerland. Her expertise lies in digital technology, specifically global governance, peace and conflict research, ethics and human rights.