The Planetary Implications of AI: What We Don’t Know Could Hurt Us

AI’s planetary impact is uncertain. From Climate Week NYC to COP31, the choices made now will decide whether AI drives regeneration or deepens climate risks.

September 16, 2025
Cornelia Walther
Jupiter, Europe’s new exascale-class supercomputer powered by Nvidia chips, during its ceremonial launch. Jupiter is expected to reach over 90 exaflops of AI performance. (Jana Rodenbusch/REUTERS)

When Climate Week NYC opens on September 21, 2025, it will do more than spark conversations in Manhattan. It could send ripples toward COP30 in Belém, Brazil (November 10–21, 2025), and even further, into the negotiations shaping the post-2030 climate agenda at COP31. What happens in these months will influence whether artificial intelligence (AI) becomes a regenerative ally or an accelerant of planetary stress.

As it stands today, AI is a double-edged sword. On the hopeful side, it is already helping forecast energy demand, discover better batteries for electric vehicles, and optimize industrial processes in both small and medium-sized enterprises and large companies. In medicine, for example, it can help predict the success of future vaccines and accelerate cancer research. On the negative side, however, AI’s electricity and water needs are soaring. The International Energy Agency (IEA) estimates that data centres consumed around 415 terawatt-hours of electricity in 2024 (about 1.5 percent of global use) and that this figure could more than double by 2030.

To put this in perspective, one terawatt-hour of electricity can supply roughly 100 million homes for one hour, although the exact number depends on the average power consumption per home. Recent analysis also suggests that AI already accounts for up to 20 percent of data centre consumption and could rise to nearly 50 percent by the end of this year. These numbers are staggering, not least because they remain highly uncertain: projections depend heavily on how efficiently AI is designed, deployed and governed.
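For readers who want to sanity-check these orders of magnitude, the back-of-envelope arithmetic is simple. The sketch below is illustrative only: the global demand figure and the household power draws are assumptions chosen to show how strongly the answer depends on them, not official statistics.

```python
# Back-of-envelope check of the figures above. GLOBAL_DEMAND_TWH and the
# household power draws are illustrative assumptions, not IEA data points.

DATA_CENTRE_TWH_2024 = 415        # IEA estimate cited in the text
GLOBAL_DEMAND_TWH = 28_000        # assumed global electricity use, order of magnitude

share = DATA_CENTRE_TWH_2024 / GLOBAL_DEMAND_TWH
print(f"Data centre share of global electricity: {share:.1%}")  # ~1.5%

# How many homes could one terawatt-hour supply for one hour?
# 1 TWh = 1 billion kWh; the answer scales with the assumed draw per home.
TWH_IN_KWH = 1e9
for avg_draw_kw in (1.2, 10.0):   # assumed average household power draw, in kW
    homes_millions = TWH_IN_KWH / avg_draw_kw / 1e6
    print(f"At {avg_draw_kw} kW per home: ~{homes_millions:.0f} million homes for one hour")
```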

Despite the headlines, one truth remains: We still do not know the full ecological footprint of AI. Companies are not required to disclose how much energy or water is used for training models, nor the carbon intensity of the grids where data centres sit. Even when disclosure occurs, it often focuses narrowly on usage instead of the full life cycle. For example, Google now reports that a single Gemini text query emits about 0.03 grams of carbon dioxide and uses the equivalent of five drops of water — a surprisingly small figure, yet one that excludes the enormous energy costs of training and of multimodal queries. Transparency remains piecemeal, meaning the public, policy makers and many businesses are navigating blindly.
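To see why even a tiny per-query figure matters, consider a rough aggregation. The daily query volume in the sketch below is a hypothetical assumption chosen purely for arithmetic, not a disclosed Google number, and the totals still exclude training and multimodal workloads.

```python
# Illustrative scaling of Google's reported per-query footprint. The daily
# query volume is a hypothetical assumption, not a disclosed figure, and
# training and multimodal queries are excluded, as the text notes.

CO2_PER_QUERY_G = 0.03            # grams of CO2 per text query (Google's figure)
WATER_PER_QUERY_ML = 0.26         # millilitres per query, roughly "five drops"
ASSUMED_QUERIES_PER_DAY = 1e9     # hypothetical volume

annual_co2_tonnes = CO2_PER_QUERY_G * ASSUMED_QUERIES_PER_DAY * 365 / 1e6
annual_water_m3 = WATER_PER_QUERY_ML * ASSUMED_QUERIES_PER_DAY * 365 / 1e6

print(f"~{annual_co2_tonnes:,.0f} tonnes of CO2 per year")        # ~11,000 t
print(f"~{annual_water_m3:,.0f} cubic metres of water per year")  # ~95,000 m3
```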

Beyond Electricity: Water and Waste

The planetary toll of AI stretches far beyond terawatt-hours. Cooling data centres requires millions of litres of water each day, straining not only water-scarce US regions such as California but also areas that had abundant water before the recent AI boom. For example, in Malaysia, the three states of Johor, Selangor and Negeri Sembilan together host 101 data centres, which require 808 million litres of water per day (MLD), whereas the current infrastructure can provide only 142 MLD. AI’s water footprint could double by 2030, exacerbating tensions in vulnerable regions. At the same time, rapid hardware turnover is feeding a surge in electronic waste: analysts warn that AI could account for up to 12 percent of global e-waste by 2030 if current trends continue. Considering these hidden costs (water, waste and mining for rare materials), it becomes apparent that AI consumes not only energy but also many other planetary resources.
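The Malaysian figures make the gap plain; a minimal sketch of that arithmetic, using only the numbers already cited above, looks like this.

```python
# Gap calculation for the Malaysian example above, using the cited figures.
REQUIRED_MLD = 808   # demand from 101 data centres, millions of litres per day
SUPPLY_MLD = 142     # what current infrastructure can provide

shortfall = REQUIRED_MLD - SUPPLY_MLD
print(f"Shortfall: {shortfall} MLD ({REQUIRED_MLD / SUPPLY_MLD:.1f}x current supply)")
# Shortfall: 666 MLD (5.7x current supply)
```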

Yet to dwell only on the downsides would be misleading. The very systems consuming power could also help reduce that consumption. AI is already being used to enhance the efficiency, security and resilience of electricity grids through real-time data analysis, predictive maintenance, demand-response optimization and automated fault detection, cutting curtailment and waste in the process. It can accelerate the discovery of climate-friendly materials such as advanced batteries or catalysts. It can optimize agriculture to reduce inputs and emissions. As the IEA notes, AI is both a driver of electricity demand and a potential optimizer of energy systems if deployed deliberately. Whether it turns disastrous or delightful depends on how we design and direct it.

That is precisely why Climate Week NYC must be seen not as an isolated gathering but as a spark that shapes the narrative toward COP30 and beyond. Belém will host debates not just about national pledges but also about the technologies underpinning them. By COP31, the world will be negotiating a post-2030 agenda — one in which AI is no longer an optional add-on but an essential part of the infrastructure. The question is whether it will be treated as an unregulated burden or as a consciously designed tool for regeneration.

The Four T’s of Pro-social AI

A useful way to think about this future is through the four T’s of pro-social AI: tailored, trained, tested and targeted.

First, AI must be tailored to the task, using smaller, efficient models where possible and not applying oversized systems to trivial queries. Second, it must be trained with transparency, with companies disclosing the true energy and water costs of building large models. Third, it should be rigorously tested in real-world conditions, using context-specific data on grid intensity, cooling demands and water stress. Finally, it must be targeted deliberately toward applications that serve people and the planet, such as climate modelling, energy optimization and conservation.

If there is one shift that must occur between Climate Week, COP30 and COP31, it is this: Regenerative intent must be built into AI by design. This means situating data centres near clean energy, adopting water-free cooling, recycling hardware and embedding sustainability metrics into procurement and executive performance. None of this happens by default; it requires deliberate decisions, policies and incentives.

So what does this mean in practice? The A-Frame offers a simple guide. It begins with awareness, encouraging us to map the full footprint of AI use — not only the outputs but also the energy, water and hardware that make it possible. Next comes appreciation, which is about choosing the right tool for the right task, valuing efficiency and context rather than defaulting to the most powerful option. Then there is acceptance, which requires recognizing trade-offs openly; transparency about the knowns and unknowns, such as the real environmental costs that remain only partially understood, helps build trust. Finally, accountability ties metrics to outcomes by making emissions, water use and e-waste part of contracts, procurement and public reporting.

AI is neither saviour nor saboteur; it is simply a mirror of our choices. As the world moves from Climate Week through COP30 and into COP31, the task is clear: to shape AI so that it is tailored, trained, tested and targeted for regeneration. In doing so, we decide whether the Janus face of AI smiles toward a livable planet — or turns away.

The opinions expressed in this article/multimedia are those of the author(s) and do not necessarily reflect the views of CIGI or its Board of Directors.

About the Author

Cornelia C. Walther is a visiting fellow at the Wharton Neuroscience Initiative/Wharton AI & Analytics Initiative, as well as an adjunct associate faculty member at the School of Dental Medicine at the University of Pennsylvania.