AI’s Footprint Is Small… Until It Isn’t (Part 2)

In the previous article, we talked about numbers; now it’s time to look at the stories surrounding them, the narratives that suggest there’s nothing to worry about. From the promise of ever-increasing efficiency to the idea that AI is greener than your lunch, these claims shape public perception far more than data alone. But do they hold up under scrutiny? Let’s take a look!

“AI is getting more efficient all the time. There’s nothing to worry about.”

It’s true that AI models have become more efficient in certain ways. Inference, the process of generating outputs, now runs on more advanced hardware, like NVIDIA’s latest “Blackwell” chips. These GPUs, the B100 and B200, are designed to deliver significantly more operations per watt than previous generations. According to NVIDIA’s marketing claims, Blackwell can cut the energy used for trillion-parameter model inference by up to 25× compared with the prior Hopper generation. On paper, that sounds like a major environmental improvement.

But focusing only on per-operation efficiency misses the bigger picture.

As AI becomes embedded into everything from email and search to document editing and mobile apps, it’s no longer something users deliberately invoke. In many cases, it runs automatically and invisibly, in the background. And as history shows across many technologies, when something becomes cheaper and faster to operate, it tends to be used more, not less. This is the rebound effect: efficiency lowers the cost per operation, which encourages more usage, often outpacing any per-unit savings. A single query might use less electricity than before, but the total number of queries keeps multiplying, and with it, total energy demand.
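
To make the rebound effect concrete, here’s a minimal back-of-envelope sketch in Python. The specific numbers (a 3× drop in energy per query, a 10× rise in query volume) are purely hypothetical placeholders for illustration, not measurements from any provider.

```python
# Back-of-envelope illustration of the rebound effect.
# Every number below is a hypothetical placeholder, not a measured value.

energy_per_query_wh_old = 3.0   # assumed energy per query before efficiency gains (Wh)
efficiency_gain = 3.0           # assumed 3x reduction in energy per query
queries_per_day_old = 1e8       # assumed daily query volume before AI was embedded everywhere
usage_growth = 10.0             # assumed 10x growth in volume once AI runs in the background

energy_per_query_wh_new = energy_per_query_wh_old / efficiency_gain
queries_per_day_new = queries_per_day_old * usage_growth

total_old_mwh = energy_per_query_wh_old * queries_per_day_old / 1e6  # Wh -> MWh
total_new_mwh = energy_per_query_wh_new * queries_per_day_new / 1e6

print(f"Energy per query: {energy_per_query_wh_old:.1f} Wh -> {energy_per_query_wh_new:.1f} Wh")
print(f"Total daily energy: {total_old_mwh:.0f} MWh -> {total_new_mwh:.0f} MWh")
# Per-query energy falls 3x, yet total daily energy still rises
# by a factor of 10 / 3 ≈ 3.3 with these placeholder numbers.
```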

Meanwhile, the new chips themselves still consume enormous amounts of power. The B200 GPU can draw up to 1,200 watts, roughly 70 percent more than the 700 watts of its predecessor, the H100. When deployed in large-scale systems, the total system draw can exceed 8 kilowatts. So, while efficiency per watt improves, total electricity use per deployment often rises just as fast, if not faster.
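
The same pattern shows up at the hardware level. The sketch below reuses the power figures quoted above (700 watts for an H100, up to 1,200 watts for a B200) and assumes, purely for illustration, a 2.5× gain in operations per watt and an eight-GPU server; neither assumption is a vendor specification.

```python
# Per-chip efficiency vs. absolute power draw.
# GPU wattages come from the text above; the ops-per-watt gain and the
# 8-GPU server configuration are hypothetical assumptions for illustration.

h100_watts = 700
b200_watts = 1200
ops_per_watt_gain = 2.5   # assumed improvement in operations per watt
gpus_per_server = 8       # assumed number of GPUs per server

power_ratio = b200_watts / h100_watts                 # ~1.7x more power per chip
throughput_ratio = power_ratio * ops_per_watt_gain    # ~4.3x more work per chip

print(f"Power per GPU:      {power_ratio:.1f}x higher")
print(f"Throughput per GPU: {throughput_ratio:.1f}x higher")
print(f"GPU power per server: {h100_watts * gpus_per_server / 1000:.1f} kW -> "
      f"{b200_watts * gpus_per_server / 1000:.1f} kW")
# Each chip does more work per joule, yet every server deployed
# still pulls more electricity from the grid in absolute terms.
```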

And inference is only part of the picture. Training large models like GPT-4 is vastly more resource-intensive. It often involves weeks or months of continuous computation across tens of thousands of GPUs. Yet OpenAI’s GPT-4 Technical Report, like most in the industry, does not disclose how much energy was used, what hardware was involved, or how much carbon was emitted. Without that data, it’s impossible to assess whether today’s efficiency gains during inference are meaningfully reducing the total environmental footprint or simply enabling expansion.

So yes, some parts of AI systems are becoming more efficient. But that doesn’t mean the systems are sustainable, and it certainly doesn’t mean their environmental impact is shrinking. Without full transparency into the lifecycle energy costs of training, deployment, and infrastructure, efficiency gains can easily be misunderstood or misused as a substitute for real accountability.

“AI’s environmental cost is smaller than most daily habits, like eating meat or driving a car.”

It’s true that the environmental footprint of a single AI query is currently smaller than driving a kilometer in a gasoline car or producing a hamburger. On a per-use basis, those comparisons can be technically accurate. But that doesn’t make them meaningful or useful.

A hamburger’s environmental footprint is typically calculated using full life-cycle analysis. That means it includes every stage of production: the water and energy used to grow the crops that fed the cow, the emissions from the cow’s digestion, the transportation, processing, packaging, and even the refrigeration before it reaches your plate.

In contrast, most popular comparisons for AI only consider the energy or water used to cool a data center during a single query. They leave out the enormous resources involved in manufacturing the servers, mining the metals, constructing and powering the data centers, training the model, and building and running the networks and end-user devices. So, while it might be technically true that an AI query uses less energy than a burger, the comparison is structurally flawed. It’s apples to oranges. If we want to compare fairly, we’d need to either include AI’s full system-wide impact or reduce the burger’s footprint to just the energy used to cook it.
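
To make the boundary mismatch explicit, here’s a small sketch that simply lists which life-cycle stages each popular figure covers. The stage names paraphrase the paragraph above; no numbers are attached, because the point is the unequal coverage, not any specific value.

```python
# Which life-cycle stages does each popular footprint figure include?
# Stage lists paraphrase the text above; no numeric values are assumed.

burger_stages = [
    "growing feed crops", "cattle digestion emissions", "transportation",
    "processing", "packaging", "refrigeration",
]

ai_query_stages = [
    "data-center electricity and cooling for one query",
]

ai_stages_left_out = [
    "server manufacturing", "metal mining", "data-center construction and power",
    "model training", "network infrastructure", "end-user devices",
]

print(f"Burger figure: {len(burger_stages)} stages (full life-cycle analysis)")
print(f"Typical AI-query figure: {len(ai_query_stages)} stage, "
      f"omitting at least {len(ai_stages_left_out)} others")
# Comparing these two figures as-is pits an entire supply chain
# against a single operating step: apples to oranges.
```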

More importantly, even if the per-use emissions are small now, that’s the wrong lens for assessing long-term sustainability. If we judged environmental relevance solely by what pollutes the most today, we’d overlook every fast-growing threat until it was too late. That’s precisely what environmental planning is designed to prevent. The most important time to interrogate a system’s impact is when it’s still being built, when its architecture, scale, and norms are still flexible. That’s exactly where AI is now.

AI is not yet one of the world’s largest emitters, but it is among the fastest-growing and least transparent. Bringing attention to AI’s environmental cost isn’t a call to ignore bigger problems. It’s a call to expand the conversation. Our climate challenges aren’t the result of any single industry or habit; they’re the result of unchecked accumulation. Small systems matter when they grow. So, the real question isn’t “Is AI worse than meat or driving?” but rather: “Is this a system whose environmental footprint is accelerating, and do we still have time to shape how that growth unfolds?” The answer, for now, is yes to both. But that window is closing fast.

“Now that we have official numbers, the debate is over.”

Releasing per-query electricity and water figures for ChatGPT is a step forward. For the first time, the public has a tangible data point to anchor the conversation. But it’s a mistake to treat these figures as the final word, or to assume they represent the environmental cost of AI as a whole.

What’s been shared so far reflects a narrow slice of one product’s operation: the energy and water needed to generate a single response. Remember: that doesn’t cover training, retraining, infrastructure buildout, or the supply chain required to support large-scale deployment. It doesn’t tell us how often models are updated, where the hardware is located, or what kind of energy mix powers it. And crucially, it tells us nothing about other models, including those developed by other companies, where no comparable data has been released.

That doesn’t make the figures meaningless. But it makes them incomplete and non-representative. Selective disclosure should not be mistaken for transparency. And per-query averages, while helpful, can obscure more than they reveal when they’re not paired with clear methodology, assumptions, or lifecycle context.

If we treat these one-time announcements as the end of the debate, we risk letting companies define the boundaries of accountability on their own terms. True environmental transparency means independent, auditable, system-wide reporting, not just a few handpicked metrics. It means disclosing how models are trained, where infrastructure is located, what energy sources are used, and how regional water withdrawals are managed, not just what a single query consumes under ideal conditions.

So yes, it’s good that we finally have a starting point. But that’s exactly what it is: a start. The debate isn’t over. For the first time, it’s just becoming possible to have it with “real” data on the table.

Not Shame, but Awareness and Responsibility

The goal here isn’t to shame you for using AI tools, or for not seeing the full picture. These systems are complex, their impacts are often invisible, and most people engage with them in good faith. It’s not on users to be experts. It’s on the companies building and deploying this technology to be transparent, accountable, and guided by long-term responsibility.

AI isn’t inherently harmful. But its growth trajectory matters. And right now, that trajectory is being shaped with too little public scrutiny.

Using AI shouldn’t come with guilt. But it should come with awareness. It should invite questions, and challenge us, as users, developers, and policymakers, to ask not just what these systems can do, but what they cost, and how we want them to evolve.

We don’t have to choose between AI and sustainability. But we do have to stop treating them as separate conversations.
