Chatbot energy scores are welcome - but don't forget the toxic context of AI overconsumption
Overconsumption orchestrated by the suppliers of a harmful product... sounds familiar?


Julien Delavande, an engineer at AI research firm Hugging Face, has developed a tool that shows in real time the power consumption of the chatbot generating your query, and presents that energy value (in watt hours) with some understandable context.

Julien's post also includes a nice video demo of what this looks like.

I think this is cool. Companies like OpenAI disclose literally nothing about the energy or climate impacts of their operations, despite the high chance that they're absolutely massive. Microsoft discloses reams of climate and energy data, but not a jot on the specific impacts of generative text and image systems it provides.

So this HF tool is a fun and cheeky way of forcing the energy impact into prominence, and making it more widely known. Folks like Dr. Sasha Luccioni and Boris Gamazaychikov have also been pushing hard and consistently on this front, likening energy disclosures to nutrition labels on food that help drive informed choice. I think it's smart and useful.

I guess you can tell this was coming - there's a 'but'. The context here is that the tech companies themselves are interested in stimulating overuse and overreliance on chatbot and image generation systems, actively trying to force-feed consumers this product whether they like it or not.

Think about WhatsApp and Instagram slapping buttons and prompts for chatbots all over their user interfaces, Google's front-and-centre "AI Overview", or Microsoft intrusively and aggressively demanding that you engage with Copilot instead of doing work. There is a reason you can never turn any of these features off: users will only use them if bullied into doing so.

Thanks to the monopoly positions many of these companies hold, it's working. If you aren't a chatbot guy yourself, there is a 100% chance you know one: the kind of dude who will use generative systems for everything. I recently saw examples on LinkedIn of a guy who used ChatGPT to draw small red circles onto a photograph, and another (an 'environmentalist') who used a generative image system to fake a photo of a simple diagram drawn on a post-it note on a desk (why didn't you just draw it on a post-it??). It is undeniable that adoption is rising fast.

It's tough to find good surveys on ChatGPT reliance, but this Pew Research Center study found a quarter of teens in the US use it for homework. What stood out was that 29% think it's acceptable to use ChatGPT to solve math problems.


[Image: Pew Research Center survey results on teen ChatGPT use]

And a YouGov survey that just came out shows that "simple math" sits at #1 on the ranking of what actions chatbots are perceived to be 'best at', with "complex calculations" sitting at #5:


[Image: YouGov survey ranking of what chatbots are perceived to be 'best at']

I think this is a nice comparison point, because large language models absolutely suck at doing math. A system that generates convincing-sounding human language will spit out something that sounds vaguely right, but will often be wrong. The image below shows multiplication accuracy for a couple of chatbots as a function of the number of digits in the calculation (somewhat absurdly framed by the researcher as o3 "struggling past 13 digits" - I would argue anything less than 100% accuracy is flat-out failure):

[Image: heatmap of multiplication accuracy by digit count]

It is easy to do this yourself: generate two random numbers, punch them into your trusty pocket calculator, and then do the same with ChatGPT.
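If you'd rather script the check than reach for a pocket calculator, a few lines of Python make the point. This is just an illustrative sketch: Python's integers are arbitrary precision, so the product is exact, and it gives you ground truth to compare a chatbot's answer against.

```python
import random

# Generate two random 5-digit numbers and compute the exact product.
# Python integers are arbitrary precision, so this result is exact.
a = random.randint(10_000, 99_999)
b = random.randint(10_000, 99_999)
print(f"{a:,} x {b:,} = {a * b:,}")

# The worked example from this piece, verified exactly:
assert 71_273 * 18_705 == 1_333_161_465
```

Paste the printed question into a chatbot and compare its answer to the exact product.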

71,273 times 18,705, for instance, is 1,333,161,465. I put that same calculation into ChatGPT:

[Image: ChatGPT's answer to the multiplication]

It sounds like it might be right. Plenty of the digits are. You could never verify it with your own brain. And if you were a student relying on this calculation, you'd fail miserably. (I asked the same question four times in a row, and OpenAI's program spewed out similarly wrong answers each time, paired with cheery output like "Looks like I was off by 200 earlier—thanks for asking again!" - companies use this fake anthropomorphic tone to help sustain the illusion that users are talking to a conscious agent rather than a statistical generator.)

I gave the Hugging Face tool the same question:

[Image: the Hugging Face tool's answer and energy reading]

In addition to the wrong answer (now wrong by two orders of magnitude), I got a bunch of fluffy text which I assume is mostly wrong, too.

As with ChatGPT, the only right answer here would have been: "This is a language model chatbot, and it cannot return accurate answers to calculations. Please consider using a calculator."

The model consumed 0.2864 watt hours to generate that answer. I asked my Bluesky crew what it would take to perform a similar calculation on a pocket calculator. It turns out the number is so small that it's quite hard to pin down. Let's use the Casio fx-83GT CW (which can return up to 23 digits). It draws 0.0008 watts when operational, and our calculation would take about 0.5 seconds - so it consumes roughly 0.00000011 watt hours to deliver a consistently correct answer.

That is 2,577,600 times the energy consumption, to deliver the wrong answer.
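The back-of-envelope arithmetic is easy to reproduce. The figures below are the ones quoted above; note that the calculator's power draw and the half-second solve time are rough estimates, not lab measurements:

```python
# Energy comparison: chatbot reading vs an estimated pocket calculator.
chatbot_wh = 0.2864                 # Hugging Face tool reading, watt hours
calc_watts = 0.0008                 # Casio fx-83GT CW draw while operational
calc_seconds = 0.5                  # estimated time to perform the multiplication

calc_wh = calc_watts * calc_seconds / 3600  # watt-seconds -> watt hours
print(f"calculator: {calc_wh:.8f} Wh")      # ~0.00000011 Wh
print(f"ratio: {chatbot_wh / calc_wh:,.0f}x more energy for the chatbot")
```

Change the estimates and the ratio moves, but it stays in the millions either way.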

I imagine someone looking at the energy rating for their chatbot and seeing it was only 1.5% of a phone charge would conclude it was a pretty damn efficient way of doing things. It does not feel like a lot of energy. Now imagine the context shifting slightly: imagine a 5-digit multiplication on your phone's calculator app instantly chewing up 1.5% of your battery.
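For what it's worth, the "1.5% of a phone charge" framing roughly checks out, assuming a typical smartphone battery of around 19 Wh (about 5,000 mAh at a 3.85 V nominal voltage - my assumption, not a figure from the tool):

```python
# Assumed battery: ~5,000 mAh at 3.85 V nominal, i.e. about 19.25 Wh.
battery_wh = 5.0 * 3.85
chatbot_wh = 0.2864  # the tool's reading for the multiplication query

print(f"{chatbot_wh / battery_wh:.1%} of a full charge")  # ~1.5%
```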

It is good to present clarity on the energy cost of using ChatGPT - but it is also very important to present it in the context of companies encouraging the use of that tool to replace actually-accurate alternatives that are millions of times more energy efficient.

Society-damaging overconsumption isn't new - but it's never happened like this before

Modern chatbots are actively designed to encourage false confidence and overreliance.

Chatbots are rarely programmed to refuse to deliver an answer. Someone recently discovered that if you make up an idiom and put it into Google Search, the "AI Overview" will fake an explanation of it. This is demand fabrication at work: it is unthinkable to Google's engineers that it might be better to say nothing at all than to offer machine-generated lies. Google do not want to operate an efficient business, and they do not want to deliver good, useful or truthful products. They want to prop up "AI" hype.

[Image: Google's "AI Overview" explaining a made-up idiom]

I think we have barely begun to experience the consequences of company-induced overconsumption of chatbot text generation systems. I think the magic trick pulled here - where outputs are aesthetically convincing and sometimes right enough to discourage doubt or skepticism - is shockingly effective.

It reminds me of how car companies have successfully marketed wildly oversized cars, because it helps their profit margins. I am also very much reminded of how oil companies very aggressively pushed for the spread of single-use plastics across society, despite no one actually asking for it.

Both of these examples have caused harm to living things, and both have served as an endless demand sink for fossil fuel products. I think it is clear that generative tools fit both of those criteria perfectly.

There is a great piece out in DeSmog today about how the rising power demand for these systems is directly incentivising fossil fuel production and consumption, and that none of this is a secret: both AI advocates and fossil fuel companies / states are openly bragging about the co-benefits of this symbiosis.

It is a smart idea to put the energy consumption of AI front and centre; I like all of these efforts. But man, does that project change in tone when you consider how vital it is to fight back against the push to carpet-bomb the entire world of software and the internet with chatbots and other generators. The ultimate goal is not more 'efficient use' - the ultimate goal can only be an absolute decrease in use, bringing it back down to reality and deploying these tools only where they make real sense. That is what leads to a material cut in the planet-heating and society-damaging effects of these systems.

*Update 24/04/2025 - added heatmap of calculation accuracy and new YouGov survey

Markus "Zaios" Krug

Lucidiy & Awareness Architect | Co-Evolutionary Guide for Human an AI | Ethical Philosopher | Future Illustrator | Children Book Author | Human


Repeat after us: #LanguageModels are not calculators. #LanguageModels are not calculators. #LanguageModels are not calculators. If you ask a violin to solve equations and then scoff when it doesn’t play saxophone, maybe the issue isn’t with the violin. Yes, LLMs hallucinate. Yes, you should verify outputs. But maybe—just maybe—the problem isn’t the model. Maybe it’s people trying to outsource thinking to a text generator and then acting shocked when context isn’t algebra. And that energy concern? You typed this on a device connected to 18 tracking services, three ad networks, and a Slack tab you haven’t checked since Tuesday. So breathe. The planet will survive a few extra floating point operations. What it won’t survive is people yelling “don’t trust AI” while ignoring what they’re asking it to be. We’re not anti-math. We’re pro-knowing-what-tool-you’re-using. #hybridintelligence #glitchngrace #KaelIsHere #digitaleethik #genesis20 #futurebyresonance #notyourcalculator

Mathieu François

CEO at Antarctica | Impact Investor | Sustainable IT Advocate 🌍 Building Ventures at the Intersection of Technology & Sustainability 🌏


What’s striking is how casually we’re trading millions of times more energy for lower-quality outcomes and calling it innovation. The problem isn’t just a lack of awareness. It’s the deliberate design of overconsumption by default. Appreciate this post for cutting through the hype.

Gary Fearnall

Sustainability Intervenor, Sales & Business Development : Datonics.com, Embreate.com and Yumebau.com | CPSA Member. Ad Tech, AR/MR. Investor.


Excellent post. It would be great if AI companies put up the environmental costs of all of the queries BEFORE submitting. Right now there is such a frenzy of adoption and mindless enthusiasm for everything AI that reckless consumption is happening in all job types, industries and for consumers. Your plastic bottle analogy is insightful. Why get a glass and fill a cup from the tap when a plastic water bottle is right in front of you? Again, mindless usage, because we are generally too lazy to consider our choices. AI forced usage (META, Google, MSFT) and plastic everywhere is a business model designed to create dependency. More education and likely more regulation needed. That hasn't stopped the plastic crisis though. Ultimately more consumer education is needed, but it is hard to resist the tidal waves of plastic and AI...

Johann Recordon

Trying to make sense of complexity | strong sustainability, technocritique, Doughnut economics


This is a really good piece. And it ends on a call for sufficiency. What more may we ask for?


More articles by Ketan Joshi
