The AI Industry’s Green Problem: GPT-5’s Power Use Sparks Calls for Transparency

by admin477351

OpenAI’s GPT-5 is a marvel of technology, but its release is raising serious questions about the future of AI and the environment. As the company remains quiet on the model’s resource usage, experts are sounding the alarm. They contend that the model’s enhanced capabilities, from website creation to solving PhD-level problems, come with a steep and unprecedented environmental cost. This lack of transparency from a leader in the AI space is forcing a difficult conversation about the industry’s commitment to sustainability.

A key study from researchers at the University of Rhode Island’s AI lab provides a stark illustration of the issue. They found that producing a single medium-length response of around 1,000 tokens with GPT-5 consumes an average of about 18 watt-hours, a dramatic increase over previous models. To put that in more relatable terms, 18 watt-hours is roughly the energy a standard 60-watt incandescent bulb uses in 18 minutes. And because a platform like ChatGPT handles billions of requests every day, the total energy consumed could be enormous, potentially matching the daily electricity demand of millions of homes.
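The comparisons above can be checked with a quick back-of-the-envelope calculation. The sketch below takes the 18 watt-hour figure from the URI study at face value; the bulb wattage, the daily request volume, and the average household consumption are illustrative assumptions, not reported numbers.

    # Back-of-the-envelope check of the figures above. Only the 18 Wh per
    # response comes from the URI estimate; the bulb wattage, request volume,
    # and household consumption are assumed values for illustration.

    WH_PER_RESPONSE = 18          # watt-hours per ~1,000-token GPT-5 response (URI estimate)
    BULB_WATTS = 60               # assumed standard incandescent bulb
    PROMPTS_PER_DAY = 2.5e9       # assumed daily request volume ("billions of requests")
    HOME_KWH_PER_DAY = 30         # assumed average daily household electricity use

    bulb_minutes = WH_PER_RESPONSE / BULB_WATTS * 60
    daily_gwh = WH_PER_RESPONSE * PROMPTS_PER_DAY / 1e9        # 1 GWh = 1e9 Wh
    homes_equivalent = daily_gwh * 1e6 / HOME_KWH_PER_DAY      # 1 GWh = 1e6 kWh

    print(f"One response ~ a 60 W bulb running for {bulb_minutes:.0f} minutes")
    print(f"Daily total ~ {daily_gwh:.0f} GWh")
    print(f"Roughly the daily demand of {homes_equivalent / 1e6:.1f} million homes")

Under these assumptions the totals land around 45 gigawatt-hours a day, on the order of the daily electricity demand of about 1.5 million homes, which is consistent with the "millions of homes" framing above.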

The spike in energy usage is directly linked to the model’s increased size and complexity. Experts suggest GPT-5 is significantly larger than its predecessors, with a far greater number of parameters. That theory is supported by research from French AI company Mistral, which found that a model’s energy consumption scales closely with its size: a model ten times larger has roughly ten times the environmental impact. GPT-5 appears to follow the same pattern, with some experts theorizing that its resource use could be “orders of magnitude higher” than even GPT-3’s.

The problem is compounded by the model’s new architecture. GPT-5 uses a “mixture-of-experts” design, which saves energy by routing each query through only a subset of the model’s parameters, but its ability to handle video, images, and complex reasoning likely cancels out those gains. Its “reasoning mode,” which has the model compute for longer before delivering an answer, could multiply its power needs several times over compared with simple text tasks, as the rough sketch below illustrates. Taken together, the increased size, complexity, and advanced features point to an AI system with a massive appetite for power, and they are fueling urgent calls for greater transparency from OpenAI and the wider AI community.
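One simple way to see why reasoning mode matters is to treat energy as roughly proportional to the total number of tokens the model generates, including the hidden “thinking” tokens produced before the visible answer. The per-token figure below is implied by the 18 Wh estimate above; the token counts are illustrative assumptions, not measured values.

    # Rough model: energy scales with total tokens generated. The per-token
    # energy is derived from the 18 Wh per ~1,000-token estimate; the hidden
    # reasoning token count is an illustrative assumption.

    WH_PER_TOKEN = 18 / 1000   # watt-hours per generated token (implied by URI estimate)

    def response_energy_wh(visible_tokens, hidden_reasoning_tokens=0):
        """Estimate watt-hours for one response, counting hidden reasoning tokens."""
        return (visible_tokens + hidden_reasoning_tokens) * WH_PER_TOKEN

    plain = response_energy_wh(visible_tokens=1000)
    reasoned = response_energy_wh(visible_tokens=1000, hidden_reasoning_tokens=4000)

    print(f"Plain answer:   ~{plain:.0f} Wh")
    print(f"Reasoning mode: ~{reasoned:.0f} Wh ({reasoned / plain:.0f}x the plain answer)")

If a reasoning pass generates several thousand hidden tokens before a 1,000-token answer, the energy per request is several times higher, which matches the concern described above.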
