AI Is Driving Network Energy Costs Up. Here’s How to Control Them

The Escalation of AI-Driven Energy Demand
The integration of artificial intelligence into network operations was initially framed as a method for reducing costs through automation and predictive maintenance. While those benefits have materialized, they have been shadowed by the massive power requirements of the hardware necessary to run large-scale AI models. In the era of 5G-Advanced and the early development of 6G, AI is used for everything from dynamic spectrum sharing to beamforming and real-time traffic management. These tasks require immense computational power, often situated at the "edge" of the network to reduce latency.
According to the TNS and Kaleido Intelligence report, this shift is driving an unprecedented expansion in data center capacity. The researchers project that data center requirements will increase between two and six times by the year 2030. This growth is not merely a matter of physical space but of power density. Traditional network hardware, which once focused on simple packet switching, is being replaced or augmented by high-performance GPUs and AI-optimized ASICs (Application-Specific Integrated Circuits). These components draw significantly more power and generate more heat than their predecessors, leading to a secondary spike in energy costs related to cooling and thermal management.
For CSPs, the challenge is threefold: they must meet the rising demand for high-speed data, fulfill aggressive sustainability targets mandated by global regulators, and maintain a profitable bottom line in a market where energy prices remain volatile. The report suggests that without a standardized approach to network intelligence, the cost of powering these AI-driven networks could become the single largest operational expenditure for global carriers.
A Chronology of the Intelligence-Energy Paradox
To understand the current crisis, it is necessary to look at the evolution of network energy consumption over the last decade. During the 4G LTE era, energy consumption was relatively linear and predictable, tied largely to the number of base stations and the volume of data traffic. The transition to 5G introduced the concept of "massive MIMO" and higher frequency bands, which required more power per bit of data transferred, though the efficiency of the 5G standard itself helped mitigate some of these costs.
By 2023 and 2024, the "Generative AI" boom shifted the focus from simple connectivity to deep intelligence. Initially, these workloads were confined to centralized cloud data centers. However, by 2025, the industry began a concerted effort to move AI closer to the user—the "Edge AI" movement. This was necessary to support autonomous vehicles, industrial IoT, and immersive augmented reality.
As we reach 2026, the industry has hit a wall. The decentralized nature of Edge AI means that instead of a few massive, efficient data centers, CSPs are managing thousands of smaller, power-hungry nodes. The TNS and Kaleido report marks this moment as the turning point where "intelligence" became a liability for "efficiency." The whitepaper serves as the first major industry attempt to reconcile these two opposing forces through the introduction of a "Common Language" framework for network planning.
Supporting Data: The Cost of Intelligence
The data provided by Kaleido Intelligence paints a stark picture of the road ahead. Beyond the projected two- to sixfold increase in data center capacity, the report identifies several key pressure points:
- Power Usage Effectiveness (PUE) Challenges: While massive hyperscale data centers often achieve a PUE of 1.1 or 1.2, smaller edge deployments are frequently less efficient, often hovering around 1.5 to 1.7. This means that for every watt used for computing, an additional 0.5 to 0.7 watts is consumed by cooling and power distribution.
- Spectrum Efficiency vs. Energy Cost: AI-driven beamforming can improve spectrum efficiency by up to 30%, but the computational cost of running those algorithms in real time can increase the power draw of a base station by 15-20%.
- Regulatory Penalties: In regions like the European Union, new carbon taxes and energy efficiency mandates are expected to impose significant fines on CSPs that fail to reduce their carbon footprint, making energy efficiency a regulatory necessity rather than a voluntary goal.
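The PUE figures above translate directly into wasted watts and money. The short sketch below works through that arithmetic: PUE is defined as total facility power divided by IT equipment power, so the overhead per IT watt is simply PUE minus one. The electricity tariff is an illustrative assumption, not a figure from the report.

```python
# PUE (Power Usage Effectiveness) = total facility power / IT equipment power,
# so the non-IT overhead per watt of compute is (PUE - 1).

def overhead_kw(it_load_kw: float, pue: float) -> float:
    """Return non-IT power (cooling, distribution) in kW for a site."""
    return it_load_kw * (pue - 1.0)

def annual_energy_cost(it_load_kw: float, pue: float,
                       price_per_kwh: float = 0.15) -> float:
    """Annual energy cost, assuming a constant load and a flat tariff.

    The $0.15/kWh tariff is an assumed placeholder.
    """
    total_kw = it_load_kw * pue
    return total_kw * 24 * 365 * price_per_kwh

# A 100 kW edge node at PUE 1.6 burns ~60 kW on overhead alone...
print(overhead_kw(100, 1.6))
# ...versus ~15 kW for the same IT load in a hyperscale facility at PUE 1.15.
print(overhead_kw(100, 1.15))
print(annual_energy_cost(100, 1.6))   # vs.
print(annual_energy_cost(100, 1.15))  # the hyperscale equivalent
```

At the assumed tariff, the gap between PUE 1.6 and PUE 1.15 on a single 100 kW edge site is roughly $59,000 per year; multiplied across thousands of edge nodes, this is the "secondary spike" in operating cost the report describes.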
The report argues that the current "siloed" approach to network management is the primary driver of these inefficiencies. Different vendors use different metrics, and different layers of the network (radio, core, and transport) do not share a unified system for identifying and managing assets.
The Common Language Solution
The central thesis of the TNS and Kaleido Intelligence whitepaper is the adoption of a "Common Language" framework. In the context of telecommunications, "Common Language" refers to a standardized set of codes and terminologies used to identify network equipment, locations, and capabilities. Historically used for inventory management and interconnection, this framework is now being repurposed as a tool for energy optimization.
By implementing a standardized intelligence layer, CSPs can gain a granular view of their network’s energy profile. This allows for:
- Precise Asset Management: Knowing exactly what hardware is deployed where, its power rating, and its AI-processing capability.
- Interoperability: Ensuring that AI models from different vendors can communicate and share resources, preventing the redundant processing that occurs when multiple proprietary systems run simultaneously.
- Automated Energy Steering: Using standardized data to automatically shift AI workloads to the most energy-efficient nodes or to power down underutilized components during low-traffic periods.
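The third capability, automated energy steering, can be sketched in a few lines. The example below assumes each node carries standardized power metadata of the kind a Common Language inventory would provide; the field names and the greedy fill-the-most-efficient-node-first policy are illustrative assumptions, not details from the whitepaper.

```python
# Sketch of automated energy steering: given per-node efficiency metadata,
# assign AI workload to the most energy-efficient nodes first, then report
# which nodes are left idle and can be powered down.

from dataclasses import dataclass

@dataclass
class EdgeNode:
    node_id: str          # standardized asset identifier (assumed format)
    capacity_tops: float  # AI throughput capacity (tera-ops/s)
    watts_per_top: float  # energy cost per unit of throughput
    load_tops: float = 0.0

def steer(nodes: list[EdgeNode], demand_tops: float) -> list[str]:
    """Fill the most efficient nodes first; return IDs safe to power down."""
    remaining = demand_tops
    for node in sorted(nodes, key=lambda n: n.watts_per_top):
        node.load_tops = min(node.capacity_tops, remaining)
        remaining -= node.load_tops
    return [n.node_id for n in nodes if n.load_tops == 0]

nodes = [
    EdgeNode("EDGE-A", capacity_tops=50, watts_per_top=8.0),
    EdgeNode("EDGE-B", capacity_tops=50, watts_per_top=5.0),
    EdgeNode("EDGE-C", capacity_tops=50, watts_per_top=6.5),
]
print(steer(nodes, demand_tops=90))  # the least efficient node stays idle
```

The point of the sketch is the precondition, not the policy: the greedy sort only works because every node exposes its capacity and efficiency in the same units under the same identifiers, which is exactly what a shared naming framework provides.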
The report emphasizes that for network planners, this standardization provides the "clarity" needed to balance capacity with sustainability. Without it, planners are essentially flying blind, unable to accurately predict how a new AI service will impact the overall energy budget of the network.
Official Responses and Industry Implications
While the whitepaper represents the research of TNS and Kaleido, the broader industry has already begun to react to these findings. Representatives from major European and North American carriers have expressed that energy management is now "the top priority" for their 6G research and development teams.
An industry analyst from Kaleido Intelligence noted that "the era of growth at any cost is over. We are entering the era of ‘Smarter Scaling,’ where every kilohertz of spectrum and every watt of power must be accounted for. The TNS report shows that standardization isn’t just a technical nicety; it’s a commercial imperative."
Similarly, operations teams are pivoting toward "AI-native" network planning. Instead of adding AI as an overlay to existing networks, they are designing the physical infrastructure with AI’s energy needs in mind. This includes the use of liquid cooling for edge nodes and the integration of renewable energy sources directly into the site design.
Broader Impact and the Path to 6G
The implications of this report extend far beyond the telecommunications sector. As the backbone of the digital economy, the energy efficiency of 5G and 6G networks directly impacts the sustainability of every industry that relies on them—from smart cities and logistics to healthcare and manufacturing.
If CSPs can successfully implement the "Common Language" framework and control their energy costs, it will pave the way for more affordable and accessible AI services. However, if energy costs continue to spiral, the "digital divide" could widen, as only the most affluent regions will be able to afford the high operational costs of AI-driven connectivity.
The TNS and Kaleido Intelligence whitepaper concludes with a call to action: the industry must adopt a unified framework. The shift to AI-ready networks is inevitable, but the shift to energy-efficient ones requires a deliberate, standardized, and collaborative effort. As the industry moves toward 2030, the success of 6G will not be measured by speed alone, but by the intelligence of its efficiency.
Strategic Recommendations for Network Planners
Based on the findings, network planners and operations teams are encouraged to take several immediate steps:
- Audit Existing Assets: Use standardized naming conventions to map out the power consumption and AI capacity of current hardware.
- Prioritize Interoperability: Move away from "black box" vendor solutions that do not share data with the rest of the network ecosystem.
- Integrate Sustainability into Procurement: Make energy efficiency a primary KPI in the selection of new AI and network hardware.
- Adopt Predictive Energy Management: Utilize the very AI that is driving costs up to predict traffic patterns and optimize power usage in real time.
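The last recommendation can be illustrated with a toy forecast-then-act loop. A real deployment would use a trained traffic model; here a trailing moving average stands in as the predictor, and the 30% load threshold for entering a low-power state is an illustrative assumption.

```python
# Toy predictive energy management: forecast next-period traffic from
# recent samples, then decide whether spare capacity can be powered down.

def forecast_next_hour(traffic_history: list[float], window: int = 3) -> float:
    """Predict next-hour traffic as the mean of the last `window` samples.

    A stand-in for a real trained forecaster.
    """
    recent = traffic_history[-window:]
    return sum(recent) / len(recent)

def should_power_down(traffic_history: list[float],
                      capacity: float, threshold: float = 0.30) -> bool:
    """Enter low-power state when forecast load falls below the threshold."""
    return forecast_next_hour(traffic_history) < threshold * capacity

# Hourly traffic samples (Gbps) tailing off after the evening peak.
overnight = [120, 80, 45, 30, 25]
print(should_power_down(overnight, capacity=200))  # forecast well under 30%
```

Even this crude version captures the report's closing irony: the same AI that inflates the energy bill is also the only practical tool for shrinking it, provided its decisions are fed by standardized, network-wide asset data.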
By focusing on these areas, CSPs can ensure that they remain competitive in an AI-driven world without sacrificing their financial stability or their environmental responsibilities. The "Common Language" is more than just a set of codes; it is the foundation for a sustainable digital future.