Key Takeaways:
- AI demand for electricity is outpacing power grid capacity.
- The future of data centers lies in maximizing existing power supplies and eliminating inefficiencies in energy delivery.
- Claros is rethinking data center architecture—from chip to grid—to unlock smarter, more scalable power infrastructure.
The world’s demand for computing power is growing every day, driven largely by the popularity of artificial intelligence and the relentless expansion of cloud-based services. While data centers are scaling rapidly to meet this growing need, one critical topic is often overlooked: power efficiency.
Today, much of the discussion about energy revolves around grid shortages and delays in infrastructure expansion. But, in truth, the bigger problem is inefficient power delivery systems that waste electricity long before it reaches the xPU, the category of processors, such as the central processing unit (CPU) and the graphics processing unit (GPU), that serves as the engine of data centers. For hyperscale data centers, this is not just a technical nuisance. It's a bottleneck that slows networks, hinders performance, and limits growth. Considering what's at stake, it has become crucial to find more sustainable and responsible ways to manage the world's energy supply.
The Power Demand Boom and Strained Energy Infrastructure
Between 2017 and 2023, the rollout of AI servers in the United States more than doubled, as more enterprises turned to artificial intelligence to automate routine tasks, boost productivity and efficiency, and reduce costs. Yet, as AI processes like model training and inference workloads have increased, enhancing the ability of organizations to make predictions and better decisions, we're finding that our power infrastructure simply cannot keep pace. In fact, the average AI data center consumes enough electricity to power 100,000 households, and in the United States alone, data center power demand is expected to reach 132 gigawatts (GW), which would account for approximately 12% of total U.S. electricity consumption.
All this paints a picture of an energy infrastructure that is under strain, forcing data center hubs like Dublin, Ireland, and Northern Virginia to enforce or consider restrictions on further expansion due to resource constraints. Adding more power to the grid might seem like an obvious solution, but in many cases, the issue lies not in how much energy is available, but in how much energy is being wasted.
The Hidden Costs of Power Loss
As legacy power delivery systems max out, we must explore innovative solutions that address these bottlenecks, starting with ones that ensure high efficiency at data centers, from the chip all the way to the grid and, eventually, to the meter and other endpoints.
The process of delivering power today is anything but efficient. Between the point of generation and the chip, electricity typically passes through five to seven transformation steps, spanning generation, transmission, distribution, and multiple on-site voltage conversions; at each stage, some power is lost.
Losses stem from several key factors, including:
- Inefficient long-distance alternating current (AC) transmission
- Transformer wastefulness during voltage conversion
- Resistive power loss throughout the system
- Poor real-time load visibility, which impacts the balance between electricity supply and demand
Within traditional architectures, such inefficiencies can dissipate more than 30 percent of the power drawn from the grid.
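To see how modest per-stage losses compound into that kind of figure, consider a short sketch. The stage names and efficiencies below are hypothetical illustrations, not measurements from any particular facility:

```python
# Illustrative sketch: how per-stage conversion losses compound end to end.
# Stage efficiencies are hypothetical examples, not measured figures.
stages = {
    "long-distance AC transmission": 0.95,
    "substation transformer":        0.98,
    "UPS (AC-DC-AC)":                0.94,
    "PDU transformer":               0.98,
    "rack power supply (AC-DC)":     0.94,
    "board-level regulator (DC-DC)": 0.90,
}

delivered = 1.0
for stage, eff in stages.items():
    delivered *= eff
    print(f"{stage:32s} eff={eff:.0%}  cumulative={delivered:.1%}")

print(f"\nPower lost end to end: {1 - delivered:.1%}")
```

Even with every stage above 90 percent efficient, roughly a quarter of the generated power never reaches the chip; push a few stages a little lower and losses quickly exceed 30 percent.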
The data center industry recognizes the need to minimize waste in its systems and uses various metrics to measure energy efficiency, including Power Usage Effectiveness (PUE), Energy Use Intensity (EUI), ENERGY STAR scores, ASHRAE thermal guidelines, FLOPS/W, TOPS/W, and MLPerf benchmarks, among others. While there is some debate over which metric is most useful, there is consensus among industry leaders that data centers should strive to improve power efficiency by 25 to 35 percent.
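Of these metrics, PUE is the most widely reported, and its definition is simple: total facility energy divided by the energy consumed by IT equipment alone, with 1.0 as the theoretical ideal. A minimal sketch, using hypothetical meter readings:

```python
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power Usage Effectiveness: total facility energy divided by
    IT equipment energy. 1.0 means zero overhead (the theoretical ideal)."""
    return total_facility_kwh / it_equipment_kwh

# Hypothetical monthly meter readings, in kWh.
monthly_pue = pue(total_facility_kwh=1_500_000, it_equipment_kwh=1_000_000)
print(f"PUE: {monthly_pue:.2f}")  # 1.50: half a kWh of overhead per IT kWh
```

A PUE of 1.5 means that for every kilowatt-hour doing useful compute, another half kilowatt-hour goes to cooling, conversion losses, and other overhead, which is exactly the waste the metrics above are meant to expose.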
Breaking the Power Bottleneck with Smarter Design
Every industry, from healthcare and cybersecurity to defense and finance, is being transformed by AI. But this progress comes with a larger energy footprint that more states and nations will have to confront. At Claros, we're creating a sustainable path forward based on modular, efficient, and resilient systems built to deliver high performance while reducing costs. That's because we believe the future of AI isn't just about faster chips or smarter models; it's about marrying innovation with accountability.
Rethinking Power Delivery to the Chip
Integrated voltage regulators (IVRs) form the foundation for smarter energy delivery by cutting distribution losses at the point of compute. At Claros, our IVRs are designed to deliver power directly to the xPU to minimize the power lost as heat during conversion and to let operators fine-tune voltage levels for additional efficiency. Because they sit directly at the point of compute, IVRs eliminate the need for inefficient board-level regulators and enable dynamic voltage scaling, which adjusts voltage in real time based on workload to save energy without compromising performance, along with enhanced telemetry and control, providing granular visibility into power data to optimize energy delivery.
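Dynamic voltage scaling pays off because the dynamic (switching) power of CMOS logic grows with the square of supply voltage. A minimal sketch of that relationship, with purely illustrative capacitance, voltage, and frequency values (leakage power is ignored here):

```python
def dynamic_power(c_eff: float, voltage: float, freq_hz: float) -> float:
    """CMOS dynamic power: P = C_eff * V^2 * f.
    Models switching losses only; leakage is ignored in this sketch."""
    return c_eff * voltage**2 * freq_hz

# Hypothetical core: lower the supply from 0.9 V to 0.8 V at the same frequency.
base   = dynamic_power(c_eff=1e-9, voltage=0.9, freq_hz=2e9)
scaled = dynamic_power(c_eff=1e-9, voltage=0.8, freq_hz=2e9)
print(f"dynamic power saved: {1 - scaled / base:.0%}")  # ~21%
```

A roughly 11 percent drop in voltage yields about a 21 percent drop in dynamic power, which is why regulators that can react to workload in real time, close to the chip, matter so much.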
DC-Native Data Center Design
Another area that requires reconsideration is the reliance on traditional AC power distribution. At Claros, we have found that DC-native data center designs sharply reduce the number of lossy AC/DC conversions. This also simplifies integration with on-site renewable energy sources and battery systems, allowing centers to lower component complexity, support rapid, modular scaling, and improve energy delivery across the power chain.
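The gain from removing conversion stages is easy to quantify in the abstract. The sketch below compares a hypothetical conventional AC chain against a DC-native chain with fewer stages; the per-stage efficiencies are illustrative assumptions, not vendor data:

```python
# Hypothetical comparison: conventional AC distribution vs. a DC-native chain.
# Per-stage efficiencies are illustrative assumptions only.
from math import prod

ac_chain = [0.94, 0.98, 0.94, 0.90]  # UPS, PDU transformer, rack PSU, board VRM
dc_chain = [0.97, 0.92]              # facility rectifier, then one DC-DC stage

ac_eff = prod(ac_chain)
dc_eff = prod(dc_chain)
print(f"AC chain end-to-end:        {ac_eff:.1%}")
print(f"DC-native chain end-to-end: {dc_eff:.1%}")
```

With these assumed figures, simply collapsing four conversion stages into two recovers roughly ten percentage points of delivered power before any individual component improves.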
These innovative approaches not only address inefficiencies but also serve as a blueprint for a new generation of high-density data centers capable of operating more sustainably, while lowering costs.
Where We Go From Here
The future of data centers doesn’t lie in just adding more megawatts to already-strained grids, but in making better use of the energy we have. By improving the efficiency of power delivery, the industry can achieve growth that’s both scalable and sustainable.
From chip to grid, Claros is working to redefine the mechanisms that power modern computation. As individuals who care deeply for our planet, we—like so many others—endeavor to be good stewards of our energy supply. With solutions like IVRs and DC-native data centers, we believe we can grow intelligently and decrease energy consumption in this increasingly power-constrained world.