Key Takeaways:
- Legacy power systems were not designed for AI workloads.
- DC power architectures better align with modern, dynamic loads.
- To solve today’s infrastructure challenges, we need system-wide coordination.
One of my favorite adages is, “A problem well-defined is a problem half solved.”
Today, the problem we’re collectively working to define involves one of the most consequential infrastructure challenges of our time: the power problem behind AI.
Three Events that Shaped Modern Power and Computing
To understand why this matters, we need to look at how power and computing evolved, starting with three pivotal years: 1886, 1961, and 2003.
In 1886, the commercialization of the line-frequency AC transformer enabled power to be stepped up for transmission and down for distribution, laying the groundwork for the energy systems we rely on today.
Then, in 1961, Fairchild Semiconductor introduced the first commercially available integrated circuits, shrinking transistor sizes and increasing computational density. This established the foundation for the modern semiconductor industry and ushered in decades of relentless innovation.
And in 2003, Infineon commercialized one of the early III-V power MOSFETs, a breakthrough that enabled dramatically higher switching speeds and breakdown voltages, greater conversion efficiency, smaller magnetics, and lower cost.
Yet, despite these extraordinary advancements, one thing remained largely unchanged: the fundamental architecture of electrical distribution.
AI Workloads Are Breaking the Model
Power delivery systems were designed around steady, predictable loads, such as lighting, motors, and industrial equipment. But AI workloads introduce characteristics that legacy power systems struggle to manage: massive load swings, high power density, rapid transient demand, and large-scale synchronized compute clusters.
The consequences of this mismatch are already playing out. In April 2025, millions across Spain and Portugal experienced widespread power outages, illustrating the systemic risk that comes when legacy control paradigms are applied to next-generation load profiles. And this problem is far from isolated to the Iberian Peninsula.
Utilities everywhere are now facing unprecedented load growth due to data centers. Data center developers are struggling to secure sufficient power capacity. At the same time, GPU vendors are shipping ever-denser systems that demand 50 to 100+ kW per rack, levels that far exceed legacy data center design assumptions. Together, these forces present significant challenges for grid infrastructures globally.
In response, the industry is being forced to ask whether the power architecture we built for the last century is the right architecture for the future.
Designing Failure Out of the System
In manufacturing, there is a concept called poka-yoke. Developed in the 1960s by industrial engineer Shigeo Shingo as part of the Toyota Production System, it means "mistake-proofing." With poka-yoke, rather than detecting mistakes after they happen, you design the process so that the mistake cannot occur in the first place. In other words, the failure mode is removed entirely.
This philosophy influences how engineers approach building resilient systems. At Claros, it raised an important question: What if we applied the poka-yoke mindset to power infrastructure and redesigned the architecture so certain failure modes no longer exist?
A First-Principles View of Power Architecture
With that question, Claros began reimagining electricity distribution for today's AI infrastructure, building it entirely around DC power. We started from first principles, leveraging the best electronics available to create a power-delivery platform unconstrained by legacy architectures and assumptions and designed for the realities of AI data centers today. In the process, two insights emerged:
First, other industries have already solved many of the AC-to-DC conversion challenges now facing data centers. Electric vehicles, high-power fast-charging systems, and DC microgrids all operate within highly efficient DC-to-DC conversion architectures purpose-built for dynamic environments.
Second, when rethinking the system architecture, we realized it was possible to dramatically reduce the number of conversion steps and control layers required to deliver power to compute. Fewer steps mean less power loss, greater efficiency, and improved scalability.
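To make that efficiency argument concrete, here is a small sketch of how losses compound across cascaded conversion stages. Because each stage multiplies the efficiency of the one before it, removing stages pays off more than it might first appear. All stage counts and per-stage efficiency figures below are illustrative assumptions, not measurements of any particular architecture or product.

```python
from math import prod

def chain_efficiency(stage_efficiencies):
    """End-to-end efficiency of conversion stages connected in series."""
    return prod(stage_efficiencies)

# Hypothetical legacy AC path: distribution transformer, double-conversion
# UPS, PDU transformer, rack PSU, on-board regulators.
legacy = chain_efficiency([0.98, 0.94, 0.97, 0.94, 0.92])

# Hypothetical consolidated DC path: one centralized rectifier stage
# plus on-board regulators.
dc_path = chain_efficiency([0.975, 0.93])

print(f"legacy path efficiency: {legacy:.1%}")   # roughly 77%
print(f"DC path efficiency:     {dc_path:.1%}")  # roughly 91%

# Fraction of legacy conversion losses eliminated by the shorter chain.
reduction = 1 - (1 - dc_path) / (1 - legacy)
print(f"loss reduction vs. legacy: {reduction:.0%}")
```

Even with modest assumed per-stage efficiencies, collapsing five conversion steps into two roughly halves the energy lost between the utility feed and the compute, which is the intuition behind the insight above.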
Those insights led to the Claros Power Gateway, an 800VDC intelligent power-distribution platform designed for AI workloads.
In most data centers, operators control when and where work happens. With Power Gateway, we extend that control, allowing operators to manage how energy flows through the system in real time while cutting distribution and conversion losses by up to 50%.
A Call to Collaborate
The underlying reality is straightforward: The demand for compute is not slowing down, and the demand for energy will only continue to accelerate. This challenge brings a real opportunity to introduce a new technology layer that simplifies power delivery and improves performance. That doesn't mean simply replacing the grid or disrupting utilities. It means enabling better coordination between infrastructure layers—from energy producers and transmission operators to data centers and compute platforms—streamlining how power moves through these systems and allowing AI infrastructure to scale in ways that are currently difficult.
But no single company can solve this challenge alone.
This is a system-wide problem that impacts the entire ecosystem. Utilities, technology developers, policymakers, and infrastructure investors all have a role to play in shaping the technological landscape for decades to come.
If the last century was defined by the electrification of industry and the last few decades have been defined by the digitization of information, the next era may well be defined by the electrification of intelligence. Let’s partner to ensure this transition is powered by infrastructure worthy of the moment.
If you’d like to collaborate, contact us.