Scaling AI Sustainably: High-Voltage DC Power for Next-Generation Data Centers

Introduction

AI, robotics, and edge computing are driving unprecedented growth in data center energy demand. As rack densities climb past 100 kW and approach megawatt-class deployments, the industry faces a dual challenge: delivering dramatically more compute power while shrinking environmental impact.

Traditional 48 VDC power distribution, which transformed efficiency a decade ago, is reaching its physical limits. At today's power densities, low-voltage systems require massive conductors, generate significant resistive losses, and create thermal challenges that drive up both infrastructure costs and environmental impact.

The answer lies in ±400 VDC and 800 VDC architectures that fundamentally change how energy moves from the grid to the processor. This isn't an incremental improvement. It's a pathway to handling megawatt-scale loads with far less copper, reduced conversion losses, and lower cooling demand.
The goal isn't just more watts. It's cleaner, more efficient power delivery for a computing-intensive world.

The Next Phase of AI Power Delivery

A decade ago, the adoption of 48 VDC marked a milestone in data center efficiency. It cut resistive losses, standardized N+1 redundancy, and laid the groundwork for modern hyperscale infrastructure. Standards like OCP Open Rack V3*1 cemented 48 VDC as the baseline for open, efficient rack power.

But as AI workloads push rack power into the hundreds of kilowatts, the physics become challenging. At 48 VDC, current rises linearly with power, and resistive losses grow with the square of current. The result: heavier copper busbars, higher resistive losses, and thermal challenges that increase both capital and operating costs, not to mention environmental impact.

The current penalty at scale: At 48 VDC, feeding a 1 MW rack means roughly 20,800 amps—requiring massive copper infrastructure and generating significant waste heat. By contrast, 800 VDC cuts current draw by 94%, reducing copper mass from approximately 400 lbs to 40 lbs for the same load while boosting end-to-end efficiency into the 94–96% range.
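A quick sanity check of those figures (a sketch only; conductor resistance is held fixed purely for comparison, whereas real designs resize conductors with current):

```python
# Current and relative resistive loss vs. bus voltage for a 1 MW rack.
P = 1_000_000  # rack power, watts

for v in (48, 400, 800):
    i = P / v                        # I = P / V
    rel_loss = (i / (P / 48)) ** 2   # I^2 R loss relative to the 48 V case
    print(f"{v:>4} V: {i:>8,.0f} A, relative I^2R loss {rel_loss:.4f}")
```

At 800 VDC the same megawatt needs only 1,250 A—about 6% of the 48 VDC current—so resistive loss in an identical conductor falls by a factor of nearly 280.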

*1 OCP Open Rack V3: An Open Compute Project specification defining mechanical and electrical standards for 48 VDC rack power distribution, designed to improve efficiency and interoperability in large-scale data centers.

Metric                      48 VDC      ±400 VDC      800 VDC
Max Practical Rack Power    <80 kW      100–300 kW    >1 MW
Copper Mass (1 MW)*         ~400 lbs    ~80 lbs       ~40 lbs
End-to-End Efficiency**     ~90%        ~90–93%       ~94–96%
Conversion Stages           3–4         2–3           2

* Order-of-magnitude illustration assuming similar runs and materials.

** Representative of well-designed systems at nominal load. Idle and transient conditions differ.

Higher voltage distribution reduces current for a given power level, offering a straightforward way to contain costs, losses, and material consumption. Every pound of copper saved reduces mining, refining, and transport impact. Every point of efficiency gained saves megawatt-hours and reduces carbon emissions over a facility's lifetime.

The next phase (±400 VDC and 800 VDC) extends the same efficiency philosophy that made 48 VDC successful, but at power densities that match the demands of modern AI infrastructure.


From the Grid to Chip: Streamlining the Power Path

In legacy data centers, electricity passes through multiple conversion stages before reaching processors. Each conversion loses a few percentage points as heat, which cooling systems must then remove—multiplying total energy use and environmental impact.
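The multiplicative effect of cascaded conversions can be sketched with hypothetical stage efficiencies (the values below are illustrative assumptions, not measured data):

```python
from math import prod

# Hypothetical per-stage efficiencies for illustration only.
legacy = [0.97, 0.96, 0.98, 0.98]   # e.g., UPS, transformer/PDU, rack PSU, point of load
hvdc   = [0.975, 0.985]             # e.g., HVDC front end, point of load

for name, stages in (("legacy", legacy), ("HVDC", hvdc)):
    print(f"{name}: {prod(stages):.1%} end-to-end")
```

Four stages at 96–98% each compound to roughly 89% end-to-end, while two high-efficiency stages land near 96%—consistent with the ranges in the table above.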

HVDC architectures streamline that path. By transmitting 400 VDC or 800 VDC deep into the rack, operators minimize intermediate conversions and deliver power closer to the point of load with significantly higher efficiency.


Key architectural advantages:

  • Simplified power chain: High-efficiency AC-DC front ends convert utility power into a stable high-voltage DC bus that feeds rows or racks, lifting end-to-end efficiency toward the mid-90s
  • Reduced conversion stages: Fewer transformation steps mean less waste heat and lower cooling energy requirements
  • Precision at the edge: Point-of-load converters near GPUs and CPUs use GaN or SiC switching for precise voltage and fast transient response
  • Real-time awareness: Telemetry and microsecond-scale fault detection keep pace with rapid load fluctuations in AI training clusters
  • Thermal compatibility: These architectures work with both air and liquid cooling approaches as tray loads surpass 100 kW

When executed correctly, these changes can reduce copper requirements by roughly 30%, improve facility efficiency by several percentage points, and lessen cooling energy demand due to lower waste heat—though results depend on specific layouts and controls.

Efficiency as Infrastructure Resilience

Power availability has become as critical as power efficiency. Data-center energy demand is rising sharply and clustering near urban centers and fiber hubs, testing the limits of local grid capacity.
According to the Pew Research Center*2, U.S. data centers consumed about 183 TWh in 2024—just over 4% of national electricity use—with projections reaching approximately 426 TWh by 2030. That growth equates to the annual consumption of tens of millions of homes.

Grid constraints are already impacting operations. In dense markets, utilities have limited new connections and curtailed loads during fault events. In mid-2024, a surge suppression failure in Northern Virginia triggered the emergency shutdown of 60 data centers*3. Within minutes, roughly 1,500 MW of load, enough to power over a million homes, was forced offline.

This context reframes efficiency as more than an operational metric—it's a resilience strategy. Forward-thinking data-center designs now prioritize:

  • Curtailment tolerance: Power systems that can safely ride through supply drops or grid instabilities
  • On-site generation: Integrating renewables, fuel cells, and battery storage to offset 20–30% of facility demand
  • Distribution efficiency: Reducing internal losses through higher-voltage architectures to deliver more compute per available megawatt

HVDC addresses the efficiency component by cutting losses and improving capacity utilization. When paired with grid-aware controls and renewable integration, it becomes part of a comprehensive resilience strategy—using available energy more wisely while enabling cleaner future growth.

*2 Pew Research Center (2024): Independent research organization reporting on U.S. energy use and infrastructure trends. Their study on national data center electricity consumption provides insight into industry growth and grid capacity challenges.

*3 Reuters – "Power Issues Cause Massive Virginia Data Center Shutdown" (2024): Reuters news report detailing how a surge suppression failure at a utility substation in Northern Virginia caused 60 local data centers to shut down, temporarily removing approximately 1,500 MW of demand from the grid.

Designing for Safety, Efficiency, and Sustainability

Transitioning to 400 VDC and 800 VDC introduces new design and operational responsibilities. Success requires treating safety, efficiency, and sustainability as integrated design requirements rather than afterthoughts.

Electrical Safety and Protection

Higher voltages demand strict adherence to IEC and UL standards for creepage, clearance, and isolation. Systems should incorporate:

  • Ground-fault detection and fast-acting electronic protection to contain faults locally
  • Arc mitigation and well-defined hot-swap procedures for maintenance safety
  • Reinforced 3 kVAC isolation between stages for decades of reliable operation
  • Embedded electronic fuses providing microsecond-level protection
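Illustrative only: the persistence-based trip logic an embedded electronic fuse might use, where a fault must exceed the threshold for several consecutive microsecond-scale samples before tripping (threshold and sample count here are assumptions, not a real device's values):

```python
TRIP_AMPS = 30.0      # assumed overcurrent threshold
PERSIST_SAMPLES = 3   # consecutive over-threshold samples (1 us apart) required to trip

def efuse_trip_index(samples_amps):
    """Return the sample index at which the fuse trips, or None if it never trips."""
    over = 0
    for i, amps in enumerate(samples_amps):
        over = over + 1 if amps > TRIP_AMPS else 0
        if over >= PERSIST_SAMPLES:
            return i
    return None

print(efuse_trip_index([10, 35, 35, 10, 10]))   # brief spike: rides through
print(efuse_trip_index([10, 40, 40, 40, 40]))   # sustained fault: trips
```

The persistence check is what lets such protection ride through the microsecond load transients typical of AI clusters while still isolating genuine faults far faster than a mechanical breaker.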

Advanced Power Conversion

Next-generation converters leverage new materials and topologies that simultaneously improve efficiency and reduce environmental footprint:

  • GaN and SiC devices: Enable high-frequency, low-loss conversion with better thermal characteristics
  • Interleaved and resonant topologies: Reduce electromagnetic emissions and distribute thermal loads more evenly
  • Optimized thermal paths: Enhance long-term reliability and reduce cooling requirements
  • AI-driven telemetry: Real-time optimization of thermal and workload management

Embedded Sustainability

Sustainability in power design is increasingly measured through material efficiency and lifecycle longevity, not just energy savings:

  • Material reduction: Roughly 90% less copper in 1 MW rack designs compared to 48 VDC equivalents (about 400 lbs down to 40 lbs)
  • Serviceability: Hot-swappable, modular components reduce e-waste and extend system lifecycles
  • Adaptability: Firmware-upgradable platforms evolve with workloads without hardware replacement
  • Renewable integration: Direct compatibility with solar, wind, and battery systems

Operational Integration

Power systems must integrate seamlessly with existing infrastructure management:

  • Standard interfaces: Report voltage, current, temperature, and fault states through standard protocols
  • Microsecond response: Critical for stability during AI workload transients
  • Hybrid readiness: Most operators will maintain mixed 48V/HVDC environments for years—design plans must address interface points, maintenance zones, and technician training

Validation under AI-class transient profiles is critical before deployment to ensure systems can handle rapid load fluctuations without instability.

Modular Platforms for Flexible Deployment

Because each data center has unique layout and load requirements, modular approaches offer maximum flexibility for scaling and serviceability. Instead of redesigning entire electrical backbones, operators can upgrade modules as efficiency and capacity improve.

Typical building blocks include:

  • Front-end AC-DC modules: Titanium-class units (3,000W+ targeting 97% efficiency) establishing the main high-voltage DC bus
  • High-voltage DC-DC stages: Rack- or sled-level converters tailored for specific voltage rails
  • Point-of-load regulators: Compact modules providing tight voltage control and fast transient response for processors
  • Protection and monitoring: Built-in e-fuses, isolation monitors, and telemetry for early fault detection
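A minimal sketch of how these building blocks compose into a single power chain; the module names and efficiencies below are assumptions for illustration, not product specifications:

```python
# Chain from utility input to processor load: (module, assumed efficiency).
CHAIN = [
    ("front-end AC-DC", 0.97),
    ("HVDC DC-DC stage", 0.98),
    ("point-of-load regulator", 0.985),
]

def input_power(load_watts):
    """Walk the chain from the load back to the utility input."""
    p = load_watts
    for _name, eff in reversed(CHAIN):
        p /= eff
    return p

load = 100_000  # 100 kW rack
p_in = input_power(load)
print(f"input draw: {p_in / 1000:.1f} kW, chain efficiency {load / p_in:.1%}")
```

Because each module is a separate entry, operators can swap one stage for a higher-efficiency revision and re-evaluate the budget without redesigning the backbone—the modular upgrade path described above.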

When evaluating solutions, consider trade-offs among efficiency, thermal headroom, EMI performance, service access, and total cost of ownership over the system lifecycle.

Sustainability Impacts Across the Infrastructure

The benefits of HVDC extend beyond electrical efficiency into measurable environmental improvements:

  • Operational energy: At hyperscale, even modest efficiency gains add up. For a 50 MW facility, moving from about 90% to 96% end-to-end power efficiency can translate into tens of gigawatt-hours of annual energy savings, with meaningful benefits for operating cost and emissions. (Outcomes depend on load factor, climate, and system design.)
  • Material efficiency: Reducing conductor mass by 30% or more lowers embodied carbon from mining, refining, and transport while simplifying installation
  • Cooling and water use: Less waste heat directly translates to lower mechanical-cooling demand, improving PUE and WUE metrics (water savings vary by cooling method and region)
  • Lifecycle management: Modular, serviceable platforms support component-level replacement instead of wholesale equipment disposal, reducing e-waste and long-term costs
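The arithmetic behind the operational-energy point can be checked directly (a simplification assuming a constant 50 MW IT load year-round):

```python
IT_LOAD_MW = 50
HOURS_PER_YEAR = 8760

def annual_input_gwh(efficiency):
    """Annual grid energy drawn to serve the IT load at a given end-to-end efficiency."""
    return IT_LOAD_MW / efficiency * HOURS_PER_YEAR / 1000

saving = annual_input_gwh(0.90) - annual_input_gwh(0.96)
print(f"annual saving: {saving:.1f} GWh")  # → annual saving: 30.4 GWh
```

About 30 GWh per year from a six-point efficiency gain at a single 50 MW site—squarely in the "tens of gigawatt-hours" range cited above, before accounting for the reduced cooling load that follows.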

These aren't abstract sustainability goals—they're measurable design outcomes that lower both energy intensity and embodied impact over time.

Murata's HVDC Roadmap: Engineering for Sustainable Scale

Murata is developing technologies and collaborating with leading compute, rack, and power-architecture innovators to accelerate this transition—ensuring tomorrow's data centers can scale responsibly with efficiency and sustainability built in from the start.

We're investing in technologies that make high-voltage distribution the default for AI-scale computing:

  • Advanced HVDC brick architecture: Designed to scale power delivery up to 10 kW
  • Next-generation front ends: 5,500W+ designs targeting 97% efficiency
  • Scalable GaN/SiC converters: Adaptable for both rack-level and sled-level deployment
  • Direct renewable compatibility: Supporting solar, wind, and battery integration
  • AI-driven telemetry: Real-time thermal and workload optimization
  • Field-upgradeable safety and monitoring: Extending system lifecycles through firmware updates

In the compute rack, ±400V is converted to 48V to supply the server input. This is where our industry-leading, broad portfolio delivers clear differentiation.

Murata provides modular building blocks for HVDC-ready architectures, including front-end AC-DC modules, high-voltage DC-DC converters, point-of-load regulators, and integrated protection and monitoring.

Conclusion: Resilient Power for Sustainable Growth

The world will continue to compute more—training AI models, processing data, connecting billions of devices. That reality makes efficiency and resource stewardship more critical than ever.

Transitioning from 48 VDC to 400 VDC and 800 VDC is more than an upgrade. It's a foundational change in how data centers deliver, distribute, and manage power. While it calls for thoughtful validation and operational planning, the benefits are clear: higher density, lower losses, and stronger alignment with sustainability goals under increasingly constrained grid conditions.

This isn't about chasing green labels. It's about building smarter, leaner, and more responsible infrastructure where every watt does more.

Murata is focused on enabling that shift, delivering power technologies that support meaningful scale with significantly lower environmental impact.
