New industry technology: Bussmann fuses, ABB breakers, Amphenol connectors, HPS transformers, and more.
The debate over electricity seems to have come full circle — as if we’ve returned to the 1890s.
Back then, a fierce rivalry raged between two titans of innovation: Nikola Tesla, who championed the oscillating flow of alternating current (AC), and Thomas Edison, who advocated the steady, unidirectional flow of direct current (DC).
AC ultimately triumphed because it could be easily transformed to higher voltages, enabling long-distance transmission with minimal losses. The logic was simple yet profound: higher voltage means lower current; lower current means thinner conductors. For instance, transmitting power at high voltage (around 35 kV) could be done using cables just an inch thick, while attempting the same at low voltage would require conductors nearly six feet in diameter — a logistical and physical impossibility.
Today, a similar paradigm shift is unfolding, not on city grids, but inside data centers — the digital factories of the 21st century.

For decades, data centers have relied on 400V AC or 48V DC systems to distribute power. These methods have served well for moderate workloads, but the rapid escalation in computational density has exposed their physical limits.
The challenges are fundamental:
Massive Cable Size and Weight – Conductor cross-sectional area scales with current. Supplying 400 kW at 48V requires more than 8,000 amperes of current. The resulting copper busbars would be as thick as a fire hose, cumbersome to install, and so heavy they could collapse racks and raised floors.
Severe Energy Losses – Power loss follows the relationship P_loss = I² × R. Doubling the current quadruples the losses. Gigantic currents not only waste energy but generate enormous heat, demanding elaborate cooling systems that further erode efficiency.
In short, low-voltage systems are choking under their own electrical load.
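The two scaling laws above — conductor size tracking current, and loss tracking the square of current — can be checked numerically. A minimal sketch; the resistance value is an arbitrary placeholder for illustration, not a real busbar figure:

```python
# Illustrative check of P_loss = I^2 * R.
# R_OHM is a hypothetical conductor resistance, chosen only for the demo.
R_OHM = 0.0001  # 0.1 milliohm

def i2r_loss(current_a: float, resistance_ohm: float = R_OHM) -> float:
    """Resistive power loss in watts: P_loss = I^2 * R."""
    return current_a ** 2 * resistance_ohm

base = i2r_loss(1_000)     # loss at 1,000 A
doubled = i2r_loss(2_000)  # loss at 2,000 A

print(doubled / base)  # 4.0 -> doubling the current quadruples the loss
```

Whatever the actual resistance of a given busbar, the ratio is fixed by the square law, which is why high-current distribution runs so hot.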
The solution lies in physics itself — raise the voltage. By elevating the DC voltage from 48V to 800V, the current drops dramatically. For the same 400 kW cabinet:
At 48V, current = 8,333 A
At 800V, current = 500 A
That’s a 94% reduction in current. Less current means smaller conductors, lower resistive losses, and drastically reduced thermal output.
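The comparison falls straight out of I = P / V. A quick sketch reproducing the figures above for the 400 kW cabinet:

```python
P_W = 400_000  # 400 kW cabinet load

def current_amps(power_w: float, voltage_v: float) -> float:
    """Current drawn at a given DC bus voltage: I = P / V."""
    return power_w / voltage_v

i_48 = current_amps(P_W, 48)    # ~8,333 A
i_800 = current_amps(P_W, 800)  # 500 A

reduction = 1 - i_800 / i_48
print(f"{i_48:.0f} A -> {i_800:.0f} A ({reduction:.0%} less current)")
# prints "8333 A -> 500 A (94% less current)"
```

Since resistive loss scales with the square of current, that 94% current reduction cuts the I²R loss in a given conductor by roughly 280x, not merely 16x.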
This shift isn’t theoretical. Modern accelerated computing platforms, such as NVIDIA GB300 NVL72, already demand extraordinary power levels — 142 kW per rack, with dozens of GPUs operating in parallel. Power must flow from utility distribution (around 35 kV) down to the server’s 12V domain.
Today’s dominant approaches — 400V three-phase AC and 48V DC — are increasingly impractical beyond 200 kW per rack. At 400 kW, they become untenable. This is the world of NVIDIA Kyber and NVIDIA Rubin Ultra, where traditional power architectures can no longer keep pace with compute density.
At GTC 2025, NVIDIA unveiled a side-mounted 800 VDC power supply unit (PSU), designed to deliver power directly to the Rubin Ultra GPUs within a single Kyber cabinet. Each cabinet, housing up to 576 GPUs, can demand up to 1 megawatt of power.
Such monumental energy density demands a revolution in power distribution — and 800 VDC provides it.
For high-density IT enclosures pushing 400 kW to 1 MW, 800 VDC is not optional — it is essential.
This architecture offers several distinct advantages:
Thinner cables and busbars free up valuable rack space and simplify mechanical layouts. With reduced bulk, power paths become shorter, cleaner, and easier to service.
Copper usage — one of the major cost drivers in data center infrastructure — is significantly reduced. Lower mass translates into lighter installations and lower material expenditures.
Minimized resistive loss means less waste heat, lower cooling requirements, and superior overall energy utilization. Every watt saved in transmission is a watt that can fuel computation.
With single-stage AC/DC conversion, the number of transformers and conversion points is minimized. Energy moves more directly, with fewer opportunities for inefficiency. The electrical topology becomes leaner, easier to maintain, and inherently more reliable.
DC systems integrate naturally with robust, efficient protection components — fuses, DC-rated breakers, and diode-based isolation. The result is a system that can sustain high loads with exceptional fault tolerance.
NVIDIA’s 800 VDC PSU aligns not only with its internal roadmap but also with emerging demands from hyperscalers such as Google, Meta, and Microsoft, all exploring high-voltage DC architectures to sustain AI workloads.
Moreover, the new PSU design includes hot-swappable modules, allowing live replacement without system downtime — a critical feature for hyperscale operations, where every minute of downtime carries direct financial cost.
As the AI revolution pushes data centers toward megawatt-per-rack power densities, 800 VDC emerges as the inevitable backbone of next-generation infrastructure.
It marks the return of direct current — but this time, not as Edison envisioned it, rather as a high-efficiency, high-voltage enabler for the world’s most advanced computing systems.
The 19th-century “War of Currents” is being fought anew — not on city streets, but in the heart of data centers. And this time, DC may finally win.