SoftBank, ZutaCore, and Foxconn have collaborated on the design and development of a rack-integrated solution incorporating ZutaCore’s two-phase direct liquid cooling (DLC) technology for AI servers using NVIDIA accelerated computing. The solution was tested at SoftBank’s data center, where it achieved a partial Power Usage Effectiveness (pPUE) of 1.03, meaning cooling overhead added only about 3% to the IT equipment’s power draw.
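For readers unfamiliar with the metric, pPUE is the ratio of total power within a measurement boundary (here, IT equipment plus cooling) to the IT power alone, so a value of 1.03 corresponds to a 3% cooling overhead. The sketch below illustrates the calculation with hypothetical rack figures; the power numbers are assumptions for illustration, not values reported by SoftBank.

```python
def partial_pue(it_power_kw: float, cooling_power_kw: float) -> float:
    """Partial PUE for a cooling boundary: (IT power + cooling power) / IT power."""
    return (it_power_kw + cooling_power_kw) / it_power_kw

# Hypothetical figures: a 100 kW rack of IT load with 3 kW spent on cooling
print(round(partial_pue(100.0, 3.0), 2))  # → 1.03
```

A pPUE of exactly 1.0 would mean the cooling system consumed no power at all, so 1.03 is close to the theoretical floor.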
The project marks the first implementation of ZutaCore’s two-phase DLC technology with NVIDIA H200 GPUs, according to ZutaCore research as of January 2025. SoftBank developed a rack-integrated solution that consolidates all server components, including the cooling system, at rack scale. The solution passed NVIDIA’s NVQual temperature test, confirming its compatibility, stability, and reliability.
As AI adoption increases, data center power consumption is expected to grow, prompting the need for more energy-efficient cooling technologies. Since May 2024, SoftBank has been working with ZutaCore to develop solutions aimed at reducing energy use in AI data centers. The newly developed rack-integrated solution is designed to enhance cooling efficiency while lowering power consumption.
The technology uses a two-phase DLC system that circulates an insulating dielectric fluid, which boils from liquid to gas at the semiconductor chips, absorbing heat as latent heat of vaporization. This phase change carries away heat more efficiently than sensible heating alone, reducing the flow rates the cooling loop must sustain and thereby lowering pumping power. Additionally, because the fluid is electrically insulating, a leak poses minimal risk of damage to server hardware.
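The flow-rate advantage described above comes from the physics of phase change: a single-phase coolant absorbs heat in proportion to its specific heat and allowable temperature rise, while a two-phase coolant absorbs its full latent heat of vaporization per unit mass. The sketch below compares the required mass flow for a given heat load under both schemes; the fluid properties used are rough illustrative values (water at about 4.18 kJ/(kg·K) with a 10 K rise, and a generic dielectric with roughly 100 kJ/kg latent heat), not ZutaCore specifications.

```python
def flow_single_phase(heat_kw: float, cp_kj_per_kg_k: float, delta_t_k: float) -> float:
    """Mass flow (kg/s) when heat is removed by sensible heating of the coolant."""
    return heat_kw / (cp_kj_per_kg_k * delta_t_k)

def flow_two_phase(heat_kw: float, latent_heat_kj_per_kg: float) -> float:
    """Mass flow (kg/s) when heat is absorbed by liquid-to-vapor phase change."""
    return heat_kw / latent_heat_kj_per_kg

# Illustrative comparison for a 1 kW chip (assumed properties, not vendor data):
water_flow = flow_single_phase(1.0, cp_kj_per_kg_k=4.18, delta_t_k=10.0)
dielectric_flow = flow_two_phase(1.0, latent_heat_kj_per_kg=100.0)
print(f"single-phase water: {water_flow:.4f} kg/s")
print(f"two-phase dielectric: {dielectric_flow:.4f} kg/s")
```

Under these assumed properties, the two-phase loop moves less than half the coolant mass for the same heat load, which is the source of the pumping-power savings the article describes.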
Foxconn developed an AI server optimized for this cooling technology, based on NVIDIA H200 GPUs; the server exceeded performance requirements in NVIDIA’s NVQual qualification test. SoftBank’s rack design is scalable for high-density AI server placement and aligns with global standards, accommodating both 19-inch servers and 21-inch servers conforming to the Open Compute Project’s Open Rack v3 (ORV3) standard.
SoftBank, ZutaCore, and Foxconn plan to continue developing solutions to improve efficiency in AI data centers and explore commercialization for deployment in global markets.