KDDI Corporation, a Japanese telecommunications operator, and Hewlett Packard Enterprise (HPE) have announced a strategic partnership to launch a next-generation AI data center in Sakai City, Osaka Prefecture, with operations scheduled to begin in early 2026. The facility will use Nvidia’s Blackwell architecture and direct liquid cooling to serve Japan’s growing AI market, positioning both companies to capture demand from startups, enterprises, and research institutions building AI applications and large language models.
What you should know: The Osaka Sakai data center will be powered by HPE’s rack-scale system featuring the Nvidia GB200 NVL72 platform, specifically designed for high-performance AI workloads.
- The system incorporates direct liquid cooling, which removes heat more efficiently than air cooling, cutting the facility’s energy consumption while sustaining performance for demanding AI computations.
- KDDI plans to deliver cloud-based AI compute services through its WAKONX platform, targeting Japan’s AI-driven digital economy with low-latency inferencing capabilities.
- The facility will support customers in building large language models and scaling AI applications across various industries.
Technical specifications: The Nvidia GB200 NVL72 system comes equipped with cutting-edge networking capabilities designed for enterprise-scale AI clusters.
- The platform includes Nvidia-accelerated networking with Quantum-2 InfiniBand, Spectrum-X Ethernet, and BlueField-3 DPUs for high-performance network connectivity.
- Customers will be able to run the Nvidia AI Enterprise platform on KDDI’s infrastructure to accelerate development and deployment processes.
- The rack-scale design is optimized for energy efficiency while handling large and complex AI workloads.
In plain English: Think of this setup like a supercharged computer room specifically built for AI tasks.
- Instead of traditional air conditioning, it uses liquid cooling (like a high-end gaming PC) to keep everything running smoothly while using less energy.
- The networking equipment acts like an ultra-fast highway that lets different parts of the AI system communicate almost instantly, which is essential when training complex AI models that process massive amounts of data in parallel.
Why this matters: Japan’s AI infrastructure development is accelerating as demand for sophisticated computing power grows across industries.
- The collaboration addresses growing requirements for low-latency inferencing and energy-efficient infrastructure as AI workloads become more complex and resource-intensive.
- The partnership strengthens Japan’s position in the global AI race by providing domestic access to cutting-edge computing capabilities without relying solely on international cloud providers.
What they’re saying: HPE’s leadership emphasized the strategic importance of the Japanese market for AI innovation.
- “Our collaboration with KDDI marks a pivotal milestone in supporting Japan’s AI innovation, delivering powerful computing capabilities that will enable smarter solutions,” said Antonio Neri, president and CEO of HPE.
Broader context: This announcement follows HPE and Nvidia’s recent unveiling of new AI factory offerings at HPE Discover 2025 in Las Vegas.
- The expanded portfolio includes HPE’s AI-ready RTX PRO Servers and the next generation of HPE Private Cloud AI, branded as Nvidia AI Computing by HPE.
- These offerings combine Nvidia’s latest technologies—including Blackwell accelerated computing, Spectrum-X Ethernet, and BlueField-3 networking—with HPE’s comprehensive server, storage, software, and services ecosystem.
- The revamped HPE Private Cloud AI platform, co-developed with Nvidia and validated under the Nvidia Enterprise AI Factory framework, delivers full-stack solutions for enterprises seeking to harness generative and agentic AI capabilities.