
Kioxia has introduced its CD9P Series SSDs, engineered specifically for high-performance AI and HPC workloads. Powered by the 8th-generation BiCS FLASH™ with CMOS directly Bonded to Array (CBA) technology, these PCIe 5.0 NVMe drives deliver high speed, low latency, and improved power efficiency, with the goal of keeping GPUs fully utilized during AI model training and inference. With sequential read speeds of up to 14.8 GB/s and random read performance of 2.6 million input/output operations per second (IOPS), the drives sustain the high data throughput that large-scale AI applications demand, minimizing storage bottlenecks in next-generation AI infrastructure deployments.
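To put the headline numbers in perspective, here is a rough back-of-envelope sketch of what the quoted peak figures imply. The dataset size and the 4 KiB block size for random reads are assumptions for illustration; sustained real-world throughput will vary with queue depth, block size, and filesystem.

```python
# Back-of-envelope sketch using the quoted peak figures for the CD9P Series.
# Assumes the peak rates are sustained, which real workloads rarely achieve.

SEQ_READ_GBPS = 14.8      # quoted peak sequential read, GB/s
RAND_READ_IOPS = 2.6e6    # quoted peak random read IOPS (4 KiB block size assumed)

def stream_time_seconds(dataset_tb: float, read_gbps: float = SEQ_READ_GBPS) -> float:
    """Seconds to read dataset_tb terabytes at a sustained rate of read_gbps GB/s."""
    return dataset_tb * 1000 / read_gbps

# Streaming a hypothetical 10 TB training corpus from a single drive:
print(f"{stream_time_seconds(10):.0f} s")   # ~676 s, a little over 11 minutes

# Small-block throughput implied by the random read IOPS figure:
rand_gbps = RAND_READ_IOPS * 4096 / 1e9
print(f"{rand_gbps:.1f} GB/s")              # ~10.6 GB/s at 4 KiB per I/O
```

The point of the sketch is scale: even a multi-terabyte corpus can be re-streamed from a single drive in minutes rather than hours, which is what keeps GPU pipelines fed.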
Powering the Demands of AI Workloads
AI workloads increasingly demand ultra-fast, high-capacity storage to feed GPUs with continuous data streams. Kioxia’s CD9P SSDs are tailored to that need, offering up to 61.44TB of capacity and blazing read/write performance to support training and inference of large AI models. The drives achieve up to a 125% improvement in random write speeds and a 100% boost in performance-per-watt compared to previous models. Their CBA-based design also marks a significant advance in thermal management and energy efficiency, both critical in AI data centers where performance density and sustainability are key concerns.
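The percentage claims are easy to misread, so a minimal sketch of what they mean in relative terms may help. The baseline values are hypothetical (normalized to 1.0); only the ratios come from the quoted figures.

```python
# Sketch: interpreting the quoted generational improvements as ratios.
# Baselines are normalized to 1.0; absolute figures are not given here.

prev_random_write = 1.0
cd9p_random_write = prev_random_write * (1 + 1.25)    # "up to 125% improvement" = 2.25x

prev_perf_per_watt = 1.0
cd9p_perf_per_watt = prev_perf_per_watt * (1 + 1.00)  # "100% boost" = 2x

# Doing the same work at 2x performance-per-watt implies half the energy:
energy_ratio = prev_perf_per_watt / cd9p_perf_per_watt
print(cd9p_random_write, energy_ratio)  # 2.25 0.5
```

In other words, a 100% performance-per-watt boost halves the energy cost of a fixed workload, which is where the cooling and total-cost-of-ownership gains come from.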
For AI developers, this translates into more predictable performance on intensive jobs such as fine-tuning language models, running image-generation pipelines, and real-time analytics. The SSDs also support post-quantum cryptography conforming to CNSA 2.0, helping ensure data integrity in future AI systems. Available in EDSFF E3.S and 2.5-inch form factors, the CD9P Series fits neatly into scalable AI servers.
Kioxia’s SSDs meet the needs of industries such as finance, healthcare, and automotive, where AI workloads demand high throughput and ultra-low latency. As AI workloads grow larger and more complex, drives like these are essential for keeping GPU pipelines busy, accelerating time-to-insight, and enabling continuous AI innovation at scale across enterprise and research settings.
Future-Proofing AI Infrastructure with CBA SSDs
Kioxia’s CD9P Series isn’t just about speed; it’s about sustainability, security, and scalability in AI systems. The CBA (CMOS Bonded to Array) technology offers better energy performance and thermal management, helping AI data centers reduce cooling requirements and total cost of ownership. For power-hungry tasks like LLM inference or generative video rendering, these efficiencies can translate into major gains. Security is also at the forefront. With support for CNSA 2.0 algorithms like LMS digital signatures and AES-256 encryption, CD9P SSDs prepare infrastructure for quantum-era threats, especially relevant for government and regulated AI environments. Whether training autonomous systems or deploying large models across edge environments, reliability and trust are non-negotiable.
On the deployment front, compatibility with Open Compute Project specs ensures seamless integration into hyperscale architectures. Mixed-use (3 DWPD) and read-intensive (1 DWPD) endurance options also give flexibility for varied AI workflows, from heavy writes in training environments to massive reads during inference. This next-gen storage architecture reflects Kioxia’s strategy to become a core enabler of AI innovation. In the race to build faster, smarter, and more secure models, robust storage systems like the CD9P will serve as the backbone, delivering scalable performance and future-ready design for mission-critical AI applications.
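The DWPD (drive writes per day) figures translate directly into lifetime write budgets. The sketch below assumes the common five-year warranty window; note that in practice the mixed-use variant may ship at a different maximum capacity than the 61.44TB read-intensive model, so the 3 DWPD line is illustrative only.

```python
# Sketch: total petabytes written (PBW) implied by a DWPD endurance rating.
# Assumes a 5-year warranty window, which is typical for enterprise SSDs.

def lifetime_writes_pb(capacity_tb: float, dwpd: float, warranty_years: float = 5) -> float:
    """Total PB writable: full-capacity writes per day, every day, over the warranty."""
    return capacity_tb * dwpd * 365 * warranty_years / 1000

# Read-intensive (1 DWPD) vs mixed-use (3 DWPD) at the 61.44 TB top capacity:
print(f"{lifetime_writes_pb(61.44, 1):.1f} PBW")  # ~112.1 PBW
print(f"{lifetime_writes_pb(61.44, 3):.1f} PBW")  # ~336.4 PBW
```

This is why endurance class matters for workflow fit: write-heavy training checkpointing consumes the budget far faster than read-dominated inference serving.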
AI Infrastructure Starts with Storage
AI systems are only as fast as their slowest component. Kioxia’s CD9P SSDs eliminate storage bottlenecks, feeding high-speed data directly to GPUs and unlocking next-level AI performance. These drives are designed for future-proof scalability, energy efficiency, and cryptographic resilience to meet the real-time demands of AI and HPC workloads. With capacity up to 61.44TB and blazing-fast IOPS, they’re built to sustain the data needs of multi-modal models and continuous AI deployment. As AI continues to scale globally, Kioxia positions itself not just as a memory supplier but as a foundational partner in the AI infrastructure ecosystem.