
Hewlett Packard Enterprise (HPE) has expanded its AI infrastructure offerings with deeper NVIDIA integrations and new enterprise-grade products. The company unveiled compute and software enhancements built around the NVIDIA RTX PRO 6000 Blackwell Server Edition GPU, and announced validated designs for the NVIDIA Enterprise AI Factory. These upgrades support the full AI lifecycle, from development to deployment, across industries. HPE's Private Cloud AI now supports feature branch model updates, enabling more flexible development environments. Combined with the new Alletra X10000 SDK for data pipelines, HPE is building intelligent, real-time AI solutions from core to edge.
HPE and NVIDIA Drive Next-Gen AI Infrastructure for Business
HPE continues to co-develop enterprise-ready AI systems with NVIDIA to support agentic and generative AI workloads at scale. The expanded HPE Private Cloud AI platform now includes support for NVIDIA AI Enterprise’s feature branch updates. Developers can test AI models before pushing them into production, accelerating iteration and innovation.
The platform integrates NVIDIA NIM microservices, SDKs, and pre-trained models. This setup helps businesses deploy multimodal AI quickly and securely. HPE’s approach ensures guardrails and governance remain intact throughout the development pipeline. These systems scale from on-premises to edge to hybrid cloud environments without disruption or re-engineering.
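NIM microservices package models behind an OpenAI-compatible HTTP API, which is what makes "deploy quickly" practical: once a container is serving, any standard client can query it. A minimal sketch of that interaction, assuming a NIM container is already running at a hypothetical local endpoint (`http://localhost:8000/v1`) and serving a Llama-family model; the endpoint URL and model name here are illustrative assumptions, not HPE-specific configuration:

```python
import json
import urllib.request

# Hypothetical local NIM endpoint; real deployments set their own host/port.
NIM_URL = "http://localhost:8000/v1/chat/completions"


def build_chat_request(prompt: str, model: str = "meta/llama-3.1-8b-instruct") -> dict:
    """Build an OpenAI-style chat-completion payload for a NIM endpoint."""
    return {
        "model": model,  # illustrative model name
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 128,
    }


def query_nim(prompt: str) -> str:
    """POST the prompt to the NIM service and return the generated text."""
    payload = json.dumps(build_chat_request(prompt)).encode("utf-8")
    req = urllib.request.Request(
        NIM_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

Because the request/response shape matches the OpenAI API, existing application code can usually point at a NIM service by changing only the base URL and model name.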
HPE Alletra Storage MP X10000 now includes a software development kit (SDK) for the NVIDIA AI Data Platform. The SDK boosts performance with remote direct memory access (RDMA) transfers between storage, system memory, and GPU resources. Customers gain real-time access to unstructured data for model training and inference, and the modular building blocks allow scaling based on capacity or performance needs. Together, HPE infrastructure and NVIDIA GPUs create a full-stack, validated AI factory for production-grade workloads. This joint platform empowers enterprises to unlock faster time-to-value, lower infrastructure costs, and competitive AI agility across diverse industries.
HPE Delivers AI Leadership Through Hardware and Software Synergy
HPE's ProLiant DL380a Gen12 server will now ship with up to 10 NVIDIA RTX PRO 6000 Blackwell GPUs. The system already ranks No. 1 in over 50 MLPerf benchmarks, delivering elite performance on GPT models, Llama 2, RetinaNet, and other high-demand workloads. The DL380a supports both air-cooled and direct liquid cooling options, maintaining peak performance under sustained load. Its iLO 7 firmware adds post-quantum cryptography and FIPS 140-3 Level 3 compliance, offering the security and compliance posture that regulated industries require. HPE Compute Ops Management delivers predictive AI-driven insights and full lifecycle automation for deployed AI systems.
In addition, HPE’s OpsRamp software now supports RTX PRO 6000 Blackwell. It offers real-time AI infrastructure observability, resource tracking, event automation, and predictive planning. OpsRamp collects GPU temperature, clock speed, usage, and fan speed data to inform operational decisions. IT teams can optimize workloads, reduce power consumption, and allocate resources dynamically. By unifying cutting-edge compute, storage, and software, HPE offers a complete solution to deploy and scale AI across cloud, data center, and edge. These innovations cement HPE’s leadership in the AI infrastructure space.
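OpsRamp's collectors aren't public, but the GPU telemetry listed above (temperature, clock speed, utilization, fan speed) is exactly the kind of data NVIDIA's NVML management library exposes. A minimal sketch using the `pynvml` bindings, with a simulated fallback so it stays runnable on machines without an NVIDIA GPU; the fallback numbers are placeholders, not real readings:

```python
def read_gpu_metrics(index: int = 0) -> dict:
    """Return temperature, SM clock, utilization, and fan speed for one GPU.

    Uses NVML via pynvml when available; otherwise returns simulated
    placeholder values so the sketch runs anywhere.
    """
    try:
        import pynvml

        pynvml.nvmlInit()
        handle = pynvml.nvmlDeviceGetHandleByIndex(index)
        metrics = {
            "temperature_c": pynvml.nvmlDeviceGetTemperature(
                handle, pynvml.NVML_TEMPERATURE_GPU
            ),
            "sm_clock_mhz": pynvml.nvmlDeviceGetClockInfo(
                handle, pynvml.NVML_CLOCK_SM
            ),
            "utilization_pct": pynvml.nvmlDeviceGetUtilizationRates(handle).gpu,
            "fan_speed_pct": pynvml.nvmlDeviceGetFanSpeed(handle),
        }
        pynvml.nvmlShutdown()
        return metrics
    except Exception:
        # Placeholder values for hosts without NVML or an NVIDIA GPU.
        return {
            "temperature_c": 45,
            "sm_clock_mhz": 1410,
            "utilization_pct": 30,
            "fan_speed_pct": 40,
        }
```

A monitoring agent would poll a function like this on an interval and ship the readings to its observability backend, which is the pattern the OpsRamp description implies.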
A Full-Stack Future for Enterprise AI at Scale
HPE and NVIDIA are building the future of enterprise AI through collaboration, hardware leadership, and full-stack integration. Their joint platform helps companies develop, test, and scale multimodal AI, agentic systems, and large-model inference. With powerful servers, secure software, and intelligent storage, HPE ensures every business can build its own AI factory. These solutions support real-time AI anywhere: cloud, edge, or on-premises. With launch dates starting in summer 2025, HPE is ready to deliver flexible, scalable AI infrastructure at enterprise scale.