HPE Discover 2024—Hewlett Packard Enterprise (NYSE: HPE) and NVIDIA today announced NVIDIA AI Computing by HPE, a portfolio of co-developed AI solutions and joint go-to-market integrations that enable enterprises to accelerate adoption of generative AI.
Among the portfolio’s key offerings is HPE Private Cloud AI, a first-of-its-kind solution that provides the deepest integration to date of NVIDIA AI computing, networking and software with HPE’s AI storage, compute and the HPE GreenLake cloud. The offering enables enterprises of every size to gain an energy-efficient, fast and flexible path for sustainably developing and deploying generative AI applications. HPE Private Cloud AI includes a self-service cloud experience with full lifecycle management, is powered by the new OpsRamp AI copilot that helps IT operations teams improve workload and IT efficiency, and is available in four right-sized configurations to support a broad range of AI workloads and use cases.
All NVIDIA AI Computing by HPE offerings and services will be available through a joint go-to-market strategy that spans sales teams and channel partners, training and a global network of system integrators — including Deloitte, HCLTech, Infosys, TCS and Wipro — that can help enterprises across a variety of industries run complex AI workloads.
Announced during the HPE Discover keynote by HPE President and CEO Antonio Neri, who was joined by NVIDIA founder and CEO Jensen Huang, NVIDIA AI Computing by HPE marks the expansion of a decades-long partnership and reflects the substantial commitment of time and resources from each company.
“Generative AI holds immense potential for enterprise transformation, but the complexities of fragmented AI technology contain too many risks and barriers that hamper large-scale enterprise adoption and can jeopardize a company’s most valuable asset — its proprietary data,” said Neri. “To unleash the immense potential of generative AI in the enterprise, HPE and NVIDIA co-developed a turnkey private cloud for AI that will enable enterprises to focus their resources on developing new AI use cases that can boost productivity and unlock new revenue streams.”
“Generative AI and accelerated computing are fueling a fundamental transformation as every industry races to join the industrial revolution,” said Huang. “Never before have NVIDIA and HPE integrated our technologies so deeply — combining the entire NVIDIA AI computing stack along with HPE’s private cloud technology — to equip enterprise clients and AI professionals with the most advanced computing infrastructure and services to expand the frontier of AI.”
HPE and NVIDIA co-developed Private Cloud AI portfolio
HPE Private Cloud AI delivers a unique, cloud-based experience to accelerate innovation and return on investment while managing enterprise risk from AI. The solution offers:
● Support for inference, fine-tuning and RAG AI workloads that utilize proprietary data.
● Enterprise control for data privacy, security, transparency and governance requirements.
● Cloud experience with ITOps and AIOps capabilities to increase productivity.
● A fast path to flexible consumption that meets future AI opportunities and growth.
Curated AI and data software stack in HPE Private Cloud AI
The foundation of the AI and data software stack starts with the NVIDIA AI Enterprise software platform, which includes NVIDIA NIM™ inference microservices.
NVIDIA AI Enterprise accelerates data science pipelines and streamlines the development and deployment of production-grade copilots and other GenAI applications. Included with NVIDIA AI Enterprise, NVIDIA NIM delivers easy-to-use microservices for optimized AI model inferencing, offering a smooth transition from prototype to secure deployment of AI models across a variety of use cases.
Complementing NVIDIA AI Enterprise and NVIDIA NIM, HPE AI Essentials software delivers a ready-to-run set of curated AI and data foundation tools with a unified control plane. It provides adaptable solutions, ongoing enterprise support and trusted AI services, such as data and model compliance, along with extensible features that keep AI pipelines compliant, explainable and reproducible throughout the AI lifecycle.
To deliver optimal performance for the AI and data software stack, HPE Private Cloud AI delivers a fully integrated AI infrastructure stack that includes NVIDIA Spectrum-X™ Ethernet networking, HPE GreenLake for File Storage and HPE ProLiant servers with support for NVIDIA L40S, NVIDIA H100 NVL Tensor Core GPUs and the NVIDIA GH200 NVL2 platform.
Cloud experience enabled by HPE GreenLake cloud
HPE Private Cloud AI offers a self-service cloud experience enabled by HPE GreenLake cloud. Through a single, platform-based control plane, HPE GreenLake cloud services provide manageability and observability to automate, orchestrate and manage endpoints, workloads and data across hybrid environments. This includes sustainability metrics for workloads and endpoints.
HPE GreenLake cloud and OpsRamp AI infrastructure observability and copilot assistant
OpsRamp's IT operations capabilities are integrated with HPE GreenLake cloud to deliver observability and AIOps for all HPE products and services. OpsRamp now provides observability for the end-to-end NVIDIA accelerated computing stack, including NVIDIA NIM and AI software, NVIDIA Tensor Core GPUs and AI clusters as well as NVIDIA Quantum InfiniBand and NVIDIA Spectrum Ethernet switches. IT administrators can gain insights to identify anomalies and monitor their AI infrastructure and workloads across hybrid, multi-cloud environments.
The new OpsRamp operations copilot utilizes NVIDIA’s accelerated computing platform to analyze large datasets for insights with a conversational assistant, boosting productivity for operations management. OpsRamp will also integrate with CrowdStrike APIs so customers can see a unified service map view of endpoint security across their entire infrastructure and applications.
Accelerate time to value with AI — expanded collaboration with global system integrators
To accelerate time to value for enterprises developing industry-focused AI solutions and use cases with clear business benefits, Deloitte, HCLTech, Infosys, TCS and Wipro announced their support of the NVIDIA AI Computing by HPE portfolio and HPE Private Cloud AI as part of their strategic AI solutions and services.
HPE adds support for NVIDIA’s latest GPUs, CPUs and Superchips
● HPE Cray XD670 supports eight NVIDIA H200 NVL Tensor Core GPUs and is ideal for LLM builders.
● HPE ProLiant DL384 Gen12 server with NVIDIA GH200 NVL2 is ideal for LLM consumers using larger models or RAG.
● HPE ProLiant DL380a Gen12 server, with support for up to eight NVIDIA H200 NVL Tensor Core GPUs, is ideal for LLM users looking for the flexibility to scale their GenAI workloads.
● HPE will be a time-to-market partner supporting the NVIDIA GB200 NVL72 / NVL2, as well as the new NVIDIA Blackwell, NVIDIA Rubin and NVIDIA Vera architectures.
High-density file storage certified for NVIDIA DGX BasePOD and NVIDIA OVX systems
HPE GreenLake for File Storage has achieved NVIDIA DGX BasePOD certification and NVIDIA OVX™ storage validation, providing customers with a proven enterprise file storage solution for accelerating AI, GenAI and GPU-intensive workloads at scale. HPE will be a time-to-market partner on upcoming NVIDIA reference architecture storage certification programs.
Availability
● HPE Private Cloud AI is expected to be generally available in the fall.
● HPE ProLiant DL380a Gen12 server with NVIDIA H200 NVL Tensor Core GPUs is expected to be generally available in the fall.
● HPE ProLiant DL384 Gen12 server with dual NVIDIA GH200 NVL2 is expected to be generally available in the fall.
● HPE Cray XD670 server with NVIDIA H200 NVL is expected to be generally available in the summer.