Google Cloud Next — Google Cloud and NVIDIA today announced new AI infrastructure and software for customers to build and deploy massive models for generative AI and speed data science workloads.
In a fireside chat at Google Cloud Next, Google Cloud CEO Thomas Kurian and NVIDIA founder and CEO Jensen Huang discussed how the partnership is bringing end-to-end machine learning services to some of the largest AI customers in the world — including by making it easy to run AI supercomputers with Google Cloud offerings built on NVIDIA technologies. The new hardware and software integrations utilize the same NVIDIA technologies employed over the past two years by Google DeepMind and Google research teams.
“We’re at an inflection point where accelerated computing and generative AI have come together to speed innovation at an unprecedented pace,” Huang said. “Our expanded collaboration with Google Cloud will help developers accelerate their work with infrastructure, software and services that supercharge energy efficiency and reduce costs.”
“Google Cloud has a long history of innovating in AI to foster and speed innovation for our customers,” Kurian said. “Many of Google’s products are built and served on NVIDIA GPUs, and many of our customers are seeking out NVIDIA accelerated computing to power efficient development of LLMs to advance generative AI.”
NVIDIA Integrations to Speed AI and Data Science Development
Google’s framework for building massive large language models (LLMs), PaxML, is now optimized for NVIDIA accelerated computing.
Originally built to span multiple Google TPU accelerator slices, PaxML now enables developers to use NVIDIA® H100 and A100 Tensor Core GPUs for advanced and fully configurable experimentation and scale. A GPU-optimized PaxML container is available immediately in the NVIDIA NGC™ software catalog. In addition, PaxML runs on JAX, which has been optimized for GPUs leveraging the OpenXLA compiler.
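The PaxML experiment APIs themselves are not shown here, but the JAX-on-GPU execution model the paragraph describes can be sketched minimally: the same jitted function is compiled by XLA for whichever backend is available (GPU on an H100/A100 VM, otherwise CPU), with no code changes.

```python
# Minimal sketch of the JAX/XLA execution model that PaxML builds on.
# This is illustrative only — it does not use PaxML's own APIs. The same
# jitted function runs unchanged on CPU, GPU or TPU backends.
import jax
import jax.numpy as jnp

# Report which backend XLA selected (e.g. "gpu" on an H100 VM, "cpu" otherwise).
print("JAX default backend:", jax.default_backend())

@jax.jit  # XLA compiles this function for the active backend on first call
def dense_layer(x, w, b):
    return jax.nn.relu(x @ w + b)

key = jax.random.PRNGKey(0)
x = jax.random.normal(key, (8, 16))
w = jax.random.normal(key, (16, 4))
b = jnp.zeros(4)

y = dense_layer(x, w, b)
print("output shape:", y.shape)  # (8, 4)
```

Because compilation targets are resolved by XLA at runtime, frameworks layered on JAX in this way can move between TPU slices and NVIDIA GPUs without rewriting model code.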
Google DeepMind and other Google researchers are among the first to use PaxML with NVIDIA GPUs for exploratory research.
The NVIDIA-optimized PaxML container on the NGC registry is available now to researchers, startups and enterprises worldwide that are building the next generation of AI-powered applications.
Additionally, the companies announced Google’s integration of serverless Spark with NVIDIA GPUs through Google’s Dataproc service. This will help data scientists speed Apache Spark workloads to prepare data for AI development.
These new integrations are the latest in NVIDIA and Google’s extensive history of collaboration. They span hardware and software, including:
- Google Cloud A3 VMs powered by NVIDIA H100 — Google Cloud announced today that its purpose-built A3 VMs powered by NVIDIA H100 GPUs will be generally available next month, making NVIDIA’s AI platform more accessible for a broad set of workloads. Compared to the previous generation, A3 VMs offer 3x faster training and significantly improved networking bandwidth.
- NVIDIA H100 GPUs to power Google Cloud’s Vertex AI platform — H100 GPUs are expected to be generally available on Vertex AI in the coming weeks, enabling customers to quickly develop generative AI LLMs.
- Google Cloud to gain access to NVIDIA DGX™ GH200 — Google Cloud will be one of the first companies in the world to have access to the NVIDIA DGX GH200 AI supercomputer — powered by the NVIDIA Grace Hopper™ Superchip — to explore its capabilities for generative AI workloads.
- NVIDIA DGX Cloud Coming to Google Cloud — NVIDIA DGX Cloud AI supercomputing and software will be available to customers directly from their web browser to provide speed and scale for advanced training workloads.
- NVIDIA AI Enterprise on Google Cloud Marketplace — Users can access NVIDIA AI Enterprise, a secure, cloud native software platform that simplifies developing and deploying enterprise-ready applications including generative AI, speech AI, computer vision, and more.
- Google Cloud first to offer NVIDIA L4 GPUs — Earlier this year, Google Cloud became the first cloud provider to offer NVIDIA L4 Tensor Core GPUs with the launch of the G2 VM. NVIDIA customers switching to L4 GPUs from CPUs for AI video workloads can realize up to 120x higher performance with 99% better efficiency. L4 GPUs are used widely for image and text generation, as well as VDI and AI-accelerated audio/video transcoding.