NVIDIA GPUs for Google Cloud

The NVIDIA A100 Tensor Core GPU has landed on Google Cloud.

Available in alpha on Google Compute Engine just over a month after its introduction, A100 has come to the cloud faster than any NVIDIA GPU in history.

Today’s introduction of the Accelerator-Optimized VM (A2) instance family featuring A100 makes Google the first cloud service provider to offer the new NVIDIA GPU.

Built on the newly introduced NVIDIA Ampere architecture, the A100 delivers NVIDIA’s greatest generational leap ever. It boosts training and inference computing performance by up to 20x over its predecessors, providing tremendous speedups for the workloads that power the AI revolution.

“Google Cloud customers often look to us to provide the latest hardware and software services to help them drive innovation on AI and scientific computing workloads,” said Manish Sainani, director of Product Management at Google Cloud. “With our new A2 VM family, we are proud to be the first major cloud provider to offer NVIDIA A100 GPUs, just as we were with NVIDIA T4 GPUs. We are excited to see what our customers will do with these new capabilities.”

In cloud data centers, A100 can power a broad range of compute-intensive applications, including AI training and inference, data analytics, scientific computing, genomics, edge video analytics, 5G services, and more.

Fast-growing, critical industries will be able to accelerate their discoveries with the breakthrough performance of A100 on Google Compute Engine. From scaling up AI training and scientific computing, to scaling out inference applications, to enabling real-time conversational AI, A100 accelerates complex and unpredictable workloads of all sizes running in the cloud.

NVIDIA CUDA 11, coming to general availability soon, makes the new capabilities of NVIDIA A100 GPUs accessible to developers, including Tensor Cores, mixed-precision modes, Multi-Instance GPU (MIG), advanced memory management, and standard C++/Fortran parallel language constructs.
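
To make the mixed-precision part concrete, here is a minimal CUDA sketch (not from the article, and not vendor sample code) that uses the half-precision types and intrinsics CUDA exposes. In practice, Tensor Core acceleration on A100 usually comes through libraries such as cuBLAS and cuDNN or the WMMA API; the FP16 arithmetic below simply illustrates the mixed-precision building blocks involved.

```cuda
// Sketch: half-precision (FP16) multiply-add on the GPU.
// Assumed build command: nvcc -arch=sm_80 axpy_fp16.cu -o axpy_fp16
#include <cuda_fp16.h>
#include <cstdio>
#include <vector>

__global__ void axpy_half(int n, __half a, const __half *x, __half *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        // __hfma is a half-precision fused multiply-add: y = a*x + y
        y[i] = __hfma(a, x[i], y[i]);
    }
}

int main() {
    const int n = 1 << 20;
    std::vector<__half> hx(n, __float2half(1.0f));
    std::vector<__half> hy(n, __float2half(2.0f));

    __half *dx, *dy;
    cudaMalloc(&dx, n * sizeof(__half));
    cudaMalloc(&dy, n * sizeof(__half));
    cudaMemcpy(dx, hx.data(), n * sizeof(__half), cudaMemcpyHostToDevice);
    cudaMemcpy(dy, hy.data(), n * sizeof(__half), cudaMemcpyHostToDevice);

    axpy_half<<<(n + 255) / 256, 256>>>(n, __float2half(3.0f), dx, dy);
    cudaMemcpy(hy.data(), dy, n * sizeof(__half), cudaMemcpyDeviceToHost);

    printf("y[0] = %f (expected 5.0)\n", __half2float(hy[0]));
    cudaFree(dx);
    cudaFree(dy);
    return 0;
}
```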

Breakthrough A100 Performance in the Cloud for Every Workload Size
The new A2 VM instances can deliver different levels of performance to efficiently accelerate workloads across CUDA-enabled machine learning training and inference, data analytics, as well as high-performance computing.

For large, demanding workloads, Google Compute Engine offers customers the a2-megagpu-16g instance, which comes with 16 A100 GPUs, offering a total of 640 GB of GPU memory and 1.3 TB of system memory, all connected through NVSwitch with up to 9.6 TB/s of aggregate bandwidth.
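
As a rough illustration of what that topology looks like from inside the VM (a sketch under the assumption that the CUDA driver and toolkit are installed, not Google's or NVIDIA's sample code), a small CUDA program can enumerate the attached GPUs and check whether peer-to-peer access is available between them; on a2-megagpu-16g that peer traffic is carried over NVLink/NVSwitch.

```cuda
// Sketch: enumerate GPUs in the VM and probe peer-to-peer access.
// Assumed build command: nvcc p2p_probe.cu -o p2p_probe
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaGetDeviceCount(&count);
    printf("Visible GPUs: %d\n", count);

    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        printf("GPU %d: %s, %.1f GB\n", i, prop.name,
               prop.totalGlobalMem / (1024.0 * 1024.0 * 1024.0));
    }

    // Peer access lets one GPU read/write another GPU's memory directly.
    for (int i = 0; i < count; ++i) {
        for (int j = 0; j < count; ++j) {
            if (i == j) continue;
            int canAccess = 0;
            cudaDeviceCanAccessPeer(&canAccess, i, j);
            if (canAccess) {
                cudaSetDevice(i);
                cudaDeviceEnablePeerAccess(j, 0);  // flags must be 0
            }
            printf("P2P %d -> %d: %s\n", i, j, canAccess ? "yes" : "no");
        }
    }
    return 0;
}
```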

For those with smaller workloads, Google Compute Engine also offers A2 VMs in smaller configurations to match specific applications’ needs.

Google Cloud announced that additional NVIDIA A100 GPU support is coming soon to Google Kubernetes Engine, Cloud AI Platform, and other Google Cloud services. For more information, including technical details on the new A2 VM family and how to sign up for access, visit the Google Cloud blog.


Read More:

NVIDIA GeForce gets Divinity

NVIDIA CloudXR for VR and AR

Nvidia Ampere A100 GPU for super computers
