Google Cloud Expands GPU Portfolio with Nvidia Tesla V100
Google brings the Nvidia Tesla V100 to the Google Cloud Platform
Google has expanded its GPU portfolio with one of the fastest and most powerful GPUs out there: the Nvidia Tesla V100. The Tesla V100 is now available in beta on Google Compute Engine and Google Kubernetes Engine. Google announced the expansion on Monday, noting that while the V100 is in public beta, the less powerful Tesla P100 GPUs are now generally available.
The Nvidia Tesla V100 is one of the most powerful GPUs available today, offering roughly the processing power of 100 CPUs. That makes it well suited to customers running demanding applications such as machine learning, video processing and analytics.
Google the first to get Nvidia Tesla V100 GPU?
Google is not the first to offer the all-powerful Nvidia Tesla V100 GPU. Other companies like IBM and AWS opted for the V100's high processing power a while back, and it is available in private preview on Azure.
Offering of Nvidia Tesla V100 to Customers:
Google customers can now attach as many as eight Tesla V100 GPUs to a machine with 96 vCPUs and 624 GB of system memory, with NVLink interconnects providing up to 300 GB per second of GPU-to-GPU bandwidth. Google says these speeds can increase performance on deep learning and HPC workloads by about 40%.
As for Kubernetes Engine, the Nvidia Tesla V100 GPUs can be used to turbocharge containers, and GPU nodes can be scaled up and down automatically with Google's Cluster Autoscaler.
Coming to the Tesla P100, customers can attach up to four P100 GPUs to a machine with 96 vCPUs and 624 GB of system memory.
Google a bit late to the game in offering Nvidia Tesla V100 GPUs?
The Tesla V100 is among the most powerful GPUs in Nvidia's lineup, and a number of companies already have it in use. It has been on the market for quite some time now, so Google has been a little late in getting it on board: AWS and IBM are already on the Tesla V100 bandwagon, and it is available in private preview on Azure.
Google has also mentioned that it uses NVLink, Nvidia's fast interconnect for multi-GPU processing, though Google's competitors offer this too. As mentioned earlier, these interconnects can boost performance by up to 40% on some workloads, making them faster than PCIe connections.
Cost of using the Nvidia Tesla V100 GPUs:
An hour of using the Tesla V100 will set you back $2.48, while an hour's usage of the Tesla P100 will cost you $1.46; these are the standard prices. This comes in addition to the fee Google charges for running your regular virtual machines or containers.
The Tesla V100 is currently available in configurations of one or eight GPUs, with support for two- and four-GPU configurations coming in the future. The P100, on the other hand, comes in configurations of one, two or four GPUs.
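To put the standard prices above in perspective, here is a minimal back-of-the-envelope sketch in Python. It assumes the quoted hourly rates are charged per GPU and deliberately excludes the separate charges for the underlying virtual machine or containers.

```python
# Rough GPU-only cost estimate for Google Cloud, using the standard
# per-hour rates quoted above. Assumption: prices are billed per GPU;
# VM/container charges are extra and not included here.

V100_PER_HOUR = 2.48  # USD, standard price
P100_PER_HOUR = 1.46  # USD, standard price

def gpu_cost(price_per_hour: float, gpus: int, hours: float) -> float:
    """Return the GPU-only cost for a given configuration and duration."""
    return round(price_per_hour * gpus * hours, 2)

# Example: a full eight-V100 configuration for one hour.
print(gpu_cost(V100_PER_HOUR, gpus=8, hours=1))   # 19.84
# Example: four P100s running a 10-hour batch job.
print(gpu_cost(P100_PER_HOUR, gpus=4, hours=10))  # 58.4
```

So a maxed-out eight-V100 machine costs just under $20 per hour in GPU charges alone, before the VM itself is billed.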