Google says its AI supercomputer is faster, greener than Nvidia A100 chip
Alphabet Inc.'s Google released on Tuesday new details about the supercomputers it uses to train its artificial intelligence models, saying the systems are both faster and more power-efficient than comparable systems from Nvidia Corp.

Google has designed its own custom chip, the Tensor Processing Unit, or TPU. "Circuit switching makes it easy to route around failed components," Google Fellow Norm Jouppi and Google Distinguished Engineer David Patterson wrote in a blog post about the system.

In the paper, Google said that for comparably sized systems, its chips are up to 1.7 times faster and 1.9 times more power-efficient than a system built on Nvidia's A100 chip, which was on the market at the same time as the fourth-generation TPU.

Google hinted that it may be working on a new TPU to compete with Nvidia's H100 but provided no details; Jouppi told Reuters that Google has "a healthy pipeline of future chips."