
Google Claims Its Supercomputer Is Faster, More Power-Efficient Than Nvidia Systems

On Tuesday, Alphabet Inc's Google released new details about the supercomputers it uses to train its artificial intelligence models, claiming the systems are both faster and more power-efficient than comparable systems from Nvidia Corp. Google uses its custom-designed Tensor Processing Unit (TPU) chips for more than 90 per cent of its artificial intelligence training.

Improving the connections between chips has become a crucial point of competition among companies that build AI supercomputers, because the large language models that power technologies like Google's Bard or OpenAI's ChatGPT have exploded in size. In a blog post about the system, Google Fellow Norm Jouppi and Google Distinguished Engineer David Patterson wrote that "circuit switching makes it easy to route around failed components" and that "this flexibility even allows us to change the topology of the supercomputer interconnect to accelerate the performance of an ML model."

Google said its fourth-generation TPU is up to 1.7 times faster and 1.9 times more power-efficient than a system based on Nvidia's A100 chip, which was on the market at the same time.

ABP News
