One of the hidden aspects of the global AI race is who works with which processors, or chips. The AI developer with the fastest and most optimised processor will have a massive advantage over everyone else still using outdated hardware. Google now claims that its latest generation of Tensor Processing Units (TPUs) is the fastest, most energy-efficient and most optimised of these chips. Currently, most major development studios use NVIDIA's A100, the most commonly used processor for AI and machine-learning workloads. Google claims that its latest TPUs outperform NVIDIA's offering on every conceivable parameter. As good as NVIDIA's A100 processors are, they are not the best the company has to offer - that would be the H100.

Google's Tensor Processors make a massive jump

Google recently revealed new information about the supercomputers it uses to train its artificial intelligence models, claiming the systems are both faster and more power-efficient than comparable Nvidia Corp systems. Google designed its own custom processor, known as the Tensor Processing Unit, or TPU, and uses these chips for more than 90 per cent of the company's AI training - the process of feeding data through models to make them useful. The Google TPU is now in its fourth generation.

Google released a paper on Tuesday outlining how it connected over 4,000 of the chips into a supercomputer, using its own custom-developed optical switches to link individual machines. Because the large language models that power technologies like Google's Bard or OpenAI's ChatGPT have exploded in size, they are far too large to fit on a single chip. Improving these connections has therefore become a crucial point of rivalry among companies that build AI supercomputers.

The computational power needed to train AI models

Because such models cannot fit on one chip, they must be split across thousands of processors, which then have to work together for weeks or months to train the model. Google's PaLM model, the company's largest publicly disclosed language model to date, was trained over 50 days by splitting it across two of the 4,000-chip supercomputers.

Google says its supercomputers make it simple to reconfigure the connections between processors on the fly, helping to avoid problems and optimise performance. In a blog post about the system, Google Fellow Norm Jouppi and Google Distinguished Engineer David Patterson wrote, "Circuit switching makes it easy to route around failed components. We can even change the topology of the supercomputer interconnect to accelerate the performance of an ML (machine learning) model because of this flexibility."

A healthy line of processors in the pipeline

While Google is only now disclosing details about its supercomputer, the system has been operational since 2020 in a data centre in Mayes County, Oklahoma. Google said the system was used by the startup Midjourney to train its model, which generates new images from a few lines of text.

According to the paper, Google's chips are up to 1.7 times faster and 1.9 times more power-efficient than a system built on Nvidia's A100 chip, which was on the market at the same time as the fourth-generation TPU. Google said it did not compare its fourth-generation processor to Nvidia's current flagship H100 chip because the H100 was introduced after Google's chip and uses newer technology.
Google hinted that it is working on a new TPU to contend with the Nvidia H100, but gave no further details. All it said was that it has a healthy line of chips in the pipeline.
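To make the distributed-training idea described above a little more concrete, here is a minimal, purely illustrative sketch in JAX, the framework commonly used for TPU training. It shows how a single weight matrix can be sharded across whatever accelerator devices are available so that no one device holds the whole array. This is not Google's actual training code: the array sizes and the "model" axis name are invented for the example, and real systems shard entire models across thousands of chips rather than one small matrix.

```python
# Illustrative sketch (not Google's production code): shard a weight matrix
# across available devices with JAX so that no single device has to hold
# the whole array -- the same basic idea used to split large models
# across many TPU chips.
import jax
import jax.numpy as jnp
from jax.experimental import mesh_utils
from jax.sharding import Mesh, NamedSharding, PartitionSpec

# Build a 1-D mesh over whatever devices are present (TPU cores, GPUs, or CPU).
devices = mesh_utils.create_device_mesh((jax.device_count(),))
mesh = Mesh(devices, axis_names=("model",))

# A toy weight matrix; in practice this would be far too large for one chip.
weights = jnp.ones((1024, 1024))

# Split the matrix column-wise across the "model" axis of the device mesh.
sharded_weights = jax.device_put(
    weights, NamedSharding(mesh, PartitionSpec(None, "model"))
)

@jax.jit
def forward(x, w):
    # Each device multiplies against its own slice of the weights;
    # the compiler inserts any cross-device communication that is needed.
    return x @ w

x = jnp.ones((8, 1024))
y = forward(x, sharded_weights)
print(y.shape)  # (8, 1024)
```

The same pattern scales up: with more devices in the mesh and more sharded arrays, each chip stores and computes only its slice of the model, which is why the speed and topology of the links between chips matter so much.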