AI, quantum computing and chip leaps transcend science fiction
Hindustan Times

In my younger years, as a teen and perhaps even earlier, visual and written science fiction was a decidedly aspirational periscope. In just the past few days, Google has unveiled a quantum chip it describes as a breakthrough, Nvidia has unveiled an affordable yet astonishingly powerful generative AI supercomputer, OpenAI's o1 model is out of preview, and Google has raised that move with its own Gemini 2.0 update.

Capable of handling 67 trillion operations per second in a genuinely tiny form factor, Nvidia's machine is relevant for robotics, research and small businesses, and even for training advanced AI models and running lighter-weight AI applications.

Microsoft Azure describes the difference between a qubit and a bit this way: "A qubit, however, can represent a 0, a 1, or any proportion of 0 and 1 in superposition of both states, with a certain probability of being a 0 and a certain probability of being a 1."

Google claims its latest quantum chip performed a standard benchmark computation in under five minutes that would take one of today's fastest supercomputers 10 septillion years; the company specifically references ORNL's exascale supercomputer, which itself exceeds a quintillion calculations per second. That kind of capability is primed for scientists developing new technology and research in medicine, energy and materials.

Here is something that would likely escape wider attention: Google researchers say that the more qubits were used with the 105-qubit Willow quantum processor, the greater the reduction in errors.
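To make that Azure description of a qubit a little more concrete, here is a minimal sketch in plain Python with NumPy. It involves no quantum hardware or quantum SDK, and the amplitudes chosen are purely illustrative: it simply shows how a qubit's two amplitudes translate into the probabilities of measuring a 0 or a 1.

```python
import numpy as np

# A single qubit can be described by two amplitudes (alpha, beta)
# with |alpha|^2 + |beta|^2 = 1. On measurement it yields 0 with
# probability |alpha|^2 and 1 with probability |beta|^2.
alpha, beta = 1 / np.sqrt(2), 1 / np.sqrt(2)   # an equal superposition (illustrative)

probs = [abs(alpha) ** 2, abs(beta) ** 2]
rng = np.random.default_rng(seed=0)
samples = rng.choice([0, 1], size=1000, p=probs)

print(f"P(0) = {probs[0]:.2f}, P(1) = {probs[1]:.2f}")
print(f"Counts of 0s and 1s over 1000 simulated measurements: {np.bincount(samples)}")
```

With equal amplitudes, roughly half the simulated measurements come out 0 and half come out 1; change the amplitudes and the proportions shift accordingly, which is the "certain probability of being a 0 and a certain probability of being a 1" that the Azure description refers to.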