Nvidia’s $3,000 ‘Personal AI Supercomputer’ Will Let You Ditch the Data Center
Wired

Nvidia already sells boatloads of computer chips to every major company building proprietary artificial intelligence models. Its new $3,000 "personal AI supercomputer," the Digits machine, contains an Nvidia "superchip" called GB10 Grace Blackwell, optimized to accelerate the computations needed to train and run AI models, and comes equipped with 128 gigabytes of unified memory and up to 4 terabytes of NVMe storage for handling especially large AI programs.

Jensen Huang, founder and CEO of Nvidia, announced the new system, along with several other AI offerings, during a keynote speech today at CES, an annual confab for the computer industry held in Las Vegas.

Nvidia says the Digits machine, whose name stands for "deep learning GPU intelligence training system," will be able to run a single large language model with up to 200 billion parameters, a rough measure of a model's complexity and size. If two Digits machines are connected using a proprietary high-speed interconnect link, Nvidia says they will be able to run the most capable version available of Meta's open source Llama model, which has 405 billion parameters.
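Those parameter counts line up with the machine's memory in a simple way. A model's weights occupy roughly its parameter count times the bytes stored per parameter, so a 200-billion-parameter model quantized to 4 bits per weight needs about 100 GB, within one machine's 128 GB, while a 405-billion-parameter model needs about 203 GB, within the 256 GB of two linked machines. The sketch below illustrates that arithmetic; the 4-bit quantization figure is an assumption for illustration, not something Nvidia's announcement specifies.

```python
def model_memory_gb(params_billion: float, bits_per_param: float) -> float:
    """Back-of-envelope memory footprint of a model's weights in gigabytes.

    Ignores activation memory, KV cache, and runtime overhead, so real
    requirements are somewhat higher.
    """
    return params_billion * 1e9 * bits_per_param / 8 / 1e9


# A 200B-parameter model at an assumed 4-bit quantization:
# fits in one Digits machine's 128 GB of unified memory.
print(model_memory_gb(200, 4))  # 100.0

# Llama's 405B-parameter version at the same assumed precision:
# needs two linked machines (256 GB combined).
print(model_memory_gb(405, 4))  # 202.5
```

At full 16-bit precision the same arithmetic gives 400 GB for the 200B model, which is why aggressive quantization is what makes desktop-scale inference plausible at these memory sizes.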