New AI benchmarks test speed of running AI applications
San Francisco: Artificial intelligence group MLCommons unveiled two new benchmarks that it said can help determine how quickly top-of-the-line hardware and software can run AI applications.

Because the underlying models must respond to many more queries to power AI applications such as chatbots and search engines, MLCommons developed two new versions of its MLPerf benchmarks to gauge speed. One of the new benchmarks is based on Meta's Llama 3.1 405-billion-parameter AI model and tests general question answering, math and code generation.

On the new test, Nvidia's latest generation of artificial intelligence servers, called Grace Blackwell, which contain 72 Nvidia graphics processing units, were 2.8 to 3.4 times faster than the previous generation, even when only eight GPUs in the newer server were used to make a direct comparison with the older model, the company said at a briefing on Tuesday.
