#MLPerf

MLPerf Inference v6.0 drops next Wednesday.

New benchmarks for text-to-video, speculative decoding, and VLMs.

Are you ready? #MLPerf

#MLPerf Inference v6.0 adds two LLM benchmarks: GPT-OSS 120B, a new open-weight MoE model benchmark covering math, coding & science reasoning, plus a DeepSeek-R1 interactive scenario — the first #MLPerf standard for speculative decoding.
https://bit.ly/4m19oCl
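Speculative decoding, the technique the DeepSeek-R1 interactive scenario standardizes, has a small draft model propose several tokens that the large target model then verifies in a single pass. A minimal greedy sketch (the toy `draft` and `target` functions below are illustrative stand-ins, not MLPerf code):

```python
def speculative_step(prefix, draft, target, k):
    """One speculative-decoding step: draft proposes k tokens,
    target verifies them and accepts the longest agreeing prefix."""
    # Draft phase: the cheap model proposes k tokens autoregressively.
    ctx = list(prefix)
    proposal = []
    for _ in range(k):
        t = draft(ctx)
        proposal.append(t)
        ctx.append(t)

    # Verify phase: the target checks each proposed token in order.
    ctx = list(prefix)
    accepted = []
    for t in proposal:
        v = target(ctx)
        if v != t:
            accepted.append(v)  # replace the first mismatch with the target's token
            break
        accepted.append(t)
        ctx.append(t)
    else:
        accepted.append(target(ctx))  # all k accepted: emit one bonus token
    return accepted

# Toy stand-in "models" over integer tokens (illustrative only).
target = lambda ctx: (len(ctx) * 2) % 5
good_draft = target                             # always agrees: k+1 tokens/step
bad_draft = lambda ctx: (target(ctx) + 1) % 5   # always disagrees: 1 token/step
```

The payoff is in the two extremes: when the draft agrees, one verification pass yields k+1 tokens; when it never agrees, output degrades to one (still correct) token per step, never worse than plain decoding.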

Standardizing Generative AI Service Evaluation: An API-Centric Benchmarking Approach - MLCommons MLPerf® Endpoints brings API-native benchmarking, Pareto curve visualizations, and rolling submissions to generative AI infrastructure evaluation.

GenAI inference doesn't behave like classical ML. MLPerf® Endpoints is being designed to benchmark the full complexity of production GenAI services — not just peak numbers. mlcommons.org/2026/03/mlperf-endpoints... #MLPerf #AIBenchmarking
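The Pareto-curve idea behind MLPerf Endpoints is simple to state: from a set of measured operating points, keep only those that no other point beats on both latency and throughput. A minimal sketch (the sample numbers and field layout are made up for illustration):

```python
def pareto_frontier(points):
    """Return the non-dominated points, minimizing latency and
    maximizing throughput. Input: iterable of (latency, throughput)."""
    frontier = []
    best_throughput = float("-inf")
    # Sweep in order of increasing latency; a point survives only if it
    # improves on the best throughput seen at any lower latency.
    for latency, throughput in sorted(points):
        if throughput > best_throughput:
            frontier.append((latency, throughput))
            best_throughput = throughput
    return frontier

# Hypothetical measurements: (latency_ms, tokens_per_sec)
measurements = [(10, 100), (15, 90), (20, 150), (30, 140)]
```

Here (15, 90) is dominated by (10, 100) and (30, 140) by (20, 150), so only two points survive; plotting the frontier is what turns "peak numbers" into a curve of latency/throughput trade-offs.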


It’s today! 🎤

David Kanter is on stage at #GTC2026 in just a few hours — MLPerf Endpoints, the API-centric way to benchmark any Gen AI system.

📅 12–12:40 PM · San Jose Convention Center
🔗 https://bit.ly/4l5p8Uh
#MLPerf #MLCommons


Tomorrow at #GTC2026 — David Kanter on MLPerf Endpoints.

✅ Any API, any system
✅ Standardized Pareto curves
✅ Rolling submissions
✅ Transparent & reproducible

📅 March 19 · 12–12:40 PM · San Jose
🔗 https://bit.ly/4l5p8Uh
#MLPerf #MLCommons


🎉 MLPerf Mobile is now on the iOS App Store!
Run industry-standard ML benchmarks right from your iPhone.
See how your device performs — download it now 📲
https://apple.co/4bvbfdI
#MLPerf #AI #MachineLearning

Standardize Gen AI Service Evaluation: An API-Centric Benchmarking Approach Evaluating Gen AI services across heterogeneous environments presents significant visibility gaps for developers, as traditional hardware-centric b...

Enterprise buyers, infra providers, model developers — all need AI benchmarks they can trust.

MLPerf Endpoints: standardized Pareto curves, rolling submissions, transparent results.

David Kanter at #GTC2026 · Thursday · 12–12:40 PM · San Jose
🔗 https://bit.ly/4l5p8Uh
#MLPerf #MLCommons


#GTC2026 is live! 🎉

Join us Thursday when David Kanter presents MLPerf Endpoints — API-centric benchmarking built for the Gen AI era.

📅 March 19 · 12–12:40 PM · San Jose
🔗 www.nvidia.com/gtc/session-catalog/sess...
#MLPerf #MLCommons #NVIDIA


How do you benchmark an AI system you can’t look inside?

MLPerf Endpoints: any system, any cloud, any model — one methodology.

David Kanter at #GTC2026 · March 19 · 12–12:40 PM · San Jose
🔗 www.nvidia.com/gtc/session-catalog/sess...
#MLPerf #MLCommons


🎤 David Kanter is speaking at #GTC2026!

"If It Has an API, We Can Measure It: MLPerf Enters the Gen AI Era"

📅 March 19 · 12–12:40 PM · San Jose Convention Center
🔗 https://bit.ly/4l5p8Uh
#MLPerf #MLCommons #GenAI


Results day is coming.
MLPerf Inference v6.0 drops April 1 — cross-platform AI inference data spanning datacenter, edge & more. Follow so you don't miss it.
#MLPerf #AIInference


Just saw NVIDIA’s NVFP4 recipe slash training time and costs on Blackwell Ultra GPUs—MLPerf scores are soaring and Llama 3.1 trains faster than ever. Want the nitty‑gritty on how GPU acceleration is reshaping LLM training? Dive in! #NVFP4 #MLPerf #Llama3_1

🔗 aidailypost.com/news/nvidias...


1/4 🧵
First Qwen model in MLPerf.
40M products daily.
Real production data from Shopify's e-commerce infrastructure.
Submit by Feb 13, 2026 👇
#MLPerf #Shopify #VLM #MLCommons


🚀 NEW: MLPerf Inference v6.0 debuts Qwen3-VL + Shopify Product Catalog benchmark
40M products daily. Real production data. First Qwen model in MLPerf.
Submit by Feb 13, 2026 →
https://bit.ly/4k9F5YS
#MLPerf #VLM #Shopify #MLCommons

MLPerf Mobile - Apps on Google Play An AI benchmark for mobile devices

MLPerf Mobile app v5.0.4 is here!
#MLPerf Mobile release now supports the Samsung Exynos 2600 and Qualcomm's newest Snapdragon lineup, from the flagship 8 Elite Gen 5 to the mid-range 6 Gen 4. That means more comprehensive, apples-to-apples #AIperformance data across devices.
play.google.com/store/apps/d...


📰 AMD Announces First MLPerf 5.1 Training Results for Instinct MI350 Series GPUs

👉 Read the full article here: ahmandonk.com/2025/11/19/amd-instinct-...

#ai-training #amd #gpu-computing #hardware #instinct-mi350 #mlperf #rocm

Wiwynn Achieves Record-Breaking MLPerf® Training Results with Llama 2 70B at YTL Malaysia Wiwynn sets a new standard in AI training performance with record-breaking MLPerf® Training v5.1 results at YTL Malaysia, enhancing infrastructure efficiency.

Wiwynn Achieves Record-Breaking MLPerf® Training Results with Llama 2 70B at YTL Malaysia #Malaysia #Johor #Wiwynn #MLPerf #YTL_AI_Cloud


Just saw NVIDIA’s Blackwell crush every MLPerf Training v5.1 benchmark using FP4 precision – even outpacing FP16 on Llama 3.1’s 405‑billion‑parameter model. The future of GPU AI is here. Dive in for the full breakdown! #NVIDIABlackwell #MLPerf #FP4

🔗 aidailypost.com/news/nvidia-...
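FP4 here is the 4-bit E2M1 floating-point format, whose only representable magnitudes are 0, 0.5, 1, 1.5, 2, 3, 4 and 6. A round-to-nearest quantizer is a one-liner, sketched below to show how coarse the format is (real NVFP4-style recipes also apply per-block scaling factors, which this illustration omits):

```python
# All values representable in FP4 E2M1 (sign bit, 2 exponent bits, 1 mantissa bit).
FP4_E2M1 = sorted({s * m for s in (1.0, -1.0)
                   for m in (0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0)})

def quantize_fp4(x):
    """Round x to the nearest representable FP4 E2M1 value; inputs
    beyond +/-6 clamp to the format's maximum magnitude."""
    return min(FP4_E2M1, key=lambda v: abs(v - x))
```

With only 15 distinct values, keeping training stable at this precision is the hard part, which is why the per-tensor or per-block scaling the sketch leaves out does most of the work in practice.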


Large language models (#LLMs) are growing extremely quickly, and the #hardware systems they require can’t keep up with the pace. Each time #MLPerf introduces a new benchmark, training time increases. The data tells the story. spectrum.ieee.org/mlperf-trends


📰 NVIDIA Dominates MLPerf Training v5.1, Winning Every Benchmark

👉 Read the full article here: ahmandonk.com/2025/11/13/nvidia-menang...

#ai #training #blackwell #gpu #llama #mlperf #nvidia

NVIDIA Blackwell Ultra Secures Win Across All Seven MLPerf AI Training Benchmarks, GB300 NVL72 Sets Record 10 Minutes Training Time For Llama 405B

By securing wins across all MLPerf training tests, NVIDIA boasts its Blackwell Ultra-based GB300 NVL72 platform, which delivers leading AI training performance.

NVIDIA Showcases its GB300 NVL72 "Blackwell Ultra" Results in MLPerf AI Training Tests; Up To Five Times the Performance vs Hopper-Based Platform

When it comes to delivering leading AI performance, NVIDIA GPUs have always been at the forefront. The Blackwell-based data center GPUs have already showcased their incredible potential several times previously, and the latest GB300 NVL72 platform is no exception. Today, NVIDIA has proudly announced that its Blackwell Ultra-powered AI GPUs have secured the first position in […]
New Results! MLPerf Training v5.1

MLPerf Training v5.1 results are live!
Record participation: 20 organizations submitted 65 unique systems featuring 12 different accelerators. Multi-node submissions increased 86% over last year, showing the industry's focus on scale.
Results: mlcommons.org/2025/11/trai...
#MLPerf
1/3

AI Model Growth Outpaces Hardware Improvements

Since 2018, the consortium MLCommons has been running a sort of Olympics for AI training. The competition, called MLPerf, consists of a set of tasks for training specific AI models, on predefined datasets, to a certain accuracy. Essentially, these tasks, called benchmarks, test how well a hardware and low-level software configuration is set up to train a particular AI model. Twice a year, companies put together their submissions—usually, clusters of CPUs and GPUs and software optimized for them—and compete to see whose submission can train the models fastest.

There is no question that since MLPerf’s inception, the cutting-edge hardware for AI training has improved dramatically. Over the years, Nvidia has released four new generations of GPUs that have since become the industry standard (the latest, Nvidia’s Blackwell GPU, is not yet standard but growing in popularity). The companies competing in MLPerf have also been using larger clusters of GPUs to tackle the training tasks. However, the MLPerf benchmarks have also gotten tougher. And this increased rigor is by design—the benchmarks are trying to keep pace with the industry, says David Kanter, head of MLPerf. “The benchmarks are meant to be representative,” he says.

Intriguingly, the data show that the large language models and their precursors have been increasing in size faster than the hardware has kept up. So each time a new benchmark is introduced, the fastest training time gets longer. Then, hardware improvements gradually bring the execution time down, only to get thwarted again by the next benchmark. Then the cycle repeats itself.

📰 Solidigm Opens AI Central Lab with 192 SSDs Totaling 23.6 Petabytes in 16U

👉 Read the full article here: ahmandonk.com/2025/10/11/solidigm-ai-c...

#ai #d5-p5336 #d7-ps1010 #data-center #metrum-ai #mlperf #data-storage #solidigm #ssd

A New TinyML Streaming Benchmark for MLPerf Tiny v1.3 - MLCommons

TinyML benchmarks finally address real-world deployment with MLCommons' new streaming benchmark in MLPerf Tiny v1.3. It tests 20 minutes of continuous wake-word detection while measuring power and duty cycle.
Technical deep dive: mlcommons.org/2025/09/mlpe... #MLPerf #TinyML #EdgeAI
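Duty cycle matters because an always-on wake-word detector spends most of its life idle, so average power over the streaming run is just the duty-cycle-weighted mix of active and idle draw. A back-of-the-envelope sketch (the milliwatt figures below are hypothetical, not MLPerf Tiny reference numbers):

```python
def average_power_mw(active_mw, idle_mw, duty_cycle):
    """Mean power draw when the device is active for duty_cycle
    fraction of the time and idle otherwise."""
    return active_mw * duty_cycle + idle_mw * (1.0 - duty_cycle)

def energy_mj(active_mw, idle_mw, duty_cycle, seconds):
    """Total energy in millijoules over a run of the given length
    (milliwatts x seconds = millijoules)."""
    return average_power_mw(active_mw, idle_mw, duty_cycle) * seconds

# Hypothetical device: 12 mW during inference, 0.4 mW idle,
# active 5% of the time over a 20-minute (1200 s) streaming run.
```

With those made-up numbers the run averages under 1 mW, which is why a benchmark that measured only active inference power would miss most of what determines battery life.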

MLPerf Introduces Largest and Smallest LLM Benchmarks Nvidia's Blackwell Ultra chip is setting new standards in AI performance. How does it achieve nearly 50% performance gain?

This year's #MLPerf introduced three new benchmark tests (its largest yet, its smallest yet, and a new voice-to-text model), and #Nvidia's Blackwell Ultra topped the charts on the two largest benchmarks.

The Summer of MLPerf Congratulations to MLPerf Inference v5.1 for a new submission record! MLPerf Inference is the fourth benchmark release in under two months. Progress in AI is rapid, and the organization is thrilled…

BREAKING TODAY! The Summer of MLPerf -- radicaldatascience.wordpress.com/2025/09/11/t...

#AI #LLM #GenAI #MachineLearning @mlcommons.org #MLPerf


Machine Learning Tests Keep Getting Bigger The machine learning field is moving fast, and the yardsticks used to measure progress in it are having to race to keep up. A case in point, MLPerf, the bi-a...

#MLPerf #AI #Nvidia #AMD #Intel
