
Posts by George E. Dahl

The Earth is *it* and people really need to accept that. Unfortunately, our most beloved science-fiction art, movies, and video games undermine that goal and probably contribute to the cavalier way we treat the Earth and each other.

2 months ago 18 4 1 0
GitHub - mlcommons/algorithmic-efficiency: MLCommons Algorithmic Efficiency is a benchmark and competition measuring neural network training speedups due to algorithmic improvements in both training algorithms and models.

We just released AlgoPerf v0.6! 🎉
✅ Rolling leaderboard
✅ Lower compute costs
✅ JAX jit migration
✅ Bug fixes & flexible API
Coming soon: More contemporary baselines + an LM workload…
github.com/mlcommons/al...

7 months ago 3 2 0 0
Research Engineer/Scientist, Training Algorithms Mountain View, California, US

My team is hiring a research scientist or engineer to work on methodological deep learning research! We study how to improve the deep learning "workflow" (developers.google.com/machine-lear...) with a special emphasis on training algorithms and recipes: job-boards.greenhouse.io/deepmind/job...

9 months ago 8 3 0 0
ICLR 2025: Accelerating Neural Network Training (AlgoPerf)
YouTube video by Tübingen Machine Learning

The explainer video: www.youtube.com/watch?v=_yX1...

1 year ago 7 2 0 0

We're all about acceleration! 😉
Watch @priya-kasimbeg.bsky.social & @fsschneider.bsky.social speedrun an explanation of the AlgoPerf benchmark, rules, and results all within a tight 5 minutes for our #ICLR2025 paper video on "Accelerating Neural Network Training". See you in Singapore!

1 year ago 5 4 1 0

Hi there! This account will post about the AlgoPerf benchmark and leaderboard updates for faster neural network training via better training algorithms. But let's start with what AlgoPerf is, what we have done so far, and how you can train neural nets ~30% faster.
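
The core idea of the benchmark described above can be illustrated with a toy sketch: score a training algorithm by the wall-clock time it needs to reach a fixed validation target, rather than by its loss at a fixed step budget. This is a simplified, hypothetical illustration; the function names and the toy "training loop" here are invented for clarity and are not the AlgoPerf API.

```python
import time

def time_to_target(train_step, eval_fn, target, max_steps=10_000):
    """Time-to-result sketch (hypothetical API, not AlgoPerf's):
    run the candidate training algorithm until the validation
    metric reaches `target`, and report elapsed wall-clock time."""
    start = time.perf_counter()
    for _ in range(max_steps):
        train_step()
        if eval_fn() >= target:
            return time.perf_counter() - start  # seconds to target
    return float("inf")  # target never reached within the budget

# Toy stand-in for a model: "accuracy" improves a bit every step.
state = {"acc": 0.0}
elapsed = time_to_target(
    train_step=lambda: state.__setitem__("acc", state["acc"] + 0.01),
    eval_fn=lambda: state["acc"],
    target=0.9,
)
print(elapsed < float("inf"))  # True: the target was reached
```

A faster training algorithm (one that raises the metric in fewer or cheaper steps) gets a smaller `elapsed`, which is the quantity a time-to-target leaderboard ranks on.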

1 year ago 6 3 1 1
Post image

Making LLMs run efficiently can feel scary, but scaling isn't magic; it's math! We wanted to demystify the "systems view" of LLMs and wrote a little textbook called "How To Scale Your Model" which we're releasing today. 1/n
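
In the "scaling is math" spirit of the post above, here is a back-of-the-envelope sketch of the kind of arithmetic a systems view relies on. The numbers and helper names below are generic illustrations (bf16 weights at 2 bytes/param, and the widely used ~6·N·D estimate for dense-transformer training FLOPs), not figures taken from the book.

```python
def param_memory_gb(n_params: float, bytes_per_param: int = 2) -> float:
    """Memory just to hold the weights (bf16 = 2 bytes per parameter)."""
    return n_params * bytes_per_param / 1e9

def train_flops(n_params: float, n_tokens: float) -> float:
    """Common ~6 * N * D rule of thumb for dense-transformer training FLOPs."""
    return 6 * n_params * n_tokens

# A 7B-parameter model: 14 GB of bf16 weights before any optimizer state,
# and roughly 8.4e22 FLOPs to train on 2 trillion tokens.
print(param_memory_gb(7e9))             # 14.0
print(f"{train_flops(7e9, 2e12):.1e}")  # 8.4e+22
```

Estimates like these are how you sanity-check whether a model even fits on a given accelerator before touching any code.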

1 year ago 94 29 3 8