Last year, we introduced FlexOlmo, a novel way to train parts of a model independently, then combine them later.
BAR builds on that idea for a harder problem: how to keep improving a model without having to retrain each time. 🧵
You can now train, adapt, and eval web agents on your own tasks.
We're releasing the full MolmoWeb codebase—the training code, eval harness, annotation tooling, synthetic data pipeline, & client-side code for our demo. 🧵
Today we're releasing WildDet3D—an open model for monocular 3D object detection in the wild.
It works with text, clicks, or 2D boxes, and on zero-shot evals it nearly doubles the best prior scores. 🧵
MolmoBot, our open robotic manipulation suite trained entirely in simulation, now has code, training data, a data generation pipeline, & evals all available.
This puts our robotics models within reach of any research lab—no extensive real-world data collection required. 🧵
Today we're releasing MolmoWeb, an open source agent that can navigate + complete tasks in a browser on your behalf.
Built on Molmo 2 in 4B & 8B sizes, it sets a new open-weight SOTA across four major web-agent benchmarks & even surpasses agents built on proprietary models. 🧵
Introducing Olmo Hybrid, a 7B fully open model combining transformer and linear RNN layers. It decisively outperforms Olmo 3 7B across evals, w/ new theory & scaling experiments explaining why. 🧵
In just a few weeks, researchers used AutoDiscovery to generate 20K+ hypotheses across oncology, climate science, marine ecology, entomology, cybersecurity, music cognition, social sciences, & more.
Now we're extending access for three more months—and refreshing credits. 👇
We analyzed 250K+ queries & 430K+ clickstream interactions from Asta, our AI-powered research assistant—and today we're releasing the full dataset. How do researchers actually use AI science tools? Here's what we found. 🧵
Can AI predict what scientists will do next—not just one piece, but the whole research process? PreScience is our new model eval for forecasting how science unfolds end-to-end, from how research teams form to a paper's eventual impact. Built with UChicago, supported by NSF.
We've released a Chrome extension for Asta—a faster way to go from finding a paper to asking questions about it while you read. 🧵
Data mixing – determining how much web text, code, math, etc., you need for LM development – is a first-order lever on model quality. Introducing Olmix: a framework for configuring mixing methods at the start of dev & efficiently updating as data changes throughout. 🧵
Knowing which questions to ask is often the hardest part of science. Today we're releasing AutoDiscovery in AstaLabs, an AI system that starts with your data and generates its own hypotheses. 🧪
Introducing MolmoSpaces, a large-scale, fully open platform + benchmark for embodied AI research. 🤖
230k+ indoor scenes, 130k+ object models, & 42M annotated robotic grasps—all in one ecosystem.
LLMs often generate step-by-step instructions, from real-world tasks (how do I file taxes?) to plans for AI agents. Improving this is hard: outputs can sound fluent for steps that don't work, and current datasets cover few domains.
How2Everything evaluates & trains for this at scale. 🧵
Since launching Open Coding Agents, it's been exciting to see how quickly the community has adopted them. Today we're releasing SERA-14B – a new 14B-parameter coding model – plus a major refresh of our open training datasets. 🧵
Introducing Theorizer: Turning thousands of papers into scientific laws 📚➡️📜
Most automated discovery systems focus on experimentation. Theorizer tackles the other half of science: theory building—compressing scattered findings into structured, testable claims. 🧵
Here's just one of the cool apps you can vibe-code with SERA, our new agentic coding model! I was lucky enough to get my hands on it early and it's quite capable via Claude Code. Give it a go today!
Introducing Ai2 Open Coding Agents—starting with SERA, our first-ever coding models. Fast, accessible agents (8B–32B) that adapt to any repo, including private codebases. Train a powerful specialized agent for as little as ~$400, & it works with Claude Code out of the box. 🧵
Introducing HiRO-ACE: an AI framework that makes highly detailed climate simulations dramatically more accessible. It generates decades of high-resolution precipitation data for any region in a day on a single GPU—no supercomputing cluster required. 🧵
Last year Molmo set SOTA on image benchmarks + pioneered image pointing. Millions of downloads later, Molmo 2 brings Molmo’s grounded multimodal capabilities to video 🎥—and leads many open models on challenging industry video benchmarks. 🧵
Introducing Bolmo, a new family of byte-level language models built by "byteifying" our open Olmo 3—and to our knowledge, the first fully open byte-level LM to match or surpass SOTA subword models across a wide range of tasks. 🧵
Olmo 3.1 is here. We extended our strongest RL run and scaled our instruct recipe to 32B—releasing Olmo 3.1 Think 32B & Olmo 3.1 Instruct 32B, our most capable models yet. 🧵
Update: DataVoyager, which we launched in Preview early this fall, is now available in Asta. 🎉
You can upload real datasets, ask complex research questions in natural language, & get back reproducible answers + visualizations. 🔍📊
Olmo 3 is now available through @hf.co Inference Providers, thanks to Public AI! 🎉
This means you can run our fully open 7B and 32B models — including Think and Instruct variants — via serverless API with no infrastructure to manage.
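A minimal sketch of what a serverless call might look like. This assumes the OpenAI-compatible router endpoint used by Hugging Face Inference Providers; the model ID (`allenai/Olmo-3-7B-Instruct`) is an assumption here, so check the Hub for the exact repo name:

```python
import json
import os
import urllib.request

# Assumed model ID -- confirm the exact repo name on the Hugging Face Hub.
MODEL = "allenai/Olmo-3-7B-Instruct"

# OpenAI-style chat payload accepted by the Inference Providers router.
payload = {
    "model": MODEL,
    "messages": [{"role": "user", "content": "Explain tool use in one sentence."}],
    "max_tokens": 128,
}

def build_request(token: str) -> urllib.request.Request:
    """Build the HTTPS request without sending it."""
    return urllib.request.Request(
        "https://router.huggingface.co/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
    )

# Only send the request when a token is actually available.
if __name__ == "__main__" and os.environ.get("HF_TOKEN"):
    req = build_request(os.environ["HF_TOKEN"])
    with urllib.request.urlopen(req) as resp:
        print(json.load(resp)["choices"][0]["message"]["content"])
```

Because the endpoint speaks the OpenAI chat-completions dialect, the same payload works with any OpenAI-compatible client library as well.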
Our Olmo 3 models are now available via API on @openrouter.bsky.social. Try Olmo 3 Instruct (7B) for chat & tool use, and our reasoning models Olmo 3 Think (7B & 32B) for more complex problems.
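OpenRouter also exposes an OpenAI-compatible chat endpoint. A minimal sketch, assuming a hypothetical model slug (`allenai/olmo-3-32b-think`) that should be confirmed on openrouter.ai:

```python
import json
import os
import urllib.request

# Assumed OpenRouter model slug -- confirm the exact name on openrouter.ai.
MODEL = "allenai/olmo-3-32b-think"
ENDPOINT = "https://openrouter.ai/api/v1/chat/completions"

def build_payload(messages: list[dict]) -> dict:
    """Assemble an OpenAI-style chat-completions payload."""
    return {"model": MODEL, "messages": messages}

def chat(messages: list[dict], api_key: str) -> str:
    """Send the request to OpenRouter and return the assistant's reply text."""
    req = urllib.request.Request(
        ENDPOINT,
        data=json.dumps(build_payload(messages)).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

# Only hit the network when a key is configured.
if __name__ == "__main__" and os.environ.get("OPENROUTER_API_KEY"):
    reply = chat([{"role": "user", "content": "Summarize RLHF in two sentences."}],
                 os.environ["OPENROUTER_API_KEY"])
    print(reply)
```

Since the wire format matches OpenAI's, existing client SDKs can be pointed at OpenRouter by swapping the base URL.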
Announcing Olmo 3, a leading fully open LM suite built for reasoning, chat, & tool use, and an open model flow—not just the final weights, but the entire training journey.
Best fully open 32B reasoning model & best 32B base model. 🧵
Today we’re releasing Deep Research Tulu (DR Tulu)—the first fully open, end-to-end recipe for long-form deep research, plus an 8B agent you can use right away. Train agents that plan, search, synthesize, & cite across sources, making expert research more accessible. 🧭📚
Introducing OlmoEarth 🌍, state-of-the-art AI foundation models paired with ready-to-use open infrastructure to turn Earth data into clear, up-to-date insights within hours—not years.