Very excited to share our work on SIMA, a general embodied agent for 3D worlds.
Even more excited to share our work on self-improvement, where our Gemini-based SIMA 2 agent self-improves to human-level performance (no humans needed) using a Gemini utility model and a Gemini task setter! More info ⏬
Posts by Kaustubh Sridhar
REGENT will be presented as an Oral at ICLR 2025 in Singapore 🇸🇬, an honor given to the top 1.8% of 11,672 submissions! More details at our website: bit.ly/regent-research
Is scaling current agent architectures the most effective way to build generalist agents that can rapidly adapt?
Introducing 👑REGENT👑, a generalist agent that can generalize to unseen robotics tasks and games via retrieval-augmentation and in-context learning.
Bsky doesn’t want you to see the awesome gifs! Find them on our website: bit.ly/regent-research
This whole project would not have been possible without Souradeep Dutta, Dinesh Jayaraman, and Insup Lee.
We have many more results, ablations, code, dataset, model, and the paper at our website: bit.ly/regent-research
The arxiv link: arxiv.org/abs/2412.04759
REGENT is far from perfect.
It cannot generalize to new embodiments (unseen MuJoCo envs) or long-horizon envs (like Space Invaders & Stargunner). It also cannot generalize to completely new suites (i.e., it requires similarities between the pre-training and unseen envs).
A few failed rollouts:
Here is a qualitative visualization of deploying REGENT in the unseen atari-pong environment.
While REGENT’s design choices are aimed at generalization, its gains are not limited to unseen environments: it even performs better than current generalist agents when deployed within the pre-training environments.
In the four unseen ProcGen environments, REGENT also outperforms the only other generalist agent, MTT, that can generalize to unseen environments via in-context learning. REGENT does so with an order of magnitude less pretraining data and a third of the parameters.
REGENT also outperforms the ‘All Data’ variants of JAT/Gato which were pre-trained on 5-10x the amount of data.
For context, the Multi-Game DT uses 1M states to finetune on new Atari envs. REGENT generalizes via RAG from ~10k states. REGENT Finetuned further improves over REGENT.
In the unseen MetaWorld & Atari envs in the Gato setting, REGENT and R&P outperform SOTA generalist agents like JAT/Gato (the open-source reproduction of Gato). REGENT outperforms JAT/Gato even after JAT/Gato is finetuned on data from the unseen envs.
We also evaluate on unseen levels and unseen environments in the ProcGen setting.
We evaluate REGENT on unseen robotics and game environments in the Gato setting.
REGENT has a few key ingredients, including an interpolation between R&P and the transformer. This allows the transformer to more readily generalize to unseen envs, since it is given the easier task of predicting the residual to the R&P action rather than the complete action.
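As a loose sketch of that interpolation for discrete actions (the mixing weight lam, the function name, and the one-hot mixing scheme are my illustrative assumptions, not the paper's exact formulation):

```python
import numpy as np

def interpolated_policy(rnp_action, transformer_logits, lam=0.5):
    """Mix a one-hot R&P action distribution with the transformer's
    predicted distribution, so the transformer only corrects R&P."""
    num_actions = transformer_logits.shape[-1]
    # One-hot distribution that always plays the retrieved R&P action
    rnp_dist = np.eye(num_actions)[rnp_action]
    # Softmax over the transformer's logits (shifted for stability)
    exp = np.exp(transformer_logits - transformer_logits.max())
    tf_dist = exp / exp.sum()
    # Convex combination of the two distributions
    return lam * rnp_dist + (1 - lam) * tf_dist
```

When the transformer is unsure (near-uniform logits), the R&P component dominates, which is one way to read the "predicting the residual" intuition.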
R&P simply picks the retrieved state s′ nearest to the query state s_t and plays the corresponding action a′.
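A minimal sketch of that 1-nearest-neighbor rule, assuming flat state vectors and Euclidean distance (all names here are illustrative):

```python
import numpy as np

def retrieve_and_play(query_state, demo_states, demo_actions):
    """R&P baseline: play the action paired with the nearest demo state."""
    # Euclidean distance from the query state to every retrievable state
    dists = np.linalg.norm(demo_states - query_state, axis=1)
    # Return the action recorded at the closest demo state
    return demo_actions[int(np.argmin(dists))]
```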
REGENT retrieves the 19 closest states, places the corresponding (state, reward, action) tuples into the context along with the query (s_t, r_{t-1}), and acts via in-context learning in unseen envs.
Inspired by RAG and the success of a simple retrieval-based 1-nearest neighbor baseline that we call Retrieve-and-Play (R&P),
REGENT pretrains a transformer policy whose inputs are not just the query state s_t and previous reward r_{t-1}, but also retrieved tuples of (state, previous reward, action).
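A rough sketch of how such a retrieval-augmented input could be assembled (the value of k, the ordering, and every name here are assumptions for illustration):

```python
import numpy as np

def build_context(query_state, prev_reward,
                  demo_states, demo_rewards, demo_actions, k=19):
    """Stack the (state, prev-reward, action) tuples of the k closest
    demo states ahead of the query, forming the policy's input."""
    dists = np.linalg.norm(demo_states - query_state, axis=1)
    idx = np.argsort(dists)[:k]  # indices of the k nearest demo states
    retrieved = [(demo_states[i], demo_rewards[i], demo_actions[i])
                 for i in idx]
    return retrieved, (query_state, prev_reward)
```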
REGENT is pretrained on data from many training envs (left). REGENT is then deployed on the held-out envs (right) with a few demos from which it can retrieve states, rewards, and actions to use for in-context learning. **It never finetunes on the demos in the held-out envs.**
Bluesky doesn't want you to see these GIFs! :) Please see the rollouts in unseen environments on our website: bit.ly/regent-research
We are also presenting REGENT in the Adaptive Foundation Models (this afternoon, Saturday Dec 14) and Open World Agents (tomorrow afternoon, Sunday Dec 15) workshops at NeurIPS. Please come by if you'd like to hear more!
Cool demo of Gemini 2.0 Flash's new streaming API, by @simonwillison.net.
www.youtube.com/watch?v=mpgW...
Vancouver is so beautiful!
What would deep thought cost for the ultimate question? bsky.app/profile/nato...
What's missing to get to deep thought? :D
In The Hitchhiker's Guide to the Galaxy, when they built a huge computer (Deep Thought) to answer the ultimate question (of Life, the Universe and Everything) and it took 7.5 million years, they clearly did both train-time and test-time scaling.
Can no longer tell if LLMs are sounding like humans or some humans have always sounded like LLMs
I'd like to introduce what I've been working on at @hellorobot.bsky.social: Stretch AI, a set of open-source tools for language-guided autonomy, exploration, navigation, and learning from demonstration.
Check it out: github.com/hello-robot/...
Thread ->
I'm still waiting for the "react/respond to the author rebuttal" from a couple of reviewers :_(