Here's my attempt at visualizing the training pipeline for DeepSeek-R1(-Zero) and the distillation to smaller models.
Note that they retrain DeepSeek-V3-Base on the new 800k curated samples, instead of continuing to fine-tune the checkpoint from the first round of cold-start SFT + RL.
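To make that design choice concrete, here's a minimal sketch of the pipeline shape (all function and variable names are my own placeholders, not DeepSeek's actual code):

```python
def sft(base: str, data: str) -> str:
    """Stand-in for a supervised fine-tuning stage; returns a label for the resulting model."""
    return f"sft({base}, {data})"

def rl(model: str) -> str:
    """Stand-in for a reinforcement learning stage."""
    return f"rl({model})"

# Stage 1: cold-start SFT + RL, starting from DeepSeek-V3-Base.
stage1 = rl(sft("DeepSeek-V3-Base", "cold_start_data"))

# Stage 2: the second round restarts from DeepSeek-V3-Base with the
# 800k curated samples -- it does NOT continue from the stage-1 checkpoint.
stage2 = rl(sft("DeepSeek-V3-Base", "curated_800k"))

print(stage1)
print(stage2)
```

The point of the sketch: `stage2` takes `"DeepSeek-V3-Base"` as its starting model, not `stage1`.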
Posts by Harris Chan
It was a pleasure working on this project during my first year at Google DeepMind with our amazing collaborators led by @anianruoss.bsky.social, @pardofab.bsky.social, @bonniesjli.bsky.social, @vladmnih.bsky.social, and Tim Genewein!
If you can imagine it, you can play it in Genie 2!
Our foundation world model is capable of generating interactive worlds controllable with keyboard/mouse actions, starting from a single prompt image.
So proud to have been part of this work led by @jparkerholder.bsky.social and @rockt.ai!
LMs see, can LMs do?
LMAct benchmarks current SOTA foundation models' ability to act in text/visual environments across many domains, issuing text as low-level actions and learning from in-context (multimodal) expert demonstrations. We're excited to see how this benchmark drives further progress!