
Posts by Harris Chan


Here's my attempt at visualizing the training pipeline for DeepSeek-R1(-Zero) and the distillation to smaller models.

Note that they retrain DeepSeek-V3-Base on the new 800K curated samples, rather than continuing to fine-tune the checkpoint from the first round of cold-start SFT + RL.
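That restart can be sketched as a pipeline. The stage names follow the post; `Checkpoint` and `train` are hypothetical placeholders, not anything from the actual DeepSeek codebase:

```python
from dataclasses import dataclass, field

@dataclass
class Checkpoint:
    """Hypothetical stand-in for model weights; records the stages applied."""
    name: str
    stages: list = field(default_factory=list)

def train(ckpt: Checkpoint, stage: str) -> Checkpoint:
    # Placeholder: returns a new checkpoint with one more stage applied.
    return Checkpoint(ckpt.name, ckpt.stages + [stage])

base = Checkpoint("DeepSeek-V3-Base")

# Round 1: cold-start SFT, then reasoning-oriented RL.
round1 = train(train(base, "cold-start SFT"), "reasoning RL")

# Round 2: the 800K curated samples come out of round 1, but training
# restarts from the *base* model, not from the round-1 checkpoint.
final = train(train(base, "SFT on 800K curated samples"), "RL")

print(final.stages)  # round-1 stages are absent: ['SFT on 800K curated samples', 'RL']
```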

1 year ago

It was a pleasure working on this project during my first year at Google DeepMind with our amazing collaborators led by @anianruoss.bsky.social, @pardofab.bsky.social, @bonniesjli.bsky.social, @vladmnih.bsky.social, and Tim Genewein!

1 year ago

If you can imagine it, you can play it in Genie 2 🧞

Our foundation world model is capable of generating interactive worlds controllable with keyboard/mouse actions, starting from a single prompt image.
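The interaction pattern described (a single prompt image, then keyboard/mouse actions driving frame generation) can be sketched as a loop. `WorldModel`, `init`, and `step` below are hypothetical toy stand-ins, not Genie 2's real interface:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Frame:
    """Hypothetical frame, tagged with the prompt and actions that produced it."""
    history: tuple

class WorldModel:
    """Toy stand-in: a real world model would generate pixels autoregressively."""
    def init(self, prompt_image: str) -> Frame:
        return Frame(history=(prompt_image,))

    def step(self, frame: Frame, action: str) -> Frame:
        # Condition the next frame on the full prompt/action history.
        return Frame(history=frame.history + (action,))

model = WorldModel()
frame = model.init("prompt.png")
for action in ["W", "W", "mouse_left"]:
    frame = model.step(frame, action)

print(frame.history)  # ('prompt.png', 'W', 'W', 'mouse_left')
```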

So proud to have been part of this work led by @jparkerholder.bsky.social and @rockt.ai 🙏

1 year ago

LMs see, can LMs do?

LMAct benchmarks the ability of current SOTA foundation models to act in text/visual environments across many domains, emitting text as low-level actions and conditioning on in-context (multimodal) expert demonstrations. We're excited to see how this benchmark drives further progress!
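The evaluation setup described (expert demonstrations in context, text as the action channel) can be sketched as a single decision step. The prompt format and `policy` below are toy assumptions, not the actual LMAct harness:

```python
def build_prompt(demos, observation):
    """Concatenate in-context expert demonstrations with the current observation."""
    lines = [f"obs: {obs} -> act: {act}" for obs, act in demos]
    lines.append(f"obs: {observation} -> act:")
    return "\n".join(lines)

def policy(prompt: str) -> str:
    """Toy stand-in for a foundation model: imitates the last expert action."""
    # A real model would condition on the full multimodal prompt.
    last_demo = prompt.splitlines()[-2]
    return last_demo.split("-> act: ")[1]

demos = [("door closed", "open door"), ("door open", "walk through")]
action = policy(build_prompt(demos, "door open"))
print(action)  # walk through
```

The benchmark then scores how well the emitted text actions track the expert's across episodes and domains.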

1 year ago