It's a story about why QKV is magic, my love for the loop, and why HRM might be the blueprint for the next generation of AI reasoning.
My post, written with the help of an LLM (the irony!), is here. I poured my heart into this one:
medium.com/@gedanken.th...
#AI #DeepLearning #RNN #Transformer #HRM
The Hierarchical Reasoning Model (HRM) isn't just another model. It's a deep synthesis. It marries the iterative soul of an RNN (minus the BPTT nightmare) with the raw power of modern Attention.
I wrote a deep dive on why this is a full-circle moment for me, going back to the RNN finetuning days.
What makes HRM truly special is its ability to "think fast and slow." Its ACT module isn't just a stop signal; it's a cognitive engine that learns to allocate effort.
It's the closest we've come yet to embodying Prof. Kahneman's vision of a System 1/2 mind in code.
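A minimal sketch of that effort-allocation idea (illustrative only; the names and threshold here are hypothetical, not the HRM reference code): a small head reads the reasoning state and decides whether to stop or keep thinking.

```python
import torch
import torch.nn as nn

class HaltingHead(nn.Module):
    """Toy ACT-style halting head (hypothetical): maps the slow module's
    state to a probability of stopping."""
    def __init__(self, hidden_dim: int):
        super().__init__()
        self.proj = nn.Linear(hidden_dim, 1)

    def forward(self, h_state: torch.Tensor) -> torch.Tensor:
        return torch.sigmoid(self.proj(h_state))  # p(halt) in [0, 1]

def think_with_act(segment, halting_head, state, max_segments=8, threshold=0.5):
    """Run reasoning segments until the learned signal says the answer is
    good enough: easy inputs halt early, hard ones get more compute."""
    for _ in range(max_segments):
        state = segment(state)  # one more round of thinking
        if halting_head(state).mean() > threshold:
            break
    return state
```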
But how does it fix mistakes buried deep in the past? By not letting them stay in the past.
Each new "Thinking Session" (the M-loop) starts with the flawed result of the last one. It forces the model to confront its own errors until the logic is perfect.
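Roughly, the outer loop looks like this (a sketch of the idea under simplifying assumptions; `reasoning_segment` is a placeholder, and the detach assumes torch tensors, so gradients never flow back through old sessions):

```python
def iterative_refinement(reasoning_segment, x, z_init, num_sessions=4):
    """Sketch of the outer 'thinking session' loop: each session starts from
    the previous session's result, so earlier mistakes are carried forward
    and must be corrected rather than forgotten."""
    z = z_init
    for _ in range(num_sessions):
        z = z.detach()               # cut the graph: no BPTT through old sessions
        z = reasoning_segment(z, x)  # refine the previous (possibly flawed) answer
    return z
```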
So how does HRM work? Imagine a tiny, 2-person company.
🧠 A strategic CEO (H-module) who thinks slow, sees the big picture, and sets the overall strategy.
⚡️ A diligent Worker (L-module) who thinks fast, executing the details of the CEO's plan.
This separation allows for truly deep, iterative thought.
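In code, the division of labor looks roughly like this (a heavily simplified sketch; `HModule`/`LModule` are placeholder recurrent blocks, not the published architecture):

```python
import torch.nn as nn

class HRMSketch(nn.Module):
    """Two-timescale loop, simplified: a slow H-module ('CEO') updates once
    per cycle, a fast L-module ('Worker') takes several steps per cycle."""
    def __init__(self, h_module: nn.Module, l_module: nn.Module,
                 n_cycles: int = 2, l_steps: int = 4):
        super().__init__()
        self.h_module, self.l_module = h_module, l_module
        self.n_cycles, self.l_steps = n_cycles, l_steps

    def forward(self, z_h, z_l, x):
        for _ in range(self.n_cycles):            # slow loop: strategy
            for _ in range(self.l_steps):         # fast loop: execution
                z_l = self.l_module(z_l, z_h, x)  # Worker follows the current plan
            z_h = self.h_module(z_h, z_l)         # CEO revises the plan
        return z_h, z_l
```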
Then, last month, a paper dropped that changes everything.
This is the architecture I've been waiting for since 2018. A thread on HRM. 🧵
For years, I died a little inside every time I taught the Transformer model, grudgingly accepting that the elegant loop of the RNN was dead.
You're supposed to what? Swallow the toothpaste?
🔥🔥
MCTS rollout pruning, a Python interpreter as verifier, and iterative self-improvement of intermediate steps during each round of training (a toy sketch of the verifier idea is below).
Brilliant stuff this 💪
rStar-Math is the kind of paper I wish to see more of!
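To make the interpreter-as-verifier idea concrete, here is a toy illustration (heavily simplified and with made-up helpers; the actual rStar-Math pipeline runs MCTS over reasoning steps):

```python
def verify_step(code: str, expected: dict) -> bool:
    """Use the Python interpreter as a verifier: a candidate step survives
    only if it runs cleanly and produces the expected values."""
    env: dict = {}
    try:
        exec(code, {}, env)
    except Exception:
        return False  # prune rollouts whose steps don't even execute
    return all(env.get(k) == v for k, v in expected.items())

# Toy example: two candidate intermediate steps for "compute 3 * (4 + 5)"
candidates = [
    "partial = 4 + 5\nresult = 3 * partial",
    "partial = 4 * 5\nresult = 3 + partial",
]
verified = [c for c in candidates if verify_step(c, {"result": 27})]
# Only the first candidate survives; verified rollouts then feed the next
# round of self-improvement training.
```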
For a while we've been working on an ambitious problem: The National Archive of Mexico #AGN holds 58 linear km of documents. Only a drop of this “ocean” has been studied due to many challenges. But great news: we are now unlocking this information! A thread 🧵 (1/8) #HTR #AI #CulturalHeritage
Computer Vision: Fact & Fiction is now available on YouTube 👇🏼 I made a playlist for it with the seven chapters. Enjoy this time capsule from two decades ago!
I like how the new Gemini 2.0 thinking model insists like a child... lol
Taking a time machine within a time machine... stealing someone's consciousness...the ideas were next level!
The guy is a beast.
It's a shame Shane Carruth couldn't carry on making more amazing films.
Yooo... a Primer fan?
There are so many incredible moments in this film.
Wow... have you seen 'Upstream Color' as well?
Wow!
I should read this!
Ah...
What does "fuch" mean?
Diffusion transformer (DiT) ftw!!
6. V is not rotated. Only Q and K are rotated relative to each other. Farther tokens now have a larger angle between them.
7. The encoding signal is not going to die out. It is preserved because the rotation is applied as part of the softmax dot-product attention itself.
8. What a gorgeous idea...
4. RoPE takes this operation from the beginning of the input to inside the attention operation itself.
5. There are 2 benefits: the semantic meaning of the token is not corrupted, since we only rotate the vector and preserve its magnitude, and the relative position is encoded directly in the angle.
TL;DR:
1. We need a way to encode token positions when feeding them as input into the transformer
2. We could just concatenate 1, 2, 3, etc., but this doesn't scale to variable lengths.
3. Noam Shazeer showed how sin and cos waves can produce a beautiful pattern that encodes relative positions between tokens (sketch below).
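A quick sketch of that sinusoidal pattern (the standard Vaswani-style encoding, written from memory rather than taken from any particular post; assumes an even model dimension):

```python
import numpy as np

def sinusoidal_positions(seq_len: int, d_model: int) -> np.ndarray:
    """Classic sin/cos positional encoding: each dimension pair oscillates at
    a different frequency, so relative offsets appear as consistent phase shifts."""
    pos = np.arange(seq_len)[:, None]            # (seq_len, 1)
    i = np.arange(d_model // 2)[None, :]         # (1, d_model/2)
    angles = pos / (10000 ** (2 * i / d_model))
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles)                 # even dims: sine
    pe[:, 1::2] = np.cos(angles)                 # odd dims: cosine
    return pe
```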
RoPE has been the one 💯 genuine upgrade to the vanilla Vaswani transformer.
This beautiful blogpost by Chris Fleetwood explains the significance, and how rotating Q & K preserves meaning (magnitude) while encoding relative position (angle shift) 🔥🔥
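And the rotation itself, in minimal form (a common pairwise RoPE formulation, assumed here for illustration rather than taken from the linked post): Q and K are rotated by a position-dependent angle, V is left alone, and the dot product then depends only on the relative offset.

```python
import numpy as np

def rope_rotate(x: np.ndarray, pos: int, base: float = 10000.0) -> np.ndarray:
    """Rotate consecutive dimension pairs of a single query/key vector by a
    position-dependent angle. The norm (the token's 'magnitude') is unchanged."""
    d = x.shape[-1]
    theta = pos / (base ** (np.arange(d // 2) * 2 / d))  # per-pair angles
    cos, sin = np.cos(theta), np.sin(theta)
    x1, x2 = x[0::2], x[1::2]
    out = np.empty_like(x)
    out[0::2] = x1 * cos - x2 * sin
    out[1::2] = x1 * sin + x2 * cos
    return out

# The attention score <rope_rotate(q, m), rope_rotate(k, n)> depends only on
# the offset (m - n), so relative position lives inside the softmax dot product.
```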
Why does ChatGPT refuse to say "David Mayer"?? 🤔
I have tried a bunch of ways and it refuses to!!
🤔 Can you turn your vision-language model from a great zero-shot model into a great-at-any-shot generalist?
Turns out you can, and here is how: arxiv.org/abs/2411.15099
Really excited to share this work on multimodal pretraining as my first Bluesky entry!
🧵 A short and hopefully informative thread:
SIGGRAPH'25 (form): 48 days.
RSS'25 (abs): 49 days.
SIGGRAPH'25 (paper-md5): 55 days.
RSS'25 (paper): 56 days.
ICML'25: 62 days.
RLC'25 (abs): 77 days.
RLC'25 (paper): 84 days.
ICCV'25: 97 days.
We should give this place a serious try...
It may work.