
Posts by Arvind Nagaraj

The Loop is Back: Why HRM is the Most Exciting AI Architecture in Years
Years ago, I sat in Jeremy Howard’s FastAI class, right at the dawn of a new era. He was teaching us ULMFiT, a method he (& Sebastian…

It's a story about why QKV is magic, my love for the loop, and why HRM might be the blueprint for the next generation of AI reasoning.
My post, written with the help of an LLM (the irony!), is here. I poured my heart into this one:
medium.com/@gedanken.th...

#AI #DeepLearning #RNN #Transformer #HRM

8 months ago 1 1 0 0

The Hierarchical Reasoning Model (HRM) isn't just another model. It's a deep synthesis. It marries the iterative soul of an RNN (minus the BPTT nightmare) with the raw power of modern Attention.
I wrote a deep dive on why this is a full-circle moment for me, going back to the RNN finetuning days.

8 months ago 0 0 1 0

What makes HRM truly special is its ability to "think fast and slow." Its ACT module isn't just a stop signal; it's a cognitive engine that learns to allocate effort.
It's the closest we've come yet to embodying Prof. Kahneman's vision of a System 1/2 mind in code.

8 months ago 0 0 1 0
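That "learns to allocate effort" idea can be sketched as a toy halting loop. This is my own illustration, not HRM's actual ACT code: the list of halting probabilities stands in for the per-step outputs of a learned halting head.

```python
def act_sketch(halt_probs, threshold=0.99):
    """Toy Adaptive Computation Time: keep "pondering" until the accumulated
    halting probability crosses a threshold, so easy inputs stop early
    (System 1) and hard ones get to think longer (System 2).

    halt_probs is a stand-in for a learned halting head's per-step outputs.
    """
    total = 0.0
    for step, p in enumerate(halt_probs, start=1):
        total += p  # accumulate probability mass toward halting
        if total >= threshold:
            return step  # effort (number of steps) spent on this input
    return len(halt_probs)  # ran out of budget without halting

print(act_sketch([0.6, 0.5, 0.1]))            # easy input: halts after 2 steps
print(act_sketch([0.1, 0.2, 0.3, 0.4, 0.5]))  # harder input: halts after 4 steps
```

The point is only the control flow: the stopping decision is itself a learned, per-input quantity, not a fixed depth.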

But how does it fix mistakes buried deep in the past? By not letting them stay in the past.
Each new "Thinking Session" (the M-loop) starts with the flawed result of the last one. It forces the model to confront its own errors until the logic is perfect.

8 months ago 0 0 1 0
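A minimal sketch of that session-to-session hand-off. The `step_fn` here is a made-up stand-in (Heron's square-root iteration), chosen only because it shows the loop shape: each session starts from the previous, still-imperfect answer and refines it.

```python
def refine(answer, problem, step_fn, n_sessions=4):
    """Each "Thinking Session" starts from the previous (possibly flawed)
    answer instead of a blank slate, so errors get revisited and corrected."""
    for _ in range(n_sessions):
        answer = step_fn(answer, problem)  # confront and revise the last result
    return answer

# Hypothetical step_fn: nudge a numeric guess toward sqrt(problem).
better = refine(1.0, 2.0, lambda a, p: 0.5 * (a + p / a))
print(round(better, 6))  # converges to sqrt(2) ≈ 1.414214
```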

So how does HRM work? Imagine a tiny, 2-person company.
🧠 A strategic CEO (H-module) who thinks slow, sees the big picture, and sets the overall strategy.
⚡️ A diligent Worker (L-module) who thinks fast, executing the details of the CEO's plan.
This separation allows for truly deep, iterative thought.

8 months ago 0 0 1 0
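Here's a toy NumPy sketch of that two-timescale nesting. The weights, dimensions, and update rules are my own illustrative inventions; only the structure (a fast inner Worker loop inside a slow outer CEO loop) mirrors the idea.

```python
import numpy as np

def hrm_sketch(x, n_cycles=3, l_steps=4, d=8, seed=0):
    """Toy sketch of a two-timescale H/L loop (illustrative only).

    The fast L-module updates every step; the slow H-module updates
    once per cycle, after the L-module has done its detailed work.
    """
    rng = np.random.default_rng(seed)
    W_l = rng.standard_normal((d, d)) / np.sqrt(d)  # hypothetical L-module weights
    W_h = rng.standard_normal((d, d)) / np.sqrt(d)  # hypothetical H-module weights
    z_h = np.zeros(d)  # slow "CEO" state
    z_l = np.zeros(d)  # fast "Worker" state
    for _ in range(n_cycles):        # slow, strategic loop
        for _ in range(l_steps):     # fast, detailed loop
            z_l = np.tanh(W_l @ z_l + z_h + x)  # Worker executes under current strategy
        z_h = np.tanh(W_h @ z_h + z_l)          # CEO revises strategy from results
    return z_h

out = hrm_sketch(np.ones(8))
print(out.shape)  # (8,)
```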

Then, last month, a paper dropped that changes everything.
This is the architecture I've been waiting for since 2018. A thread on HRM. 🧡

8 months ago 0 0 1 0

For years, I died a little inside every time I taught the Transformer model, grudgingly accepting that the elegant loop of the RNN was dead.

8 months ago 1 0 1 0

You're supposed to what? Swallow the toothpaste?

1 year ago 0 0 0 0

πŸ”₯πŸ”₯
MCTS rollout pruning, a Python interpreter verifier, and iterative self-improvement of intermediate steps during each round of training.
Brilliant stuff thisπŸ’ͺ
rStar-Math is the kind of paper I wish to see more of!

1 year ago 3 0 0 0
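A toy sketch of just the verifier idea. The real rStar-Math scores MCTS nodes with value estimates; this made-up snippet only shows the "interpreter as verifier" part, where a reasoning step survives only if its code actually executes.

```python
def verifier(expr):
    """Stand-in for a Python-interpreter verifier: an intermediate
    step survives only if it actually executes without error."""
    try:
        eval(expr, {"__builtins__": {}})  # sandboxed-ish: no builtins exposed
        return True
    except Exception:
        return False

def prune_rollouts(rollouts):
    """Keep only rollouts whose every intermediate step passes the verifier
    (a toy version of rollout pruning)."""
    return [r for r in rollouts if all(verifier(step) for step in r)]

rollouts = [
    ["2 + 3", "5 * 4"],  # every step executes: kept
    ["2 +", "5 * 4"],    # broken step: whole rollout pruned
]
print(len(prune_rollouts(rollouts)))  # 1
```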

For a while we've been working on an ambitious problem: The National Archive of Mexico #AGN holds 58 linear km of documents. Only a drop of this ‘ocean’ has been studied due to many challenges. But great news: we are now unlocking this information! A thread 🧡 (1/8) #HTR #AI #CulturalHeritage

1 year ago 140 60 5 13

Computer Vision: Fact & Fiction is now available on YouTube πŸ™ŒπŸΌ I made a playlist for it with the seven chapters. Enjoy this time capsule from two decades ago!

1 year ago 58 16 4 4

I like how the new gemini 2.0 thinking model insists like a child...lol

1 year ago 0 0 0 0


Taking a time machine within a time machine... stealing someone's consciousness... the ideas were next level!
The guy is a beast.
It's a shame Shane Carruth couldn't carry on making more amazing films.

1 year ago 1 0 0 0

Yooo...a primer fan?
There are so many incredible moments in this film.
Wow... have you seen 'Upstream Color' as well?

1 year ago 2 0 1 0

Wow!
I should read this!

1 year ago 0 0 0 0

Ah...

1 year ago 0 0 0 0

What does "fuch" mean?

1 year ago 0 0 1 0

Diffusion transformer (DiT) ftw!!

1 year ago 1 0 0 0

6. V is not rotated. Only Q and K are rotated relative to each other. Farther tokens now have a larger angle between them.
7. The encoding signal is not going to die out. It can be preserved by doing it as part of the softmax dot product attn.
8. What a gorgeous 😍 idea...

1 year ago 1 0 0 0
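The rotate-Q-and-K trick fits in a few lines of NumPy for a single 2-D pair (real RoPE applies many such 2-D rotations at different frequencies; θ here is an arbitrary toy frequency):

```python
import numpy as np

def rotate(vec, pos, theta=0.5):
    """Rotate a 2-D vector by a position-dependent angle (RoPE, one 2-D pair)."""
    a = pos * theta
    R = np.array([[np.cos(a), -np.sin(a)],
                  [np.sin(a),  np.cos(a)]])
    return R @ vec

q = np.array([1.0, 0.0])
k = np.array([1.0, 0.0])

# Magnitude is preserved: rotation doesn't corrupt the token's meaning.
assert np.isclose(np.linalg.norm(rotate(q, 7)), np.linalg.norm(q))

# The Q·K score depends only on relative position, not absolute position:
s1 = rotate(q, 5) @ rotate(k, 3)    # positions 5 and 3 (offset 2)
s2 = rotate(q, 12) @ rotate(k, 10)  # positions 12 and 10 (same offset 2)
assert np.isclose(s1, s2)
print("relative-position property holds")
```

Farther tokens get a larger angle between them, so the score decays with relative distance exactly as point 6 describes.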

4. RoPE takes this operation from the beginning of the input to inside the attention operation itself.
5. There are two benefits: the semantic meaning of the token is not corrupted, since we only rotate the vector, preserving its magnitude.

1 year ago 0 0 1 0

TL;DR:
1. We need a way to encode token positions when feeding them as input into the transformer
2. We could just concat 1, 2, 3 etc., but this doesn't scale to variable lengths
3. Noam Shazeer showed how sin and cos waves can produce a beautiful pattern that encodes relative positions between tokens.

1 year ago 0 0 1 0
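Point 3 in a few lines of NumPy. This is a standard sketch of the Vaswani et al. sin/cos encoding (base 10000 as in the paper), shown for a single position:

```python
import numpy as np

def sinusoidal_pe(pos, d=8):
    """Sin/cos positional encoding for one position (sketch)."""
    i = np.arange(d // 2)
    freqs = 1.0 / (10000 ** (2 * i / d))  # geometric ladder of wavelengths
    pe = np.empty(d)
    pe[0::2] = np.sin(pos * freqs)  # even dims: sine
    pe[1::2] = np.cos(pos * freqs)  # odd dims: cosine
    return pe

# Unlike raw indices 1, 2, 3..., every component stays bounded in [-1, 1],
# so the encoding generalizes to sequence lengths unseen in training.
pe = sinusoidal_pe(50)
print(pe.shape)  # (8,)
```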
fleetwood.dev

Link: fleetwood.dev/posts/you-co...

1 year ago 0 0 1 0

RoPE has been the one πŸ’― genuine upgrade to the vanilla Vaswani transformer.

This beautiful blogpost by Chris Fleetwood explains the significance, and how rotating Q & K preserves meaning (magnitude) while encoding relative positions (angle shift) πŸ”₯πŸ”₯

1 year ago 13 2 1 0

Why does ChatGPT refuse to say "David Mayer" ?? πŸ€”
I have tried a bunch of ways and it refuses to!! 😭

1 year ago 1 0 0 0

πŸ‘ŒπŸ™

1 year ago 0 0 0 0

πŸ€” Can you turn your vision-language model from a great zero-shot model into a great-at-any-shot generalist?

Turns out you can, and here is how: arxiv.org/abs/2411.15099

Really excited to share this work on multimodal pretraining as my first bluesky entry!

🧡 A short and hopefully informative thread:

1 year ago 135 24 2 7

πŸ˜„

1 year ago 0 0 0 0

SIGGRAPH'25 (form): 48 days.
RSS'25 (abs): 49 days.
SIGGRAPH'25 (paper-md5): 55 days.
RSS'25 (paper): 56 days.
ICML'25: 62 days.
RLC'25 (abs): 77 days.
RLC'25 (paper): 84 days.
ICCV'25: 97 days.

1 year ago 12 1 0 2

We should give this place a serious try...
It may work πŸ™

1 year ago 0 0 0 0