
Posts by Subbarao Kambhampati (కంభంపాటి సుబ్బారావు)


𝙈𝙞𝙨𝙘𝙤𝙣𝙘𝙚𝙥𝙩𝙞𝙤𝙣𝙨 𝙖𝙗𝙤𝙪𝙩 𝙇𝙇𝙈-𝙈𝙤𝙙𝙪𝙡𝙤 𝙖𝙣𝙙 𝙩𝙝𝙚 𝙍𝙤𝙡𝙚 𝙤𝙛 𝙑𝙚𝙧𝙞𝙛𝙞𝙚𝙧𝙨 𝙞𝙣 𝙍𝙚𝙖𝙨𝙤𝙣𝙞𝙣𝙜 𝙈𝙤𝙙𝙚𝙡𝙨. #SundayHarangue

www.linkedin.com/pulse/miscon...

1 week ago

I am oh-so-proud of it to the point of talking everyone's ear off about it for the past several months.. 2/2

Here is a Twitter thread about the paper: x.com/rao2z/status...

And here is a one hour talk centered on this paper given at #NeurIPS2025
www.youtube.com/watch?v=rvby...

3 weeks ago

Our paper questioning the widespread anthropomorphization of LRM intermediate tokens as "reasoning traces" has just been accepted to @tmlrorg.bsky.social (arxiv.org/abs/2505.13775). This work was led by the dream team of Karthik Valmeekam, Vardhan Palod, Kaya Stechly and Atharva Gundawar. 1/

3 weeks ago
𝐋𝐋𝐌-𝐏𝐫𝐨𝐜𝐞𝐬𝐬-𝐌𝐨𝐝𝐮𝐥𝐨: Our original 𝘓𝘓𝘔-𝘔𝘰𝘥𝘶𝘭𝘰 framework (https://lnkd.in/gyAyKx4E) is a Generate-Test framework, with the LLM generating candidate solutions and a bank of verifiers critiquing those solutions...

Verifying LLM problem progress with LLM-Process-Modulo

www.linkedin.com/posts/subbar...
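The Generate-Test loop the post describes can be sketched in a few lines. This is a hypothetical illustration, not the framework's actual implementation: `llm_modulo`, `toy_generate`, and `even_verifier` are names invented here, and the toy generator stands in for a real LLM call.

```python
from typing import Callable, List, Optional

def llm_modulo(generate: Callable[[str, List[str]], str],
               verifiers: List[Callable[[str], Optional[str]]],
               problem: str,
               max_rounds: int = 5) -> Optional[str]:
    # Generate-Test loop: the generator proposes a candidate, the bank of
    # verifiers critiques it, and the critiques are fed back into the next
    # generation round. Return the first candidate that every verifier
    # accepts, or None once the round budget is exhausted.
    critiques: List[str] = []
    for _ in range(max_rounds):
        candidate = generate(problem, critiques)
        critiques = [c for v in verifiers if (c := v(candidate)) is not None]
        if not critiques:
            return candidate
    return None

# Toy stand-ins for illustration only:
def toy_generate(problem: str, critiques: List[str]) -> str:
    # Pretend-LLM that bumps its guess once per round of critiques.
    return str(1 + len(critiques))

def even_verifier(candidate: str) -> Optional[str]:
    # A verifier returns None to accept, or a critique string to reject.
    return None if int(candidate) % 2 == 0 else "answer must be even"

print(llm_modulo(toy_generate, [even_verifier], "pick an even number"))  # → 2
```

The point of the structure is that soundness lives in the verifiers, not the generator: the LLM can propose freely, and only candidates that clear the whole verifier bank are returned.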

4 weeks ago

World Models: The old, the new and the wishful #SundayHarangue

There is a lot of chatter about world models of late--even more than can be explained by Yann's bet. I was going to comment on this clamor in my next class, and thought I would preview it here first..😋

www.linkedin.com/pulse/world-...

1 month ago

In the immortal words of Azriel Rosenfeld

"This research fills a much needed gap.."

1 month ago

If you, as a CS Prof, are wondering whether you need PhD students at all now that you have wangled a subscription to Claude Code, your lab probably had a pretty depressing vibe to begin with--and 'em students are likely better off with you hanging out with Claude.. #AIAphorisms

1 month ago
Role of LLMs in Human-AI Interaction (Keynote @ Cogsima 2026)

Our initial interest in the reasoning capabilities of LLMs arose from our proximal work on Human-AI interaction. It was thus gratifying to give a keynote on the role of LLMs in Human-AI interaction at the IEEE CogSIMA conference yesterday.

Video 👉 youtu.be/yf4RQYlKRJI

1 month ago

We updated our position on anthropomorphization of intermediate tokens in LRMs--with additional results and a call to action.. arxiv.org/abs/2504.09762

1 month ago
Planning & Reasoning Abilities of LLMs/LRMs (Lecture 2 @ Melbourne ML Summer School 2026)

𝗣𝗼𝘀𝘁-𝘁𝗿𝗮𝗶𝗻𝗶𝗻𝗴 𝘄𝗶𝘁𝗵 𝗼𝗿 𝘄𝗶𝘁𝗵𝗼𝘂𝘁 𝘀𝗲𝗹𝗳-𝗱𝗶𝘀𝘁𝗶𝗹𝗹𝗮𝘁𝗶𝗼𝗻 𝗶𝘀 𝗮𝗹𝗹 𝗮𝗯𝗼𝘂𝘁 𝗰𝗼𝗺𝗽𝗶𝗹𝗶𝗻𝗴 𝘃𝗲𝗿𝗶𝗳𝗶𝗲𝗿 𝘀𝗶𝗴𝗻𝗮𝗹 𝗶𝗻𝘁𝗼 𝘁𝗵𝗲 𝗯𝗮𝘀𝗲 𝗟𝗟𝗠 𝗴𝗲𝗻𝗲𝗿𝗮𝘁𝗼𝗿 #SundayHarangue

𝗦𝘁𝗮𝗻𝗱𝗮𝗿𝗱 𝗣𝗼𝘀𝘁-𝗧𝗿𝗮𝗶𝗻𝗶𝗻𝗴 ≈ 𝗖𝗼𝗺𝗽𝗶𝗹𝗶𝗻𝗴 𝗩𝗲𝗿𝗶𝗳𝗶𝗲𝗿 𝗦𝗶𝗴𝗻𝗮𝗹
𝗦𝗲𝗹𝗳-𝗱𝗶𝘀𝘁𝗶𝗹𝗹𝗮𝘁𝗶𝗼𝗻 = 𝗖𝗼𝗺𝗽𝗶𝗹𝗶𝗻𝗴 𝗩𝗲𝗿𝗶𝗳𝗶𝗲𝗿 𝗙𝗲𝗲𝗱𝗯𝗮𝗰𝗸 𝘁𝗼𝗼

See 👇 for more..

www.linkedin.com/posts/subbar...

1 month ago

Does it really make sense to think of inference efficiency in terms of the number of tokens produced?

No. 👇

x.com/i/status/202...

2 months ago

Sorry, but I think you miss the point that most of the reasoning model revolution came exactly for tasks where there are verifiers--whether external/symbolic, or learned, or even hand-coded simulators. What do you think RLVR or Self Distillation are?

2 months ago

The lectures, 3 hours long with Q&A, are quite up to date and cover LLMs, LRMs, as well as the latest test-time scaling and post-training methods such as LLM-Process-Modulo and self-distillation.

2 months ago

Here are the recordings of two lectures on 𝗣𝗹𝗮𝗻𝗻𝗶𝗻𝗴 & 𝗥𝗲𝗮𝘀𝗼𝗻𝗶𝗻𝗴 𝗖𝗮𝗽𝗮𝗯𝗶𝗹𝗶𝘁𝗶𝗲𝘀 𝗼𝗳 𝗟𝗟𝗠𝘀/𝗟𝗥𝗠𝘀 that I gave this week at Melbourne ML Summer School (lnkd.in/g7rxg9sw).

𝙇𝙚𝙘𝙩𝙪𝙧𝙚 1: youtube.com/watch?v=_PPV...
𝙇𝙚𝙘𝙩𝙪𝙧𝙚 2: youtube.com/watch?v=fKlm...

2 months ago

Slides available with the video (direct link bit.ly/4sXyjtj)

2 months ago
Anthropomorphization Sins in Modern AI (or Perils of Prematurely Applying Lens of Cognition to LLMs)

A common theme in our work these past few years has been pushing back on facile anthropomorphizations of LLMs (and/or efforts that apply questionable or discredited Cognitive Science metaphors to them).. So I enjoyed giving this talk at @ivado.bsky.social yesterday... www.youtube.com/watch?v=CoyS...

2 months ago
On the Mythos of LRM "Thinking Tokens" (Talk @ Microsoft Research, India; 12/16/2025)

Three of my talks in India last month--at @iitdelhi.bsky.social,
@msftresearch.bsky.social India and at IndoML Symposium--were "On the Mythos of LRM Thinking Tokens." Here is a recording of one of them--the talk I gave at MSR India.

www.youtube.com/watch?v=fCQX...

3 months ago

Like I say, if a human--even a Terence Tao--makes an egregious mistake (e.g. the one below) even once, our trust in them takes a nosedive. With LLMs, the reaction is just "..but they do so well on IMO problems!"..

3 months ago
Talk on the semantics of "Thinking Traces" (Keynote at NeurIPS2025 MAR Workshop)

ICYMI, here is my keynote on the semantics of LRM "thinking traces" at #NeurIPS2025 workshop on Multimodal Algorithmic Reasoning. It's a unified view of the seven papers we presented at the conference workshops. Special thanks to the engaged audience..🙏

www.youtube.com/watch?v=rvby...

4 months ago

[On using Continuous Latent Space Vectors in the context windows of Transformers and LLMs] #SundayHarangue
👉 x.com/rao2z/status...

5 months ago
LRMs and Agentic AI (Talk at Samsung AI Forum)

My talk at Samsung AI Forum yesterday
www.youtube.com/watch?v=L2nA...

7 months ago

In the year since LRMs ("reasoning models") hit the scene, we have been trying to understand, analyze and demystify them.. Here are our efforts to date--conveniently all in one place..👇

www.linkedin.com/posts/subbar...

7 months ago

𝐏𝐞𝐫𝐟𝐨𝐫𝐦𝐚𝐭𝐢𝐯𝐞 𝐓𝐡𝐢𝐧𝐤𝐢𝐧𝐠? The anthropomorphization of LRM intermediate tokens as thinking begat a cottage industry to "get efficiency by shortening thinking." We ask: 𝗜𝘀 𝗖𝗼𝗧 𝗹𝗲𝗻𝗴𝘁𝗵 𝗿𝗲𝗮𝗹𝗹𝘆 𝗮 𝗿𝗲𝗳𝗹𝗲𝗰𝘁𝗶𝗼𝗻 𝗼𝗳 𝗽𝗿𝗼𝗯𝗹𝗲𝗺 𝗵𝗮𝗿𝗱𝗻𝗲𝘀𝘀 𝗼𝗿 𝗶𝘀 𝗶𝘁 𝗺𝗼𝗿𝗲 𝗽𝗲𝗿𝗳𝗼𝗿𝗺𝗮𝘁𝗶𝘃𝗲? 👉 www.linkedin.com/posts/subbar...

7 months ago

Rejecting papers in #AI Conferences because of "resource constraints" is shooting ourselves in the foot as a community; use Findings.. #SundayHarangue 👇

x.com/rao2z/status...

7 months ago

Proofs are not reasoning traces & I/O Format Language shouldn't be much of an issue for LLMs + other things #SundayHarangue (Special IMO edition). 🧵 👇

x.com/rao2z/status...

8 months ago

Both LLMs and LRMs are upper-bounded by humanity's knowledge closure. True scientific discoveries are, by definition, outside that closure. Ergo, LLMs/LRMs are great force multipliers for us, but they don't support the "Nobel this weekend" hype..

👉 www.linkedin.com/posts/subbar...

9 months ago

Computational Complexity is the wrong measure for LRMs (as it was for LLMs)--think distributional distance instead #SundayHarangue (yes, we're back!)

👉 x.com/rao2z/status...

9 months ago

A̶̶̶I̶̶̶ ̶ ̶ ̶ ̶(̶A̶r̶t̶i̶f̶i̶c̶i̶a̶l̶ ̶I̶n̶t̶e̶l̶l̶i̶g̶e̶n̶c̶e̶)̶
̶̶̶A̶̶̶G̶̶̶I̶̶̶ ̶(̶A̶r̶t̶i̶f̶i̶c̶i̶a̶l̶ ̶G̶e̶n̶e̶r̶a̶l̶ ̶I̶n̶t̶e̶l̶l̶i̶g̶e̶n̶c̶e̶)̶
̶̶̶A̶̶̶S̶̶̶I̶̶̶ ̶(̶A̶r̶t̶i̶f̶i̶c̶i̶a̶l̶ ̶S̶u̶p̶e̶r̶ ̶I̶n̶t̶e̶l̶l̶i̶g̶e̶n̶c̶e̶)
ASDI (Artificial Super Duper Intelligence)

Don't get stuck with yesterday's hypeonyms!
Dare to get to the next level!

#AIAphorisms

9 months ago
Subbarao Kambhampati (కంభంపాటి సుబ్బారావు) on X: "Some of what that recent Apple LRM limitations paper shows is known (pardon my friendly Schmidhubering; I do welcome more LLM studies with scientific skepticism). Our study 👇 from Sep 2024 shows o1 accuracy degrading as complexity increases.. 1/ https://t.co/d8zEUGi4SZ"

This series of lectures was given the same week there was all that brouhaha over the Apple illusion paper (I was giving these lectures during the day and talking to reporters in the evening 😅). As such they are pretty up-to-date! 3/

x.com/rao2z/status...

10 months ago

The lectures start with a "big picture" overview (Lecture 1); focus on standard LLMs and their limitations, and LLM-Modulo as a test-time scaling approach (Lecture 2); and end with a critical appraisal of the test-time scaling and RL post-training techniques (Lecture 3). 2/

10 months ago