
Posts by Arnab Sen Sharma


Humans and LLMs think fast and slow. Do SAEs recover slow concepts in LLMs? Not really.

Our Temporal Feature Analyzer discovers contextual features in LLMs that detect event boundaries, parse complex grammar, and represent ICL patterns.

5 months ago

Thanks to my collaborators Giordano Rogers, @natalieshapira.bsky.social, and @davidbau.bsky.social.

Check out our paper for more details:

📜 arxiv.org/pdf/2510.26784
💻 github.com/arnab-api/fi...
🌐 filter.baulab.info

5 months ago

The fact that the neural mechanisms implemented in the transformer architecture align with human-designed symbolic strategies suggests that certain computational patterns arise naturally from task demands rather than from specific architectural constraints.

5 months ago

This dual implementation of filtering (lazy evaluation via filter heads, eager evaluation via stored intermediate flags) echoes the lazy-vs-eager evaluation strategies of functional programming.

See Henderson & Morris Jr. (1976): dl.acm.org/doi/abs/10....
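The lazy-vs-eager distinction can be made concrete in Python. This is a toy sketch of the analogy, not the paper's code: lazy filtering defers the predicate until an answer is demanded, while eager filtering tags each item with a flag as it streams in.

```python
# Toy illustration of lazy vs. eager filtering (illustrative only).
menu = ["carrot", "steak", "spinach", "salmon", "pea"]
veggies = {"carrot", "spinach", "pea"}

def is_veggie(item):
    return item in veggies

# Lazy: a generator evaluates the predicate only when the result is
# consumed, analogous to filter heads applying the predicate at query time.
lazy = filter(is_veggie, menu)   # nothing evaluated yet
lazy_result = list(lazy)         # predicate runs here

# Eager: evaluate the predicate as each item is seen and store a flag,
# analogous to writing a "flag" into each option's latent representation.
flags = [(item, is_veggie(item)) for item in menu]   # flags stored up front
eager_result = [item for item, flag in flags if flag]

assert lazy_result == eager_result == ["carrot", "spinach", "pea"]
```

Both strategies compute the same set; they differ only in *when* the predicate is evaluated, which is exactly the distinction the causal experiments probe.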

5 months ago

This seemingly innocent change in prompt order fundamentally changes which strategy the LLM uses. This suggests that LLMs can maintain multiple strategies for the same task, and flexibly switch between and prioritize them based on what information is available.

5 months ago

We validate this flag-based eager evaluation hypothesis with a series of carefully designed causal analyses. If we swap this flag onto another item, then in the question-before context the LM consistently picks the item carrying the flag. The question-after context, however, is not sensitive to this swap.

5 months ago

🎭 Plot twist: when the question is presented *before* the options, the causality score drops to near zero!

We investigate this further and find that when the question is presented first, the LM can *eagerly* evaluate each option as it appears, and store a "flag" directly in the latents.

5 months ago

🔄 The predicate can also be transferred (to some extent) across different tasks, suggesting that LLMs rely on shared representations and mechanisms that are reused across tasks.

Also check out @jackmerullo.bsky.social's work on LLMs reusing sub-circuits across different tasks.
x.com/jack_merull...

5 months ago
Sheridan Feucht (@sfeucht.bsky.social) [📄] Are LLMs mindless token-shifters, or do they build meaningful representations of language? We study how LLMs copy text in-context, and physically separate out two types of induction heads: token heads, which copy literal tokens, and concept heads, which copy word meanings.

Language-independent predicates resemble the cross-lingual concepts seen in prior work by @sfeucht.bsky.social, @wendlerc.bsky.social, and @jannikbrinkmann.bsky.social.
bsky.app/profile/sfe...

5 months ago

When the question is presented *after* the options, filter heads can achieve high causality scores across language and format changes! This suggests that the encoded predicate is robust against such perturbations.

5 months ago

We test this across a range of different semantic types, presentation formats, languages, and even different tasks that require a different "reduce" step after filtering.

5 months ago

📊 We measure this with a *causality* score: if the predicate is abstractly encoded in the query states of these "filter heads", then transferring it should change the output accordingly. For example, in the figure the answer should change to "Peach" (or its equivalent in the changed format).
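In pseudocode, a score of this kind reduces to a flip rate over patched runs. This is a schematic sketch, not the paper's implementation; `run_patched` is a hypothetical stand-in for a forward pass with transplanted query states.

```python
def causality_score(examples, run_patched):
    """Fraction of examples where transplanting the filter-head query
    states flips the model's answer to the one implied by the donor
    predicate.

    examples: list of (source_ctx, target_ctx, expected_answer) triples.
    run_patched: hypothetical callable (source_ctx, target_ctx) -> answer
        after the query states from source_ctx are patched into target_ctx.
    """
    hits = sum(
        run_patched(src, tgt) == expected
        for src, tgt, expected in examples
    )
    return hits / len(examples)

# Stub demo: a "model" that always follows the transplanted predicate
# scores 1.0; one that ignores it scores 0.0.
demo = [("ctx_A", "ctx_B", "Peach")]
score = causality_score(demo, lambda src, tgt: "Peach")
```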

5 months ago

🤔 But do these heads play a *causal* role in the operation?

To test this, we transplant their query states from one context to another. We find that this triggers the execution of the same filtering operation, even if the new context has a new list of items and a different format!
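In single-head attention, the query vector determines which keys get attended to, so transplanting a query from another context redirects the head's attention over the new context's items. A minimal NumPy sketch of that intuition (random toy vectors, not real model activations):

```python
import numpy as np

def attn_weights(q, K):
    """Attention weights of one query vector q over key matrix K."""
    scores = K @ q / np.sqrt(q.shape[0])
    e = np.exp(scores - scores.max())   # numerically stable softmax
    return e / e.sum()

rng = np.random.default_rng(0)
d = 8
# Hypothetical key vectors for 5 options in a *new* context B.
K_b = rng.normal(size=(5, d))
# Query state computed in context A (carrying the predicate) vs. the
# query context B would have produced on its own.
q_a = rng.normal(size=d)
q_b = rng.normal(size=d)

w_native = attn_weights(q_b, K_b)    # B's own attention pattern
w_patched = attn_weights(q_a, K_b)   # pattern after transplanting A's query

# The transplanted query redirects which items the head attends to,
# i.e. which predicate gets "executed" over B's option list.
assert not np.allclose(w_native, w_patched)
```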

5 months ago

πŸ” In Llama-70B and Gemma-27B, we found special attention heads that consistently focus their attention on the filtered items. This behavior seems consistent across a range of different formats and semantic types.

5 months ago

We want to understand how large language models (LLMs) encode "predicates". Is every filtering question, e.g., find the X that satisfies property P, handled in a different way? Or has the LM learned to use abstract rules that can be reused in many different situations?

5 months ago

How can a language model find the veggies in a menu?

New pre-print where we investigate the internal mechanisms of LLMs when filtering on a list of options.

Spoiler: it turns out LLMs use strategies surprisingly similar to functional programming (think "filter" from Python)! 🧵
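For readers who haven't met it, Python's built-in `filter` applies a predicate to every item and keeps the matches. The menu and keyword predicate below are made-up examples, not data from the paper:

```python
menu = ["grilled salmon", "caesar salad", "veggie burger", "ribeye steak"]
veggie_keywords = {"salad", "veggie"}

def is_veggie(dish):
    # Hypothetical predicate: a dish counts as vegetarian if its name
    # mentions a veggie keyword.
    return any(word in dish for word in veggie_keywords)

# filter(predicate, items) keeps the items where the predicate is True;
# this is the operation the paper looks for inside the LLM.
print(list(filter(is_veggie, menu)))   # → ['caesar salad', 'veggie burger']
```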

5 months ago

How do language models track the mental states of each character in a story, an ability often referred to as Theory of Mind?

We reverse-engineered how LLaMA-3-70B-Instruct handles a belief-tracking task and found something surprising: it uses mechanisms strikingly similar to pointer variables in C programming!
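The pointer analogy can be sketched in Python (a toy model of the claim, not the paper's circuit): each character's belief is a reference into a table of world states, and updating the reference, rather than the state, changes what the character "believes".

```python
# Toy Sally-Anne sketch: beliefs as pointers (illustrative only).
# Snapshots of the world at different story times.
states = {
    "t0": {"marble": "basket"},
    "t1": {"marble": "box"},   # Anne moves the marble while Sally is away
}

# Each character's belief is a pointer to the last state they observed.
belief_ptr = {"Sally": "t0", "Anne": "t1"}

def believes(character, obj):
    # Dereference the character's pointer, then look up the object,
    # much like following a C pointer before reading through it.
    return states[belief_ptr[character]][obj]

assert believes("Sally", "marble") == "basket"   # stale pointer: false belief
assert believes("Anne", "marble") == "box"       # up-to-date pointer
```

The design point of the analogy: a false belief is cheap to represent; nothing about the world is duplicated, only a pointer is left stale.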

9 months ago

More big news! Applications are open for the NDIF Summer Engineering Fellowship: an opportunity to work on cutting-edge AI research infrastructure this summer in Boston! 🚀

1 year ago