
Posts by Benno Krojer

I'll be at ICLR!

I'll present our LatentLens paper at the Re-Align workshop and am happy to chat (ideally at the beach 🏖️)

18 hours ago 6 0 0 0

What are your favorite papers that serve as excellent examples of how to write a great scientific paper: presenting results, making great figures, and keeping it engaging and easy to follow?

They don't necessarily have to be the most cited or impactful ones

3 weeks ago 6 0 2 0

Inspired by
@bennokrojer.bsky.social, we included a Behind the Scenes section 🎬

The goal is to make science more transparent 🔍, share lessons learned 🧠, and provide a more realistic lens on the research journey 👣

8/

bsky.app/profile/benn...

1 month ago 5 1 1 0

🚨New Paper!🚨 How do reasoning LLMs handle inferences that have no deterministic answer? We find that they diverge from humans in some significant ways, and fail to reflect human uncertainty… 🧵(1/10)

1 month ago 57 20 3 1

*study in isolation

1 month ago 0 0 0 0

True, but it's also more loopy! Not just one clean forward pass you can study

1 month ago 0 0 1 0

Another way to put it:

With AI systems, we'll never have the kind of privileged access to latents that we have with brains (albeit only to our own, so it depends on how much you think we're all roughly the same)

1 month ago 3 0 0 0

The flip side:
With AI systems, it's still very much unclear which of our human intuitions apply (anthropomorphizing) and when they're a completely different beast that requires fully new theories of cognition

1 month ago 0 0 0 0

Sure, there are lots of fallacies and biases in the process of introspection, but I wouldn't discard subjective experience as a very strong source of insight

1 month ago 1 0 0 0

People often say (myself too):

Interpretability of AI is so much easier than neuroscience! We can inspect everything and even retrain (vs. carefully poking a little at the brain)!

One big advantage in neuroscience I often forget: We're quite literally *inside* the thing we're studying

1 month ago 7 1 4 0

This is a more accurate example, since our method explicitly goes beyond single-token interpretations to words in the context of a sentence/paragraph

1 month ago 0 0 0 0
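
As a rough, self-contained illustration of that distinction (a single-token interpretation vs. a word in context), here is a minimal sketch using GPT-2 from Hugging Face transformers; GPT-2 and the example sentence are stand-ins, not necessarily models or data from the paper:

    # Sketch: static (single-token) vs. contextual (in-sentence) embeddings.
    # GPT-2 is a stand-in here, not necessarily a model from the paper.
    import torch
    from transformers import AutoModel, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModel.from_pretrained("gpt2")

    sentence = "a brown dog runs across the field"
    inputs = tok(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state[0]  # (seq_len, d_model)

    # Static view: one fixed row of the embedding matrix, context-free.
    dog_id = tok.encode(" dog")[0]
    static_emb = model.get_input_embeddings().weight[dog_id]

    # Contextual view: the hidden state at "dog"'s position after the
    # model has read the surrounding sentence.
    dog_pos = inputs["input_ids"][0].tolist().index(dog_id)
    contextual_emb = hidden[dog_pos]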
GitHub - McGill-NLP/latentlens: Code and data for the paper "LatentLens: Revealing Highly Interpretable Visual Tokens in LLMs"

For more detailed instructions on how to use the library:
github.com/McGill-NLP/l...

1 month ago 0 0 0 0

You can now "pip install latentlens" 🔨

It comes with:
* pre-computed embeddings for several popular LLMs and VLMs
* a txt file with sentences describing WordNet concepts, which we recommend as a standard corpus to get embeddings from
* ...

Try it out and let us know what we can improve!

1 month ago 7 2 2 0
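
For illustration only, a hypothetical usage sketch; the function and argument names below are guesses, not the documented latentlens API (see the repo linked above for the real interface):

    # HYPOTHETICAL sketch: these names are illustrative guesses,
    # not the documented latentlens API.
    import latentlens

    # Assumed helper: load the pre-computed embeddings for a supported model.
    emb = latentlens.load_embeddings("llava-1.5-7b")  # model name is a guess

    # Assumed helper: nearest-neighbor descriptions for one visual token.
    for description, score in latentlens.nearest_neighbors(emb, token=42, k=5):
        print(f"{score:.3f}  {description}")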

Is interpretability at the random fact-gathering stage or beyond?

1 month ago 2 0 0 0

Finally getting into this classic

Let's see if by the end I'll have a clearer idea of what type of science some fields of AI, like interpretability, are

What are our paradigms?

1 month ago 7 0 2 0

Google decided to show this as the first sentence from my website (and not any of the sentences actually at the top of the page)

2 months ago 1 0 0 0

Keep me posted and feel free to ping me anytime something is confusing!

2 months ago 1 0 0 0

Re 2): this was a typo and should be "i" for the token position, consistent with later uses in 3.2 and with how we use "i" in 3.1

2 months ago 0 0 0 0

Maybe we can formulate it like this: a description d is text with optional metadata (token position, layer) that is mapped to a vector r

The general formalism is tricky, but I think the intuition is hopefully clear :)

2 months ago 0 0 1 0
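
A minimal sketch of that formalism; only "d", "r", and the token position "i" come from the discussion above, everything else is illustrative:

    # Sketch: a description d is text plus optional metadata (token
    # position, layer), and it maps to a vector r.
    from dataclasses import dataclass
    from typing import Optional
    import numpy as np

    @dataclass(frozen=True)
    class Description:
        text: str                             # e.g. "a small cat"
        token_position: Optional[int] = None  # the "i" from Sec. 3.1/3.2
        layer: Optional[int] = None

    def embed(d: Description, dim: int = 8) -> np.ndarray:
        # Stand-in for a real model call implementing the mapping d -> r.
        rng = np.random.default_rng(abs(hash((d.text, d.token_position, d.layer))))
        return rng.standard_normal(dim)

    # Same text, different token position => a different description,
    # hence (in general) a different vector r.
    r1 = embed(Description("a small cat", token_position=2))
    r2 = embed(Description("a small cat", token_position=1))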
[Image: LLaMA3-8B + ViT-L/14-336]

So in our case (LatentLens) I would say:
A description here is something like "a brown *dog*" and not "a brown dog", so the token position makes it a different description (this is also how we highlight it in our demo: bennokrojer.com/vlm_interp_d...)

2 months ago 0 0 1 0

So I got a chance to look closely, and you are right in both cases! Thank you for spotting this. I will upload a new version to arXiv soon with the fixes

To clarify things here as well:
1) In 3.1 we described things generally but missed that, e.g., LatentLens would match several vectors r with a single description d

2 months ago 1 0 1 0

Thank you! Let me get back to you later today on this when I'm on my laptop

2 months ago 0 0 1 0

What does it mean for visual tokens to be "interpretable" to an LLM? And how do we measure it?

These, and many more pressing questions, are addressed!

Introducing LatentLens -- a new, more faithful tool for interpretability! Honoured to have collaborated with
@bennokrojer.bsky.social on this!

2 months ago 4 1 0 0

Finally, on a personal note: this will be the final paper of my PhD... what a journey it has been

2 months ago 1 0 0 0

Pivoting to interpretability this year was great, and I also wrote a blog post specifically on this:
bennokrojer.com/interp.html

2 months ago 1 0 1 0

This is a major lesson I will keep in mind for any future project:

Test your assumptions; do not assume the field has already settled them

2 months ago 1 0 1 0

This project was definitely accelerated and shaped by Claude Code/Cursor. Building intuitive demos in interp is now much easier

2 months ago 1 0 1 0

Finally, we do test it empirically: finding some models where the embedding matrix of the LLM already provides decently interpretable nearest neighbors

But this was not the full story yet...
@mariusmosbach.bsky.social and @elinorpd.bsky.social nudged me to use contextual embeddings

2 months ago 1 1 1 0
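
A rough sketch of that check, i.e. nearest neighbors of a latent vector in the LLM's input embedding matrix; GPT-2 and the random vector are arbitrary stand-ins for the actual VLM backbone and a real visual token:

    # Sketch: nearest neighbors of a latent vector in the LLM's input
    # embedding matrix. GPT-2 and the random vector are stand-ins.
    import torch
    from transformers import AutoModel, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModel.from_pretrained("gpt2")

    E = model.get_input_embeddings().weight  # (vocab_size, d_model)
    visual_latent = torch.randn(E.shape[1])  # placeholder for a visual token

    with torch.no_grad():
        sims = torch.nn.functional.cosine_similarity(
            E, visual_latent.unsqueeze(0), dim=1
        )
        top = sims.topk(5)

    # Decode the top-k nearest vocabulary items as candidate interpretations.
    for score, idx in zip(top.values.tolist(), top.indices.tolist()):
        print(f"{score:.3f}  {tok.decode([idx])!r}")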

Then the project went "off-track" for a while, partially because we didn't question our assumptions enough:

We just assumed visual tokens going into an LLM would not be that interpretable (based on the literature and our intuition)

But we never fully tested it for many weeks!

2 months ago 1 0 1 0

The initial ideation phase:

Pivoting to a new direction, wondering what kind of interp work would be meaningful, getting feedback from my lab, ...

2 months ago 1 0 1 0