Are you at #NeurIPS2025? Check out the #KempnerInstitute's Day 2 presentations! 💡
#AI #NeuroAI
@cpehlevan.bsky.social @kanakarajanphd.bsky.social @thomasfel.bsky.social @andykeller.bsky.social @binxuwang.bsky.social @njw.fish @yilundu.bsky.social
Posts by Thomas Fel
🐇 Into the Rabbit Hull - Part 1: A Deep Dive into DINOv2 🧠
Our latest Deeper Learning blog post is an #interpretability deep dive into one of today's leading vision foundation models: DINOv2.
👉 Read now: bit.ly/4nNfq8D
Stay tuned: Part 2 coming soon.
#AI #VLMs #DINOv2
The Bau lab is on fire! 👏
Interested in doing a PhD at the intersection of human and machine cognition? ✨ I'm recruiting students for Fall 2026! ✨
Topics of interest include pragmatics, metacognition, reasoning, & interpretability (in humans and AI).
Check out JHU's mentoring program (due 11/15) for help with your SoP 📝
Pleased to share new work with @sflippl.bsky.social @eberleoliver.bsky.social @thomasmcgee.bsky.social & undergrad interns at the Institute for Pure and Applied Mathematics, UCLA.
Algorithmic Primitives and Compositional Geometry of Reasoning in Language Models
www.arxiv.org/pdf/2510.15987
🧵 1/n
🧠 Thrilled to share our NeuroView with Ellie Pavlick!
"From Prediction to Understanding: Will AI Foundation Models Transform Brain Science?"
AI foundation models are coming to neuroscience: if scaling laws hold, predictive power will be unprecedented.
But is that enough?
Thread 🧵👇
Thx a lot Naomi! 🙏🥹
This is so cool. When you look at representational geometry, it seems intuitive that models are combining convex regions of "concepts", but I wouldn't have expected that this is PROVABLY true for attention or that there was such a rich theory for this kind of geometry.
That concludes this two-part descent into the Rabbit Hull.
Huge thanks to all collaborators who made this work possible, and especially to @binxuwang.bsky.social, with whom this project was built, experiment after experiment.
🔮 kempnerinstitute.github.io/dinovision/
📄 arxiv.org/pdf/2510.08638
If this holds, three implications:
(i) Concepts = points (or regions), not directions.
(ii) Probing is bounded: it recovers archetypes, not arbitrary vectors.
(iii) Generating hulls can't be recovered from their sum: we should look deeper than single-layer activations to recover the true latents.
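A quick convex-geometry aside on why (iii) holds (my notation, not the paper's):

```latex
% Minkowski sums of convex hulls collapse into a single hull:
\operatorname{conv}(A) + \operatorname{conv}(B)
  \;=\; \operatorname{conv}\bigl(\{\, a + b : a \in A,\ b \in B \,\}\bigr)
% Many factorizations (A, B) produce the same right-hand side, so the
% summed activation set alone cannot identify which hulls generated it.
```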
Synthesizing these observations, we propose a refined view, motivated by Gärdenfors' theory and attention geometry.
Activations = multiple convex hulls simultaneously: a rabbit among animals, brown among colors, fluffy among textures.
The Minkowski Representation Hypothesis.
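Schematically, in my own (hedged) notation rather than the paper's:

```latex
% Each token embedding is a Minkowski sum of convex mixtures,
% one mixture per attribute type (object, color, texture, ...):
x \;=\; \sum_{k=1}^{K} \sum_{i} \alpha_{k,i}\, v_{k,i},
\qquad \alpha_{k,i} \ge 0, \quad \sum_{i} \alpha_{k,i} = 1 \quad \text{for each } k
% where the v_{k,i} are the vertices (archetypes) of the k-th concept hull.
```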
Taken together, the signs of partial density, local connectedness, and coherent dictionary atoms indicate that DINO's representations are organized beyond linear sparsity alone.
Can position explain this?
We found that positional information collapses from high rank to a nearly 2-dimensional sheet. Early layers encode precise location; later ones retain only abstract axes.
This compression frees dimensions for features, and *position doesn't explain PCA map smoothness*.
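For the curious, the kind of measurement behind such a rank claim, as a sketch (not the paper's code): an entropy-based effective rank, applied per layer to the position-predictive part of patch features.

```python
import numpy as np

def effective_rank(X: np.ndarray) -> float:
    """Entropy-based effective rank of X (n_tokens x d).

    A value near 2 means the features span roughly a 2-dim sheet.
    """
    s = np.linalg.svd(X - X.mean(axis=0), compute_uv=False)
    p = s / s.sum()
    return float(np.exp(-(p * np.log(p + 1e-12)).sum()))

# e.g. apply per layer and watch the value fall toward ~2 in later layers.
```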
Patch embeddings form smooth, connected surfaces tracing objects and boundaries.
This may suggest an interpolative geometry: tokens as mixtures of landmarks, shaped by clustering and spreading forces in the training objectives.
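A sketch of what "tokens as mixtures of landmarks" could mean operationally (my illustration, with hypothetical inputs): fit a token as a convex combination of landmark vectors via Frank-Wolfe over the simplex.

```python
import numpy as np

def hull_coefficients(x: np.ndarray, V: np.ndarray, iters: int = 500) -> np.ndarray:
    """Fit x ~ alpha @ V with alpha on the probability simplex.

    x: (d,) token embedding; V: (k, d) landmark vectors.
    Frank-Wolfe: each step moves toward the best simplex vertex.
    """
    k = V.shape[0]
    alpha = np.full(k, 1.0 / k)
    for t in range(iters):
        grad = 2.0 * (alpha @ V - x) @ V.T        # gradient w.r.t. alpha
        s = np.zeros(k)
        s[np.argmin(grad)] = 1.0                   # minimizing simplex vertex
        alpha += (2.0 / (t + 2.0)) * (s - alpha)   # standard FW step size
    return alpha
```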
We found antipodal feature pairs (dᵢ ≈ −dⱼ): vertical vs horizontal lines, white vs black shirts, left vs right...
Also, co-activation statistics only moderately shape geometry: concepts that fire together aren't necessarily nearby, nor orthogonal when they don't.
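Finding such pairs is a one-liner over the dictionary; a sketch assuming `D` holds the SAE's decoder atoms as rows:

```python
import numpy as np

def antipodal_pairs(D: np.ndarray, thresh: float = -0.95):
    """Return atom pairs (i, j) with cosine(d_i, d_j) below `thresh`,
    i.e. d_i approximately equal to -d_j."""
    Dn = D / np.linalg.norm(D, axis=1, keepdims=True)
    C = Dn @ Dn.T                          # cosine similarity matrix
    iu = np.triu_indices_from(C, k=1)      # unique pairs only
    hits = C[iu] < thresh
    return list(zip(iu[0][hits], iu[1][hits], C[iu][hits]))
```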
Under the Linear Representation Hypothesis, we'd expect the dictionary to be quasi-orthogonal.
Instead, training drives atoms from a near-Grassmannian initialization to higher coherence.
Several concepts fire almost always: the embedding is partly dense (!), contradicting pure sparse coding.
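For reference, the quantities involved, as a sketch (again with `D` holding dictionary atoms as rows): the mutual coherence of the dictionary, and the Welch bound that Grassmannian frames attain.

```python
import numpy as np

def mutual_coherence(D: np.ndarray) -> float:
    """Largest |cosine| between two distinct atoms (rows of D)."""
    Dn = D / np.linalg.norm(D, axis=1, keepdims=True)
    G = np.abs(Dn @ Dn.T)
    np.fill_diagonal(G, 0.0)
    return float(G.max())

def welch_bound(n_atoms: int, d: int) -> float:
    """Lower bound on coherence for n_atoms unit vectors in R^d;
    Grassmannian frames meet it, so anything above it = more aligned."""
    return float(np.sqrt((n_atoms - d) / (d * (n_atoms - 1))))
```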
🕳️🐇 Into the Rabbit Hull - Part II
Continuing our interpretation of DINOv2, the second part of our study concerns the *geometry of concepts* and the synthesis of our findings toward a new representational *phenomenology*:
the Minkowski Representation Hypothesis
Huge thanks to all collaborators who made this work possible, and especially to @binxuwang.bsky.social. This work grew from a year of collaboration!
Tomorrow, Part II: geometry of concepts and Minkowski Representation Hypothesis.
🕹️ kempnerinstitute.github.io/dinovision
📄 arxiv.org/pdf/2510.08638
Curious tokens, the registers.
DINO seems to use them to encode global invariants: we find concepts (directions) that fire exclusively (!) on registers.
Examples of such concepts include a motion-blur detector and style detectors (game screenshots, drawings, paintings, warped images...).
Now for depth estimation. How does DINO know depth?
It turns out it has discovered several human-like monocular depth cues: texture gradients resembling blurring or bokeh, shadow detectors, and projective cues.
Most units mix cues, but a few remain remarkably pure.
Another surprise here: the most important concepts are not object-centric at all, but boundary detectors. Remarkably, these concepts coalesce into a low-dimensional subspace (see paper).
This kind of concept breaks a key assumption in interpretability: that a concept is about the tokens where it fires. Here it is the opposite: the concept is defined by where it does not fire. An open question is how models form such concepts.
Let's zoom in on classification.
For every class, we find two concepts: one fires on the object (e.g., "rabbit"), and another fires everywhere *except* the object -- but only when it's present!
We call them Elsewhere Concepts (credit: @davidbau.bsky.social).
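One way to score this behavior, as a sketch (hypothetical inputs, not the paper's procedure): compare a concept's firing on-object, off-object, and on object-free images.

```python
import numpy as np

def elsewhere_score(acts, obj_mask, has_obj):
    """acts: (n_imgs, n_patches) concept activations;
    obj_mask: (n_imgs, n_patches) bool object masks;
    has_obj: (n_imgs,) bool, whether the object is present.

    High score = fires off the object, but only when the object is there.
    """
    on = acts[has_obj][obj_mask[has_obj]].mean()      # on the object
    off = acts[has_obj][~obj_mask[has_obj]].mean()    # elsewhere, object present
    absent = acts[~has_obj].mean()                    # object absent
    return off - max(on, absent)
```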
Assuming the Linear Representation Hypothesis, SAEs arise naturally as instruments for concept extraction; they will be our companions in this descent.
Archetypal SAE uncovered 32k concepts.
Our first observation: different tasks recruit distinct regions of this conceptual space.
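For readers new to the tool, a minimal SAE sketch (sizes are placeholders; the Archetypal SAE used here additionally constrains the decoder atoms, which this sketch omits):

```python
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    """Plain SAE: overcomplete dictionary + ReLU codes + L1 sparsity."""

    def __init__(self, d_model: int = 768, n_concepts: int = 32_000):
        super().__init__()
        self.enc = nn.Linear(d_model, n_concepts)
        self.dec = nn.Linear(n_concepts, d_model, bias=False)

    def forward(self, x):
        z = torch.relu(self.enc(x))   # sparse concept activations
        return self.dec(z), z

def sae_loss(x, x_hat, z, l1: float = 1e-3):
    # reconstruction error + sparsity penalty on the codes
    return ((x - x_hat) ** 2).mean() + l1 * z.abs().mean()
```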
🕳️🐇 Into the Rabbit Hull - Part I (Part II tomorrow)
An interpretability deep dive into DINOv2, one of vision's most important foundation models.
Today is Part I. Buckle up, we're exploring some of its most charming features. :)
Really neat, congrats!
Superposition has reshaped interpretability research. In our @unireps.bsky.social paper led by @andre-longon.bsky.social we show it also matters for measuring alignment! Two systems can represent the same features yet appear misaligned if those features are mixed differently across neurons.
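A toy illustration of that point (my construction, not the paper's experiment): two systems carrying identical features in rotated neuron bases look misaligned neuron-by-neuron, yet match perfectly under a linear map.

```python
import numpy as np

rng = np.random.default_rng(0)
F = rng.normal(size=(1000, 8))                 # shared latent features
M1 = rng.normal(size=(8, 8))                   # neuron mixing, system 1
Q, _ = np.linalg.qr(rng.normal(size=(8, 8)))   # random rotation
X1, X2 = F @ M1, F @ (M1 @ Q)                  # same features, mixed differently

# neuron-by-neuron correlation looks weak...
corrs = [abs(np.corrcoef(X1[:, i], X2[:, i])[0, 1]) for i in range(8)]
# ...but one linear map recovers the other system exactly (same subspace)
W, *_ = np.linalg.lstsq(X1, X2, rcond=None)
resid = np.sum((X1 @ W - X2) ** 2) / np.sum((X2 - X2.mean(0)) ** 2)
print(f"mean |neuron corr| = {np.mean(corrs):.2f}, linear-fit residual = {resid:.2e}")
```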
In XAI it's often thought that explanations help a (boundedly rational) user "unlock" the information in features for some decision. But no one states this explicitly; they say vaguer things like "supporting trust". We lay out some implicit assumptions that become clearer when you take a formal view: arxiv.org/abs/2506.22740
Beautiful work!
🚨 Updated: "How far can we go with ImageNet for Text-to-Image generation?"
TL;DR: train a text2image model from scratch on ImageNet only and beat SDXL.
Paper, code, data available! Reproducible science FTW!
🧵👇
📄 arxiv.org/abs/2502.21318
💻 github.com/lucasdegeorg...
💽 huggingface.co/arijitghosh/...