
Posts by Jesse Geerts

Juan Gallego on manifolds and populations of neurons: A wealth of evidence supports the view that neural manifolds are real and useful, even if they may not solve the age-old mind-body problem.

On the latest episode, @braininspired.bsky.social talks with @juangallego.bsky.social about the wealth of evidence that supports the view that neural manifolds are real and useful, even if they may not completely solve the age-old mind-body problem.

#neuroskyence

www.thetransmitter.org/brain-inspir...

4 weeks ago
How is compulsivity related to uncertainty? (Inside Neuroscience episode)

How do we know that things we do will have the outcomes we expect?

We chat to @tobywise.bsky.social about his research into the relationship between compulsivity and uncertainty. Go give it a listen!

#BrainAwarenessWeek

open.spotify.com/episode/2hyj...

1 month ago

This looks great! Congrats team!

1 month ago

Thanks Ida, and thanks to you for the great task design of course! We're still working on more neural results, so can hopefully share more soon!

1 month ago

Wow, awesome to see this work!

1 month ago

Wonderful work by @weijiazh.bsky.social, who beautifully navigated a busy poster session at #Cosyne2026 yesterday!

1 month ago

My travel to Cosyne was barred by the Trump admin, so I'm here on my personal dime. I care about the COSYNE community, I committed to co-chairing, and I always learn here.

But the worst part about this travel ban is that my lab colleagues (students and fellows) couldn't come. /1

1 month ago
Relational reasoning and inductive bias in transformers and large language models: Transformer-based models have demonstrated remarkable reasoning abilities, but the mechanisms underlying relational reasoning remain poorly understood. We investigate how transformers perform...

Finally, I’ll be giving a talk on “Neuro-inspired evals for modern AI systems”, in Monday’s workshop on biologically inspired AI, at 11:35. I’ll talk about Weijia’s project and our recent preprint on relational reasoning in transformers.

arxiv.org/abs/2506.04289

1 month ago

In the same session, also make sure to visit poster [3-135] by @weijiazh.bsky.social, who's presenting joint work with @neurokim.bsky.social and Josh Jacobs showing that "Human Single Neurons and Predictive Models Remap Differently in Reward and Transition Relearning".

1 month ago

Excited to be at #COSYNE2026! I'm presenting our recent preprint with Francesca Greenstreet, @juangallego.bsky.social and @clopathlab.bsky.social in today's poster session [3-026]

www.biorxiv.org/content/10.6...

1 month ago

I have just arrived at @cosynemeeting.bsky.social! During my flight, as I thought about opening remarks (and had a glass of wine), I got to reflect on why I love neuroscience. Maybe other folks are doing the same and would like to read or add, so I thought I'd do something unusual and share!

1 month ago
UCL NeuroAI Talk Series: A series of NeuroAI-themed talks organised by the UCL NeuroAI community. Talks will continue on a monthly basis.

We are excited to have Dr. Tim Kietzmann from Osnabrück University for our next seminar! This will be a hybrid (in-person plus online) seminar!

🗓️Wed 11 March 2026
⏰2-3pm GMT

Talk title: NeuroAI - the synergy between machine learning and neuroscience

Registration: www.eventbrite.co.uk/e/ucl-neuroa...

1 month ago

🚨🚨New Preprint Alert!🚨🚨

www.biorxiv.org/content/10.6...

Animal learning is painfully slow (at least initially). Yet well-trained animals can learn very fast, sometimes displaying few-shot inference. How does this transition occur?

2 months ago

Thrilled to finally share this work! 🧠🔊

Using a new reinforcement-free task, we show that mice (like humans) extract abstract structure from sound without supervision, and that dCA1 is causally required, building factorised, orthogonal subspaces of abstract rules.

Led by Dammy Onih!
www.biorxiv.org/content/10.6...

2 months ago
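As an aside on the analysis concept: one standard way to quantify how orthogonal two rule subspaces are is to compute the principal angles between the subspaces spanned by each condition's top principal components. A generic numpy sketch on fake data (not necessarily the paper's exact analysis):

```python
import numpy as np

# Generic sketch: quantify orthogonality of two rule subspaces via the
# principal angles between the spans of each condition's top PCs.
# Fake data throughout; not necessarily the paper's exact analysis.

rng = np.random.default_rng(0)
X_rule_a = rng.standard_normal((500, 40))  # time x neurons under rule A
X_rule_b = rng.standard_normal((500, 40))  # time x neurons under rule B

def top_pcs(X, k=3):
    """Orthonormal basis (neurons x k) for the top-k principal axes of X."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Vt[:k].T

Qa, Qb = top_pcs(X_rule_a), top_pcs(X_rule_b)
# Singular values of Qa^T Qb are the cosines of the principal angles.
cosines = np.linalg.svd(Qa.T @ Qb, compute_uv=False)
angles = np.degrees(np.arccos(np.clip(cosines, -1.0, 1.0)))
print(angles)  # angles near 90 degrees mean near-orthogonal subspaces
```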

Code for our multi-region motor learning model is now available on GitHub!

github.com/jessegeerts/...

2 months ago
GitHub - jessegeerts/action-embedding: Action embeddings for RL - model of motor adaptation and generalization

Code to run this model and reproduce figures is now public: github.com/jessegeerts/...

2 months ago

Updated work from @jessegeerts.bsky.social extending his results on transitive inference in transformers (including LLMs!)

updated paper: arxiv.org/abs/2506.04289
bleeprint (what are we calling these?) below ⬇️

2 months ago
Relational reasoning and inductive bias in transformers and large language models: Transformer-based models have demonstrated remarkable reasoning abilities, but the mechanisms underlying relational reasoning remain poorly understood. We investigate how transformers perform...

Updated paper: arxiv.org/abs/2506.04289. Joint work with @ndrewliu.bsky.social, @scychan.bsky.social, @clopathlab.bsky.social, and @neurokim.bsky.social

2 months ago

This parallels our small transformer findings: when models must reason from context, representational geometry determines success or failure at transitive inference.

2 months ago

This effect was strongest when models couldn't fall back on stored knowledge (incongruent/permuted items). For congruent items where weight-stored knowledge helps, the geometric scaffold barely mattered.

2 months ago

Across Gemini, Gemma, and GPT models, the linear scaffold consistently led to higher accuracy on transitive inference prompts.

2 months ago

We then prompted LLMs with different geometric scaffolds: "imagine these items on a number line" (linear) vs "on a circle" (circular). Circular orderings violate transitivity because relationships can wrap around (A>B>C>A).

2 months ago
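To make the setup concrete, here is a toy sketch of the two scaffold conditions as prompt prefixes. The wording is paraphrased and the helper strings are mine, not the paper's exact prompts:

```python
# Toy sketch of the two scaffold conditions as prompt prefixes.
# Wording paraphrased; the paper's exact prompts may differ.

LINEAR = "Imagine the following items arranged on a number line."
CIRCULAR = "Imagine the following items arranged on a circle."

facts = "A > B. B > C. C > D."
query = "Is A greater than D? Answer yes or no."

for scaffold in (LINEAR, CIRCULAR):
    print(f"{scaffold}\n{facts}\n{query}\n")

# On a number line, A > B > C > D fixes a unique answer (yes).
# On a circle the order can wrap around (... > D > A), so transitivity
# is no longer guaranteed.
```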

We used the ReCogLab dataset (github.com/google-deepm...) to test transitive inference with items that are congruent with world knowledge (whale > dolphin > goldfish), incongruent (goldfish > dolphin > whale), or random. This lets us tease apart reasoning from context vs relying on stored knowledge.

2 months ago
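For illustration, hand-rolled premise sets in the spirit of the three conditions (the actual ReCogLab generation code may differ):

```python
import random

# Hand-rolled illustration of the three conditions; the actual ReCogLab
# generation code may differ.

SIZE_ORDER = ["whale", "dolphin", "goldfish"]  # matches world knowledge

def premises(order):
    return [f"The {a} is bigger than the {b}." for a, b in zip(order, order[1:])]

congruent = premises(SIZE_ORDER)          # whale > dolphin > goldfish
incongruent = premises(SIZE_ORDER[::-1])  # goldfish > dolphin > whale

shuffled = SIZE_ORDER[:]
random.shuffle(shuffled)
random_items = premises(shuffled)         # order decoupled from world knowledge

for name, ps in [("congruent", congruent),
                 ("incongruent", incongruent),
                 ("random", random_items)]:
    print(f"{name}: {' '.join(ps)}")
```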

Quick recap: how a transformer is pre-trained determines whether it can do transitive inference (A>B, B>C → A>C).

In-weights learning → yes.
In-context learning (ICL) pre-trained on copying → no.
ICL pre-trained on linear regression → yes.

But these are small-scale toy models. What about in LLMs?

2 months ago
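For concreteness, a minimal Python sketch of this kind of transitive-inference probe: adjacent-pair premises in context, with held-out queries spanning the gaps. Prompt wording and helper names are mine, not the paper's:

```python
# Minimal sketch of a transitive-inference probe: adjacent premises in
# context, held-out queries spanning two positions. Hypothetical wording,
# not the paper's actual prompts.

ITEMS = ["A", "B", "C", "D", "E"]  # latent linear order A > B > C > D > E

def adjacent_premises(items):
    """In-context premises: adjacent pairs only (A>B, B>C, ...)."""
    return [f"{a} > {b}" for a, b in zip(items, items[1:])]

def ti_queries(items, gap=2):
    """Held-out queries spanning `gap` positions, e.g. A ? C for gap=2."""
    return [(items[i], items[i + gap]) for i in range(len(items) - gap)]

premises = ". ".join(adjacent_premises(ITEMS))
for a, b in ti_queries(ITEMS):
    # the transitive answer is always the earlier item, here `a`
    print(f"{premises}. Which is greater, {a} or {b}?")
```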

Update on this work! We've extended our transitive inference study to large language models 🧵

2 months ago

I’m excited to share my first PhD preprint!🎉
We studied how interactions between medial entorhinal cortex (MEC) and hippocampus shape theta sequences during navigation, and asked whether some “planning-like” patterns in hippocampus could arise from upstream MEC dynamics. (1/8)

3 months ago

With some trepidation, I'm putting this out into the world:
gershmanlab.com/textbook.html
It's a textbook called Computational Foundations of Cognitive Neuroscience, which I wrote for my class.

My hope is that this will be a living document, continuously improved as I get feedback.

3 months ago

Just to add one thing to this discussion: in our paper, the "supervised" network predicts the action, which is internally generated by the actor; that's why we assume the agent has access to it. We toyed with calling this self-supervised, but didn't want to cause confusion with other SS work.

3 months ago
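A toy numpy sketch of that idea, with every name, dimension, and learning rule hypothetical: the predictor's target is the actor's own output, so the teaching signal is internal rather than an external teacher.

```python
import numpy as np

# Toy sketch of the idea above: the "supervised" pathway's target is the
# action generated by the actor itself, so the teaching signal is internal.
# All names, dimensions, and learning rules here are hypothetical.

rng = np.random.default_rng(1)
n_in, n_act = 10, 4
W_actor = 0.1 * rng.standard_normal((n_act, n_in))  # fixed actor (e.g. RL-trained)
W_pred = np.zeros((n_act, n_in))                    # supervised predictor
lr = 0.05

for _ in range(2000):
    x = rng.standard_normal(n_in)    # context / state input
    action = W_actor @ x             # internally generated action (the target)
    pred = W_pred @ x                # predictor's guess at that action
    err = action - pred              # supervised error: no external teacher needed
    W_pred += lr * np.outer(err, x)  # delta-rule update

print(np.abs(W_pred - W_actor).max())  # predictor converges toward the actor
```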

Thanks for sharing that paper! I was unaware of it, but it's a cool result.

3 months ago

New paper led by wonderful postdocs Francesca Greenstreet and @jessegeerts.bsky.social and @clopathlab.bsky.social, trying to understand why (in the "what for" sense) there are multiple motor learning systems (supervised and RL-based) in the brain.

Check out Jesse's 🧵

www.biorxiv.org/content/10.6...

3 months ago