On the latest episode, @braininspired.bsky.social talks with @juangallego.bsky.social about the wealth of evidence supporting the view that neural manifolds are real and useful, even if they may not completely solve the age-old mind-body problem.
#neuroskyence
www.thetransmitter.org/brain-inspir...
Posts by Jesse Geerts
How do we know that things we do will have the outcomes we expect?
We chat to @tobywise.bsky.social about his research into the relationship between compulsivity and uncertainty. Go give it a listen!
#BrainAwarenessWeek
open.spotify.com/episode/2hyj...
This looks great! Congrats team!
Thanks Ida, and thanks to you for the great task design of course! We're still working on more neural results, so can hopefully share more soon!
Wow, awesome to see this work!
Wonderful work by @weijiazh.bsky.social at #Cosyne2026, who beautifully navigated a busy poster session yesterday!
My travel to Cosyne was barred by the Trump admin, so I'm here on my personal dime. I care about the COSYNE community, and I committed to co-chairing. And I always learn here.
But the worst part about this travel ban is that my lab colleagues (students and fellows) couldn't come. /1
Finally, I’ll be giving a talk on “Neuro-inspired evals for modern AI systems” in Monday’s workshop on biologically inspired AI, at 11:35. I’ll talk about Weijia’s project and our recent preprint on relational reasoning in transformers.
arxiv.org/abs/2506.04289
In the same session, also make sure to visit poster [3-135] by @weijiazh.bsky.social, who’s presenting joint work with @neurokim.bsky.social and Josh Jacobs: “Human Single Neurons and Predictive Models Remap Differently in Reward and Transition Relearning”
Excited to be at #COSYNE2026! I’m presenting our recent preprint with Francesca Greenstreet, @juangallego.bsky.social and @clopathlab.bsky.social in today’s poster session [3-026]
www.biorxiv.org/content/10.6...
I have just arrived at @cosynemeeting.bsky.social ! During my flight, as I thought about opening remarks (and had a glass of wine), I got to reflect on why I love neuroscience. Maybe other folks are doing the same and would like to read or add, so I thought I'd do something unusual and share!
We are excited to have Dr. Tim Kietzmann from Osnabrück University for our next seminar! This will be an in-person plus online seminar!
🗓️Wed 11 March 2026
⏰2-3pm GMT
Talk title: NeuroAI - the synergy between machine learning and neuroscience
Registration: www.eventbrite.co.uk/e/ucl-neuroa...
🚨🚨New Preprint Alert!🚨🚨
www.biorxiv.org/content/10.6...
Animal learning is painfully slow (at least initially). Yet well-trained animals can learn very fast, sometimes displaying few-shot inference. How does this transition occur?
Thrilled to finally share this work! 🧠🔊
Using a new reinforcement-free task, we show that mice (like humans) extract abstract structure from sound (unsupervised) & that dCA1 is causally required, building factorised, orthogonal subspaces of abstract rules.
Led by Dammy Onih!
www.biorxiv.org/content/10.6...
Code for our multi-region motor learning model is now available on GitHub!
github.com/jessegeerts/...
Updated work from @jessegeerts.bsky.social extending his results on transitive inference in transformers (including LLMs!)
updated paper: arxiv.org/abs/2506.04289
bleeprint (what are we calling these?) below ⬇️
Updated paper: arxiv.org/abs/2506.04289. Joint work @ndrewliu.bsky.social, @scychan.bsky.social, @clopathlab.bsky.social, and @neurokim.bsky.social
This parallels our small transformer findings: when models must reason from context, representational geometry determines success or failure at transitive inference.
This effect was strongest when models couldn't fall back on stored knowledge (incongruent/permuted items). For congruent items where weight-stored knowledge helps, the geometric scaffold barely mattered.
Across Gemini, Gemma, and GPT models, the linear scaffold consistently led to higher accuracy on transitive inference prompts.
We then prompted LLMs with different geometric scaffolds: "imagine these items on a number line" (linear) vs "on a circle" (circular). Circular orderings violate transitivity because relationships can wrap around (A>B>C>A).
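For illustration, here's a minimal sketch of how such scaffold prompts might be assembled. The wording, item names, and the `scaffold_prompt` helper are hypothetical, not the exact prompts from the paper:

```python
# Hypothetical sketch of the two scaffold conditions; the prompt wording
# here is illustrative, not the actual prompts used in the study.
def scaffold_prompt(scaffold: str) -> str:
    """Build a transitive-inference query under a geometric scaffold."""
    assert scaffold in ("linear", "circular")
    geometry = (
        "Imagine these items arranged on a number line."
        if scaffold == "linear"
        else "Imagine these items arranged on a circle."
    )
    premises = "A > B. B > C. C > D."
    return f"{geometry} {premises} Is A > D? Answer yes or no."

print(scaffold_prompt("linear"))
print(scaffold_prompt("circular"))
```

The point of the contrast: only the one-line geometric framing differs between conditions, so any accuracy gap is attributable to the scaffold rather than the premises or the query.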
We used the ReCogLab dataset (github.com/google-deepm...) to test transitive inference with items that are congruent with world knowledge (whale > dolphin > goldfish), incongruent (goldfish > dolphin > whale), or random. This lets us tease apart reasoning from context vs relying on stored knowledge.
Quick recap: how a transformer is pre-trained determines whether it can do transitive inference (A>B, B>C → A>C).
In-weights learning → yes.
ICL trained on copying → no.
ICL pre-trained on linear regression → yes.
But these are small-scale toy models. What about in LLMs?
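For readers who want the target behaviour spelled out: the inference itself is just transitive closure over the premise pairs. A minimal sketch (the `entails` helper is my own illustration, not code from the paper):

```python
# Transitive closure check: given premise pairs (a, b) meaning a > b,
# decide whether x > y follows by chaining premises (A>B, B>C => A>C).
def entails(premises, x, y):
    greater = {}
    for a, b in premises:
        greater.setdefault(a, set()).add(b)
    # Depth-first search: x > y holds if y is reachable from x's successors.
    stack, seen = list(greater.get(x, ())), set()
    while stack:
        node = stack.pop()
        if node == y:
            return True
        if node in seen:
            continue
        seen.add(node)
        stack.extend(greater.get(node, ()))
    return False

premises = [("A", "B"), ("B", "C"), ("C", "D")]
print(entails(premises, "A", "D"))  # → True
print(entails(premises, "D", "A"))  # → False
```

A model that represents the items on an internal "number line" can answer such queries by comparing positions; this graph search is the symbolic equivalent of that comparison.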
Update on this work! We've extended our transitive inference study to large language models 🧵
I’m excited to share my first PhD preprint!🎉
We studied how interactions between medial entorhinal cortex (MEC) and hippocampus shape theta sequences during navigation, and asked whether some “planning-like” patterns in hippocampus could arise from upstream MEC dynamics. (1/8)
With some trepidation, I'm putting this out into the world:
gershmanlab.com/textbook.html
It's a textbook called Computational Foundations of Cognitive Neuroscience, which I wrote for my class.
My hope is that this will be a living document, continuously improved as I get feedback.
Just to add one thing to this discussion: in our paper, the "supervised" network predicts the action, which is internally generated by the actor, which is why we assume the agent has access to it. We toyed with calling this self-supervised but didn't want to cause confusion with other SS work
Thanks for sharing that paper! I was unaware of this but it's a cool result
New paper led by wonderful postdocs Francesca Greenstreet and @jessegeerts.bsky.social, with @clopathlab.bsky.social, trying to understand why (in the "what for" sense) there are multiple motor learning systems (supervised and RL-based) in the brain.
Check out Jesse's 🧵
www.biorxiv.org/content/10.6...