Our latest publication grapples with how the brain could implement gradient descent by sending learning targets top-down, gating plasticity with dendritic inhibition, and updating synaptic weights with biologically observed learning rules like BTSP.
www.cell.com/cell-reports...
Posts by Friedemann Zenke
First preprint from the lab! Using intracellular recordings & analysis of 2-photon imaging data, we show that spiking & neuromodulatory input during experience drive a reorganization of visuomotor inputs in V1 layer 2/3 neurons, consistent with enhanced visuomotor cancellation - bioRxiv link below.
Thanks! That's the $1M question. Tbh I thought it would be easier to find robust examples. If the noise is uncorrelated, reconstruction will simply remove it (think denoising autoencoder). I'd put my money on small encoders close to capacity, or on adding explicit inductive biases to the latent dynamics.
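The "reconstruction removes uncorrelated noise" intuition above can be sketched in a few lines of numpy (a toy illustration, not anyone's actual model): a linear autoencoder, here just a truncated SVD projection, fit to noisy data reconstructs points that sit closer to the clean signal than the noisy inputs do.

```python
import numpy as np

rng = np.random.default_rng(0)

# Low-dimensional clean signal embedded in a higher-dimensional space.
n, d, k = 2000, 50, 3
latents = rng.normal(size=(n, k))
mixing = rng.normal(size=(k, d))
clean = latents @ mixing                       # rank-k signal
noisy = clean + 0.5 * rng.normal(size=(n, d))  # uncorrelated noise

# Linear "autoencoder": project onto the top-k principal directions.
centered = noisy - noisy.mean(0)
_, _, Vt = np.linalg.svd(centered, full_matrices=False)
V = Vt[:k].T
recon = noisy.mean(0) + centered @ V @ V.T

err_noisy = np.mean((noisy - clean) ** 2)
err_recon = np.mean((recon - clean) ** 2)
print(err_recon < err_noisy)  # True: reconstruction strips most of the noise
```

The reconstruction keeps only the noise that leaks into the k retained directions, so most of the isotropic noise is discarded.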
3/ Dreamer-CDP uses a JEPA-style predictor over the continuous embeddings instead of pixels. It matches vanilla Dreamer on Crafter and outperforms prior reconstruction-free methods. Thus, reconstruction-free world models are maturing, with potential gains in efficiency & generalization.
2/ Standard MBRL (e.g. Dreamer) reconstructs images to model the world, potentially wasting capacity on visual details irrelevant to the task. Prior reconstruction-free approaches exist but underperform on benchmarks like Crafter.
1/3 New paper accepted at ICLR World Model workshop: Dreamer-CDP: Improving Reconstruction-free World Models for RL. We introduce a Dreamer variant that learns world models without reconstructing pixels. arxiv.org/abs/2603.07083
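The objective contrast in 2/ and 3/ can be caricatured in numpy (hypothetical shapes and weights, not the paper's code): a reconstruction-based world model scores its prediction in pixel space, while a JEPA-style predictor scores it against the next frame's embedding and never touches pixels.

```python
import numpy as np

rng = np.random.default_rng(1)

def encode(x, W):  # toy linear encoder
    return x @ W

x_t, x_next = rng.normal(size=(64,)), rng.normal(size=(64,))
W_enc = rng.normal(size=(64, 8)) * 0.1   # encoder weights
W_dec = rng.normal(size=(8, 64)) * 0.1   # decoder (reconstruction path only)
W_pred = np.eye(8)                       # latent transition / predictor

z_t = encode(x_t, W_enc)

# (a) Reconstruction objective: decode back to pixel space, so capacity
#     is spent on every visual detail of x_t.
recon_loss = np.mean((z_t @ W_dec - x_t) ** 2)

# (b) JEPA-style objective: predict the *embedding* of the next frame;
#     the target embedding is held constant (stop-gradient) in practice.
z_target = encode(x_next, W_enc)
latent_loss = np.mean((z_t @ W_pred - z_target) ** 2)
```

The point of (b) is that task-irrelevant pixel detail never enters the loss, which is where the hoped-for efficiency gains come from.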
Come see our Cosyne 2026 posters! Friday: 2-069 (Atena & Manu), 2-096 (Julian), Saturday: 3-091 (Julia)
More info zenkelab.org/2026/03/cosy...
Congrats to Fabian Mikulash, a postdoc in the @fzenke.bsky.social lab, for being awarded a Marie Skłodowska-Curie Actions fellowship! His project aims to develop a new theory—tested with real brain data—explaining how neurons decide when to trust what we see versus what we expect 🧠
Our paper is out in @natneuro.nature.com!
www.nature.com/articles/s41...
We develop a geometric theory of how neural populations support generalization across many tasks.
@zuckermanbrain.bsky.social
@flatironinstitute.org
@kempnerinstitute.bsky.social
1/14
Our work with @georgkeller.bsky.social on testing predictive processing (PP) models in cortex is out on bioRxiv now! www.biorxiv.org/content/10.6... A short thread below on our findings and thoughts on where we should go from PP.
The hippocampal map has its own attentional control signal!
Our new study reveals that theta #sweeps can be instantly biased towards behaviourally relevant locations. See 📹 in post 4/6 and preprint here 👉
www.biorxiv.org/content/10.6...
🧵(1/6)
With some trepidation, I'm putting this out into the world:
gershmanlab.com/textbook.html
It's a textbook called Computational Foundations of Cognitive Neuroscience, which I wrote for my class.
My hope is that this will be a living document, continuously improved as I get feedback.
Joint junior faculty position in Computational Neuroscience, between Ctr for Computational Neuroscience at @flatironinstitute.org and the CUNY Graduate Center @thegraduatecenter.bsky.social . Application deadline: 16 Jan 2026!
www.simonsfoundation.org/flatiron/car...
cuny.jobs/new-york-ny/...
Thanks, Rich!
Thanks so much.
Thank you!
I’m very grateful to the FMI, the tenure committee, inspiring colleagues, and all the hidden supporters who made this possible. Huge thanks to past and present group members for their curiosity and creativity. Excited for the next chapter.
I’m happy to share some recent work out in PLOS Computational Biology with @guille-martin.bsky.social and Christian Machens at @champalimaudr.bsky.social . We use neural coding and population geometry to study different perspectives on hippocampal remapping.
journals.plos.org/ploscompbiol...
Spread the word: I'm looking to hire a postdoc to explore the concept of attention (as studied in psych/neuro, not the transformer mechanism) in large Vision-Language Models. More details here: lindsay-lab.github.io/2025/12/08/p...
#MLSky #neurojobs #compneuro
Finally got the job ad—looking for 2 PhD students to start spring next year:
www.gao-unit.com/join-us/
If comp neuro, ML, and AI4Neuro is your thing, or you just nerd out over brain recordings, apply!
I'm at NeurIPS. DM me here / on the conference app or email if you want to meet 🏖️🌮
Come work with us!!!
Joint modelling of brain and behaviour dynamics with artificial intelligence
www.nature.com/articles/s41...
Thanks! There is a notable difference, though: in Nejad et al. (2025), L5 is trained with a reconstruction loss, i.e., an autoencoder (see Eqs. 4–6 from the methods below). L2/3 then predicts the autoencoder's latent state via a supervised next-step loss. That shouldn't be conflated with a JEPA.
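The distinction above can be made concrete with a schematic numpy sketch (made-up shapes, not Nejad et al.'s actual model): in (a) the prediction target is the latent of a separately trained, frozen autoencoder, whereas in (b), the JEPA setup, online and target encoders are learned jointly, with the target branch an EMA copy behind a stop-gradient.

```python
import numpy as np

rng = np.random.default_rng(2)
x_t, x_next = rng.normal(size=(32,)), rng.normal(size=(32,))

# (a) Supervised next-step regression onto a *frozen* autoencoder latent.
W_ae = rng.normal(size=(32, 4)) * 0.1    # autoencoder encoder, trained with
target = x_next @ W_ae                   # a reconstruction loss, then frozen
W_sup = rng.normal(size=(32, 4)) * 0.1   # predictor trained supervised
supervised_loss = np.mean((x_t @ W_sup - target) ** 2)

# (b) JEPA: both branches share a *jointly learned* embedding; the target
#     encoder is an EMA copy of the online one, not an autoencoder.
W_online = rng.normal(size=(32, 4)) * 0.1
W_target = W_online.copy()               # target starts as a copy
jepa_loss = np.mean((x_t @ W_online - x_next @ W_target) ** 2)

# After each gradient step on W_online, the target branch tracks it slowly:
tau = 0.99
W_target = tau * W_target + (1 - tau) * W_online
```

In (a) the target space is pinned down by reconstruction before prediction starts; in (b) the embedding itself is shaped by the prediction objective, which is the property that makes a model a JEPA.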
6/ Finally, we build a hierarchical JEPA version of our model and outline how its architecture could map onto cortical microcircuits, toward a predictive-processing framework with mechanistic links to neuroanatomy. Read the full story here 👇
🔗 doi.org/10.1101/2025...
5/ Importantly, RPL captures representational motifs across multiple species and cortical areas: on the one hand, it develops successor-like structures resembling those reported in human V1; on the other, its abstract sequence representations are comparable to those in macaque PFC.
4/ From raw video streams and without supervision, RPL learns: invariant object identity, equivariant motion variables (position, velocity, orientation, etc.), and a world model that allows simulating plausible motion trajectories entirely in latent space.
3/ Recent studies indicate that, aside from plausibility, representation-space predictive models like JEPAs also learn more abstract representations than input-space generative models, which tend to focus on low-level details (cf @yann-lecun.bsky.social)
2/ RPL operates entirely in latent space, avoiding the anatomical issues of predictive coding models that compute prediction errors in input space. Instead, the network predicts future internal representations through a specific recurrent circuit structure.
1/6 New preprint 🚀 How does the cortex learn to represent things and how they move without reconstructing sensory stimuli? We developed a circuit-centric recurrent predictive learning (RPL) model based on JEPAs.
🔗 doi.org/10.1101/2025...
Led by @atenagm.bsky.social @mshalvagal.bsky.social
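The core objective described in 2/ could be caricatured like this (toy numpy, nothing from the actual paper): a recurrent map predicts the network's own next internal representation, so the prediction error lives entirely in latent space and no decoder back to the input is ever needed.

```python
import numpy as np

rng = np.random.default_rng(3)
T, d_in, d_z = 20, 16, 6

frames = rng.normal(size=(T, d_in))         # stand-in for a video stream
W_enc = rng.normal(size=(d_in, d_z)) * 0.2  # feedforward encoder
W_rec = rng.normal(size=(d_z, d_z)) * 0.2   # recurrent predictor

z = frames @ W_enc                          # internal representations
pred = z[:-1] @ W_rec                       # recurrent one-step prediction

# Latent-space predictive loss: no pixel-space error is ever computed.
# In practice z[1:] would sit behind a stop-gradient (JEPA-style).
loss = np.mean((pred - z[1:]) ** 2)
```

Because the error is computed between internal states, nothing in the circuit has to route an input-space prediction error back to the sensory periphery, which is the anatomical point made in 2/.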