
Posts by Friedemann Zenke

Our latest publication grapples with how the brain could implement gradient descent by sending learning targets top-down, gating plasticity with dendritic inhibition, and updating synaptic weights with biologically observed learning rules like BTSP (behavioral timescale synaptic plasticity).

www.cell.com/cell-reports...

3 weeks ago 92 35 4 5

First preprint from the lab! Using intracellular recordings & analysis of 2-photon imaging data, we show that spiking & neuromodulatory input during experience drive a reorganization of visuomotor inputs in V1 layer 2/3 neurons, consistent with enhanced visuomotor cancellation - bioRxiv link below.

1 month ago 75 23 1 3

Thanks! That's the $1M question. Tbh I thought it would be easier to find robust examples. If the noise is uncorrelated, reconstruction will simply remove it (think denoising autoencoder). I'd put my money on small encoders close to capacity, or on adding explicit inductive biases on the latent dynamics.
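To make the denoising point concrete, here is a minimal numpy sketch (toy data: the 20-dimensional observations, 2-D latent, and noise level are all made-up parameters, not from any paper discussed here). Projecting noisy observations through a low-dimensional linear bottleneck keeps the signal subspace and discards most of the uncorrelated noise, which is the intuition behind a denoising autoencoder.

```python
import numpy as np

rng = np.random.default_rng(0)

# Signal lives on a 2-D subspace of a 20-D observation space.
latent = rng.normal(size=(1000, 2))
mixing = rng.normal(size=(2, 20))
clean = latent @ mixing
noisy = clean + 0.5 * rng.normal(size=clean.shape)  # uncorrelated noise

# A linear "autoencoder": project onto the top-2 principal components.
# Reconstructing through the bottleneck discards off-subspace noise.
u, s, vt = np.linalg.svd(noisy - noisy.mean(0), full_matrices=False)
basis = vt[:2]                       # encoder weights
recon = (noisy @ basis.T) @ basis    # decode back to observation space

err_before = np.mean((noisy - clean) ** 2)
err_after = np.mean((recon - clean) ** 2)
```

A learned nonlinear encoder behaves analogously: as long as the noise carries no predictable structure, minimizing reconstruction error gives the bottleneck no reason to represent it.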

1 month ago 1 0 0 0

3/ Dreamer-CDP uses a JEPA-style predictor over the continuous embeddings instead of pixels. It matches vanilla Dreamer on Crafter and outperforms prior reconstruction-free methods. Thus, reconstruction-free world models are maturing, with potential gains in efficiency & generalization.

1 month ago 4 0 0 0

2/ Standard MBRL (e.g. Dreamer) reconstructs images to model the world, potentially wasting capacity on visual details irrelevant to the task. Prior reconstruction-free approaches exist but underperform on benchmarks like Crafter.
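The contrast between the two objectives can be sketched in a few lines of numpy (purely illustrative: random, untrained weights and invented dimensions, not the actual Dreamer or Dreamer-CDP losses):

```python
import numpy as np

rng = np.random.default_rng(1)
frames = rng.normal(size=(8, 64))        # batch of flattened "images"
next_frames = rng.normal(size=(8, 64))

enc = 0.1 * rng.normal(size=(64, 16))    # toy encoder weights
dec = 0.1 * rng.normal(size=(16, 64))    # toy decoder weights
trans = 0.1 * rng.normal(size=(16, 16))  # toy latent transition model

z, z_next = frames @ enc, next_frames @ enc

# Reconstruction-based objective (Dreamer-style): the loss lives in
# pixel space, so the model is pushed to capture every visual detail.
recon_loss = np.mean((z @ dec - frames) ** 2)

# Reconstruction-free objective (JEPA-style): the predictor only has
# to match the embedding of the next frame, never its pixels.
latent_loss = np.mean((z @ trans - z_next) ** 2)
```

In practice the latent target needs a stop-gradient or EMA copy of the encoder to avoid the collapsed solution z = 0; the sketch omits that machinery.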

1 month ago 1 0 1 0

1/3 New paper accepted at the ICLR World Model workshop: Dreamer-CDP: Improving Reconstruction-free World Models for RL. We introduce a Dreamer variant that learns world models without reconstructing pixels. arxiv.org/abs/2603.07083

1 month ago 24 5 2 0
Cosyne 2026 – Zenke Lab

Come see our Cosyne 2026 posters! Friday: 2-069 (Atena & Manu), 2-096 (Julian), Saturday: 3-091 (Julia)
More info zenkelab.org/2026/03/cosy...

1 month ago 19 4 0 0

Congrats to Fabian Mikulash, a postdoc in the @fzenke.bsky.social lab, for being awarded a Marie Skłodowska-Curie Actions fellowship! His project aims to develop a new theory—tested with real brain data—explaining how neurons decide when to trust what we see versus what we expect 🧠

2 months ago 8 1 0 0

Our paper is out in @natneuro.nature.com!

www.nature.com/articles/s41...

We develop a geometric theory of how neural populations support generalization across many tasks.

@zuckermanbrain.bsky.social
@flatironinstitute.org
@kempnerinstitute.bsky.social

1/14

2 months ago 278 101 7 1
A functional influence based circuit motif that constrains the set of plausible algorithms of cortical function

There are several plausible algorithms for cortical function that are specific enough to make testable predictions of the interactions between functionally identified cell types. Many of these algorithms are based on some variant of predictive processing. Here we set out to experimentally distinguish between two such predictive processing variants. A central point of variability between them lies in the proposed vertical communication between layer 2/3 and layer 5, which stems from the diverging assumptions about the computational role of layer 5. One assumes a hierarchically organized architecture and proposes that, within a given node of the network, layer 5 conveys unexplained bottom-up input to prediction error neurons of layer 2/3. The other proposes a non-hierarchical architecture in which internal representation neurons of layer 5 provide predictions for the local prediction error neurons of layer 2/3. We show that the functional influence of layer 2/3 cell types on layer 5 is incompatible with the hierarchical variant, while the functional influence of layer 5 cell types on prediction error neurons of layer 2/3 is incompatible with the non-hierarchical variant. Given these data, we can constrain the space of plausible algorithms of cortical function. We propose a model for cortical function based on a combination of a joint embedding predictive architecture (JEPA) and predictive processing that makes experimentally testable predictions.

Our work with @georgkeller.bsky.social on testing predictive processing (PP) models in cortex is out on bioRxiv now! www.biorxiv.org/content/10.6... A short thread on our findings, and thoughts on where to go beyond PP, below.

2 months ago 49 16 2 1
Attention-like regulation of theta sweeps in the brain's spatial navigation circuit

Spatial attention supports navigation by prioritizing information from selected locations. A candidate neural mechanism is provided by theta-paced sweeps in grid- and place-cell population activity, which sample nearby space in a left-right-alternating pattern coordinated by parasubicular direction signals. During exploration, this alternation promotes uniform spatial coverage, but whether sweeps can be flexibly tuned to locations of particular interest remains unclear. Using large-scale Neuropixels recordings in freely-behaving rats, we show that sweeps and direction signals are rapidly and dynamically modulated: they track moving targets during pursuit, precede orienting responses during immobility, and reverse during backward locomotion — without prior spatial learning. Similar modulation occurs during REM sleep. Canonical head-direction signals remain head-aligned. These findings identify sweeps as a flexible, attention-like mechanism for selectively sampling allocentric cognitive maps.

The hippocampal map has its own attentional control signal!
Our new study reveals that theta #sweeps can be instantly biased towards behaviourally relevant locations. See 📹 in post 4/6 and preprint here 👉
www.biorxiv.org/content/10.6...
🧵(1/6)

2 months ago 184 62 4 10

With some trepidation, I'm putting this out into the world:
gershmanlab.com/textbook.html
It's a textbook called Computational Foundations of Cognitive Neuroscience, which I wrote for my class.

My hope is that this will be a living document, continuously improved as I get feedback.

3 months ago 591 238 16 10
Careers on Simons Foundation

Joint junior faculty position in Computational Neuroscience between the Center for Computational Neuroscience at @flatironinstitute.org and the CUNY Graduate Center @thegraduatecenter.bsky.social. Application deadline: 16 Jan 2026!

www.simonsfoundation.org/flatiron/car...
cuny.jobs/new-york-ny/...

3 months ago 38 22 1 1

Thanks, Rich!

4 months ago 1 0 0 0

Thanks so much.

4 months ago 0 0 0 0

Thank you!

4 months ago 0 0 0 0

I’m very grateful to the FMI, the tenure committee, inspiring colleagues, and all the hidden supporters who made this possible. Huge thanks to past and present group members for their curiosity and creativity. Excited for the next chapter.

4 months ago 55 5 7 0
Three types of remapping with linear decoders: A population-geometric perspective
Author summary: Place cells of the hippocampus form unique activity patterns in different environments, a process called remapping. However, it is not clear what the relationship is between changes in ...

I’m happy to share some recent work out in PLOS Computational Biology with @guille-martin.bsky.social and Christian Machens at @champalimaudr.bsky.social . We use neural coding and population geometry to study different perspectives on hippocampal remapping.

journals.plos.org/ploscompbiol...

4 months ago 28 6 1 1
Lindsay Lab - Postdoc Position
Artificial neural networks applied to psychology, neuroscience, and climate change

Spread the word: I'm looking to hire a postdoc to explore the concept of attention (as studied in psych/neuro, not the transformer mechanism) in large Vision-Language Models. More details here: lindsay-lab.github.io/2025/12/08/p...
#MLSky #neurojobs #compneuro

4 months ago 125 91 2 0

Finally got the job ad—looking for 2 PhD students to start spring next year:

www.gao-unit.com/join-us/

If comp neuro, ML, and AI4Neuro are your thing, or you just nerd out over brain recordings, apply!

I'm at neurips. DM me here / on the conference app or email if you want to meet 🏖️🌮

4 months ago 81 51 1 5

Come work with us!!!

4 months ago 12 6 0 1
Joint modelling of brain and behaviour dynamics with artificial intelligence - Nature Reviews Neuroscience
Artificial intelligence is rapidly advancing our mechanistic understanding of the shared structure between the brain and higher-order behaviours. In this Review, Mathis and Mathis synthesize state-of-...

Joint modelling of brain and behaviour dynamics with artificial intelligence

www.nature.com/articles/s41...

4 months ago 118 29 2 2

Thanks! There is a notable difference, though: in Nejad et al. (2025), L5 is trained with a reconstruction loss, i.e., an autoencoder (see Eqs. 4–6 from the methods below). L2/3 then predicts the autoencoder's latent state via a supervised next-step loss. That shouldn't be conflated with a JEPA.
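The distinction can be spelled out in a toy numpy sketch (hypothetical weights and shapes, not the actual model from Nejad et al.): the "L5" latent is shaped only by a reconstruction loss, and "L2/3" then regresses onto that frozen latent, whereas a JEPA would train both embeddings jointly with no decoder at all.

```python
import numpy as np

rng = np.random.default_rng(2)
x_t = rng.normal(size=(32, 10))      # current inputs
x_next = rng.normal(size=(32, 10))   # next-step inputs

W_enc = 0.1 * rng.normal(size=(10, 4))   # "L5" autoencoder encoder
W_dec = 0.1 * rng.normal(size=(4, 10))   # "L5" autoencoder decoder

# Stage 1: the L5 latent is defined by a reconstruction objective.
ae_loss = np.mean((x_t @ W_enc @ W_dec - x_t) ** 2)

# Stage 2: L2/3 is trained with a supervised next-step loss against
# the *frozen* autoencoder latent (no gradient reaches W_enc here).
target = x_next @ W_enc
W_pred = 0.1 * rng.normal(size=(10, 4))
next_step_loss = np.mean((x_t @ W_pred - target) ** 2)
```

In a JEPA, by contrast, there is no decoder and no reconstruction term; the target embedding comes from a (typically EMA-updated) copy of the very encoder being trained.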

4 months ago 4 0 1 0

6/ Finally, we build a hierarchical JEPA version of our model and outline how its architecture could map onto cortical microcircuits, toward a predictive-processing framework with mechanistic links to neuroanatomy. Read the full story here 👇
🔗 doi.org/10.1101/2025...

4 months ago 12 1 1 0

5/ Importantly, RPL captures representational motifs across multiple species and cortical areas: on the one hand, successor-like structures resembling those in human V1; on the other, abstract sequence representations comparable to those in macaque PFC.

4 months ago 7 0 1 0

4/ From raw video streams and without supervision, RPL learns: invariant object identity, equivariant motion variables (position, velocity, orientation, etc.), and a world model that allows simulating plausible motion trajectories entirely in latent space.

4 months ago 11 0 1 0

3/ Recent studies indicate that, beyond biological plausibility, representation-space predictive models like JEPAs also learn more abstract representations than input-space generative models, which tend to focus on low-level details (cf. @yann-lecun.bsky.social).

4 months ago 10 1 1 0

2/ RPL operates entirely in latent space, avoiding the anatomical issues of predictive coding models that compute prediction errors in input space. Instead, the network predicts future internal representations through a specific recurrent circuit structure.
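As a toy illustration of that idea (made-up dimensions and random weights, not the RPL architecture itself), a recurrent map can be scored purely on how well it predicts the next internal representation:

```python
import numpy as np

rng = np.random.default_rng(3)
T, d_in, d_z = 20, 12, 6
stream = rng.normal(size=(T, d_in))      # a toy sensory stream

W_in = 0.3 * rng.normal(size=(d_in, d_z))   # feedforward encoder
W_rec = 0.3 * rng.normal(size=(d_z, d_z))   # recurrent predictor

z = stream @ W_in                        # internal representations

# Prediction errors are computed between latent states at successive
# time steps; nothing is ever compared against the raw input.
pred_errors = z[1:] - np.tanh(z[:-1] @ W_rec)
latent_pred_loss = float(np.mean(pred_errors ** 2))
```

Training W_rec (and W_in) to shrink this latent error is the sense in which prediction never touches input space.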

4 months ago 7 0 1 0

1/6 New preprint 🚀 How does the cortex learn to represent things and how they move without reconstructing sensory stimuli? We developed a circuit-centric recurrent predictive learning (RPL) model based on JEPAs.
🔗 doi.org/10.1101/2025...
Led by @atenagm.bsky.social @mshalvagal.bsky.social

4 months ago 142 42 3 4