
Posts by Andrew Saxe

We’ve got an exciting new thing to share! We have causal evidence (using targeted memory reactivation, TMR) that memory reactivation during sleep promotes abstract understanding of underlying structure, allowing transfer learning in a new domain with zero superficial feature overlap with the learned one.

1 week ago

New preprint! 🧠
How do RNNs learn abstract rules from sequences, independent of specific stimuli?

By Vezha Boboeva, with Alberto Pezzotta & George Dimitriadis

"From sequences to schemas: low-rank recurrent dynamics underlie abstract relational representations"
www.biorxiv.org/content/10.6...
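For intuition about the "low-rank recurrent dynamics" in the title, here is a minimal sketch of a generic rank-R RNN (illustrative only, not the paper's model; all names are hypothetical):

```python
import numpy as np

# Generic rank-R RNN sketch (not the paper's model). The recurrent matrix
# W = m @ n.T has rank R << N, so the recurrent dynamics are confined to a
# low-dimensional subspace spanned by the columns of m.
rng = np.random.default_rng(0)
N, R, T = 200, 2, 100                      # neurons, rank, timesteps

m = rng.normal(size=(N, R)) / np.sqrt(N)   # left connectivity vectors
n = rng.normal(size=(N, R))                # right connectivity vectors
W = m @ n.T                                # rank-R recurrent weights
w_in = rng.normal(size=N)                  # input weights

x = np.zeros(N)
kappas = []                                # latents kappa_r = n_r . x / N
for t in range(T):
    u = 1.0 if t < 10 else 0.0             # brief input pulse
    x = np.tanh(W @ x + w_in * u)
    kappas.append(n.T @ x / N)             # R-dim summary of the full state

print("latent trajectory shape:", np.array(kappas).shape)  # (T, R)
```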

1 week ago

Two Analytical Connectionism-related updates:

1. ⏰ 1 week left to apply! Interested in language + AI & cognition? Don’t miss it: www.analytical-connectionism.net/school/2026/

2. 📜 Lecture notes from the first two editions are finally out: proceedings.mlr.press/v320/

1 week ago
We're hiring! This is a unique opportunity to translate our understanding of neural computation - from circuit-level mechanisms to computational principles - into the human brain, through the establishment of cutting-edge human neural recording capabilities with collaborators in London and abroad.

We’re hiring a Group Leader!

Join us to lead a transformative initiative in human systems neuroscience.

Find out more and apply ⤵️

www.sainsburywellcome.org/content/curr...

2 months ago

Postdoc opening!

Come work with us on deep learning theory relevant to AI safety

Deadline: 7 Apr 2026
Details and application: www.ucl.ac.uk/work-at-ucl/...

2 weeks ago

Very excited by this year's Analytical Connectionism Summer School!

A dream lineup of speakers on the topic of language acquisition in minds and machines

Bursaries available to cover costs

Aug 17 – Aug 28, 2026, Gothenburg

Details: www.analytical-connectionism.net/school/2026/

2 weeks ago

A great new entry among the proposals available for physiologically plausible gradient descent!

I think the way they use dendrite-targeting inhibition in this model is particularly elegant.

Time to start testing these ideas, folks!!!

#neuroscience 🧪 #NeuroAI

3 weeks ago

The First 1,000 Days (1kD) Project - Collecting and Analyzing an Ultra-Dense Naturalistic Dataset of Human Baby Development www.biorxiv.org/content/10.64898/2026.03...

4 weeks ago

Looking for alternatives to quadratic functions for closed-form analysis in optimization? This post explores matrix Riccati dynamics and their applications to neural networks. francisbach.com/closed-form-...
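If you want to experiment with the idea, here is a toy forward-Euler integration of a matrix Riccati ODE, dX/dt = C + A X + X A^T - X B X (my own sketch, not code from the post):

```python
import numpy as np

# Toy sketch (not from the linked post): forward-Euler integration of a
# matrix Riccati ODE, dX/dt = C + A X + X A^T - X B X, the kind of
# dynamics that admits closed-form analysis in linear-network settings.
rng = np.random.default_rng(1)
d = 4
A = -np.eye(d)                  # stable linear part
B = np.eye(d)                   # coefficient of the quadratic term
C = rng.normal(size=(d, d))
C = C @ C.T / d                 # symmetric positive-semidefinite source

X = np.zeros((d, d))
dt = 1e-2
for _ in range(20_000):
    X = X + dt * (C + A @ X + X @ A.T - X @ B @ X)

# At a fixed point, C + A X + X A^T = X B X; check the residual.
resid = C + A @ X + X @ A.T - X @ B @ X
print("fixed-point residual norm:", np.linalg.norm(resid))
```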

1 month ago

Here's a lovely #blueprint on a new study from our lab led by @royeyono.bsky.social.

tl;dr: it implies that there may be interneurons whose role is to normalize credit assignment signals during learning.

#neuroscience 🧪
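One toy way to picture "normalizing credit assignment signals" (my illustrative sketch, not the study's model) is divisive normalization applied to the error signals that drive learning:

```python
import numpy as np

# Toy sketch (not the study's model): a shared "interneuron" pool rescales
# per-neuron error signals, keeping the learning signal bounded whatever
# the raw error magnitude.
rng = np.random.default_rng(0)
errors = 10.0 * rng.normal(size=100)   # raw per-neuron error signals

pool = np.sqrt(np.mean(errors ** 2))   # pooled activity of the error population
normalized = errors / (1.0 + pool)     # divisive normalization
print("raw RMS:", pool, "normalized RMS:", np.sqrt(np.mean(normalized ** 2)))
```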

1 month ago

A new Department of Cognitive Science is being created at Bocconi University in Milan, Italy.

Here is the call for a cluster hire (for around 10 faculty) in all areas of cognitive science, at both junior and senior levels:

www.unibocconi.it/en/faculty-a...

Deadline: May 4th, 2026

1 month ago

Poster tonight at #cosyne26 (1-079)!

@wanqingjiang.bsky.social & @noehamou.bsky.social show that mice learn hidden community structure in a 15-odour graph even when transition statistics are flat.

Fun collaboration with @saxelab.bsky.social that started with East London coffees ☕!
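For readers unfamiliar with this kind of design, here is one standard construction of a community-structured graph with flat transition statistics (illustrative sketch; possibly not the exact 15-odour graph used):

```python
import numpy as np

# Sketch of a community-structured graph with flat transition statistics
# (a standard construction; possibly not the exact graph in the poster).
# 15 nodes, 3 communities of 5; every node has degree 4, so a random walk
# leaves each node with probability 1/4 per neighbour. Pairwise transition
# statistics are uniform, yet hidden community structure remains.
K, C = 5, 3                       # nodes per community, number of communities
N = K * C
A = np.zeros((N, N), dtype=int)
for c in range(C):
    for i in range(c * K, (c + 1) * K):
        for j in range(i + 1, (c + 1) * K):
            A[i, j] = A[j, i] = 1          # complete graph within community
    A[c * K, c * K + K - 1] = A[c * K + K - 1, c * K] = 0  # cut boundary edge
for c in range(C):                         # link communities in a ring
    i = c * K + K - 1                      # boundary node of community c
    j = ((c + 1) % C) * K                  # boundary node of the next one
    A[i, j] = A[j, i] = 1

print("degrees:", A.sum(axis=1))           # all 4 -> flat transitions
```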

1 month ago
Why the Brain Consolidates: Predictive Forgetting for Optimal Generalisation
"Standard accounts of memory consolidation emphasise the stabilisation of stored representations, but struggle to explain representational drift, semanticisation, or the necessity of offline replay."

Really neat work by Fountas and colleagues at UCL:
arxiv.org/abs/2603.04688
They propose that consolidation reflects a form of "predictive forgetting" that aids generalization.

1 month ago

Thanks @natmesanash.bsky.social for covering our new work, in @thetransmitter.bsky.social!

1 month ago

📢📢 Announcing this year's conference on the Mathematics of Neuroscience & AI (Rome, 9–12 June). We’ve got a stellar line-up and venue, and invite everyone to join:

www.neuromonster.org

1 month ago

📢 Job alert - Deep Learning Theory & AI Safety
Applications open for a postdoctoral fellow in the @saxelab.bsky.social lab to study artificial deep networks using techniques from applied maths & stat physics.

⏰ Deadline: 26 Mar 2026
🤝 In collaboration with @stefsm.bsky.social
ℹ️ www.ucl.ac.uk/life-science...

1 month ago
Learning from Scratch: March 16th, Workshop Day 1 @ Cosyne 2026

Excited to be co-organising a #cosyne2026 workshop with Alison Comrie on 'algorithms for learning from scratch'! With a great line-up of speakers, we'll be tackling the question of what processes enable naive biological & artificial agents to adapt to new situations. Info here: tinyurl.com/4u8enf7k

1 month ago

📢 We’re now accepting applications for the 2026 School on Analytical Connectionism, dedicated this year to Language Acquisition.

📍 Gothenburg, Sweden

🗓️ August 17–28, 2026

☠️ Apply by April 17!

🔗 analytical-connectionism.net/school/2026/

👇 Meet the experts joining us this summer!

2 months ago

Thrilled to finally share this work! 🧠🔊

Using a new reinforcement-free task, we show that mice (like humans) extract abstract structure from sound without supervision, and that dCA1 is causally required, building factorised, orthogonal subspaces of abstract rules.

Led by Dammy Onih!
www.biorxiv.org/content/10.6...
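To make "factorised, orthogonal subspaces" concrete, a generic way to quantify orthogonality between two neural subspaces (my sketch, not the paper's analysis) is via principal angles:

```python
import numpy as np
from scipy.linalg import subspace_angles

# Sketch (not the paper's analysis): principal angles between two
# subspaces; 90-degree angles mean the subspaces are fully orthogonal.
rng = np.random.default_rng(0)
n_neurons, dim = 100, 3

# Hypothetical stand-ins for subspaces spanned by population activity for
# two abstract rules (e.g. top PCs of condition-averaged responses).
Q, _ = np.linalg.qr(rng.normal(size=(n_neurons, 2 * dim)))
rule_a, rule_b = Q[:, :dim], Q[:, dim:]   # orthogonal by construction

angles = np.degrees(subspace_angles(rule_a, rule_b))
print("principal angles (deg):", np.round(angles, 1))  # all ~90
```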

2 months ago
[Hiring] Principia Research Fellows: Theoretical Model Organisms for AI Safety
Principia · London · Fixed-term (6 months) with potential extension · Starting ASAP

How to apply:

Salary: USD 80,000–100,000 (50–74k GBP), annualised
Initial contract: 6 months, w/ extension based on funding

Details: docs.google.com/document/d/1...
Application: forms.gle/xKukH74iX16p...

4

2 months ago

We’re hiring postdocs/research scientists! Your interests can be anywhere on the spectrum from pure theory to empirically testing predictions relevant to AI safety.

Our theoretical work relies on dynamical systems and tools from statistical physics.

3

2 months ago

We avoid many unwanted outcomes in the physical world using our knowledge of physics, and basic deep learning theory should eventually enable the same for AI.

We focus on simple, analytically tractable “model organisms” that capture essential learning dynamics and behaviours.

2

2 months ago

Excited to launch Principia, a nonprofit research organisation at the intersection of deep learning theory and AI safety.

Our goal is to develop theory for modern machine learning systems that can help us understand complex network behaviours, including those critical for AI safety and alignment.

1

2 months ago

Our paper is out in @natneuro.nature.com!

www.nature.com/articles/s41...

We develop a geometric theory of how neural populations support generalization across many tasks.
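As a toy version of the question the theory addresses (my sketch, not the paper's framework): with a factorised population code, a linear readout trained under one task condition generalises to a held-out one.

```python
import numpy as np

# Toy sketch (not the paper's framework): train a linear readout in one
# context, test it in another; a factorised code supports the transfer.
rng = np.random.default_rng(0)
n_neurons, n_trials = 50, 400

axis_a, axis_b = rng.normal(size=(2, n_neurons))   # fixed coding axes
s_a = rng.choice([-1, 1], size=n_trials)           # variable to decode
s_b = rng.choice([-1, 1], size=n_trials)           # context variable
X = (np.outer(s_a, axis_a) + np.outer(s_b, axis_b)
     + 0.5 * rng.normal(size=(n_trials, n_neurons)))

train = s_b == -1                                  # fit in one context only
w, *_ = np.linalg.lstsq(X[train], s_a[train].astype(float), rcond=None)

test = s_b == 1                                    # evaluate in the other
acc = np.mean(np.sign(X[test] @ w) == s_a[test])
print(f"cross-condition decoding accuracy: {acc:.2f}")  # close to 1.0
```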

@zuckermanbrain.bsky.social
@flatironinstitute.org
@kempnerinstitute.bsky.social

1/14

2 months ago

A great question; I'm not sure. It's important to understand whether Muon shares similar inductive biases.

2 months ago

I agree, there seem to be connections, but it's not fully clear to me why. Singular learning theory (SLT) is a static theory, and yet Daniel Murfet and others have shown that the stages we see also correspond to SLT posteriors of increasing complexity.

2 months ago
DLMath&Efficiency: a reading group examining the interplay between the theoretical foundations of deep learning and the practical challenge of making machine learning efficient.

Upcoming online talk next Monday, 9th February, at the ELLIS Reading Group on Mathematics & Efficiency of Deep Learning!

Open to all. Info at
sites.google.com/view/efficie...

2 months ago

Equipped with this theory, we make new predictions about how network width, data distribution, and initialization affect learning dynamics. For example, increasing the number of attention heads in linear attention shortens the plateaus in learning.
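For concreteness, here is a minimal multi-head linear attention layer (a generic sketch, not the paper's exact parameterisation): attention scores are raw dot products, with no softmax.

```python
import numpy as np

# Minimal multi-head *linear* attention (generic sketch): scores are raw
# dot products Q @ K.T, with no softmax.
rng = np.random.default_rng(0)
T, d, H = 8, 16, 4                 # sequence length, model dim, heads
dh = d // H                        # per-head dimension

X = rng.normal(size=(T, d))
Wq, Wk, Wv = (rng.normal(size=(H, d, dh)) / np.sqrt(d) for _ in range(3))
Wo = rng.normal(size=(H * dh, d)) / np.sqrt(d)

heads = []
for h in range(H):
    Q, K, V = X @ Wq[h], X @ Wk[h], X @ Wv[h]   # (T, dh) each
    heads.append((Q @ K.T) @ V / T)             # linear attention output
Y = np.concatenate(heads, axis=1) @ Wo          # combine heads: (T, d)
print("output shape:", Y.shape)
```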

2 months ago

So, progressing from simple to complex: linear networks learn solutions of increasing rank, ReLU networks learn solutions with more kinks, convolutional networks learn solutions with more convolutional kernels, and attention models learn solutions with more heads.
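A minimal illustration of the linear case (my sketch, in the spirit of the thread rather than its actual code): gradient descent on a two-layer linear network from small weights picks up the target's singular modes one at a time, so the rank of W2 @ W1 climbs stepwise.

```python
import numpy as np

# Sketch: a two-layer linear network trained from small weights learns the
# target map's singular modes one at a time, so the effective rank of
# W2 @ W1 increases stepwise during training.
rng = np.random.default_rng(0)
d, lr = 8, 0.02
target = np.diag([5.0, 3.0, 1.0] + [0.0] * (d - 3))   # rank-3 target map

W1 = 1e-4 * rng.normal(size=(d, d))                   # small initialisation
W2 = 1e-4 * rng.normal(size=(d, d))
for step in range(601):
    E = target - W2 @ W1                              # loss = 0.5 * ||E||_F^2
    W1, W2 = W1 + lr * W2.T @ E, W2 + lr * E @ W1.T   # gradient descent step
    if step % 100 == 0:
        s = np.linalg.svd(W2 @ W1, compute_uv=False)
        print(step, "effective rank:", int(np.sum(s > 0.1)))
```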

2 months ago

Here the notion of simplicity is the number of effective units in the architecture: hidden neurons, convolutional kernels, or attention heads.

2 months ago