Looking forward to the colloquium in Bochum next Wednesday!
www.ini.rub.de/events/
Posts by Tamas Spisak
Paper: www.sciencedirect.com/science/arti...
Code: github.com/pni-lab/fep-...
AI-perspectives: twitter-thread.com/t/2039126149...
Application with fMRI data: elifesciences.org/articles/98725
To me, what’s especially compelling is the scale-free nature of the framework:
The same functional form applies across:
• single neurons
• microcircuits
• large-scale brain networks
Different substrates, different implementations, same computational imperative.
Stochastic + rotational dynamics together enable spontaneous replay:
This gives a mechanistic account of “resting-state” dynamics: with low sensory input (likelihood) the system revisits its own priors and consolidates existing memories.
With sequential input, expected free energy minimization induces a solenoidal (rotational) flow component:
flow moves along the landscape, not just downhill.
This supports:
• metastable dynamics (non-equilibrium steady states, NESS)
• efficient inference
• sequence storage
• "planning as inference"
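A toy sketch of the idea (my own minimal example, not the paper's code): adding an antisymmetric matrix Q to a gradient flow on a quadratic "free energy" makes trajectories spiral along the level sets while only the symmetric part dissipates — flow along the landscape, not just downhill.

```python
import numpy as np

# Toy free-energy landscape: F(x) = 0.5 * x^T A x
A = np.array([[1.0, 0.0],
              [0.0, 2.0]])

def grad_F(x):
    return A @ x

# Antisymmetric (solenoidal) component: rotates flow along level sets of F
Q = np.array([[0.0, -1.0],
              [1.0,  0.0]])

def step(x, dt=0.01, gamma=0.1):
    # dx/dt = -(gamma*I + Q) grad F(x): dissipative descent + rotation.
    # Q contributes nothing to dF/dt (antisymmetry), so F still decreases.
    return x - dt * (gamma * np.eye(2) + Q) @ grad_F(x)

x = np.array([1.0, 0.0])
energies = []
for _ in range(2000):
    x = step(x)
    energies.append(0.5 * x @ A @ x)

# Energy decays monotonically, but the trajectory circulates instead of
# plunging straight into the minimum.
print(energies[0], energies[-1])
```

Since Q is antisymmetric, ∇F·Q∇F = 0: the rotational part is "free" motion on the landscape, which is what makes sequences and metastable itinerancy possible without sacrificing descent.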
It naturally supports continual learning (a key challenge for current AI models).
Learning targets unexplained variance → learning new memories involves recalling the old, correlated memories, too → no catastrophic forgetting
Self-orthogonalization is a gift! It yields:
• disentangling latent causes → parsimonious "world model"
• improved zero-shot generalization in the spanned subspace by *oscillating* across attractors
• context emerging from the urge to orthogonalize
• optimal memory capacity, redundancy & robustness.
In a related fMRI connectivity work we see:
• attractors aligned with connectivity eigenmodes
• near-orthogonal attractor structure
• yielding a generative model for large-scale brain dynamics
In our framework, canonical brain networks (like the DMN) are priors of the brain's generative model.
A key result is "self-orthogonalization".
Even with correlated inputs, attractors become ~orthogonal.
This effectively disentangles latent causes and pushes the system toward a projector-like (eigenmode-aligned) organization.
And this is exactly what we see in brain data too! ↓
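The general phenomenon is easy to demo (my own illustration with Sanger's rule, not the paper's update): a purely local Hebbian rule with an anti-Hebbian correction term, fed strongly correlated inputs, still ends up with ~orthogonal, eigenmode-aligned weight vectors.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two correlated latent causes mixed into 5-D observations
latents = rng.standard_normal((5000, 2)) @ np.array([[1.0, 0.8],
                                                     [0.0, 0.6]])
mixing = rng.standard_normal((2, 5))
X = latents @ mixing

# Sanger's rule (generalized Hebbian algorithm): the Hebbian term y x^T
# reinforces correlations; the lower-triangular y y^T correction acts
# anti-Hebbian and deflates each unit against the ones before it.
W = rng.standard_normal((2, 5)) * 0.1
eta = 1e-3
for _ in range(5):          # a few passes over the data
    for x in X:
        y = W @ x
        W += eta * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)

# Despite correlated inputs, the learned directions self-orthogonalize
cos = abs(W[0] @ W[1]) / (np.linalg.norm(W[0]) * np.linalg.norm(W[1]))
print(cos)
```

The fixed points are the covariance eigenvectors — the same "projector-like, eigenmode-aligned organization" the post describes, here in its simplest feedforward form.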
Learning is not separate from inference; it emerges from the same free energy imperative.
When predictions fail, synapses update via predictive coding-like plasticity:
• Hebbian → reinforce correlations
• anti-Hebbian → suppress redundancy
A biologically plausible, multi-scale plasticity rule.
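A minimal sketch of such an error-driven rule (my own toy version, not the paper's exact plasticity): the weight change is Hebbian in the prediction error, so unexplained correlations are reinforced, while already-predicted (redundant) structure is suppressed.

```python
import numpy as np

rng = np.random.default_rng(1)

# One repeated input pattern the network should learn to predict
pattern = rng.standard_normal(8)
pattern /= np.linalg.norm(pattern)

W = np.zeros((8, 8))
eta = 0.05

errors = []
for _ in range(200):
    r = pattern              # activity driven by the input
    e = r - W @ r            # prediction error: the unexplained part
    # Hebbian in the error: large error -> strengthen (Hebbian);
    # once predicted, the -W r term inside e suppresses redundancy
    # (anti-Hebbian), and plasticity fades.
    W += eta * np.outer(e, r)
    errors.append(np.linalg.norm(e))

print(errors[0], errors[-1])
```

Prediction error shrinks geometrically: when predictions fail, synapses change; when they succeed, learning stops — inference and learning driven by one quantity.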
At the network level, this yields:
→ attractors (free energy minima) = prior beliefs
→ input reshaping trajectories = sensory "likelihood"
→ dynamics = perception: flow-based, irreversible (MCMC-like) sampling from the posterior
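The MCMC analogy can be made concrete with unadjusted Langevin dynamics (a standard 1-D toy, not the paper's model): noisy gradient descent on F draws samples approximately from the posterior ∝ exp(−F).

```python
import numpy as np

rng = np.random.default_rng(42)

# Free energy of a 1-D Gaussian "posterior" N(mu, sigma^2)
mu, sigma = 2.0, 0.5

def grad_F(x):
    return (x - mu) / sigma**2

# Unadjusted Langevin dynamics: deterministic flow downhill on F
# plus noise of matched magnitude => stationary density ~ exp(-F)
dt = 0.001
x = 0.0
samples = []
for _ in range(100_000):
    x += -grad_F(x) * dt + np.sqrt(2 * dt) * rng.standard_normal()
    samples.append(x)

burned = np.array(samples[10_000:])   # drop burn-in
print(burned.mean(), burned.std())    # ~ (mu, sigma)
```

Adding the solenoidal term from above makes this an irreversible sampler — same stationary posterior, faster mixing — which is the "flow-based, irreversible sampling" reading of perception.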
Big Picture:
To persist, a system must minimize its free energy - and so must all its components!
If we follow the "free energy math" across the nested components of a random dynamical system, remarkable properties emerge: perception, learning, memory, spontaneous replay, planning, and much more!
Attractor dynamics are a hallmark of brain function.
But are they just epiphenomena?
Starting from the free energy principle, we show that attractors can actually implement Bayesian priors in self-organizing networks, linking local neural dynamics directly to macro-scale probabilistic inference.
Our work with Karl Friston on Self-Orthogonalizing Attractor Neural Networks is now out in Neurocomputing!
What does this theoretical model mean for our understanding of the brain? I’ve mapped out the key neuroscience implications below.
Read the thread for a neuroscience walk-through ↓
Really exciting to see my pTFCE work re-implemented in Python and taken even further by an independent group.
Great to know pTFCE is in good hands!
github.com/Don-Yin/pytfce
arxiv.org/abs/2603.11344
Out in @elife.bsky.social: Functional connectivity-based attractor dynamics of the human brain in rest, task, and disease doi.org/10.7554/eLif...
Thank you!
We have fresh, initial evidence that the brain, too, behaves like a self-orthogonalizing network.
elifesciences.org/reviewed-pre...
Brain attractors are approximately orthogonal to each other, suggesting that the brain may function as a self-orthogonalizing attractor network.
Check out our revised manuscript about functional connectivity-based brain attractor dynamics in @elife.bsky.social. Link in comment!
Replicability of BWAS (brain-wide association studies) with functional and structural MRI has been hotly debated.
But what about diffusion-weighted imaging (DWI)?
Our new paper - first-authored by @rkotikalapudi.bsky.social - shows that multivariate DWI models of trait-like phenotypes can be replicable, even with moderate sample sizes.
🔗 Link in comment!
Serious concerns about a new cortical biomarker for pain sensitivity
jamanetwork.com/journals/jam...
We (with @tspisak.bsky.social, @christianbuchel.bsky.social) published a commentary on Chowdhury, Bi et al. (2025, JAMA Neurology) raising serious concerns about their reported results.
👇 1/13
@sfb-trr-289.bsky.social @karlfristonnews.bsky.social
[graphical abstract]
As many of you know, I’ve been fascinated by brain attractor dynamics lately.
Thrilled to share a new preprint on their link to orthogonal neural representations, co-authored with Karl Friston:
arxiv.org/abs/2505.22749
- with implications for both neuroscience & AI!
First in a series - stay tuned!
@sfb-trr-289.bsky.social
🚨 New paper out in GigaScience!
To avoid common pitfalls in multivariate modeling: combine external validation with pre-registration — freeze your model before testing.
For the pros: decide on the fly when to stop training!
First-authored by the brilliant @ggallitto.bsky.social