
Posts by Tamas Spisak


Looking forward to the colloquium in Bochum next Wednesday!
www.ini.rub.de/events/

2 hours ago

Paper: www.sciencedirect.com/science/arti...

Code: github.com/pni-lab/fep-...

AI-perspectives: twitter-thread.com/t/2039126149...

Application with fMRI data: elifesciences.org/articles/98725

5 days ago

To me, what’s especially compelling is the scale-free nature of the framework:

The same functional form applies across:
• single neurons
• microcircuits
• large-scale brain networks

Different substrates, different implementations, same computational imperative.

5 days ago

Stochastic + rotational dynamics together enable spontaneous replay:

This gives a mechanistic account of “resting-state” dynamics: with low sensory input (likelihood) the system revisits its own priors and consolidates existing memories.
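A toy Hopfield-style sketch of the intuition (my own illustration, not the paper's implementation; patterns, gain, and noise level are all assumptions): store two priors as attractors and run noisy dynamics with no sensory input; the state settles into one of its own priors. With stronger noise it would intermittently escape and revisit the others, which is the replay picture.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20

# two orthogonal "prior" patterns stored Hebbian-style as attractors
p1 = np.ones(n)
p2 = np.concatenate([np.ones(n // 2), -np.ones(n // 2)])
W = (np.outer(p1, p1) + np.outer(p2, p2)) / n

x = rng.standard_normal(n)
dt, gain, noise = 0.1, 2.0, 0.05
for _ in range(500):
    # no sensory input: only the prior landscape plus noise drives the flow
    x = x + dt * (-x + np.tanh(gain * (W @ x))) + noise * rng.standard_normal(n)

def overlap(x, p):
    return abs(x @ p) / (np.linalg.norm(x) * np.linalg.norm(p))

# with no input, the state is (anti-)aligned with one of its stored priors
print(max(overlap(x, p1), overlap(x, p2)))
```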

5 days ago

With sequential input, expected free energy minimization induces a solenoidal (rotational) flow component:

flow moves along the landscape, not just downhill.

This supports:
• metastable dynamics (NESS)
• efficient inference
• sequence storage
• "planning as inference"
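A minimal 2-D sketch of why a solenoidal component moves "along the landscape, not just downhill" (my toy example, not the paper's derivation): for a flow ẋ = −(I + Q)∇F with Q skew-symmetric, the Q-part is always orthogonal to the gradient, so it circulates on level sets of F while only the dissipative part descends.

```python
import numpy as np

# toy landscape: F(x) = |x|^2 / 2, so grad F(x) = x
def grad_F(x):
    return x

# skew-symmetric Q generates the solenoidal (rotational) flow component
Q = np.array([[0.0, -1.0],
              [1.0,  0.0]])

x = np.array([2.0, 0.0])
dt = 0.01
F_values = []
for _ in range(1000):
    g = grad_F(x)
    # dissipative part (-g) descends; solenoidal part (-Q @ g) rotates
    x = x - dt * (g + Q @ g)
    F_values.append(0.5 * x @ x)

# the rotational component is orthogonal to the gradient, so by itself
# it never changes F; the trajectory spirals inward rather than falling straight
g = np.array([2.0, 0.0])
print(g @ (Q @ g))  # 0.0
```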

5 days ago

It naturally supports continual learning (a key challenge for current AI models).

Learning targets only unexplained variance → recalling new memories also reactivates the correlated old ones → no catastrophic forgetting

5 days ago

Self-orthogonalization is a gift! It yields:
• disentangling latent causes → parsimonious "world model"
• improved zero-shot generalization in the spanned subspace by *oscillating* across attractors
• context emerging from the urge to orthogonalize
• optimal memory capacity, redundancy & robustness.

5 days ago

In a related fMRI connectivity work we see:
• attractors aligned with connectivity eigenmodes
• near-orthogonal attractor structure
• yielding a generative model for large-scale brain dynamics

In our framework, canonical brain networks (like the DMN) are priors of the brain's generative model.

5 days ago

A key result is "self-orthogonalization".

Even with correlated inputs, attractors become ~orthogonal.

This effectively disentangles latent causes and pushes the system toward a projector-like (eigenmode-aligned) organization.
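A toy delta-rule sketch of the projector claim (my construction for illustration, not the paper's learning rule; the patterns, learning rate, and correlation level are assumptions): error-driven learning on two correlated ±1 patterns drives the weights to the orthogonal projector onto their span — symmetric, idempotent, with rank equal to the number of latent causes.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50

# two *correlated* patterns: x2 shares 80% of its entries with x1
x1 = rng.choice([-1.0, 1.0], size=n)
x2 = x1.copy()
flip = rng.choice(n, size=n // 5, replace=False)
x2[flip] *= -1.0                      # cosine similarity = 0.6

# error-driven learning: only the unexplained part (x - W @ x) updates W
W = np.zeros((n, n))
lr = 0.01
for _ in range(500):
    for x in (x1, x2):
        W += lr * np.outer(x - W @ x, x)

# W converges to the orthogonal projector onto span{x1, x2}
print(np.allclose(W, W.T, atol=1e-6),    # symmetric
      np.allclose(W @ W, W, atol=1e-6),  # idempotent (projector-like)
      round(np.trace(W)))                # rank = 2 latent causes
```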

And this is exactly what we see in brain data too! ↓

5 days ago

Learning is not separate from inference; it emerges from the same free energy imperative.
When predictions fail, synapses update via predictive coding-like plasticity:
• Hebbian → reinforce correlations
• anti-Hebbian → suppress redundancy
A biologically plausible, multi-scale plasticity rule.
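A minimal numerical sketch of that decomposition (my illustration; the paper's rule is more general, and the sizes and learning rate here are assumptions): a prediction-error update splits exactly into a Hebbian term on the input and an anti-Hebbian term on the prediction, and plasticity stops once predictions succeed.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
W = np.zeros((n, n))
lr = 0.01

x = rng.choice([-1.0, 1.0], size=n)     # input pattern
for _ in range(100):
    pred = W @ x                        # the network's prediction
    hebbian = np.outer(x, x)            # reinforce input correlations
    anti_hebbian = -np.outer(pred, x)   # suppress already-predicted (redundant) structure
    W += lr * (hebbian + anti_hebbian)  # = lr * outer(x - pred, x): error-driven plasticity

# when the prediction error vanishes, learning stops and x is a fixed point
print(np.allclose(W @ x, x, atol=1e-6))  # True
```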

5 days ago

At the network level, this yields:

→ attractors (free energy minima) = prior beliefs
→ input reshaping trajectories = sensory "likelihood"
→ dynamics = perception: flow-based, irreversible (MCMC-like) sampling from the posterior
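A toy numerical reading of these three mappings (my sketch, not the paper's model; the Hebbian prior, observation strength, and noise level are assumptions): an attractor stores the prior, an observation term acts as the likelihood that tilts the flow, and the noisy dynamics then settle into (sample around) the matching basin.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20
prior = rng.choice([-1.0, 1.0], size=n)
W = np.outer(prior, prior) / n        # attractor (free energy minimum) = prior belief

obs = 0.3 * prior                     # weak observation consistent with the prior
x = np.zeros(n)
dt, gain, beta, noise = 0.1, 2.0, 0.5, 0.02
for _ in range(500):
    drift = -x + np.tanh(gain * (W @ x))     # flow toward the prior attractor
    drift += beta * (obs - x)                # likelihood reshapes the trajectory
    x = x + dt * drift + noise * rng.standard_normal(n)  # stochastic, sampling-like term

cos = (x @ prior) / (np.linalg.norm(x) * np.linalg.norm(prior))
print(cos)  # close to +1: the likelihood selects the matching basin, not its mirror
```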

5 days ago

Big Picture:

To persist, a system must minimize its free energy - and so must all its components!

If we follow the "free energy math" across the nested components of a random dynamical system, remarkable properties emerge: perception, learning, memory, spontaneous replay, planning, and much more!

5 days ago

Attractor dynamics are a hallmark of brain function.
But are they just epiphenomena?

Starting from the free energy principle, we show that attractors can actually implement Bayesian priors in self-organizing networks, linking local neural dynamics directly to macro-scale probabilistic inference.

5 days ago

Our work with Karl Friston on Self-Orthogonalizing Attractor Neural Networks is now out in Neurocomputing!

What does this theoretical model mean for our understanding of the brain? I’ve mapped out the key neuroscience implications below.

Read the thread for a neuroscience walk-through ↓

5 days ago
GitHub - Don-Yin/pytfce: Fast probabilistic Threshold-Free Cluster Enhancement in Python

Really exciting to see my pTFCE work re-implemented in Python and taken even further by an independent group.
Great to know pTFCE is in good hands!

github.com/Don-Yin/pytfce
arxiv.org/abs/2603.11344

5 days ago

Out in @elife.bsky.social: Functional connectivity-based attractor dynamics of the human brain in rest, task, and disease doi.org/10.7554/eLif...

1 month ago

Thank you!
We have some fresh, initial evidence that the brain is akin to a self-orthogonalizing network as well.
elifesciences.org/reviewed-pre...

3 months ago
Functional Connectivity-based Attractor Dynamics in Rest, Task, and Disease

doi.org/10.7554/eLif...

3 months ago

Brain attractors are approximately orthogonal to each other, suggesting that the brain may function as a self-orthogonalizing attractor network.

Check out our revised manuscript about functional connectivity-based brain attractor dynamics in @elife.bsky.social. Link in comment!

3 months ago
On the replicability of diffusion weighted MRI-based brain-behavior models - Communications Biology
Structural connectome-based predictive models can yield replicable brain–behavior associations with moderate samples in several cases, but large datasets remain key for explainability, bias control, f...

www.nature.com/articles/s42...

5 months ago

Replicability of BWAS with functional and structural MRI has been hotly debated.
But what about DWI?

Our new paper - first authored by @rkotikalapudi.bsky.social - shows that multivariate DWI models of trait-like phenotypes can be replicable, even with moderate sample sizes.

🔗 Link in comment!

5 months ago
Concern About Predictive Performance of a Pain Sensitivity Biomarker
To the Editor: Chowdhury et al.¹ evaluated a biomarker for pain sensitivity, combining peak alpha frequency and corticomotor excitability. The authors report outstanding performance (validation set area...

Serious concerns about a new cortical biomarker for pain sensitivity

jamanetwork.com/journals/jam...

We (with @tspisak.bsky.social, @christianbuchel.bsky.social) published a commentary on Chowdhury, Bi et al. (2025, JAMA Neurology) raising serious concerns about their reported results.

👇 1/13

8 months ago

@sfb-trr-289.bsky.social @karlfristonnews.bsky.social

10 months ago
graphical abstract

10 months ago
Self-orthogonalizing attractor neural networks emerging from the free energy principle
Attractor dynamics are a hallmark of many complex systems, including the brain. Understanding how such self-organizing dynamics emerge from first principles is crucial for advancing our understanding ...

As many of you know, I’ve been fascinated by brain attractor dynamics lately.

Thrilled to share a new preprint on their link to orthogonal neural representations, co-authored with Karl Friston:
arxiv.org/abs/2505.22749
- with implications for both neuroscience & AI!

First in a series - stay tuned!

10 months ago

@sfb-trr-289.bsky.social

10 months ago

🚨 New paper out in GigaScience!
To avoid common pitfalls in multivariate modeling: combine external validation with pre-registration — freeze your model before testing.

For the pros: decide on the fly when to stop training!
First-authored by the brilliant @ggallitto.bsky.social

10 months ago