i need the “llms might be conscious” folx to read this
Posts by DurstewitzLab
Off the top of my head, here are some recent ones:
"Two views on the cognitive brain" by @johnwkrakauer.bsky.social, @dlbarack.bsky.social
"Reconstructing computational system dynamics from neural data with recurrent neural networks" by @durstewitzlab.bsky.social et al
1/3
In a new #ICLR2026 paper we provide an algorithm for semi-analytically constructing un-/stable manifolds of fixed points and cycles of ReLU-based RNNs:
openreview.net/pdf?id=EAwLA...
These manifolds provide a skeleton for the system’s dynamics, dissecting the state space into basins of attraction.
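The key fact that makes this semi-analytic is that a ReLU network is piecewise linear: within each region of fixed ReLU on/off patterns, the dynamics are exactly linear, so fixed points can be solved for in closed form. A minimal sketch of that idea (not the paper's algorithm; the map z_{t+1} = A z_t + W relu(z_t) + h, the brute-force region enumeration, and the tolerance are my own illustrative assumptions):

```python
import numpy as np
from itertools import product

def relu_rnn_fixed_points(A, W, h):
    """Enumerate fixed points of z_{t+1} = A z + W relu(z) + h.

    Within each linear region (fixed ReLU on/off pattern D) the map is
    linear, so a candidate fixed point solves (I - A - W D) z* = h; it is
    admissible only if z* actually lies in that region. Brute force over
    all 2^N sign patterns, so only feasible for small N.
    """
    N = len(h)
    I = np.eye(N)
    fps = []
    for pattern in product([0.0, 1.0], repeat=N):
        D = np.diag(pattern)
        M = I - A - W @ D
        if abs(np.linalg.det(M)) < 1e-12:
            continue  # degenerate region: no isolated fixed point
        z = np.linalg.solve(M, h)
        if np.all((z > 0) == (np.array(pattern) > 0)):  # region check
            J = A + W @ D  # exact Jacobian inside this region
            stable = np.max(np.abs(np.linalg.eigvals(J))) < 1.0
            fps.append((z, stable))
    return fps
```

The un-/stable manifolds the paper constructs are then seeded from the eigenvectors of the region-wise Jacobian `J` at each fixed point; the paper's contribution is doing that construction semi-analytically rather than by brute-force simulation.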
We had a go at a blog post about our recent dynamical systems foundation model published at NeurIPS (with strong support from the Structures outreach team!) … let us know your thoughts!
Fully-funded International Neuroscience Doctoral Programme🧠 Champalimaud Foundation, Lisbon, Portugal 🇵🇹
Deadline: Jan 31, 2026
fchampalimaud.org/champalimaud...
Research program spans systems/computational/theoretical/clinical/sensory/motor neuroscience, neuroethology, intelligence, and more!!
Tomorrow Christoph will present DynaMix, the first foundation model for dynamical systems reconstruction, at #NeurIPS2025 Exhibit Hall C,D,E #2303
Thanks for sharing! Missed it, but just downloaded it; looking forward to getting into it ...
Unlike current AI systems, animals can quickly and flexibly adapt to changing environments.
This is the topic of our new perspective in Nature MI (rdcu.be/eSeif), where we relate dynamical and plasticity mechanisms in the brain to in-context and continual learning in AI. #NeuroAI
Revised version of our #NeurIPS2025 paper with full code base in Julia & Python now online, see arxiv.org/abs/2505.13192
Despite being extremely lightweight (only 0.1% of the parameters and 0.6% of the training corpus size of its closest competitor), it also outperforms major TS foundation models like the Chronos variants on real-world TS forecasting, with minimal inference times (0.2%) ...
Our #AI #DynamicalSystems #FoundationModel DynaMix was accepted to #NeurIPS2025 with outstanding reviews (6555) – the first model that can *zero-shot*, w/o any fine-tuning, forecast the *long-term statistics* of time series given a context. Test it on #HuggingFace:
huggingface.co/spaces/Durst...
Relevant publications:
www.nature.com/articles/s41...
openreview.net/pdf?id=Vp2OA...
proceedings.mlr.press/v235/brenner...
www.nature.com/articles/s41...
We have openings for several fully-funded positions (PhD & PostDoc) at the intersection of AI/ML, dynamical systems, and neuroscience within a BMFTR-funded Neuro-AI consortium, at Heidelberg University & Central Institute of Mental Health:
www.einzigartigwir.de/en/job-offer...
More info below ...
Is it possible to go from spikes to rates without averaging?
We show how to exactly map recurrent spiking networks into recurrent rate networks, with the same number of neurons. No temporal or spatial averaging needed!
Presented at Gatsby Neural Dynamics Workshop, London.
Today I joined >1900 members of US National Academies of Science, Engineering & Medicine signing this open letter (views our own).
US leadership in science has been paramount for >70 yrs, & the Admin is now acting to throw it all away!
docs.google.com/document/d/1...
www.nytimes.com/2025/03/31/s...
What a fantastic accomplishment -- and what a fantastic story! www.quantamagazine.org/at-17-hannah...
Got prov. approval for 2 major grants in Neuro-AI & Dynamical Systems Reconstruction, on learning & inference in non-stationary environments, out-of-domain generalization, and DS foundation models. To all AI/math/DS enthusiasts: Expect job announcements (PhD/PostDoc) soon! Feel free to get in touch.
We wrote a little #NeuroAI piece about in-context learning & neural dynamics vs. continual learning & plasticity, both mechanisms to flexibly adapt to changing environments:
arxiv.org/abs/2507.02103
We relate this to non-stationary rule learning tasks with rapid performance jumps.
Feedback welcome!
Yes I think so!
Happy to discuss our work on parsimonious & math. tractable RNNs for dynamical systems reconstruction next week at
cns2025florence.sched.com/event/1z9Mt/...
Fantastic work by Florian Bähner, Hazem Toutounji, Tzvetan Popov and many others - I'm just the person advertising!
How do animals learn new rules? By systematically testing diff. behavioral strategies, guided by selective attn. to rule-relevant cues: rdcu.be/etlRV
Akin to in-context learning in AI, strategy selection depends on the animals' "training set" (prior experience), with similar repr. in rats & humans.
What a line up!! With Lorenzo Gaetano Amato, Demian Battaglia, @durstewitzlab.bsky.social, @engeltatiana.bsky.social, @seanfw.bsky.social, Matthieu Gilson, Maurizio Mattia, @leonardopollina.bsky.social, Sara Solla.
Into population dynamics? Coming to #CNS2025 but not quite ready to head home?
Come join us! at the Symposium on "Neural Population Dynamics and Latent Representations"! 🧠
📆 July 10th
📍 Scuola Superiore Sant’Anna, Pisa (and online)
👉 Free registration: neurobridge-tne.github.io
#compneuro
I’m so looking forward to this! In wonderful Pisa!
Just heading back from a fantastic workshop on neural dynamics at Gatsby/ London, organized by Tatiana Engel, Bruno Averbeck, & Peter Latham.
Enjoyed seeing so many old friends, Memming Park, Carlos Brody, Wulfram Gerstner, Nicolas Brunel & many others …
Discussed our recent DS foundation models …
We dive a bit into the reasons why current time series FMs not trained for DS reconstruction fail, and conclude that a DS perspective on time series forecasting & models may help to advance the #TimeSeriesAnalysis field.
(6/6)
Remarkably, it not only generalizes zero-shot to novel DS, but even to new initial conditions and regions of state space not covered by the in-context information.
(5/6)
And no, it’s neither based on Transformers nor Mamba – it’s a new type of mixture-of-experts architecture based on the recently introduced AL-RNN (proceedings.neurips.cc/paper_files/...), specifically trained for DS reconstruction.
#AI
(4/6)
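To make the "almost-linear" idea concrete: the point of the AL-RNN is that only a small number of latent units pass through a ReLU while the rest stay linear, keeping the dynamics piecewise linear with few linear regions. A minimal sketch of one latent update under that assumption (the diagonal `A`, the exact placement of the ReLU units, and all names here are my illustrative choices, not the paper's code):

```python
import numpy as np

def al_rnn_step(z, a_diag, W, h, n_relu):
    """One AL-RNN-style latent update: z_next = a*z + W phi(z) + h.

    phi is the identity on the first units and ReLU only on the last
    `n_relu` units, so most of the update stays linear. `a_diag` is a
    vector holding the diagonal of the (assumed diagonal) matrix A.
    """
    phi = z.copy()
    phi[-n_relu:] = np.maximum(phi[-n_relu:], 0.0)  # ReLU on last units only
    return a_diag * z + W @ phi + h

# Iterating the map from some initial condition:
a_diag = np.full(3, 0.5)
W = np.zeros((3, 3))
h = np.ones(3)
z = np.zeros(3)
for _ in range(50):
    z = al_rnn_step(z, a_diag, W, h, n_relu=1)
```

With `W = 0` this toy setting contracts to the fixed point `h / (1 - a)`; the interesting reconstruction behavior of course comes from trained, non-zero weights.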