Many thanks to @klaus-tschira-stiftung.de and @gso-forresearchers.bsky.social for this wonderful opportunity to study the mechanisms underlying naturalistic decision processes in the context of foraging behavior!
We’ll also soon have openings for PhD students, stay tuned!
Posts by Arman Behrad
We apologize for the delay but our next profile is here! Dr. Ann Kennedy (@antihebbiann.bsky.social) combines theory & biological data to study how internal states shape social behavior by reconfiguring neural circuit dynamics.
Follow the link to learn more!
www.storiesofwin.org/profiles/202...
I had the chance to attend it in person. It's really amazing, highly recommend.
Out now in Science: The Truman Show, but with killifishes!
"Continuous recording of a vertebrate’s adult life from adolescence until death would provide a complete view into the behavioral architecture of aging."
www.science.org/doi/10.1126/...
We're excited to present our latest results and get your feedback.
Are you at #Cosyne2026? Wanna know how we can have a common notion of state in minds and machines, and find those states in an unsupervised fashion with temporal embeddings based on dynamic similarity? Check out @armanbehrad.bsky.social and @maxschwabe.bsky.social's poster today, 3-001!
3-001, poster session 3 (Sat, Mar. 14, 13:15): @armanbehrad.bsky.social on an unsupervised method for finding behaviorally relevant internal states from data. We tested it with RNNs & monkey data from cognitive tasks, and with an RNN trained via deep RL on a naturalistic plume tracking task.
1-031, poster session 1 (Thu, Mar. 12, 20:30): @maxschwabe.bsky.social (a Cosyne presenter awardee 🎉, and a Master's student!) will present how decomposing neural dynamics into intrinsic and input-driven modes can tell us how naturalistic decision computations are realized in RNNs.
CMC lab is heading to #Cosyne2026, with 3 wonderful posters (1-031, 3-001, and 3-008 also see thread 👇), 2 new enthusiastic team members (@clairesturgill.bsky.social and @maxschwabe.bsky.social ), and tons of excitement for discussions and new ideas!
That was wonderful 👏
Seems super interesting. 🤩
The activity of certain neurons may influence our endurance for exercise, and these could be targeted to help us run faster for longer
For Spanish researchers interested in postdoc positions: my lab in Amsterdam is looking for candidates to apply for a Ramón Areces postdoctoral fellowship, to work for 2 years on compneuro and digital brains. Deadline: Feb 23. Please spread the word! More info: www.fundacionareces.es/fundacionare...
Built a domain-agnostic peak detection algorithm and now hunting for datasets with known/annotated peaks to test it on 👀
Any domain works: signals, bio, astro, finance, spectroscopy, etc.
Got data or know a benchmark? Would love pointers 🙌
#SignalProcessing #DataScience #TimeSeries #OpenData #Research
This book is a wonderful, synthetic and richly illustrated journey through the natural history of the vertebrate brain 🤩
A big thank you to the authors 🙏
"A major theme in the evolution of the telencephalon has been the emergence of novel pathways...
1/2
This work couldn't have happened without my wonderful collaborators: @neurostrow.bsky.social and Ila Fiete (masterminds behind the original DSA), @mmdtaha.bsky.social, Christian Beste, and @neuroprinciplist.bsky.social; and support from @cmc-lab.bsky.social.
There are also other fantastic tools around for comparing circuits/brains/models, such as representational similarity analysis (RSA), developed by Nikolaus Kriegeskorte and many others (e.g., see the great work www.biorxiv.org/content/10.1... by @jbarbosa.org and @itsneuronal.bsky.social)
Related: @neurostrow.bsky.social thread with @wtredman.bsky.social & Igor Mezic on extending dynamical-similarity ideas—an exciting direction for future DSA-style methods: bsky.app/profile/neur...
We also built 2 simple nonlinear systems (A, B) with identical eigenvalues but different eigenvectors. As expected, Wasserstein-based kwDSA struggles to separate them. All 3 fastDSA variants reliably distinguish A vs B (visualized with MDS).
Under strong noise, we repeat the transformation tests, now also including kernel-DMD with a Wasserstein distance (kwDSA). kwDSA highlights a key pitfall: relying mainly on eigenvalues (ignoring eigenvectors) can miss fine dynamical differences. The fastDSA alternatives remain sensitive and perform well even at high noise levels.
Next we tested sensitivity to dynamical change by morphing a ring attractor into a line attractor (same model). fastDSA distances jump at the ring↔line transition, capturing topology change—unlike Procrustes—while being way faster than DSA.
We first tested whether fastDSA is invariant to purely geometric deformations—changes that preserve the same underlying dynamics and attractor topology. All 3 fastDSA variants are faithful to dynamics and remain stable across geometric deformations, while being computationally more efficient.
Under different forms of noise, we showed how well the rank estimate supports DMD reconstruction. Across noise levels, the method detects the rank at the knee point automatically, with no tuning.
Our method efficiently estimates the rank of delay embeddings of a dynamical system. For example, on Lorenz trajectories projected to higher dimensions, the estimated order matches the true latent rank and aligns with AIC/BIC baselines.
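The knee-point idea can be sketched like this (a toy stand-in, not the actual method from the paper: the two-sinusoid signal, the plain Hankel embedding, and taking the largest drop in log singular values as the "knee" are all illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(3000) * 0.01
# Scalar signal made of two sinusoids: its delay embedding has rank 4
# (a sin/cos pair per frequency), plus a small noise floor.
x = np.sin(2 * np.pi * 1.0 * t) + 0.5 * np.sin(2 * np.pi * 2.3 * t)
x += 1e-3 * rng.standard_normal(x.size)

# Hankel (delay-embedding) matrix with 20 delayed copies of the signal
n_delays = 20
H = np.stack([x[i : i + x.size - n_delays] for i in range(n_delays)], axis=1)

# "Knee" heuristic: the rank is where log singular values drop the most
s = np.linalg.svd(H - H.mean(0), compute_uv=False)
rank = int(np.argmin(np.diff(np.log(s)))) + 1
print("estimated delay-embedding rank:", rank)  # → 4
```

No tuning parameter is needed here beyond the number of delays; the signal/noise gap in the spectrum makes the knee unambiguous.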
We made DSA up to 150 times faster 🤯 by introducing 3 new optimization objectives and solvers to speed up the DSA alignment step. Instead of enforcing exact orthogonality at every iteration, we use faster formulations that approximate or penalize the constraint.
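A minimal sketch of the penalized-orthogonality idea (not the paper's actual solvers; the random linear systems, penalty weight `lam`, and use of a generic L-BFGS optimizer are all illustrative assumptions): instead of constraining the alignment map C to be exactly orthogonal, we add a soft penalty on how far CᵀC is from the identity.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n = 5
A1 = rng.standard_normal((n, n)) / np.sqrt(n)
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))  # random orthogonal map
A2 = Q @ A1 @ Q.T  # same dynamics as A1, expressed in a rotated basis

lam = 10.0  # orthogonality penalty weight (illustrative choice)

def objective(c_flat):
    C = c_flat.reshape(n, n)
    align = np.linalg.norm(C @ A1 - A2 @ C) ** 2      # alignment residual
    penalty = np.linalg.norm(C.T @ C - np.eye(n)) ** 2  # soft orthogonality
    return align + lam * penalty

# Unconstrained solve: no projection back onto the orthogonal group needed
res = minimize(objective, np.eye(n).ravel(), method="L-BFGS-B")
C = res.x.reshape(n, n)
print("aligned residual:", np.linalg.norm(C @ A1 - A2 @ C))
```

Because the penalty keeps the problem smooth and unconstrained, each iteration avoids the expensive exact-orthogonality projection, which is where the speedup in this toy version comes from.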
The original dynamic similarity analysis (DSA) developed by @neurostrow.bsky.social and Ila Fiete is a powerful method to compare trajectories of (nonlinear) neural dynamics between different datasets and models: arxiv.org/abs/2306.10168
Wanna compare dynamics across neural data, RNNs, or dynamical systems? We've got a fast and furious method 🏎️
The 1st preprint of my PhD 🥳 fast dynamical similarity analysis (fastDSA):
📜: arxiv.org/abs/2511.22828
💻: github.com/CMC-lab/fast...
I’ll be @cosynemeeting.bsky.social - happy to chat 😉
Way back in 1999, Kenji Doya sketched a big picture theory of the brain:
1️⃣The cerebellum is specialized for supervised learning
2️⃣The basal ganglia are for reinforcement learning
3️⃣The cerebral cortex is for unsupervised learning
How does this hold up in 2026? www.sciencedirect.com/science/arti...
From sensory to perceptual manifolds: The twist of neural geometry
doi.org/10.1126/scia...
#neuroscience
Thank you for having me on BrainInspired, Paul @braininspired.bsky.social! It was such an honor to be on my favorite show, a rare place where we can leisurely talk about manifolds, latent circuits, power laws, and other esoteric ideas, and still be taken seriously, knowing they are all real.