Postdoc position in Paris: come help develop a new generation of human brain-computer interfaces ⚡🧠💻
Interested? Contact me if you have experience with machine learning (e.g. simulation-based inference, RL, generative/diffusion models) or dynamical systems.
See below for more details, and please retweet 🙏
Posts by Matthijs Pals
Diagram of a recurrent neural network: input goes into the network, output is compared to a target to produce an error, and dotted feedback arrows show updates to neural activity and to synaptic weights.
1/7 How should feedback signals influence a network during learning? Should they first adjust synaptic weights, which then indirectly change neural activity (as in backprop.)? Or should they first adjust neural activity to guide synaptic updates (e.g., target prop.)? openreview.net/forum?id=xVI...
Thanks for the insightful response! I see how multiple populations overcome the limit of shared gain - yet in data there can be large overlaps in units tracking different variables simultaneously. And yes, maybe one shouldn’t think of separate tasks (e.g., two rings), but rather one task (one torus)
In some way this could be seen as doing multiple tasks at the same time (e.g., 3 ring attractors for storing three angular variables). Do you have any idea or speculation on how to extend your framework to this setting? Thanks! 2/2
Hi, great work and nicely written paper! It seems here there is at most one task active at a given time. It has been shown that macaques can memorise multiple stimuli at the same time in (not perfectly) orthogonal subspaces using overlapping populations of units pubmed.ncbi.nlm.nih.gov/39178858/ 1/2
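The thread above discusses storing angular variables in ring attractors. For readers unfamiliar with the idea, here is a toy ring-attractor network in plain NumPy that holds one angle in persistent activity after a transient cue; all parameter values are illustrative assumptions, not taken from the papers under discussion:

```python
import numpy as np

# Toy ring attractor: N neurons on a ring with cosine-tuned recurrent
# weights hold a cued angle as a persistent activity bump.
N = 64                                   # neurons on the ring
theta = np.linspace(0, 2 * np.pi, N, endpoint=False)
J0, J1 = -2.0, 3.0                       # uniform inhibition + cosine excitation
W = (J0 + J1 * np.cos(theta[:, None] - theta[None, :])) / N

def f(x):
    """Steep saturating rectification; keeps rates in [0, 1]."""
    return np.clip(10 * x, 0.0, 1.0)

def simulate(cue_angle, dt=0.2, steps_cue=200, steps_delay=400):
    r = np.zeros(N)
    cue = 0.5 * np.exp(np.cos(theta - cue_angle) - 1.0)  # bump-shaped input
    for step in range(steps_cue + steps_delay):
        inp = cue if step < steps_cue else 0.0           # cue, then delay
        r = r + dt * (-r + f(W @ r + inp))
    # population-vector decoding of the remembered angle
    angle = np.arctan2(np.sum(r * np.sin(theta)), np.sum(r * np.cos(theta)))
    return angle, r

angle, rates = simulate(cue_angle=1.0)
```

After the cue is removed, the recurrent dynamics keep the bump alive, so the decoded angle stays near the cued value; storing several angles at once (the torus picture above) would require multiple such populations or subspaces.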
Our paper on data-constrained RNNs that generalize to optogenetic perturbations is now citable on eLife:
doi.org/10.7554/eLif...
Finally got the job ad—looking for 2 PhD students to start spring next year:
www.gao-unit.com/join-us/
If comp neuro, ML, and AI4Neuro are your thing, or you just nerd out over brain recordings, apply!
I'm at NeurIPS. DM me here / on the conference app or email if you want to meet 🏖️🌮
We are looking for a Research Engineer (E13 TV-L) to work at the intersection of #ML and #compneuro! 🤖🧠
Help us build large-scale bio-inspired neural networks, write high-quality research code, and contribute to open-source tools like jaxley, sbi, and flyvis 🪰.
More info: www.mackelab.org/jobs/
MackeLab has grown! 🎉 Warm welcome to 5(!) brilliant and fun new PhD students / research scientists who joined our lab in the past year — we can't wait to do great science, and we're already having a good time together! 🤖🧠 Meet them in the thread 👇 1/7
I am super happy to share that our project on training biophysical models with Jaxley is now published in Nature Methods: www.nature.com/articles/s41...
Our work on training biophysical models with Jaxley is now out in @natmethods.nature.com. Led by @deismic.bsky.social, with @philipp.hertie.ai, @ppjgoncalves.bsky.social & @jakhmack.bsky.social et al.
Paper: www.nature.com/articles/s41...
Really cool work! 🔥
The Macke lab is well-represented at the @bernsteinneuro.bsky.social conference in Frankfurt this year! We have lots of exciting new work to present with 7 posters (details👇) 1/9
I've been waiting some years to make this joke and now it’s real:
I conned somebody into giving me a faculty job!
I’m starting as a W1 Tenure-Track Professor at Goethe University Frankfurt in a week (lol), in the Faculty of CS and Math
and I'm recruiting PhD students 🤗
Our #AI #DynamicalSystems #FoundationModel DynaMix was accepted to #NeurIPS2025 with outstanding reviews (6555) – the first model that can *zero-shot*, w/o any fine-tuning, forecast the *long-term statistics* of a time series given a context. Test it on #HuggingFace:
huggingface.co/spaces/Durst...
From hackathon to release: sbi v0.25 is here! 🎉
What happens when dozens of SBI researchers and practitioners collaborate for a week? New inference methods, new documentation, lots of new embedding networks, a bridge to pyro and a bridge between flow matching and score-based methods 🤯
1/7 🧵
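For readers new to simulation-based inference, the core idea behind sbi can be illustrated with a toy rejection-ABC loop in plain NumPy (this is a conceptual sketch, not one of the sbi package's actual inference methods; the model and numbers are made up):

```python
import numpy as np

# Toy rejection ABC: infer theta from observed data x_obs when only a
# simulator (no likelihood) is available. Model: x = theta + Gaussian noise.
rng = np.random.default_rng(0)

def simulator(theta):
    return theta + rng.normal(0.0, 1.0, size=theta.shape)

x_obs = 1.5                                          # "observed" data point
theta_prior = rng.uniform(-5.0, 5.0, size=200_000)   # samples from the prior
x_sim = simulator(theta_prior)                       # simulate each sample
accepted = theta_prior[np.abs(x_sim - x_obs) < 0.2]  # keep close matches

# Accepted samples approximate the posterior; with a flat prior the
# posterior mean sits near x_obs in this toy model.
posterior_mean = accepted.mean()
```

Modern SBI methods (like those in the sbi package) replace this wasteful accept/reject step with learned neural density estimators, but the goal is the same: a posterior over parameters given only simulations.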
Got provisional approval for two major grants in Neuro-AI & Dynamical Systems Reconstruction, on learning & inference in non-stationary environments, out-of-domain generalization, and DS foundation models. To all AI/math/DS enthusiasts: Expect job announcements (PhD/PostDoc) soon! Feel free to get in touch.
Jelmer Borst and I are looking for a PhD candidate to build an EEG-based model of human working memory! This is a really cool project that I've wanted to kick off for a while, and I can't wait to see it happen. Please share and I'm happy to answer any Qs about the project!
www.rug.nl/about-ug/wor...
The neurons that encode sequential information into working memory do not fire in that same order during recall, a finding that is at odds with a long-standing theory. Read more in this month’s Null and Noteworthy.
By @ldattaro.bsky.social
#neuroskyence
www.thetransmitter.org/null-and-not...
How do animals learn new rules? By systematically testing different behavioral strategies, guided by selective attention to rule-relevant cues: rdcu.be/etlRV
Akin to in-context learning in AI, strategy selection depends on the animals' "training set" (prior experience), with similar representations in rats & humans.
Out today in @nature.com: we show that individual neurons have diverse tuning to a decision variable computed by the entire population, revealing a unifying geometric principle for the encoding of sensory and dynamic cognitive variables.
www.nature.com/articles/s41...
Our new preprint 👀
We just pushed “Memory by a 1000 rules” onto bioRxiv, where we use clever #ML to find #plasticity quadruplets (EE, EI, IE, II) that learn basic stability in spiking nets. Why is it cool? We find 1000s!! of solutions, and they don’t just stabilise. They #memorise! www.biorxiv.org/content/10.1...
A wide shot of approximately 30 individuals standing in a line, posing for a group photograph outdoors. The background shows a clear blue sky, trees, and a distant cityscape or hills.
Great news! Our March SBI hackathon in Tübingen was a huge success, with 40+ participants (30 onsite!). Expect significant updates soon: awesome new features & a revamped documentation you'll love! Huge thanks to our amazing SBI community! Release details coming soon. 🥁 🎉
Please RT🙏
Reach out if you want to help understand cognition by modelling, analyzing, and/or collecting large-scale intracortical data from 👩🐒🐁
We're a friendly, diverse group (n>25) w/ this terrace 😎 in the center of Paris! See 👇 for more info about the lab
We have funding to support your application!
🎓Hiring now! 🧠 Join us at the exciting intersection of ML and Neuroscience! #AI4science
We’re looking for PhDs, Postdocs and Scientific Programmers who want to use deep learning to build, optimize and study mechanistic models of neural computations. Full details: www.mackelab.org/jobs/ 1/5
Re-posting is appreciated: We have a fully funded PhD position in CMC lab @cmc-lab.bsky.social (at @tudresden_de). You can use forms.gle/qiAv5NZ871kv... to send your application and find more information. Deadline is April 30. Find more about CMC lab: cmclab.org and email me if you have questions.
Excited to present our work on compositional SBI for time series at #ICLR2025 tomorrow!
If you're interested in simulation-based inference for time series, come chat with Manuel Gloeckler or Shoji Toyota
at Poster #420, Saturday 10:00–12:00 in Hall 3.
📰: arxiv.org/abs/2411.02728
Excited to announce that our paper on "Comparing noisy neural population dynamics using optimal transport distances" has been selected for an oral presentation at #ICLR2025 (top 1.8% of papers). Check the thread for paper details (0/n).
Presentation info: iclr.cc/virtual/2025....
Happening tomorrow morning :).