With the rumblings around CIHR wanting to move to one cycle per year not going away, in consultation with a diverse stakeholder group, we have put together a letter for CIHR leadership. Please consider signing this letter and share/RT: docs.google.com/forms/d/e/1F... @cannabrain.bsky.social
Posts by Matt Perich
Some of you saw a preview of this result at my Cosyne talk last week. We may have had too much fun working on this worm-fly model 🤣🤓🤣
(The digital sphinx may be imagery, but the lessons are real.)
Here's a lovely #blueprint on a new study from our lab led by @royeyono.bsky.social.
tl;dr: it implies that there may be interneurons whose role is to normalize credit assignment signals during learning.
#neuroscience 🧪
We think we're just scratching the surface of what this approach can uncover, and we’d love to see how it works in new scenarios! If you're interested in trying JEDI on your datasets, you can get the code here: github.com/sinthlab/JEDI. Reach out to me or @anirudhgj.bsky.social if you have questions!
Overall, I’m excited about the potential here for getting a deeper view into neural dynamics in real datasets, where it’s difficult to know the true connectivity or dynamical regimes. And this is really scalable: more conditions/trials/etc should only improve the generative model of the dynamics!
When we looked at the fixed point structure of motor cortex, we found very few stable fixed points. Perhaps not too surprising since the motor cortex must always be responding to feedback! Indeed, the only stable fixed points we found were at the end of the reach.
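For anyone curious what that kind of fixed-point analysis looks like in practice, here's a rough sketch in the Sussillo & Barak style: minimize the speed ||F(x) - x|| of a fitted RNN from many initial states, then classify each point's stability from the Jacobian eigenvalues. This is not our actual analysis code; the toy network, names, and thresholds below are made up purely for illustration.

```python
import torch

# Toy stand-in for a fitted rate RNN: discrete-time dynamics x_{t+1} = tanh(W @ x_t).
# W is random here, purely for demonstration.
torch.manual_seed(0)
n = 50
W = torch.randn(n, n) / n**0.5

def step(x):
    return torch.tanh(x @ W.T)

def find_fixed_point(x0, n_iters=2000, lr=0.05):
    """Minimize the squared speed ||step(x) - x||^2 starting from x0."""
    x = x0.clone().requires_grad_(True)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(n_iters):
        opt.zero_grad()
        loss = ((step(x) - x) ** 2).sum()
        loss.backward()
        opt.step()
    return x.detach(), ((step(x) - x) ** 2).sum().item()

# Search from many initial states (in practice, seeded from observed activity).
fixed_points = []
for _ in range(20):
    x_star, speed = find_fixed_point(torch.randn(n) * 0.5)
    if speed < 1e-8:  # keep only (numerically) true fixed points
        fixed_points.append(x_star)

# A discrete-time fixed point is stable if all Jacobian eigenvalues
# lie inside the unit circle.
for x_star in fixed_points:
    J = torch.autograd.functional.jacobian(step, x_star)
    lam = torch.linalg.eigvals(J)
    print(f"|lambda|_max = {lam.abs().max().item():.3f}, "
          f"stable = {bool((lam.abs() < 1).all())}")
```

With real fitted networks, you'd seed these searches from activity states in different task epochs (planning, mid-reach, end of reach) to map out where any stable points live.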
Interestingly, when we compared JEDI models fit to the planning period (where activity ramps from a quiescent state) and to movement (where it becomes a feedback controller), we saw a shift in the eigenspectra towards “edge of chaos” dynamics, consistent with some theories of neural computation.
So we have a model which learns useful dynamical features from neural population time series. Let’s see how it works on real neural data. We fit JEDI to motor cortex activity during monkey delayed center-out reaching, and found that our embeddings mapped well onto reach directions.
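As a rough illustration of what "mapped well onto reach directions" could mean in practice, one simple check is cross-validated decoding of reach direction from the per-trial embeddings. The snippet below uses random placeholder data and an off-the-shelf classifier; it's a sketch of the idea, not the paper's analysis.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# embeddings: (n_trials, embedding_dim) learned per-trial context embeddings
# reach_dir:  (n_trials,) integer label of the reach target (e.g., 0..7)
# Random placeholders here, just so the snippet runs end-to-end.
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(400, 8))
reach_dir = rng.integers(0, 8, size=400)

clf = LogisticRegression(max_iter=1000)
acc = cross_val_score(clf, embeddings, reach_dir, cv=5)
print(f"decoding accuracy: {acc.mean():.2f} +/- {acc.std():.2f} (chance ~ {1/8:.2f})")
```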
We also ran JEDI on task-trained RNNs performing the MemoryPro task (see: Yang 2019, Driscoll 2024). Driscoll et al. beautifully showed the fixed point structure produced by these networks. We show that JEDI infers this structure from the unit activations alone, without access to the ground-truth weights.
We have a few examples of this with simulations where the ground-truth dynamics are known. First, we fit JEDI to RNNs driven by oscillatory inputs of different frequencies. Analysis of the inferred RNN weight eigenspectra showed oscillatory (imaginary) components whose frequencies increased with the input frequency, exactly as expected!
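To make the eigenvalue-to-frequency link concrete: for discrete-time dynamics with timestep dt, an eigenvalue's complex angle per step corresponds to an oscillation at angle/(2*pi*dt) Hz. A toy check of that readout (the numbers here are made up, not from the paper):

```python
import numpy as np

# Toy linear recurrent dynamics x_{t+1} = W @ x_t, built to rotate at a known
# frequency so the eigenvalue readout can be verified.
dt = 0.01          # assumed 10 ms timestep
f_true = 3.0       # 3 Hz oscillation
theta = 2 * np.pi * f_true * dt
W = 0.99 * np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

lam = np.linalg.eigvals(W)
# Imaginary (rotational) components: frequency = |angle| / (2 * pi * dt).
freqs = np.abs(np.angle(lam)) / (2 * np.pi * dt)
print("recovered frequencies (Hz):", np.round(freqs, 2))  # ~[3. 3.]
```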
Structured embeddings are useful, but the real value is in our ability to infer RNN weights that reproduce the neural dynamics. This is our key to reverse-engineering potential mechanisms underlying those dynamics (e.g., eigenspectra, fixed points, etc).
Using simulated datasets of RNNs driven by varied inputs, we show that JEDI learns context-specific embeddings at least as effectively as classic methods like VAEs, but with more structure due to the joint learning with the system’s dynamics.
JEDI is built on hypernetworks (networks that learn to produce the weights of other networks) to generate RNNs that reproduce neural population recordings from learned context embeddings. This lets us flexibly account for dynamical variation across time, trials, behaviors, contexts, etc.
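If the hypernetwork idea is unfamiliar, here's a bare-bones sketch of the general pattern: a small network maps a learned context embedding to the recurrent weights of a generated RNN, which is then asked to reproduce recorded activity. Layer sizes, names, and the loss are illustrative assumptions, not the actual JEDI architecture (see the repo for the real thing).

```python
import torch
import torch.nn as nn

class HyperRNN(nn.Module):
    """Toy hypernetwork: map a context embedding z to the recurrent weights
    of a rate RNN, then unroll that RNN to produce activity."""
    def __init__(self, n_neurons=50, z_dim=8, hidden=128):
        super().__init__()
        self.n = n_neurons
        self.hyper = nn.Sequential(            # embedding -> flattened weights
            nn.Linear(z_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, n_neurons * n_neurons),
        )

    def forward(self, z, x0, n_steps):
        W = self.hyper(z).view(self.n, self.n) / self.n**0.5
        x, traj = x0, []
        for _ in range(n_steps):
            x = torch.tanh(x @ W.T)            # generated RNN dynamics
            traj.append(x)
        return torch.stack(traj)               # (n_steps, n_neurons)

# One embedding per context/trial; the embeddings and hypernetwork would be
# optimized jointly so the generated trajectories match the recorded
# single-neuron activity (e.g., with an MSE loss).
model = HyperRNN()
z = torch.zeros(8, requires_grad=True)
traj = model(z, x0=torch.randn(50), n_steps=100)
loss = (traj - torch.zeros_like(traj)).pow(2).mean()  # placeholder "data"
loss.backward()                                        # gradients reach z too
```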
Work led by @anirudhgj.bsky.social and Ali Korojy, with collaborators @oliviercodol.bsky.social and @glajoie.bsky.social. This work is spiritually indebted to my past work with @kanakarajanphd.bsky.social on CURBD (www.biorxiv.org/content/10.1...), but with some slightly different goals.
New paper! We introduce JEDI, Jointly Embedded Dynamics Inference for neural dynamics.
arxiv.org/abs/2603.10489. JEDI flexibly infers dynamical principles (across behaviors/contexts) from neural population data through RNNs constrained at single-neuron resolution to reproduce that data.
Thanks! That's really great to hear!
But: in our recent NeurIPS paper we show pre-training a more general decoder on monkey data helps future performance for human speech decoding, so I definitely think these things should generalize across species within reaching tasks!
arxiv.org/abs/2506.05320
Great question! I would make the strong prediction that we can. We didn't do it in the current paper because the datasets are so varied; we don't have a good "apples to apples" set of behavioral signals to test x-species decoding.
Very cool work, thanks for sharing. I'd be curious to compare it to our simulation in Fig. 6 of the paper; it was pretty easy to get networks doing feedback control with quite different dynamics, even in the same task.
Thanks! It's always a concern, but here we actually have some extremely different task structures, e.g., our human participant moving objects across a table with trials on the order of ~10 s, compared to mice pulling a lever with trials on the order of ~100 ms. Which makes me think there's more to it than that.
Thanks! Let us know what you think!
Thanks!
Juan & tacos, true love. And such youth!
They also are missing those pesky spinal cords 😬
Don't worry, we love the cerebellum here, just a more focused graphic design choice!
P.S., for those at Cosyne26, come find me or Margaux if you want to chat about these results! And stop by Margaux’s main meeting talk at 9:45am on Friday! 19/18
I’ll end by echoing the sentiment put forward recently by @suthanalab.bsky.social. I think comparative analyses across species are ultimately going to greatly improve our understanding of neural computations and brain function. www.thetransmitter.org/animal-model... 18/18
Of course, the broader behavioral repertoires of these species can be quite different. There's much fascinating future work to understand how this shared base of computation adapts to enable, say, a human to play a piano sonata. But at some level, we argue there is conservation across species. 17/18
In summary, we argue that, at least for shared behaviors like reaching and grasping, evolution can maintain and repurpose computations as a base for future behavioral adaptations. 16/18
Using DSA and CCA, we found that RNNs find a wide range of control solutions, but geometry generally tended to track behavior while dynamics were largely independent of it. Interestingly, it was quite difficult to produce conserved geometries *and* dynamics without maintaining conserved circuit properties. 15/18
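For the CCA half of that comparison, the basic recipe is to align the two networks' population activity with canonical correlation analysis and use the mean canonical correlation as a similarity index (DSA needs its own tooling, so it's not shown here). A minimal, self-contained sketch with placeholder data, not our analysis code:

```python
import numpy as np
from sklearn.cross_decomposition import CCA

# Population activity from two trained RNNs on matched trials, stacked over
# time x trials: shape (n_samples, n_units). Placeholder data here.
rng = np.random.default_rng(0)
latent = rng.normal(size=(2000, 10))               # shared signal driving both
X = latent @ rng.normal(size=(10, 100)) + 0.1 * rng.normal(size=(2000, 100))
Y = latent @ rng.normal(size=(10, 120)) + 0.1 * rng.normal(size=(2000, 120))

n_comp = 10
cca = CCA(n_components=n_comp, max_iter=1000)
Xc, Yc = cca.fit_transform(X, Y)
# Mean canonical correlation: 1 means identical geometry up to a linear map.
corrs = [np.corrcoef(Xc[:, i], Yc[:, i])[0, 1] for i in range(n_comp)]
print(f"mean canonical correlation: {np.mean(corrs):.2f}")
```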
We then use RNN simulations with MotorNet (elifesciences.org/articles/88591) to explore how geometry relates to dynamics in neural circuits, by manipulating architectural properties (learning rule, effector, etc) and training the RNNs to perform the same reaching task. 14/18