
Posts by Matt Perich

Preview
CIHR Project Grant Cycle Restructuring Dear colleagues, Below you will find a draft letter regarding the proposed transition to a single annual CIHR Project Grant competition; we invite you to sign on if you are in agreement with its c...

The rumblings about CIHR moving to one Project Grant competition cycle per year aren't going away, so, in consultation with a diverse stakeholder group, we have put together a letter for CIHR leadership. Please consider signing this letter and share/RT: docs.google.com/forms/d/e/1F... @cannabrain.bsky.social

3 weeks ago 34 31 4 2

Some of you saw a preview of this result at my Cosyne talk last week. We may have had too much fun working on this worm-fly model 🤣🤓🤣

(The digital sphinx may be imagery, but the lessons are real.)

3 weeks ago 64 11 0 2

Here's a lovely #blueprint on a new study from our lab led by @royeyono.bsky.social.

tl;dr: it implies that there may be interneurons whose role is to normalize credit assignment signals during learning.

#neuroscience 🧪

1 month ago 50 12 2 0
Preview
GitHub - sinthlab/JEDI: Official code for paper "JEDI: Jointly Embedded Inference of Neural Dynamics" - sinthlab/JEDI

We think we're just scratching the surface of what this approach can uncover, and we’d love to see how it works in new scenarios! If you're interested in trying JEDI on your datasets, you can get the code here: github.com/sinthlab/JEDI. Reach out to me or @anirudhgj.bsky.social if you have questions!

1 month ago 0 0 0 0

Overall, I’m excited about the potential here for getting a deeper view into neural dynamics in real datasets, where it’s difficult to know the true connectivity or dynamical regimes. And this is really scalable: more conditions/trials/etc should only improve the generative model of the dynamics!

1 month ago 0 0 1 0

When we looked at the fixed point structure of motor cortex, we found very few stable fixed points. Perhaps not too surprising since the motor cortex must always be responding to feedback! Indeed, the only stable fixed points we found were at the end of the reach.

1 month ago 0 0 1 0
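The thread doesn't show how the fixed points were found, so here is a minimal sketch of the standard recipe (Sussillo & Barak, 2013): treat fixed-point finding as minimizing q(x) = ½‖F(x) − x‖² from many initial states, then classify stability from the Jacobian at each solution. The tanh dynamics and random weights below are assumptions standing in for a JEDI-inferred model, not the paper's code.

```python
# Generic fixed-point search -- a sketch, not the paper's analysis code.
# Assumed discrete-time dynamics: x_{t+1} = F(x_t) = tanh(W x_t).
import torch

torch.manual_seed(0)
n = 50
W = 1.2 * torch.randn(n, n) / n**0.5  # stand-in for inferred weights

def F(x):
    return torch.tanh(x @ W.T)

# Minimize q(x) = 0.5 * ||F(x) - x||^2 from many random initial states;
# minima with q ~ 0 are candidate fixed points.
x = torch.randn(128, n, requires_grad=True)
opt = torch.optim.Adam([x], lr=0.01)
for _ in range(2000):
    opt.zero_grad()
    q = 0.5 * ((F(x) - x) ** 2).sum(dim=1)
    q.sum().backward()
    opt.step()

with torch.no_grad():
    q = 0.5 * ((F(x) - x) ** 2).sum(dim=1)
candidates = x[q < 1e-6].detach()  # converged points (duplicates not merged)

# Stable iff all Jacobian eigenvalues lie inside the unit circle (discrete time)
for xf in candidates[:5]:
    J = torch.autograd.functional.jacobian(F, xf)
    print("stable" if torch.linalg.eigvals(J).abs().max() < 1 else "unstable")
```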

Interestingly, when we compared JEDI models fit to the planning period (where activity ramps from a quiescent state) and to movement (where it becomes a feedback controller), we saw a shift in the eigenspectra toward “edge of chaos” dynamics, consistent with some theories of neural computation.

1 month ago 0 0 1 0
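For intuition about what “edge of chaos” looks like in an eigenspectrum: in a discrete-time linearization x_{t+1} = W x_t, stability is governed by the spectral radius of W, and “edge of chaos” corresponds to eigenvalues approaching the unit circle. A toy sketch with random Gaussian weights (an assumption, not the paper's fitted models):

```python
# Spectral radius vs. gain for random recurrent weights -- a toy
# illustration of the stability boundary, not the paper's analysis.
import numpy as np

rng = np.random.default_rng(0)
n = 200
for g in [0.5, 1.0, 1.5]:  # gain sweep: stable -> edge of chaos -> chaotic
    W = g * rng.standard_normal((n, n)) / np.sqrt(n)
    radius = np.abs(np.linalg.eigvals(W)).max()
    print(f"g = {g}: spectral radius ~ {radius:.2f}")
# By the circular law, the eigenvalues fill a disk of radius ~g, so the
# spectrum reaches the unit circle (the stability boundary) near g = 1.
```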

So we have a model that learns useful dynamical features from neural population time series. Let’s see how it works in real neural data. We fit it to motor cortex activity during monkey delayed center-out reaching and found that our embeddings mapped well onto reach directions.

1 month ago 0 0 1 0
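One way to make “embeddings mapped well onto reach directions” concrete (a generic check, not the paper's analysis): decode the reach target from per-trial embeddings with cross-validated logistic regression. All data below are synthetic stand-ins.

```python
# Decoding reach direction from context embeddings -- hypothetical sketch.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, embed_dim, n_dirs = 200, 8, 8            # 8 center-out targets
directions = rng.integers(0, n_dirs, n_trials)
# Synthetic embeddings: direction-dependent mean plus noise
means = rng.standard_normal((n_dirs, embed_dim))
z = means[directions] + 0.5 * rng.standard_normal((n_trials, embed_dim))

acc = cross_val_score(LogisticRegression(max_iter=1000), z, directions, cv=5)
print(f"decoding accuracy: {acc.mean():.2f} (chance = {1/n_dirs:.2f})")
```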

We also ran JEDI on task-trained RNNs performing the MemoryPro task (see Yang 2019, Driscoll 2024). Driscoll et al. beautifully showed the fixed point structure produced by these networks. We show that JEDI infers this structure from the unit activations alone, without access to the ground-truth weights.

1 month ago 0 0 1 0

We have a few examples of this with simulations where the ground-truth dynamics are known. First, we fit JEDI to RNNs driven by oscillatory inputs of different frequencies. Analysis of the inferred RNN weight eigenspectra showed oscillatory (imaginary) components whose frequency increased with the input frequency, exactly as expected!

1 month ago 0 0 1 0
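The eigenvalue-frequency correspondence used here can be made concrete: a discrete-time mode with eigenvalue λ = r·e^{iθ} rotates by θ radians per step, so it oscillates at θ/(2π) cycles per step. A tiny check, assuming a 10 ms time step:

```python
# Reading an oscillation frequency off a complex eigenvalue -- toy example.
import numpy as np

dt = 0.01                        # assumed time step (10 ms)
theta = 2 * np.pi * 5 * dt       # construct a 5 Hz mode
lam = 0.99 * np.exp(1j * theta)  # eigenvalue just inside the unit circle

f_hz = np.angle(lam) / (2 * np.pi * dt)   # recover frequency from the phase
print(f"mode frequency ~ {f_hz:.1f} Hz")  # ~5.0 Hz, matching construction
```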

Structured embeddings are useful, but the real value is in our ability to infer RNN weights that reproduce the neural dynamics. This is our key to reverse-engineering potential mechanisms underlying those dynamics (e.g., eigenspectra, fixed points, etc).

1 month ago 0 0 1 0

Using simulated datasets of RNNs driven by varied inputs, we show that JEDI learns context-specific embeddings at least as effectively as classic methods like VAEs, but with more structure due to the joint learning with the system’s dynamics.

1 month ago 0 0 1 0

JEDI is built on hypernetworks (networks that learn to produce the weights of other networks) to generate RNNs that reproduce neural population recordings from learned context embeddings. This lets us flexibly account for dynamical variation across time, trials, behaviors, contexts, etc.

1 month ago 1 0 1 0
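For readers new to hypernetworks, here is a minimal sketch of the pattern described above. It is not the JEDI implementation, and all dimensions, the tanh dynamics, and the loss are illustrative assumptions: an MLP maps a learned context embedding to the recurrent weights of an RNN, which is unrolled and penalized for mismatching the recorded activity, so the embedding and the dynamics are learned jointly.

```python
# Hypernetwork sketch: context embedding -> RNN weights -> activity.
import torch
import torch.nn as nn

class HyperRNN(nn.Module):
    def __init__(self, n_neurons=30, embed_dim=8, n_contexts=4):
        super().__init__()
        self.n = n_neurons
        self.context_embed = nn.Embedding(n_contexts, embed_dim)
        # Hypernetwork: embedding -> flattened recurrent weight matrix
        self.hyper = nn.Sequential(
            nn.Linear(embed_dim, 64), nn.Tanh(),
            nn.Linear(64, n_neurons * n_neurons),
        )

    def forward(self, context_id, x0, n_steps):
        z = self.context_embed(context_id)      # learned context embedding
        W = self.hyper(z).view(self.n, self.n)  # context-specific weights
        xs, x = [], x0
        for _ in range(n_steps):                # unroll the generated RNN
            x = torch.tanh(x @ W.T)
            xs.append(x)
        return torch.stack(xs)                  # (n_steps, n_neurons)

model = HyperRNN()
rates = torch.randn(100, 30)                    # stand-in for recordings
pred = model(torch.tensor(0), rates[0], n_steps=99)
loss = ((pred - rates[1:]) ** 2).mean()         # unit-by-unit reconstruction
loss.backward()                                 # trains embedding + hypernet
```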

Work led by @anirudhgj.bsky.social and Ali Korojy, with collaborators @oliviercodol.bsky.social and @glajoie.bsky.social. This work is spiritually indebted to my past work with @kanakarajanphd.bsky.social on CURBD (www.biorxiv.org/content/10.1...), but with some slightly different goals.

1 month ago 2 0 1 0
Preview
JEDI: Jointly Embedded Inference of Neural Dynamics Animal brains flexibly and efficiently achieve many behavioral tasks with a single neural network. A core goal in modern neuroscience is to map the mechanisms of the brain's flexibility onto the dynam...

New paper! We introduce JEDI: Jointly Embedded Inference of Neural Dynamics.
arxiv.org/abs/2603.10489. JEDI flexibly infers dynamical principles (across behaviors/contexts) from neural population data through RNNs constrained at single-neuron resolution to reproduce that data.

1 month ago 45 15 1 1
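One hedged reading of “constrained at single-neuron resolution” (notation mine, not from the paper): each model unit $\hat{r}_i(t)$ is tied to one recorded neuron $r_i(t)$, so the fit minimizes something like

$$\mathcal{L} = \sum_{i=1}^{N}\sum_{t}\big(\hat{r}_i(t) - r_i(t)\big)^2,$$

rather than matching only low-dimensional latent factors.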

Thanks! That's really great to hear!

1 month ago 2 0 0 0
Preview
Generalizable, real-time neural decoding with hybrid state-space models Real-time decoding of neural activity is central to neuroscience and neurotechnology applications, from closed-loop experiments to brain-computer interfaces, where models are subject to strict latency...

But: in our recent NeurIPS paper we show that pre-training a more general decoder on monkey data improves downstream performance on human speech decoding, so I definitely think these things should generalize across species within reaching tasks!

arxiv.org/abs/2506.05320

1 month ago 2 0 1 0
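The hybrid state-space architecture itself isn't reproduced here; the sketch below only illustrates the generic pretrain-then-transfer recipe the post invokes, with a shared backbone pretrained on a source dataset and fine-tuned with a fresh readout on a target dataset. All module choices, names, and shapes are assumptions.

```python
# Generic pretrain/fine-tune sketch -- not the paper's model.
import torch
import torch.nn as nn

backbone = nn.GRU(input_size=96, hidden_size=128, batch_first=True)

def train(readout, x, y, steps=200, lr=1e-3):
    params = list(backbone.parameters()) + list(readout.parameters())
    opt = torch.optim.Adam(params, lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        h, _ = backbone(x)                    # shared representation
        loss = ((readout(h) - y) ** 2).mean()
        loss.backward()
        opt.step()

# Pretrain on a source dataset (synthetic stand-ins throughout)
src_x, src_y = torch.randn(32, 50, 96), torch.randn(32, 50, 2)
train(nn.Linear(128, 2), src_x, src_y)        # e.g., 2-D cursor kinematics

# Fine-tune on a smaller target dataset with a new task-specific readout
tgt_x, tgt_y = torch.randn(8, 50, 96), torch.randn(8, 50, 40)
train(nn.Linear(128, 40), tgt_x, tgt_y, steps=50)  # e.g., speech features
```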

Great question! I would make the strong prediction that we can. We didn't do it in the current paper because the datasets are so varied; we don't have a good "apples to apples" set of behavioral signals to test x-species decoding.

1 month ago 2 0 1 0

Very cool work, thanks for sharing. I'd be curious to compare it to our simulation in Fig. 6 of the paper; it was pretty easy to get networks doing feedback control with quite different dynamics, even in the same task.

1 month ago 2 0 1 0

Thanks! It's always a concern, but here we actually have some extremely different task structures, e.g., our human participant moving objects across a table in trials on the order of ~10 s, compared to mice pulling a lever in trials on the order of ~100 ms. That makes me think there's more to it than that.

1 month ago 1 1 1 0

Thanks! Let us know what you think!

1 month ago 0 0 0 0

Thanks!

1 month ago 0 1 0 0

Juan & tacos, true love. And such youth!

1 month ago 1 0 0 0

They also are missing those pesky spinal cords 😬

Don't worry, we love the cerebellum here, just a more focused graphic design choice!

1 month ago 0 0 0 0

P.S., for those at Cosyne26, come find me or Margaux if you want to chat about these results! And stop by Margaux’s main meeting talk at 9:45am on Friday! 19/18

1 month ago 3 0 1 0
Preview
Neuroscience has a species problem If neuroscience is serious about building general principles of brain function, cross-species dialogue must become a core organizing principle.

I’ll end by echoing the sentiment put forward recently by @suthanalab.bsky.social. I think comparative analyses across species are ultimately going to greatly improve our understanding of neural computations and brain function. www.thetransmitter.org/animal-model... 18/18

1 month ago 4 0 1 0

Of course, the broader behavioral repertoires of these species can be quite different. There's much fascinating future work to understand how this shared base of computation adapts to enable, say, a human to play a piano sonata. But at some level, we argue there is conservation across species. 17/18

1 month ago 3 0 1 0

In summary, we argue that, at least for shared behaviors like reaching and grasping, evolution can maintain and repurpose computations as a base for future behavioral adaptations. 16/18

1 month ago 3 0 1 0

Using DSA and CCA, we found that RNNs find a wide range of control solutions, but geometry generally tended to track behavior while the dynamics varied independently. Interestingly, it was quite difficult to produce conserved geometries *and* dynamics without maintaining conserved circuit properties. 15/18

1 month ago 4 0 1 0
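DSA has its own tooling that isn't shown here, but the CCA half of such a comparison is easy to sketch with scikit-learn: align two populations' trajectories and read the mean canonical correlation as a geometry-similarity score. Synthetic data stand in for the RNN activity.

```python
# CCA-based geometry comparison between two populations -- generic sketch.
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
T, n1, n2, k = 500, 80, 60, 10   # timepoints, units per network, components

# Two "networks": shared latents seen through different mixings plus noise
latents = rng.standard_normal((T, k))
X = latents @ rng.standard_normal((k, n1)) + 0.1 * rng.standard_normal((T, n1))
Y = latents @ rng.standard_normal((k, n2)) + 0.1 * rng.standard_normal((T, n2))

Xc, Yc = CCA(n_components=k).fit(X, Y).transform(X, Y)
# Mean canonical correlation: ~1 = matched geometry, ~0 = unrelated
corrs = [np.corrcoef(Xc[:, i], Yc[:, i])[0, 1] for i in range(k)]
print(f"mean canonical correlation: {np.mean(corrs):.2f}")
```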

We then used RNN simulations with MotorNet (elifesciences.org/articles/88591) to explore how geometry relates to dynamics in neural circuits, manipulating architectural properties (learning rule, effector, etc.) and training the RNNs to perform the same reaching task. 14/18

1 month ago 3 0 1 0