Posts by Chris Versteeg

Feel free to reach out with questions, and thanks to my co-authors for their great work and the Brody Lab for their helpful conversations!
arxiv.org/abs/2309.06402 /end

2 years ago 0 0 0 0

In future work, we hope to decompose ODIN models fit to biological neural activity to uncover the computations performed by circuits in the brain! 14/

Applying ODIN to neural activity from the monkey motor cortex, we find that ODIN reconstructs held-out firing rates with high accuracy using only ~10 state dimensions, outperforming state-of-the-art models with more than double ODIN’s dimensionality. 13/

Additionally, ODIN recovers the nature of the simulated nonlinear embedding more accurately than the alternative readouts, suggesting that ODIN is well suited to model neural manifolds. 12/

We also find that ODIN allows for more accurate recovery of fixed points than models that don’t account for embedding nonlinearities. 11/

We find that models with Linear or MLP readouts fail to reconstruct neural activity or recover latents poorly when the state dimensionality is incorrectly chosen. In contrast, ODIN performs well at all relevant state dimensionalities. 10/

To test the ability of ODIN to accurately recover neural latent dynamics and their embedding, we simulated neural activity from a low-dimensional dynamical system nonlinearly embedded into neural activity. 9/
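
For intuition, a synthetic benchmark of this kind can be sketched as follows: a 2-D limit-cycle oscillator supplies the latent dynamics, and a random nonlinear network embeds those latents into many-neuron firing rates. The specific system, embedding network, and sizes below are illustrative assumptions, not the paper’s exact setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_latents(T=500, dt=0.01):
    # Latent dynamics: a 2-D limit-cycle oscillator, integrated with Euler steps.
    z = np.zeros((T, 2))
    z[0] = [1.0, 0.0]
    for t in range(1, T):
        x, y = z[t - 1]
        r2 = x * x + y * y
        dx = -y + x * (1.0 - r2)  # radial pull toward the unit circle
        dy = x + y * (1.0 - r2)
        z[t] = z[t - 1] + dt * np.array([dx, dy])
    return z

def embed(z, n_neurons=50):
    # Nonlinear embedding of the 2-D latents into N-dimensional firing rates
    # via a random one-hidden-layer network with a positivity-enforcing exp.
    W1 = rng.normal(size=(2, 32))
    W2 = rng.normal(size=(32, n_neurons))
    return np.exp(0.1 * (np.tanh(z @ W1) @ W2))

z = simulate_latents()
rates = embed(z)
spikes = rng.poisson(rates * 0.01)  # Poisson spike counts per time bin
```

Spike counts are drawn from a Poisson distribution around the embedded rates, giving ground-truth latents against which a model’s recovered latents can be compared.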

Our new readout, called Flow, is based on invertible ResNets. Flow models the embedding of latent activity into neural activity as a reversible dynamical system, imposing an inductive bias towards injectivity. 8/
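
The core trick behind invertible ResNets can be sketched in a few lines: if the residual function g is contractive (Lipschitz constant below 1), then y = x + g(x) is invertible, and the inverse can be recovered by fixed-point iteration. The weights and sizes below are illustrative placeholders, not the actual Flow readout.

```python
import numpy as np

rng = np.random.default_rng(1)
dim = 10

# Residual function g(x) = W2 @ tanh(W1 @ x). Since tanh is 1-Lipschitz,
# Lip(g) <= ||W2||_2 * ||W1||_2, so rescaling W2 enforces Lip(g) <= 0.9 < 1.
W1 = rng.normal(size=(dim, dim))
W2 = rng.normal(size=(dim, dim))
W2 *= 0.9 / (np.linalg.norm(W2, 2) * np.linalg.norm(W1, 2))

def g(x):
    return W2 @ np.tanh(W1 @ x)

def forward(x):
    # One invertible-ResNet block: y = x + g(x).
    return x + g(x)

def inverse(y, iters=200):
    # Because g is contractive, x -> y - g(x) has a unique fixed point
    # (Banach fixed-point theorem), and that fixed point is forward^{-1}(y).
    x = y.copy()
    for _ in range(iters):
        x = y - g(x)
    return x

x = rng.normal(size=dim)
y = forward(x)
x_rec = inverse(y)
```

Invertibility implies injectivity: distinct latent states must map to distinct outputs, which is the inductive bias the tweet describes.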

ODIN’s primary innovation is its injective nonlinear readout, which requires all latent activity to affect the neural reconstruction. This penalizes superfluous dynamical features, while the readout’s nonlinearity allows ODIN to capture nonlinear embeddings (i.e., neural manifolds). 7/

To fix these problems, we developed a new model called ODIN (Ordinary Differential equations autoencoder with Injective Nonlinear readout). 6/

tenor.com/bPbUv.gif

Our previous work (led by @arsedle) has shown that neural ODE-based architectures can recover the latent space better than RNNs. Unfortunately, we also found that higher-dimensional models of all types tend to sacrifice latent recovery for reconstruction performance! 5/

If we can gain confidence that data-trained models accurately capture the features of the neural circuit, we could trust that dynamical analyses applied to data-trained models will reveal the computational mechanisms of the brain! 4/

In contrast to task-trained models, “data-trained” (e.g., LFADS-like) models learn to approximate a latent dynamical system (the “generator”) and an embedding of those dynamics into neural space (the “readout”) that reconstructs observed spiking data. 3/
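
Schematically, a data-trained model pairs a learned latent update rule (the generator) with a map from latent states to firing rates (the readout). The shapes and functions below are hypothetical placeholders, not the LFADS implementation.

```python
import numpy as np

rng = np.random.default_rng(2)
latent_dim, n_neurons, T = 3, 20, 100

A = rng.normal(size=(latent_dim, latent_dim))  # generator weights (placeholder)
C = rng.normal(size=(latent_dim, n_neurons))   # readout weights (placeholder)

def generator_step(z):
    # The "generator": one recurrent update of the latent dynamical state.
    return np.tanh(A @ z)

def readout(z):
    # The "readout": embeds the latent state into nonnegative firing rates.
    return np.exp(0.1 * (C.T @ z))

z = rng.normal(size=latent_dim)
rates = np.zeros((T, n_neurons))
for t in range(T):
    z = generator_step(z)
    rates[t] = readout(z)
# Training would adjust A and C to maximize the (e.g., Poisson) likelihood
# of the observed spike counts under `rates`.
```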

Recent work has demonstrated that task-trained RNNs learn to perform computation via dynamical features (e.g. fixed points) that can provide an intuitive understanding of their underlying computational mechanisms. 2/
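
One common recipe for locating such fixed points is to minimize the "speed" q(z) = ||F(z) − z||² of the RNN update map F; minima with q ≈ 0 are fixed points. A toy numerical sketch, with a hypothetical weight matrix and finite-difference gradients for brevity:

```python
import numpy as np

# Toy RNN update F(z) = tanh(W z); W is a hypothetical 2-D weight matrix.
W = np.array([[0.5, -1.0],
              [1.0, 0.5]])

def rnn_step(z):
    return np.tanh(W @ z)

def find_fixed_point(z0, lr=0.1, steps=2000, eps=1e-5):
    # Gradient descent on q(z) = ||F(z) - z||^2 using forward-difference
    # gradients; converged points with q ~ 0 are fixed points of F.
    z = z0.copy()
    for _ in range(steps):
        q0 = np.sum((rnn_step(z) - z) ** 2)
        grad = np.zeros_like(z)
        for i in range(len(z)):
            dz = np.zeros_like(z)
            dz[i] = eps
            grad[i] = (np.sum((rnn_step(z + dz) - (z + dz)) ** 2) - q0) / eps
        z -= lr * grad
    return z

z_star = find_fixed_point(np.array([0.1, -0.1]))
speed = np.sum((rnn_step(z_star) - z_star) ** 2)
```

Linearizing F around each recovered fixed point then reveals the local dynamical structure (attractors, saddles, rotations) that supports the computation.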

Ever wondered if the dynamics learned by LFADS-like models could help us understand neural computation? @chethan, @arsedle, @JonathanDMcCart, and I developed ODIN to robustly recover latent dynamical features through the power of injectivity! arxiv.org/abs/2309.06402 1/
