Deeply grateful to the @simonsfoundation.org for launching SCENE and thrilled to join this 10-year journey into ecological neuroscience—unraveling how sensory and motor systems interact. Excited to collaborate with an incredible team of theorists and experimentalists working across species!
Posts by Andreas Tolias
Building on Lurz et al., our new Wang et al. paper studies movie-data performance vs. training-set size and compares scaling for Conv-LSTM vs. CvT (convolutional vision transformer)-LSTM. Details: www.nature.com/articles/s41...
In Lurz et al. (ICLR 2021), we analyzed scaling and generalization across animals in the context of visual response prediction (including behavioral modulation) with @sinzlab.bsky.social and @andreastolias.bsky.social: openreview.net/forum?id=Tp7...
A super exciting paper by @aecker.bsky.social and Marissa Weis, part of the #MICrONS package, deriving a set of principles to characterize the morphological diversity of excitatory neurons across cortical layers.
www.nature.com/immersive/d4...
We didn't know the optimal patterns driving mouse V1 neurons until the deep learning model by Walker et al. (2019). FYI: Unlike mice, Gabors actually describe macaque V1 neurons quite well (Fu et al., Cell Reports).
MICrONS represents a huge step forward for the field. Big data and AI will drive the next wave of discoveries in neuroscience.
Join me, @andreastolias.bsky.social, and many of the incredible MICrONS team members in an AI-driven approach to neuroscience discovery
Apply here: www.linkedin.com/jobs/view/42...
Or email us at recruiting@enigmaproject.ai
3/3 The core strength of our approach is robust prediction of neural responses to novel visual stimulus domains. Dyer's autoregressive approach generates latent embeddings for neural decoding, an entirely different architectural paradigm with different scientific objectives.
2/3 However, this is not the main point: these models serve fundamentally different purposes. Ours explicitly predicts neural responses to visual stimuli (an encoding model), creating functional digital twins.
1/3 Just for clarification: our foundation model was introduced on March 21, 2023, predating Dyer et al. by over six months.
www.biorxiv.org/content/bior...
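The encoding-vs-decoding distinction in the thread above can be made concrete with a toy sketch. Everything here (names, shapes, the rectified-linear and linear forms) is an illustrative assumption, not the published architectures: an encoding model maps a stimulus to predicted neural responses, while a decoding model maps responses back to a stimulus estimate or latent.

```python
import numpy as np

rng = np.random.default_rng(0)

n_pixels, n_neurons = 64, 10

# Encoding model (hypothetical toy): stimulus -> predicted responses.
W_enc = rng.normal(size=(n_neurons, n_pixels))

def encode(stimulus):
    """Predict each neuron's (non-negative) response to a stimulus."""
    return np.maximum(W_enc @ stimulus, 0.0)  # rectified linear responses

# Decoding model (hypothetical toy): responses -> stimulus estimate.
W_dec = rng.normal(size=(n_pixels, n_neurons))

def decode(responses):
    """Reconstruct a stimulus estimate from recorded responses."""
    return W_dec @ responses

stimulus = rng.normal(size=n_pixels)
responses = encode(stimulus)   # shape (n_neurons,)
stim_hat = decode(responses)   # shape (n_pixels,)
```

The two directions answer different questions: an encoding model can be probed with arbitrary novel stimuli (the digital-twin use case), while a decoding model extracts information already present in the recorded activity.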
8/8 Deep learning simulation enables systematic characterization at the representational level, though detailed mechanistic understanding at the circuit and cell-type level remains beyond current capabilities in the cortex.
7/8 and characterization of the feature landscape of mouse visual cortex (Tong et al., bioRxiv 2023)—just a few examples of their applications. Most importantly, they yield in silico predictions which are subsequently verified through experimental testing.
6/8 Predictive models also enabled systematic characterization of single-neuron invariance properties (Ding et al., bioRxiv 2023), center-surround interactions (Fu et al., bioRxiv 2023), color-opponency mechanisms (Höfling et al., eLife 2024),
5/8 Our models also revealed that mouse V1 neurons shift their selectivity toward UV when pupil dilation or running begins, despite maintaining stable spatial stimulus structure—discovered in the digital twin and validated experimentally in closed-loop studies (Franke et al., Nature 2022).
4/8 For example, these simulations revealed that mouse V1 neurons exhibit complex spatial features deviating from the common notion that Gabor-like stimuli are optimal (Walker, Sinz et al., Nature Neuroscience 2019).
3/8 When ANNs accurately simulate neural function, they facilitate 'mechanistic interpretability' (to borrow the AI term)—enabling rigorous representational-level analysis of neuronal tuning.
2/8 Moreover, both task- and data-driven neural predictive models are powerful tools to gain neuroscientific insights as we and others have demonstrated repeatedly.
1/8: This quote from our abstract refers to task-driven modeling approaches (e.g., Yamins, DiCarlo, et al.) which define computational objectives and reveal hidden representations closely matching brain activity—widely recognized for deepening insights into brain computations.
Huge thanks to @IARPAnews for funding this groundbreaking effort through the @BRAINinitiative, and to our amazing team at
@stanforduniversity.bsky.social @stanfordmedicine.bsky.social @BCM @Allen @Princeton @unigoettingen.bsky.social
#MICrONS #NeuroAI #Connectomics #FoundationModels #AI
Foundation models offer a powerful way to systematically decode the neural code of natural intelligence, bridging the gap between brain structure and function.
Instead, they preferentially connect based on shared functional tuning, choosing partners with similar feature selectivity (“what”) rather than merely receptive field overlap (“where”).
Using the digital twin of the MICrONS mouse—where the exact neuronal wiring was known—we found that neurons don't connect randomly, even when they are anatomically close enough to form connections.
www.nature.com/articles/s41...
Crucially, this robust generalization allowed us to create precise functional digital twins of individual mouse brains, combining functional predictions with known anatomical wiring.
Our foundation model generalized robustly to new neurons, new animals, and even previously unseen stimulus domains. It also accurately predicted entirely new modalities, such as anatomically defined cell types.
To systematically characterize neuronal function, we built the first foundation model of the mouse visual cortex—trained using deep learning on data pooled from multiple mice and cortical areas.
www.nature.com/articles/s41...
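One common way pooled training across mice enables generalization to new neurons and animals (the design used in Lurz et al.-style predictive models; this sketch is an illustrative assumption, not the paper's code) is a shared nonlinear core plus a lightweight per-neuron linear readout: the core learns features common to all animals, and each new mouse only contributes a small readout matrix.

```python
import numpy as np

rng = np.random.default_rng(1)

n_pixels, n_features = 64, 32

# Shared core: nonlinear features learned from data pooled across mice
# (a single tanh layer here; purely illustrative).
W_core = rng.normal(size=(n_features, n_pixels)) / np.sqrt(n_pixels)

def shared_core(stimulus):
    """Map a stimulus to features shared across all animals."""
    return np.tanh(W_core @ stimulus)

# Per-animal readouts: one linear map per mouse, rows = that mouse's neurons.
readouts = {
    "mouse_A": rng.normal(size=(120, n_features)),  # 120 recorded neurons
    "mouse_B": rng.normal(size=(80, n_features)),   # 80 recorded neurons
}

stimulus = rng.normal(size=n_pixels)
features = shared_core(stimulus)
preds = {mouse: R @ features for mouse, R in readouts.items()}
```

Under this split, adapting to a new animal means fitting only its small readout while the shared core stays fixed, which is one reason such models can generalize with limited per-animal data.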
After 7 years, thrilled to finally share our #MICrONS functional connectomics results!
We recorded activity from ~75K neurons in the visual cortex of a single mouse, then mapped its wiring using electron microscopy.
nature.com/immersive/d42859-025-00001-w/index.html