We’re excited to share ALL-GCL: a large-scale dataset of 2P Ca²⁺ imaging from 80,000+ cells in the mouse retinal ganglion cell layer, collected over 9 years. Includes rich metadata, shared stimuli & cell-type assignments designed for type-specific analyses, modeling, and ML.
📄: tinyurl.com/ymn53frf
Posts by Federico D’Agostino
This is a concrete step toward bridging the performance/understanding gap in vision science.
📄 Paper: openreview.net/forum?id=cnr...
⚙️ Code: github.com/bethgelab/wh...
🙏 A joint effort with @matthiaskue.bsky.social, Lisa Schwetlick, @bethgelab.bsky.social
#NeurIPS #CognitiveModeling
💬 Conceptually: Deep neural networks should be viewed as scientific instruments. They tell us what is predictable in human behavior.
We then use that information to ask why, building fully interpretable models that approach the performance of their black-box counterparts.
📈 The Result: SceneWalk-X (also re-implemented in #JAX ⚡)
These 3 mechanisms double SceneWalk’s explained variance on the MIT1003 dataset (from 35% to 70%)! We closed over 56% of the gap to deep networks, setting a new state of the art for mechanistic scanpath prediction.
↔️ 3. Cardinal + Leftward Bias
People tend to move their eyes more horizontally, and display a subtle initial bias for leftward movements. Adding this adaptive attentional prior further stabilized the model.
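As a rough sketch of what such an adaptive directional prior can look like (the functional form, parameter names, and decay schedule below are illustrative, not the paper’s):

```python
import numpy as np

def direction_prior(angles, horiz_strength=0.8, left_bias=0.3,
                    fixation_index=0, left_decay=2.0):
    """Prior over saccade directions (radians, 0 = rightward).
    A cos(2*theta) term favors horizontal over vertical movement
    (cardinal bias); a -cos(theta) term adds a leftward preference
    that fades over the scanpath. All parameters are illustrative."""
    lb = left_bias * np.exp(-fixation_index / left_decay)  # leftward bias decays
    logits = horiz_strength * np.cos(2 * angles) - lb * np.cos(angles)
    w = np.exp(logits - logits.max())
    return w / w.sum()
```

Early in viewing, leftward beats rightward and both horizontal directions beat vertical; as the fixation index grows, only the cardinal bias remains.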
➡️ 2. Saccadic Momentum
The eyes often tend to continue moving in the same direction, especially after long saccades. We captured this bias by adding a dynamic directional map that adapts based on the previous eye movement.
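A minimal sketch of such a dynamic directional map, assuming a von Mises-shaped bump centered on the previous saccade direction whose strength grows with saccade amplitude (parameters and functional form are illustrative, not the paper’s):

```python
import numpy as np

def momentum_map(shape, prev_fix, cur_fix, kappa=1.5, amp_scale=0.05):
    """Directional prior favoring continuation of the previous saccade
    direction; concentration grows with that saccade's amplitude."""
    h, w = shape
    dy, dx = cur_fix[0] - prev_fix[0], cur_fix[1] - prev_fix[1]
    prev_angle = np.arctan2(dy, dx)
    amplitude = np.hypot(dy, dx)
    ys, xs = np.mgrid[0:h, 0:w]
    angles = np.arctan2(ys - cur_fix[0], xs - cur_fix[1])
    k = kappa * (1 - np.exp(-amp_scale * amplitude))   # longer saccade -> stronger momentum
    weights = np.exp(k * np.cos(angles - prev_angle))  # von Mises-shaped bump
    return weights / weights.sum()
```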
🔥 1. Time-Dependent Temperature Scaling
Early fixations are more focused (exploitative), later ones become more exploratory. We modeled this with a decaying “temperature” that controls the determinism of fixation choices over time.
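One way to implement this idea is a softmax over a fixation priority map whose sharpness (determinism) decays across fixations; the decay form and parameter values here are a sketch, not the paper’s:

```python
import numpy as np

def fixation_probs(priority_map, t, beta0=4.0, beta_min=1.0, half_life=3.0):
    """Softmax over a priority map whose sharpness decays with fixation
    index t: early fixations are near-deterministic (exploitative),
    later ones are flatter (exploratory)."""
    beta = beta_min + (beta0 - beta_min) * 0.5 ** (t / half_life)  # decaying determinism
    logits = beta * priority_map.ravel()
    logits -= logits.max()          # numerical stability
    p = np.exp(logits)
    return (p / p.sum()).reshape(priority_map.shape)
```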
From these systematic failures, we isolated three critical mechanisms SceneWalk was missing.
The data pointed to known cognitive principles, but revealed critical new nuances. Our method showed us not just what was missing, but how to formulate it to match human behavior. 👇
💡 Our idea: Use the deep model not just to chase performance, but as a tool for scientific discovery.
We isolate "controversial fixations" where DeepGaze's likelihood vastly exceeds SceneWalk's.
These reveal where the mechanistic model fails to capture predictable patterns.
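The selection idea can be sketched as ranking fixations by the per-fixation log-likelihood gap between the two models (an illustrative criterion; the paper’s exact procedure may differ):

```python
import numpy as np

def controversial_fixations(ll_deepgaze, ll_scenewalk, top_k=100):
    """Return the indices (and gaps) of the fixations where the deep
    model's log-likelihood most exceeds the mechanistic model's.
    These mark behavior that is predictable but not yet explained."""
    gap = np.asarray(ll_deepgaze) - np.asarray(ll_scenewalk)
    order = np.argsort(gap)[::-1]  # largest gap first
    return order[:top_k], gap[order[:top_k]]
```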
Science often faces a choice:
Build models primarily designed to predict, or models that compactly explain. But what if we used them in synergy?
Our paper tackles this head-on. We combine a deep network (DeepGaze III) with an interpretable mechanistic model (SceneWalk).
🚨 New paper at #NeurIPS2025!
A systematic fixation-level comparison of a performance-optimized DNN scanpath model and a mechanistic cognitive model reveals behaviourally relevant mechanisms that, once added to the mechanistic model, substantially improve its performance.
🧵👇
Thanks for sharing this!
I was not aware of it, but it looks really relevant. No problem if your lab is no longer working on it much; we will try to incorporate it in the future and reach out if we have any trouble 😉
This is exactly the kind of engagement we hoped to get!
Try it out and help us improve the accessibility of retinal datasets and models together.
A team effort with:
@thomaszen.bsky.social
@dgonschorek.bsky.social
@lhoefling.bsky.social
@teuler.bsky.social
@bethgelab.bsky.social
#openscience #computationalneuroscience (9/9)
This is just the beginning.
We see openretina as more than a Python package—it aims to be the start of an initiative to foster open collaboration in computational retina research.
We’d love your feedback! (8/9)
Researchers can use openretina to:
✅ Explore pre-trained models in minutes
✅ Train their own models
✅ Contribute datasets & models to the community (7/9)
The currently supported models follow a Core + Readout architecture:
🔸 Core: Extracts shared retinal features across recording sessions
🔸 Readout: Maps shared features to individual neuron responses
🔹 Includes pre-trained models & easy dataset loading (6/9)
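The Core + Readout split can be sketched like this (a NumPy stand-in for the PyTorch modules; class, names, and shapes are illustrative, not the openretina API):

```python
import numpy as np

class CoreReadout:
    """One shared core maps stimuli to features; each recording session
    gets its own readout from those features to its neurons' responses."""
    def __init__(self, stim_dim, n_features, sessions, rng=None):
        if rng is None:
            rng = np.random.default_rng(0)
        # shared core: here just a fixed random projection + ReLU
        self.core_w = rng.normal(size=(stim_dim, n_features)) / np.sqrt(stim_dim)
        # one linear readout per session, sized to that session's neuron count
        self.readouts = {s: rng.normal(size=(n_features, n)) / np.sqrt(n_features)
                         for s, n in sessions.items()}

    def core(self, stimulus):
        return np.maximum(stimulus @ self.core_w, 0.0)  # shared features

    def predict(self, stimulus, session):
        return self.core(stimulus) @ self.readouts[session]
```

Because the core is shared, sessions with different neuron counts (or species) can be trained on the same feature space while keeping session-specific readouts.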
Why does it matter?
Current retina models are often dataset-specific, limiting generalization.
With openretina, we integrate:
🐭 🦎 🐒 Data from multiple species
🎥 Different stimuli & recording modalities
🧠 Deep learning models that can be trained across datasets (5/9)
What is openretina?
It’s a Python package built on PyTorch, designed for:
🔹 Training deep learning models on retinal data
🔹 Sharing and using pre-trained retinal models
🔹 Cross-dataset, cross-species comparisons
🔹 In-silico hypothesis testing & experiment guidance (4/9)
📄 Paper: www.biorxiv.org/content/10.1...
📦 Code: github.com/open-retina/...
🔧 pip install openretina
📖 Docs: coming soon at open-retina.org (3/9)
Understanding the retina is crucial for decoding how visual information is processed. However, decades of data and models remain scattered across labs and approaches. We introduce openretina to unify retinal system identification. (2/9)
🚨 New paper alert! 🚨
We’ve just launched openretina, an open-source framework for collaborative retina modeling across datasets and species.
A 🧵👇 (1/9)
Incredibly honoured to have been a part of this!