Can generative AI accelerate neuroscience? Excited to share MoGen at ICLR 2026! 🧠
We use point cloud flow matching to generate high-fidelity 3D neuron fragments, capturing intricate details like dendritic spines.
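Flow matching trains a network to regress a velocity field along a probability path from noise to data. A minimal numpy sketch of how one training pair could be constructed for a point cloud, using the standard linear (conditional flow matching) path — a toy illustration of the general technique, not MoGen's actual pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

def flow_matching_pair(x1, rng):
    """Build one training example for conditional flow matching.

    x1: data point cloud of shape (num_points, 3).
    Returns time t, interpolated cloud x_t, and the velocity target
    x1 - x0 that a network would be trained to regress at (x_t, t).
    """
    x0 = rng.standard_normal(x1.shape)   # noise point cloud
    t = rng.uniform()                    # time on the path, t in [0, 1]
    x_t = (1.0 - t) * x0 + t * x1        # linear interpolation path
    v_target = x1 - x0                   # constant velocity along the path
    return t, x_t, v_target

# toy "neuron fragment": 128 points in 3D
x1 = rng.standard_normal((128, 3))
t, x_t, v_target = flow_matching_pair(x1, rng)
```

At sampling time, integrating the learned velocity field from t = 0 to t = 1 carries a noise cloud to a data-like point cloud.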
Posts by Jan-Matthis Lueckmann
Applications are now open for our Junior Theoretical Neuroscientists Workshop which will take place July 21 - 24, 2026 at the Center for Computational Neuroscience @flatironinstitute.org
Learn more and apply by April 15 at www.simonsfoundation.org/event/jrwork...
Our paper “Multifidelity Simulation-based Inference for Computationally Expensive Simulators” has been accepted at ICLR 2026! 🥳
We hope this can be a practical solution for anyone doing inference with computationally expensive simulators.
Paper: openreview.net/pdf?id=bj0dc...
Simulation-based inference (SBI) has transformed parameter inference across a wide range of domains. To help practitioners get started and make the most of these methods, we joined forces with researchers from many institutions and wrote a practical guide to SBI.
📄 Paper: arxiv.org/abs/2508.12939
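The core idea behind SBI can be illustrated with its simplest instance, rejection ABC: simulate from the prior and keep the parameters whose simulations land close to the observed data. This toy sketch is ours, not from the guide, which focuses on the far more simulation-efficient neural methods:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulator(theta, rng):
    """Toy black-box simulator: noisy observation of the parameter."""
    return theta + 0.1 * rng.standard_normal(theta.shape)

x_obs = np.array([0.5])                        # observed data
theta = rng.uniform(-2, 2, size=(10_000, 1))   # draws from a uniform prior
x_sim = simulator(theta, rng)                  # one simulation per draw

# rejection step: accept parameters whose simulation is close to x_obs
eps = 0.05
accepted = theta[np.abs(x_sim - x_obs).ravel() < eps]

# accepted draws approximate the posterior p(theta | x_obs)
posterior_mean = accepted.mean()
```

Neural SBI methods replace this wasteful rejection step with learned density estimators, which is what makes inference feasible when each simulation is expensive.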
Our work on training biophysical models with Jaxley is now out in @natmethods.nature.com. Led by @deismic.bsky.social, with @philipp.hertie.ai, @ppjgoncalves.bsky.social & @jakhmack.bsky.social et al.
Paper: www.nature.com/articles/s41...
“Mapping ion channel function” doi.org/10.7554/eLif... isn’t exactly a citation slayer, but it’s still one of my favourites (& my first independent project). Today we push pt 2, where we trace code origins & unify almost all channel models under a common expression. Boom! www.biorxiv.org/content/10.1...
New preprint: SBI with foundation models!
Tired of training or tuning your inference network, or waiting for your simulations to finish? Our method NPE-PF can help: It provides training-free simulation-based inference, achieving competitive performance with orders of magnitude fewer simulations! ⚡️
Wouldn't it be great if we could not only image large connectomic volumes but also completely reconstruct them? And if a whole mouse brain project didn't cost billions?
With the PATHFINDER preprint (www.biorxiv.org/content/10.1...), we preview a future where it doesn't have to.
We'll present our #ICLR2025 spotlight on ZAPBench this afternoon: 📍 Hall 3 #61!
+ special shout-out to @alexbchen.bsky.social who recorded the activity dataset!
Fantastic collaboration between Google Research, HHMI Janelia, and Harvard -- including @michalwj.bsky.social @stardazed0.bsky.social @aleximmer.bsky.social @mishaahrens.bsky.social and many more!
Paper: openreview.net/pdf?id=oCHsD...
Website: google-research.github.io/zapbench
🕸️ Last but not least -- the connectome for this specific 🐟 specimen is currently being reconstructed and will be available at a later date!
🧪 We test a number of SOTA time-series forecasting models to provide baselines. We also explore forecasting activity directly in voxel space in a companion paper (www.arxiv.org/abs/2503.00073).
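A standard sanity check for such forecasting baselines is a naive model that simply repeats the last observed frame over the horizon. A minimal sketch on toy activity traces — illustrative only, not ZAPBench's evaluation code:

```python
import numpy as np

rng = np.random.default_rng(0)

# toy activity traces: (timesteps, neurons), random-walk dynamics
activity = np.cumsum(0.1 * rng.standard_normal((100, 5)), axis=0)

# split into observed context and the horizon to forecast
context, horizon = activity[:96], activity[96:]

# naive baseline: repeat the last observed frame across the horizon
naive_pred = np.repeat(context[-1:], len(horizon), axis=0)

# mean absolute error of the naive forecast
mae = np.abs(naive_pred - horizon).mean()
```

Any learned forecaster should beat this copy-last-frame baseline; if it doesn't, it hasn't captured the temporal dynamics.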
📈 This dataset forms the core of the Zebrafish Activity Prediction Benchmark (ZAPBench), which uniquely measures progress on forecasting neural activity at whole-brain scale and single-cell resolution in a vertebrate.
🔬 We collected and extensively processed a 4D dataset imaged with a light-sheet microscope. The resulting 3D movie covers over 70,000 neurons of a fish exposed to various visual stimuli.
🧠 How accurately can future neural activity be predicted from past activity at the scale of the whole brain? Larval zebrafish offer a unique opportunity to address this question, as they are currently the only vertebrate species in which whole-brain activity can be recorded at cellular resolution.
⚡️ Excited to introduce ZAPBench, our #ICLR2025 spotlight: The Zebrafish Activity Prediction Benchmark measures progress in predicting neural activity within an entire vertebrate brain (70k+ neurons!)
Explore interactive visualizations, datasets, code + paper: google-research.github.io/zapbench
🧠🧪
If you use the sbi toolbox, help make it better by sharing your feedback!!