Building this kind of mechanical compiler will require the coordination of connectome people, slice physiologists, systems neuroscientists and modelers: it ain't easy! But it's definitely a worthy goal for the next 20 years of neuro.
Posts by Patrick Mineault
Then, if we have calibration data, we can think of each layer of the ultrastructome as imposing (Bayesian) constraints on biophysical simulation parameters. Each extra calibrated layer removes some of the underdetermination. 4/
And we can keep adding layers, like labelling proteins (e.g. PSD95 in PRISM and LICONN). I think the endgame is that the connectome data will be so rich that it'll be conceptually a different kind of object, an ultrastructome. 3/
It's remarkable how far we've taken simulations with *just* connections (e.g. Shiu et al. 2024). And we haven't exhausted what we can do with connectome data: it tells us about connections, yes, but also neuron morphology, neurotransmitter identity, and the position of dense core vesicles. 2/
Stoked this is finally out! We ask: how can we simulate the brain from the bottom up? It's not sufficient to grab the connectome and wire it up in silico! We need 1) ultrastructure 2) (causal) calibration data 3) functional data. Then we can build a simulation compiler. 1/
Paradoxically, when you see a yellow on blue slide, you know you're about to get a fantastic lecture. Bonus points for Times New Roman.
🧵 New preprint led by @bingbrunton.bsky.social, @elliottabe.bsky.social, @lawrencehu.bsky.social
We gave a worm brain control of a fly body and it walked
What did we learn? Nothing, other than deep reinforcement learning is effective
We call it the digital sphinx
www.biorxiv.org/content/10.6...
If you have a good Substack about neuro / AI and are cranking out solid content, happy to add it to my list of recs on Substack—that little feature has driven hundreds of subscribers to other newsletters
A cool insight here: CDM isn’t just hard to infer, it’s hard to train directly (because it can’t be specified as a loss on the system’s behavior).
This means it either needs to emerge as a consequence of other losses, data, or architecture choices.
Current AI models are trained on human behavior -- the words we produce. New preprint explores the idea that we might be able to address some of the gaps in these systems by training on the latent variables behind that behavior: human cognition.
A remarkable journey of resilience and transformation, from the chaotic corridors of group homes to the halls of Columbia and Stanford, EMERGENCE is a coming-of-age tale where heartbreak and humor meet the scientific wonder of modern artificial intelligence.
🔗 Preorder: tinyurl.com/fzcxb5ea
We know about cosmological dark matter despite being unable to measure it because, without it, galaxies would fall apart. By analogy, let's talk about "cognitive dark matter" (CDM): brain functions that meaningfully shape behavior but are hard to infer from behavior alone.
New paper! 🧵👇
There's no better way to learn than to teach! Help make NMA a resounding success!
Excellent, excellent
Ran into David Chalmers at Wash Sq Park. Beautiful day to ponder the hard problem of consciousness.
Would you say it's a fair characterization that splitters are winning mindshare in neuro over lumpers?
I agree BTSP is a big one, and quite underrated.
Looking for YOUR INPUT on what we've learned in neuro in the past 20 years. I've only heard pessimistic takes! Come on, grid cells, manifolds, optogenetics, connectomes, moving past the monoamine theory of depression and the Aβ theory of AD, glymphatics and lymphatics, what is sleep?! We did stuff!
I want to write a fun little post on what we've learned in neuroscience in the last 20 years. What are the most interesting results you can think of? Biggest trends?
DNN models of the brain are getting bigger. Are we replacing one complicated system in vivo with another in silico?
In new work, we seek the *smallest* DNN models of visual cortex, balancing prediction with parsimony.
It turns out the models you need are surprisingly compact!
rdcu.be/e5H8G
Some soup for you!
CAMs or soup?
Lots of things to think about in these posts from @patrickmineault.bsky.social -- nice to see more blog entries :)
Here's the second one: www.neuroai.science/p/cell-types... . I made a New Year's resolution to blog more, and so far I'm sticking to it!
What are cell types good for, computationally? Encoding innate behavior! In this 2-parter, I break down the relationship between cell types—which I had, in years prior, dismissed as a mere implementation detail—and computation. I changed my mind!
www.neuroai.science/p/cell-types...
Book cover. A silhouette of a person's head filled with colorful geometric shapes—perhaps symbolizing cognitive resources or deployment thereof. The style is attractive and modern, if generic. text: The Rational Use of Cognitive Resources Falk Lieder, Frederick Callaway, Thomas L. Griffiths
I'm excited to announce that I had my first (co-authored) book published today! "The Rational Use of Cognitive Resources" with Falk Lieder and Tom Griffiths (@cocoscilab.bsky.social ). You can read it for free! (see thread)
The revised version of our paper on the impact of top-down feedback is now out @elife.bsky.social:
doi.org/10.7554/eLif...
tl;dr: we show that using human-brain-like feedback/anatomy in a deep RNN leads to human-like visual biases!
This work was led by @tmshbr.bsky.social
#NeuroAI 🧠📈 🧪
🚨 new work from the lab on how eye movements 👀 versus orofacial movements influence 🐭 visual cortex activity 🧠 #neuroscience #behavior #neuroAI