Excited to share our work on mechanisms of naturalistic audiovisual processing in the human brain 🧠🎬!!
www.biorxiv.org/content/10.1...
Posts by RJ Antonello
Introducing CorText: a framework that fuses brain data directly into a large language model, allowing for interactive neural readout using natural language.
tl;dr: you can now chat with a brain scan 🧠💬
1/n
🧠 New at #NeurIPS2025!
🎵 We're far from the shallow now 🎵
TL;DR: We introduce the first "reasoning embedding" and uncover its unique spatio-temporal pattern in the brain.
🔗 arxiv.org/abs/2510.228...
As our lab started to build encoding 🧠 models, we were trying to figure out best practices in the field. So @neurotaha.bsky.social
built a library to easily compare design choices & model features across datasets!
We hope it will be useful to the community & plan to keep expanding it!
1/
A big effort on the part of all the authors (@csinva.bsky.social, Suna Guo, Gavin Mischler, Jianfeng Gao, Nima Mesgarani, @alexanderhuth.bsky.social). Check out the preprint on bioRxiv here! www.biorxiv.org/content/10.1...
We think these QA models are an important step in bridging the gap between data-driven models of the brain and the easy-to-understand, but hard-to-encode, qualitative theories that guide our intuitions as neuroscientists. 5/6
More surprisingly, we find that the model places critical importance on some unexpected topics, like the presence of specialized or technical terminology and words that describe events like dialogue or direct speech quotations. 4/6
Our model naturally and automatically replicates many famous neuroscience results, in addition to opening the door to a few surprises. For instance, we naturally observe selectivity for tactile sensation words in somatosensory areas, and selectivity for places in OPA, PPA and RSC. 3/6
We show that our model outperforms less interpretable models built from the hidden states of LLMs, especially in low-data settings. Our model is so compact that it can be fully illustrated in a single figure! 2/6
In our new paper, we explore how we can build encoding models that are both powerful and understandable. Our model uses an LLM to answer 35 questions about a sentence's content. The answers linearly contribute to our prediction of how the brain will respond to that sentence. 1/6
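For readers who want a concrete picture, here's a minimal numpy sketch of this style of QA-based encoding model. The 35-question count comes from the post; the simulated binary answers (standing in for LLM outputs) and fMRI responses, the ridge penalty, and all dimensions besides that are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins: yes/no answers to 35 questions per sentence
# (in the paper these come from prompting an LLM) and voxel responses.
n_sentences, n_questions, n_voxels = 200, 35, 50
qa_features = rng.integers(0, 2, size=(n_sentences, n_questions)).astype(float)

# Simulated brain data with a known linear relationship plus noise.
true_weights = rng.normal(size=(n_questions, n_voxels))
responses = qa_features @ true_weights + 0.1 * rng.normal(size=(n_sentences, n_voxels))

# Ridge regression: each voxel's response is a linear combination of the
# 35 answers, so fitted weights are directly interpretable ("how much
# does this question matter for this voxel?").
lam = 1.0
A = qa_features.T @ qa_features + lam * np.eye(n_questions)
weights = np.linalg.solve(A, qa_features.T @ responses)

predicted = qa_features @ weights
r = np.corrcoef(predicted[:, 0], responses[:, 0])[0, 1]
print(f"voxel 0 prediction correlation: {r:.2f}")
```

The interpretability comes from the feature space itself: every weight ties a voxel to a human-readable question rather than to an opaque LLM hidden dimension.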
New paper with @mujianing.bsky.social & @prestonlab.bsky.social! We propose a simple model for human memory of narratives: we uniformly sample incoming information at a constant rate. This explains behavioral data much better than variable-rate sampling triggered by event segmentation or surprisal.
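A toy sketch of the contrast being tested (all rates, lengths, and boundary positions here are hypothetical, not from the paper): under the constant-rate account every incoming word has the same chance of being encoded, while the competing account concentrates sampling at event boundaries:

```python
import numpy as np

rng = np.random.default_rng(1)
n_words = 1000
rate = 0.2  # hypothetical constant per-word sampling probability

# Constant-rate model: uniform sampling of incoming information,
# independent of narrative structure.
constant_sample = rng.random(n_words) < rate

# Variable-rate alternative: sampling probability spikes at event
# boundaries (hypothetical boundaries every 100 words).
boundaries = np.zeros(n_words, dtype=bool)
boundaries[::100] = True
p_variable = np.where(boundaries, 0.9, 0.1)
variable_sample = rng.random(n_words) < p_variable

print(f"constant-rate fraction sampled: {constant_sample.mean():.2f}")
print(f"variable-rate fraction sampled: {variable_sample.mean():.2f}")
```

Fitting each model's predicted recall pattern to behavioral memory data is then what lets the two accounts be compared.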
🚨Paper alert!🚨
TL;DR first: We used a pre-trained deep neural network to model fMRI data and to generate images predicted to elicit a large response for many different parts of the brain. We aggregate these into an awesome interactive brain viewer: piecesofmind.psyc.unr.edu/activation_m...
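A hedged sketch of the underlying idea (activation maximization): iteratively adjust an image to increase a model's predicted response. Here a toy linear voxel model stands in for the paper's pretrained deep network, so the gradient is just the weight vector:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical stand-in for a network-based encoding model: a linear
# map from a flattened 64x64 "image" to one voxel's predicted response.
n_pixels = 64 * 64
voxel_weights = rng.normal(size=n_pixels)

def predicted_response(image):
    return float(image @ voxel_weights)

# Gradient ascent on the image to maximize the predicted response;
# for a linear model the gradient w.r.t. the image is voxel_weights.
image = np.zeros(n_pixels)
lr = 0.1
for _ in range(100):
    image += lr * voxel_weights
    image = np.clip(image, -1.0, 1.0)  # keep pixel values bounded

print(f"optimized response: {predicted_response(image):.1f}")
```

With a real deep encoding model the same loop runs via automatic differentiation, producing one preferred image per brain region.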
What are the organizing dimensions of language processing?
We show that voxel responses during comprehension are organized along 2 main axes: processing difficulty & meaning abstractness—revealing an interpretable, topographic representational basis for language processing shared across individuals
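One common way such organizing axes are recovered is PCA on voxel response profiles. Here's an illustrative numpy sketch on simulated data — the two-axis interpretation (difficulty, abstractness) is the paper's; the matrix sizes and random data are made up:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical voxel-by-feature matrix of language responses
# (e.g. encoding-model weights for many stimulus features per voxel).
n_voxels, n_features = 500, 40
weights = rng.normal(size=(n_voxels, n_features))

# PCA via SVD of the centered matrix: the leading right singular
# vectors are candidate organizing axes across voxels.
centered = weights - weights.mean(axis=0)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
explained = S**2 / np.sum(S**2)

axes = Vt[:2]                     # top-2 axes in feature space
voxel_coords = centered @ axes.T  # each voxel's position on the 2 axes

print(f"top-2 variance explained: {explained[:2].sum():.1%}")
```

Projecting every voxel onto the top two axes gives the kind of topographic, cross-subject map the post describes.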
Stimulus dependencies---rather than next-word prediction---can explain pre-onset brain encoding during natural listening www.biorxiv.org/content/10.1101/2025.03....
I’m hiring a full-time lab tech for two years starting May/June. Strong coding skills required, ML a plus. Our research on the human brain uses fMRI, ANNs, intracranial recording, and behavior. A great stepping stone to grad school. Apply here:
careers.peopleclick.com/careerscp/cl...
🚨 New Preprint!!
LLMs trained on next-word prediction (NWP) show high alignment with brain recordings. But what drives this alignment—linguistic structure or world knowledge? And how does this alignment evolve during training? Our new paper explores these questions. 👇🧵
Just in time for the holidays! Some cool new evidence from @eghbal_hosseini for the idea of universal representations shared by high-performing ANNs and brains in two domains: language and vision! Go Eghbal!
Really excited to be at NeurIPS this week presenting our new encoding model scaling laws work! Be sure to check out our poster (#402) on Tuesday afternoon and our new code and model release, and feel free to DM me to chat!