
Posts by RJ Antonello

Excited to share our work on mechanisms of naturalistic audiovisual processing in the human brain 🧠🎬!!
www.biorxiv.org/content/10.1...

5 months ago

Introducing CorText: a framework that fuses brain data directly into a large language model, allowing for interactive neural readout using natural language.

tl;dr: you can now chat with a brain scan 🧠💬

1/n

5 months ago

🧠 New at #NeurIPS2025!
🎵 We're far from the shallow now🎵
TL;DR: We introduce the first "reasoning embedding" and uncover its unique spatio-temporal pattern in the brain.

🔗 arxiv.org/abs/2510.228...

5 months ago

As our lab started to build encoding 🧠 models, we were trying to figure out best practices in the field. So @neurotaha.bsky.social
built a library to easily compare design choices & model features across datasets!

We hope it will be useful to the community & plan to keep expanding it!
1/

6 months ago
Preview
Evaluating scientific theories as predictive models in language neuroscience
Modern data-driven encoding models are highly effective at predicting brain responses to language stimuli. However, these models struggle to explain the underlying phenomena, i.e. what features of the...

A big effort on the part of all the authors (@csinva.bsky.social, Suna Guo, Gavin Mischler, Jianfeng Gao, Nima Mesgarani, @alexanderhuth.bsky.social). Check out the preprint on bioRxiv here! www.biorxiv.org/content/10.1...

8 months ago

We think these QA models are an important step in bridging the gap between data-driven models of the brain and the easy-to-understand, but hard-to-encode, qualitative theories that guide our intuitions as neuroscientists. 5/6

8 months ago

More surprisingly, we find that the model places critical importance on some unexpected topics, like the presence of specialized or technical terminology, or words that describe events such as dialogue and direct speech quotations. 4/6

8 months ago

Our model naturally and automatically replicates many famous neuroscience results, in addition to opening the door to a few surprises. For instance, we observe selectivity for tactile sensation words in somatosensory areas, and selectivity for places in OPA, PPA, and RSC. 3/6

8 months ago

We show that our model outperforms less interpretable models built from the hidden states of LLMs, especially in low-data settings. Our model is so compact that it can be fully illustrated in a single figure! 2/6

8 months ago

In our new paper, we explore how we can build encoding models that are both powerful and understandable. Our model uses an LLM to answer 35 questions about a sentence's content. The answers linearly contribute to our prediction of how the brain will respond to that sentence. 1/6
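A toy sketch of the idea, with everything simulated (the feature values, weights, and voxel responses here are stand-ins, not the paper's actual data; the real model uses an LLM's answers to the 35 questions):

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_sentences, n_questions, n_voxels = 200, 35, 10

# Simulated QA features: a yes/no answer (1/0) to each of 35
# questions about each sentence's content.
qa_features = rng.integers(0, 2, size=(n_sentences, n_questions)).astype(float)

# Simulated voxel responses that depend linearly on the answers.
true_weights = rng.normal(size=(n_questions, n_voxels))
responses = qa_features @ true_weights + 0.1 * rng.normal(size=(n_sentences, n_voxels))

# The encoding model: each answer contributes linearly to the predicted
# response, so the fitted weight for each question is directly readable.
model = Ridge(alpha=1.0).fit(qa_features, responses)
r2 = model.score(qa_features, responses)
print(f"in-sample R^2: {r2:.3f}")
```

Because each feature is a plain-language question, the learned weight map itself is the interpretation.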

8 months ago

New paper with @mujianing.bsky.social & @prestonlab.bsky.social! We propose a simple model of human memory for narratives: incoming information is sampled uniformly at a constant rate. This explains behavioral data much better than variable-rate sampling triggered by event segmentation or surprisal.
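A minimal illustration of the constant-rate idea (the rate and segment lengths below are made up for the sketch, not the paper's fitted values):

```python
import numpy as np

# Under a constant-rate sampling model, each incoming word is retained
# with the same fixed probability, independent of event boundaries or
# surprisal. Expected recall for a passage is then proportional to its
# length alone.
sampling_rate = 0.3                             # hypothetical retention rate
segment_lengths = np.array([50, 120, 200, 80])  # hypothetical segments (words)
expected_recall = sampling_rate * segment_lengths
print(expected_recall)  # recall scales with length, nothing else
```

A variable-rate alternative would instead weight each segment by its surprisal or boundary strength; the behavioral comparison is between these two families of predictions.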

8 months ago
Cortex Feature Visualization

🚨Paper alert!🚨
TL;DR first: We used a pre-trained deep neural network to model fMRI data and to generate images predicted to elicit a large response for many different parts of the brain. We aggregate these into an awesome interactive brain viewer: piecesofmind.psyc.unr.edu/activation_m...

10 months ago

What are the organizing dimensions of language processing?

We show that voxel responses during comprehension are organized along two main axes: processing difficulty and meaning abstractness. This reveals an interpretable, topographic representational basis for language processing that is shared across individuals.
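A hedged sketch of what "organized along two main axes" means in practice, using simulated responses built from two latent factors (the factor names and all numbers here are placeholders, not the paper's data or method):

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n_stimuli, n_voxels = 300, 500

# Simulated responses generated from two latent axes, standing in for
# processing difficulty and meaning abstractness.
latents = rng.normal(size=(n_stimuli, 2))
loadings = rng.normal(size=(2, n_voxels))
responses = latents @ loadings + 0.1 * rng.normal(size=(n_stimuli, n_voxels))

# If two axes dominate, the top two principal components should capture
# nearly all of the response variance.
pca = PCA(n_components=5).fit(responses)
top2 = pca.explained_variance_ratio_[:2].sum()
print(f"variance explained by top 2 components: {top2:.3f}")
```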

10 months ago

Stimulus dependencies, rather than next-word prediction, can explain pre-onset brain encoding during natural listening: www.biorxiv.org/content/10.1101/2025.03....

1 year ago
Preview
Technical Associate I, Kanwisher Lab, MIT (Cambridge, MA 02139)

I’m hiring a full-time lab tech for two years starting May/June. Strong coding skills required, ML a plus. Our research on the human brain uses fMRI, ANNs, intracranial recording, and behavior. A great stepping stone to grad school. Apply here:
careers.peopleclick.com/careerscp/cl...

1 year ago

🚨 New Preprint!!

LLMs trained on next-word prediction (NWP) show high alignment with brain recordings. But what drives this alignment—linguistic structure or world knowledge? And how does this alignment evolve during training? Our new paper explores these questions. 👇🧵

1 year ago

Just in time for the holidays! Some cool new evidence from @eghbal_hosseini for the idea of universal representations shared by high-performing ANNs and brains in two domains: language and vision! Go Eghbal!

1 year ago

Really excited to be at NeurIPS this week presenting our new encoding model scaling laws work! Be sure to check out our poster (#402) on Tuesday afternoon and our new code and model release, and feel free to DM me to chat!

2 years ago