Preprint alert! We've made the first-ever simultaneous brain recordings from IT & PMv in two monkeys interacting socially in a natural setting!
Dynamic tracking of social variables in simultaneous brain recordings of socially interacting monkeys
www.biorxiv.org/content/10.6...
Posts by RT Pramod
I just created a series of seven deep-dive videos about AI, which I've posted to YouTube and now here. 😊
Targeted to laypeople, they explore how LLMs work, what they can do, and what impacts they have on learning, well-being, disinformation, the workplace, the economy, and the environment.
The cerebellum supports high-level language?? Now out in @cp-neuron.bsky.social, we systematically examined language-responsive areas of the cerebellum using precision fMRI and identified a *cerebellar satellite* of the neocortical language network!
authors.elsevier.com/a/1mUU83BtfH...
1/n 🧵👇
What does it mean to understand language? We argue that the brain’s core language system is limited, and that *deeply* understanding language requires EXPORTING info to other brain regions.
w/ @neuranna.bsky.social @evfedorenko.bsky.social @nancykanwisher.bsky.social
arxiv.org/abs/2511.19757
1/n🧵👇
𝗔 𝗡𝗘𝗨𝗥𝗢𝗘𝗖𝗢𝗟𝗢𝗚𝗜𝗖𝗔𝗟 𝗣𝗘𝗥𝗦𝗣𝗘𝗖𝗧𝗜𝗩𝗘 𝗢𝗡 𝗧𝗛𝗘 𝗣𝗥𝗘𝗙𝗥𝗢𝗡𝗧𝗔𝗟 𝗖𝗢𝗥𝗧𝗘𝗫
By Mars and Passingham
"Understanding anthropoid foraging challenges may thus contribute to our understanding of human cognition"
Going to the top of the reading list!
doi.org/10.1016/j.ne...
#neuroskyence
🚨Out in PNAS🚨
with @joshtenenbaum.bsky.social & @rebeccasaxe.bsky.social
Punishment, even when intended to teach norms and change minds for the good, may backfire.
Our computational cognitive model explains why!
Paper: tinyurl.com/yc7fs4x7
News: tinyurl.com/3h3446wu
🧵
Super excited to share our new article: “Dissociable cortical regions represent things and stuff in the human brain” with @nancykanwisher.bsky.social, @rtpramod.bsky.social and @joshtenenbaum.bsky.social
Video abstract: www.youtube.com/watch?v=B0XR...
Paper: authors.elsevier.com/a/1lWxv3QW8S...
Is the Language of Thought == Language? A Thread 🧵
New Preprint (link: tinyurl.com/LangLOT) with @alexanderfung.bsky.social, Paris Jaggers, Jason Chen, Josh Rule, Yael Benn, @joshtenenbaum.bsky.social, @spiantado.bsky.social, Rosemary Varley, @evfedorenko.bsky.social
1/8
Can you tell if a tower will fall or if two objects will collide — just by looking? 🧠👀 Come check out my #CogSci2025 poster (P1-W-207) on July 31, 13:00–14:15 PT to learn how people do general-purpose physical reasoning from visual input!
Good question! We haven't tested the cases you've mentioned, but Jason Fischer's 2016 paper found that the PN doesn't respond strongly to social prediction (in Heider-and-Simmel-like displays).
We have started to look in the cerebellum. It is still early days so keep an eye out for updates in the future!
Thanks to my co-authors and all the people who gave constructive feedback over the course of this project! Special shout out to Kris Brewer for shooting the videos used in Experiment 1 and @georginawooxy.bsky.social for her deep neural network expertise.
(12/12)
Our findings show that the PN carries abstract object-contact information and provide the strongest evidence yet that the PN is engaged in predicting what will happen next. These results open many new avenues of investigation into how we understand, predict, and plan in the physical world.
(11/n)
Our main results i) are not present in the ventral temporal cortex, ii) are not present in the primary visual cortex -- i.e., our stimuli were unlikely to have low-level visual confounds -- and iii) are replicable with different analysis criteria & methods. See paper for details.
(10/n)
Short answer: Yes! Using MVPA, we found that the PN has information about predicted contact events (i.e., collisions). This was true not only within a scenario (the ‘roll’ scene above), but also generalized across scenarios, indicating the abstractness of the representation.
(9/n)
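For readers curious what "generalizing across scenarios" means in MVPA terms: a toy sketch below, with simulated voxel patterns and a simple nearest-centroid classifier. Everything here (pattern sizes, signal strengths, the classifier) is illustrative, not the paper's actual pipeline — the point is just that training on one scenario and testing on another only works if the contact signal is scenario-invariant.

```python
import random

random.seed(0)

def make_patterns(n, n_vox, signal, offset):
    """Simulate voxel patterns for one scenario: 'contact' trials carry a
    small shared signal on the first half of voxels; 'no contact' trials
    don't. `offset` models scenario-specific differences in mean response."""
    data = []
    for label in (1, 0):  # 1 = contact, 0 = no contact
        for _ in range(n):
            pat = [random.gauss(offset, 1.0) for _ in range(n_vox)]
            if label:
                pat = [v + signal if i < n_vox // 2 else v
                       for i, v in enumerate(pat)]
            data.append((pat, label))
    return data

def centroid(pats):
    n = len(pats)
    return [sum(p[i] for p in pats) / n for i in range(len(pats[0]))]

def cross_decode_accuracy(train, test):
    """Fit a nearest-centroid classifier on one scenario, test on another."""
    c1 = centroid([p for p, y in train if y == 1])
    c0 = centroid([p for p, y in train if y == 0])
    correct = 0
    for pat, y in test:
        d1 = sum((a - b) ** 2 for a, b in zip(pat, c1))
        d0 = sum((a - b) ** 2 for a, b in zip(pat, c0))
        correct += int((d1 < d0) == (y == 1))
    return correct / len(test)

# Two 'scenarios' share the contact signal but differ in overall response.
roll = make_patterns(n=40, n_vox=20, signal=1.5, offset=0.0)
drop = make_patterns(n=40, n_vox=20, signal=1.5, offset=0.5)

acc = cross_decode_accuracy(train=roll, test=drop)
print(f"cross-scenario decoding accuracy: {acc:.2f}")
```

Because the simulated contact signal is shared across the two scenarios, decoding transfers well above chance (0.5) — the same logic behind the cross-scenario generalization test in the paper.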
That is,
(8/n)
When we see this: Does the PN predict this?
In our second pre-registered fMRI experiment, we tested the central tenet of the ‘physics engine’ hypothesis – that the PN runs forward simulations to predict what will happen next. If true, PN should contain information about predicted future states before they occur.
(7/n)
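To make the 'forward simulation' idea concrete: a minimal sketch of what a physics engine does when it predicts a contact event before it happens. This is a deliberately tiny 1-D simulation with made-up parameters (friction constant, time step), nothing like the brain's or a game engine's actual machinery — just the core loop: roll the model forward in time and read off the predicted future state.

```python
def simulate_contact(x0, v, obstacle_x, dt=0.01, max_t=10.0):
    """Forward-simulate 1-D motion with simple velocity-proportional
    friction (assumed), and report the predicted time of contact with an
    obstacle -- before any contact has actually occurred."""
    x, t = x0, 0.0
    mu = 0.1  # friction coefficient (illustrative)
    while t < max_t:
        v -= mu * v * dt  # friction slows the object
        x += v * dt       # advance position
        t += dt
        if x >= obstacle_x:
            return t      # predicted moment of contact
    return None           # no contact predicted within the horizon

t_hit = simulate_contact(x0=0.0, v=2.0, obstacle_x=1.0)
if t_hit is not None:
    print(f"predicted contact at t = {t_hit:.2f} s")
else:
    print("no contact predicted")
```

The hypothesis tested in the paper is that the PN carries exactly this kind of output — information about a predicted future contact — measurable before the contact occurs.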
Given their importance for prediction, we hypothesized that the PN would encode object contact. In our first pre-registered fMRI experiment, we used multi-voxel pattern analysis (MVPA) and found that only PN carried scenario-invariant information about object contact.
(6/n)
If a container moves, then so does its containee, but the same is not true of an object that is merely occluded by the container without contacting it!
(5/n)
However, there was no direct evidence yet for such predicted future-state information in the PN. We realized that object-object contact is an excellent way to test the Physics Engine hypothesis. When two objects are in contact, their fates are intertwined:
(4/n)
These results have led to the hypothesis that the Physics Network (PN) is our brain’s ‘Physics Engine’ – a generative model of the physical world (like those used in video games) capable of running simulations to predict what will happen next.
(3/n)
How do we understand, plan and predict in the physical world? Prior research has implicated fronto-parietal regions of the human brain (the ‘Physics Network’, PN) in physical judgement tasks, including in carrying representations of object mass & physical stability.
(2/n)
Thrilled to announce our new publication titled 'Decoding predicted future states from the brain's physics engine' with @emiecz.bsky.social, Cyn X. Fang, @nancykanwisher.bsky.social, @joshtenenbaum.bsky.social
www.science.org/doi/full/10....
(1/n)
Shown is an example image that participants viewed in the EEG, fMRI, and behavioral annotation tasks. Also shown is a schematic of the regression procedure for jointly predicting fMRI responses from stimulus features and EEG activity.
I am excited to share our recent preprint and the last paper of my PhD! Here, @imelizabeth.bsky.social, @lisik.bsky.social, Mick Bonner, and I investigate the spatiotemporal hierarchy of social interactions in the lateral visual stream using EEG-fMRI.
osf.io/preprints/ps...
#CogSci #EEG
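For those wondering what "jointly predicting fMRI responses from stimulus features and EEG activity" looks like in practice: a toy sketch below, with simulated data and plain least squares. All shapes, names, and numbers are made up for illustration (the paper's actual regression procedure may differ, e.g. in regularization and cross-validation) — the point is that stacking EEG predictors alongside stimulus features into one design matrix lets you ask what EEG explains beyond the features alone.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data (illustrative shapes, not the paper's):
# 200 images, 5 stimulus features, 8 EEG components, one voxel's response.
n_img = 200
stim = rng.standard_normal((n_img, 5))   # behavioral stimulus annotations
eeg = rng.standard_normal((n_img, 8))    # EEG activity per image
true_w = np.concatenate([rng.standard_normal(5),
                         0.5 * rng.standard_normal(8)])

X = np.hstack([stim, eeg])               # joint design matrix
fmri = X @ true_w + 0.1 * rng.standard_normal(n_img)

# Fit the joint model and a stimulus-features-only model with ordinary
# least squares, then compare variance explained.
w_joint, *_ = np.linalg.lstsq(X, fmri, rcond=None)
w_stim, *_ = np.linalg.lstsq(stim, fmri, rcond=None)

def r2(y, yhat):
    ss_res = np.sum((y - yhat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1 - ss_res / ss_tot

r2_joint = r2(fmri, X @ w_joint)
r2_stim = r2(fmri, stim @ w_stim)
print(f"R2 joint = {r2_joint:.3f}, R2 stimulus-only = {r2_stim:.3f}")
```

When EEG genuinely carries information about the response (as in this simulation), the joint model explains more variance than stimulus features alone.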
Video of a baby on its parent's chest looking at the parent's face and smiling.
When you see this image, does it make you wonder what that baby is thinking? Do you think the baby is merely perceiving a set of shapes, or do you think the baby is also inferring meaning from the face they are looking at? (1/5)
**ecstatic** to share our @iclr-conf.bsky.social paper: sparse components distinguish visual pathways & their alignment to neural networks, with @nancykanwisher.bsky.social and Meenakshi Khosla (openreview.net/forum?id=IqH...)
1/n
In a study now out in @eLife, @GeorginJacob @PramodRT9 and I have some exciting results: a novel computation that helps the brain solve disparate visual tasks, a novel brain region that performs this computation....what's not to like?! Read on.... 1/n
elifesciences.org/articles/93033
Academics - where are academic jobs posted for non-UK non-North American countries? If you were looking for jobs in, say, the Nordic countries, or Australia, where do you look? Asking for all the PhDs who are on the market this year. (Pls no April fools jokes, their nerves are frayed as it is)
I’m hiring a full-time lab tech for two years starting May/June. Strong coding skills required, ML a plus. Our research on the human brain uses fMRI, ANNs, intracranial recording, and behavior. A great stepping stone to grad school. Apply here:
careers.peopleclick.com/careerscp/cl...