Nijmegen friends: Tomorrow (10–12) I'll be debating Pim Haselager at a Donders Session on the thesis:
"Artificial neural network models are adequate mechanistic models of the mind"
I'm defending, he's opposing. Should be fun. Come join us!
www.ru.nl/en/donders-i...
@dondersinst.bsky.social
Posts by micha heilbron
On 23 April, @mheilbron.bsky.social and I are giving a talk in beautiful #Maastricht about Een wereld vol denkers (A World Full of Thinkers). An evening full of stories about the thinking and doing of humans, animals, plants and AI! 🧠🐝🌿🤖 I hope to see you there! www.maastrichtuniversity.nl/nl/events/ee... #wetenschap #psychologie #biologie
I'm hiring! 📢 Fully funded 4-year PhD position in Language Evolution using Communication Games at @mpi-nl.bsky.social. Come work with me on how different social pressures shape the evolution of new communication systems in the lab! Deadline for application is May 18th! share.google/fGTKbFS4v4Gb...
Well, not necessarily without prediction. But recent evidence suggested that predictability effects were mostly high-level (journals.plos.org/ploscompbiol...; direct.mit.edu/imag/article...). Our new work shows an interesting twist: it seems to depend on eccentricity (or sensory reliability).
Classic predictive coding: V1 predicts low-level features, higher areas high-level. But recent studies + AI models suggest prediction happens at higher levels of abstraction.
Who's right?
In new work w/ @wiegerscheurer.bsky.social we find that both are right: there are distinct regimes across the visual field
New preprint! w/ @mheilbron.bsky.social
We found that, even during simple natural scene viewing, human visual cortex predicts—hierarchically in central vision and at higher levels peripherally—reconciling classical predictive coding with recent evidence from animal models and AI (e.g. JEPA) (1/10)
Academic friends,
It's beyond heartbreaking to watch what's unfolding in Iran & the region.
A few of us drafted an open letter calling for protection of civilians & of educational, research, medical & cultural institutions.
Please read & sign if you agree:
sites.google.com/view/protect...
#IranWar
Interested in pursuing a PhD in NLP/cog-sci?
Studying language learning in LMs from the perspective of human language acquisition? Few more days to apply!!
🚨 We're very happy to introduce TRIBE v2: a foundation model of the brain's responses to sight, sound & language.
📄 Paper: ai.meta.com/research/pub...
▶️ Demo: aidemos.atmeta.com/tribev2/
💻 Code: github.com/facebookrese...
🤗 Model: huggingface.co/facebook/tri...
📢 PhD position in Developmental Language Modelling
(PLZ RT)
What can human language acquisition teach us about training language models? Join us as a PhD student!
mpi.nl/career-education/vacancies/vacancy/fully-funded-4-year-phd-position-developmental-language @carorowland.bsky.social
@mpi-nl.bsky.social
📢 PhD position in the NeuroAI of Language
Why can LLMs predict brain activity so well? We're hiring a PhD student to find out -- AI interpretability meets neuroimaging
Deadline March 20
Please RT 🙏
👇
mpi.nl/career-education/vacancies/vacancy/fully-funded-4-year-phd-position-neuroai-language
yes i will be around -- let's do it!
(I'll keep a part-time affiliation with the @uva.nl as Assistant Professor of Cognitive AI, continuing to teach all things AI and the brain/mind, so I'll still be around in Amsterdam)
Job update: Next week I start as a group leader at the Max Planck Institute for Psycholinguistics in Nijmegen @mpi-nl.bsky.social 🧠
Building the Language and Predictive Computation group -- using LLMs to model language in the mind/brain, and vice versa.
Hiring soon!
What is the relationship between memorization and generalization in AI? Is there a fundamental tradeoff? In infinitefaculty.substack.com/p/memorizati... I’ve reviewed some of the evolving perspectives on memorization & generalization in machine learning, from classic perspectives through LLMs.
Interesting convergence:
The trick that made predictive self-supervised vision models work seems to be what the brain was doing all along
w/ @predictivebrain.bsky.social: visual cortex is most sensitive to high-level prediction errors -- even in V1
Now published:
journals.plos.org/ploscompbiol...
This paper had a pretty shocking headline result (40% of voxels!), so I dug into it, and I think it is wrong. Essentially: they compare two noisy measures and find that about 40% of voxels have different sign between the two. I think this is just noise!
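A quick simulation illustrates the point: when two measures share the same underlying effect but each carries independent noise, a large fraction of sign disagreements can arise from noise alone. All numbers below are illustrative choices of mine, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical setup: each "voxel" has one underlying effect,
# measured twice with independent noise.
true_effect = rng.normal(0.0, 1.0, n)               # shared underlying signal
measure_a = true_effect + rng.normal(0.0, 1.5, n)   # noisy measurement 1
measure_b = true_effect + rng.normal(0.0, 1.5, n)   # noisy measurement 2

# Fraction of "voxels" whose sign differs between the two measures,
# even though both estimate the very same effect.
sign_flips = np.mean(np.sign(measure_a) != np.sign(measure_b))
print(f"sign disagreement: {sign_flips:.2f}")  # ~0.40 from noise alone
```

With these noise levels the two measures correlate at about 0.31, which analytically predicts roughly 40% sign disagreement between them, despite zero true disagreement.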
so nice to see this out sush!!
archive.ph/smEj0 (or, unpaywalled 🤫)
This is, without a doubt, the best popular article about the current state of AI. And on whether LLMs are truly 'thinking' or 'understanding' -- and what that question even means
www.newyorker.com/magazine/202...
omg. what journal? name and shame
huh! if these effects are similar and consistent, I think it should work, but the question is how you get a vector representation for novel pseudowords. We currently use lexicosemantic word vectors, and they are undefined for novel words.
So how to represent the novel words? Very interesting test case
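One conceivable workaround (my assumption, not something the thread proposes) is fastText-style subword composition: any string, pseudowords included, gets a vector by averaging character n-gram embeddings. A toy sketch with an untrained, randomly initialized n-gram table:

```python
import numpy as np

def char_ngrams(word: str, n: int = 3) -> list[str]:
    """Character n-grams with boundary markers, fastText-style."""
    padded = f"<{word}>"
    return [padded[i:i + n] for i in range(len(padded) - n + 1)]

class SubwordEmbedder:
    """Toy embedder: a word's vector is the mean of its n-gram vectors.

    The n-gram table is random (untrained); the point is only that any
    string, including a novel pseudoword, gets a well-defined vector.
    """
    def __init__(self, dim: int = 50, buckets: int = 10_000, seed: int = 0):
        rng = np.random.default_rng(seed)
        self.table = rng.normal(0.0, 1.0, (buckets, dim))
        self.buckets = buckets

    def embed(self, word: str) -> np.ndarray:
        idx = [hash(g) % self.buckets for g in char_ngrams(word)]
        return self.table[idx].mean(axis=0)

emb = SubwordEmbedder()
vec = emb.embed("blicket")  # a pseudoword still gets a vector
print(vec.shape)            # (50,)
```

A trained version of such a model would make pseudoword vectors meaningful rather than merely defined, but the mechanism is the same.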
@nicolecrust.bsky.social might be of interest
New paper on memorability, with @davogelsang.bsky.social !
New preprint out together with @mheilbron.bsky.social
We find that a stimulus' representational magnitude—the L2 norm of its DNN representation—predicts intrinsic memorability not just for images, but for words too.
www.biorxiv.org/content/10.1...
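As a minimal sketch of the predictor itself (random stand-in data, not the paper's stimuli or network), representational magnitude is just the Euclidean norm of each item's embedding:

```python
import numpy as np

# Hypothetical example: rank items by the L2 norm ("representational
# magnitude") of their DNN embeddings. `embeddings` stands in for e.g.
# penultimate-layer activations; here it is random.
rng = np.random.default_rng(0)
embeddings = rng.normal(0.0, 1.0, (8, 512))      # 8 items x 512-dim features

magnitudes = np.linalg.norm(embeddings, axis=1)  # one L2 norm per item
ranked = np.argsort(-magnitudes)                 # largest magnitude first
print(magnitudes.round(2))
```

The claim in the preprint is then that this per-item scalar correlates with how well people remember the item, for images and words alike.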
Together, our results support a classic idea: cognitive limitations can be a powerful inductive bias for learning
Yet they also reveal a curious distinction: a model with more human-like *constraints* is not necessarily more human-like in its predictions
This paradox (better language models yielding worse behavioural predictions) is not accounted for by previously proposed explanations: the mechanism appears distinct from those linked to superhuman training scale or memorisation