
Posts by Micha Heilbron

Donders Session - 16 April | Radboud University Donders Debate with Micha Heilbron and Pim Haselager: Artificial neural network models are adequate mechanistic models of the mind

Nijmegen friends: Tomorrow (10–12) I'll be debating Pim Haselager at a Donders Session on the thesis:

"Artificial neural network models are adequate mechanistic models of the mind"

I'm defending, he's opposing. Should be fun. Come join us!
www.ru.nl/en/donders-i...

@dondersinst.bsky.social

6 days ago 12 0 1 0
A world full of thinkers: human, animal, plant and AI - Agenda - Maastricht University

On 23 April, @mheilbron.bsky.social and I are giving a talk in beautiful #Maastricht on "A world full of thinkers". An evening full of stories about the thinking and doing of humans, animals, plants and AI! 🧠🐝🌿🤖 I hope to see you there! www.maastrichtuniversity.nl/nl/events/ee... #wetenschap #psychologie #biologie

1 week ago 5 2 0 0
Fully funded 4-year PhD position in Language Evolution using Communication Games | Max Planck Institute

I'm hiring! 📢 Fully funded 4-year PhD position in Language Evolution using Communication Games at @mpi-nl.bsky.social. Come work with me on how different social pressures shape the evolution of new communication systems in the lab! Deadline for application is May 18th! share.google/fGTKbFS4v4Gb...

1 week ago 30 27 0 0

Well, not necessarily without prediction, but recent evidence suggested that predictability effects were mostly high-level (journals.plos.org/ploscompbiol...; direct.mit.edu/imag/article...). Our new work shows an interesting twist: it seems to depend on eccentricity (or sensory reliability)

1 week ago 0 0 0 0

Classic predictive coding: V1 predicts low-level features, higher areas high-level. But recent studies + AI models suggest prediction happens at higher levels of abstraction.

Who's right?

In new work w/ @wiegerscheurer.bsky.social we find that both are – distinct regimes across the visual field

1 week ago 25 6 1 0

New preprint! w/ @mheilbron.bsky.social

We found that, even during simple natural scene viewing, human visual cortex predicts—hierarchically in central vision and at higher levels peripherally—reconciling classical predictive coding with recent evidence from animal models and AI (e.g. JEPA) (1/10)

2 weeks ago 29 11 1 1
Protect Academic Life in Iran We, the undersigned academics and researchers from around the world, express our profound concern over recent military strikes on Iran, the retaliatory responses, and the reported impact on civilian l...

Academic friends,
It's beyond heartbreaking to watch what's unfolding in Iran & the region.
A few of us drafted an open letter calling for protection of civilians & of educational, research, medical & cultural institutions.

Please read & sign if you agree:
sites.google.com/view/protect...

#IranWar

2 weeks ago 83 51 2 2

Interested in pursuing a PhD in NLP/cog-sci?

Studying language learning in LMs from the perspective of human language acquisition? Few more days to apply!!

3 weeks ago 7 1 0 0

🚨 We're very happy to introduce TRIBE v2: a foundation model of the brain's responses to sight, sound & language.

📄 Paper: ai.meta.com/research/pub...
▶️ Demo: aidemos.atmeta.com/tribev2/
💻 Code: github.com/facebookrese...
🤗 Model: huggingface.co/facebook/tri...

3 weeks ago 57 19 3 5

📢 PhD position in Developmental Language Modelling
(PLZ RT)

What can human language acquisition teach us about training language models? Join us as a PhD!
mpi.nl/career-education/vacancies/vacancy/fully-funded-4-year-phd-position-developmental-language @carorowland.bsky.social
@mpi-nl.bsky.social

1 month ago 26 35 1 3
Fully Funded 4-Year PhD Position In Developmental Language Modelling | Max Planck Institute

mpi.nl/career-education/vacancies/vacancy/fully-funded-4-year-phd-position-developmental-language

1 month ago 1 0 0 0

📢 PhD position in the NeuroAI of Language

Why can LLMs predict brain activity so well? We're hiring a PhD student to find out -- AI interpretability meets neuroimaging
Deadline March 20
Please RT 🙏
👇
mpi.nl/career-education/vacancies/vacancy/fully-funded-4-year-phd-position-neuroai-language

1 month ago 50 40 2 1

yes i will be around -- let's do it!

1 month ago 1 0 1 0

(I'll keep a part-time affiliation with the @uva.nl as Assistant Professor of Cognitive AI, continuing to teach all things AI and the brain/mind, so I'll still be around in Amsterdam)

1 month ago 6 0 0 0

Job update: Next week I start as a group leader at the Max Planck Institute for Psycholinguistics in Nijmegen @mpi-nl.bsky.social 🧠

Building the Language and Predictive Computation group -- using LLMs to model language in the mind/brain, and vice versa.

Hiring soon!

1 month ago 55 2 3 0
Memorization vs. generalization in deep learning: implicit biases, benign overfitting, and more Or: how I learned to stop worrying and love the memorization

What is the relationship between memorization and generalization in AI? Is there a fundamental tradeoff? In infinitefaculty.substack.com/p/memorizati... I’ve reviewed some of the evolving perspectives on memorization & generalization in machine learning, from classic perspectives through LLMs.

2 months ago 136 27 4 5

Interesting convergence:

The trick that made predictive self-supervised vision models work seems to be what the brain was doing all along

w/ @predictivebrain.bsky.social: visual cortex is most sensitive to high-level prediction errors -- even in V1

Now published:
journals.plos.org/ploscompbiol...

2 months ago 24 5 0 0

This paper had a pretty shocking headline result (40% of voxels!), so I dug into it, and I think it is wrong. Essentially: they compare two noisy measures and find that about 40% of voxels have different sign between the two. I think this is just noise!

3 months ago 238 99 8 9
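The intuition behind the critique above can be checked with a toy simulation (this is an illustrative sketch, not the paper's actual data or analysis; all numbers here are made up): when two independent noisy measurements of the same small, mostly-positive effect are compared voxel by voxel, a large fraction of sign disagreements falls out of noise alone.

```python
import numpy as np

rng = np.random.default_rng(0)
n_voxels = 10_000

# Hypothetical small true effects, mostly of the same (positive) sign
true_effect = rng.normal(0.1, 0.05, n_voxels)

# Measurement noise of a magnitude comparable to the effect itself
noise_sd = 0.2
m1 = true_effect + rng.normal(0, noise_sd, n_voxels)  # measurement 1
m2 = true_effect + rng.normal(0, noise_sd, n_voxels)  # measurement 2

# Fraction of voxels where the two noisy measurements disagree in sign
sign_flip_rate = np.mean(np.sign(m1) != np.sign(m2))
print(f"{sign_flip_rate:.0%} of voxels disagree in sign")
```

With these (arbitrary) parameters the disagreement rate lands in the neighborhood of 40%, even though the underlying effects nearly all share the same sign — which is exactly the point of the critique.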

so nice to see this out sush!!

5 months ago 1 0 1 0

archive.ph/smEj0 (or, unpaywalled 🤫)

5 months ago 2 0 0 0
The Case That A.I. Is Thinking ChatGPT does not have an inner life. Yet it seems to know what it’s talking about.

This is, without a doubt, the best popular article about the current state of AI — and on whether LLMs are truly 'thinking' or 'understanding', and what that question even means.

www.newyorker.com/magazine/202...

5 months ago 5 0 1 0

omg. what journal? name and shame

7 months ago 0 0 0 0

huh! if these effects are similar and consistent, I think it should work, but the question is how you get a vector representation for novel pseudowords. We currently use lexicosemantic word vectors, and they are undefined for novel words.

so how to represent the novel words? v. interesting test case

7 months ago 0 0 0 0

@nicolecrust.bsky.social might be of interest

7 months ago 0 0 0 0

New paper on memorability, with @davogelsang.bsky.social !

7 months ago 12 0 0 0
Representational magnitude as a geometric signature of image and word memorability What makes some stimuli more memorable than others? While memory varies across individuals, research shows that some items are intrinsically more memorable, a property quantifiable as “memorability”. ...

New preprint out together with @mheilbron.bsky.social

We find that a stimulus' representational magnitude—the L2 norm of its DNN representation—predicts intrinsic memorability not just for images, but for words too.
www.biorxiv.org/content/10.1...

7 months ago 25 6 4 1
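The measure named in the preprint — representational magnitude, the L2 norm of a stimulus' DNN representation — is straightforward to compute. A minimal sketch, using made-up embedding vectors in place of real DNN features (the function name and example values are illustrative assumptions, not the authors' code):

```python
import numpy as np

def representational_magnitude(embedding):
    """L2 norm of a stimulus' DNN representation, i.e. its representational magnitude."""
    return float(np.linalg.norm(np.asarray(embedding, dtype=float)))

# Hypothetical embeddings standing in for DNN features of two stimuli
vec_a = np.array([0.5, -1.0, 2.0])
vec_b = np.array([0.1, 0.2, -0.1])

# The preprint's claim: stimuli with larger magnitude tend to be more memorable
print(representational_magnitude(vec_a))
print(representational_magnitude(vec_b))
```

On this logic, the stimulus behind `vec_a` (larger norm) would be predicted to be intrinsically more memorable than the one behind `vec_b`.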

Together, our results support a classic idea: cognitive limitations can be a powerful inductive bias for learning

Yet they also reveal a curious distinction: a model with more human-like *constraints* is not necessarily more human-like in its predictions

8 months ago 1 0 0 0

This paradox – better language models yielding worse behavioural predictions – could not be accounted for by existing explanations: the mechanism appears distinct from those linked to superhuman training scale or memorisation

8 months ago 1 0 1 0