
Posts by Tim Kietzmann


📢 PhD position in the NeuroAI of Language

Why can LLMs predict brain activity so well? We're hiring a PhD student to find out: AI interpretability meets neuroimaging.
Deadline March 20
Please RT 🙏
👇
mpi.nl/career-education/vacancies/vacancy/fully-funded-4-year-phd-position-neuroai-language

1 month ago

📢 PhD position in Developmental Language Modelling
(PLZ RT)

What can human language acquisition teach us about training language models? Join us as a PhD student!
mpi.nl/career-education/vacancies/vacancy/fully-funded-4-year-phd-position-developmental-language @carorowland.bsky.social
@mpi-nl.bsky.social

1 month ago

What a milestone 🤯. The Institute of Cognitive Science in Osnabrück, the first of its kind in Germany, is now home to more than 1,000 cognitive science students across our programs. Wild.

2 months ago

What happens if you hook up an energy-efficiency-optimising RNN to active vision input?

It learns predictive remapping and path integration into allocentric scene coordinates.

Now out in Patterns: www.cell.com/patterns/ful...

5 months ago

Excited to share my first paper: Model–Behavior Alignment under Flexible Evaluation: When the Best-Fitting Model Isn’t the Right One (NeurIPS 2025). Link below.

5 months ago

Our work reveals a sharp trade-off between predictive accuracy and model identifiability. Flexible mappings maximize predictivity, but blur the distinction between competing computational hypotheses.

5 months ago
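The trade-off can be made concrete with a toy numpy sketch (illustrative only, not the paper's actual analysis): synthetic responses are generated from a model A, while a competing model B is defined as an orthogonal rotation of A's feature space, so every B "unit" mixes all A units. Under a flexible linear (ridge) readout, both models predict held-out responses equally well, so predictive accuracy alone cannot identify which model generated the data.

```python
import numpy as np

rng = np.random.default_rng(0)
n_stim, n_feat = 200, 50

# Model A's features; model B's features are an orthogonal rotation of A's,
# so each B "unit" mixes all A units, yet spans the same linear subspace.
A = rng.standard_normal((n_stim, n_feat))
R, _ = np.linalg.qr(rng.standard_normal((n_feat, n_feat)))
B = A @ R

# Synthetic "brain" responses generated by model A, plus a little noise.
y = A @ rng.standard_normal(n_feat) + 0.1 * rng.standard_normal(n_stim)

def ridge_r2(X, y, lam=1.0, n_train=100):
    """Fit a flexible linear (ridge) mapping on the first half of the data,
    return out-of-sample R^2 on the second half."""
    Xtr, Xte, ytr, yte = X[:n_train], X[n_train:], y[:n_train], y[n_train:]
    w = np.linalg.solve(Xtr.T @ Xtr + lam * np.eye(X.shape[1]), Xtr.T @ ytr)
    resid = yte - Xte @ w
    return 1.0 - resid @ resid / ((yte - yte.mean()) @ (yte - yte.mean()))

r2_A, r2_B = ridge_r2(A, y), ridge_r2(B, y)
# Both mappings predict held-out responses near-perfectly: the flexible
# readout absorbs the rotation, so A and B are indistinguishable by fit.
```

A rigid mapping (comparing features unit by unit) would separate A and B immediately; it is exactly the flexibility of the readout that erases the distinction between the two hypotheses.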

🚨 Out in Patterns!

We asked ourselves whether complex neural dynamics like predictive remapping and allocentric coding can emerge from simple physical principles, in this case energy efficiency. Turns out they can!
More information in the 🧵 below.

I am super excited to see this one out in the wild.

5 months ago

We went back to the drawing board to think about what information is available to the visual system upon which it could build scene representations.

The outcome: a self-supervised training objective based on active vision that beats the SOTA on NSD representational alignment. 👇

5 months ago
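The post doesn't spell out the objective, but the core idea of a self-supervised signal from active vision can be sketched with a toy stand-in (the "glimpse" features, scanpath, and linear predictor below are all invented for illustration, not the paper's model): features at the current fixation are trained to predict features at the next fixation.

```python
import numpy as np

rng = np.random.default_rng(1)
T, d = 500, 16

# Toy "glimpse" features along a scanpath: a slow random walk, so
# consecutive fixations are strongly related (stand-in for real features).
glimpses = np.cumsum(0.1 * rng.standard_normal((T, d)), axis=0)

# Self-supervised objective: predict the next glimpse from the current one.
W = np.zeros((d, d))
lr = 0.005
for _ in range(20):                                  # epochs
    for t in range(T - 1):
        err = glimpses[t] @ W - glimpses[t + 1]      # prediction error
        W -= lr * np.outer(glimpses[t], err)         # SGD on 0.5 * ||err||^2

# The learned predictor far outperforms the trivial all-zeros predictor.
mse_model = np.mean((glimpses[:-1] @ W - glimpses[1:]) ** 2)
mse_zero = np.mean(glimpses[1:] ** 2)
```

In this toy world the optimal predictor is close to the identity (a random walk's best guess for the next step is the current one); with real glimpse features, the exploitable structure comes from the scene and the upcoming saccade, which is what such an objective can latch onto.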

We managed to integrate brain scans into LLMs for interactive brain reading and more. Check out Vicky's post below. Super excited about this one!

5 months ago

Preview: "Connecting neural activity, perception in the visual system." Figuring out how the brain uses information from visual neurons may require new tools. I asked nine experts to weigh in.

Figuring out how the brain uses information from visual neurons may require new tools, writes @neurograce.bsky.social. Hear from 10 experts in the field.

#neuroskyence

www.thetransmitter.org/the-big-pict...

6 months ago

Hi, we will have three NeuroAI postdoc openings (3 years each, fully funded) to work with Sebastian Musslick (@musslick.bsky.social), Pascal Nieters and myself on task-switching, replay, and visual information routing.

Reach out if you are interested in any of the above, I'll be at CCN next week!

8 months ago

Do come and talk to us about any of the above and whatever #NeuroAI is on your mind. Excited for this upcoming #CCN2025, and looking forward to exchanging ideas with all of you.

All posters can be found here: www.kietzmannlab.org/ccn2025/

8 months ago

And last but not least, Fraser Smith's work on understanding how occluded objects are represented in visual cortex.

Time: Tuesday, August 12, 1:30 – 4:30 pm
Location: A66, de Brug & E‑Hall

8 months ago

Please also check out Songyun Bai's poster on further AVS findings that we were involved in: Neural oscillations encode context-based informativeness during naturalistic free viewing.

Time: Tuesday, August 12, 1:30 – 4:30 pm
Location: A165, de Brug & E‑Hall

8 months ago

Friday keeps on giving. Interested in representational drift in macaques? Then come check out Dan's (@anthesdaniel.bsky.social) work providing the first evidence for a sequence of three distinct, yet comparatively stable clusters in V4.

Time: August 15, 2-5pm
Location: Poster C142, de Brug & E‑Hall

8 months ago

Another Friday feat: Philip Sulewski's (@psulewski.bsky.social) and @thonor.bsky.social's modelling work: Predictive remapping and allocentric coding as consequences of energy efficiency in RNN models of active vision.

Time: Friday, August 15, 2:00 – 5:00 pm
Location: Poster C112, de Brug & E‑Hall

8 months ago

Also on Friday, Victoria Bosch (@initself.bsky.social) presents her superb work on fusing brain scans with LLMs.

CorText-AMA: brain-language fusion as a new tool for probing visually evoked brain responses

Time: 2 – 5 pm
Location: Poster C119, de Brug & E‑Hall
2025.ccneuro.org/poster/?id=n...

8 months ago

On Friday, Carmen @carmenamme.bsky.social has a talk & poster on exciting AVS analyses. Encoding of Fixation-Specific Visual Information: No Evidence of Information Carry-Over between Fixations

Talk: 12:00 – 1:00 pm, Room C1.04
Poster: C153, 2:00 – 5:00 pm, de Brug & E‑Hall
www.kietzmannlab.org/avs/

8 months ago

Also on Tuesday, Rowan Sommers will present our new WiNN architecture. Title: Sparks of cognitive flexibility: self-guided context inference for flexible stimulus-response mapping by attentional routing

Time: August 12, 1:30 – 4:30 pm
Location: A136, de Brug & E‑Hall

8 months ago

On Tuesday, Sushrut's (@sushrutthorat.bsky.social) Glimpse Prediction Networks will make their debut: a self-supervised deep learning approach for scene representations that align extremely well with the human ventral stream.

Time: August 12, 1:30 – 4:30 pm
Location: A55, de Brug & E‑Hall

8 months ago

In the "Modeling the Physical Brain" event, I will be speaking about our work on topographic neural networks.

Time: Monday, August 11, 11:30 am – 6:00 pm
Location: Room A2.07
Paper: www.nature.com/articles/s41...

8 months ago

First, @zejinlu.bsky.social will talk about how adopting a human developmental visual diet yields robust, shape-based AI vision. Biological inspiration for the win!

Talk Time/Location: Monday, 3-6 pm, Room A2.11
Poster Time/Location: Friday, 2-5 pm, C116 at de Brug & E‑Hall

8 months ago

OK, time for a CCN run-up thread. Let me tell you about all the lab’s projects present at CCN this year. #CCN2025

8 months ago
Preview: "Using AI to 'see' what we see." Fed the right information, large language models can match what the brain sees when it takes in an everyday scene such as children playing or a big city skyline, a new study led by Ian Charest finds.

#AI "Ultimately, this is a step forward in understanding how the human brain understands meaning from the visual world." #LLMs @mila-quebec.bsky.social @adriendoerig.bsky.social @timkietzmann.bsky.social @natmachintell.nature.com
nouvelles.umontreal.ca/en/article/2...

8 months ago

A long time coming, now out in @natmachintell.nature.com: Visual representations in the human brain are aligned with large language models.

Check it out (and come chat with us about it at CCN).

8 months ago

For completeness' sake: we know the other team and cite both of their papers in the preprint.

8 months ago

The devil is in the details, as usual.

They (and others) focused on acuity, while we show that the actual gains do not come from acuity but the development of contrast sensitivity.

8 months ago

To be honest, so far it has exceeded our expectations across the board.

A big surprise was that visual acuity (i.e. initial blurring) had so little impact. This is what others had focused on in the past. Instead, the development of contrast sensitivity gets you most of the way there.

9 months ago
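The acuity-versus-contrast-sensitivity distinction can be made concrete with two toy image transforms (illustrative definitions with invented parameters, not the paper's actual training diet): an acuity limit acts like a low-pass blur, removing high spatial frequencies, while a contrast-sensitivity limit attenuates and thresholds low-contrast structure at any spatial frequency.

```python
import numpy as np

rng = np.random.default_rng(2)
img = rng.random((64, 64))   # stand-in for a training image, values in [0, 1]

def reduce_acuity(img, cutoff):
    """Acuity limit as a low-pass filter: attenuate spatial frequencies
    above `cutoff` (cycles/pixel). Toy version of 'initial blurring'."""
    fy = np.fft.fftfreq(img.shape[0])[:, None]
    fx = np.fft.fftfreq(img.shape[1])[None, :]
    radial = np.sqrt(fx ** 2 + fy ** 2)
    gain = np.exp(-(radial / cutoff) ** 2)   # smooth low-pass envelope
    return np.real(np.fft.ifft2(np.fft.fft2(img) * gain))

def reduce_contrast_sensitivity(img, sensitivity):
    """Contrast-sensitivity limit: shrink contrast around the mean and zero
    out contrast below a detection threshold. `sensitivity` in (0, 1]."""
    contrast = img - img.mean()
    threshold = (1.0 - sensitivity) * np.abs(contrast).max()
    visible = np.where(np.abs(contrast) < threshold, 0.0, contrast)
    return img.mean() + sensitivity * visible

blurred = reduce_acuity(img, cutoff=0.1)
low_cs = reduce_contrast_sensitivity(img, sensitivity=0.3)
```

Sweeping `cutoff` upward over training simulates developing acuity; sweeping `sensitivity` upward simulates developing contrast sensitivity. The thread's claim is that curricula of the second kind, not the first, deliver most of the robustness gains.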

Exciting new preprint from the lab: “Adopting a human developmental visual diet yields robust, shape-based AI vision”. A most wonderful case where brain inspiration massively improved AI solutions.

Work with @zejinlu.bsky.social @sushrutthorat.bsky.social and Radek Cichy

arxiv.org/abs/2507.03168

9 months ago