Posts by Nina Nusbaumer

Presented our new **reading time benchmark for Human Sentence Processing modeling** at the Computational Psycholinguistics Meeting in Utrecht 🧠

We expect to release it as open source in the coming months. Keep an eye out! 👀

4 months ago
Poster title: Does multimodal pre-activation influence linguistic expectations in LLMs and humans?

Authors: Sasha Kenjeeva, Giovanni Cassani, Noortje Venhuizen, Afra Alishahi

Poster title: Generalizing Without Evidence: How Transformer Models Infer Syntactic Rules From Sparse Input

Authors: Mark van den Hoorn, Raquel G. Alhama

Poster title: Dependency Length, Syntactic Complexity & Memory: A Reading Time Benchmark for Sentence Processing Modeling

Authors: Nina Nusbaumer, Corentin Bel, Iria de-Dios-Flores, Guillaume Wisniewski, Benoit Crabbé

Poster title: The success of Neural Language Models on syntactic island effects is not universal: strong wh-island sensitivity in English but not in Dutch

Authors: Michelle Suijkerbuijk, Naomi Tachikawa Shapiro, Peter de Swart, Stefan L. Frank

Cool posters from day 2!

@sashakenjeeva.bsky.social openreview.net/forum?id=Vtd...

github.com/markvandenho... openreview.net/forum?id=rX3...

@nina-nusbaumer.bsky.social openreview.net/forum?id=GRz...

www.ru.nl/personen/sui... openreview.net/forum?id=NcJ...

4 months ago
Preview: To model human linguistic prediction, make LLMs less superhuman
When people listen to or read a sentence, they actively make predictions about upcoming words: words that are less predictable are generally read more slowly than predictable ones. The success of larg...

Another banger from @tallinzen.bsky.social.

Also fits with some of the criticisms of Centaur, and with my faculty-based approach more generally: if you want LLMs to model human cognition, give them architecture more akin to human faculty psychology, such as long- and short-term memory.

arxiv.org/abs/2510.05141
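
As a toy illustration of the idea in the preview (making an LLM's predictions "less superhuman"), here is a hedged sketch that recomputes GPT-2 surprisal while truncating the left context, loosely mimicking a limited working memory. The model choice, window size, and the function itself are my illustrative assumptions, not the paper's actual method.

```python
# Hypothetical sketch: per-token surprisal under a truncated context window,
# loosely mimicking limited working memory. NOT the linked paper's setup;
# model ("gpt2") and max_context=8 are illustrative assumptions.
import math
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def surprisal_with_limited_memory(sentence: str, max_context: int = 8):
    """Per-token surprisal (-log2 p), conditioning on at most `max_context` prior tokens."""
    ids = tokenizer(sentence, return_tensors="pt").input_ids[0]
    results = []
    for i in range(1, len(ids)):
        # Truncate the left context to simulate a bounded memory span.
        context = ids[max(0, i - max_context):i].unsqueeze(0)
        with torch.no_grad():
            logits = model(context).logits[0, -1]
        logp = torch.log_softmax(logits, dim=-1)[ids[i]].item()
        results.append((tokenizer.decode(ids[i]), -logp / math.log(2)))
    return results

print(surprisal_with_limited_memory("The horse raced past the barn fell."))
```

Shrinking `max_context` should raise surprisal on words that depend on distant material, which is one way a "less superhuman" predictor could line up better with human reading times.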

6 months ago
First paper is out! Had so much fun presenting it in Marseille last July 🇨🇵

We explore how transformers handle compositionality by comparing the representations of the idiomatic and literal meanings of the same noun phrase (e.g. "silver spoon").

aclanthology.org/2025.jeptaln...
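
For readers curious what such a comparison could look like in practice, here is a minimal, hypothetical sketch (not the paper's actual method): it extracts a contextual embedding for "silver spoon" in an idiomatic and a literal sentence and compares them with cosine similarity. The model (bert-base-uncased), the last-layer choice, and mean-pooling are all my assumptions.

```python
# Hypothetical probe: does a transformer represent the same noun phrase
# differently in idiomatic vs. literal contexts? Model, layer, and pooling
# are illustrative assumptions, not the paper's setup.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

def phrase_vector(sentence: str, phrase: str) -> torch.Tensor:
    """Mean-pool the last-layer states of the tokens spanning `phrase`."""
    enc = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        states = model(**enc).last_hidden_state[0]
    # Locate the phrase's token span by matching its wordpiece ids.
    phrase_ids = tokenizer(phrase, add_special_tokens=False).input_ids
    ids = enc.input_ids[0].tolist()
    for i in range(len(ids) - len(phrase_ids) + 1):
        if ids[i:i + len(phrase_ids)] == phrase_ids:
            return states[i:i + len(phrase_ids)].mean(dim=0)
    raise ValueError("phrase not found in sentence")

idiomatic = phrase_vector("She was born with a silver spoon in her mouth.", "silver spoon")
literal = phrase_vector("He polished the silver spoon before dinner.", "silver spoon")
print(torch.cosine_similarity(idiomatic, literal, dim=0).item())
```

A similarity well below that of two literal uses would suggest the model separates the idiomatic reading from the compositional one.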

6 months ago