Delighted that Sasha's (first-year PhD!) work using mech interp to study complex syntax constructions won an Outstanding Paper Award at EMNLP!
Also delighted the ACL community continues to recognize unabashedly linguistic topics like filler-gaps... and the huge potential for LMs to inform such topics!
Posts by Qing Yao
UT Austin Linguistics is hiring in computational linguistics!
Assistant or Associate Professor.
We have a thriving group sites.utexas.edu/compling/ and a long proud history in the space. (For instance, fun fact, Jeff Elman was a UT Austin Linguistics Ph.D.)
faculty.utexas.edu/career/170793
🤘
Excited to present this at COLM tomorrow! (Tuesday, 11:00 AM poster session)
Heading to #COLM2025 to present my first paper w/ @jennhu.bsky.social @kmahowald.bsky.social !
When: Tuesday, 11 AM – 1 PM
Where: Poster #75
Happy to chat about my work and topics in computational linguistics & cogsci!
Also, I'm on the PhD application journey this cycle!
Paper info 👇:
I’m at #COLM2025 from Wed with:
@siyuansong.bsky.social Tue am introspection arxiv.org/abs/2503.07513
@qyao.bsky.social Wed am controlled rearing: arxiv.org/abs/2503.20850
@sashaboguraev.bsky.social INTERPLAY ling interp: arxiv.org/abs/2505.16002
I’ll talk at INTERPLAY too. Come say hi!
I will be giving a short talk on this work at the COLM Interplay workshop on Friday (also to appear at EMNLP)!
Will be in Montreal all week and excited to chat about LM interpretability + its interaction with human cognition and ling theory.
Picture of the UT Tower with "UT Austin Computational Linguistics" written in larger font, and "Humans processing computers processing humans processing language" in smaller font
The compling group at UT Austin (sites.utexas.edu/compling/) is looking for PhD students!
Come join me, @kmahowald.bsky.social, and @jessyjli.bsky.social as we tackle interesting research questions at the intersection of ling, cogsci, and ai!
Some topics I am particularly interested in:
LMs’ dative alternation preferences come from both direct evidence and more general properties of the language. They don’t just memorize; they generalize! See the paper for details on animacy too (interestingly, more complicated!)
LMs' length preference vs. perplexity on the validation set. Models whose training-set manipulation reduces their exposure to short-first orderings are the ones with a weaker short-first preference.
Learned length preference changes with the input manipulation. That is, the more “long-first” we make the input, the weaker the short-first preference. We think this shows the dative preferences in models come not just from datives but from general properties of English.
For example, “The primates use tools to eat the green coconuts from the shop” becomes:
- Short-first: [tools] use [the primates] [[to] eat [[the] [green] coconuts [from the shop]]]
- Long-first: [[[from the shop] [the] coconuts [green]] eat [to]] use [the primates] [tools]
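The core of this manipulation can be pictured as recursively sorting sibling constituents by length. A toy sketch only: the nested-list tree format, function names, and tie-breaking here are made up for illustration, not the paper's actual pipeline (which works over full corpus parses):

```python
def length(tree):
    """Number of tokens dominated by a (sub)tree."""
    if isinstance(tree, str):          # leaf: a single token
        return 1
    return sum(length(child) for child in tree)

def reorder(tree, long_first=False):
    """Recursively sort each node's children by token length."""
    if isinstance(tree, str):
        return tree
    children = [reorder(child, long_first) for child in tree]
    return sorted(children, key=length, reverse=long_first)

def flatten(tree):
    """Read the reordered tree back out as a flat token list."""
    if isinstance(tree, str):
        return [tree]
    return [tok for child in tree for tok in flatten(child)]
```

E.g. a tree for "the primates ate the green coconuts quickly" comes out verb-first under short-first ordering and object-first under long-first ordering, mirroring the bracketed examples above.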
We think it plausibly comes not from the datives alone but from general properties of English (which is “short-first”). To test that, we manipulate the global structure of the input, creating a corpus where every sentence is short-first and one where they’re all long-first.
DO preference vs. length difference when we remove all datives (left) and all cases with 2 post-verbal arguments (right). The Pearson correlation r is now -0.24 for the no-datives condition and -0.22 for the no-2-post-verbal-arguments condition.
Now what if we get rid of datives, and further all constructions which have two postverbal arguments? Now we see the length preference is back again. Yes it’s smaller (direct evidence matters), but why is it there? Where does it come from if not the datives?
DO preference vs. length difference for the balanced and swapped-datives manipulations. Left: balanced, Pearson correlation r = -0.33; right: swapped-datives, Pearson correlation r = -0.03.
What if we modify the corpus such that for every DO there is a PO (balancing the direct evidence)? The preferences are still present! But what if we now SWAP every dative in the input, so that every DO becomes a PO and every PO a DO? The preference essentially disappears (but does not flip!)
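The swap manipulation can be pictured as a string-level rewrite over a parsed dative's pieces. A hypothetical toy, assuming the verb, recipient, and theme have already been identified (the real corpus edit of course handles tense, pronouns, and parse structure):

```python
def swap_dative(verb, recipient, theme, is_do):
    """Rebuild a dative in the opposite construction (toy, string-level)."""
    if is_do:
        # DO -> PO: "gave him the book" -> "gave the book to him"
        return f"{verb} {theme} to {recipient}"
    # PO -> DO: "gave the book to him" -> "gave him the book"
    return f"{verb} {recipient} {theme}"
```

Applying this to every dative in the corpus flips the direct evidence while leaving all the indirect (non-dative) evidence untouched.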
Left: plot showing DO preference vs. human judgments, Pearson's r = 0.5; right: plot showing DO preference as a function of (log) length difference between the recipient and the theme, Pearson's r = -0.43, where the negative sign indicates short-first is preferred.
To test this, we train small LMs on manipulated datasets where we vary direct (datives) and indirect (non-datives) evidence and test the change in their preferences. First, we see that we get human-like preferences on a model trained on our default BabyLM corpus.
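A minimal sketch of the preference measure, assuming we already have LM log-probabilities for each DO/PO minimal pair (the function names and the pure-Python Pearson here are illustrative, not the paper's code):

```python
import math

def do_preference(logp_do, logp_po):
    """DO preference: log P(DO) - log P(PO); positive means DO order is favored."""
    return logp_do - logp_po

def pearson(xs, ys):
    """Pearson correlation between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# The plots above correlate scores like these, e.g.:
# r = pearson(length_diffs, [do_preference(a, b) for a, b in pair_logps])
```

A negative r between length difference (recipient minus theme) and DO preference is exactly the "short-first" signature reported in the plots.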
The English dative preferences come from more general features of the language: short constituents tend to appear earlier all over, not just in the dative. We hypothesize LMs rely on direct evidence from datives but also general word order preferences (e.g. “easy first”) from non-datives.
Examples of double-object (DO) and prepositional-object (PO) datives with short-first and long-first word orders:
- DO (long-first): She gave the boy who signed up for class and was excited it.
- PO (short-first): She gave it to the boy who signed up for class and was excited.
- DO (short-first): She gave him the book that everyone was excited to read.
- PO (long-first): She gave the book that everyone was excited to read to him.
LMs learn argument-based preferences for dative constructions (preferring recipient first when it’s shorter), consistent with humans. Is this from memorizing preferences in training? New paper w/ @kanishka.bsky.social, @weissweiler.bsky.social, @kmahowald.bsky.social
arxiv.org/abs/2503.20850