
Posts by Anzi Wang

today at 12:10!

3 weeks ago 2 0 0 0

Friday time! Poster 5 (# 365) @grushaprasad.bsky.social, @shotamomma.bsky.social and I leverage an ACT-R based parsing model (from Grusha’s previous work!) to evaluate potential explanations of gradient effects in cross-structural priming. Explicit parsers FTW!!

3 weeks ago 8 2 1 0
TunePad TunePad is a free online platform for creating music with the Python programming language. Our step-by-step tutorials are perfect for beginners, and our advanced production tools power music making fo...

TunePad (tunepad.com)! Developed by amazing researchers at Northwestern, with an extremely friendly and intuitive UI. I'm teaching TunePad to fifth-graders at local schools, and most of them are loving it

2 months ago 2 0 1 0

one year ago i thought p-side was about predicates and s-side was about subjects :3

3 months ago 4 0 0 0

Wow!! Congratulations!!

3 months ago 1 0 0 0

Reposting because the link has expired:

PDF: drive.google.com/file/d/1t2EF... (if this doesn't work, lmk)

Publisher link: www.sciencedirect.com/science/arti...

4 months ago 10 2 0 1

so cool!!

5 months ago 1 0 1 0

New Preprint: osf.io/eq2ra

Reading feels effortless, but it's actually quite complex under the hood. Most words are easy to process, but some words make us reread or linger. It turns out that LLMs can tell us about why, but only in certain cases... (1/n)
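Not from the preprint itself, just for context: the standard way to get these numbers is per-token surprisal from an autoregressive LM. Below is a minimal sketch assuming Hugging Face transformers and off-the-shelf GPT-2; the garden-path sentence is my own illustrative example, and reading-time work then aggregates these subword surprisals up to words.

```python
# Minimal sketch: per-token surprisal from GPT-2 (illustrative, not from the preprint).
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def token_surprisals(text: str):
    """Return (token, surprisal in nats) pairs; the first token has no estimate."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits                      # [1, seq_len, vocab]
    logprobs = torch.log_softmax(logits, dim=-1)
    # Surprisal of token t is -log p(token_t | tokens_<t), so shift by one position.
    surps = -logprobs[0, :-1, :].gather(1, ids[0, 1:, None]).squeeze(1)
    tokens = tokenizer.convert_ids_to_tokens(ids[0])
    return list(zip(tokens[1:], surps.tolist()))

# A classic garden-path sentence: surprisal should spike at the disambiguating word.
for tok, s in token_surprisals("The horse raced past the barn fell."):
    print(f"{tok:>10s}  {s:6.2f}")
```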

5 months ago 12 5 2 1
Screenshot of a figure with two panels, labeled (a) and (b). The caption reads: "Figure 1: (a) Illustration of messages (left) and strings (right) in toy domain. Blue = grammatical strings. Red = ungrammatical strings. (b) Surprisal (negative log probability) assigned to toy strings by GPT-2."


New work to appear @ TACL!

Language models (LMs) are remarkably good at generating novel well-formed sentences, leading to claims that they have mastered grammar.

Yet they often assign higher probability to ungrammatical strings than to grammatical strings.

How can both things be true? 🧵👇
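If you want to poke at Figure 1b-style numbers yourself, here is a minimal sketch, assuming Hugging Face transformers and off-the-shelf GPT-2 (the strings are illustrative, not the paper's toy domain): compute each string's total surprisal, i.e. its negative log probability, and compare.

```python
# Minimal sketch: total surprisal (negative log probability) GPT-2 assigns to a string.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def total_surprisal(text: str) -> float:
    """-log p(text) in nats under the model (first token has no left context)."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(ids, labels=ids)   # out.loss = mean next-token NLL
    return out.loss.item() * (ids.shape[1] - 1)

# Illustrative pair: lower surprisal means higher probability, and the
# higher-probability string is not guaranteed to be the grammatical one.
for s in ["the dogs bark loudly", "the dogs barks loudly"]:
    print(f"{s!r}: {total_surprisal(s):.2f} nats")
```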

5 months ago 92 21 2 3
Training an NLP Scholar at a Small Liberal Arts College: A Backwards Designed Course Proposal The rapid growth in natural language processing (NLP) over the last couple years has generated student interest and excitement in learning more about the field. In this paper, we present two types of ...

I took Grusha and Forrest's version of NLP at Colgate (arxiv.org/abs/2408.05664), and as a current linguistics PhD student still doing NLP research, I can say that this is THE undergrad course that has benefited me the most

5 months ago 2 0 1 0
To model human linguistic prediction, make LLMs less superhuman When people listen to or read a sentence, they actively make predictions about upcoming words: words that are less predictable are generally read more slowly than predictable ones. The success of larg...

arxiv.org/abs/2510.05141

6 months ago 13 4 0 0
GitHub - aaronstevenwhite/glazing: Unified data models and interfaces for syntactic and semantic frame ontologies.

I've found it kind of a pain to work with resources like VerbNet, FrameNet, PropBank (frame files), and WordNet using existing tools. Maybe you have too. Here's a little package that handles data management, loading, and cross-referencing via either a CLI or a Python API.

6 months ago 27 7 3 1

Brand new version of this paper (now a short book!) available at lingbuzz.net/lingbuzz/008...!

6 months ago 7 2 1 2

favorite garden path sentence of the year: "It's better to be hurt by someone you know accidentally, than by a stranger on purpose" by Dwight Schrute

6 months ago 1 0 0 0
Language Models Identify Ambiguities and Exploit Loopholes Studying the responses of large language models (LLMs) to loopholes presents a two-fold opportunity. First, it affords us a lens through which to examine ambiguity and pragmatics in LLMs, since exploi...

arxiv.org/abs/2508.19546

7 months ago 15 6 0 0

probably? maybe you can have different policies each semester and test it out lol

7 months ago 1 0 0 0

might depend more on participation policy

7 months ago 1 0 1 0

A paper with Vic Ferreira and Norvin Richards is now out.

(1) Speakers syntactically encode zero complementizers as cognitively active mental objects.

(2) No evidence that LLMs capture cross-constructional generalizations about null complementizers.

nam10.safelinks.protection.outlook.com?url=https%3A...

8 months ago 15 7 1 1
NKC Resource Library - The National Kitten Coalition’s Guide to Help You Save More Kittens™

Work with kittens? Check out the National Kitten Coalition's new Kitten Resource Library! They're an org I like a lot!! library.kittencoalition.org

8 months ago 214 72 2 1
Collaborative Rational Speech Act: Pragmatic Reasoning for Multi-Turn Dialog As AI systems take on collaborative roles, they must reason about shared goals and beliefs, not just generate fluent language. The Rational Speech Act (RSA) framework offers a principled approach to pr...

"We introduce Collaborative Rational Speech Act (CRSA), an information-theoretic (IT) extension of RSA that models multi-turn dialog by optimizing a gain function adapted from rate-distortion theory."

arxiv.org/abs/2507.14063
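For anyone who hasn't met RSA before, here is a minimal single-turn RSA sketch in NumPy for context; the lexicon, alpha, and uniform priors are my own illustrative assumptions, and CRSA's multi-turn, rate-distortion gain function is not shown here.

```python
# Minimal single-turn RSA sketch (illustrative; CRSA's multi-turn machinery not shown).
import numpy as np

# Truth-conditional lexicon: rows = utterances, columns = meanings.
lexicon = np.array([
    [1, 1, 0],   # "some" is literally true of some-not-all and all
    [0, 1, 0],   # "all"  is literally true of all
    [0, 0, 1],   # "none" is literally true of none
], dtype=float)
utterances = ["some", "all", "none"]
meanings = ["some-not-all", "all", "none"]
alpha = 4.0  # speaker rationality

# Literal listener L0(m | u): normalize truth values over meanings (uniform prior).
L0 = lexicon / lexicon.sum(axis=1, keepdims=True)

# Pragmatic speaker S1(u | m) proportional to exp(alpha * log L0(m | u)), zero cost.
with np.errstate(divide="ignore"):
    S1 = np.exp(alpha * np.log(L0))
S1 = S1 / S1.sum(axis=0, keepdims=True)

# Pragmatic listener L1(m | u): Bayes over meanings with a uniform prior.
L1 = S1 / S1.sum(axis=1, keepdims=True)

# Scalar implicature: hearing "some" shifts belief toward some-not-all.
row = L1[utterances.index("some")]
print({m: round(p, 3) for m, p in zip(meanings, row)})
```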

9 months ago 13 5 0 0

Someone asked me today how to get better at scientific writing. I'm not the best person to ask because I find my own writing very inadequate! But the tips I thought of were:

1. Practice, and practice with co-authors who are better writers than you. Observe how they make edits and copy them.

(1/n)

9 months ago 56 11 1 1

early November is also the best season for crabs!!

11 months ago 3 0 0 0

is computational psycholinguistics a poly-sci? #puns #linguistics

1 year ago 5 0 1 0