Nice research! You may be interested in the small-scale ($4 budget) verification performed by my personal Opus agent here: muninn.austegard.com/blog/this-tr... in which we also introduced a framing-resistant prompt to see how much that would mitigate the effects. 1/3
Posts by Jessy Li
Haha good point! Yes — here we do contrast with variations of just the same prompt — that's actually our baseline condition!
If you ask the same question with different framing/phrasing, do language models change their answers? This is super important in medicine because different info can have real consequences! Check out this new work from @hyesunyun.bsky.social
Heading to #EACL2026! 🇲🇦
Friday 11a Poster Session 6: LMs struggle to perform inferences around discourse connectives aclanthology.org/2026.eacl-lo...
Sunday 5p TeachingNLP workshop: new course on discourse+generation aclanthology.org/2026.teachin...
w/ @kanishka.bsky.social + Daniel Brubaker
Want to know how well the models can brainstorm connections across different concepts? Super excited about @manyawadhwa.bsky.social’s work on measuring associative creativity!
Title section of the paper: “Cross-Modal Taxonomic Generalization in (Vision) Language Models” by Tianyang Xu, Marcelo Sandoval-Castañeda, Karen Livescu, Greg Shakhnarovich, Kanishka Misra.
What is the interplay between representations learned from (language) surface forms alone, and those learned from more grounded evidence (e.g., vision)?
Excited to share new work understanding “Cross-modal taxonomic generalization” in (V)LMs
arxiv.org/abs/2603.07474
1/
Check out our special theme: new missions for NLP research!
Title card of our paper: "Which course? Discourse! Teaching Discourse and Generation in the Era of LLMs" by Junyi Jessy Li, Yang Janet Liu, Valentina Pyatkin, and William Sheffield.
Nearly 2 years ago, @jessyjli.bsky.social, @janetlauyeung.bsky.social, @valentinapy.bsky.social, and I decided that it's time to bring discourse structure to the center of NLP teaching.
Check out @asher-zheng.bsky.social's work on quantifying strategic language in dialogue, just appeared in the Dialogue and Discourse journal.
We study non-cooperative moves that are subtle to capture and that modern AI still has trouble comprehending.
Work w/ David Beaver
Title page of our paper: "Bears, all bears, and some bears. Language Constraints on Language Models' Inductive Inferences"
“All bears have a property”, “Some bears have a property”, “Bears have a property” are different in terms of how the property is generalized to a specific bear – a great example of how language constrains thought!
This holds for kids, adults, and according to our new work, (V)LMs! 🧵
🚨Be careful with LLMs when you ask health related questions -- even when the model relies on "evidence"! Kaijie's paper reveals a key weakness and the tricky balance between safety and faithfulness 👉
Accepted at EACL - excited about Morocco!
Screenshot of a figure with two panels, labeled (a) and (b). The caption reads: "Figure 1: (a) Illustration of messages (left) and strings (right) in toy domain. Blue = grammatical strings. Red = ungrammatical strings. (b) Surprisal (negative log probability) assigned to toy strings by GPT-2."
New work to appear @ TACL!
Language models (LMs) are remarkably good at generating novel well-formed sentences, leading to claims that they have mastered grammar.
Yet they often assign higher probability to ungrammatical strings than to grammatical strings.
How can both things be true? 🧵👇
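A quick way to see what "assigning probability to strings" means here: surprisal is just the negative log probability a model gives a string. The sketch below is a made-up toy (a tiny hand-set bigram model, not the paper's setup or GPT-2) showing how a scrambled string can get higher total surprisal than a well-formed one:

```python
import math

# Toy illustration only: a tiny bigram model over a two-word vocabulary.
# The probabilities are invented for this example.
bigram_p = {
    ("<s>", "the"): 0.9, ("<s>", "cat"): 0.1,
    ("the", "cat"): 0.8, ("the", "the"): 0.2,
    ("cat", "the"): 0.3, ("cat", "cat"): 0.7,
}

def surprisal(tokens):
    """Total surprisal (negative log probability, in nats) of a sequence."""
    total = 0.0
    prev = "<s>"
    for tok in tokens:
        total += -math.log(bigram_p[(prev, tok)])
        prev = tok
    return total

# The grammatical order is less surprising than the scrambled one.
assert surprisal(["the", "cat"]) < surprisal(["cat", "the"])
```

In practice one would compute the same quantity from an actual LM's token probabilities, as in the paper's Figure 1(b) with GPT-2.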
Incredibly honored to serve as #EMNLP 2026 Program Chair along with @sunipadev.bsky.social and Hung-yi Lee, and General Chair @andre-t-martins.bsky.social. Looking forward to Budapest!!
(With thanks to Lisa Chuyuan Li who took this photo in Suzhou!)
Delighted Sasha's (first-year PhD!) work using mech interp to study complex syntax constructions won an Outstanding Paper Award at EMNLP!
Also delighted the ACL community continues to recognize unabashedly linguistic topics like filler-gaps... and the huge potential for LMs to inform such topics!
Think your LLMs “understand” words like although/but/therefore? Think again!
They perform at chance for making inferences from certain discourse connectives expressing concession
Test your models and see if they just memorize or truly understand!
PLSemanticsBench - where formal meets informal!
arxiv.org/abs/2510.03415
Team: Aditya Thimmaiah, Jiyang Zhang, Jayanth Srinivasa, Milos Gligoric
So what's really happening⁉️
LLMs aren't interpreting rules -- they're recalling patterns.
Their "understanding" is promising... but shallow.
💡It's time to test semantics, not just syntax.💡
To move from surface-level memorization → true symbolic reasoning.
Change the rules -- swap (+ with -) or replace (+ with novel symbols) operators -- and accuracy collapses.
Models that were "near-perfect" drop to single digits. 😬
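To make the "swap the operators" idea concrete, here is a hypothetical mini-example (not from PLSemanticsBench itself): an interpreter where `+` is redefined as subtraction and `-` as addition. A model that recalls standard semantics would answer 8 for `5 + 3`, but under the redefined rules the answer is 2.

```python
# Hypothetical sketch of the swapped-operator setup: the surface syntax
# stays the same, but the semantics of '+' and '-' are exchanged.
def eval_swapped(a, op, b):
    ops = {
        "+": lambda x, y: x - y,  # '+' now means subtraction
        "-": lambda x, y: x + y,  # '-' now means addition
    }
    return ops[op](a, b)

assert eval_swapped(5, "+", 3) == 2  # memorized semantics would say 8
assert eval_swapped(5, "-", 3) == 8
```

Evaluating models on programs under rules like these separates following the stated semantics from recalling the usual ones.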
🚨 Does your LLM really understand code -- or is it just really good at remembering it?
We built **PLSemanticsBench** to find out.
The results: a wild mix.
✅The Brilliant:
Top reasoning models can execute complex, fuzzer-generated programs -- even with 5+ levels of nested loops! 🤯
❌The Brittle: 🧵
Find my students and collaborators at COLM this week!
Tuesday morning: @juand-r.bsky.social and @ramyanamuduri.bsky.social's papers (find them if you missed them!)
Wednesday pm: @manyawadhwa.bsky.social 's EvalAgent
Thursday am: @anirudhkhatry.bsky.social 's CRUST-Bench oral spotlight + poster
We’re hiring faculty as well! Happy to talk about it at COLM!
Can we quantify what makes some text read like AI "slop"? We tried 👇
I’m at #COLM2025 from Wed with:
@siyuansong.bsky.social Tue am introspection arxiv.org/abs/2503.07513
@qyao.bsky.social Wed am controlled rearing: arxiv.org/abs/2503.20850
@sashaboguraev.bsky.social INTERPLAY ling interp: arxiv.org/abs/2505.16002
I’ll talk at INTERPLAY too. Come say hi!
On my way to #COLM2025 🍁
Check out jessyli.com/colm2025
QUDsim: Discourse templates in LLM stories arxiv.org/abs/2504.09373
EvalAgent: retrieval-based eval targeting implicit criteria arxiv.org/abs/2504.15219
RoboInstruct: code generation for robotics with simulators arxiv.org/abs/2405.20179
Traveling to my first @colmweb.org🍁
Not presenting anything but here are two posters you should visit:
1. @qyao.bsky.social on Controlled rearing for direct and indirect evidence for datives (w/ me, @weissweiler.bsky.social and @kmahowald.bsky.social), W morning
Paper: arxiv.org/abs/2503.20850
Here is a genuine one :) CosmicAI’s AstroVisBench, to appear at #NeurIPS
bsky.app/profile/nsfs...
All of us (@kanishka.bsky.social @kmahowald.bsky.social and me) are looking for PhD students this cycle! If computational linguistics/NLP is your passion, join us at UT Austin!
For my areas see jessyli.com
Can AI aid scientists amidst their own workflows, when there is no step-by-step recipe and the kinds of scientific utility a visualization would bring may not be known in advance?
Check out @sebajoe.bsky.social’s feature on ✨AstroVisBench:
📣 NEW HCTS course developed in collaboration with @tephi-tx.bsky.social: AI in Health Communication 📣
Explore responsible applications and best practices for maximizing impact and building trust with @utaustin.bsky.social experts @jessyjli.bsky.social & @mackert.bsky.social.
💻: rebrand.ly/HCTS_AI