Check out our paper for more results and analysis!
📝 arxiv.org/abs/2504.09373
🐙 github.com/AlliteraryAl...
This was a fun collaboration with @yatingwu.bsky.social @asher-zheng.bsky.social @manyawadhwa.bsky.social @gregdnlp.bsky.social @jessyjli.bsky.social
Do you want to know what information LLMs prioritize in text synthesis tasks? Here's a short 🧵 about our new paper, led by Jan Trienes: an interpretable framework for salience analysis in LLMs.
First of all, information salience is a fuzzy concept. So how can we even measure it? (1/6)
✨New paper✨
Linguistic evaluations of LLMs often implicitly assume that language is generated by symbolic rules.
In a new position paper, @adelegoldberg.bsky.social, @kmahowald.bsky.social and I argue that languages are not Lego sets, and evaluations should reflect this!
arxiv.org/pdf/2502.13195
👋
I made a starter pack of ML/AI people at @utaustin.bsky.social. Please distribute and feel free to self-nominate!
go.bsky.app/QLQznZg
We at UT Linguistics are hiring for 🔥 2 faculty positions in Computational Linguistics! Assistant or Associate professors, deadline Dec 1.
UT has a super vibrant comp ling & #nlp community!!
Apply here 👉 apply.interfolio.com/158280
I'll be presenting our work today from 11:15 to 11:30 AM @ Flagler. Come say hi!
@jessyjli.bsky.social and @kmahowald.bsky.social are awesome advisors! Please apply to join the team!
Looking forward to talking about this work and more at UT Austin this Friday, in a talk entitled "Complex Situation Representations".
Psych theories suggest that how we judge a situation (*cognitive appraisal*) leads to diverse emotions. Our #EMNLP 2023 Findings paper tests LLMs' ability to assess and explain such appraisals -- we find a big gap between open-source LLMs and GPT-3.5.
Paper: arxiv.org/abs/2310.14389 w/ Hongli Zhan, Desmond Ong
BabyLM leaderboard with Lil-Bevo in the top 10
I'm happy to announce that 🐮Lil-Bevo🤠 is ready to see the world. It's UT Austin's submission to BabyLM, with @kmahowald.bsky.social, Juan Diego & Kaj Bostrom. We tried 3 strategies inspired by human learning: music, shorter sequences, and targeted pretraining. Read our paper: arxiv.org/abs/2310.17591
To appear at EMNLP 2023: simplifying text involves explaining and elaborating on concepts. Using QUDs in a question generation -> answering pipeline leads to much better generation of such elaborations!
arxiv.org/abs/2305.10387
w/ @yatingwu.bsky.social Will Sheffield @kmahowald.bsky.social
📢Our EMNLP 2023 work on Questions Under Discussion (QUD)! We introduce QUDeval, the first benchmark for evaluating QUD parsing, i.e., the generation of open-ended questions, using linguistic principles.
Paper: arxiv.org/abs/2310.14520
w/ @yatingwu.bsky.social, Ritika Mangla, @gregdnlp.bsky.social