
Posts by Yating Wu

QUDsim: Quantifying Discourse Similarities in LLM-Generated Text As large language models become increasingly capable at various writing tasks, their weakness at generating unique and creative content becomes a major liability. Although LLMs have the ability to gen...

Check out our paper for more results and analysis!
📝 arxiv.org/abs/2504.09373
🐙 github.com/AlliteraryAl...

This was a fun collaboration with @yatingwu.bsky.social @asher-zheng.bsky.social @manyawadhwa.bsky.social @gregdnlp.bsky.social @jessyjli.bsky.social

1 year ago

Do you want to know what information LLMs prioritize in text synthesis tasks? Here's a short 🧵 about our new paper, led by Jan Trienes: an interpretable framework for salience analysis in LLMs.

First of all, information salience is a fuzzy concept. So how can we even measure it? (1/6)

1 year ago

✨New paper✨

Linguistic evaluations of LLMs often implicitly assume that language is generated by symbolic rules.
In a new position paper, @adelegoldberg.bsky.social, @kmahowald.bsky.social and I argue that languages are not Lego sets, and evaluations should reflect this!

arxiv.org/pdf/2502.13195

1 year ago

👋

1 year ago

I made a starter pack of ML/AI people at @utaustin.bsky.social. Please share, and feel free to self-nominate!

go.bsky.app/QLQznZg

1 year ago

We at UT Linguistics are hiring for 🔥 2 faculty positions in Computational Linguistics, at the Assistant or Associate Professor level. Deadline: Dec 1.
UT has a super vibrant comp ling & #nlp community!!

Apply here 👉 apply.interfolio.com/158280

1 year ago

I'll be presenting our work today from 11:15 to 11:30 AM @ Flagler. Come say hi!

1 year ago

@jessyjli.bsky.social and @kmahowald.bsky.social are awesome advisors! Please apply to join the team!

2 years ago
Abstract for a talk entitled "Complex Situation Representations".

Looking forward to talking about this work and more at UT Austin this Friday.

2 years ago

Psych theories suggest that how we judge a situation (*cognitive appraisal*) leads to diverse emotions. Our #EMNLP 2023 Findings paper tests LLMs' ability to assess and explain such appraisals -- there's a big gap between open-source LLMs and GPT-3.5.

Paper: arxiv.org/abs/2310.14389 w/ Hongli Zhan, Desmond Ong

2 years ago
BabyLM leaderboard with Lil Bevo in top 10

I'm happy to announce that 🐮Lil-Bevo🤠 is ready to see the world. It's UT Austin's submission to BabyLM with @kmahowald.bsky.social Juan Diego & Kaj Bostrom. We tried 3 strategies inspired by human learning - music, shorter sequences, and targeted pretraining. Read our paper: arxiv.org/abs/2310.17591

2 years ago

To appear at EMNLP 2023: simplifying text involves explaining and elaborating on concepts. Using QUDs in a question generation -> answering pipeline leads to much better generation of such elaborations!

arxiv.org/abs/2305.10387

w/ @yatingwu.bsky.social Will Sheffield @kmahowald.bsky.social

2 years ago

📢Our EMNLP 2023 work on Questions Under Discussion (QUD)! We introduce QUDeval, the first benchmark for evaluating the generation of open-ended questions and QUD parsing using linguistic principles.

Paper: arxiv.org/abs/2310.14520

w/ @yatingwu.bsky.social, Ritika Mangla, @gregdnlp.bsky.social

2 years ago