
Posts by Haoran Zhao

thanks, Robert!!

5 months ago 1 0 0 0

Had an amazing experience at EMNLP 2025 @emnlpmeeting.bsky.social. Glad to present my work in an oral session and honored to win the "SAC Highlight" award. Feel free to check the work below. Big thanks to my amazing advisor @rdhawkins.bsky.social!

5 months ago 7 0 0 1
Post image

🎀 "Your #CogSci presentation was quite good this year."

How flattered (or offended) would you be? The answer may depend on whether you speak British or American English 🇺🇸🇬🇧. Our new #CogSci2025 paper reveals systematic differences in how different cultures interpret the same words.

9 months ago 73 16 4 4
Preview: Comparing human and LLM politeness strategies in free production
Polite speech poses a fundamental alignment challenge for large language models (LLMs). Humans deploy a rich repertoire of linguistic strategies to balance informational and social goals -- from posit...

Work done with the amazing @rdhawkins.bsky.social. Looking forward to presenting this work at CogSci 2025 this summer. Check out our full paper at: arxiv.org/abs/2506.09391

10 months ago 6 1 0 0

In summary, we find that while LLMs have impressive politeness capabilities, their systematic preference for distancing strategies reveals important gaps in pragmatic alignment. Future work should explore how to better balance positive and negative politeness strategies. 🎯

10 months ago 1 0 1 0

As we deploy LLMs in social contexts, we need to think beyond whether they CAN be polite to HOW they're polite. Training that emphasizes "harmlessness" may inadvertently create systems that are pragmatically misaligned with human communication patterns.

10 months ago 0 0 1 0

Why does this matter? Despite good agreement on multiple-choice tasks, subtle misalignments in open-ended polite language production could lead to real communication breakdowns: hedged positive feedback might be interpreted as more negative than the system intends.

10 months ago 0 0 1 0
Post image

Result #3: LLMs systematically overuse negative politeness strategies in positive contexts! While humans shift to rapport-building language for good performances, LLMs keep hedging and distancing.

10 months ago 0 0 1 0

So LLMs are "better" at politeness? Not necessarily. A deeper dive reveals a crucial difference. Politeness theory distinguishes between positive strategies, which build rapport ("I love your creativity here!"), and negative strategies, which minimize imposition ("I'm somewhat concerned...").

10 months ago 0 0 1 0
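The positive/negative distinction above can be illustrated with a toy keyword tagger. This is a rough sketch for intuition only: the marker lists and the `dominant_strategy` function are invented for illustration and are not the paper's actual annotation scheme or classifier.

```python
# Crude illustration of positive vs. negative politeness strategies.
# Marker lists are invented examples, not the paper's coding scheme.

POSITIVE_MARKERS = ["love", "great", "awesome", "really nice"]      # rapport-building
NEGATIVE_MARKERS = ["somewhat", "a bit", "i'm concerned", "might"]  # hedging/distancing

def dominant_strategy(utterance: str) -> str:
    """Label an utterance by which marker type it contains more of."""
    text = utterance.lower()
    pos = sum(m in text for m in POSITIVE_MARKERS)
    neg = sum(m in text for m in NEGATIVE_MARKERS)
    if pos > neg:
        return "positive"
    if neg > pos:
        return "negative"
    return "neutral"

print(dominant_strategy("I love your creativity here!"))      # positive
print(dominant_strategy("I'm somewhat concerned about it."))  # negative
```

A real analysis would use trained annotators or a strategy classifier rather than keyword matching, but the sketch shows why the same "polite" response can land very differently depending on which strategy family it draws from.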
Post image

Result #2: When we removed constraints and let both humans and LLMs freely generate responses, human evaluators actually PREFERRED the LLM responses 66% of the time! 🀯 This held across all communicative goals!

10 months ago 0 0 1 0

But real conversations aren't multiple-choice questions! So we ran a bigger test: what happens when humans and LLMs can say anything they want? We collected XXX+ responses for the same scenarios, manipulating the speaker's goal (being informative, kind, or both).

10 months ago 0 0 1 0
Post image

Result #1: Models ≥70B successfully replicated human patterns (like using 'wasn't terrible' for a bad performance). Smaller models could barely do the task at all. 📊

10 months ago 0 0 1 0

We started by asking how LLMs compare to human politeness preferences in a simple task introduced by Yoon et al. (2020). If a friend gives a bad performance (0/3 ❤️), what would you say to them? Humans use negation to soften the blow: 'it wasn't terrible' is preferred over 'it was bad.'

10 months ago 0 0 1 0
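The forced-choice setup described above can be sketched roughly like this. This is illustrative only: the scenario wording, option format, and `build_forced_choice_prompt` helper are my own stand-ins, not the paper's or Yoon et al.'s actual materials.

```python
# Illustrative sketch of a forced-choice politeness prompt.
# Scenario wording and option labels are hypothetical, not the paper's stimuli.

def build_forced_choice_prompt(performance: str, options: list) -> str:
    """Build a prompt asking which remark the speaker would make."""
    lines = [
        f"Your friend gave a performance you would rate {performance}.",
        "Which of the following would you say to them?",
    ]
    lines += [f"({chr(65 + i)}) {opt}" for i, opt in enumerate(options)]
    return "\n".join(lines)

prompt = build_forced_choice_prompt(
    "0 out of 3 hearts",
    ["It was bad.", "It wasn't terrible."],  # direct statement vs. negated softener
)
print(prompt)
```

Comparing the model's choice probabilities over options like these against human preferences is the kind of comparison the multiple-choice task enables; the free-production study described later in the thread removes this constraint entirely.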

LLMs are increasingly deployed in sensitive social contexts like education, healthcare, and customer service. If they're systematically different in HOW they're polite, it could lead to misunderstandings.

10 months ago 0 0 1 0

Suppose your friend asks 'How was my cooking?' and it was... not great. 😬 Speakers use complex politeness strategies to navigate tricky situations like these. But what do LLMs do? We're excited to share new work revealing surprising similarities and differences in human and LLM politeness usage 🧵.

10 months ago 7 2 1 2