Hear from Julia Mendelsohn at our upcoming hybrid Language and Conflict Panel, hosted by the LSC, Baha'i Chair for World Peace, and College of Behavioral & Social Sciences 🗣️ Visit go.umd.edu/lcpanel to register and learn more.
Posts by Yu (Hope) Hou
Attention NYC undergrads: Applications are open for our 13th annual Data Science Summer School at Microsoft Research NYC! Apply here by April 14th: bit.ly/3pCQENh
Come join TRAILS as a postdoc at UMD (and work with folks at GW, MSU & Cornell) to conduct research and scholarship focused on approaches to AI that advance trust and trustworthiness with a great group of colleagues!
🌐 go.umd.edu/trails-postd...
🗓️ Summer/Fall 2026 start
The next edition of the NLP+CSS workshop will be at ACL 2026! It includes an open-ended shared task (work with the Opioid Industry Documents Archive) with travel grants as prizes!
Screenshot of paper title and authors. Title: Social Story Frames: Contextual Reasoning about Narrative Intent and Reception Authors: Joel Mire, Maria Antoniak, Steven R. Wilson, Zexin Ma, Achyutarama R. Ganti, Andrew Piper, Maarten Sap
Reading social media stories evokes a wide range of contextual reader reactions—inferential, affective, evaluative—yet we lack methods to study these at scale.
Excited to share our new paper that builds a framework for analyzing storytelling practices across online communities!
I'm super excited about the 20th @wimlworkshop.bsky.social, which is taking place tomorrow in San Diego, co-located with @neuripsconf.bsky.social!!! 🎉 To celebrate, @jennwv.bsky.social and I recorded a podcast episode! Check it out here: www.microsoft.com/en-us/resear...
AIM's 2nd round of TTK (tenure-track) hiring - building up to 30 faculty - is up!
📅 Deadline: 12/22/25
🔬 Accessibility & Learning, plus Sustainability & Social Justice
🧑‍🏫 Associate/Full Prof*
🔗 umd.wd1.myworkdayjobs.com/en-US/UMCP/j...
*Assistant-level candidates: apply to departments, mentioning AIM in a cover letter
Spread the word! 📢 The FATE (Fairness, Accountability, Transparency, and Ethics) group at @msftresearch.bsky.social in NYC is hiring interns and postdocs to start in summer 2026! 🎉
Apply by *December 15* for full consideration.
The debate over “LLMs as annotators” feels familiar: excitement, backlash, and anxiety about bad science. My take in a new blogpost is that LLMs don’t break measurement; they expose how fragile it already was.
doomscrollingbabel.manoel.xyz/p/labeling-d...
Our Responsible AI team at Apple is looking for spring/summer 2026 PhD research interns! Please apply at jobs.apple.com/en-us/detail... and email rai-internship@group.apple.com. Do not send extra info (e.g., CV), just drop us a line so we can find your application in the central pool!
What should Machine Translation research look like in the age of multilingual LLMs?
Here’s one answer from researchers across NLP/MT, Translation Studies, and HCI.
"An Interdisciplinary Approach to Human-Centered Machine Translation"
arxiv.org/abs/2506.13468
A bit late to announce, but I’m excited to share that I'll be starting as an assistant professor at UMD CS @univofmaryland.bsky.social this August.
I'll be recruiting PhD students this upcoming cycle for fall 2026. (And if you're a UMD grad student, sign up for my fall seminar!)
🤔 What if you gave an LLM thousands of random human-written paragraphs and told it to write something new -- while copying 90% of its output from those texts?
🧟 You get what we call a Frankentext!
💡 Frankentexts are surprisingly coherent and tough for AI detectors to flag.
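The defining constraint - that ~90% of the output be copied verbatim from the provided paragraphs - can be checked mechanically. Here's a minimal sketch of a copy-coverage metric (my own greedy span-matching heuristic, not the authors' code): it measures the fraction of output tokens covered by verbatim spans of at least `min_span` tokens found in any source paragraph.

```python
def _has_span(src_tokens, span):
    """True if `span` occurs as a contiguous token run in `src_tokens`."""
    n = len(span)
    return any(src_tokens[k:k + n] == span for k in range(len(src_tokens) - n + 1))

def copy_coverage(output, sources, min_span=5):
    """Fraction of output tokens covered by verbatim spans (>= min_span
    tokens) appearing in any source paragraph, via a greedy longest-match scan."""
    out = output.split()
    srcs = [s.split() for s in sources]
    covered = [False] * len(out)
    i = 0
    while i < len(out):
        best = 0
        # longest verbatim span starting at position i
        for j in range(len(out), i + min_span - 1, -1):
            if any(_has_span(s, out[i:j]) for s in srcs):
                best = j - i
                break
        if best:
            covered[i:i + best] = [True] * best
            i += best
        else:
            i += 1
    return sum(covered) / max(len(out), 1)
```

Under this sketch, an output would count as a Frankentext when `copy_coverage(...) >= 0.9`; the paper's actual constraint and verification may differ.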
Book cover - Lost in Automatic Translation: Navigating Life in English in the Age of Language Technologies. By Vered Shwartz. Publisher: Cambridge University Press.
Now that 1% of my Twitter followers follow me here 😅, I should announce it here too for those of you no longer checking Twitter: my nonfiction book, "Lost in Automatic Translation", is coming out this July: lostinautomatictranslation.com. I'm very excited to share it with you!
1/ How can a monolingual English speaker 🇺🇸 decide if an automatic French translation 🇫🇷 is good enough to be shared?
Introducing ❓AskQE❓, an #LLM-based Question Generation + Answering framework that detects critical MT errors and provides actionable feedback 🗣️
#ACL2025
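As the thread describes it, the idea is to generate questions from the source and answer them against the (back-translated) MT output: when the two answers disagree, the translation likely changed the meaning. A toy sketch of that comparison logic, with the question-generation and answering models passed in as callables (all names here are hypothetical stand-ins, not the released AskQE code):

```python
def ask_qe(source, mt_backtranslation, gen_questions, answer):
    """Flag questions whose answers differ between the source text and the
    back-translated MT output; disagreements suggest critical meaning errors.

    gen_questions(text) -> list of question strings (e.g., an LLM call)
    answer(question, context) -> answer string (e.g., an LLM or QA model call)
    """
    flagged = []
    for q in gen_questions(source):
        a_src = answer(q, source)
        a_mt = answer(q, mt_backtranslation)
        # naive string comparison; real systems would match answers more robustly
        if a_src.strip().lower() != a_mt.strip().lower():
            flagged.append((q, a_src, a_mt))
    return flagged
```

The flagged (question, source answer, MT answer) triples are exactly the kind of actionable feedback a monolingual user could act on: "the source says 50, the translation says 15."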
We introduce a super simple yet effective strategy to improve video-language alignment (+18%): add hallucination correction to your training objective👌
Excited to share our accepted paper at ACL: Can Hallucination Correction Improve Video-language Alignment?
Link: arxiv.org/abs/2502.15079
Please help us spread the word! 📣
FATE is hiring a pre-doc research assistant! We're looking for candidates who will have completed their bachelor's degree (or equivalent) by summer 2025 and want to advance their research skills before applying to PhD programs.
Wisconsin-Madison's tree-filled campus, next to a big shiny lake
A computer render of the interior of the new computer science, information science, and statistics building. A staircase crosses an open atrium with visibility across multiple floors
I'm joining Wisconsin CS as an assistant professor in fall 2026!! There, I'll continue working on language models, computational social science, & responsible AI. 🌲🧀🚣🏻‍♀️ Apply to be my PhD student!
Before then, I'll postdoc for a year in the NLP group at another UW 🏔️ in the Pacific Northwest
🔈 NEW PAPER 🔈
Excited to share my paper that analyzes the effect of cross-lingual alignment on multilingual performance
Paper: arxiv.org/abs/2504.09378 🧵
🚨 New Paper 🚨
1/ We often assume that well-written text is easier to translate ✏️
But can #LLMs automatically rewrite inputs to improve machine translation? 🌍
Here’s what we found 🧵
A bit of a mess around the conflict between the COLM deadline and the ARR (and, to a lesser degree, ICML) reviews release. We feel this is creating a lot of pressure and uncertainty. So, we are pushing our deadlines:
Abstracts due March 22 AoE (+48hr)
Full papers due March 28 AoE (+24hr)
Plz RT 🙏
Nice modern NLP (AI) intro talk slides by Isabelle Augenstein: isabelleaugenstein.github.io/slides/2025_...
🚨 Our team at UMD is looking for participants to study how #LLM agent plans can help you answer complex questions
💰 $1 per question
🏆 Top-3 fastest + most accurate win $50
⏳ Questions take ~3 min => $20/hr+
Click here to sign up (please join, reposts appreciated 🙏): preferences.umiacs.umd.edu
Our FATE MTL team has been working on a series of projects on anthropomorphic AI systems, for which we recently put out a few pre-prints I'm excited about. While working on these, we tried to think carefully not only about key research questions but also about how we study and write about these systems.
New synthetic benchmark for multilingual long-context LLMs! Surprisingly, English and Chinese are not the top-performing languages (it's Polish!). We also observe a widening gap between high and low-resource languages as context size increases. Check out the paper for more 👇
🚨 New Position Paper 🚨
Multiple choice evals for LLMs are simple and popular, but we know they are awful 😬
We complain they're full of errors, saturated, and test nothing meaningful, so why do we still use them? 🫠
Here's why MCQA evals are broken, and how to fix them 🧵
⚠️Current methods for generating instruction-following data fall short for long-range reasoning tasks like narrative claim verification.
We present CLIPPER ✂️, a compression-based pipeline that produces grounded instructions for ~$0.50 each, 34x cheaper than human annotations.
Screenshot of top half of first page of paper. The paper is titled: "When People are Floods: Analyzing Dehumanizing Metaphors in Immigration Discourse with Large Language Models". The authors are Julia Mendelsohn (University of Chicago) and Ceren Budak (University of Michigan). The top right corner contains a visual showing the sentence "They want immigrants to pour into and infest this country". The caption says: Figure 1: Dehumanizing sentence likening immigrants to the source domain concepts of Water and Vermin via the words "pour" and "infest". The abstract text on the left reads: Metaphor, discussing one concept in terms of another, is abundant in politics and can shape how people understand important issues. We develop a computational approach to measure metaphorical language, focusing on immigration discourse on social media. Grounded in qualitative social science research, we identify seven concepts evoked in immigration discourse (e.g. "water" or "vermin"). We propose and evaluate a novel technique that leverages both word-level and document-level signals to measure metaphor with respect to these concepts. We then study the relationship between metaphor, political ideology, and user engagement in 400K US tweets about immigration. While conservatives tend to use dehumanizing metaphors more than liberals, this effect varies widely across concepts. Moreover, creature-related metaphor is associated with more retweets, especially for liberal authors. Our work highlights the potential for computational methods to complement qualitative approaches in understanding subtle and implicit language in political discourse.
New preprint!
Metaphors shape how people understand politics, but measuring them (& their real-world effects) is hard.
We develop a new method to measure metaphor and use it to study dehumanizing metaphors in 400K immigration tweets. Link: bit.ly/4i3PGm3
#NLP #NLProc #polisky #polcom #compsocialsci
🐦🐦
New open source reasoning model!
Huginn-3.5B reasons implicitly in latent space 🧠
Unlike O1 and R1, latent reasoning doesn’t need special chain-of-thought training data, and doesn't produce extra CoT tokens at test time.
We trained on 800B tokens 👇
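The shape of the idea - spend more test-time compute by iterating a recurrent core on a hidden state, instead of emitting chain-of-thought tokens - can be sketched in a few lines. This is only a conceptual toy under my own assumptions (single matrices standing in for the model's transformer blocks; the real Huginn architecture is in the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 16
# Hypothetical stand-ins for the recurrent core's parameters; the real
# model's prelude/core/coda are transformer layer stacks, not one matrix.
W_core = rng.normal(scale=0.1, size=(d, 2 * d))
b_core = np.zeros(d)

def core_step(state, x):
    """One latent reasoning step: refine the hidden state, conditioned on
    the (embedded) input, without producing any output tokens."""
    return np.tanh(np.concatenate([state, x]) @ W_core.T + b_core)

def latent_reason(x, num_steps):
    """More test-time compute = more recurrent steps over the same core,
    rather than a longer chain-of-thought token sequence."""
    state = np.zeros(d)
    for _ in range(num_steps):
        state = core_step(state, x)
    return state
```

In this picture, a decoder (the "coda") would read out the final `state` to produce the answer; varying `num_steps` trades compute for refinement with no extra tokens generated.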
I have learned a lot from this project! If you are interested in how NLI can be used in VLMs to complement their representations, check it out!