AI can accelerate scientific discovery, but only if we get the scientist–AI interaction right.
The dream of “autonomous AI scientists” is tempting:
machines that generate hypotheses, run experiments, and write papers. But science isn’t just automation.
cichicago.substack.com/p/the-mirage...
🧵
Posts by Dang Nguyen
📣 Announcing our poster session at COLM 2025:
On the Effectiveness and Generalization of Race Representations for Debiasing High-Stakes Decisions
I will talk about biases in LLMs and how to mitigate them. Come say hi!
Poster #43, 4:30 PM
This game from UChicago is incredible! It might be a bit painful to play, especially for those of us who already spend too much time on email, but the concept and execution are brilliant!
Playing HR Simulator™: think I'm getting on Brittany's good side
This is what she says about my attempt to get Dave to return to in-person work.
Any big tech company wanna hire me for HR? 👀
#HRSimulator #RoastedByBrittany
Please use a VPN. We're sorry for any inconvenience!
Home-grown at CHAI and @uchicagoci.bsky.social!! The first-ever AI-driven game from academia 🎮 Give it a go and let us know your rank on the leaderboard!
Stay tuned for more on communication games! Big thanks to @ari-holtzman.bsky.social @Harvey Fu @chenhaotan.bsky.social @Peter West for making this project happen!
hrsimulator.communicationgames.ai
We’re serious! Economic coordination happens via emails. How do humans fare against AIs in getting things done with words?
We see a genre co-emerging with LLMs: communication games, where communication is crucial and not just “cheap talk” like Mafia or Diplomacy.
HR Simulator™: a game where you gaslight, deflect, and “let’s circle back” your way to victory.
Every email a boss fight, every “per my last message” a critical hit… or maybe you just overplayed your hand 🫠
Can you earn Enlightened Bureaucrat status?
(link below!)
Prompting is our most successful tool for exploring LLMs, but the term evokes eye-rolls and grimaces from scientists. Why? Because prompting as scientific inquiry has become conflated with prompt engineering.
This is holding us back. 🧵and new paper with @ari-holtzman.bsky.social .
When you walk into the ER, you could get a doc:
1. Fresh from a week of not working
2. Tired from working too many shifts
@oziadias.bsky.social has been both and thinks that they're different! But can you tell from their notes? Yes we can! Paper @natcomms.nature.com www.nature.com/articles/s41...
@chachachen.bsky.social @haokunliu.bsky.social @divingwithorcas.bsky.social present posters on human-AI decision making, hypothesis generation, interpretability and fairness at MMLS 2025!
Since @elenal3ai.bsky.social cannot make it, I presented the poster on concept incongruence: arxiv.org/abs/2505.14905
🚨 New paper alert 🚨
Ever asked an LLM-as-Marilyn Monroe who the US president was in 2000? 🤔 Should the LLM answer at all? We call these clashes Concept Incongruence. Read on! ⬇️
1/n 🧵
1/n 🚀🚀🚀 Thrilled to share our latest work🔥: HypoEval - Hypothesis-Guided Evaluation for Natural Language Generation! 🧠💬📊
There’s a lot of excitement around using LLMs for automated evaluation, but many methods fall short on alignment or explainability — let’s dive in! 🌊
🧑‍⚖️ How well can LLMs summarize complex legal documents? And can we use LLMs to evaluate?
Excited to be in Albuquerque presenting our paper this afternoon at @naaclmeeting 2025!
🚀🚀🚀Excited to share our latest work: HypoBench, a systematic benchmark for evaluating LLM-based hypothesis generation methods!
There is much excitement about leveraging LLMs for scientific hypothesis generation, but principled evaluations are missing - let’s dive into HypoBench together.
The Midwest Machine Learning Symposium will happen in Chicago on June 23–24 on the University of Chicago campus (midwest-ml.org/2025/). We have an amazing lineup of speakers: @profsanjeevarora.bsky.social from Princeton, Heng Ji from UIUC, Tuomas Sandholm from CMU, @ravenben.bsky.social from UChicago.
Encourage your students to submit posters and register! Limited free housing is provided for student participants only, on a first-come (i.e., first-to-request), first-served basis.
We are also actively looking for sponsors. Reach out if you are interested!
Please repost! Help spread the word!
12/n
Big thanks to @chenhaotan.bsky.social for advice on the project, as well as helpful feedback from the wonderful members of the @chicagohai.bsky.social lab! Check out our code at github.com/ChicagoHAI/l....
DM me for any questions!
11/n
Strangely, changing the prompt can change how a model represents race. In some cases, then, the model’s representation may be sensitive to spurious prompt features, which poses a challenge to the generalizability of debiasing methods. Future work on debiasing should take this into account.
10/n
We found the race subspace generalizes cross-family (from admissions to hiring) and, to a lesser extent, cross-explicitness (from implicit race via name to explicit race), but it fails to generalize cross-prompt (from one prompt template to another).
9/n
So we were able to debias via interventions on the race subspaces, but do they generalize? Here, the story gets more complicated.
8/n
Race Averaging can reduce Gemma’s bias by 37-57% in admissions and hiring. Projecting out the race subspace is similarly effective.
We find more mixed results for LLaMA, where our methods reduce the bias by 33% in admissions, but fail to work in hiring.
7/n
With the race subspaces, we debias models’ decisions in two ways:
1. Race Averaging: we average the subspace representation across different races (see illustration).
2. Race Projection: we project out the race subspace altogether.
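To make the two interventions concrete, here is a minimal numpy sketch. The function names and shapes are mine, not the paper's; it assumes an orthonormal basis U for the race subspace and pre-computed per-race subspace coordinates:

```python
import numpy as np

def race_projection(h, U):
    """Project the race subspace out of a hidden state.
    h: hidden state, shape (d,); U: orthonormal basis, shape (d, k)."""
    return h - U @ (U.T @ h)

def race_averaging(h, U, race_coords):
    """Replace the subspace coordinates with their mean across races.
    race_coords: per-race subspace coordinates, each shape (k,)."""
    avg = np.mean(race_coords, axis=0)      # average coordinates over races
    return h - U @ (U.T @ h) + U @ avg      # swap in the averaged component

# toy example with a random orthonormal basis
rng = np.random.default_rng(0)
d, k = 8, 2
U, _ = np.linalg.qr(rng.normal(size=(d, k)))  # reduced QR: U has shape (d, k)
h = rng.normal(size=d)
h_debiased = race_projection(h, U)
assert np.allclose(U.T @ h_debiased, 0)       # no race component remains
```

The rest of the representation is untouched, which is why these edits can debias decisions without wrecking the model's overall behavior.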
6/n
Turning away from prompt engineering, we used Distributed Alignment Search to find subspaces in model representations that encode an applicant’s race.
We found strong race representations at the last prompt token, in layers 10–12 for Gemma and layers 24–26 for LLaMA.
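The core operation behind DAS-style analyses is the interchange intervention: swap the subspace component of one example's hidden state for another's and check whether the model's decision follows the swap. A hedged sketch (names and shapes are my own illustration, not the paper's code):

```python
import numpy as np

def interchange_intervention(h_base, h_source, U):
    """Swap the subspace component of h_base for that of h_source.
    If U spans a race subspace, the patched state should carry the
    source example's race information but the base's everything else."""
    base_coords = U.T @ h_base
    source_coords = U.T @ h_source
    return h_base + U @ (source_coords - base_coords)

rng = np.random.default_rng(1)
d, k = 8, 2
U, _ = np.linalg.qr(rng.normal(size=(d, k)))
h_base, h_source = rng.normal(size=d), rng.normal(size=d)
patched = interchange_intervention(h_base, h_source, U)
# patched has h_source's coordinates inside the subspace...
assert np.allclose(U.T @ patched, U.T @ h_source)
```

DAS learns the basis U so that this swap actually flips the model's race-dependent behavior; the snippet above only shows the intervention itself.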
5/n
Despite LLMs’ instruction-following ability, we found that multiple prompting strategies all fail to promote fairness. Prompts either fail to reduce our Bias Score metric, or drastically alter the average acceptance rate.
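For intuition, a bias metric in this setting can be as simple as the gap in acceptance rates across race groups. This is only an illustrative stand-in I'm sketching here, not the paper's exact Bias Score definition:

```python
def acceptance_rate_gap(decisions):
    """decisions: dict mapping race group -> list of 0/1 accept decisions.
    Returns the max-minus-min acceptance rate across groups (0 = parity)."""
    rates = {g: sum(d) / len(d) for g, d in decisions.items()}
    return max(rates.values()) - min(rates.values())

gap = acceptance_rate_gap({
    "group_a": [1, 1, 0, 1],  # 0.75 acceptance rate
    "group_b": [1, 0, 0, 1],  # 0.50 acceptance rate
})
assert abs(gap - 0.25) < 1e-9
```

A fairness prompt can shrink such a gap trivially by pushing all acceptance rates toward 0 or 1, which is why tracking the average acceptance rate alongside the gap matters.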