Paper: arxiv.org/pdf/2505.11080
Code: github.com/lilakk/BLEUB... (coming soon)
Work done with the amazing @yekyung.bsky.social from UMD, Michael Krumdick from Kensho, Amir Zadeh and Chuan Li from LambdaAI,
@chriswtanner.bsky.social from Kensho, and @miyyer.bsky.social from UMD
Posts by Yapei Chang
Beyond benchmarks, human annotators rate BLEUBERI outputs as comparable to those from GRPO-RM models.
BLEUBERI models also produce more factually grounded outputs, as measured by VeriScore on three diverse datasets. VeriScore extracts verifiable claims from responses and checks each one against Google Search.
The surprising effectiveness of BLEU extends to training. BLEUBERI first selects 5K low-BLEU examples, then trains LLMs with GRPO using BLEU as the reward. BLEUBERI models are competitive with those trained via GRPO-RM (8B reward model) and SFT across 4 benchmarks.
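A rough sketch of what a BLEU-as-reward function could look like (the `completions`/`references` signature follows the shape TRL-style GRPO trainers expect for reward callables — an assumption, not the BLEUBERI code; the inner scorer is clipped unigram precision as a crude stand-in for full BLEU):

```python
from collections import Counter

def _clipped_precision(hyp: str, refs: list[str]) -> float:
    # Clip each token's count by its max count across references.
    toks = Counter(hyp.split())
    max_ref = Counter()
    for r in refs:
        for t, c in Counter(r.split()).items():
            max_ref[t] = max(max_ref[t], c)
    hits = sum(min(c, max_ref[t]) for t, c in toks.items())
    return hits / max(sum(toks.values()), 1)

def bleu_reward(completions: list[str], references: list[list[str]], **kwargs) -> list[float]:
    # One scalar reward per completion, scored against that example's refs.
    return [_clipped_precision(c, r) for c, r in zip(completions, references)]
```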
When BLEU agrees with humans on a pair of model outputs, what n-grams contribute to this decision? Below is an example where it captures both format (the “Ukrainian” and “English” headers) and factuality (the number 6.1).
BLEU is often dismissed for weak human correlation in generation tasks. But on general instruction following, using BLEU to rank pairs of Chatbot Arena outputs—scored against references from strong LLMs—matches 8B & 27B reward models in human agreement, especially with more refs.
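The pairwise check can be sketched as follows (toy data; `unigram_precision` is a crude stand-in for full BLEU, and the pairs, references, and human labels here are invented for illustration):

```python
from collections import Counter

def unigram_precision(hyp: str, refs: list[str]) -> float:
    # Stand-in string-match score: clipped unigram precision against
    # the max count of each token across the references.
    toks = Counter(hyp.split())
    max_ref = Counter()
    for r in refs:
        for t, c in Counter(r.split()).items():
            max_ref[t] = max(max_ref[t], c)
    hits = sum(min(c, max_ref[t]) for t, c in toks.items())
    return hits / max(sum(toks.values()), 1)

def agreement_rate(pairs, refs_list, human_prefs, score=unigram_precision):
    # pairs[i] = (output_a, output_b); human_prefs[i] is "a" or "b".
    # Agreement = fraction of pairs where the metric picks the same
    # winner as the human annotator.
    agree = 0
    for (a, b), refs, pref in zip(pairs, refs_list, human_prefs):
        metric_pref = "a" if score(a, refs) >= score(b, refs) else "b"
        agree += metric_pref == pref
    return agree / len(pairs)
```

With more references per prompt, the clipping step has more chances to credit a valid phrasing, which is one intuition for why agreement improves with more refs.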
BLEU is widely used for machine translation (MT) eval. Given a reference and a generation, it computes modified n-gram precision (1–4 grams) and applies a brevity penalty to penalize short outputs. If given multiple references, it takes the max match per n-gram.
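In code, that recipe (modified 1–4-gram precision, counts clipped by the max across references, plus a brevity penalty) looks roughly like this — a minimal sketch, not the standard sacrebleu implementation (no tokenization or standard smoothing):

```python
import math
from collections import Counter

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(hypothesis: str, references: list[str], max_n: int = 4) -> float:
    hyp = hypothesis.split()
    refs = [r.split() for r in references]
    log_prec = 0.0
    for n in range(1, max_n + 1):
        hyp_counts = ngrams(hyp, n)
        # Multi-reference clipping: each n-gram is credited up to its
        # max count in any single reference.
        max_ref = Counter()
        for ref in refs:
            for g, c in ngrams(ref, n).items():
                max_ref[g] = max(max_ref[g], c)
        clipped = sum(min(c, max_ref[g]) for g, c in hyp_counts.items())
        total = max(sum(hyp_counts.values()), 1)
        # Tiny epsilon avoids log(0) on short hypotheses.
        log_prec += math.log((clipped + 1e-9) / total) / max_n
    # Brevity penalty against the closest reference length.
    ref_len = min((len(r) for r in refs), key=lambda l: (abs(l - len(hyp)), l))
    bp = 1.0 if len(hyp) > ref_len else math.exp(1 - ref_len / max(len(hyp), 1))
    return bp * math.exp(log_prec)
```

sacrebleu's `sentence_bleu` is the standard implementation; this sketch just mirrors the mechanics described above.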
🤔 Can simple string-matching metrics like BLEU rival reward models for LLM alignment?
🔍 We show that given access to a reference, BLEU can match reward models in human preference agreement, and even train LLMs competitively with them using GRPO.
🫐 Introducing BLEUBERI:
🕵️♀️ agents are strong on many tasks, but are they good at interacting with the web? 🧸our BEARCUBS benchmark shows that they struggle on interactive tasks that seem trivial to humans! 📄 check out the paper for how to build robust evaluations & directions for future agent research
Is the needle-in-a-haystack test still meaningful given the giant green heatmaps in modern LLM papers?
We create ONERULER 💍, a multilingual long-context benchmark that allows for nonexistent needles. Turns out NIAH isn't so easy after all!
Our analysis across 26 languages 🧵👇
current models struggle with complex long-range reasoning tasks 📚 how can we reliably create synthetic training data?
💽 check out CLIPPER, a pipeline that generates data conditioning on compressed forms of long input documents!
People often claim they know when ChatGPT wrote something, but are they as accurate as they think?
Turns out that while the general population is unreliable, those who frequently use ChatGPT for writing tasks can spot even "humanized" AI-generated text with near-perfect accuracy 🎯
Great blog post (by a 15-author team!) on their release of ModernBERT, the continuing relevance of encoder-only models, and how they relate to, say, GPT-4/llama. Accessible enough that I might use this as an undergrad reading.
i've been using this one: repo2txt.simplebasedomain.com it also lets you filter by file type and supports private/local repos
🚨I too am on the job market‼️🤯
I'm searching for faculty positions/postdocs in multilingual/multicultural NLP, vision+language models, and eval for genAI!
I'll be at #NeurIPS2024 presenting our work on meta-evaluation for text-to-image faithfulness! Let's chat there!
Papers in🧵, see more: saxon.me
😵 fish washed up on the shore of walden pond
🐠 what monday feels like..
private closed-source evals are the future 🫣
www.youtube.com/watch?v=afQT...
arxiv-utils Chrome web store
i knew something like this had to exist but why did i only discover it now?? no more suffering from looking at my 10+ open arxiv tabs not knowing which one is which...
🙋🏻♀️
I noticed a lot of starter packs skewed towards faculty/industry, so I made one of just NLP & ML students: go.bsky.app/vju2ux
Students do different research, go on the job market, and recruit other students. Ping me and I'll add you!
i also got 10/10! the ones that rhyme too well feel very AI to me..
such a creative way of using long-context models! this sounds like a super hard evaluation task, but gemini is already so good at it...
A plot showing that reranking improves recall as the number of reranked docs increases, but with diminishing returns and eventually a performance dip.
Mat is not on 🦋—posting on his behalf!
It's time to revisit common assumptions in IR! Embeddings have improved drastically, but mainstream IR evals have stagnated since MSMARCO + BEIR.
We ask: on private or tricky IR tasks, are rerankers better? Surely, reranking many docs is best?
llms are now training humans with data from their distribution
The soul-searching journey for figuring out what research area is right for you is tricky since so many papers are cool. I tell my early career students that they should try to differentiate papers that they'd like to read 📖, implement 🔨, *and* write 📝 from papers that they'd only like to read 📖.
airbnb >>> hotel for conferences #EMNLP2024