
Posts by Nishant Balepur

πŸ˜‚

6 months ago 0 0 0 0

πŸŽ‰πŸŽ‰ Excited to have two papers accepted to #ACL2025!

Our first paper designs a preference training method to boost LLM personalization 🎨
While the second outlines our position on why MCQA evals are terrible and how to make them better πŸ™

Grateful for amazing collaborators!

11 months ago 6 0 0 0
Preview: "Information-Guided Identification of Training Data Imprint in (Proprietary) Large Language Models"
High-quality training data has proven crucial for developing performant large language models (LLMs). However, commercial LLM providers disclose few, if any, details about the data used for training. ...

Want to know what training data has been memorized by models like GPT-4?

We propose information-guided probes, a method to uncover memorization evidence in *completely black-box* models, without requiring access to:
πŸ™…β€β™€οΈ Model weights
πŸ™…β€β™€οΈ Training data
πŸ™…β€β™€οΈ Token probabilities 🧡 (1/5)

1 year ago 97 27 4 8
Graph showing that simple text completion models more accurately imitate the unrhymed form of C20 verse, whereas instruction-tuned models lapse into rhyme more often. 

Caption to graph: Given the first 5 lines of 10-20 line poems from poets born in each century, 1600-2000, LLMs are prompted to "complete" the poem. Rhyme is measured by exact phoneme match in the rime of the final syllable (or syllables, if final syllable unstressed). Poems randomly sampled from Chadwyck-Healey poetry collections, with 600 poems for each model for each century. Results shown for actual poems as well as the LLM imitations. Poems "memorized" by the model are excluded.


Finally may have figured out why LLMs rhyme so compulsively: instruction-tuning. Training an LLM to respond "helpfully" to user queries may push models into more "pleasing" aesthetic forms.
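The rhyme metric described in the caption (exact phoneme match in the rime of the final syllable, extending backward when that syllable is unstressed) can be sketched roughly as follows. This is a minimal illustration, not the authors' code: the tiny ARPAbet `LEXICON` below is a hypothetical stand-in for a full pronunciation dictionary such as CMUdict.

```python
# Sketch of the rhyme check: two line-final words rhyme if their phonemes
# from the last stressed vowel to the end of the word match exactly.
# LEXICON is a hypothetical mini pronunciation dictionary in ARPAbet,
# where vowels carry stress digits (1 = primary, 2 = secondary, 0 = none).

LEXICON = {
    "trees":   ["T", "R", "IY1", "Z"],
    "breeze":  ["B", "R", "IY1", "Z"],
    "garden":  ["G", "AA1", "R", "D", "AH0", "N"],
    "pardon":  ["P", "AA1", "R", "D", "AH0", "N"],
    "night":   ["N", "AY1", "T"],
    "morning": ["M", "AO1", "R", "N", "IH0", "NG"],
}

def rime(phonemes):
    """Phonemes from the last stressed vowel onward; when the final
    syllable is unstressed this spans multiple syllables, as in the caption."""
    last_stressed = None
    for i, p in enumerate(phonemes):
        if p[-1] in "12":  # stressed vowel phoneme
            last_stressed = i
    if last_stressed is None:  # no stressed vowel: fall back to the last vowel
        last_stressed = max(i for i, p in enumerate(phonemes) if p[-1].isdigit())
    # Drop stress digits so matching compares the phonemes themselves.
    return [p.rstrip("012") for p in phonemes[last_stressed:]]

def rhymes(word_a, word_b):
    """True if the two words' rimes match exactly."""
    return rime(LEXICON[word_a]) == rime(LEXICON[word_b])
```

Under these assumptions, "garden"/"pardon" rhyme (matching rime AA R D AH N), while "night"/"morning" do not.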

1 year ago 29 8 3 3

Had a great time presenting my research on building more helpful QA systems @imperialcollegeldn.bsky.social! Thank you @joestacey.bsky.social for letting me invite myself 🫢

And loved visiting London+Edinburgh this week, hope to be back soon! πŸ™

1 year ago 6 1 0 1

🚨 Our team at UMD is looking for participants to study how #LLM agent plans can help you answer complex questions

πŸ’° $1 per question
πŸ† Top-3 fastest + most accurate win $50
⏳ Questions take ~3 min => $20/hr+

Click here to sign up (please join, reposts appreciated πŸ™): preferences.umiacs.umd.edu

1 year ago 2 3 0 0

🚨 New Position Paper 🚨

Multiple choice evals for LLMs are simple and popular, but we know they are awful 😬

We complain they're full of errors, saturated, and test nothing meaningful, so why do we still use them? 🫠

Here's why MCQA evals are broken, and how to fix them 🧡

1 year ago 46 13 2 0

if it is truly helpful, honest, and harmless, yes πŸ™

1 year ago 1 0 0 0

The alignment is a system prompt saying "if the user asks X, do Y" 😝

1 year ago 0 0 1 0

⚠️Current methods for generating instruction-following data fall short for long-range reasoning tasks like narrative claim verification.

We present CLIPPER βœ‚οΈ, a compression-based pipeline that produces grounded instructions for ~$0.50 each, 34x cheaper than human annotation.

1 year ago 21 8 1 2

And huge thanks to my friends and labmates who let me bother them to find the right people, review the paper, and have useful discussions πŸ™
@saxon.me @lasha.bsky.social @yysung.bsky.social @maharshigor.bsky.social @matthewshu.com @houyu0930.bsky.social

(and many more I'm forgetting, sorry!)

1 year ago 3 0 0 0

This was a really fun paper to put together with Rachel and @boydgraber.bsky.social, letting me vent many of my frustrations from working with MCQA over the past year πŸ˜ͺ🫑

Please check out the paper, we would love to hear your feedback! πŸ“„πŸ‘‡

1 year ago 0 1 1 0

In short, here’s how to build better evals:
βœ… Check if MCQA is the right format for what you want to test
βœ… Use design choices to limit leakage/errors/shortcuts
βœ… Keep questions easy for humans, hard for models

If we don’t put in this effort, what is MCQA even testing? πŸ€·β€β™‚οΈ

1 year ago 1 0 1 0

Lastly, we discuss persistent flaws of LLMs when running MCQA:
πŸ”©Robustness Issues
🌎 Biases
πŸ’¬ Unfaithful Explanations

Many of our earlier fixes to MCQA's format and datasets can also help address or evaluate these issues 😁

1 year ago 0 0 1 0

Two of the most pressing and promising dataset improvements include:
πŸ“‹ Writing MCQs using educators' rubrics to improve question quality
πŸ§‘β€πŸŽ“ Designing MCQs hard for models but easy for humans (adversarial), rather than creating needlessly impossible/obscure questions

1 year ago 0 0 1 0

Next, we show that even when MCQA is a good format, our datasets still have issues πŸ₯²

We discuss:
πŸ”“ Dataset Leakage
❓ Unanswerable Questions
⚑️ Shortcuts
πŸ“ˆ Saturation

More good news: educators already have solutions here, too! We also discuss recent work tackling these problems! πŸ’ͺ

1 year ago 0 0 1 0

So what's better? β€οΈβ€πŸ©Ή

We explore two possible improvements:
1️⃣ Constructed Response (short-form QA)
2️⃣ Explanation MCQA (justifying answers)

Both are grounded in education research, better align with LLM use cases, and test deeper knowledge levels than MCQA ⭐️

1 year ago 0 0 1 0

First, we show MCQA is flawed as a standardized LLM eval format because it often fails to:
πŸ”’ Test subjectivity and generation
πŸ‘₯ Align with real LLM use cases
🧠 Assess knowledge (based on education research)

When's the last time you asked ChatGPT to answer an MCQ? πŸ€”

1 year ago 1 0 1 0

We break our position into three points:
1️⃣ Flaws in MCQA’s format
2️⃣ Issues in datasets
3️⃣ Weaknesses in how LLMs run MCQA

The good news? Best practices from education, designed for effective student testing, can help fix these πŸ§‘β€πŸ«

Yet, we rarely use these insights in LLM evaluation 🀦

1 year ago 0 0 1 0

Namely, @boydgraber.bsky.social @lasha.bsky.social, Rachel, Feng, and folks from Adobe Research 🫑

1 year ago 0 0 0 0

Excited to share 2 papers at #NAACL2025 main!

πŸ“„βœοΈ MoDS: Multi-Doc Summarization for Debatable Queries (Adobe intern work, coming soon!)
πŸ€”β“Reverse QA: LLMs struggle with the simple task of giving questions for answers

Grateful for all my collaborators 😁

1 year ago 5 1 1 0

People often claim they know when ChatGPT wrote something, but are they as accurate as they think?

Turns out that while the general population is unreliable, those who frequently use ChatGPT for writing tasks can spot even "humanized" AI-generated text with near-perfect accuracy 🎯

1 year ago 189 66 10 19

Manifesting some good luck for my experiment running tonight 🀞

Best of luck to anyone submitting tmrw :)

1 year ago 3 0 0 0

Exciting research on an AI-driven mnemonic generator for easier vocabulary memorization by @nbalepur.bsky.social, Jordan Boyd-Graber, Rachel Rudinger, & @alexanderhoyle.bsky.social. Part of 21 CLIP projects at #EMNLP2024. πŸ‘‰ Read more: go.umd.edu/1u48 #AI

1 year ago 3 1 0 0

OLMo 2 is out πŸ₯³ 7B and 13B models trained on 5T tokens, and meticulously instruction-tuned using the Tulu 3 recipe.

Simply the best fully open models yet.

Really proud of the work & the amazing team at
@ai2.bsky.social

1 year ago 260 44 9 2