Some personal updates:
- I've completed my PhD at @unccs.bsky.social!
- Starting Fall 2026, I'll be joining the CS dept. at Johns Hopkins University @jhucompsci.bsky.social as an Assistant Professor
- Currently exploring options for my gap year (Aug 2025 - Jul 2026), so feel free to reach out!
Posts by Niranjan
Our paper "Misattribution Matters: Quantifying Unfairness in Authorship Attribution" got accepted to #ACL2025!
@niranjanb.bsky.social @ajayp95.bsky.social
arXiv link hopefully coming soon!
BIG CONGRATS to Elias (and UT Austin)! Really proud of you -- it has been a complete pleasure to work with Elias and see him grow into a strong PI on *all* axes
Make sure to apply for your PhD with him -- he is an amazing advisor and person!
Congratulations Elias (and to UT Austin too).
If you are at #AISTATS2025 and are interested in concept erasure, talk to @somnathbrc.bsky.social at Poster Session 1 on Saturday May 3.
I'll be presenting Meta-Reasoning Improves Tool Use in Large Language Models at #NAACL25 tomorrow, Thursday May 1st, from 2 until 3:30pm in Hall 3! Come check it out and have a friendly chat if you're interested in LLM reasoning and tools! #NAACL
Thrilled that our paper won Best Paper Runner-Up at #NAACL25!!
Our work (REL-A.I.) introduces an evaluation framework that measures human reliance on LLMs and reveals how contextual features like anthropomorphism, subject, and user history can significantly influence user reliance behaviors.
advisorial advertising? advisor's advertising?
Excited to share a new interp+agents paper: MICE for CATs: Model-Internal Confidence Estimation for Calibrating Agents with Tools, appearing at #NAACL2025
This was work done @msftresearch.bsky.social last summer with Jason Eisner, Justin Svegliato, Ben Van Durme, Yu Su, and Sam Thomson
1/🧵
I'll do the advisory advertising: @ykl7.bsky.social is a fantastic researcher and is passionate about being in academia. He has this amazing ability to simply get things done! Happy to say more in a letter or over a chat, but if you are going to @naaclmeeting.bsky.social (#NAACL2025), ping him.
Nice work Abhilasha!
We are launching HALoGEN, a way to systematically study *when* and *why* LLMs still hallucinate.
New work w/ Shrusti Ghela*, David Wadden, and Yejin Choi
Paper: arxiv.org/abs/2501.08292
Code/Data: github.com/AbhilashaRav...
Website: halogen-hallucinations.github.io 🧵 [1/n]
Announcing the #NAACL2025 Award Winners!
The Best Paper and Best Theme Paper winners will present at our closing session
2025.naacl.org/blog/best-pa...
Real-world retrieval is messy: queries are ambiguous, or docs conflict & have incorrect/irrelevant info. How can we jointly address these problems?
- RAMDocs: challenging dataset w/ ambiguity, misinformation & noise
- MADAM-RAG: multi-agent framework that debates & aggregates evidence across sources
🧵⬇️
Check out @juand-r.bsky.social and @wenxuand.bsky.social's work on improving generator-validator gaps in LLMs! I really like the formulation of the G-V gap we present, and I was pleasantly surprised by how well the ranking-based training closed the gap. Looking forward to following up in this area!
For years it's been an open question: how much is a language model learning and synthesizing information, and how much is it just memorizing and reciting?
Introducing OLMoTrace, a new feature in the Ai2 Playground that begins to shed some light.
Please share it within your circles! edin.ac/3DDQK1o
Excited to announce the COLM 2025 keynote speakers: Shirley Ho, Nicholas Carlini, @lukezettlemoyer.bsky.social, and Tom Griffiths!
See you in October in Montreal!
Working on it. Stay tuned.
Thanks @mohitbansal.bsky.social for the wonderful Distinguished Lecture on agents and multimodal generation. This got so many of us here at Stony Brook excited for the potential in these areas. Also, thanks for spending time with our students & sharing your wisdom. It was a pleasure hosting you!
A flyer announcing that Professor Mohit Bansal from the University of North Carolina at Chapel Hill will present a Distinguished Lecture on Planning Agents for Collaborative Reasoning and Multimodal Generation at 2:30 PM in New Computer Science Room 120 on Dec 6th, 2024. The flyer also has a headshot of Mohit Bansal.
Excited to host the wonderful @mohitbansal.bsky.social as part of Stony Brook CS Distinguished Lecture Series on Dec 6th. Looking forward to hearing about his team's fantastic work on Planning Agents for Collaborative Reasoning and Multimodal Generation. More here: tinyurl.com/jkmex3e9
Hmmm. Fair. I feel like some record somewhere of my tardiness in submitting reviews will prod me to do better.
I noticed a lot of starter packs skewed towards faculty/industry, so I made one of just NLP & ML students: go.bsky.app/vju2ux
Students do different research, go on the job market, and recruit other students. Ping me and I'll add you!
Have you thought about deanonymization after the discussion phase? That is, deanonymizing after the deed is done. This way it doesn't influence your thinking but keeps you accountable.
✨New pre-print!✨ Successful language technologies should work for a wide variety of languages. But some languages have systematically worse performance than others. In this paper we ask whether performance differences are due to morphological typology. Spoiler: I don't think so! #NLP #linguistics
A plot: the x-axis is the baseline score of rankers, in nDCG@10; the y-axis is the delta in model score after an expansion is applied. There are three sets of results, one dataset for each shift type: TREC DL (no shift), FiQA (domain shift), ArguAna (query shift). For each set of results, the chart shows a scatter plot with a trend line. We observe the same trend for all: as the baseline score increases, the delta from using expansion decreases. On TREC DL, the worst models have a base score of ~40 and improve by 10 points w/ expansion; the best models have a score of >70, and their performance decreases by 5 points w/ expansion. On FiQA, the worst models have a base score of ~15 and improve by 5 points w/ expansion; the best models have a score of ~45, and their performance decreases by 3 points w/ expansion. On ArguAna, the worst models have a base score of ~25 and improve by >20 points w/ expansion; the best models have a score of >55, and their performance decreases by 1 point w/ expansion.
Using LLMs for query or document expansion in retrieval (e.g. HyDE and Doc2Query) has scores going 📈
But do these approaches work for all IR models and for different types of distribution shifts? Turns out it's actually more 📉
Paper (arXiv soon): orionweller.github.io/assets/pdf/L...
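For readers unfamiliar with the expansion methods named above: the core HyDE idea is to retrieve with the embedding of a *hypothetical document* generated from the query, rather than the query itself. A minimal sketch, where `fake_llm` and the toy bag-of-letters `embed` are hypothetical stand-ins for a real LLM and a real dense encoder:

```python
def fake_llm(query: str) -> str:
    # Stand-in for an LLM asked to write a hypothetical passage
    # that answers the query (the "hypothetical document" in HyDE).
    return f"A passage answering the question: {query}"

def embed(text: str) -> list[float]:
    # Toy letter-frequency embedding, normalized to unit length.
    # A real system would use a dense retriever's encoder here.
    vec = [0.0] * 26
    for ch in text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - ord("a")] += 1.0
    norm = sum(v * v for v in vec) ** 0.5 or 1.0
    return [v / norm for v in vec]

def hyde_query_vector(query: str) -> list[float]:
    # Embed the generated hypothetical document instead of the raw query.
    return embed(fake_llm(query))

def score(doc: str, qvec: list[float]) -> float:
    # Cosine similarity between a candidate document and the query vector.
    dvec = embed(doc)
    return sum(a * b for a, b in zip(qvec, dvec))
```

Documents are then ranked by `score` against `hyde_query_vector(query)`; the thread's point is that the gain from this kind of expansion shrinks (and can turn negative) as the base ranker gets stronger.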
We are refreshing the AppWorld (appworld.dev) leaderboard with all the new coding and/or tool-use LMs.
What would you like to be included?
Self-plugs are welcome!!
x.com/harsh3vedi/s...