
Posts by Vagrant Gautam

hell yeah

2 days ago

this wording is going to be helpful for my reviews too, thanks!

3 days ago

congrats!!

4 days ago

🥳🥳🥳🥳🥳🥳🥳

5 days ago

and john colazione and john pranzo

1 week ago

oh god

1 week ago

we hung out a bunch and it was lovely!

2 weeks ago

I'm playing hooky today 🙈 but yes, tomorrow or day after!

2 weeks ago
A cat naps contentedly while sitting neatly on a mosaic tiled floor with a lot of large and brightly coloured patterns. The walls behind are bright blue.

A cat yawns widely, looking almost as if it's screaming, in front of a display with rows of touristy magnets of blue windows in Chefchaouen.

Moar cats if you made it to the end of this thread.

2 weeks ago

The first case is kinda like if you don't swear because you're constantly thinking to yourself, "I don't want to swear, I can't say fuck, please god I need to shut up," while the second is like if you don't swear just because you don't know how.

2 weeks ago

I refer you to Andreas's thread on aligned probing. The coolest finding for me was: When model representations are more informative of an input prompt's toxicity, its generations are less toxic. But for DPO-detoxified models, *less* informative representations somehow also result in lower toxicity.

2 weeks ago

There's still a very long way to go here. Referential / pronominal reasoning is something we as humans are great at and we don't even break it down into steps. In contrast, even DeepSeek-distilled Llama-70B with a huge token budget is just above chance in easy settings where humans are perfect.

2 weeks ago
Training in Step-by-Step Formal Reasoning Improves Pronominal Reasoning in Language Models Vagrant Gautam. Proceedings of the 19th Conference of the European Chapter of the Association for Computational Linguistics (Volume 2: Short Papers). 2026.

Prior work shows that code pre-training improves entity tracking, but chain-of-thought prompting worsens pronominal reference. When you combine the two (i.e., *train* on chains of thought about code, math, and logic; via DeepSeek distillation), it helps pronominal reasoning!

2 weeks ago

I already presented some work on reference (names, pronouns, coreference resolution, pronoun fidelity, etc.) as a rich site to evaluate biases and commonsense reasoning, and our work on disentangling model behaviour and internals through aligned probing (led by @tresiwald.bsky.social).

2 weeks ago
Teaching and Critiquing Conceptualization and Operationalization in NLP Vagrant Gautam. Proceedings of the Seventh Workshop on Teaching Natural Language Processing (TeachNLP 2026). 2026.

On Sunday, I'm presenting the course I designed on defining and measuring abstract concepts in NLP like "bias" and "interpretability," something we need as researchers to critically parse existing work, make sense of hype, and do meaningful science. 15% of my poster is a meme, come check it out!

2 weeks ago
A smug-looking cat loafs with its paws folded in at the top of stairs that lead down into an alley painted completely blue in Chefchaouen, Morocco.

A huge and imposing marble mosque in Casablanca, with blue skies and sparse white clouds in the sky above. This is the Hassan II Mosque, one of the biggest in the world. The handful of people standing around in front of it look like ants.

Late post but I'm at #EACL2026 in Morocco where I'm petting cats, seeing sights, and presenting some work - here are the highlights.

2 weeks ago

+1 that was definitely one of the coolest talks i've been to at a conference!

3 weeks ago
Mechanistic? Naomi Saphra, Sarah Wiegreffe. Proceedings of the 7th BlackboxNLP Workshop: Analyzing and Interpreting Neural Networks for NLP. 2024.

And separately, another @nsaphra.bsky.social and @sarah-nlp.bsky.social banger aclanthology.org/2024.blackbo...

4 weeks ago
The elephant in the interpretability room: Why use attention as explanation when we have saliency methods? Jasmijn Bastings, Katja Filippova. Proceedings of the Third BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP. 2020.

Oh another one that can be considered part of this series is aclanthology.org/2020.blackbo...

4 weeks ago
Attention is not Explanation Sarthak Jain, Byron C. Wallace. Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short...

Another example is Attention is not Explanation, followed by Attention is not not Explanation, and then all the authors collaborated on a third paper, Learning to Faithfully Rationalize by Construction.
aclanthology.org/N19-1357/
aclanthology.org/D19-1002/
aclanthology.org/2020.acl-mai...

4 weeks ago
Climbing towards NLU: On Meaning, Form, and Understanding in the Age of Data Emily M. Bender, Alexander Koller. Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. 2020.

I like historical debates in the field, e.g., aclanthology.org/2020.acl-mai... and one response julianmichael.org/blog/2020/07...

4 weeks ago

damn this is so juicy

1 month ago
simplified overview of our aligned probing setup, where we join the behavioral and internal evaluation of LMs' toxicity

LMs that "know more" about toxicity are less toxic!
Our #TACL 📄 connects behavior and internals:
💠 LMs amplify toxicity beyond humans
💠 Information about toxicity peaks in lower layers
💠 Bypassing these layers increases toxicity
More details👇 #NLProc #interpretability (1/🧵)

2 months ago
“Whose Facts Win? LLM Source Preference under Knowledge Conflicts”
Authors: Jakob Schuster, Vagrant Gautam, Katja Markert
 
Evaluating 13 LLMs on source and knowledge conflicts induces a source credibility hierarchy of Government > Newspaper > Person/Social Media. However, repeating information can flip preferences.

Excited to share the first preprint of my PhD!
While many papers focus on what kind of information LLMs trust, @dippedrusk.com, Katja Markert, and I instead investigate whose evidence models prefer by looking at source credibility.

#NLP #Research #CL #LLMs

1/7 🧵

2 months ago

I passed! #PhDone

3 months ago

love u <3

3 months ago

<3 <3

3 months ago
Naming in academia: Fill out our survey! We're surveying scholars about naming and name change experiences in academia. This includes spelling variations, reordering, and changing any part of your name, for any reason: gender transition, marriage, divorce, immigration, cultural reasons, or recognition. This survey takes around 5-10 minutes!

@pranav-nlp.bsky.social and I are surveying researchers about naming and name changes in academia (especially computer science).

If your academic name is / has been / might someday be different from other names you've used, please tell us about it here: forms.cloud.microsoft/e/E0XXBmZdEP

5 months ago
Beetlejuice 2 - Delores TRAGEDY - Delores first appearance (YouTube video by ClipsRJCR)

The scene where she appears is the best scene in the film imo
www.youtube.com/watch?v=VfkQ...

5 months ago
Vagrant (me) staring into the distance wearing smokey makeup, a long-haired black wig, and a black scar on xyr face that is fake-stapled together with shiny silvery stickers. I'm also wearing a black dress that looks very goth.

Monica Bellucci is a divine vision for goths everywhere with her stapled face, tear-stained smokey makeup, dark flowing hair and black dress from Beetlejuice Beetlejuice. She looks unhappy, betrayed, and stunning.

I was Delores from Beetlejuice Beetlejuice for Halloween!

5 months ago