
Posts by Shane Storks

Thank you Vagrant! 🙏

2 weeks ago 1 0 0 0

Thanks Julia!!

2 weeks ago 0 0 0 0

Thanks Sireesh!

2 weeks ago 0 0 0 0
Eastern Michigan University seal

Happy news: I'll be joining Eastern Michigan University as an Assistant Professor of Computer Science in Fall 2026!

If you're an EMU student interested in my research, let's connect 👋

Will be sharing some fun #ACL2026NLP papers from my postdoc soon!

2 weeks ago 8 1 3 0

Hello #NLProc #ACL2026NLP people. I am looking for **two emergency reviewers** in the Safety and Alignment in LLMs track for ACL/ARR.

Reviews are due Feb 15th. Please DM if interested and available.

Happy to offer drinks/food if you live in/pass by Lisbon ☀️

2 months ago 6 10 0 0

Seems to be a common situation for ACs this round, but I'm also looking for two emergency reviewers for the January #ARR Evaluation and Resources track. I'd appreciate any help (reposts, encouragement, black magic...)

2 months ago 3 6 0 0

I'm looking for two emergency reviewers 🧑‍🚒👩‍🚒 for the ARR January Generalizability and Transfer track.

Please reach out if you have time and qualify to review, or RT for visibility 🙏🙏

2 months ago 2 6 0 0

I could use an emergency reviewer for an ACL submission involving interpretability and syntax. Please DM me if you might be able to provide an emergency review before February 15!

2 months ago 4 4 1 0

Looking for emergency reviewers for ARR Special Track "Explainability of NLP Models". Topics: Faithfulness, mechanistic interpretability, surveys and position papers. Deadline Feb 14 AoE. #ACL2026NLP

2 months ago 8 7 1 1

I am looking for 2 emergency reviewers for the ARR Ethics, Bias & Fairness track. Please DM me if you are available 🙏

2 months ago 6 6 0 0

Hello #NLProc #ACL2026NLP community, I'm looking for an emergency reviewer for an ARR submission on LLM interpretability.

If you're available to complete a review before Feb 15, please reply or DM 🙏

2 months ago 2 6 0 0

This work finally has a home! Looking forward to presenting "Transparent and Coherent Procedural Mistake Detection" at #EMNLP2025 🤩

8 months ago 0 0 0 0
Screenshot of the Ai2 Paper Finder interface

Meet Ai2 Paper Finder, an LLM-powered literature search system.

Searching for relevant work is a multi-step process that requires iteration. Paper Finder mimics this workflow, and helps researchers find more papers than ever 🔍

1 year ago 117 23 6 9
Call for Main Conference Papers: Official website for the 2025 Conference on Empirical Methods in Natural Language Processing

The EMNLP 2025 conference website and CfP are now live! 2025.emnlp.org/calls/main-c...

Conference dates: November 5-9 in Suzhou, China

Submissions will be through ARR, and this year's theme is Interdisciplinary Recontextualization of NLP

1 year ago 25 7 0 2

Our workshop has been extended to Feb 20. We look forward to your papers at NAACL's Queer in AI workshop.

1 year ago 17 18 0 1

One of the ways in which AI hype men are highly copacetic with Trump is that they think you can assert things with absolutely no care for truth or feasibility. Bullshitters par excellence

1 year ago 50 7 3 0
Coherent Physical Commonsense Reasoning in Foundational Language Models

Some happy news: my dissertation on "Coherent Physical Commonsense Reasoning in Foundational Language Models" is finally available online! 🎓 https://deepblue.lib.umich.edu/handle/2027.42/196025

1 year ago 3 0 0 0

Adding more details. Space is (very) limited. Please contact me by next Wednesday 1/15/2025 for full consideration. Proposal doesn't have to be formal.

1 year ago 0 0 0 0

📣 UMich undergraduate/master's students: are you interested in research at the intersection of LLMs and cognitive science, but need guidance and computing resources? I want to work with you!

If interested, DM/email me with your CV and a brief project proposal!

1 year ago 4 2 1 0
Shane Storks wearing academic regalia after his doctoral hooding ceremony.

So happy to finally share this last piece of my dissertation (and my first post on Bluesky)!

Obligatory photo after my recent hooding attached 🧑‍🎓

1 year ago 1 0 0 0

Compared to vanilla VLMs, our interventions improve the accuracy of mistake detection and the relevance, coherence, and efficiency of explanations.

We also show that patterns in metrics can indicate common issues in VLMs, such as visual hallucination! 😵‍💫

1 year ago 0 0 1 0

In this work, we expand the recently studied problem of procedural mistake detection in images to require explanations through self-Q&A. 👁‍🗨🤖💬

We define automated metrics for explanation coherence, and incorporate them into VLMs with various inference and fine-tuning methods.

1 year ago 0 0 1 0
Dialog between a foundational VLM and itself to detect the incomplete state of the procedure "Unclip the pegs on the cloth" in an image showing a cloth pegged to a clothing line. The VLM generates the following questions and answers: 1. "Is there a cloth in the image? Yes", 2. "Are there pegs on the cloth? Yes", and 3. "Is there someone holding pegs? No". As the VLM asks these questions it becomes more confident that the procedure has not been successfully completed.

How well can VLMs detect and explain humans' procedural mistakes, like in cooking or assembly? 🧑‍🍳🧑‍🔧

My new pre-print with Itamar Bar-Yossef, Yayuan Li, Zheyuan Zhang, Jason J. Corso, and Joyce Chai dives into this!

arxiv.org/pdf/2412.11927

1 year ago 3 0 1 1