#blackboxNLP

@linaconti.bsky.social presented “The Unheard Alternative: Contrastive Explanations for Speech-to-Text Models” at the #BlackboxNLP Workshop

📌 aclanthology.org/2025.blackbo...

(4/5)


Heading to the EMNLP BlackboxNLP Workshop this Sunday? Don't miss @nfel.bsky.social and @lkopf.bsky.social's poster "Interpreting Language Models Through Concept Descriptions: A Survey"
aclanthology.org/2025.blackbo...

#EMNLP #BlackboxNLP #XAI #Interpretability


Only 3 days left for direct submissions to #BlackboxNLP, don't miss it! 🚀


Submit your work to #BlackboxNLP 2025!


📢 Call for Papers! 📢
#BlackboxNLP 2025 invites the submission of archival and non-archival papers on interpreting and explaining NLP models.

📅 Deadlines: Aug 15 (direct submissions), Sept 5 (ARR commitment)
🔗 More details: blackboxnlp.github.io/2025/call/


Just 5 days left to submit your method to the MIB Shared Task at #BlackboxNLP!

Have last-minute questions or need help finalizing your submission?
Join the Discord server: discord.gg/n5uwjQcxPR


📝 Technical report guidelines are out!

If you're submitting to the MIB Shared Task at #BlackboxNLP, feel free to take a look to help you prepare your report: blackboxnlp.github.io/2025/task/


Just 10 days to go until the results submission deadline for the MIB Shared Task at #BlackboxNLP!

If you're working on:
🧠 Circuit discovery
🔍 Feature attribution
🧪 Causal variable localization
now’s the time to polish and submit!

Join us on Discord: discord.gg/n5uwjQcxPR


⏳ Three weeks left! Submit your work to the MIB Shared Task at #BlackboxNLP, co-located with @emnlpmeeting.bsky.social

Whether you're working on circuit discovery or causal variable localization, this is your chance to benchmark your method in a rigorous setup!


Working on feature attribution, circuit discovery, feature alignment, or sparse coding?
Consider submitting your work to the MIB Shared Task, part of this year’s #BlackboxNLP

We welcome submissions of both existing methods and new or experimental POCs!


The wait is over! 🎉 Our speakers for #BlackboxNLP 2025 are finally out!


🚨 Excited to announce two invited speakers at #BlackboxNLP 2025!

Join us to hear from two leading voices in interpretability:
🎙️ Quanshi Zhang (Shanghai Jiao Tong University)
🎙️ Verna Dankers (McGill University)

‪@vernadankers.bsky.social‬


Working on circuit discovery in LMs?
Consider submitting your work to the MIB Shared Task, part of #BlackboxNLP at @emnlpmeeting.bsky.social 2025!

The goal: benchmark existing MI methods and identify promising directions to precisely and concisely recover causal pathways in LMs >>


Have you heard about this year's shared task? 📢

Mechanistic Interpretability (MI) is quickly advancing, but comparing methods remains a challenge. This year at #BlackboxNLP, we're introducing a shared task to rigorously evaluate MI methods in language models 🧵


Have you already found #BlackboxNLP on Bluesky?🎉


Interested in mechanistic interpretability and care about evaluation? Please consider submitting to our shared task at #blackboxNLP this year!


Excited about the release of MIB, a Mechanistic Interpretability Benchmark!

Come talk to us at #iclr2025 and consider submitting to the leaderboard.

We’re also planning a shared task around it at #blackboxNLP this year, co-located with #emnlp2025


I’ll be presenting two posters on (psycho)linguistically motivated perspectives on LM generalization at #EMNLP2024!

1. Sensitivity to Argument Roles - Session 2 & #BlackBoxNLP
2. Learning & Filler-Gap Dependencies - #CoNLL

Excited to chat with other folks interested in compling x cogsci!

papers⬇️


How Much Consistency Is Your Accuracy Worth?

A new #blackboxNLP paper in which we propose a supplementary measure for contrast-set consistency, enabling discussion of whether higher consistency was achievable at the same accuracy 1/
