Posts by Gunnar König
At #NeurIPS in San Diego this week? Interested in XAI, causality, or performative prediction? Come visit our poster!
💬 Performative Validity of Recourse Explanations
📆 Wednesday, 4:30 pm, Poster Session 2
w/ Hidde Fokkema, Timo Freiesleben, Celestine Mendler-Dünner, Ulrike von Luxburg
The schedule for our Workshop on the Theory of XAI is now online!
🕰️ Dec 2, starting 9am
📍 Bella Center Copenhagen (co-located with EurIPS)
🔗 sites.google.com/view/theory-...
Tübingen Conference for AI and Law starting on Wednesday!
Keynotes by Rediet Abebe, Solon Barocas, Sylvie Delacroix, Lilian Edwards, Christoph Engel, Michal Gal, Philipp Hacker, Christoph Kern, Christoph Sorge.
ailawinstitute.de/conference-f...
I almost overlooked this one. Thanks to #NeurIPS for the complimentary registration! 🙏
🔹 Speakers: @jessicahullman.bsky.social, @doloresromerom.bsky.social, @tpimentel.bsky.social & Bernt Schiele
🕒 Call for contributions open until Oct 15 (AoE)
🔗 More info: eurips.cc/ellis
How can we make AI explanations provably correct — not just convincing? 🤔
Join us for the Theory of Explainable Machine Learning Workshop, part of the ELLIS UnConference Copenhagen 🇩🇰 on Dec 2, co-located with #EurIPS.
🕒 Call for contributions open until Oct 15 (AoE)
🔗 eurips.cc/ellis
In short: Many XAI papers are based on goals such as "transparency". But what does that mean? We argue that XAI methods should be motivated by concrete goals (e.g., explaining how to change an unfavorable prediction) instead of vague concepts (e.g., interpretability).
Section 3, Misconception 1
Our article is also on arXiv: arxiv.org/pdf/2306.04292
Looking forward to talking about our work on the value of explanation for decision-making at this workshop
I have 2 open PhD positions on Mathematical Foundations for Explainable AI:
Position 1: werkenbij.uva.nl/en/vacancies... (apply by October 13, 2025)
Position 2: applications via the ELLIS PhD Program: ellis.eu/news/ellis-p... (apply by October 31)
Both positions are equivalent (except for their starting dates).
Interested in provable guarantees and fundamental limitations of XAI? Join us at the "Theory of Explainable AI" workshop Dec 2 in Copenhagen! @ellis.eu @euripsconf.bsky.social
Speakers: @jessicahullman.bsky.social @doloresromerom.bsky.social @tpimentel.bsky.social
Call for Contributions: Oct 15
expressing appreciation for this scientific diagram
Time to figure out which provable guarantees one can(not) give on XAI! Workshop "Theory of Explainable Machine Learning", Dec 2 in Copenhagen as part of the ELLIS UnConference/EurIPS. Submission deadline: Oct 15.
sites.google.com/view/theory-...
eurips.cc/ellis/
🚨 Workshop on the Theory of Explainable Machine Learning
Call for extended abstracts (≤2 pages) now open, submissions due October 15!
📍 ELLIS UnConference in Copenhagen
📅 Dec. 2
🔗 More info: sites.google.com/view/theory-...
@gunnark.bsky.social @ulrikeluxburg.bsky.social @emmanuelesposito.bsky.social
I am hiring PhD students and/or Postdocs to work on the theory of explainable machine learning. Please apply through ELLIS or IMPRS; deadlines are end of October/mid-November. In particular: Women, where are you? Our community needs you!!!
imprs.is.mpg.de/application
ellis.eu/news/ellis-p...
Not that I know of. But the method is relatively easy to implement. Please reach out if you would like to use it. I'm happy to assist!
Sounds interesting? Have a look at our paper!
Joint work with Eric Günther and @ulrikeluxburg.bsky.social.
DIP is
✅ unique under mild assumptions,
✅ easy to interpret,
✅ entails an efficient estimation procedure,
✅ describes properties of the data (instead of just a specific model), and
✅ comes with a Python implementation (github.com/gcskoenig/dipd).
In our recent AISTATS paper, we propose DIP, a novel mathematical decomposition of feature attribution scores that cleanly separates individual feature contributions and the contributions of interactions and dependencies.
Dependencies are not only a neglected cooperative force but also complicate the definition and quantification of feature interactions. In particular, the contributions of interactions and dependencies may cancel each other out and must be disentangled to be fully revealed.
For example, suppose we predict kidney function (Y) from creatinine (C) and muscle mass (M), and that C reflects Y but also M, which is not linked to Y. Here, M becomes useful once combined with C, as it allows us to subtract irrelevant variation from C. In other words, C&M cooperate via dependence!
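To see this cooperation-via-dependence effect numerically, here is a minimal simulation of the creatinine/muscle-mass example (my own illustrative sketch, not code from the paper; the data-generating process and numbers are assumptions):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 10_000
Y = rng.normal(size=n)                # kidney function (target)
M = rng.normal(size=n)                # muscle mass, independent of Y
C = Y + M + 0.1 * rng.normal(size=n)  # creatinine reflects Y but also M

# M alone is useless, C alone is noisy, C and M together are near-perfect.
for name, X in [("M alone", M[:, None]),
                ("C alone", C[:, None]),
                ("C and M", np.column_stack([C, M]))]:
    X_tr, X_te, y_tr, y_te = train_test_split(X, Y, random_state=0)
    r2 = LinearRegression().fit(X_tr, y_tr).score(X_te, y_te)
    print(f"{name}: R^2 = {r2:.2f}")  # roughly 0.00, 0.50, 0.99 under this setup
```

M has no marginal signal, yet jointly with C it lets the model subtract the irrelevant variation from C.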
Determining whether variables are relevant due to cooperation is crucial, as variables that cooperate must be considered jointly to understand their relevance. Notably, features cooperate not only through interactions but also through statistical dependencies, which existing methods neglect.
In many XAI applications, it is crucial to determine whether features contribute individually or only when combined. However, existing methods fail to reveal cooperation since they entangle individual contributions with those made via interactions and dependencies. We show how to disentangle them!
Feature importance measures can clarify or mislead. PFI, LOCO, and SAGE each answer a different question.
Understand how to pick the right tool and avoid spurious conclusions: mcml.ai/news/2025-03...
@fionaewald.bsky.social @ludwig-bothmann.bsky.social @giuseppe88.bsky.social @gunnark.bsky.social
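To make the "different questions" point concrete, here is a toy sketch (my own, not taken from the linked MCML article): PFI permutes a feature and asks how much a fixed, already-fitted model relies on it, while LOCO retrains without the feature and asks how much held-out performance is lost. With two nearly duplicated features, the two measures disagree:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 4_000
x1 = rng.normal(size=n)
x2 = x1 + 0.1 * rng.normal(size=n)  # x2 is nearly a copy of x1
y = x1 + rng.normal(scale=0.5, size=n)
X = np.column_stack([x1, x2])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = RandomForestRegressor(random_state=0).fit(X_tr, y_tr)

# PFI: permute one feature at a time, keeping the fitted model fixed.
pfi = permutation_importance(model, X_te, y_te, random_state=0).importances_mean

# LOCO: drop the feature, retrain, compare held-out performance.
base = model.score(X_te, y_te)
loco = []
for j in range(X.shape[1]):
    refit = RandomForestRegressor(random_state=0).fit(np.delete(X_tr, j, axis=1), y_tr)
    loco.append(base - refit.score(np.delete(X_te, j, axis=1), y_te))

print("PFI: ", np.round(pfi, 3))   # clearly positive for both features
print("LOCO:", np.round(loco, 3))  # near zero: each is redundant given the other
```

SAGE, in turn, distributes the model's predictive value across features via Shapley values, averaging over such redundancies; which of the three is "right" depends entirely on the question asked.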
Finally made it to bluesky as well ...
And the video of Gunnar's talk is up on YouTube in case you missed it: youtu.be/7MrMjabTbuM
@gunnark.bsky.social
I recall you had an iPad -- why did you switch?
A starter pack of people working on interpretability / explainability of all kinds, using theoretical and/or empirical approaches.
Reply or DM if you want to be added, and help me reach others!
go.bsky.app/DZv6TSS