Following the success of Explainable AI (XAI) in generating faithful and understandable explanations of complex ML models, increasing attention is being paid to how the outcomes of XAI can be systematically turned into meaningful actions. These questions are studied within the subfield of Actionable XAI. Research questions relevant to this subfield include: (1) which types of explanations are most helpful in enabling human experts to make decisions more efficiently and accurately; (2) how to systematically improve the robustness and generalization ability of ML models, or align them with human decision-making and norms, based on human feedback on explanations; (3) how to enable meaningful actions on real-world systems via interpretable ML-based digital twins; and (4) how to evaluate and improve the quality of actions derived from XAI in an objective and reproducible manner. This special track will address both the technical and practical aspects of Actionable XAI. This includes how to build highly informative explanations that form the basis for actionability, aiming for solutions that are interoperable with existing explanation techniques, such as Shapley values, LRP, or counterfactuals, and with existing ML models. The track will also cover real-world use cases in which these actions lead to improved outcomes.
CALL FOR PAPERS: #XAI2025, Special Track: Actionable Explainable AI. Submit your paper by February 15, 2025.
xaiworldconference.com/2025/actiona...
#XAI #LRP #counterfactuals #shapley #models #deeplearning #interpretability #decisionmaking @lorenzlinhardt.bsky.social @tuberlin.bsky.social