Joint work with @lucasvanpraet.bsky.social, @jessabekker.bsky.social, @robinmanhaeve.bsky.social, Pieter Delobelle, and Maarten Buyl. Find the preprint at:
arxiv.org/abs/2511.09768
Posts by Rik Adriaensen
💡 ProbLog4Fairness bridges this gap. It shows how to declaratively specify causes of bias using probabilistic logic in a principled, flexible, and interpretable way. Neurosymbolic extensions then integrate these bias assumptions into the training of a classifier, so that fair models can be learned from biased data!
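As a rough illustration of the kind of assumption such a probabilistic-logic model can express, here is a toy sketch (all names and probabilities are hypothetical, not from the paper). It mimics ProbLog's possible-world semantics by brute-force enumeration in plain Python: probabilistic facts state that a true label is positive, that an instance is in a protected group, and that protected instances get their recorded label flipped with some probability, and a logic rule derives the observed label.

```python
from itertools import product

# Hypothetical bias model (illustrative numbers, not from the paper):
# the true label is positive w.p. 0.7, the instance is protected w.p. 0.5,
# and a "label bias" fact flips the recorded label w.p. 0.2 if protected.
facts = {"true_pos": 0.7, "protected": 0.5, "flip_if_protected": 0.2}

def observed_positive(world):
    # Logic rule: the observed label equals the true label, unless the
    # instance is protected AND the flip fact fired in this world.
    flipped = world["protected"] and world["flip_if_protected"]
    return world["true_pos"] != flipped  # boolean XOR

def query_prob(facts, rule):
    """Sum the weight of every possible world in which the rule holds."""
    names = list(facts)
    total = 0.0
    for values in product([True, False], repeat=len(names)):
        world = dict(zip(names, values))
        weight = 1.0
        for name, value in world.items():
            weight *= facts[name] if value else 1.0 - facts[name]
        if rule(world):
            total += weight
    return total

# P(observed positive) = 0.7 * 0.9 + 0.3 * 0.1 = 0.66
print(round(query_prob(facts, observed_positive), 3))
```

In ProbLog itself this would be a handful of probabilistic facts and clauses rather than Python; the enumeration here only makes the semantics concrete for a small example.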
🤔 How do we integrate fairness assumptions into ML models? ✨ In algorithmic fairness, many definitions of fairness exist, but they often contradict one another. Rather than committing to a single definition, causal models reason about why bias arises in the data. However, practitioners struggle to operationalize these models.
📍 Tomorrow we present a poster on ProbLog4Fairness: A Neurosymbolic Approach to Modeling and Mitigating Bias at #AAAI (Machine Learning 2, poster 1233, 12-2 pm).