
Posts by Rik Adriaensen

ProbLog4Fairness: A Neurosymbolic Approach to Modeling and Mitigating Bias
Operationalizing definitions of fairness is difficult in practice, as multiple definitions can be incompatible while each being arguably desirable. Instead, it may be easier to directly describe algor...

Joint work with @lucasvanpraet.bsky.social, @jessabekker.bsky.social, @robinmanhaeve.bsky.social, Pieter Delobelle, and Maarten Buyl. Find the preprint at:
arxiv.org/abs/2511.09768

3 months ago

💡 ProbLog4Fairness bridges this gap. It shows how to declaratively specify causes of bias using probabilistic logic in a principled, flexible, and interpretable way. Neurosymbolic extensions allow integrating these assumptions in the training of a classifier, to learn fair models from biased data!
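As a rough illustration of what "declaratively specifying a cause of bias" can mean, here is a minimal pure-Python sketch of one commonly assumed bias type, label bias, where a positive label for the disadvantaged group is sometimes flipped before it is recorded. The names (`FLIP_PROB`, `observe_label`, `p_observed_positive`) and the specific bias model are illustrative assumptions, not the actual ProbLog4Fairness code or API:

```python
import random

# Hypothetical sketch (not the ProbLog4Fairness implementation): a tiny
# generative model of *label bias*. For the disadvantaged group (group = 1),
# a truly positive label is flipped to negative with probability FLIP_PROB
# before it reaches the dataset. A probabilistic-logic program would state
# this flip as a single probabilistic clause and reason over it exactly.

FLIP_PROB = 0.3  # assumed bias strength

def observe_label(true_label: int, group: int, rng: random.Random) -> int:
    """Sample the (possibly biased) label actually recorded in the data."""
    if group == 1 and true_label == 1 and rng.random() < FLIP_PROB:
        return 0
    return true_label

def p_observed_positive(p_true_positive: float, group: int) -> float:
    """Exact probability that the recorded label is positive, marginalising
    over the flip -- the kind of inference a logic program automates."""
    if group == 1:
        return p_true_positive * (1 - FLIP_PROB)
    return p_true_positive

# With P(y=1) = 0.5 in both groups, the biased data would show
# P(observed=1) = 0.5 for group 0 but only 0.35 for group 1.
```

Writing the assumption down this explicitly is what makes it auditable: the classifier can then be trained against the true-label distribution implied by the model rather than the biased observations.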


๐Ÿ” How to integrate fairness assumptions into ML models?โ€จ In algorithmic fairness, many definitions of fairness exist, but they often contradict. Rather than choosing one definition, causal models reason about why bias arises in data. However, practitioners struggle to operationalize these models.


📌 Tomorrow we present a poster on ProbLog4Fairness: A Neurosymbolic Approach to Modeling and Mitigating Bias at #AAAI (Machine Learning 2, poster 1233, 12 pm to 2 pm).


๐Ÿ” How to integrate fairness assumptions into ML models?โ€จ In algorithmic fairness, many definitions of fairness exist, but they often contradict. Rather than choosing one definition, causal models reason about why bias arises in data. However, practitioners struggle to operationalize these models.

3 months ago 0 0 0 0