
Posts by Omri Ben-Dov

Adaptive Symmetrization of the KL Divergence - Omri Ben-Dov (YouTube video by Friday Talks Tübingen)

A new recording of our FridayTalks@Tübingen series is online!

Adaptive Symmetrization of the KL Divergence
by
@omribendov.bsky.social

Watch here: youtu.be/VPu1Rd4TmkU

1 month ago

We (w/ Moritz Hardt, Olawale Salaudeen and
@joavanschoren.bsky.social) are organizing the Workshop on the Science of Benchmarking & Evaluating AI @euripsconf.bsky.social 2025 in Copenhagen!

📢 Call for Posters: rb.gy/kyid4f
📅 Deadline: Oct 10, 2025 (AoE)
🔗 More info: rebrand.ly/bg931sf

7 months ago
Our proposed methods are more efficient than random flipping, requiring fewer label flips to reach the same level of fairness.

We also evaluate our methods and theoretically analyze their limitations.
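As a toy illustration of why targeted relabeling beats random flipping, here is a minimal sketch using demographic parity as the fairness metric. Everything here is an assumption for illustration: the metric, the synthetic data, and the simple "flip a negative label in the lower-rate group" heuristic are not the paper's actual methods.

```python
import random

def pos_rate(y, g, grp):
    # P(y = 1 | group = grp) under the current labels
    members = [yi for yi, gi in zip(y, g) if gi == grp]
    return sum(members) / len(members)

def dp_gap(y, g):
    # demographic-parity violation: gap between the groups' positive rates
    return abs(pos_rate(y, g, 0) - pos_rate(y, g, 1))

def flips_needed(y, g, targeted, tol=0.03, seed=0, max_flips=2000):
    # Count label flips until the DP gap drops to tol or below.
    rng = random.Random(seed)
    y = list(y)  # work on a copy
    flips = 0
    while dp_gap(y, g) > tol and flips < max_flips:
        if targeted:
            # flip a negative label to positive in the lower-rate group
            low = 0 if pos_rate(y, g, 0) < pos_rate(y, g, 1) else 1
            i = next(i for i, (yi, gi) in enumerate(zip(y, g))
                     if gi == low and yi == 0)
            y[i] = 1
        else:
            # random baseline: flip an arbitrary label
            i = rng.randrange(len(y))
            y[i] = 1 - y[i]
        flips += 1
    return flips

# Toy data: minority group 0 (20 samples, 20% positive),
# majority group 1 (80 samples, 50% positive) -> initial DP gap of 0.3
y = [1] * 4 + [0] * 16 + [1] * 40 + [0] * 40
g = [0] * 20 + [1] * 80

print(flips_needed(y, g, targeted=True))   # 6 targeted flips close the gap
print(flips_needed(y, g, targeted=False))  # random flipping needs at least as many
```

On this toy data each flip in the minority group moves the gap by 0.05, so no strategy can close a 0.3 gap in fewer than 6 flips; the targeted heuristic hits that floor, while random flipping wanders.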

Read the full preprint here: arxiv.org/abs/2508.15374
Authors: Omri Ben-Dov, Samira Samadi, @amartyasanyal.bsky.social, @alext2.bsky.social

We hope this paper inspires new research into user-side bias mitigation.
(4/4)

8 months ago

How do user-side methods compare with firm-side fair learning?

Weakness: User-side generally cannot reach perfect fairness, while firm-side can.

Strength: User-side methods have a smaller accuracy cost than firm-side algorithms.

(3/4)

8 months ago

We show how algorithmic collective action can align with fairness, leading the collective to a relabeling strategy.

To approximate the correct labels, we propose three model-agnostic methods.

Across several datasets, 20-30% of the minority is enough to achieve the best possible fairness.
(2/4)

8 months ago
A classifier has an error of 0.15 and a fairness violation of 0.13; the same classifier, trained on data with just 6 relabeled samples, has the same error but only a 0.03 fairness violation.

In our new work we ask: Can end-users make a platform’s ML models fairer?

Firm-side fair learning often reduces accuracy, discouraging firms from using it. But if a platform relies on user data, can minority users collectively change the data to induce fairness?

(1/4)

8 months ago