Denied a loan, an interview, or an insurance claim by a machine learning model? You may be entitled to a list of reasons.
In our latest work with @anniewernerfelt.bsky.social, @berkustun.bsky.social, and @friedler.net, we show how existing explanation frameworks fail and present an alternative for recourse.
Posts by Sujay Nagaraj
We’ll be at #ICLR2025, Poster Session 1 – #516!
Come chat if you’re interested in learning more!
This is work done with wonderful collaborators: Yang Liu, @fcalmon.bsky.social, and @berkustun.bsky.social.
Our algorithm can improve safety and performance by flagging regretful predictions for abstention or for data cleaning.
For example, we demonstrate that abstaining from prediction on flagged instances reduces mistakes compared to standard approaches.
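The abstention step can be sketched as simple selective prediction. The regret scores and the 0.3 threshold below are illustrative assumptions, not values from the paper:

```python
def predict_or_abstain(pred, regret, threshold=0.3):
    """Return the model's prediction, or None to abstain when regret is high."""
    return None if regret > threshold else pred

# Hypothetical predictions and per-instance regret scores.
preds = [1, 0, 1, 1]
regrets = [0.05, 0.45, 0.10, 0.60]
decisions = [predict_or_abstain(p, r) for p, r in zip(preds, regrets)]
print(decisions)  # [1, None, 1, None]
```

In practice, the threshold trades off coverage (how often the model answers) against the error rate on the predictions it does make.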
We develop a method that trains models over plausible clean datasets to anticipate regretful predictions, helping us spot when a model is unreliable at the individual level.
We capture this effect with a simple measure: regret.
Regret is inevitable with label noise, but it can tell us where models silently fail and how to guide safer predictions.
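As a rough sketch of the idea (not the paper's algorithm): train a model on the noisy labels, retrain on labelings resampled under an assumed noise model as crude stand-ins for "plausible clean datasets", and score each instance by how often its prediction flips. The 1-D threshold classifier and 20% flip rate are illustrative assumptions:

```python
import random

random.seed(0)

def train_threshold(xs, ys):
    """Pick the threshold minimizing training error (predict 1 when x > t)."""
    best_t, best_err = min(xs), float("inf")
    for t in xs:
        err = sum((x > t) != y for x, y in zip(xs, ys))
        if err < best_err:
            best_t, best_err = t, err
    return best_t

xs = [i / 20 for i in range(20)]
clean = [int(x > 0.5) for x in xs]
p = 0.2  # assumed symmetric flip rate
noisy = [y ^ (random.random() < p) for y in clean]

model_t = train_threshold(xs, noisy)  # model trained on the noisy labels

# Score each instance by how often its prediction flips across models
# trained on resampled labelings (stand-ins for plausible clean data).
n_draws, regret = 50, [0.0] * len(xs)
for _ in range(n_draws):
    plausible = [y ^ (random.random() < p) for y in noisy]
    t_k = train_threshold(xs, plausible)
    for i, x in enumerate(xs):
        regret[i] += ((x > model_t) != (x > t_k)) / n_draws

flagged = [x for x, r in zip(xs, regret) if r > 0.3]
print(flagged)  # instances whose predictions depend on the noise draw
```

Instances near the decision boundary accumulate the most disagreement, which is where label noise makes the learned model least trustworthy.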
This lottery breaks modern ML:
If we can’t tell which predictions are wrong, we can’t improve models, we can’t debug, and we can’t trust them in high-stakes tasks like healthcare.
We can frame this problem as learning from noisy labels.
Plenty of algorithms have been designed to handle label noise by predicting well on average, but we show that they can still fail on specific individuals.
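A toy simulation of this lottery (the synthetic labels and 20% symmetric flip rate are illustrative assumptions): two equally plausible noisy labelings of the same clean data carry similar average error, but the individuals who draw a wrong label differ:

```python
import random

random.seed(0)

clean = [0] * 50 + [1] * 50

def flip(ys, rate=0.2):
    """Apply symmetric label noise: flip each label with probability `rate`."""
    return [y ^ (random.random() < rate) for y in ys]

noisy_a, noisy_b = flip(clean), flip(clean)

err_a = sum(a != y for a, y in zip(noisy_a, clean))
err_b = sum(b != y for b, y in zip(noisy_b, clean))
wrong_a = {i for i, (a, y) in enumerate(zip(noisy_a, clean)) if a != y}
wrong_b = {i for i, (b, y) in enumerate(zip(noisy_b, clean)) if b != y}

print(err_a, err_b)               # similar error counts on average...
print(sorted(wrong_a ^ wrong_b))  # ...but different individuals are hit
```

Average-case metrics cannot distinguish these two worlds, yet they assign mistakes to different people, which is exactly what individual-level measures like regret are meant to surface.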
Many ML models predict labels that don’t reflect what we care about, e.g.:
– Diagnoses from unreliable tests
– Outcomes from noisy electronic health records
In a new paper w/@berkustun, we study how this subjects individuals to a lottery of mistakes.
Paper: bit.ly/3Y673uZ
🧵👇
🧠 Key takeaway: Label noise isn’t static, especially in time series.
💬 Come chat with me at #ICLR2025 Poster Session 2!
Shoutout to my amazing colleagues behind this work:
@tomhartvigsen.bsky.social
@berkustun.bsky.social
🔬 Real-world demo:
We applied our method to stress detection from smartwatches, where noisy self-reported labels can be compared against clean physiological measures.
📈 Our model tracks the true time-varying label noise, reducing test error relative to baselines.
We propose methods to learn this function directly from noisy data.
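As a hedged illustration of fitting a temporal noise function (not the paper's estimator): parameterize the flip rate as a linear drift and fit it by maximum likelihood over a grid. For simplicity, this sketch assumes an oracle reveals which labels were flipped; real methods, including the paper's, must work without that:

```python
import math
import random

random.seed(0)

# Illustrative setup: flip rate drifts linearly from 0.1 to 0.4 over T steps.
T, N = 20, 200
true_rate = lambda t: 0.1 + 0.3 * t / (T - 1)

# Oracle flip counts per time step (an assumption made only for this sketch).
counts = [sum(random.random() < true_rate(t) for _ in range(N)) for t in range(T)]

def nll(a, b):
    """Negative log-likelihood of the counts under rate(t) = a + b*t/(T-1)."""
    total = 0.0
    for t, k in enumerate(counts):
        r = min(max(a + b * t / (T - 1), 1e-3), 1 - 1e-3)
        total -= k * math.log(r) + (N - k) * math.log(1 - r)
    return total

# Grid-search maximum likelihood estimate of the drift parameters.
grid = [i / 100 for i in range(51)]
a_hat, b_hat = min(((a, b) for a in grid for b in grid), key=lambda ab: nll(*ab))
print(a_hat, b_hat)  # should land near the true (0.1, 0.3)
```

The same likelihood principle extends to richer parameterizations of the noise function, e.g. a small neural network of t trained jointly with the classifier.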
💥 Results:
On 4 real-world time series tasks:
✅ Temporal methods beat static baselines
✅ Our methods better approximate the true noise function
✅ They work when the noise function is unknown!
📌 We formalize this setting:
A temporal label noise function defines how likely each true label is to be flipped as a function of time.
Using this function, we propose a new time series loss function that is provably robust to label noise.
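The general shape of such a noise-robust loss can be sketched with forward loss correction (in the spirit of Patrini et al.), extended so the noise matrix depends on t. The linear drift in Q below is an illustrative assumption, and this is not the paper's exact loss:

```python
import math

def Q(t, horizon):
    """Noise matrix at time t: Q[i][j] = P(noisy = j | true = i).
    Here a symmetric flip rate grows over time, e.g. annotator fatigue."""
    rate = 0.1 + 0.3 * t / max(horizon - 1, 1)
    return [[1 - rate, rate], [rate, 1 - rate]]

def forward_corrected_nll(probs_clean, noisy_labels):
    """probs_clean[t] = model's P(true label | x_t); loss on noisy labels."""
    T = len(noisy_labels)
    loss = 0.0
    for t, (p, y) in enumerate(zip(probs_clean, noisy_labels)):
        q = Q(t, T)
        # Predicted distribution over *noisy* labels: push p through Q_t.
        p_noisy = [sum(p[i] * q[i][j] for i in range(2)) for j in range(2)]
        loss += -math.log(p_noisy[y])
    return loss / T

# A confident, correct clean prediction still incurs some loss under noise,
# but less than a confidently wrong one:
good = forward_corrected_nll([[0.9, 0.1]] * 4, [0, 0, 0, 0])
bad = forward_corrected_nll([[0.1, 0.9]] * 4, [0, 0, 0, 0])
print(good < bad)  # True
```

When Q_t is known, minimizing the corrected loss on noisy labels behaves, in expectation, like minimizing the original loss on clean labels, which is the sense in which such losses are provably noise-robust.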
🕒 What is temporal label noise?
In many real-world time series (e.g., wearables, EHRs), label quality fluctuates over time:
➡️ Participants fatigue
➡️ Clinicians miss more during busy shifts
➡️ Self-reports drift seasonally
Existing methods assume static noise → they fail in this setting.
🚨 Excited to announce a new paper accepted at #ICLR2025 in Singapore!
“Learning Under Temporal Label Noise”
We tackle a new challenge in time series ML: label noise that changes over time 🧵👇
arxiv.org/abs/2402.04398