
Posts by Melis İlayda Bal


News!⚡
We’re pleased to announce a new Call for Contributed Talks to bring novel and diverse viewpoints to EWRL2025.
Early-career researchers are especially encouraged to submit proposals and share their work with the community.
Full details: euro-workshop-on-reinforcement-learning.github.io/ewrl18/

10 months ago
EWRL18

Working on something cool in RL? Submit it to EWRL!
We’ll be opening submissions soon — and this year we’re introducing a fast track for papers already accepted at other conferences! 🚀
Check out the full Call for Papers on the website: euro-workshop-on-reinforcement-learning.github.io/ewrl18/

1 year ago

Mark your calendars, EWRL is coming to Tübingen! 📅
When? September 17-19, 2025.
More news to come soon, stay tuned!

1 year ago
Adversarial Training for Defense Against Label Poisoning Attacks As machine learning models grow in complexity and increasingly rely on publicly sourced data, such as the human-annotated labels used in training large language models, they become more vulnerable to ...

📄Preprint: arxiv.org/abs/2502.17121
Many thanks to my co-authors @cevherlions.bsky.social and Michael Muehlebach!

Looking forward to presenting this at ICLR 2025! If you're interested in adversarial robustness, I’d be happy to connect!

1 year ago

...an adversarial training framework designed to enhance robustness against these attacks.
🔹Defense formulated as a bilevel optimization framework using kernel SVMs.
🔹Adapts to poisoned labels, improving robust accuracy.
🔹Scalable and outperforms robust baselines under strong attacks.
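The bilevel idea above can be sketched as an alternating game: an inner adversary flips a budget of training labels to hurt the learner, and the outer step retrains a kernel SVM on those worst-case labels. This is a toy illustration only, not the paper's FLORAL algorithm; the greedy flipping heuristic and all parameter choices (RBF kernel, `budget`, number of rounds) are assumptions for demonstration.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split

# Toy 2-class dataset standing in for human-annotated training data.
X, y = make_moons(n_samples=400, noise=0.2, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

budget = 20  # adversary may flip at most this many training labels

clf = SVC(kernel="rbf", C=1.0).fit(X_tr, y_tr)
for _ in range(5):  # alternate adversary step / learner step
    # Inner step (adversary): greedily flip the labels the current model
    # classifies most confidently, which maximally increases its loss.
    margins = clf.decision_function(X_tr) * (2 * y_tr - 1)  # >0 = correct
    worst = np.argsort(margins)[-budget:]  # most confidently correct points
    y_adv = y_tr.copy()
    y_adv[worst] = 1 - y_adv[worst]
    # Outer step (learner): retrain the kernel SVM on worst-case labels.
    clf = SVC(kernel="rbf", C=1.0).fit(X_tr, y_adv)

print(f"test accuracy after adversarial training: {clf.score(X_te, y_te):.2f}")
```

Training against these worst-case flips is what makes the final model robust: at test time it is evaluated on clean labels, having never trusted the most easily poisoned points.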

1 year ago

🌺Excited to share our #ICLR2025 paper: 𝗔𝗱𝘃𝗲𝗿𝘀𝗮𝗿𝗶𝗮𝗹 𝗧𝗿𝗮𝗶𝗻𝗶𝗻𝗴 𝗳𝗼𝗿 𝗗𝗲𝗳𝗲𝗻𝘀𝗲 𝗔𝗴𝗮𝗶𝗻𝘀𝘁 𝗟𝗮𝗯𝗲𝗹 𝗣𝗼𝗶𝘀𝗼𝗻𝗶𝗻𝗴 𝗔𝘁𝘁𝗮𝗰𝗸𝘀!

Training ML models on public data isn’t all sunshine and rainbows—especially when adversaries sneak in poisoned labels to mess with your model. But fear not! In our work, we introduce 𝗙𝗟𝗢𝗥𝗔𝗟🌺...

1 year ago