Posts by Silpa Vadakkeeveetil Sreelatha

🚀 I’m thrilled to share that our paper, "Sebra: DeBiasing through Self-Guided Bias Ranking," has been accepted to ICLR 2025! 🎉

Meet Srishti Yadav, an ELLIS #PhD Student at 🇩🇰 Uni Copenhagen & 🇳🇱 Uni Amsterdam. Passionate about #AI & society, she explores culturally aware & inclusive AI models. Read her advice for young scientists & learn why women's visibility in AI/ML is crucial. #WomenInELLIS

In this paper, we aim for responsible (fair and safe) text-to-image generation in diffusion models. Join me to discuss how we can improve generative models for better fairness and safety!

(5/5)

(2) Concept Denoising Score Matching for Responsible Text-to-Image Generation
🗓️ Sun, Dec 15 | 🕗 8:15 a.m. PST | 📍 Safe Generative AI Workshop
Authors: Silpa Vadakkeeveetil Sreelatha, Sauradip Nag, @serge.belongie.com, Muhammad Awais, and Anjan Dutta

(4/5)

We introduce DeNetDM, a debiasing method that improves robustness to spurious correlations. Our method requires no bias annotations or explicit data augmentation while performing on par with approaches that require either or both. More details can be found here: arxiv.org/abs/2403.19863

(3/5)

(1) DeNetDM: Debiasing by Network Depth Modulation
🗓️ Thu, Dec 12 | 🕚 11 a.m. – 2 p.m. PST | 📍 East Exhibit Hall A-C #4309
Authors: Silpa Vadakkeeveetil Sreelatha*, Adarsh Kappiyath*, Abhra Chaudhuri, Anjan Dutta — * equal contribution

(2/5)

🚀 Excited to be at @neuripsconf.bsky.social in Vancouver! I'll be presenting my work on AI fairness & interpretability. Here's where you can find me:

(1/5)

With @neuripsconf.bsky.social right around the corner, we're excited to be presenting our work soon! Here's an overview:

(1/5)

Could you please add me? I am an ELLIS PhD student.
