
Posts by Divya Shanmugam

thanks for these comments, Ira :) :)

4 weeks ago 1 0 0 0
A roadmap for addressing the use of race and ethnicity in clinical algorithms (Nature Health): Removing race and ethnicity from clinical algorithms is feasible, but it requires careful evaluation of algorithmic changes and systemic efforts to address underlying disparities.

Wonderful to work with co-first author @sidhikabalachandar.bsky.social along with Alex Chouldechova, James Diao, Kadija Ferryman, Arjun Manrai, @stephenpfohl.bsky.social, Neil Powe, @rajiinio.bsky.social, and @emmapierson.bsky.social on this piece.

You can read it here! rdcu.be/e9uZa

4 weeks ago 5 1 0 0

The goal is not just to remove race from models. It's to build systems where race no longer adds predictive power — because the factors it proxies for have been directly measured and the inequities it captures have been addressed.

4 weeks ago 3 1 1 0

Computational approaches to removing race are often insufficient. Why? Race often correlates with (1) unmeasured but relevant factors, like genetic traits (which should be measured directly) and (2) systemic disparities, like racism (which should be addressed, not adjusted for).

4 weeks ago 2 1 1 0

We lay out principles for these evaluations: measure not just model fit but downstream effects on treatment/resource allocation; report results by racial subgroup; and pair simulated analyses with post-deployment studies tracking real-world consequences. Lots of room here for more work!

4 weeks ago 1 0 1 0
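As a concrete illustration of the subgroup-stratified reporting described above, here is a minimal Python sketch (the data, threshold, metric choices, and function name are hypothetical, not from the paper):

```python
import numpy as np

def subgroup_metrics(y_true, y_score, groups, threshold=0.5):
    """Report model fit and a downstream allocation rate per subgroup."""
    y_true = np.asarray(y_true)
    y_pred = (np.asarray(y_score) >= threshold).astype(int)
    groups = np.asarray(groups)
    results = {}
    for g in np.unique(groups):
        mask = groups == g
        results[g] = {
            "n": int(mask.sum()),
            # model fit within the subgroup
            "accuracy": float((y_pred[mask] == y_true[mask]).mean()),
            # downstream effect: fraction of the subgroup allocated treatment
            "treatment_rate": float(y_pred[mask].mean()),
        }
    return results
```

Reporting a downstream quantity like the allocation rate alongside accuracy is the point: two models with identical fit can allocate treatment very differently across subgroups.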

Before changing the inputs to a clinical algorithm, it is critical to evaluate the consequences. Past work in nephrology and pulmonology has shown that removing race can have unpredictable effects, and can both improve and worsen disparities, making it essential to conduct rigorous evaluations.

4 weeks ago 2 0 1 0

New in Nature Health: how might we move towards a world in which race is not used in clinical algorithms? We need (1) careful comparison of race-aware and race-neutral algorithms and (2) systemic efforts to address underlying disparities.

4 weeks ago 21 9 1 3

New paper! The Linear Representation Hypothesis is a powerful intuition for how language models work, but lacks formalization. We give a mathematical framework in which we can ask and answer a basic question: how many features can be stored under the hypothesis? 🧵 arxiv.org/abs/2602.11246

2 months ago 43 14 1 2
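The capacity question has a familiar geometric flavor: in d dimensions you can pack far more than d nearly orthogonal directions. A quick numerical illustration of that generic random-projection intuition (not the paper's formal framework, which is more precise than this):

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 256, 2000  # representation dimension, candidate feature count

# n random unit "feature directions" in d-dimensional space
V = rng.standard_normal((n, d))
V /= np.linalg.norm(V, axis=1, keepdims=True)

# worst-case interference between any two distinct features
G = np.abs(V @ V.T)
np.fill_diagonal(G, 0.0)
print(f"{n} features in {d} dims, max |cosine| = {G.max():.3f}")
```

Even with n nearly an order of magnitude larger than d, the worst pairwise interference stays modest; how many features fit under a given interference tolerance is exactly the kind of question a formal framework lets you ask and answer.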

We found, for example, racial disparities in upward mobility: the rate at which people move to higher-income areas varies with the racial composition of their current area of residence, even after controlling for income levels. 6/9

2 months ago 5 2 1 0
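For readers curious what "after controlling for income levels" means operationally, here is a toy regression sketch on synthetic data (the variable names, coefficients, and data are invented for illustration; this is not the paper's dataset or model):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
income = rng.normal(size=n)         # standardized income
pct_minority = rng.uniform(size=n)  # racial composition of current area

# synthetic outcome: mobility depends on both variables by construction
mobility = 0.5 * income - 0.3 * pct_minority + rng.normal(scale=0.1, size=n)

# OLS with an intercept; coef[2] is the association between mobility and
# racial composition net of income
X = np.column_stack([np.ones(n), income, pct_minority])
coef, *_ = np.linalg.lstsq(X, mobility, rcond=None)
```

Because income enters the design matrix, the racial-composition coefficient measures variation in mobility that income alone cannot explain.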

Our paper “Inferring fine-grained migration patterns across the United States” is now out in @natcomms.nature.com! We released a new, highly granular migration dataset. 1/9

2 months ago 71 27 2 5

CS ArXiv recently banned “review and position” papers, but what are those? Do they include more generated content? Who is most affected by this change? @yanai.bsky.social and I dug into the data to find out!

Nearly 50% of Computers & Society papers might be censored, vs 3% of Computer Vision ‼️

2 months ago 42 19 2 0
Fairness in PCA-Based Recommenders

🎙️ I had a great time joining the Data Skeptic podcast to talk about my work on recommender systems

If you're interested in embeddings, aligning group preferences, or music recommendations, check out the episode below 👇

open.spotify.com/episode/6IsP...

2 months ago 14 5 1 0

I’m excited to share our new paper A Bayesian Model for Multi-stage Censoring, which I will present at #ML4H2025 in San Diego! 🧵 below:

5 months ago 7 1 1 0

🧠⚙️ Interested in decision theory+cogsci meets AI? Want to create methods for rigorously designing & evaluating human-AI workflows?

I'm recruiting PhDs to work on:
🎯 Stat foundations of multi-agent collaboration
🌫️ Model uncertainty & meta-cognition
🔎 Interpretability
💬 LLMs in behavioral science

5 months ago 39 15 1 0

I’m recruiting students this upcoming cycle at UIUC! I’m excited about Qs on societal impact of AI, especially human-AI collaboration, multi-agent interactions, incentives in data sharing, and AI policy/regulation (all from both a theoretical and applied lens). Apply through CS & select my name!

5 months ago 41 18 1 0
The ‘Worst Test in Medicine’ is Driving America’s High C-Section Rate

if you think about AI, healthcare, women's health, or all of the above, i highly recommend this article on the role of fetal heart rate monitors in the rise of C-sections:

www.nytimes.com/2025/11/06/h...

5 months ago 5 1 0 0

Super cool, and something I wish existed within machine learning for healthcare too! I'm often wondering what people are actually doing in practice and assembling evidence for my guesses.

5 months ago 4 0 0 0
Postdoctoral Fellow, Empire AI Fellows Program, Cornell University (Job #AJO30971), New York, NY, US

Cornell (NYC and Ithaca) is recruiting AI postdocs, apply by Nov 20, 2025! If you're interested in working with me on technical approaches to responsible AI (e.g., personalization, fairness), please email me.

academicjobsonline.org/ajo/jobs/30971

5 months ago 32 20 1 2

@michelleding.bsky.social has been doing amazing work laying out the complex landscape of "deepfake porn" and distilling the unique challenges in governing it. We hope this work informs future AI governance efforts to address the severe harms of this content - reach out to us to chat more!

11 months ago 4 2 0 0

p.s. we pronounce SSME as "Sesame" but you're welcome to your favorite pronunciation :)

6 months ago 0 0 0 0

Thanks also to our wonderful set of co-authors, Manish Raghavan (@manishraghav.bsky.social), John Guttag, Bonnie Berger, and Emma Pierson (@emmapierson.bsky.social), without whom this work would not be possible!

6 months ago 1 0 1 0

Last but not least, thanks to @shuvoms.bsky.social, who co-led this work with me and is an excellent thinking partner. Collaborate with him if you can!!

6 months ago 0 0 1 0

The paper includes much more, including theoretical connections to the literature on semi-supervised mixture models. Lots of exciting directions ahead – come chat with me and Shuvom at NeurIPS this December in San Diego!
📄 Paper: arxiv.org/abs/2501.11866
💻 Code: github.com/divyashan/SSME

6 months ago 0 0 1 0

Across 8 tasks, 4 metrics, and dozens of classifiers, SSME consistently outperforms prior work, reducing estimation error by 5.1× vs. using labeled data alone and 2.4× vs. the next-best method!

6 months ago 0 0 1 0

SSME starts with a set of classifiers, unlabeled data, and a bit of labeled data, and estimates the joint distribution of classifier scores and ground-truth labels using a mixture model. SSME benefits from three sources of information: multiple classifiers, unlabeled data, and continuous classifier scores.

6 months ago 0 0 1 0
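In spirit, this resembles semi-supervised EM over the score distribution: labeled points anchor the class-conditional score densities, and unlabeled points refine them. Here is a heavily simplified toy version (Gaussian class-conditionals per classifier, and the function name `ssme_sketch` is invented; the actual SSME model in the paper is richer than this):

```python
import numpy as np

def ssme_sketch(S_lab, y_lab, S_unlab, n_iter=50):
    """Toy semi-supervised mixture over classifier scores.

    S_lab, S_unlab: (n, k) score matrices from k classifiers.
    Fits one Gaussian per class per classifier and returns the
    estimated P(y=1 | scores) for each unlabeled example.
    """
    S = np.vstack([S_lab, S_unlab])
    n_lab = len(y_lab)
    # responsibilities: fixed at the true label for labeled points,
    # initialized to 0.5 for unlabeled points
    r = np.concatenate([np.asarray(y_lab, dtype=float),
                        np.full(len(S_unlab), 0.5)])
    for _ in range(n_iter):
        # M-step: class prior plus per-classifier means and variances
        pi = r.mean()
        mu1 = (r[:, None] * S).sum(0) / r.sum()
        mu0 = ((1 - r)[:, None] * S).sum(0) / (1 - r).sum()
        v1 = (r[:, None] * (S - mu1) ** 2).sum(0) / r.sum() + 1e-6
        v0 = ((1 - r)[:, None] * (S - mu0) ** 2).sum(0) / (1 - r).sum() + 1e-6
        # E-step: per-example log-likelihood under each class
        l1 = -0.5 * (np.log(2 * np.pi * v1) + (S - mu1) ** 2 / v1).sum(1) + np.log(pi)
        l0 = -0.5 * (np.log(2 * np.pi * v0) + (S - mu0) ** 2 / v0).sum(1) + np.log(1 - pi)
        post = 1.0 / (1.0 + np.exp(np.clip(l0 - l1, -500, 500)))
        r[n_lab:] = post[n_lab:]  # labeled responsibilities stay fixed
    return r[n_lab:]
```

Each of the three information sources shows up directly: the likelihood multiplies across the k classifiers, unlabeled rows contribute to the M-step, and the continuous scores (not just thresholded predictions) drive the density estimates.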

New #NeurIPS2025 paper: how should we evaluate machine learning models without a large, labeled dataset? We introduce Semi-Supervised Model Evaluation (SSME), which uses labeled and unlabeled data to estimate performance! We find SSME is far more accurate than standard methods.

6 months ago 21 7 1 4

thank you, gabriel!! glad i've gotten to learn so much about maps from you :')

6 months ago 2 0 0 0

thank you, Erica 🥹 so glad we got to work together this year!

6 months ago 1 0 0 0

thank you, Emma!!! likewise, i'm so grateful for our collaborations over the years!

6 months ago 1 0 0 0

thank you, Kenny!!! that's so nice of you to say.

6 months ago 0 0 0 0