Along the way, the work received recognition including Best Application of AI (Michigan #AI Symposium) and Best Poster (Michigan #HCAI). I also presented it at the #Stanford Trust and Safety Conference, where it sparked a lot of great conversations!
Paper, summary, and demo: rayhan.io/diymod
This was a full end-to-end HCI project (needfinding → design → build → multi-round evaluation), and one of the most fun (and intense) things I’ve built from the ground up.
Our CHI'26 paper + system transforms social media content in real time, based on your definition of harm. Rather than removing content, it can obfuscate, re-render, or modify text and images, softening what a user wants to avoid while preserving what they still find valuable in the same post.
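To make the "transform, don't remove" idea concrete, here is a minimal hypothetical sketch (not the paper's actual system): each user supplies their own definition of harm, modeled here as simple regex patterns, and only the matching spans are masked in place, so the rest of the post stays intact.

```python
import re

def soften(post: str, harm_patterns: list[str]) -> str:
    """Mask only the spans a user considers harmful, keeping the rest readable."""
    out = post
    for pat in harm_patterns:
        # Replace each harmful span with a same-length mask so the
        # post's structure and surrounding context are preserved.
        out = re.sub(pat, lambda m: "#" * len(m.group()), out, flags=re.IGNORECASE)
    return out

# One user filters graphic injury descriptions; another, with no matching
# harm definition, sees the same post untouched.
post = "Huge crash at the race today, blood everywhere, but everyone is OK."
print(soften(post, [r"blood everywhere"]))
print(soften(post, []))
```

A real system would swap the regex matcher for a learned classifier and richer transformations (re-rendering images, rewriting text), but the per-user, span-level design choice is the same.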
What if moderation didn’t mean suppression for the user? #CHI2026
For the past year, my advisor @farnazj.bsky.social and I have been exploring a frustration: content moderation is centralized and binary, even though harm is often subjective.
So we asked: what if moderation could act on personalized notions of harm, while preserving as much social and informational value as possible? What if that meant transforming content instead of suppressing it?
3/n
When platforms decide globally what’s "safe", they end up doing two things at once: failing to protect users from what they find harmful, and bluntly suppressing content that others would want to engage with.
2/n