Come find me to chat about AI governance, platform accountability, community moderation, and the impacts of AI-generated content!
Posts by travis lloyd (træve) @ CHI26
I'm in Barcelona for #CHI26, presenting my paper "Beyond Community Notes: A Framework for Understanding and Building Crowdsourced Context Systems for Social Media" Friday in the 9am "Community Governance and Moderation" session.
@chi.acm.org
dl.acm.org/doi/10.1145/...
+1! IMO you are talking about our user informedness principle: users aren't informed by notes that they never view (or view too late). Our paper tries to identify design choices that can prevent this. For example, changes to the curation aspect of the system could lead to faster note publication.
The ACM (Association for Computing Machinery) CHI conference on Human Factors in Computing Systems! chi2026.acm.org
I take your point! We did think a lot about the boundary for our term. Our argument is that the term is useful because these systems uniquely elicit *context* to facilitate post interpretation, which is not the goal of other crowdsourced systems. Check out section 4 of the paper!
Paper: arxiv.org/abs/2509.15434
w/ @tungdnguyen.bsky.social @karenlevy.bsky.social @informor.bsky.social
#communitynotes #socialmedia #research #paper #HCI #socialcomputing #misinformation #trustandsafety #contentmoderation #crowdsourcing
This framework can help researchers and platforms design more effective and equitable systems. Interested in CCS research? Come find me at @chi.acm.org in Barcelona! #CHI2026
Why does this matter? CCS design choices have real consequences for information quality and power dynamics on social media. We identify 3 normative principles for CCS evaluation: user informedness, distribution of power, and fairness.
(4) Presentation: How and where are notes displayed to users?
(5) Platform Treatment: How does the platform treat posts with notes?
(6) Transparency: Can the public observe and audit how the system works?
(1) Participation: Who can contribute and what can they do?
(2) Inputs: What form do contributions take?
(3) Curation: How are notes selected for display?
Through a review of CCS/Community Notes literature and analysis of real-world implementations (X, Meta, YouTube, TikTok), we identify 6 key design aspects that shape how these systems function and what impact they have.
"Community Notes" are reshaping how millions encounter information on social media, but what makes them work (or not)? We term these "Crowdsourced Context Systems" (CCS) and introduce a framework for designing and evaluating them in a new #CHI26 paper 🧵
Screenshot of a paper entry: Fictional Failures and Real-World Lessons: Ethical Speculation Through Design Fiction on Emotional Support Conversational AI Authors: Faye Kollig, Jessica Pater, Fayika Farhat Nova, Casey Fiesler (There are tabs with "abstract" and "summary" and "summary" is selected.)
The ACM Digital Library, where a LOT of computing-related research is published (I'd say at least 75% of my own publications), is now providing AI-generated summaries of papers, without the consent of authors and without opt-in by readers, and they appear as the *default* over abstracts.
Spotify is garbage on every count: Its treatment of artists, its ICE advertising, the CEO's investment in military AI, its leading role in the commodification and AI slopification of music, its terrible audio quality—you name it.
So I quit, and put together a complete guide to getting off Spotify:
Excited to share a new working paper!
What happened when Change.org integrated an AI writing tool into their platform? We provide causal evidence that petition text changed significantly while outcomes did not improve. 1/
arxiv.org/abs/2511.13949
Check out the article! And for a deeper dive, the paper(s) I've written with @informor.bsky.social on the topic:
- Interview study w/ Reddit mods: dl.acm.org/doi/abs/10.1...
- Platform-wide analysis of subreddit rules about AI: dl.acm.org/doi/10.1145/...
I spoke with @kattenbarge.bsky.social for this @wired.com piece about my research into Reddit moderators' experiences moderating AI-generated content. Moderators are working hard to keep Reddit "one of the most human spaces left on the internet," but it's a trying and often thankless task.
🇳🇴 I'm in Bergen for #CSCW25 🇳🇴
This Wednesday, in the "Content Moderation" session, I'll present my paper about how Reddit moderators are grappling with AI-Generated Content. I'm honored that it received a Best Paper Honorable Mention 🤓 If you're here too, let's connect!
dl.acm.org/doi/10.1145/...
We hope this framework will guide an HCI research agenda on this impactful new class of social media system. Check out the full paper here: www.arxiv.org/abs/2509.15434
@informor.bsky.social @karenlevy.bsky.social @tungdnguyen.bsky.social
We develop a framework composed of three parts:
1. A theoretical model to conceptualize and define CCS.
2. A design space encompassing six key aspects of CCS.
3. Key normative implications of different CCS design and implementation choices.
New preprint! Crowdsourced Context Systems (CCS) like X's and Meta's Community Notes are popping up on various social media platforms. How can we better understand, critique, and design such systems?
2. "There Has To Be a Lot That We're Missing": Moderating AI-Generated Content on Reddit (CSCW25): an interview study with subreddit moderators about their experiences moderating AI-generated content
arxiv.org/abs/2311.12702
I'll be discussing two recent Reddit studies:
1. AI Rules? Characterizing Reddit Community Policies Towards AI-Generated Content (CHI24): a large-scale data collection and analysis of subreddit community rules governing the use of AI (check out the dataset!):
dl.acm.org/doi/full/10....
I'm at Seattle 4S! I'll be part of the "Risks of 'Social Model Collapse' in the Face of Scientific and Technological Advances" panel Friday morning, discussing online community governance of AI-generated content. Would love to meet others studying AI's impact on the info ecosystem!
#STS #4S
It was a pleasure to present our (@jennahgosciak.bsky.social @tungdnguyen.bsky.social @informor.bsky.social) large-scale study of Reddit communities' AI rules in the AI Ethics and Concerns session at #CHI25! The paper is now available open access in the ACM library: dl.acm.org/doi/10.1145/....
An important topic of discussion in today's workshop on Sociotechnical AI Governance!
chi-staig.github.io
One line of the recent CMV moderator statement stuck out to me: "Our sub is a decidedly human space that rejects undisclosed AI as a core value." It's important to support communities' autonomy in setting their own norms around AI. I'm at #CHI25 presenting work on this topic and would love to talk to others doing similar work.
I also want to point to my recent paper (forthcoming at CSCW25) about moderating AI-generated content on Reddit. I spoke to CMV mods about their experiences with, and stances towards, AI in their communities. It's no surprise that this use of AI is unwelcome!
arxiv.org/abs/2311.12702
Thankful for @sarahagilbert.bsky.social's thoughtful words on the ethics issues here: a must read for all researching online communities.
@informor.bsky.social @jennahgosciak.bsky.social @tungdnguyen.bsky.social