What are YOU doing May 15-17?
Join us at Comm Horizons for a deep dive into Communication in the Age of AI & Algorithms!
Keynotes by Jeff Hancock & @angelhwang.bsky.social
74 presentations, 44 posters, & wine tasting in Napa!
Registration closes April 5 communication.ucdavis.edu/horizonconf2...
Posts by Angel Hsing-Chi Hwang
🚀Introducing 𝐆𝐔𝐈𝐃𝐄-𝐋𝐋𝐌: A reporting checklist for using LLMs in behavioral & social science
✅GUIDE-LLM is a reporting checklist designed by 80+ experts to improve transparency, reproducibility & ethical accountability of LLM-based research
📄 llm-checklist.com
Researching communication, AI, or algorithms? Join us at Comm Horizons 2026!
Keynotes from Jeff Hancock & @angelhwang.bsky.social, a competitive program of high-quality presentations, great feedback, & Napa wine tasting.
Abstract deadline approaching fast: March 1 (AOE)
communication.ucdavis.edu/horizonconf2...
Spread the word! 📢 The FATE (Fairness, Accountability, Transparency, and Ethics) group at @msftresearch.bsky.social in NYC is hiring interns and postdocs to start in summer 2026! 🎉
Apply by *December 15* for full consideration.
Beyond honored to be a keynote speaker for the Comm Horizons Conference at UC Davis!! Triple shout-outs to @richardhuskey.bsky.social, Jorge Peña, and @soojongkim.bsky.social for their organizing efforts. Please consider sending your work and attending the conference (+ a short visit to Napa! 🍇🍷)
Excited to announce the third annual Comm Horizons @ucdavis.bsky.social Conference:
Communication in the Age of AI and Algorithms
Featuring cutting-edge research and keynotes from Jeff Hancock and @angelhwang.bsky.social
Hope you'll submit and share! communication.ucdavis.edu/horizonconf2...
Welcoming all new @acm-cscw.bsky.social followers 🥰!
I will be presenting my paper "My Precious Crash Data: Barriers and Opportunities in Encouraging Autonomous Driving Companies to Share Safety-Critical Data" with @angelhwang.bsky.social, @fabulousqian.bsky.social, and @wendyju.bsky.social. The paper is already online!
Come join us on-site on Tue, Oct 21st, 2:30~4:00 PM CEST at Peer Gynt-salen if you’re attending CSCW in Bergen! programs.sigchi.org/cscw/2025/pr...
We will discuss the challenges, potential, and concerns of applying LLMs in research processes where *conversations* stand at the core of study design (e.g., interviews, workshops, small group research)
One week away from our CSCW panel on applying LLMs in conversation-based research! Excited to engage in another methodological discussion with my amazing co-organizers and panelists @mariannealq.bsky.social @hopeschroeder.bsky.social Alejandro @stevenpdow.bsky.social Shivani and Eugenia!
I would also like to remind folks that OpenAI wrote a paper in which they prompted GPT-4 on which jobs it thought would be most exposed to automation.
They validated it by comparing it to responses that people who worked at OpenAI gave to the same question.
arxiv.org/abs/2303.10130
AI is already changing how journalists operate. Reporters, editors, executives, and others across the news industry share their advice on how to engage—and where to draw the line. By @mikeananny.bsky.social
and @mattdpearce.com with USC's AI for Media & Storytelling. www.cjr.org/feature-2/ho...
If you run conjoint experiments, you need to read this.
Most conjoints estimate average effects for each attribute.
But what if the effect of one attribute depends on the others?
This paper has got you covered!
Headline: "International Journal of Communication Publishes a Forum on 'Oops? Interdisciplinary Stories of Sociotechnical Error'" Abstract: "What can we learn about people and technology through interdisciplinary stories of sociotechnical errors, failures, breakdowns, and mistakes? Guest edited by Mike Ananny and Simogne Hudson, the Forum on Oops? Interdisciplinary Stories of Sociotechnical Error takes up this question through a playful and provocative mix of projects that show how sociotechnical errors happen, why they matter, and what they reveal about people, technology, and power. Amidst so many complex collisions among people, data, engineering, and media—and in an age when technological "innovation" is widely celebrated and inescapable—these articles offer chances to pause and ask what system failures show about how people and machines intersect and vie for power. Including scholars from communication, media studies, urban planning, critical data studies, and science and technology studies, the collection of essays invites readers to see failures anew—to consider errors, breakdowns, and mistakes from a different perspective, method, or normative stake. Use these essays to start conversations about what "error" means in your work or community, and why it matters. We invite you to read these articles, published in the International Journal of Communication on April 23, 2025. Please log into ijoc.org to read the papers of interest. We look forward to your feedback!"
List of authors and essay titles:
- Oops? Sociotechnical Errors as Interdisciplinary Stories of Complex Relations, Shared Consequences, and Resilient Hopes (Introduction), by Mike Ananny & Simogne Hudson
- Uncertainty as Spectacle: Real-Time Algorithmic Techniques on the Live Music Stage, by Stephen Yang
- When Faulty AI Falls Into the Wrong Hands: The Risks of Erroneous AI-Driven Healthcare Decisions, by Eugene Jang
- Fake It Till You Make It: Synthetic Data and Algorithmic Bias, by Sook-Lin Toh & Jiwon Park
- Discourses of Sociotechnical Error and Accuracy in U.S. and PRC News Media: The Case of the 1999 Bombing of the Chinese Embassy in Belgrade, by Max Berwald
- Affective Experiences of Error, by Megan Finn, Youngrim Kim, Ryan Ellis, Amelia Acker, Bidisha Chaudhuri & Stacey Wedlake
- Peeling Back the Layers of “Paint on Rotten Wood”: Unraveling the Senate’s “Big Tech and Child Sexual Exploitation Crisis” Hearing, by Kyooeun Jang
- Kicking Error Out of the Game: Video Assistant Referee as Technosolutionism, by Pratik Nyaupane & Alejandro Alvarado Rojas
- When User Consent Fails: How Platforms Undermine Data Governance, by Rohan Grover
- Ephemeral Platforms, Enduring Memories: Errors and Digital Afterlife, by Sui Wang
- “Chatting” Errors in Live Streamer Discord Servers, by Kirsten Crowe
- Hole in the (Pay)Wall: Monetized Access, Content Leaks, and Community Responsibility, by Celeste Oon
- Edges, Seams, and Ecotones: Error in Interstate Landscapes, by Cindy Lin & Steve J. Jackson
- Quantifying Housing Need in California: The Erroneous Practice of Evidence-Based Policy, by Elana R. Simon
So much is broken right now, but I want to share an amazing new set of short, teachable interdisciplinary essays on
** Sociotechnical Error **
Live at IJOC journal @ijoc-usc.bsky.social: ijoc.org/index.php/ij... (scroll to Forum)
Intro by me & Simogne Hudson: ijoc.org/index.php/ij...
Pls share!
Featuring all-⭐️ panelists✨ @mbernst.bsky.social, Shyam, Renwen, @manoelhortaribeiro.bsky.social, Yingdan, @serinachang5.bsky.social, @sherrytswu.bsky.social, Aimei, @joon-s-pk.bsky.social, Dmitri, @ognyanova.bsky.social, @ziangxiao.bsky.social, Ayman, and @aaronshaw.bsky.social
3️⃣ How can researchers address the homogeneity, biases, and ethical concerns of LLM simulation output?
2️⃣ Can researchers scale insights from LLM simulations of individuals' responses to study group-level and even network-level patterns? If so, how?
1️⃣ When and how can researchers integrate LLM simulation and synthetic data into existing human subjects research pipelines? How do we perform evaluation accordingly?
This panel will discuss the opportunities and perils of using LLMs, simulation, and synthetic data for human subjects research. We will break the discussion down into three themes/challenges:
📣 Calling all #CHI2025 attendees who work with human participants: Join our panel discussion on #LLM, #simulation, #syntheticdata, and the future of human subjects research on Apr 30 (Wed), 2:10 - 3:40 PM (JP Time)
Post your questions for panelists here: forms.gle/m2mXY3xFafAX...
A yellow promotional graphic for the event “What is Work Worth” happening on May 6 at 5pm ET in NYC and on Zoom, with Dr. Julián Posada and Aiha Nguyen.
May 6, in NYC or online: Join @posada.website and Labor Futures Program Director @aihathing.bsky.social as they discuss the uneven effects of AI technologies across industries and on a broad diversity of workers. Learn more and RSVP! datasociety.net/events/what-...
Would love to stop by if time permits!
Thanks for sharing our work, Freddy!!
Additionally, @dohyojin.bsky.social, Jessica He,
@feldmanmolly.bsky.social, Seyun Kim, and I are organizing a workshop at #CHIWORK on "Navigating Generative AI Disclosure, Ownership, and Accountability." Check out more info here (chiwork-aidisclosure.github.io), and we would love to see you there!!
To further develop this workstream, I will present our latest findings and seek feedback at
#AOM, @ic2s2.bsky.social, and @datasociety.bsky.social 's upcoming workshop on "What is work worth?" See extended abstract here: angelhwang.github.io/doc/ic2s2_AI...
Yao-Yuan Yang and I verified this concern by tracking the performance of 9,149 freelancers across two platforms (Upwork and Behance): creators who disclose the use of AI receive significantly lower pay, but freelancers in non-creative jobs earn more by labeling themselves as "AI Pros."
This ongoing work is inspired by my favorite project with @qveraliao.bsky.social, Su Lin Blodgett, @aolteanu.bsky.social, and Adam Trischler. Writers felt they could preserve their authentic voice but worried audiences would not value AI-assisted work as much as solo work. arxiv.org/abs/2411.13032
Starting my journey on Bluesky with a topic that I care deeply about: AI tools can support creators in various ways, but disclosing AI use may risk devaluing creative work.
Check out our abstract here: angelhwang.github.io/doc/ic2s2_AI...
Inspired by our past work: arxiv.org/abs/2411.13032