
Posts by Angel Hsing-Chi Hwang

Comm Horizons @ UCD 2026: Communication in the Age of AI and Algorithms

What are YOU doing May 15-17?

Join us at Comm Horizons for a deep dive into Communication in the Age of AI & Algorithms!

Keynotes by Jeff Hancock & @angelhwang.bsky.social

74 presentations, 44 posters, & wine tasting in Napa!

Registration closes April 5 communication.ucdavis.edu/horizonconf2...

1 month ago
GUIDE-LLM Reporting Checklist for Studies with Large Language Models in the Behavioral and Social Sciences

🚀Introducing 𝐆𝐔𝐈𝐃𝐄-𝐋𝐋𝐌: A reporting checklist for using LLMs in behavioral & social science

✅GUIDE-LLM is a reporting checklist designed by 80+ experts to improve transparency, reproducibility & ethical accountability of LLM-based research

📄 llm-checklist.com

4 weeks ago
Comm Horizons @ UCD 2026: Communication in the Age of AI and Algorithms

Researching communication, AI, or algorithms? Join us at Comm Horizons 2026!

Keynotes from Jeff Hancock & @angelhwang.bsky.social, a high-quality competitive program, great feedback, & Napa wine tasting.

Abstract deadline approaching fast: March 1 (AOE)

communication.ucdavis.edu/horizonconf2...

1 month ago

Spread the word! 📢 The FATE (Fairness, Accountability, Transparency, and Ethics) group at @msftresearch.bsky.social in NYC is hiring interns and postdocs to start in summer 2026! 🎉

Apply by *December 15* for full consideration.

5 months ago

Beyond honored to speak as keynote for the Comm Horizons Conference at UC Davis!! Triple shout-outs to @richardhuskey.bsky.social, Jorge Peña, and @soojongkim.bsky.social for their organizing efforts. Please consider sending your work and attending the conference (+a short visit to Napa! 🍇🍷)

5 months ago
Comm Horizons @ UCD 2026: Communication in the Age of AI and Algorithms

Excited to announce the third annual Comm Horizons @ucdavis.bsky.social Conference:

Communication in the Age of AI and Algorithms

Featuring cutting-edge research and keynotes from Jeff Hancock and @angelhwang.bsky.social

Hope you'll submit and share! communication.ucdavis.edu/horizonconf2...

5 months ago
My Precious Crash Data: Barriers and Opportunities in Encouraging Autonomous Driving Companies to Share Safety-Critical Data | Proceedings of the ACM on Human-Computer Interaction Safety-critical data, such as crash and near-crash records, are crucial to improving autonomous vehicle (AV) design and development. Sharing such data across AV companies, academic researchers, regulators, and the public can help make all AVs safer. ...

Welcoming all new @acm-cscw.bsky.social followers 🥰!

I presented our paper "My Precious Crash Data: Barriers and Opportunities in Encouraging Autonomous Driving Companies to Share Safety-Critical Data" with @angelhwang.bsky.social, @fabulousqian.bsky.social, and @wendyju.bsky.social. It's already online!

6 months ago
Conference Programs

Come join us on-site on Tue, Oct 21st, 2:30~4:00 PM CEST at Peer Gynt-salen if you’re attending CSCW in Bergen! programs.sigchi.org/cscw/2025/pr...

6 months ago

We will discuss the challenges, potential, and concerns of applying LLMs in research processes where *conversations* stand at the core of study design (e.g., interviews, workshops, small group research)

6 months ago

One week away from our CSCW panel on applying LLMs in conversation-based research! Excited to engage in another methodological discussion with my amazing co-organizers and panelists @mariannealq.bsky.social @hopeschroeder.bsky.social Alejandro @stevenpdow.bsky.social Shivani and Eugenia!

6 months ago

I would also like to remind folks that OpenAI wrote a paper in which they prompted GPT-4 to judge which jobs would be most exposed to automation.

They validated it by comparing it to responses that people who worked at OpenAI gave to the same question.

arxiv.org/abs/2303.10130

8 months ago
EC 2025 Accepted Papers - EC 2025 1. Optimality of Non-Adaptive Algorithms in Online Submodular Welfare Maximization with Stochastic Outcomes Authors: Rajan Udwani (University of California, Berkeley) 2. Investment and misallocation i...

Check out the terrific set of EC 2025 accepted papers! ec25.sigecom.org/program/acce...

11 months ago
How We’re Using AI The rapid development of AI is already changing how journalists operate. Reporters, editors, executives, and others across the news industry share their advice on how to engage—and where to draw the l...

AI is already changing how journalists operate. Reporters, editors, executives, and others across the news industry share their advice on how to engage—and where to draw the line. By @mikeananny.bsky.social
and @mattdpearce.com with USC's AI for Media & Storytelling. www.cjr.org/feature-2/ho...

11 months ago

If you run conjoint experiments, you need to read this.

Most conjoints estimate average effects for each attribute.

But what if the effect of one attribute depends on the others?

This paper has got you covered!

11 months ago
Headline: "International Journal of Communication Publishes a Forum on 'Oops? Interdisciplinary Stories of Sociotechnical Error'"

Abstract: "What can we learn about people and technology through interdisciplinary stories of sociotechnical errors, failures, breakdowns, and mistakes?

Guest edited by Mike Ananny and Simogne Hudson, the Forum on Oops? Interdisciplinary Stories of Sociotechnical Error takes up the question through a playful and provocative mix of projects that show how sociotechnical errors happen, why they matter, and what they reveal about people, technology, and power. Amidst so many complex collisions among people, data, engineering, and media—and in an age when technological "innovation" is widely celebrated and inescapable—these articles offer chances to pause and ask what system failures show about how people and machines intersect and vie for power.

Including scholars from communication, media studies, urban planning, critical data studies, and science and technology studies, the collection of essays invites readers to see failures anew—to consider errors, breakdowns, and mistakes from a different perspective, method, or normative stake. Use these essays to start conversations about what "error" means in your work or community, and why it matters.

We invite you to read these articles, published in the International Journal of Communication on April 23, 2025. Please log into ijoc.org to read the papers of interest. We look forward to your feedback!"


List of authors and essay titles:

Oops? Sociotechnical Errors as Interdisciplinary Stories of Complex Relations, Shared Consequences, and Resilient Hopes—Introduction
Mike Ananny, Simogne Hudson

Uncertainty as Spectacle: Real-Time Algorithmic Techniques on the Live Music Stage
Stephen Yang

When Faulty AI Falls Into the Wrong Hands: The Risks of Erroneous AI-Driven Healthcare Decisions
Eugene Jang

Fake It Till You Make It: Synthetic Data and Algorithmic Bias
Sook-Lin Toh, Jiwon Park

Discourses of Sociotechnical Error and Accuracy in U.S. and PRC News Media: The Case of the 1999 Bombing of the Chinese Embassy in Belgrade
Max Berwald

Affective Experiences of Error
Megan Finn, Youngrim Kim, Ryan Ellis, Amelia Acker, Bidisha Chaudhuri, Stacey Wedlake

Peeling Back the Layers of “Paint on Rotten Wood”: Unraveling the Senate’s “Big Tech and Child Sexual Exploitation Crisis” Hearing
Kyooeun Jang

Kicking Error Out of the Game: Video Assistant Referee as Technosolutionism
Pratik Nyaupane, Alejandro Alvarado Rojas

When User Consent Fails: How Platforms Undermine Data Governance
Rohan Grover

Ephemeral Platforms, Enduring Memories: Errors and Digital Afterlife
Sui Wang

:Chatting: Errors in Live Streamer Discord Servers
Kirsten Crowe

Hole in the (Pay)Wall: Monetized Access, Content Leaks, and Community Responsibility
Celeste Oon

Edges, Seams, and Ecotones: Error in Interstate Landscapes
Cindy Lin, Steve J. Jackson

Quantifying Housing Need in California: The Erroneous Practice of Evidence-Based Policy
Elana R. Simon

So much is broken right now, but I want to share an amazing new set of short, teachable interdisciplinary essays on

** Sociotechnical Error **

Live at IJOC journal @ijoc-usc.bsky.social: ijoc.org/index.php/ij... (scroll to Forum)

Intro by me & Simogne Hudson: ijoc.org/index.php/ij...

Pls share!

11 months ago

Featuring all-⭐️ panelists✨ @mbernst.bsky.social, Shyam, Renwen, @manoelhortaribeiro.bsky.social, Yingdan, @serinachang5.bsky.social, @sherrytswu.bsky.social, Aimei, @joon-s-pk.bsky.social, Dmitri, @ognyanova.bsky.social, @ziangxiao.bsky.social, Ayman, @aaronshaw.bsky.social

1 year ago

3️⃣ How can researchers address the homogeneity, biases, and ethical concerns of LLM simulation output?

1 year ago

2️⃣ Can researchers scale insights from LLM simulations of individuals' responses to study group and even network patterns? If so, how?

1 year ago

1️⃣ When/how can researchers integrate the use of LLM simulation and synthetic data into existing human subjects research pipelines? How do we perform evaluation accordingly?

1 year ago

This panel will discuss the opportunities and perils of using LLMs, simulation, and synthetic data for human subjects research. We will break the discussion down into three themes/challenges:

1 year ago

📣 Calling all #CHI2025 attendees who work with human participants: Join our panel discussion on #LLM, #simulation, #syntheticdata, and the future of human subjects research on Apr 30 (Wed), 2:10 - 3:40 PM (JP Time)

Post your questions for panelists here: forms.gle/m2mXY3xFafAX...

1 year ago
A yellow promotional graphic for the event “What is Work Worth” happening on May 6 at 5pm ET in NYC and on Zoom, with Dr. Julián Posada and Aiha Nguyen.


May 6, in NYC or online: Join @posada.website and Labor Futures Program Director @aihathing.bsky.social as they discuss the uneven effects of AI technologies across industries and on a broad diversity of workers. Learn more and RSVP! datasociety.net/events/what-...

1 year ago

Would love to stop by if time permits!

1 year ago

Thanks for sharing our work, Freddy!!

1 year ago
Navigating Generative AI Disclosure, Ownership, and Accountability in Co-Creative Domains

Additionally, @dohyojin.bsky.social, Jessica He,
@feldmanmolly.bsky.social, Seyun Kim, and I are organizing a workshop at #CHIWORK on "Navigating Generative AI Disclosure, Ownership, and Accountability." Check out more info here (chiwork-aidisclosure.github.io), and we would love to see you there!!

1 year ago

To further develop this workstream, I will present our latest findings and seek feedback at
#AOM, @ic2s2.bsky.social, and @datasociety.bsky.social 's upcoming workshop on "What is work worth?" See extended abstract here: angelhwang.github.io/doc/ic2s2_AI...

1 year ago

Yao-Yuan Yang and I verified this concern by tracking the performance of 9,149 freelancers across two platforms (Upwork and Behance): Creators who declare the use of AI receive significantly lower pay, but freelancers in non-creative jobs earn more by labeling themselves as "AI Pros."

1 year ago
"It was 80% me, 20% AI": Seeking Authenticity in Co-Writing with Large Language Models Given the rising proliferation and diversity of AI writing assistance tools, especially those powered by large language models (LLMs), both writers and readers may have concerns about the impact of th...

This ongoing work is inspired by my favorite project with @qveraliao.bsky.social, Su Lin Blodgett, @aolteanu.bsky.social, and Adam Trischler. Writers felt they could preserve their authentic voice but worried audiences would not value AI-assisted work as much as solo work. arxiv.org/abs/2411.13032

1 year ago

Starting my journey on Bluesky with a topic that I care deeply about: AI tools can support creators in various ways, but disclosing AI use may risk devaluing creative work.

Check out our abstract here: angelhwang.github.io/doc/ic2s2_AI...
Inspired by our past work: arxiv.org/abs/2411.13032

1 year ago