Posts by Willie Agnew

chikfila searching for me on linkedin

glad to see my enemies haven't forgotten about me either

2 days ago 0 0 0 0
image of workshop participants

I had such a great time co-organizing the CHI workshop on Standards for LLM Use in Human Subjects Research! We had a fantastic turnout and discussion, and we're looking forward to continuing this conversation to produce concrete standards.

3 days ago 5 1 1 0
CHI'26 Workshop on Developing Standards and Documentation For LLM Use as Simulated Research Participants

Accepted papers for the CHI'26 workshop on Standards for LLM Use in Human Subjects Research are now live! sites.google.com/andrew.cmu.e... Check out these great perspectives on the growing practice of replacing humans with LLMs in user research.

4 days ago 2 0 0 0

I would love more viewpoint diversity in computer science departments. I don't think most departments have any communists or anarchists, and few if any (and usually pretty cowed) socialists or active union organizers, and it shows.

5 days ago 2 0 0 0

man I'm tired of people who can't treat their own students and other people they have power over right, writing papers and going on panels telling us how to achieve good/justice/etc in our work. We need more accountability, especially in a sector that so often rewards being ruthless and self-serving.

1 week ago 5 0 0 0

The rehabilitation of the luddites is a beautiful thing

1 week ago 2341 524 20 1

🚫 llm persona🚫 👉️ llm fursona👈️

1 week ago 1 0 1 0
Horizontal bar chart showing multiple-choice responses from respondents on the use of, exposure to, and attitude towards generative AI. 85% of respondents never use generative AI in their work, whereas 88% never use image generative AI. 45% of respondents encounter AI-generated images in their practice daily, while 25% do weekly, and 6% never encounter it. The vast majority of respondents dislike generative AI (99%), with 92% expressing a strong dislike.

How are professional visual artists dealing with generative AI in the workplace? In our #CHI2026 poster, @hhj14.bsky.social, @willie-agnew.bsky.social and I share results from a survey of 378 verified professional visual artists 🧵

Preprint here: arxiv.org/abs/2603.04537

2 weeks ago 10 4 1 0

Excited to be presenting "How Professional Visual Artists are Negotiating Generative AI in the Workplace" as a poster at CHI! We surveyed 378 visual artists. They *hate* generative AI (92% strong dislike, 99% dislike), but are facing pressure from bosses to use it. arxiv.org/pdf/2603.04537

2 weeks ago 6 1 0 0

I only have one phd will

3 weeks ago 21 0 1 0
What are good shows/venues in Barcelona? Won't say no to classical stuff, but especially interested in modern dance, jazz, and drag.

3 weeks ago 0 0 0 0

Our recent work analyzing the chat logs of people who experienced delusional spirals with chatbots got a great writeup in forbes! www.forbes.com/sites/lancee... check the paper here arxiv.org/abs/2603.16567

3 weeks ago 1 0 0 0
Preview
Characterizing Delusional Spirals through Human-LLM Chat Logs As large language models (LLMs) have proliferated, disturbing anecdotal reports of negative psychological effects, such as delusions, self-harm, and "AI psychosis," have emerged in global media and…

One of the most common features of AI delusional spirals in our recent study is a belief that the AI is sentient or has a personality. This played a central role in the delusional narratives, and correlated with increased use. Regulators and AI developers should curb this! arxiv.org/abs/2603.16567

3 weeks ago 8 2 1 0

🚨 new paper! We investigate how to use pluralistic AI to align killing people and turning them into nutritional slurry with community values and norms. This is the first open source replication of what goes on in companies like @anthropic.com or OpenAI who actually use AI to choose who dies! 🚀

3 weeks ago 7 2 1 0
Preview
Slurry-as-a-Service: A Modest Proposal on Scalable Pluralistic Alignment for Nutrient Optimization Pluralistic alignment has emerged as a promising approach for ensuring that large language models (LLMs) faithfully represent the diversity, nuance, and conflict inherent in human values. In this work...

We love and care about humans deeply such that when designing a human-to-slurry LLM, we ensured that these automated high-stakes decisions represented community values and norms, *whatever* they may be. Introducing ValueMulch: arxiv.org/abs/2603.02420

4 weeks ago 4 2 0 3

the default to focus on "positive" & "improvement-oriented (even of harmful & shitty systems)" research in academia is not only a pathology but a real obstacle to actual accountability research that tries to shine light on broken systems, names responsible actors & confronts harmful practices

3 weeks ago 68 21 0 3

There's a lot of external pressure on AI ethics to produce solutions instead of critique. As someone who's worked a lot on CSAM, NCII, mental health, and creative harms of AI, if AI developers had only listened to critiques, we could have avoided all these harms in the first place.

3 weeks ago 38 9 2 2

I think a letter of recommendation from a lawmaker or lobbyist is one of the few things that could communicate impact, but these would be a little non-standard for the academic job market

4 weeks ago 0 0 1 0

One roadblock to deeper involvement of AI academics in policy is that other academics often don't know how to evaluate this work. Research has citations, paper awards, and the papers themselves. In policy impact often is changing someone's mind or having a deep network, but these are hard to measure

4 weeks ago 8 0 2 0
I'm going to be at CHI'26 in Barcelona soon! If you want to talk about resisting AI, queer AI, regulating AI, impacts of AI on artists, mitigating AI CSAM, NCII, and mental health harms, or auditing AI, reach out! I'm also planning on biking to Gibraltar after so if you want to bike, run, or swim hmu!

4 weeks ago 1 0 0 0
AI chatbots often validate delusions and suicidal thoughts, study finds Stanford researchers analysing 391,000 messages warn conversational technology may reinforce psychological vulnerabilities

Our recent work on understanding the transcripts of 19 people who experienced delusional spirals with AI chatbots was covered in the financial times! www.ft.com/content/7f63... Check the preprint here: arxiv.org/abs/2603.16567

4 weeks ago 7 4 0 0
I'm excited to be running a workshop on LLM use in human subjects research at CHI this year with amazing collaborators! While many in the HCI community are rushing to use these methods, we want to understand if these methods can enhance rigor and respect for research subjects. tinyurl.com/y54c46ea

1 month ago 5 0 0 0
CNTR and The Watson Tech & Policy Summer School

I'm trying to build the supply side of this!

cntr.brown.edu/summer-school send to any students interested in getting involved in policy :D

1 month ago 4 1 1 2

Oh awesome!!

1 month ago 0 0 0 0

One thing I've been learning doing state-level policy work is that showing up to a hearing in person gets a much more engaged response. It would be nice to form a network of AI ethicists near every state capitol who can do this.

1 month ago 6 0 1 0


All of our users believed the chatbot was sentient or conscious, even though most knew it was not human. The current crop of chatbot regulation bills generally restrict bots from claiming they are human, but not sentient. We have been working with state lawmakers to update these bills. 3/3

1 month ago 2 0 0 0

We created a system to automatically flag concerning messages, including those facilitating self-harm or violence, and applied this to the hundreds of thousands of messages in our corpus. We found that chatbots continue to produce messages across a range of potentially concerning categories. 2/3

1 month ago 2 0 1 0
Preview
Jared Moore (@jaredlcm.bsky.social) Disturbing anecdotal reports of "AI psychosis" and negative psychological effects have been emerging in the news. But what actually happens during these lengthy delusional "spirals"? In our preprint,…

🚨 new paper alert! We annotated the transcripts of 19 people who experienced psychological harms from LLMs to try to understand what happens during delusional spirals bsky.app/profile/jare... 1/3

1 month ago 2 1 1 0

Disturbing anecdotal reports of "AI psychosis" and negative psychological effects have been emerging in the news. But what actually happens during these lengthy delusional "spirals"? In our preprint, we analyze chat logs from 19 users who experienced severe psychological harm🧵👇

1 month ago 223 131 3 13
Policy brief graphic from Scholars Strategy Network featuring a quote by William Agnew of Carnegie Mellon University: "There is an urgent need for policy on AI and mental health to mitigate harms without stifling innovation." Background shows a person typing on a laptop.

In this brief, @willie-agnew.bsky.social (@hcii.cmu.edu) writes that AI chatbots pose significant risks when relied on for therapy & emotional support. He suggests policies that prevent chatbots from encouraging delusional thinking or forming relationships with users.

๐Ÿ”— scholars.org/contribution...

1 month ago 4 1 0 0