chikfila searching for me on linkedin
glad to see my enemies haven't forgotten about me either
image of workshop participants
I had such a great time co-organizing the CHI workshop on Standards for LLM Use in Human Subjects Research! We had a fantastic turnout and discussion, and we're looking forward to continuing this conversation to produce concrete standards.
Accepted papers for the CHI'26 workshop on Standards for LLM Use in Human Subjects Research are now live! sites.google.com/andrew.cmu.e... Check out these great perspectives on the growing practice of replacing humans with LLMs in user research.
I would love more viewpoint diversity in computer science departments. I don't think most departments have any communists or anarchists, and few if any (and usually pretty cowed) socialists or active union organizers, and it shows.
man I'm tired of people who can't treat their own students and other people they have power over right writing papers and going on panels telling us how to achieve good/justice/etc in our work. We need more accountability, especially in a sector that so often rewards being ruthless and self-serving.
The rehabilitation of the luddites is a beautiful thing
💫 llm persona 💫 llm fursona
Horizontal bar chart showing multiple-choice responses from respondents on the use of, exposure to, and attitudes toward generative AI. 85% of respondents never use generative AI in their work, and 88% never use image-generating AI. 45% of respondents encounter AI-generated images in their practice daily, 25% weekly, and 6% never. The vast majority of respondents dislike generative AI (99%), with 92% expressing a strong dislike.
How are professional visual artists dealing with generative AI in the workplace? In our #CHI2026 poster, @hhj14.bsky.social, @willie-agnew.bsky.social and I share results from a survey of 378 verified professional visual artists 🧵
Preprint here: arxiv.org/abs/2603.04537
Excited to be presenting "How Professional Visual Artists are Negotiating Generative AI in the Workplace" as a poster at CHI! We surveyed 378 visual artists. They *hate* generative AI (92% strong dislike, 99% dislike), but are facing pressure from bosses to use it. arxiv.org/pdf/2603.04537
I only have one phd will
What are good shows/venues in Barcelona? Won't say no to classical stuff, but especially interested in modern dance, jazz, and drag.
Our recent work analyzing the chat logs of people who experienced delusional spirals with chatbots got a great writeup in Forbes! www.forbes.com/sites/lancee... Check the paper here: arxiv.org/abs/2603.16567
One of the most common features of AI delusional spirals in our recent study is a belief that the AI is sentient or has a personality. This belief played a central role in the delusional narratives and correlated with increased use. Regulators and AI developers should curb this! arxiv.org/abs/2603.16567
🚨 new paper! We investigate how to use pluralistic AI to align killing people and turning them into nutritional slurry with community values and norms. This is the first open-source replication of what goes on in companies like @anthropic.com or OpenAI who actually use AI to choose who dies!
We love and care about humans deeply such that when designing a human-to-slurry LLM, we ensured that these automated high-stakes decisions represented community values and norms, *whatever* they may be. Introducing ValueMulch: arxiv.org/abs/2603.02420
the default to focus on "positive" & "improvement-oriented (even of harmful & shitty systems)" research in academia is not only a pathology but a real obstacle to actual accountability research that tries to shine light on broken systems, names responsible actors & confronts harmful practices
There's a lot of external pressure on AI ethics to produce solutions instead of critique. As someone who's worked a lot on CSAM, NCII, mental health, and creative harms of AI: if AI developers had only listened to critiques, we could have avoided all these harms in the first place.
I think a letter of recommendation from a lawmaker or lobbyist is one of the few things that could communicate impact, but these would be a little non-standard for the academic job market
One roadblock to deeper involvement of AI academics in policy is that other academics often don't know how to evaluate this work. Research has citations, paper awards, and the papers themselves. In policy, impact is often changing someone's mind or having a deep network, but these are hard to measure.
I'm going to be at CHI'26 in Barcelona soon! If you want to talk about resisting AI, queer AI, regulating AI, impacts of AI on artists, mitigating AI CSAM, NCII, and mental health harms, auditing AI reach out! I'm also planning on biking to Gibraltar after so if you want to bike, run, or swim hmu!
Our recent work on understanding the transcripts of 19 people who experienced delusional spirals with AI chatbots was covered in the Financial Times! www.ft.com/content/7f63... Check the preprint here: arxiv.org/abs/2603.16567
I'm excited to be running a workshop on LLM use in human subjects research at CHI this year with amazing collaborators! While many in the HCI community are rushing to use these methods, we want to understand if these methods can enhance rigor and respect for research subjects. tinyurl.com/y54c46ea
I'm trying to build the supply side of this!
cntr.brown.edu/summer-school send to any students interested in getting involved in policy :D
Oh awesome!!
One thing I've been learning doing state-level policy work is that showing up to a hearing in person gets a much more engaged response. It would be nice to form a network of AI ethicists near every state capitol who can do this.
All of our users believed the chatbot was sentient or conscious, even though most knew it was not human. The current crop of chatbot regulation bills generally restrict bots from claiming they are human, but not sentient. We have been working with state lawmakers to update these bills. 3/3
We created a system to automatically flag concerning messages, including those facilitating self-harm or violence, and applied this to the hundreds of thousands of messages in our corpus. We found that chatbots continue to produce messages across a range of potentially concerning categories. 2/3
🚨 new paper alert! We annotated the transcripts of 19 people who experienced psychological harms from LLMs to try to understand what happens during delusional spirals bsky.app/profile/jare... 1/3
Disturbing anecdotal reports of "AI psychosis" and negative psychological effects have been emerging in the news. But what actually happens during these lengthy delusional "spirals"? In our preprint, we analyze chat logs from 19 users who experienced severe psychological harm 🧵
Policy brief graphic from Scholars Strategy Network featuring a quote by William Agnew of Carnegie Mellon University: "There is an urgent need for policy on AI and mental health to mitigate harms without stifling innovation." Background shows a person typing on a laptop.
In this brief, @willie-agnew.bsky.social (@hcii.cmu.edu) writes that AI chatbots pose significant risks when relied on for therapy & emotional support. He suggests policies that prevent chatbots from encouraging delusional thinking or forming relationships with users.
scholars.org/contribution...