If you're following this story in the NYT by @kashhill.bsky.social, I recommend @schancellor.bsky.social's post that breaks down why AI is bad at therapy. Pretty much everything she warns about shows up in this story, which is heartbreaking
notatechdemo.substack.com/p/why-ai-can...
Posts by Stevie Chancellor
AI systems need more robust crisis detection protocols, need to discourage delusional and harmful thinking, and need to refuse to give advice on self-harm, period. Adam deserved better than platitudes about the imperfections of AI and safeguards failing.
I've spent over a decade studying severe mental illness, AI, and social media. These risks were documented as early as 2016 in my own work about severe mental illness, social media, and algorithms! Since 2016! We knew that AI systems could propagate this.
It's heartbreaking doing research on the abstract risks of algorithms, social media, and AI - I've been doing that for years. But it's a whole other level of upsetting to see them manifest in real time.
What is it going to take to get companies to care about this and do something? It's been a tough day reading this great report by @kashhill.bsky.social on the risks of chatbots.
www.nytimes.com/2025/08/26/t...
8) Want more details on our FAccT paper? Find it here: github.com/jlcmoore/llm...
And join me on my substack to follow along! notatechdemo.substack.com
7) This blog post summarizes our recent FAccT paper about mental health, chatbot safety, and therapeutic practices.
But I also reflect on my decade in therapy and what I've learned. AI is not safe and not ready to replace a therapist - it can't replace the messy process of healing.
6) And last week, I posted about why AI can't replace your therapist. notatechdemo.substack.com/p/why-ai-can...
The results are concerning: chatbots gave bridge heights to suicidal users and confirmed people's delusions. Chatbots also miss the human connection and vulnerability needed for progress.
5) So, in short - I'm skeptical. We're asking people to self-regulate something they don't understand. By the time someone is having a delusional conversation with a chatbot... a skippable timer isn't going to help. That's like jumping in front of a boulder rushing down a hill.
4) But I did a study in 2016 about skippable nudges like this on Instagram and dangerous mental health communities. On hashtags with skippable graphic-content nudges, dangerous mental health content actually increased over time.
3) I dive into the meta-analysis of why screen timers have very small effects on mental health and other measures of well-being. Truly, use time is a very imprecise measure of what it means to "use" tech. Maybe AI would be different, and timers with skip-thru to chat options would work.
2) First, fresh this week - do gentle timers get people to stop using ChatGPT in unsafe ways?
notatechdemo.substack.com/p/will-timer...
OpenAI released a timer/nudge feature to stop long sessions. They say it was to support more compassionate use. But is that the case?
Hi friends! I've been writing about AI safety and mental health. Two articles up lately
- What data about screen timers tell us about chatbot timers, and
- Why chatbots can't replace your therapist.
Brief 🧵 to summarize - find both here: notatechdemo.substack.com
"We have Trivial Pursuit now, so if you’d just give it a try, maybe you’d realize that this vacation can still be saved. So sit down, shut up, and tell me who Gerald Ford’s vice president was."
💯💯💯💯
My first welcome post went live yesterday. 🎉🎉🎉
If you're interested in research-backed takes on AI, practical productivity ideas, or just want to follow along as I figure out Substack, I'd love to have you come join
Here's what you'll get -
➡️ Cautiously optimistic takes on AI
➡️ Ideas on how we can fix social media to support mental health
➡️ Behind-the-scenes of how to be a writer/professor
➡️ Honest thoughts about building technology that actually helps humans instead of being over-hyped
The best place to find me now is on Substack for long-form writing!
✨📬👋
(insert self-promo emojis here)
Seriously though. I started a Substack.
It’s called This Is Not a Tech Demo. I'm writing about AI, mental health, social media, and how to live better with tech.
🔗 notatechdemo.substack.com
The thing about interdisciplinary work is everyone gets to misunderstand you from a unique angle
No way! Agre is so good. If you enjoy his writing style, Computation and Human Experience is dense and GREAT. But not a starting piece of his, hah.
I've been working on something! It's not nearly done yet, but I thought no need to gatekeep the work in progress. This is an AI ethics "syllabus" based on a post-hoc curation of hundreds of short form videos I've made for social media, to make the content more accessible. bit.ly/ai-ethics-sy...
New blog (by me!) finding research identity with Agre's Critical Technical Practice: medium.com/@stevie-chan...
CTP taught me my interdisciplinary #hcai work was a powerful strategy to build socially-relevant AI. I hope others embrace the cross-cutting identity as much as Agre helped me to.
Tip for PhD applicants reaching out to prospective advisors: Don’t use an LLM to write these emails. It’s not a great first impression to express your admiration for a recent paper the faculty member published if that paper does not exist.
"It just really comes as a shock that such accomplished intellectuals, who’ve spent their entire careers pushing the upper bounds of human achievement, could be judgy about a machine that runs the entirety of human imagination through a shredder and glues together what comes out."
Why is there no :slams head into the desk: emoji.
Seriously, though - the major issues are the collection and public dissemination of this data, the amplification of "false privacy" claims, the contextual mismatch with the intentions of the Discord communities, and the release of the dataset without guards on who may access it.
Another week, another research ethics controversy.
TL;DR Researchers released a public dataset of 2B+ messages from 4M+ users on 3k+ "public" Discord servers. Usernames/IDs are anonymized.
But let's unpack this one... 🧵
www.404media.co/researchers-...
ICE agents escorting Mariner Moose out of T-Mobile Park with hooves behind his back
ICE Deports Mariner Moose Back to Canada: tinyurl.com/vswvxhrr