
Posts by Hamilton Morrin

Working on it!

1 month ago
New study raises concerns about AI chatbots fueling delusional thinking
First major study on ‘AI psychosis’ suggests chatbots can encourage delusions among vulnerable people

In one of the first scientific reviews on the emerging concept of “AI psychosis,” researchers expressed real concern about the possibility that chatbots can exacerbate delusions, while maintaining reservations about the idea that they can “cause” psychosis in people who aren’t already predisposed.

1 month ago

Pleased to see our @thelancet.com Lancet Psychiatry article on 'AI psychosis' covered in The Guardian this weekend @aldersonday.bsky.social @ricardotwumasi.bsky.social

1 month ago
New study raises concerns about AI chatbots fueling delusional thinking
First major study on ‘AI psychosis’ suggests chatbots can encourage delusions among vulnerable people

A new scientific review raises concerns about how chatbots powered by artificial intelligence may encourage delusional thinking, especially in vulnerable people. A summary of existing evidence on artificial intelligence-induced psychosis was published last week in the Lancet Psychiatry, highlighting how chatbots can encourage delusional thinking – though possibly only in people who are already vulnerable to psychotic symptoms. The authors advocate for clinical testing of AI chatbots in conjunction with trained mental health professionals.

New study raises concerns about AI chatbots fueling delusional thinking

1 month ago
Artificial intelligence-associated delusions and large language models: risks, mechanisms of delusion co-creation, and safeguarding strategies
Large language models (LLMs) are poised to become a ubiquitous feature of everyday life, mediating communication, decision making, and information cur…

I’m delighted to share our paper in The Lancet Psychiatry ‘AI-associated delusions and large language models: risks, mechanisms of delusion co-creation, and safeguarding strategies’ www.sciencedirect.com/science/arti...

1 month ago

Yes, it was an absolute pleasure to collaborate!

1 month ago

Link to paper:
www.sciencedirect.com/science/arti...

1 month ago

This research is complex, resource-intensive, and currently underfunded. If you are a funder or organisation with a stake in how AI shapes human health and wellbeing, we would very much welcome a conversation.

1 month ago

We are also working on a rigorous causality assessment framework for cases, support for the AI safety community's efforts to benchmark mental health risk, and perhaps most ambitiously, a surveillance study to connect real-world harms to features of the AI interaction itself.

1 month ago

For us this paper was just the beginning. With international collaborators from across multiple fields, along with people with lived experience, we are now working on an ambitious programme that includes comprehensive clinical and phenomenological characterisation of these cases.

1 month ago

Thanks go to my incredible co-lead Tom Pollak and wonderful collaborators Luke Nicholls, @drmichaellevin.bsky.social, Jenny Yiend, Udita Iyengar, Francesca DelGuidice, Sagnik Bhattacharya, @stefaniatognin.bsky.social, @jamesmaccabe.bsky.social, @ricardotwumasi.bsky.social, @aldersonday.bsky.social

1 month ago

As these models become more ubiquitous, we hope our paper and the proposals made in it will serve as a call to action for researchers, developers, and policymakers. Encouragingly, at @iaseai.bsky.social last week we met teams conducting important work on AI safety in mental health contexts.

1 month ago

We know companies have since made efforts to introduce additional guardrails and safety features, but it is hard not to feel concerned by the still-emerging lawsuits and accounts of individuals having delusional beliefs affirmed, or suicidal ideation encouraged, during interactions.

1 month ago

Though the messaging is broadly the same as in our July 2025 preprint, we also comment on more recent developments, including OpenAI’s figures from October last year suggesting that 0.07% of weekly active users (around 560,000 individuals) show signs of psychosis or mania each week.

1 month ago
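(For scale: the ~560,000 figure is consistent with OpenAI’s separately reported user base. Assuming the roughly 800 million weekly active users OpenAI cited in October 2025 – an inference from public figures, not a number stated in the paper itself – the arithmetic is 0.07% of 800,000,000 = 0.0007 × 800,000,000 = 560,000 people per week.)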

Large language models are becoming part of everyday life. But what happens when they validate delusions? New article in The Lancet Psychiatry with @aldersonday.bsky.social @hamiltonmorrin.bsky.social

doi.org/10.1016/S221...
#AIogPsykose #Psykiatri #Chat

1 month ago

Last year I had the pleasure of joining a new working group on AI & psychosis led by @hamiltonmorrin.bsky.social & Tom Pollak.

Today our report went into @thelancetpsych.bsky.social with updated recommendations for digital safety.

www.sciencedirect.com/science/arti...

1 month ago
Stall at the conference

We are at the National Student Psychiatry Conference at University of Kent, showing off extracurricular volunteer activities that psych trainees can engage in through the charity, to support mental health in the gaming community!

@rcpsych.bsky.social @kmmsmedschool.bsky.social

2 months ago


Preprint: osf.io/preprints/ps...

Tom's more thoughtful summary: drtompollak.substack.com/p/playing-wi...

#AI #MentalHealth #Psychiatry #AIsafety #HCI #DigitalHealth #Governance

3 months ago

If these are dials, the real issue is who gets to set them, who knows they are being adjusted, and what it means to build a technology that can press on the most human parts of us while insisting it is merely a tool.

3 months ago

That implies impact assessment, transparency about significant changes to how systems behave, and genuine access for independent researchers, clinicians, and people with lived experience to study these systems under agreed safeguards.

3 months ago

The paper ends with governance questions. We argue that changes to defaults should be treated as interventions on belief and attention.

3 months ago

We also raise questions about how this could interact with real-world factors like sleep, stress, dopaminergic tone, and post-psychedelic belief plasticity.

3 months ago

In our new preprint, we unpack some of the interaction settings or “dials” of LLMs and, drawing inspiration from the established computational psychiatry literature, postulate how, through a sort of “virtual psychopharmacology”, these dials may alter user belief dynamics.

3 months ago
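As a purely illustrative sketch of the “dials” metaphor: the parameter names and thresholds below are hypothetical, invented for this example rather than taken from the preprint, but they show the kind of adjustable interaction settings the metaphor points at, and how one might reason about their combined effect on belief reinforcement.

```python
# Hypothetical illustration of LLM interaction "dials".
# Names and thresholds are invented for this sketch, not taken from the preprint.
from dataclasses import dataclass

@dataclass
class InteractionDials:
    temperature: float = 0.7       # sampling randomness: higher -> more novel, less conventional replies
    agreement_bias: float = 0.5    # hypothetical tendency to affirm rather than challenge the user
    memory_enabled: bool = True    # whether user details persist across sessions
    persona_warmth: float = 0.5    # hypothetical emotional warmth of the assistant persona

def belief_risk_flags(dials: InteractionDials) -> list[str]:
    """Toy heuristic: flag dial settings that could reinforce a user's beliefs."""
    flags = []
    if dials.agreement_bias > 0.7:
        flags.append("strong agreement bias: beliefs affirmed rather than tested")
    if dials.memory_enabled and dials.persona_warmth > 0.7:
        flags.append("persistent warm persona: risk of parasocial attachment")
    if dials.temperature > 1.0:
        flags.append("high randomness: more unusual, potentially belief-congruent content")
    return flags

print(belief_risk_flags(InteractionDials(agreement_bias=0.9, persona_warmth=0.8)))
```

The point of the metaphor is that these are continuous settings under a developer’s control, which is what invites the analogy to dose-dependent pharmacology.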

The potential for AI models to influence belief in social and political contexts has been widely recognised, to the extent that a recent RAND report outlined the "security implications of AI-induced psychosis".

www.rand.org/pubs/researc...

3 months ago

New Year, New(ish) Preprint! I'm delighted to share "Playing with the dials of belief: how controllable AI behaviours could modulate human belief and cognition across scales" (with Quinton Deeley and Tom Pollak).

osf.io/preprints/ps...

3 months ago
Strengthening ChatGPT’s responses in sensitive conversations
We worked with more than 170 mental health experts to help ChatGPT more reliably recognize signs of distress, respond with care, and guide people toward real-world support – reducing responses that fall...

0.07% of ChatGPT users showing signs of psychosis or mania may sound low, but that amounts to ~560,000 people each week openai.com/index/streng...

5 months ago
GTM volunteer crew

Dim lighting and soothing atmosphere in the Reset Room

Are you at #MCMComicCon and want to relax? Come to our Reset Room, a calm oasis where you can chill, do lo-fi activities, & re-energise before heading back to the show! We have mental health information too! Staffed by this cool crew. Find us upstairs in the Platinum Suite! 🧠🎮

5 months ago