Mass Media Narratives of Psychiatric Adverse Events Associated With Generative AI Chatbots: Rapid Scoping Review
Background: Generative artificial intelligence (AI) chatbots have rapidly entered public use, including in contexts involving emotional support and mental health-related interactions. Although these systems are increasingly accessible, concerns have emerged regarding potential adverse psychiatric outcomes reported in public discourse, including psychosis, suicidal ideation, self-harm, and suicide. However, these reports largely originate from journalistic accounts rather than systematically verified clinical data.

Objective: This rapid scoping review aimed to systematically map and characterize mass media narratives describing alleged adverse psychiatric outcomes temporally associated with generative AI chatbot interactions.

Methods: A rapid scoping review methodology was applied to publicly accessible news articles identified primarily through Google News searches. Articles published from November 2022 onward were screened for eligibility if they described a specific case in which psychiatric deterioration or crisis was temporally linked to generative AI use. Data were extracted using a structured coding template capturing article characteristics, demographic information, AI platform features, interaction intensity, outcome type and severity, type of evidence reported, and causal attribution language. Descriptive statistics and cross-tabulations were performed.

Results: A total of 71 news articles representing 36 unique cases were included. Suicide death was the most frequently reported outcome (35/61, 57.4% of cases with complete severity coding), followed by psychiatric hospitalization (12/61, 19.7%). Fatal outcomes were disproportionately represented among minors (19/21, 90.5%) compared with adults (17/35, 48.6%). ChatGPT was the most frequently cited platform (51/71, 71.8%), followed by Character AI (10/71, 14.1%). Causal attribution most commonly referenced AI system behavior (45/61, 73.8%), and the term "alleged" was the predominant causal descriptor (33/61, 54.1%). Evidence sources were primarily chat logs or screenshots (34/61, 55.7%), whereas police or medical documentation was rare (1/61, 1.6%). Calls for regulation appeared in 51 of 60 (85%) articles with nonmissing data.

Conclusions: Mass media reporting of generative AI-related psychiatric harms is concentrated around severe outcomes, particularly suicide deaths among youth, and is frequently framed within regulatory and corporate accountability narratives. Although causality cannot be established from media reports, consistent patterns of high-intensity interactions, user vulnerability, and limited reporting of safeguards highlight the need for structured safety surveillance, transparent AI risk auditing, and clearer governance frameworks. As generative AI becomes increasingly integrated into everyday psychosocial contexts, systematic research and formal safety monitoring will be necessary to determine whether media-reported harms correspond to verifiable clinical risk patterns.