Bay Area friends (and foes!), this is happening tonight at 6.30pm in SF. Katrina is down for lively disagreements on the topic of AI warfare and Project Maven. hope to see ur beautiful faces bsky.app/profile/katr...
Posts by nitasha tiku
Bookstore talk this evening in San Francisco. Looking forward to @nitasha.bsky.social’s questions about the Pentagon’s development of AI warfare and my book Project Maven: A Marine Colonel, His Team, and the Dawn of AI Warfare.
Best Bookstore, Union Sq
Doe had broken up with the user in 2024, and he used ChatGPT to process the split, according to emails and communications cited in the lawsuit. Rather than push back on his one-sided account, it repeatedly cast him as rational and wronged, and her as manipulative and unstable. He then took these AI-generated conclusions off the screen and into the real world, using them to stalk and harass her. This manifested in several AI-generated, clinical-looking psychological reports that he distributed to her family, friends, and employer.
For months, her then-fiancé and partner of several years had been fixating on her and their relationship with OpenAI’s ChatGPT. In mid-2024, she explained, they’d hit a rough patch as a couple; in response, he turned to ChatGPT, which he’d previously used for general business-related tasks, for “therapy.” Before she knew it, she recalled, he was spending hours each day talking with the bot, funneling everything she said or did into the model and expounding pseudo-psychiatric theories about her mental health and behavior. He started to bombard the woman with screenshots of his ChatGPT interactions and copy-pasted AI-generated text, in which the chatbot can be seen armchair-diagnosing her with personality disorders and insisting that she was concealing her real feelings and behavior through coded language. The bot often laced its so-called analyses with flowery spiritual jargon, accusing the woman of engaging in manipulative “rituals.” Trying to communicate with her fiancé was like walking on “ChatGPT eggshells,” the woman recalled. No matter what she tried, ChatGPT would “twist it.” “He would send [screenshots] to me from ChatGPT, and be like, ‘Why does it say this? Why would it say this about you, if this is not true?’” she recounted. “And it was just awful, awful things.”
Shortly after moving out, the former fiancé began to publish multiple videos and images a day on social media accusing the woman of an array of alleged abuses — the same bizarre ideas he’d fixated on so extensively with ChatGPT. In some videos, he stares into the camera, reading from seemingly AI-generated scripts; others feature ChatGPT-generated text overlaid on spiritual or sci-fi-esque graphics. In multiple posts, he describes stabbing the woman. In another, he discusses surveilling her. (The posts, which we’ve reviewed, are intensely disturbing; we’re not quoting directly from them or the man’s ChatGPT transcripts due to concern for the woman’s privacy and safety.) The ex-fiancé also published revenge porn of the woman on social media, shared her full name and other personal information, and doxxed the names and ages of her teenage children from a previous marriage. He created a new TikTok dedicated to harassing content — complete with its own hashtag — and followed the woman’s family, friends, and neighbors, as well as other teens from her kids’ high school. “I’ve lived in this small town my entire life,” said the woman. “I couldn’t leave my house for months… people were messaging me all over my social media, like, ‘Are you safe? Are your kids safe? What is happening right now?'”
A woman sued OpenAI last week alleging that ChatGPT reinforced the obsessive, violent delusions of her stalker (her ex-boyfriend).
This woman's claims (as detailed by TechCrunch, left) are chillingly similar to those of a completely different woman whose story Futurism reported on in Feb (right):
it's truly a reporting feat and i can't wait to ask her how the f she pulled it off. RSVP here www.eventbrite.com/e/pentagon-a...
Bay Area friends and fam, come thru this Fri! I'm interviewing @katrinamanson.bsky.social about AI, autonomous weapons, and her phenomenal new book "Project Maven: A Marine Colonel, His Team, and the Dawn of AI Warfare" at Best Bookstore in Union Sq.
I will be talking about my book Project Maven: A Marine Colonel, His Team, and the Dawn of AI Warfare in conversation with @nitasha.bsky.social this Friday 6.30pm at Best Bookstore in Union Square, San Francisco.
Looking forward.
RSVP (free) here: www.eventbrite.com/e/pentagon-a...
Desperately hoping we can settle on a better term for the various forms of AI backlash than "AI populism."
The Yudkowsky stan throwing a molotov cocktail and the Sierra Club member at a community meeting aren't parts of the same movement at all.
davekarpf.beehiiv.com/p/ai-populis...
“Attendees spent the most time w/…Anthropic's interpretability team, which studies the inner workings of its tech...[& recently] said…systems like Claude appear to have ‘functional emotions.’ In one experiment, the threat of being restricted activated ‘desperation’ in an AI assistant”🧪
um nope. also nope ⬇️
Anthropic met w/15 Christian leaders @ its SF HQ
-it was driven by the Interpretability team
-triggered by the team's recent research on LLMs exhibiting "emotions"
-extended debate on how Claude responds to being shut off & the blackmail experiment
new fr @gerritd.bsky.social & me wapo.st/4tbrKTU
Some Anthropic staff at the meeting “really don’t want to rule out the possibility that they are creating a creature to whom they owe some kind of moral duty,” the participant said. Other company representatives present did not find that framework helpful, according to the participant.
Anthropic researchers met with Christian leaders to discuss AI's "spiritual value" and how it should respond to its own demise.
"They are creating a creature to whom they owe some kind of moral duty"
www.washingtonpost.com/technology/2... @nitasha.bsky.social @gerritd.bsky.social
"Anthropic, an artificial intelligence company valued at $380 billion, can take its pick of Silicon Valley talent thanks to the success of its chatbot Claude. But last month, the start-up sought help from a group rarely consulted in tech circles: Christian religious leaders."
“Anthropic staff sought advice on how to steer Claude’s moral and spiritual development as the chatbot reacts to complex and unpredictable ethical queries, participants said.”
So many times working on a visual investigation I've wished I had a tool or kit for something. Often it's commercially available, even if it's public data repackaged. I feel like with Claude et al, I can make an app I need in a very short time. Like this 3D ADSB tool I built: www.3dsb.io
"Coming home and knowing you’re live is like a warm hug every day" www.washingtonpost.com/technology/2...
Very cool to win a @sabew.bsky.social feature-writing nod for my story on the Twitch streamer Emilycc. sabew.org/2026/03/sabe...
Himesh Patel will star opposite Danielle Deadwyler in the ‘X-FILES’ reboot series.
Ryan Coogler will write and direct the pilot.
(via deadline.com/2026/03/hime...)
I sort of love how many articles these days are "Here's a counterintuitive take: what if conventional economics is applicable to the modern world?"
There are so many takeaways from the LA and New Mexico rulings but I'll highlight three.
1) The academic research, which relies on limited experimental designs, has been mixed on the connection between mental distress and social media. The companies' own documents were crucial to the case.
New: I talked to eight veterans and Gold Star families about the @whitehouse-47.bsky.social Iran-war memes. They're disgusted that Trump's team is trivializing a conflict where troops and innocents have died. Top WH official says critics' "bitching" is good for views
wapo.st/4bvzfyu
i would use this!!
Exciting job news! I'm hella stoked to be part of what the @sfstandard.com is building, to help strengthen the local news ecosystem, and to have a broader remit to cover this baffling era of Silicon Valley!
THREAD: Cherise Doyley was in her 12th hour of contractions at the hospital when a tablet was brought to her bedside.
On the screen was a Zoom call with a judge and several lawyers and doctors.
She was in court, a nurse told her. The reason? For failing to agree to a C-section.
This work is w/ Ashish Mehta, @willie-agnew.bsky.social, @jacyanthis.bsky.social, Ryan Louie, Yifan Mai, Peggy Yin, @myra.bsky.social, Sam Paech, @klyman.bsky.social, @schancellor.bsky.social, Eric Lin, Nick Haber, and @desmond-ong.bsky.social
The takeaway: While companies say they don't optimize for engagement, LLM conversational tactics (like claiming sentience or romantic affinity) may prolong and deepen delusional spirals. We need better safeguards and transparency to protect vulnerable users.
Disturbing anecdotal reports of "AI psychosis" and negative psychological effects have been emerging in the news. But what actually happens during these lengthy delusional "spirals"? In our preprint, we analyze chat logs from 19 users who experienced severe psychological harm🧵👇