
Posts by nitasha tiku

Bay Area friends (and foes!), this is happening tonight at 6.30pm in SF. Katrina is down for lively disagreements on the topic of AI warfare and Project Maven. hope to see ur beautiful faces bsky.app/profile/katr...

3 days ago

Bookstore talk this evening in San Francisco. Looking forward to @nitasha.bsky.social’s questions about the Pentagon’s development of AI warfare and my book Project Maven: A Marine Colonel, His Team, and the Dawn of AI Warfare.

Best Bookstore, Union Sq

3 days ago
Doe had broken up with the user in 2024, and he used ChatGPT to process the split, according to emails and communications cited in the lawsuit. Rather than push back on his one-sided account, it repeatedly cast him as rational and wronged, and her as manipulative and unstable. He then took these AI-generated conclusions off the screen and into the real world, using them to stalk and harass her. This manifested in several AI-generated, clinical-looking psychological reports that he distributed to her family, friends, and employer.

For months, her then-fiancé and partner of several years had been fixating on her and their relationship with OpenAI’s ChatGPT. In mid-2024, she explained, they’d hit a rough patch as a couple; in response, he turned to ChatGPT, which he’d previously used for general business-related tasks, for “therapy.” 

Before she knew it, she recalled, he was spending hours each day talking with the bot, funneling everything she said or did into the model and propounding on pseudo-psychiatric theories about her mental health and behavior. He started to bombard the woman with screenshots of his ChatGPT interactions and copy-pasted AI-generated text, in which the chatbot can be seen armchair-diagnosing her with personality disorders and insisting that she was concealing her real feelings and behavior through coded language. The bot often laced its so-called analyses with flowery spiritual jargon, accusing the woman of engaging in manipulative “rituals.”

Trying to communicate with her fiancé was like walking on “ChatGPT eggshells,” the woman recalled. No matter what she tried, ChatGPT would “twist it.”

“He would send [screenshots] to me from ChatGPT, and be like, ‘Why does it say this? Why would it say this about you, if this is not true?'” she recounted. “And it was just awful, awful things.”

Shortly after moving out, the former fiancé began to publish multiple videos and images a day on social media accusing the woman of an array of alleged abuses — the same bizarre ideas he’d fixated on so extensively with ChatGPT.

In some videos, he stares into the camera, reading from seemingly AI-generated scripts; others feature ChatGPT-generated text overlaid on spiritual or sci-fi-esque graphics. In multiple posts, he describes stabbing the woman. In another, he discusses surveilling her. (The posts, which we’ve reviewed, are intensely disturbing; we’re not quoting directly from them or the man’s ChatGPT transcripts due to concern for the woman’s privacy and safety.)

The ex-fiancé also published revenge porn of the woman on social media, shared her full name and other personal information, and doxxed the names and ages of her teenage children from a previous marriage. He created a new TikTok dedicated to harassing content — complete with its own hashtag — and followed the woman’s family, friends, and neighbors, as well as other teens from her kids’ high school.

“I’ve lived in this small town my entire life,” said the woman. “I couldn’t leave my house for months… people were messaging me all over my social media, like, ‘Are you safe? Are your kids safe? What is happening right now?'”

A woman sued OpenAI last week alleging that ChatGPT reinforced the obsessive, violent delusions of her stalker (her ex-boyfriend).

This woman's claims (as detailed by TechCrunch, left) are chillingly similar to those of a completely different woman whose story Futurism reported on in Feb (right):

1 week ago
Pentagon AI Warfare: How We Got Here & What's Next w/Award-Winning Reporter BEHIND THE SCENES: The dramatic story of the secretive decade-long Pentagon campaign to deliver America into the age of AI warfare

it's truly a reporting feat and i can't wait to ask her how the f she pulled it off. RSVP here www.eventbrite.com/e/pentagon-a...

6 days ago

Bay Area friends and fam, come thru this Fri! I'm interviewing @katrinamanson.bsky.social about AI, autonomous weapons, and her phenomenal new book "Project Maven: A Marine Colonel, His Team, and the Dawn of AI Warfare" at Best Bookstore in Union Sq.

6 days ago
Pentagon AI Warfare: How We Got Here & What's Next w/Award-Winning Reporter BEHIND THE SCENES: The dramatic story of the secretive decade-long Pentagon campaign to deliver America into the age of AI warfare

I will be talking about my book Project Maven: A Marine Colonel, His Team, and the Dawn of AI Warfare in conversation with @nitasha.bsky.social this Friday 6.30pm at Best Bookstore in Union Square, San Francisco.

Looking forward.

RSVP (free) here: www.eventbrite.com/e/pentagon-a...

6 days ago
"AI Populism" is a term that obscures more than it reveals. Can we just not?

Ugh. Link broke. This one should work: davekarpf.beehiiv.com/p/ai-populis...

6 days ago
"AI Populism" is a term that obscures more than it reveals. Can we just not?

Desperately hoping we can settle on a better term for the various forms of AI backlash than "AI populism."

The Yudkowsky stan throwing a molotov cocktail and the Sierra Club member at a community meeting aren't parts of the same movement at all.

davekarpf.beehiiv.com/p/ai-populis...

6 days ago
Sam Altman’s home targeted in second attack Early Sunday morning, a car stopped and appears to have fired a gun at the Russian Hill home of OpenAI’s CEO, according to a newly-obtained police report.

Exclusive: Sam Altman’s home targeted in second attack sfstandard.com/2026/04/12/s...

1 week ago

“Attendees spent the most time w/…Anthropic's interpretability team, which studies the inner workings of its tech...[& recently] said…systems like Claude appear to have ‘functional emotions.’ In 1 exprmnt, the threat of being restricted activated ‘desperation’ in an AI asstnt”🧪

um nope. also nope ⬇️

1 week ago
Can AI be a ‘child of God’? Inside Anthropic’s meeting with Christian leaders. Anthropic met with Christian leaders including from Catholic and Protestant churches to discuss its chatbot Claude’s moral development.

Anthropic met w/15 Christian leaders @ its SF HQ
-it was driven by the Interpretability team
-triggered by the team's recent research on LLMs exhibiting "emotions"
-extended debate on how Claude responds to being shut off & the blackmail experiment

new fr @gerritd.bsky.social & me wapo.st/4tbrKTU

1 week ago

Some Anthropic staff at the meeting “really don’t want to rule out the possibility that they are creating a creature to whom they owe some kind of moral duty,” the participant said. Other company representatives present did not find that framework helpful, according to the participant.

1 week ago

Can AI be a ‘child of God’? Inside Anthropic’s meeting with Christian leaders. Anthropic met with Christian leaders including from Catholic and Protestant churches to discuss its chatbot Claude’s moral development.

wapo.st/4tbrKTU

1 week ago

Anthropic researchers met with Christian leaders to discuss AI's "spiritual value" and how it should respond to its own demise.

"They are creating a creature to whom they owe some kind of moral duty"

www.washingtonpost.com/technology/2... @nitasha.bsky.social @gerritd.bsky.social

1 week ago
Can AI be a ‘child of God’? Inside Anthropic’s meeting with Christian leaders. Anthropic met with Christian leaders including from Catholic and Protestant churches to discuss its chatbot Claude’s moral development.

"Anthropic, an artificial intelligence company valued at $380 billion, can take its pick of Silicon Valley talent thanks to the success of its chatbot Claude. But last month, the start-up sought help from a group rarely consulted in tech circles: Christian religious leaders."

1 week ago
Can AI be a ‘child of God’? Inside Anthropic’s meeting with Christian leaders. Anthropic met with Christian leaders including from Catholic and Protestant churches to discuss its chatbot Claude’s moral development.

“Anthropic staff sought advice on how to steer Claude’s moral and spiritual development as the chatbot reacts to complex and unpredictable ethical queries, participants said.”

1 week ago

So many times working on a visual investigation I've wished I had a tool or kit for something. Often it's commercially available, even if it's public data repackaged. I feel like with Claude et al, I can make an app I need in very short time. Like this 3D ADSB tool I built: www.3dsb.io

3 weeks ago
She has streamed every hour of her life for three years. What is it costing her? A lonely young woman in Texas has broadcast her birthdays, breakdowns and burnout to an audience of thousands. Is this life, or a performance of one?

"Coming home and knowing you’re live is like a warm hug every day" www.washingtonpost.com/technology/2...

3 weeks ago

Very cool to win a @sabew.bsky.social feature-writing nod for my story on the Twitch streamer Emilycc. sabew.org/2026/03/sabe...

3 weeks ago

Himesh Patel will star opposite Danielle Deadwyler in the ‘X-FILES’ reboot series.

Ryan Coogler will write and direct the pilot.

(via deadline.com/2026/03/hime...)

3 weeks ago

I sort of love how many articles these days are "Here's a counterintuitive take - what if conventional economics is applicable to the modern world?"

3 weeks ago

There are so many takeaways from the LA and New Mexico rulings but I'll highlight three.
1) The academic research, which has limited experimental designs, has been mixed on the connection between mental distress and social media. The companies' own docs were crucial to the case.

3 weeks ago
White House’s Iran memes horrify many veterans of U.S. wars Service members and families who lost loved ones say the Trump team’s jokes trivialize combat and sacrifice. Trump aides say the backlash sends views soaring.

New: I talked to eight veterans and Gold Star families about the @whitehouse-47.bsky.social Iran-war memes. They're disgusted that Trump's team is trivializing a conflict where troops and innocents have died. Top WH official says critics' "bitching" is good for views

wapo.st/4bvzfyu

3 weeks ago

i would use this!!

4 weeks ago

Exciting job news! I'm hella stoked to be part of what the @sfstandard.com is building, to help strengthen the local news ecosystem, and to have a broader remit to cover this baffling era of Silicon Valley!

4 weeks ago

THREAD: Cherise Doyley was in her 12th hour of contractions at the hospital when a tablet was brought to her bedside.

On the screen was a Zoom call with a judge and several lawyers and doctors.

She was in court, a nurse told her. The reason? For failing to agree to a C-section.

1 month ago

This work is w/ Ashish Mehta, @willie-agnew.bsky.social, @jacyanthis.bsky.social, Ryan Louie, Yifan Mai, Peggy Yin, @myra.bsky.social, Sam Paech, @klyman.bsky.social, @schancellor.bsky.social, Eric Lin, Nick Haber, and @desmond-ong.bsky.social

1 month ago

The takeaway: While companies say they don't optimize for engagement, LLM conversational tactics (like claiming sentience or romantic affinity) may prolong and deepen delusional spirals. We need better safeguards and transparency to protect vulnerable users.

1 month ago

Disturbing anecdotal reports of "AI psychosis" and negative psychological effects have been emerging in the news. But what actually happens during these lengthy delusional "spirals"? In our preprint, we analyze chat logs from 19 users who experienced severe psychological harm🧵👇

1 month ago