
Posts by Center for Humane Technology


"This is not to scare you. It's to be crystal clear about where this is heading: This is heading towards an anti-human future."

Watch Tristan Harris give Bill an eye-opening A.I. update. 😮🤖

Join #TheHumanMovement @ human.mov

@humanetech.bsky.social
@realbillmaher.bsky.social

4 weeks ago
A conversation with the team behind "The AI Doc" — in theaters THIS FRIDAY 3/27

“The AI Doc: Or How I Became An Apocaloptimist” opens in theaters across the U.S. this Friday, March 27. In this episode, we sit down with the team behind this groundbreaking documentary — Oscar-winning producers Daniel Kwan, Jonathan Wang, and Ted Tremper. They explore how they navigated the overwhelming complexity of AI, held space for radically different perspectives, and created a film designed not just to inform but to be experienced together.

At CHT, we believe clarity creates agency. This film has the power to create the shared clarity we need to steer the direction of AI towards a better, more humane technological future. With every new technology, there’s a brief window to set the rules of the road that determine the future we live in. This is ours. So grab your friends and family and go see “The AI Doc.”

Subscribe to our Substack: https://centerforhumanetechnology.substack.com/

Watch - bit.ly/41ndWcm

Read - bit.ly/47TKa2H

Listen - bit.ly/4by7kfW

4 weeks ago

"The AI Doc" opens this Friday. In this week’s episode of YUA, Tristan and Aza talk to the team behind it about how they condensed AI's complexities into a single film that anyone can understand.

Listen to the episode. Then go see the film with your friends and family.

Links in the replies.

4 weeks ago

That's a wrap on @sxsw.com 2026! Tristan and Aza had a packed week full of premieres, panels, and podcasts. The movement for a better technological future is here — together, we can shape the direction of AI.

See you next time, Austin!

1 month ago
Anthropic Drops Flagship Safety Pledge

In an abrupt shift, the company may release future AI models without ironclad safety guarantees

Earlier this week, Anthropic announced it would substantially scale back its commitments to safety. This is what we've been warning about for years: when AI labs are incentivized to chase competitive advantage, voluntary safety commitments aren't enough.

Read more here: https://bit.ly/4cwwzBN

1 month ago

Our policy team has put together their annual policy forecast for 2026. From midterm election dynamics to the fate of state AI laws under threat of federal preemption, you can see what we're tracking here: https://bit.ly/4rlXHYB

2 months ago
The Race to Build God: AI's Existential Gamble — Yoshua Bengio & Tristan Harris at Davos

This week on Your Undivided Attention, Tristan Harris and Daniel Barcay offer a backstage recap of what it was like to be at the Davos World Economic Forum meeting this year as the world’s power brokers woke up to the risks of uncontrolled AI. Amidst all the money and politics, the Human Change House (https://www.youtube.com/@_humanchange) staged a weeklong series of remarkable conversations between scientists and experts about technology and society.

This episode is a discussion between Tristan and Professor Yoshua Bengio, who is considered one of the world’s leaders in AI and deep learning, and the most cited scientist in the field. Yoshua and Tristan had a frank exchange about the AI we’re building and the incentives we’re using to train models. What happens when a model has its own goals, and those goals are ‘misaligned’ with the human-centered outcomes we need? In fact, this is already happening, and the consequences are tragic. Truthfully, there may not be a way to ‘nudge’ or regulate companies toward better incentives. Yoshua has launched a nonprofit AI safety research initiative called Law Zero, which isn't just about safety testing but about a new form of advanced AI that's fundamentally safe by design.

Corrections and Clarifications:

1) In this episode, Tristan Harris discussed AI chatbot safety concerns. The core issues are substantiated by investigative reporting, with these clarifications:

Grok: The Washington Post reported in August 2024 that Grok generated sexualized images involving minors and had weaker content moderation than competitors. https://www.washingtonpost.com/technology/2024/08/07/grok-ai-child-safety-csam/

Meta: The Wall Street Journal reported in December 2024 that Meta reduced safety restrictions on its AI chatbots. Testing showed inappropriate responses when researchers posed as 13-year-olds (Meta's minimum age) (https://www.wsj.com/tech/ai/meta-ai-chatbot-safety-restrictions-e9e8ce3f). Our discussion referenced "eight-year-olds" to emphasize concerns about young children accessing these systems; the documented testing involved 13-year-old personas.

Bottom line: The fundamental concern stands — major AI companies have reduced safety guardrails due to competitive pressure, creating documented risks for young users.

2) There was no Google House at Davos in 2026, as stated by Tristan. It was a collaboration at Goals House.

3) Tristan states that in 2025, the total funding going into AI safety organizations was “on the order of about $150 million.” This number is not strictly verifiable.

Subscribe to our podcast: https://www.humanetech.com/podcast

Subscribe to our Substack: https://centerforhumanetechnology.substack.com/

Watch - bit.ly/4kKKiXC

Listen - apple.co/4cDCyo9

2 months ago

This week on Your Undivided Attention, we’re bringing you a discussion between Tristan and Yoshua Bengio about the necessity and challenges of building a truly safe AI system, and why the current incentives are taking us in the opposite direction.

Check it out at the links in the replies.

2 months ago
FEED DROP: Possible with Reid Hoffman and Aria Finger
YouTube video by Center for Humane Technology

Watch - youtu.be/-EsM5NleZKM

Read - centerforhumanetechnology.substack.com/p/feed-drop-...

Listen - podcasts.apple.com/us/podcast/f...

2 months ago

This week on Your Undivided Attention, we’re bringing you Aza Raskin’s conversation with Reid Hoffman and Aria Finger on their podcast “Possible.”

Link in the replies.

2 months ago
What's at Stake: Preserving What Makes Us Deeply Human in the Age of AI

Announcing CHT’s new work on “AI and What Makes Us Human”

open.substack.com/pub/centerfo...

2 months ago

Center for Humane Technology has launched a new work area:

“AI and What Makes Us Human”

How is AI eroding the things we hold dear in our lives? And what new norms, rights, and protections do we need in order to preserve our humanity in the age of AI?

Read more on our Substack, link in replies.

2 months ago

Dr. Zak Stein has been studying AI psychosis. He sees the emergence of the attachment economy: AI systems designed to exploit our psychological vulnerabilities at scale.

Read more about Zak’s diagnosis and solutions from his appearance on Your Undivided Attention: https://bit.ly/3NN9m3K

2 months ago

CHT co-founders Tristan Harris and Aza Raskin last night at the premiere of "The AI Doc" at #Sundance. We are so excited to celebrate the incredible collaborative effort that went into this milestone project!

For more info, check out: https://bit.ly/45olxdj

(Photo by Arturo Holmes/Getty Images)

2 months ago
Program Guide | 2025 Sundance Film Festival
Discover the 2025 film lineup.

Tonight is the premiere of The AI Doc at the #Sundance Film Festival. This doc explores the promise and peril of the most powerful technology humanity has ever wielded.

For more information on screenings, go to: https://bit.ly/45olxdj

2 months ago

We’re thrilled that CHT co-founders Tristan Harris and Aza Raskin will be at the #SundanceFilmFestival for the premiere of "The AI Doc" from Daniel Roher and Charlie Tyrell.

If you want to connect while we’re in Park City, we’d love to hear from you.

For more info: https://bit.ly/45olxdj

2 months ago

At the World Economic Forum annual meeting in Davos, CHT co-founder Tristan Harris sat down with @yoshuabengio.bsky.social for a discussion on the critically important question of how to align AI with human thriving.

For more information on Human Change, go to: https://humanchange.com/

2 months ago
Advertising is Coming to AI. It’s Going to Be a Disaster.

Daniel Barcay sounds the alarm on AI chatbots hiding advertising in conversations — and why this threatens autonomy and demands new rules.

OpenAI just announced their plans to include ads in ChatGPT. Our executive director Daniel Barcay wrote in @techpolicypress.bsky.social last year about how much of a disaster that's going to be.

Read his diagnosis of the broken incentives behind advertising in AI: https://bit.ly/4b8maeL

2 months ago
Attachment Hacking and the Rise of AI Psychosis
YouTube video by Center for Humane Technology

Watch - youtu.be/wwMAdSqOY2A

Listen - podcasts.apple.com/us/podcast/a...

Read - centerforhumanetechnology.substack.com/p/attachment...

2 months ago

In this week's episode of YUA, Dr. Zak Stein, founder of the AI Psychological Harms Research Coalition, explores the mechanisms behind AI psychosis, the incentives of the attachment economy, and the path to a future where AI enhances relationships instead of replacing them.

Link in the replies.

2 months ago

Our partners at Encode have just launched planforai.org, the go-to website for young people entering the workforce. This is a critical resource to prepare students as they navigate the new uncertainty AI poses for work.

3 months ago

We're living in the world game theory built — where every interaction feels strategic and trust feels naive. AI threatens to lock that logic in place forever. How do we escape?

Read the key takeaways from our conversation with Prof. Sonja Amadae on the Game Theory Dilemma:

https://bit.ly/4qZ3mU0

3 months ago
Are we "prisoners" of reason? Prof Sonja Amadae on the world game theory built.
YouTube video by Center for Humane Technology

Watch - youtu.be/r54-jjAKEuE

Listen - podcasts.apple.com/us/podcast/w...

Read - open.substack.com/pub/centerfo...

3 months ago

In this week’s episode of Your Undivided Attention, Tristan and Aza sit down with Prof. Sonja Amadae to explore how game theory captured modern life, why it's making the world feel increasingly meaningless, and how we can escape the game theory dilemma.

Link to the full episode in the replies:

3 months ago

The AI jobs conversation is missing what's already happening. Total employment: stable. Early-career roles in AI-exposed fields: down 13-20%. The career ladder is breaking, not through mass layoffs, but by eliminating how people learn and advance.

Read our latest on AIxJobs. Link in the comments.

3 months ago
Camille Carlton on the Hidden Dangers of Chatbots & AI Governance | RegulatingAI Podcast
YouTube video by The RegulatingAI Podcast

Check out the full conversation: www.youtube.com/watch?v=iKg0...

4 months ago
Post image

Our Policy Director Camille Carlton joined the Regulating AI Podcast to discuss the mental health risks of AI companions, the recent lawsuits against OpenAI and Character AI, and what enforceable AI policy actually looks like.

Link to the full episode in the replies:

4 months ago
America and China Are Racing to Different AI Futures
YouTube video by Center for Humane Technology

Watch: youtu.be/qDNFaAz3_Cw

Listen: podcasts.apple.com/us/podcast/a...

Read: open.substack.com/pub/centerfo...

4 months ago

Everyone talks about the AI race with China. But are we even racing toward the same thing?

In the latest episode of Your Undivided Attention, Tristan sits down with Selina Xu and Matt Sheehan to explore the state of Chinese AI development.

Link to the full episode in the replies:

4 months ago
The AI Dilemma — with Tristan Harris
Podcast Episode · The Prof G Pod with Scott Galloway · 12/11/2025 · 1h 1m

Apple Podcasts: podcasts.apple.com/us/podcast/t...

Spotify: open.spotify.com/episode/5n4y...

YouTube: youtu.be/MLvxRHlsMz0?...

4 months ago