
Posts by MIRI

Buy it here: product.kyobobook.co.kr/detail/S0002...

1 week ago
Post image

The Korean edition of <<AI, 신의 탄생 인간의 종말>> (by Eliezer Yudkowsky and Nate Soares) is now available. 🇰🇷

1 week ago
Preview
Promising Signals on AI Governance from China - Machine Intelligence Research Institute View the official memo here.

Read the full memo here: intelligence.org/2026/04/06/p...

2 weeks ago
Post image

Is it possible to coordinate with China on AI governance?

Critics of our proposed international agreement say no. But statements from Chinese government officials and academic figures paint a more optimistic picture:

2 weeks ago
Preview
If Anyone Builds It, Everyone Dies The race to superhuman AI risks extinction, but it's not too late to change course.

Learn more: ifanyonebuildsit.com/pt-BR

2 weeks ago
Post image

The Brazilian edition of SE ALGUÉM CRIAR, TODOS MORREM, by Eliezer Yudkowsky and Nate Soares, is now available 🇧🇷

2 weeks ago
Preview
The AI Doc: Your Questions Answered - Machine Intelligence Research Institute So you’ve just seen The AI Doc: Or How I Became an Apocaloptimist, and you suddenly have questions, lots of them. The 104-minute documentary (currently in

For more questions, and more detailed answers, read the full post here:
intelligence.org/2026/03/27/t...

3 weeks ago

7. What can we do about it?

Racing toward superhuman AI, with our current lack of understanding, is extremely dangerous. To survive, we need an international agreement to halt the race.

3 weeks ago

6. Can we train AI to care about humans?

Not with current methods, which are based on trial-and-error and observed behavior. These methods cannot ensure that AI actually cares about what we want it to care about, rather than only appearing to care in certain situations.

3 weeks ago

5. Would more safety testing help?

Because no one understands how AI systems work, safety testing is limited to observing their behavior in controlled tests.

This is insufficient, especially now that AI systems are becoming powerful enough to often recognize when they're being tested.

3 weeks ago

4. Can we trust AI labs to proceed safely?

In 2023, OpenAI said “Currently, we don’t have a solution for steering or controlling a potentially superintelligent AI, and preventing it from going rogue.” This remains true for OpenAI and its competitors.

3 weeks ago

3. What does “AI is grown, not programmed” mean?

Modern AI development starts with a neural network, feeds it lots of data, and lets an algorithm automatically nudge it toward better-looking behavior. As a result, no one actually understands how the resulting systems work.

3 weeks ago
Post image

2. AI is still bad at many things; how could it threaten humanity within years?

No one knows how far AI companies are from dangerously capable AI, but if current trends continue, maybe not far. And if the companies automate more of their R&D, it might advance even faster.

3 weeks ago
Video

1. Is all of this just hype?

Answer: No. AI has improved rapidly and continues to become more powerful. It can pilot robot bodies, do hours-long programming tasks, win IMO gold medals, generate realistic video, and much more than fits in 280 characters.

3 weeks ago

Warning: mild spoilers.

If you haven’t watched The AI Doc: Or How I Became an Apocaloptimist yet, bookmark this thread, go watch it, and come back later. We think everyone should watch it.

3 weeks ago
Post image

This weekend, The AI Doc debuted across the country. We hope it ignites a large public discussion about the future of AI and where to steer it.

But if you’re new to this topic and just saw the film, you might have questions. Here are answers to 7 of them 🧵

3 weeks ago
Preview
If Anyone Builds It, Everyone Dies The race to superhuman AI risks extinction, but it's not too late to change course.

Learn more: ifanyonebuildsit.com/nl

3 weeks ago
Post image

The New York Times bestseller ALS IEMAND DIT BOUWT GAAT IEDEREEN DOOD, by Eliezer Yudkowsky and Nate Soares, is now out in Dutch 🇳🇱

3 weeks ago
Stop The AI Race · March 21, 2026 Marching to Anthropic, OpenAI, and xAI. Asking AI CEOs to commit to a conditional global pause. March 21, San Francisco.

In SF this Saturday (March 21), stoptherace.ai is organizing a protest traveling from Anthropic to OpenAI to xAI, asking each CEO to publicly commit to pausing frontier AI R&D if every other major lab does the same.

@so8res.bsky.social hopes to speak to the group as a whole. Be there if you can!

1 month ago

Yes, the Cambridge in Massachusetts, not the Cambridge in the UK! This is an event hosted by @harvardsciencebook.bsky.social

1 month ago
Preview
Nate Soares at the Harvard Science Center | Harvard Book Store presenting If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All, in conversation with Greg Kestin

learn more and RSVP here: www.harvard.com/event/nate-s...

1 month ago
Post image

Tomorrow at 6pm: if you're near Cambridge, join @so8res.bsky.social for a talk + Q&A on "If Anyone Builds It, Everyone Dies."

1 month ago

To learn more, read Eliezer and Nate's recent NYT bestseller: ifanyonebuildsit.com

1 month ago

Senator Sanders met with Eliezer Yudkowsky, Nate Soares, Daniel Kokotajlo, and Jeffrey Ladish to discuss the extinction threat posed by the race to build superhuman AI systems.

1 month ago
Si alguien la crea, todos moriremos - Eliezer Yudkowsky, Nate Soares | PlanetadeLibros. Si alguien la crea, todos moriremos, by Eliezer Yudkowsky and Nate Soares: an urgent call to halt the race toward superintelligence.

Get your copy here: www.planetadelibros.us/libro-si-alg...

1 month ago
Post image

Now available in Spanish: "Si alguien la crea, todos moriremos" by Eliezer Yudkowsky and @so8res.bsky.social.

The New York Times bestseller on why artificial superintelligence is a threat to humanity, and why the race to build it must stop.

1 month ago
Video

On BBC, MIRI CEO @malo.online discusses the dispute between Anthropic and the DoW:

"I really worry about the big questions of how we'll coordinate to set regulation, and potentially coordinate internationally[...] This is not a good first test."

1 month ago
New Report: An International Agreement to Prevent the Premature Creation of Artificial Superintelligence | MIRI TGT Nov 18, 2025 - We at the MIRI Technical Governance Team have released a report describing an example international agreement to halt the advancement towards artificial superintelligence. The agreement...

Learn more about the proposal here: techgov.intelligence.org/blog/new-rep...

1 month ago
Post image

This week at the IASEAI conference, MIRI researcher @pbarnett.bsky.social discusses how and why the Technical Governance Team's proposed international agreement could halt the development of superintelligence.

1 month ago
Preview
AI technologies ‘often behave in ways their creators don’t want,’ warns expert | CNN Paula Newton speaks with Nate Soares, co-author of “If Anyone Builds It, Everyone Dies,” about the tense standoff between the artificial intelligence company Anthropic and the Pentagon and where the t...

Check out the full interview here:
www.cnn.com/2026/02/26/t...

1 month ago