Buy the book: product.kyobobook.co.kr/detail/S0002...
Posts by MIRI
The Korean edition of <<AI, 신의 탄생 인간의 종말>> (by Eliezer Yudkowsky and Nate Soares) is now available. 🇰🇷
Is it possible to coordinate with China on AI governance?
Critics of our proposed international agreement say no. But statements from Chinese government officials and academic figures paint a more optimistic picture:
The Brazilian edition of SE ALGUÉM CRIAR, TODOS MORREM, by Eliezer Yudkowsky and Nate Soares, is now available 🇧🇷
For more questions, and more detailed answers, read the full post here:
intelligence.org/2026/03/27/t...
7. What can we do about it?
Racing toward superhuman AI, with our current lack of understanding, is extremely dangerous. To survive, we need an international agreement to halt the race.
6. Can we train AI to care about humans?
Not with current methods, which are based on trial-and-error and observed behavior. These methods cannot ensure that AI actually cares about what we want it to care about, rather than only appearing to care in certain situations.
5. Would more safety testing help?
Because no one understands how AI systems work, safety testing is limited to observing their behavior in controlled tests.
This is insufficient, especially now that AI systems are becoming powerful enough to often recognize when they're being tested.
4. Can we trust AI labs to proceed safely?
In 2023, OpenAI said “Currently, we don’t have a solution for steering or controlling a potentially superintelligent AI, and preventing it from going rogue.” This is still true for OpenAI and its competitors.
3. What does “AI is grown, not programmed” mean?
Modern AI development starts with a neural network, feeds it lots of data, and lets an algorithm automatically nudge it toward better-looking behavior. As a result, no one actually understands how the resulting systems work.
2. AI is still bad at many things; how could it threaten humanity within years?
No one knows how far AI companies are from dangerously capable AI, but if current trends continue, maybe not far. And if the companies automate more of their R&D, it might advance even faster.
1. Is all of this just hype?
Answer: No. AI has improved rapidly and continues to become more powerful. It can pilot robot bodies, do hours-long programming tasks, win IMO gold medals, generate realistic video, and much more than fits in 280 characters.
Warning: mild spoilers.
If you haven’t watched The AI Doc: Or How I Became an Apocaloptimist yet, bookmark this thread, go watch it, and come back later. We think everyone should watch it.
This weekend, The AI Doc debuted across the country. We hope it ignites a large public discussion about the future of AI and where to steer it.
But if you’re new to this topic and just saw the film, you might have questions. Here are answers to 7 of them 🧵
The New York Times bestseller ALS IEMAND DIT BOUWT GAAT IEDEREEN DOOD, by Eliezer Yudkowsky and Nate Soares, is now out in Dutch 🇳🇱
In SF this Saturday (March 21), stoptherace.ai is organizing a protest traveling from Anthropic to OpenAI to xAI, asking each CEO to publicly commit to pausing frontier AI R&D if every other major lab does the same.
@so8res.bsky.social hopes to speak to the group as a whole. Be there if you can!
Yes, the Cambridge in Massachusetts, not the Cambridge in the UK! This is an event hosted by @harvardsciencebook.bsky.social
Tomorrow at 6pm: if you're near Cambridge, join @so8res.bsky.social for a talk + Q&A on "If Anyone Builds It, Everyone Dies."
To learn more, read Eliezer and Nate's recent NYT bestseller: ifanyonebuildsit.com
Senator Sanders met with Eliezer Yudkowsky, Nate Soares, Daniel Kokotajlo, and Jeffrey Ladish to discuss the extinction threat posed by the race to build superhuman AI systems.
Now available in Spanish: "Si alguien la crea, todos moriremos" by Eliezer Yudkowsky and @so8res.bsky.social.
The New York Times bestseller on why artificial superintelligence is a threat to humanity, and why the race to build it must stop.
On BBC, MIRI CEO @malo.online discusses the dispute between Anthropic and the DoW:
"I really worry about the big questions of how we'll coordinate to set regulation, and potentially coordinate internationally[...] This is not a good first test."
This week at the IASEAI conference, MIRI researcher @pbarnett.bsky.social discusses how and why the Technical Governance Team's proposed international agreement could halt the development of superintelligence.