
Posts by Rob Bensinger

Post image

Note that people's placements are very approximate, and reliant on some amount of guesswork. Also, there are huge selection effects: most people haven't publicly weighed in on this topic! Treat this image more like a vague review of public statements, and less like a representative survey.

1 week ago
Post image

Who should I add to this? Also, did I get anyone's view wrong? What's your own view about AI companies' attempts to build ever-larger data centers in the race to build superhuman AI?

1 week ago

Good thing it's only killing them off figuratively... for now.
bsky.app/profile/robb...

1 month ago
AI Expert Tells Bernie: “The Humans will be Discarded”
YouTube video by Senator Bernie Sanders

Will AI become smarter than humans?

If so, is humanity in danger?

I went to Silicon Valley to ask some of the leading AI experts that question.

Here’s what they had to say:

1 month ago

A tale of two warning shots, #1: COVID happened. Scientists are divided on whether it was a lab leak. The world did not rally against dangerous viral research in labs. The warning shot was squandered.

2 months ago
Preview
A Near-Term Policy for Not Getting Killed by AI
Hundreds of scientists, including three of the four most cited living AI scientists, have said that AI poses a very real chance of killing us all.

In post form: nothingismere.substack.com/p/a-near-ter...

2 months ago
Preview
A Near-Term Policy for Not Getting Killed by AI
Hundreds of scientists, including three of the four most cited living AI scientists, have said that AI poses a very real chance of killing us all.

nothingismere.substack.com/p/a-near-ter...

2 months ago

bsky.app/profile/robb...

2 months ago
Preview
What Would It Take to Shut Down Global AI Development? | If Anyone Builds It, Everyone Dies
Resources and Q&A for the book If Anyone Builds It, Everyone Dies.

¹¹ ifanyonebuildsit.com/13/what-woul...
¹² forbes.com/sites/federi...
¹³ ifanyonebuildsit.com/treaty (different version at arxiv.org/pdf/2511.10783)
¹⁴ reuters.com/world/china/...
¹⁵ archive.ph/K9mVn

2 months ago
Preview
How much does it cost to train frontier AI models?
The cost of training top AI models has grown 2-3x annually for the past eight years. By 2027, the largest models could cost over a billion dollars.

Sources:
¹ epoch.ai/blog/how-muc...
² epoch.ai/data-insight...
³ arxiv.org/pdf/2408.16074
⁴ datacenterdynamics.com/en/news/tsmc...
⁵ wsj.com/tech/a-criti...
⁶, ⁹ cset.georgetown.edu/publication/...
⁷ intelligence.org/wp-content/u...
⁸ x.com/ESYudkowsky/...
¹⁰ arxiv.org/pdf/2511.10783
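The growth claim above compounds quickly. As a rough illustration (the $100M 2024 baseline is an assumption for the sketch, not a figure from the post), projecting 2-3x annual growth forward three years shows how the range reaches the post's "over a billion dollars" mark:

```python
# Rough illustration only: assume a ~$100M frontier training run in 2024
# (an assumed baseline, not a figure from the post) and project three
# years of 2x-3x annual cost growth out to 2027.
base_cost_musd = 100  # assumed 2024 cost, in millions of USD
years = 3             # 2024 -> 2027

for growth in (2, 3):
    cost_2027 = base_cost_musd * growth ** years  # compounding annual growth
    print(f"{growth}x/year: ~${cost_2027}M by 2027")
```

Under these assumptions the slow end lands around $800M and the fast end around $2.7B, bracketing the billion-dollar projection.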

2 months ago

My hope, in writing this, is to wake people up a bit faster. If you share that hope, maybe share this post, or join the conversation about it; or write your own, better version of a "wake-up" warning. Don't give up on the world so easily.

2 months ago

Policies that prevent human extinction are good for liberal democracies and for authoritarian regimes, so clueful people on all sides will endorse those policies.

The question, again, is just whether people will clue in to what's happening soon enough to matter.

2 months ago

The CCP is a US adversary. That doesn't mean they're idiots who will destroy their own country in order to thumb their nose at the US. If a policy is Good, that doesn't mean that everyone Bad will automatically oppose it.

2 months ago
Post image

The pitch "We can't let China beat us at Russian roulette!" is not very compelling. Even if you suspect China might be unwilling to make a deal, there's zero cost to making an attempt. And the US has already expressed some interest in brokering an international agreement as well:

2 months ago
Post image Post image

And, quoting The Economist:¹⁵

2 months ago

A: The CCP has already expressed interest in international coordination and regulation on AI. E.g., Reuters reported that Chinese Premier Li Qiang said, "We should strengthen coordination to form a global AI governance framework that has broad consensus as soon as possible."¹⁴

2 months ago

So the nuclear analogy is pretty limited in what it can tell us. But it can tell us that international law and norms have enormous power.

- Q: But what about China? Surely they’d never agree to an arrangement like this.

2 months ago

Going from "zero superintelligences" to "one superintelligence" is already lethally dangerous. The challenge is to block the construction of ASI while there's still time, not to limit proliferation after it already exists, when it's far too late to take the steering wheel.

2 months ago

... and were instead facing a world where dozens or hundreds of nations possess nuclear weapons.

When it comes to superintelligence, anyone building "god-like AI" is likely to get us all killed — whether the developer is a military or a company, and whether their intentions are good or ill.

2 months ago

By analogy, nuclear nonproliferation efforts haven’t been perfectly successful. Over the past 75 years, the number of nuclear powers has grown from 2 to 9. But this is a much more survivable state of affairs than if we hadn’t tried to limit proliferation at all...

2 months ago

If instead a tiny fraction of the world is trying to find sneaky ways to build a small, researcher-starved frontier AI project here and there, while dealing with enormous international pressure and censure, then that seems like a much more survivable situation.

2 months ago

... (and no, I don't think this is realistic in the current landscape)...

... that chance increasingly goes out the window as the race heats up, because prioritizing safety will mean sacrificing your competitive edge.

2 months ago

If the whole world is racing to build superintelligence as fast as possible, then we’re very likely dead. Even if you think there's a chance that cautious devs could stay in control as AI starts to vastly exceed the intelligence of the human race...

2 months ago

A: It’s very rare for countries (or companies!) to deliberately violate international law. It’s rare for countries to take actions that are widely seen as serious threats to other nations’ security. (If it weren't rare, it wouldn't be a big news story when it does happen!)

2 months ago

- Q: But surely there will be countries that end up defecting from such an agreement. Even if you’re right that it’s in no one’s interest to race once they understand the situation, plenty of people won’t understand the situation, and will just see superintelligent AI as a way to get rich quick.

2 months ago

(Some templates of agreements that would do the job have already been drafted.¹³)

Governments can create a deterrence regime by articulating clear limits and enforcement actions. It’s in no country’s interest to race to its own destruction, and a deterrence regime like this provides an alternative path.

2 months ago

Q: But if the US halts, isn’t that just ceding the race to authoritarian regimes?

A: The US shouldn’t halt unilaterally; that would just drive AI research to other countries. Rather, the US should broker an international agreement where everyone agrees to halt simultaneously.

2 months ago

What's left is to dial up the volume on that talk, translate that talk into planning and fast action, and recognize that "there's uncertainty about how much time we have left" makes this a more urgent problem, not less.

2 months ago

At that point, the cat has already firmly left the bag. (And it's not as though there's anything unusual about governments heavily regulating powerful new technologies.)

2 months ago

Building superintelligence is unpopular with the voting public,¹² and hundreds of elected officials have already named this issue as a serious priority. The UN Secretary-General and major heads of state are routinely talking about AI loss-of-control scenarios and human extinction.

2 months ago