
Posts by Nate

I agree. There are actions that AI execs could yet take that would make a big difference for Earth's survival, by alerting the world to the danger. In fact, some actions far short of burning the whole company down could help in that regard.

1 month ago

In an interview today I said AI execs don't really have the power to stop this suicide race; we need global intervention. The interviewer objected that if one of these companies simply shut down, foregoing billions and tanking lawsuits, the world wouldn't be able to ignore it.

1 month ago

I'm partway through seven Spanish interviews and three Dutch ones, and they're asking great questions. No "please relate this to modern politics for me", just basics like "What do you mean that nobody understands AI?" and "Why would it kill us?" and "holy shit". Warms the heart.

1 month ago

For anyone interested in learning more about the extinction threat, Eliezer and I recently wrote a book about it: ifanyonebuildsit.com. It's a grim title, but it starts with "if", and the book ends on a hopeful note. The danger can be averted if we act in time.

1 month ago

I'm honored that you took the time to speak with us. It gives me hope.

1 month ago

ChatGPT is just a sophisticated text transformer. ChatGPT cannot kill you. That doesn't mean that smarter text transformers are impossible, nor that they would be safe.

1 month ago

A housecat is just biochemical reactions. A housecat cannot kill you. That doesn't mean that tigers are impossible, nor that they are safe.

1 month ago

When it comes to stopping the reckless race to artificial superintelligence: Don't sit around hoping for a warning shot. Prepare well.

2 months ago

The moral of the first tale is that warning shots alone don't save you. The moral of the second tale is that you don't need to wait for exactly the right disaster to do something you think needs doing. You just need to prepare well, and wait for an excuse.

2 months ago

A tale of two warning shots, #2: George W. Bush was worried about Saddam Hussein. Some completely unrelated terrorists slammed planes into the twin towers. That was the final straw. The USA took out Saddam Hussein.

2 months ago

A tale of two warning shots, #1: COVID happened. Scientists are divided on whether it was a lab leak. The world did not rally against dangerous viral research in labs. The warning shot was squandered.

2 months ago

(Also: modern AIs are trained on problem-solving, not just prediction. But even ignoring that, "next-token prediction" is not an atomic ability that is superficial and therefore safe; doing it really really well would require potent & dangerous internal mechanics.)

2 months ago

"The AI just predicts the next token" is like "the container just lifts up." How does it achieve lift? Both a hot air balloon and a rocket lift up, but one of them violently explodes and kills everyone in range if you design a big one even slightly wrong.

2 months ago

Just having the book in front of the crowd here was probably worth more than all my sideline interviews and roundtables combined. I'm honored by whoever made that call. I've heard folks behind the conf see superintelligence as a possibility worth putting on the radar. Progress.

2 months ago

The empty holders are where more copies used to sit, and at the beginning of the conference there were stacks. When I walked in just now, the people working at the bookstore were reading it and asked for my signature.

2 months ago

The Munich Security Conference has a little bookstore where they feature about a dozen books, including If Anyone Builds It, Everyone Dies. Most of the books have multiple stacks left. My book has exactly one copy left.

2 months ago

I gave my remarks to a bunch of high-powered intelligence community folks, and didn't shy away from the dangers. Most responded with something mild that avoided acknowledging superintelligence as a possibility. But they also didn't argue against it. I'm honored they invited me. It's progress.

2 months ago

A couple people have recognized me and told me they like my book and confided in me that they harbor worries. Then when it's their turn to talk they say something mild that avoids acknowledging superintelligence as a possibility.

2 months ago

I also feel a bit like a child here because I keep feeling like the child straight out of "The Emperor's New Clothes".

2 months ago

I was invited to give remarks at a high-powered intelligence community dinner, but my badge lacked the "high powered intelligence community member" marker so I got held up at the door until a grownup came and rescued me.

2 months ago

I, uh, also had to go to the mall and buy a suit real quick, once I noticed what kind of conference I was at.

2 months ago

For some reason the check-in instructions didn't make it to me, so I just tried to waltz onto the premises rather than routing through the special check-in desk that was squirreled away in a building I was unaware of. The police didn't love that.

2 months ago

I feel like a child here, in part because most everyone else is older and in part because I keep having funny mishaps. For instance:

2 months ago

I'm at the Munich Security Conference. It's interesting. It feels a lot like a normal academic conference, except it's packed with generals and senators and there's armed police pouring out of every orifice. 🧵

2 months ago