
Posts by Peter Wildeford

It would be better to have a prepared government that already has practice getting things right, rather than a government rushing to the scene after it’s already too late.

1 day ago

The point is not that Mythos will go rogue. The concern is that an AI 10 more iterations beyond Mythos could go rogue… and Mythos illustrates, perhaps for the first time, why a superintelligent AI going rogue would actually be a big deal for national security.

1 day ago

Every time a new model comes out, people focus on what it can do right now and don’t think enough about the trend line.

A year ago, AI could barely hack. By June 2025, AI was helpful for hacking, but it wasn't until November that AI could hack autonomously.

1 day ago

What could’ve happened if Anthropic had simply released Mythos publicly, as most AI companies would do with a flagship model? There’s no law against it. Overnight, every intelligence community operation that depends on signals exploitation is potentially compromised.

1 day ago

Anthropic made every consequential decision in this story. Whether to lock down, what to lock down, when to tell the government, what to share, who gets early access and who doesn’t, how to vet those who get access, and what “responsible” means across all of this…

1 day ago
Mythos is just the beginning: If you were waiting for a sign that superintelligence is coming, this is it

What should government policy be when a company produces, among other things, an unparalleled cyberweapon? What if future releases are even more capable?

Today I ask these questions about Mythos. Because Mythos is just the beginning. blog.peterwildeford.com/p/mythos-is-...

1 day ago
China Is Reverse-Engineering America's Best AI Models: How AI distillation attacks risk extracting US frontier AI at scale

New blog post from Theo Bearman and me on distillation.

Distillation attacks are already occurring: Chinese AI companies train on the outputs of US AI models and use them to make their own models better than they otherwise would be.

What does this mean? peterwildeford.substack.com/p/china-is-r...
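For intuition, here's a minimal sketch of what a distillation attack mechanically looks like (a toy with made-up model sizes, not any company's actual pipeline): the attacker never sees the teacher model's weights or training data, only its responses to attacker-chosen queries, and trains a smaller student to imitate those responses.

```python
# Toy black-box distillation: a "student" learns to imitate a "teacher"
# using only the teacher's outputs. Hypothetical stand-in models; a real
# attack would query a frontier model's public API instead.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in for the frontier model: fixed weights, treated as a black box.
teacher = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 8))
teacher.eval()

# The attacker's smaller model, trained only on the teacher's outputs.
student = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 8))
opt = torch.optim.Adam(student.parameters(), lr=1e-3)
kl = nn.KLDivLoss(reduction="batchmean")
T = 2.0  # temperature: softened outputs carry more signal per query

for step in range(500):
    queries = torch.randn(64, 16)  # attacker-chosen inputs
    with torch.no_grad():          # no gradient access to the teacher,
        t_probs = torch.softmax(teacher(queries) / T, dim=-1)  # outputs only
    s_logprobs = torch.log_softmax(student(queries) / T, dim=-1)
    loss = kl(s_logprobs, t_probs)  # match the teacher's output distribution
    opt.zero_grad()
    loss.backward()
    opt.step()

print(f"final imitation loss: {loss.item():.4f}")
```

For language models the same logic applies at the sequence level: the student is fine-tuned on the teacher's generated text, which is why API outputs alone can be enough to transfer much of a model's capability.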

1 month ago

🔴Rep Johnson (SD)
🔵Rep Liccardo (CA)
🔴Rep Kiley (CA)
🔵Rep Lieu (CA)
🔴Rep Mace (SC)
🔵Rep Moulton (MA)
🔴Rep Moran (TX)
🔵Rep Sherman (CA)
🔴Rep Paulina Luna (FL)
🔵Rep Tokuda (HI)
🔴Rep Perry (PA)
🔵Rep Whitesides (CA)

1 month ago

🔴Sen Lee (UT)
🔵Sen Sanders (VT)
🔴Sen Lummis (WY)
🔵Sen Schumer (NY)
🔴Rep Biggs (AZ)
🔵Rep Beyer (VA)
🔴Rep Burlison (MO)
🔵Rep Casten (IL)
🔴Rep Crane (AZ)
🔵Rep Foster (IL)
🔴Rep Dunn (FL)
🔵Rep Krishnamoorthi (IL)
(continued)

1 month ago

30 current members of Congress have publicly discussed AGI, AI superintelligence, AI loss of control, recursive self-improvement, or the Singularity:

🔴Sen Banks (IN)
🔵Sen Blumenthal (CT)
🔴Sen Blackburn (TN)
🔵Sen Hickenlooper (CO)
🔴Sen Hawley (MO)
🔵Sen Murphy (CT)
(continued)

1 month ago

Agree, that's key. It's obviously harder to measure, but it seems to be increasing at a roughly similar rate, though from a lower base.

1 month ago

x.com/sama/status/...

1 month ago

But people should know what the "red lines" rest on, which is just "trust us bro". Nothing else.

1 month ago

I expect Sam Altman has a much better relationship with the Pentagon, so maybe this will work. I certainly wish him and OpenAI luck and I hope they can de-escalate the situation.

1 month ago

And recall that this is the same Pentagon that just went from 0 to 60 in declaring Anthropic a supply chain risk, a Cold War national security designation previously reserved for Chinese and Soviet companies.

1 month ago

To emphasize - OpenAI's "red lines" are just held together by trust that the Pentagon won't screw OpenAI over on this.

1 month ago

This is one way to go about it, and it's OpenAI's right to decide how to do business. But it's a lot less reassuring than what OpenAI had originally been saying. They had said their approach was more ironclad than Anthropic's, and it's just... not.

1 month ago

I asked Sam Altman this question, and the way I interpreted his reply was that they are going to rely on the "deployment architecture and safety stack" and expect the Pentagon to be good people and not push back. And if the Pentagon does push back, then OpenAI would decide what to do.

1 month ago

The Pentagon can just say "we both know your model can do this, you should remove that safeguard". And then OpenAI would have to comply or be sued.

1 month ago

The way OpenAI bridges this is by saying the protections live in the "deployment architecture and safety stack" rather than in the contract language. But if the contract says "all lawful purposes" and your safety stack prevents a lawful purpose, you're in breach of contract.

1 month ago

OpenAI is trying to claim simultaneously that (a) their contract with the Pentagon allows for "all lawful purposes" and (b) also that their red lines are fully protected.

1 month ago
The Pentagon's War on Anthropic: The Pentagon has a legitimate principle, and a terrible strategy for enforcing it

The Pentagon has a legitimate principle that private companies shouldn't hold moral vetoes over military doctrine.

But they agreed to the contract. And now they're using unprecedented + disproportionate coercion. This should trouble everyone.

My latest - peterwildeford.substack.com/p/the-pentag...

1 month ago
Cold War lessons for the AI era

Are there Cold War lessons to learn for AI?

We had fierce competition with the Soviets and didn't trust them at all, but we were still able to make mutually verified treaties.

In Politico today I'm quoted saying we should do the same with China → www.politico.com/newsletters/...

1 month ago

Adversaries can tamper with or poison leading US models. There are also risks from insider threats, including potentially from the AIs themselves.

Dave Banerjee at IAPS has a roadmap for how to defend -> www.iaps.ai/research/ai-...

1 month ago

AI is a real thing

1 month ago

yeah I agree - that's a good point

Maybe you'd see major progress on CAIS's Remote Labor Index or OpenAI's "OpenAI Proof Q&A"?

1 month ago

agree - probably measurement noise (in both estimates)

1 month ago

16. Joseph Sifakis (Turing Award '07)
17. John C. Mather (Physics '06)
18. Frank Wilczek (Physics '04)
19. Joseph Stiglitz (Economics '01)
20. Andrew Yao (Turing Award '00)

*The Turing Award is equivalent to the CS Nobel

1 month ago

7. Giorgio Parisi (Physics '21)
8. Jennifer Doudna (Chemistry '20)
9. Yoshua Bengio (Turing Award '18*)
10. Beatrice Fihn (Peace '17)
11. Oliver Hart (Economics '16)
12. Juan Manuel Santos (Peace '16)
13. Ahmet Üzümcü (Peace '13)
14. Jean Jouzel (Peace '07)
15. Riccardo Valentini (Peace '07)

1 month ago

20 Nobel Prize winners have warned that we may someday lose human control over advanced AI systems

1. Geoffrey Hinton (Physics '24)
2. John Hopfield (Physics '24)
3. Demis Hassabis (Chemistry '24)
4. Daron Acemoglu (Economics '24)
5. Ben Bernanke (Economics '22)
6. Maria Ressa (Peace '21)

1 month ago