It would be better to have a prepared government that already has practice getting things right, rather than a government rushing to the scene after it’s already too late.
The point is not that Mythos will go rogue. The concern is that an AI 10 more iterations beyond Mythos could go rogue… and Mythos illustrates, perhaps for the first time, how a superintelligent AI going rogue would actually be a big deal for national security.
Every time a new model comes out, people focus on what it can do right now and don’t think enough about the trend line.
A year ago, AI could barely hack. By June 2025, AI was helpful for hacking, but it wasn't until November that AI could carry out attacks autonomously.
What could’ve happened if Anthropic had simply released Mythos publicly, as most AI companies would do with a flagship model? There’s no law against it. Overnight, every intelligence community operation that depends on signals exploitation is potentially compromised.
Anthropic made every consequential decision in this story. Whether to lock down, what to lock down, when to tell the government, what to share, who gets early access and who doesn’t, how to vet those who get access, and what “responsible” means across all of this…
What should government policy be when a company produces, among other things, an unparalleled cyberweapon? What if future releases are even more capable?
Today I ask these questions about Mythos. Because Mythos is just the beginning. blog.peterwildeford.com/p/mythos-is-...
New blog post from Theo Bearman and me on distillation.
Distillation attacks are already happening: Chinese AI companies train on the outputs of US AI models, using them to make their own models better than they otherwise would be.
What does this mean? peterwildeford.substack.com/p/china-is-r...
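(For anyone who wants the mechanics: distillation just means training a "student" model to imitate a "teacher." Below is a minimal illustrative sketch in PyTorch of the classic logit-matching form; the names and setup are mine, not any lab's pipeline. An attacker scraping an API only sees sampled text, so in practice they would fine-tune on prompt/response pairs rather than logits, but the principle is the same.)

```python
# Minimal, illustrative sketch of knowledge distillation (not any lab's
# actual pipeline). A "student" model is trained to match a "teacher"
# model's output distribution.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      temperature: float = 2.0) -> torch.Tensor:
    """KL divergence between temperature-softened teacher and student outputs."""
    student_log_probs = F.log_softmax(student_logits / temperature, dim=-1)
    teacher_probs = F.softmax(teacher_logits / temperature, dim=-1)
    # The T^2 factor keeps gradient scale comparable across temperatures.
    return F.kl_div(student_log_probs, teacher_probs,
                    reduction="batchmean") * temperature ** 2

# One training step (teacher frozen; only the student is updated):
# with torch.no_grad():
#     teacher_logits = teacher(batch)
# loss = distillation_loss(student(batch), teacher_logits)
# loss.backward(); optimizer.step()
```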
🔴Rep Johnson (SD)
🔵Rep Liccardo (CA)
🔴Rep Kiley (CA)
🔵Rep Lieu (CA)
🔴Rep Mace (SC)
🔵Rep Moulton (MA)
🔴Rep Moran (TX)
🔵Rep Sherman (CA)
🔴Rep Luna (FL)
🔵Rep Tokuda (HI)
🔴Rep Perry (PA)
🔵Rep Whitesides (CA)
🔴Sen Lee (UT)
🔵Sen Sanders (VT)
🔴Sen Lummis (WY)
🔵Sen Schumer (NY)
🔴Rep Biggs (AZ)
🔵Rep Beyer (VA)
🔴Rep Burlison (MO)
🔵Rep Casten (IL)
🔴Rep Crane (AZ)
🔵Rep Foster (IL)
🔴Rep Dunn (FL)
🔵Rep Krishnamoorthi (IL)
(continued)
30 current members of Congress have publicly discussed AGI, AI superintelligence, AI loss of control, recursive self-improvement, or the Singularity:
🔴Sen Banks (IN)
🔵Sen Blumenthal (CT)
🔴Sen Blackburn (TN)
🔵Sen Hickenlooper (CO)
🔴Sen Hawley (MO)
🔵Sen Murphy (CT)
(continued)
agree that's key. It's obviously harder to measure, but it seems to be increasing at a roughly similar rate, just from a lower base.
x.com/sama/status/...
But people should know what the "red lines" rest on, which is just "trust us bro". Nothing else.
I expect Sam Altman has a much better relationship with the Pentagon, so maybe this will work. I certainly wish him and OpenAI luck and I hope they can de-escalate the situation.
And recall that this is the same Pentagon that just went "0 to 60" in declaring Anthropic a supply chain risk, despite this designation previously being a Cold War national security tool normally used only against Chinese and Soviet companies.
To emphasize - OpenAI's "red lines" are just held together by trust that the Pentagon won't screw OpenAI over on this.
This is one way to go about it, and it's OpenAI's right to decide how to do business. But it's a lot less reassuring than what OpenAI had originally been saying. They had said that their approach was more ironclad than Anthropic's, and it's just... not.
I asked this question to Sam Altman and the way I interpreted his reply was that they are going to use the "deployment architecture and safety stack" and they expect the Pentagon to be good people and not push back. And if they do push back, then OpenAI would decide what to do.
The Pentagon can just say "we both know your model can do this, you should remove that safeguard". And then OpenAI would have to comply or be sued.
The way OpenAI bridges this is by saying the protections live in this "deployment architecture and safety stack" rather than the contract language. But if this contract says "all lawful purposes" and your safety stack prevents a lawful purpose, you're in breach of contract.
OpenAI is trying to claim simultaneously that (a) their contract with the Pentagon allows for "all lawful purposes" and (b) also that their red lines are fully protected.
The Pentagon has a legitimate principle that private companies shouldn't hold moral vetoes over military doctrine.
But they agreed to the contract. And now they're using unprecedented + disproportionate coercion. This should trouble everyone.
My latest - peterwildeford.substack.com/p/the-pentag...
Are there Cold War lessons to learn for AI?
We had very fierce competition with the Soviets and did not trust them at all, but we were still able to make mutually verified treaties.
In Politico today I'm quoted saying we should do the same with China → www.politico.com/newsletters/...
Adversaries can tamper with or poison leading US models. There can also be risks from insider threats, including, potentially, the AIs themselves.
Dave Banerjee at IAPS has a roadmap for how to defend -> www.iaps.ai/research/ai-...
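To make "tamper" concrete, here is one baseline control, sketched by me for illustration rather than taken from the roadmap: verify model weight files against known-good digests recorded at training time, before ever loading them.

```python
# Illustrative sketch (mine, not from the IAPS roadmap): detect tampering
# with model artifacts by checking SHA-256 digests against a manifest
# recorded at training time. A real deployment would also sign the manifest.
import hashlib
import json
from pathlib import Path

def sha256_file(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream a file through SHA-256 so large weight files fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_artifacts(manifest_path: Path) -> bool:
    """Compare each artifact's digest to the known-good value in the manifest."""
    manifest = json.loads(manifest_path.read_text())  # {"weights.bin": "<hex>", ...}
    ok = True
    for rel_path, expected in manifest.items():
        actual = sha256_file(manifest_path.parent / rel_path)
        if actual != expected:
            print(f"TAMPERED OR CORRUPTED: {rel_path}")
            ok = False
    return ok

# Usage: refuse to serve weights that fail verification.
# if not verify_artifacts(Path("model/manifest.json")):
#     raise SystemExit("refusing to load unverified model artifacts")
```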
AI is a real thing
yeah I agree - that's a good point
Maybe you'd see major progress on CAIS's Remote Labor Index or OpenAI's "OpenAI Proof Q&A"?
agree - probably measurement noise (in both estimates)
16. Joseph Sifakis (Turing Award '07)
17. John C. Mather (Physics '06)
18. Frank Wilczek (Physics '04)
19. Joseph Stiglitz (Economics '01)
20. Andrew Yao (Turing Award '00)
*The Turing Award is equivalent to the CS Nobel
7. Giorgio Parisi (Physics '21)
8. Jennifer Doudna (Chemistry '20)
9. Yoshua Bengio (Turing Award '18*)
10. Beatrice Fihn (Peace '17)
11. Oliver Hart (Economics '16)
12. Juan Manuel Santos (Peace '16)
13. Ahmet Üzümcü (Peace '13)
14. Jean Jouzel (Peace '07)
15. Riccardo Valentini (Peace '07)
20 Nobel Prize winners have warned that we may someday lose human control over advanced AI systems
1. Geoffrey Hinton (Physics '24)
2. John Hopfield (Physics '24)
3. Demis Hassabis (Chemistry '24)
4. Daron Acemoglu (Economics '24)
5. Ben Bernanke (Economics '22)
6. Maria Ressa (Peace '21)