Someone should build seL4-ablate-bench. Progressively delete proofs, lemmas, theorems and see how much a long-running AI agent can reconstruct. End state: just give it the code + top spec, and rebuild the whole 1m+ line Isabelle proof
Posts by Mike Dodds
Context: Knuth on Claude www-cs-faculty.stanford.edu/~knuth/paper...
I formalised the Knuth / Stappers / Claude theorem in Lean 4. Claude for scaffolding and Harmonic’s Aristotle AI for the core proofs
This is just the construction Claude found, not all 760 constructions
(Disclaimer: theorems look plausible to me, but mistakes possible)
github.com/septract/cla...
Commands: #tcb (what's in your trust base), #tcb_tree (dependency graph), #tcb_why (why is this included?). v0.1.0, rough edges expected, feedback welcome
github.com/OathTech/lea...
Weekend project w/ Claude: in Lean, it can be hard to know which definitions you need to review to trust a theorem. So I built lean-tcb. It figures out your trusted computing base, ie the definitions that actually give a theorem its meaning (vs proof machinery the kernel checks)
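A sketch of how the lean-tcb commands mentioned above might look in a Lean file. The import name and the output descriptions are assumptions for illustration, not the tool’s actual interface:

```lean
-- Hypothetical sketch: the import name and comment descriptions below are
-- assumptions, not lean-tcb's documented interface.
import LeanTcb

theorem two_mul_eq_add (n : Nat) : 2 * n = n + n := by
  omega

#tcb two_mul_eq_add       -- definitions that give this theorem its meaning
#tcb_tree two_mul_eq_add  -- dependency graph of those definitions
#tcb_why Nat.add          -- why a particular definition is in the trust base
```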
I got curious whether Claude Code could handle a low-representation theorem prover like ACL2 - turns out yes! I proved a bunch of small-to-medium theorems, and for good measure built an MCP server, all in about 4 hrs. I’d never used ACL2 before. Write-up here: mikedodds.org/posts/2025/1...
That’s true, and I think that’s exactly why Claude does so well at proofs. It’s just that I happen to know first-hand that proofs are a particularly difficult *kind* of program
I wrote about Claude Code, which to my absolute astonishment is quite good at theorem proving. For people who don't know theorem proving, this is like spending your whole life building F1 engines and getting lapped by a Tesco's shopping trolley www.galois.com/articles/cla...
Screenshot of article text: “Formal verification today is very useful, but for most systems it’s very difficult to write the kinds of complete specifications that verification needs. However, other kinds of specifications are popular: for example, a test case is a kind of limited, partial specification. The key is that a test case is immediately useful, and doesn’t impose undue costs on the development team. We need to find ways to specify systems that have these virtues, and avoid the trap of imposing a complete and coherent view that fundamentally does not exist.”
Screenshot of article text: “I’ve come to think writing formal specifications is just a very difficult task. It requires a top-down view of the system that designers and engineers typically don’t have or, more importantly, need. In contrast, informal specifications can be ambiguous, partial, flexible. Informal specifications are intended as communication mechanisms between humans, and as a result they can be ‘wrong but useful’, and elide aspects of the system that are not of interest. This is a strength, but it also results in systems that can’t be easily formalized.”
Screenshot of article text: “I think systems are typically both designed from the top and grown incrementally. Most systems have some degree of top-down structure, but few systems have a mathematically coherent specification that covers every behavior. The effect is that most systems obey some formal specification for some core functionality, but outside this core we rapidly enter muddy territory where it is unclear what the system should do, or whether the designer should even care.”
Screenshot of article text: “It’s a formal verification cliché that writing the specification tends to uncover most of the bugs in a system. To me, this suggests an analogy between specification and programming—both are tools for expressing what we want. In one way, this is a pessimistic thought: no tool can remove the burden of clarifying our ideas. But also, it gives me some hope. Programming is very difficult, but through careful tool design, we’ve made it available to hundreds of millions of people. With luck and skill, perhaps we can do the same for specifications.”
New Galois blog: “Specifications Don’t Exist”. If we want to formally verify more systems, we need formal specifications, but most real systems are hard to specify for very deep reasons www.galois.com/articles/spe...
I’m not sure how I missed this but it’s an extremely good article and you should absolutely read it. It’s about formal methods, but anyone who cares about integrating research into industry will find it valuable!
I saw a *ton* of parallels with resilience engineering too :)
Hey :) Seems like a lot of people moved here from old Twitter and I’m still catching up
A labyrinth icon, serving as a metaphor for the process of formal verification
At Galois, we often say things like: “Formal methods form the backbone of everything we do.”
But what exactly are formal methods? How do they work, and why are they so important?
We created a handy reference page to explain: www.galois.com/what-are-for...
Text screenshot: I sometimes hear people claiming that formal methods are demonstrably better than the techniques software engineers mostly use today. The only reason formal techniques aren’t more popular (according to this theory) is that engineering teams are unaware, conservative, maybe put off by superficial difficulties like poor interfaces and documentation. I don’t think this is quite right. My observation is that engineers are mostly rational when thinking about costs and benefits, at least within the bounds of their particular systems and problems.
If a tool is not popular, it’s uncompelling to argue that everyone is just mistaken. At some point you should ask why the tool isn’t useful (at the current cost/benefit point)
A graph of costs and benefits plotted against each other. There is a line under which is “Your favourite under-appreciated formal method”. There are two arrows pointing orthogonally away: “be cheaper” and “be more beneficial”
New-ish @galoisinc.bsky.social blog: “What Works (and Doesn't) Selling Formal Methods”. The boring truth: engineers are rational and adoption is all about cost/benefit tradeoffs www.galois.com/articles/wha...
What actually works when selling formal methods in industry?
What doesn't?
The way Galois Principal Scientist @m-dodds.bsky.social sees it, many FM projects don’t pencil out, not because clients are irrational, but because the cost/benefit tradeoffs don’t make sense.
www.galois.com/articles/wha...
c2rust is available on the Godbolt Compiler Explorer! c2rust is a tool we developed with Immunant that can convert nearly any piece of C code into compilable Rust godbolt.org/z/crsWEGEKM
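To give a flavour of what that conversion looks like: c2rust characteristically emits unsafe, C-shaped Rust that preserves the original semantics. This is a hand-written illustrative sketch, not actual tool output; real c2rust output uses `libc::c_int` and more attributes than shown here:

```rust
// Hypothetical sketch of a c2rust-style translation (illustrative only;
// the real tool emits libc types and extra attributes).
//
// C input:
//   int clamp(int x, int lo, int hi) {
//       if (x < lo) return lo;
//       if (x > hi) return hi;
//       return x;
//   }

// The translation keeps C semantics behind an unsafe, C-ABI function:
pub unsafe extern "C" fn clamp(x: i32, lo: i32, hi: i32) -> i32 {
    if x < lo { return lo; }
    if x > hi { return hi; }
    x
}

fn main() {
    unsafe {
        assert_eq!(clamp(10, 0, 5), 5);  // clamped to upper bound
        assert_eq!(clamp(-2, 0, 5), 0);  // clamped to lower bound
        assert_eq!(clamp(3, 0, 5), 3);   // already in range
        println!("ok");
    }
}
```

The unsafe, mechanical style is the point: c2rust aims for compilable Rust first, leaving idiomatic cleanup to later refactoring.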
I wrote about o3, the Frontier Math benchmark, and what it means if AI math keeps getting better
I do think a lot of people are in denial though!
I don’t think literally everyone should drop what they’re doing. But my sense is PL research as a whole is significantly under-reacting to AI. So I suppose I think *some more* PL people should bet on AI (but maybe not you!)
Happy to mail you a couple. Email me, my address is on my website
I think you’ve put your finger on the exact worldview mismatch because 5-10 years seems like an insanely long time horizon to me
Why constrain the grammar - just pull more samples and keep the ones that pass :p
Hot take for POPL: the PL community is still mostly in denial about AI. This is bad because PL+AI go great together
- PL can solve the hardest problem with AI - trusting the output it produces
- AI can solve the hardest problem with PL - finding enough engineers who can even use the tools
I’m bringing these cute Galois stickers to POPL so if you want one, come find me
If I understand right, the private test set is used only during evaluation of the model and isn’t available to the team doing the training
Seems almost certain it’s deliberately trained on math reasoning. The way the o-series models seem to work is by long CoT, with reinforcement learning to impose correct reasoning. Not much public about how o3 works internally, but Chollet has some speculation: arcprize.org/blog/oai-o3-...
A big jump on coding skill as well: