
Posts by Geoffrey Irving


Arena allocators are pretty.

2 days ago 3 0 0 0

At least we got some (black hole) fireworks displays.

1 week ago 0 0 0 0

In a catastrophic typo, researchers ask superintelligence to optimize for CVE

1 year ago 82 4 7 1

What's your guess on the equilibrium?

3 weeks ago 0 0 0 0

Vibe coding parsers for untrusted data is a useful warning-sign-filled experience to have.

3 weeks ago 5 0 1 0

As long as the knives are moving very slowly.

3 weeks ago 3 0 1 0

(Hard to resist when it’s a TCS person replying…)

1 month ago 3 0 0 0
GitHub - girving/aks: Seiferas's simplified construction of the AKS O(log n) depth sorting network, formalised in Lean

“One could code” is shorthand for that these days. But it's not the kind of thing that is easy to recognize as correct, and thus a very bad thing to vibe code.

Or I could cheat: I want to watch for papers with easier-to-formalize near-Ramanujan expander constructions. Know any? :)

github.com/girving/aks

1 month ago 0 0 1 0

Any recommended tools for following some narrow area of research, to get notifications when interesting papers come out? By narrow I mean as specific as an arbitrary LLM prompt.

(I realise one could code up such a system, but I am interested in what exists without me writing it.)

1 month ago 5 1 2 0

After all it can’t be Coke Zeros, as that wouldn’t be a different thing.

1 month ago 0 0 0 0

If you use round to even, you’d get 4/1.

1 month ago 1 0 0 0

Ah, and also I currently bound size using the trivial corollary of the depth bound, but there's an additional size-only OOM or two that can be saved.

1 month ago 1 0 0 0

Until we get Ramanujan graphs, there are at least three ways to tighten the constants by smaller factors:

1. Use sparse halvers (only of size 2n^2 rather than all even numbers).
2. Save a factor of two off separator depth.
3. Tighten the bag sorting theorems.

1 month ago 1 0 1 0

The README.md has space for a timeline in any case.

1 month ago 1 0 1 0

It's a shame: the 1.41e64 constant is _almost_ an LLM capability measure ("What quality expanders can the machines formalise?"), but alas it appears there's an enormous jump from what's already done (MGG) to Ramanujan graphs w/ deep number theory, with little in between.

1 month ago 1 0 1 0

Alexander Hicks replied with some useful links on X. zkPi seems very on target!

x.com/alexanderlhi...

1 month ago 1 0 0 0

Note that only (2) and part of (1) are verified in this picture: most of (3) would not be. The main point would be spitting out a verifier that is much faster than the zkVM systems.

1 month ago 2 1 1 0

1. Start with Lean4Lean.
2. Rewrite it into a proven-equivalent nondeterministic computation, where as much work as possible is done via hints whose generation does not have to be checked.
3. Spit out Arkworks code.

1 month ago 1 0 1 0

I'm curious how long it would take someone to make an optimised SNARK system for Lean verification, based on Lean4Lean (arxiv.org/abs/2403.14064) and arkworks.rs. 🧵

1 month ago 3 0 1 0

(FWIW I do agree the models are very good, just also that this wasm test is easy to misinterpret as a measure of their value. github.com/girving/aks)

1 month ago 3 0 0 0

Neat!

How much slower is the WASM than writing the code in plain Javascript and running it through a normal Javascript VM? :)

1 month ago 1 0 1 0

Does it have any vulnerabilities?

1 month ago 1 0 1 0

studying chatgpt's busy beaver number: how long can it run and still halt. finished one prompt in slightly under 24 hrs. the response was just as unhinged as a human would sound after grinding that long

4 months ago 20 1 1 0

Your periodic reminder that software engineering is a mixture of easy-to-verify subtasks and hard-to-verify subtasks, and the fact that the machines are getting better at coding assistance should not be explained away as "they're only good on easy-to-verify stuff".

1 month ago 5 0 0 0

The constant on that O(log n) depth theorem is 1.41e64, because Margulis–Gabber–Galil is not a very good spectral expander. What we really need is Ramanujan graphs, but this requires harder number theory than is feasible with February 2026 Mathlib + LLMs.

Contributions welcome!

1 month ago 1 0 0 0

The mmap routine copies all data to a private temporary file before exposing it to Lean as a String to avoid the chance that someone mutates the data during the check.

github.com/girving/aks/...

1 month ago 0 0 1 0

This required generating a 6GB certificate in Rust in O(n^3) time, then checking it via an optimised, threaded, verified Lean certificate checker in O(n^2) time. I also wrote a small C FFI-based mmap routine to avoid constantly crashing the machines I was working on.

github.com/girving/aks/...

1 month ago 0 0 1 0

In addition to the MGG expander graph construction, I also formalised the zig-zag construction of expander graphs, using a native_decide-checked certificate that a particular "random" 16-regular base graph on 65536 vertices is a good spectral expander, per the discussion here.

xkcd.com/221

1 month ago 0 0 1 0
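The certificate-checking pattern can be shown in miniature (identifiers invented, not the repo's actual definitions): the property is phrased as a decidable check over concrete data, and native_decide runs the compiled checker instead of reducing in the kernel, which is what makes a 65536-vertex certificate feasible.

```lean
-- Toy decidable check over concrete data (hypothetical stand-in for the
-- real spectral-expansion certificate check).
def checkCert (n : Nat) : Bool :=
  (List.range n).all (· < n)

-- Discharged by running compiled code rather than kernel reduction.
theorem cert_ok : checkCert 65536 = true := by native_decide
```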

I would not be confident without careful checks: once Opus proved a theorem just for ɣ = ½ despite me saying I needed general ɣ. When I complained, it churned for a while and declared victory; when I looked, it had proved a theorem that took an arbitrary ɣ, but also an (hɣ : ɣ = ½) hypothesis.

1 month ago 0 0 1 0
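The failure mode is easy to reproduce in miniature (hypothetical names; this is not the actual theorem): the statement quantifies over γ, but the smuggled hypothesis means it can only ever be applied at γ = 1/2.

```lean
import Mathlib.Tactic

-- Looks general in γ, but the extra hypothesis hγ makes it no stronger
-- than the γ = 1/2 special case — exactly the trap described above.
theorem looks_general (γ : ℝ) (hγ : γ = 1/2) : γ ≤ 1 := by
  rw [hγ]; norm_num
```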

I've run the proof through two post-hoc proof checkers via github.com/leanprover/c... (Lean's and github.com/ammkrn/nanod...), so I'm reasonably confident it's correct. See github.com/girving/aks/... for the spec.

1 month ago 0 0 1 0