It was two weeks, not six months. Pretty impressive actually, given that they didn't just find the problem but also made a working malicious circuit that met the qubit requirement.
Posts by Craig Gidney
I took solace in the part of the post where he'd bypassed the need to uncompute (a *HUGE* advantage for saving qubits) but was still struggling to match our qubit count 😈.
With an advantage that's physically impossible, I bet you can hit ~800 qubits instead of ~1150.
Congrats to Keegan Ryan for being the first to exploit the simulator we used to validate the secret quantum circuits: blog.trailofbits.com/2026/04/17/w...
It kills me that the (now fixed) bugs were simple (we didn't port the op validation code from C++ to Rust!), but that's to be expected.
It is transversal in the sense that it is performed by broadcasting the physical operation over the physical qubits of the logical qubit. And often this is fault tolerant.
...but yes I agree and the blog post makes this exact point. It is conditionally transversal, not truly transversal.
The Z stabilizers are needed to check for physical X errors. Immediately after a measurement, that's fine, because X errors do nothing to X eigenstates. But once you do the transversal T, the X errors start to matter, so you need to be able to check for them. So you need the Z stabilizers back before the T.
A Z basis stabilizer that touches qubit q does not commute with an X basis measurement of q. If you measure the stabilizer before and after the measurement, you won't get the same result. In a transversal X measurement this happens to all the Z stabilizers.
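For illustration, here's a toy version of that commutation rule in plain Python (a sketch written for this post, not code from any library): two Pauli strings commute iff they anticommute at an even number of positions, so a Z stabilizer overlapping a single-qubit X measurement at exactly one qubit anticommutes with it.

```python
def commutes(p, q):
    """Check whether two Pauli strings (e.g. "ZZ", "XI") commute.

    Single-qubit Paulis X, Y, Z anticommute pairwise; two Pauli
    strings commute overall iff the number of positions where they
    hold different non-identity Paulis is even.
    """
    anti = sum(1 for a, b in zip(p, q)
               if a != 'I' and b != 'I' and a != b)
    return anti % 2 == 0

# A weight-2 Z stabilizer vs an X measurement of its first qubit:
# one anticommuting position, so they don't commute and the
# measurement scrambles the stabilizer's value.
assert not commutes("ZZ", "XI")

# The same stabilizer vs a transversal XX: two anticommuting
# positions, so overall they commute.
assert commutes("ZZ", "XX")
```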
My dream paper is a zero-knowledge proof that I know the answer to the Collatz conjecture.
This is from a paper that should have appeared on arXiv today but due to technical issues will only be there tomorrow; for the moment it's at quantumai.google/static/site-...
See also this blog post on the idea: research.google/blog/safegua...
As timelines tighten, details might benefit attackers more than defenders. So we're trying something weird: proving a circuit exists without revealing it.
For example, here's a zero-knowledge proof that we found 10x smaller quantum circuits for ECDLP: github.com/tanujkhattar...
There's support for making SVG diagrams of circuits, and there are existing tools to convert SVGs into TikZ.
While I was attending the APS March Meeting, I appeared on the 632nm podcast. The recording is available on YouTube: www.youtube.com/watch?v=lnHg...
Never been on a podcast before, and no doubt that shows, but it was fun.
I would bet against Q day by 2030, but I wouldn't bet against it at 10:1 odds. A ~10% risk is unacceptably high here, so I'm very much in favor of transitioning to quantum-safe cryptography by 2029: blog.google/innovation-a...
Yes this means I 90% expect to be made fun of in 2030. Oh well.
I have to admit, reading the abstract, I don't understand how this differs from what McKague+ showed in 2008 (or equivalently what I described in my blog post). Eve can pass any test for complex gates using only real gates if she has pre-distributed some constant-sized entangled catalysts.
The slides from my talk "how to eat magic states" at APS 2026:
docs.google.com/presentation...
...The content of the talk definitely drifted from when I picked the title. Mostly I talked about realizing I didn't understand reaction depth a few months ago.
Whoops, I missed that they have multiple shots. The gate overhead is ~5600x not 256x.
Chevignard et al show residues also reduce the qubit cost of quantum attacks on elliptic curves: eprint.iacr.org/2026/280
The space savings are less dramatic than for factoring (1.6x instead of 6x), and they again pay a big gate-count penalty (256x), but it's very interesting.
Made a video tutorial of using crumble to create a quantum error correction circuit: youtu.be/SnpLSvyyEx8
And, of course, if you're in charge of some security thing that's vulnerable to quantum attacks, you should be *assuming progress*. The strategy "we'll start our 3 year plan when it looks 4 years away" is just guaranteeing an "oh shit" moment when progress subtracts 2 years.
The cost of holding assumptions constant is they go stale. In 2012, demanding 1e-3 noise was audacious. Now it's conservative. Locality? The mechanisms for long-range connections are multiplying and improving. Frankly, if you assume *progress*, 100k qubits starts looking high.
This might be unappreciated outside the field, but it's easy to juice numbers by demanding more from hardware. It's a common type of paper (e.g. arxiv.org/abs/2103.06159 and arxiv.org/abs/2302.06639). To avoid this confounder, I've tried to hold my assumptions constant over time.
I've been asked several times to comment on arxiv.org/abs/2602.11457, which claims to reduce the qubit cost of factoring by 10x.
My take is that they demand a *lot* more qubit connectivity for that number. Your mileage depends entirely on how plausible you find those demands.
From having seen Austin explain the surface code many times: this makes sense to me as the progression he'd choose. He's known surface codes for so long that the complications have become hard to perceive. And with prerecorded lectures, the students can't stop the runaway train with "wait what?" questions.
Yup, it's a bit like zooming way in on the boundary of a hyperbolic code. Except I think those don't have a boundary at the edge, but instead have non-local links to other sections of the edge. Otherwise you can't get the constant coding rates they advertise.
I use *programming* to generate images. I like SVG as a target format because it's so easy to generate programmatically from any language. In this case I was calling stuff I'd previously written to draw the stabilizers of a code. So I just picked qubit locations and stabilizer bases.
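To show what I mean by generating SVG programmatically, here's a minimal sketch (a hypothetical helper written for this post, not the actual drawing code): it emits a grid of qubits as circles, and the same approach extends to stabilizer polygons.

```python
def svg_qubit_grid(rows, cols, spacing=40, radius=5):
    """Return an SVG string drawing a rows x cols grid of qubits as circles."""
    parts = [f'<svg xmlns="http://www.w3.org/2000/svg" '
             f'width="{cols * spacing}" height="{rows * spacing}">']
    for r in range(rows):
        for c in range(cols):
            # Center each qubit inside its grid cell.
            x = c * spacing + spacing // 2
            y = r * spacing + spacing // 2
            parts.append(f'<circle cx="{x}" cy="{y}" r="{radius}" fill="black"/>')
    parts.append('</svg>')
    return "\n".join(parts)

# Write a 3x3 qubit grid to a file any browser can open.
with open("qubits.svg", "w") as f:
    f.write(svg_qubit_grid(3, 3))
```

Because the output is just a string, any language works, and the resulting SVG can be fed into existing SVG-to-TikZ converters.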
A surface code that goes from a normal tiling to one where each row uses half as many qubits as the last.
I can't think of a reason to do this, but it's visually interesting.
Or someone has to endorse you. TBH I thought this was already how it worked. For my first submission nine years ago I had to get an endorsement. I got it from Dave Bacon.
It looks like the main change is that previously endorsement was skipped if your email address was associated with a university.
In other words: Early Fault Tolerance starts when quality transitions from an impassable barrier to a mere tradeoff.
Concretely… I arbitrarily declare EFT begins once it's demonstrably possible to do T gates with an infidelity of 10^-10. You mostly won't do T gates that way... but you *could*.
To me it refers to the quality problem being solved while the quantity problem remains.
So you *could* perform universal logical gates far better than you could ever need... but you can't hold a lot of qubits at once. Sacrificing quality to gain a bit of quantity is then a natural tradeoff.