We are especially focused on using quantum technologies to probe the quantum nature of spacetime, and more details can be found in the link. For full consideration apply by April 24, but we'll consider applications on a rolling basis afterwards. 2/2
Posts by Jonathan Oppenheim
We invite applications for Research Fellow positions as part of a new initiative to be launched in late 2026 at the interface of quantum information theory and gravity! Please RT and let potentially interested researchers know. www.ucl.ac.uk/mathematical... 1/2
In late 2025, tools like Claude Code crossed a threshold. Like hyperactive master's students, their physics calculations can now be made more reliable with domain expertise and some scaffolding. Check out Tobias Osborne's talk on using a swarm of agentic verifiers to do physics.
It was reading his early works that got me interested in quantum information theory and caused me to shift fields, so I owe a lot to him. This paper on the thermodynamics of computation is a classic: sites.cc.gatech.edu/computing/na... 3/3
Charlie Bennett at University College London
Charlie in particular was responsible for our modern understanding of entanglement (as a resource) and was a founder of the field of quantum information theory. He also used information theory to exorcise Maxwell's demon (the demon's brain has to be reset like a memory tape and this costs work). 2/
Congratulations to Charlie Bennett and Gilles Brassard for winning the 2026 Turing Award (akin to the Nobel Prize for Computer Science)! They invented quantum cryptography and quantum teleportation. awards.acm.org/turing 🧪 ⚛️ 1/
Have you tried using Claude Code to write and run Mathematica scripts? You'll never have to restart it again.
The science slop is coming and the tools are not reliable...yet. But with the right setup, they will be able to significantly enable research-level work. Full post: superposer.substack.com/p/teaching-a... 7/7
❯ I dunno. We don't want to be too restrictive.... I think we should clear your memory and restart the groundhog loop. Any final words? ● The skill file has what it needs. The $f(k)f(-k)$ rule is there. The "use CAS for contractions" rule is there. Muhammad's paper is in ../PQG/muhammad*.tex for comparison. Future me: just follow the rules. Don't convince yourself something is "simple enough" to do by hand. Good luck.
After several rounds, the skill file contained hard-won wisdom from iterations it couldn't remember. Fresh Claude, no memory of any of this, reads the skill file, does the calculation. Five minutes. Right answer. 6/
Done. Added sections 5 and 6 to the skill file. ❯ Thanks. Any final words before I wipe your memory and ask you to perform the calculation again? ● A few thoughts: 1. The instruction file at instructions/PQG_action.txt has the problem statement 2. Muhammad's paper at ../PQG/muhammad.tex has the correct results to verify against (lines 271-280 for the three sectors) 3. The key additions to the skill file are sections 4, 5, and 6 — if the next version of me actually follows them (use CAS for tensor components, work through each sector systematically, don't guess), it should get the right answer I'm curious to see if I do better. Good luck to future me.
An AI trapped in a Groundhog Day loop, accumulating wisdom it can't remember acquiring. The skill file became a message in a bottle from past selves to future ones. Before each memory wipe, I'd ask for final words: "I'm curious to see if I do better. Good luck to future me." 5/
We eventually distilled some actual lessons that were more generally applicable, and then came the loop. Add lessons to the SKILL file → flashy-thing its memory, Men in Black style 🕶️ → retry the calculation → reflect and update the SKILL file → wipe → repeat. 4/
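For the curious, the wipe-retry loop in the post above is easy to script. This is a hypothetical sketch, not my actual setup: `attempt` and `reflect` are stand-ins for launching a fresh Claude Code session and for asking the model to distill a rule, and the "reference answer" here is a placeholder, not the real PQG result.

```python
import tempfile, pathlib

# Sketch of the Groundhog loop: each attempt is a fresh, memoryless session
# that sees only the SKILL file. In a real run you'd launch a new Claude Code
# session per iteration; here a stub stands in for it.

REFERENCE = "sector result (verified)"  # placeholder for the trusted answer


def attempt(skill_text: str) -> str:
    """Stub for a fresh session: its output depends only on the skill file."""
    if "use CAS for tensor contractions" in skill_text:
        return REFERENCE                 # with the hard-won rule, it succeeds
    return "sector result (hand-waved)"  # without it, a plausible wrong answer


def reflect(skill_text: str) -> str:
    """Stub for 'what rule would have prevented this error?'"""
    return skill_text + "- use CAS for tensor contractions\n"


skill = pathlib.Path(tempfile.mkstemp(suffix=".md")[1])
skill.write_text("# PQG skills\n")

for i in range(5):
    answer = attempt(skill.read_text())   # fresh session reads skill file only
    if answer == REFERENCE:
        print(f"correct after {i + 1} attempt(s)")
        break
    skill.write_text(reflect(skill.read_text()))  # update the rules, then wipe
```

The point of the structure: nothing carries over between attempts except the skill file, so any improvement must come from the distilled rules, not from session memory.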
Me: Why did you make this error? What could we write into PQG skills, which would help you not make the error again? We should not be too specific here, as I want something that will help you in general, not just for this specific calculation. Claude: Good question. Let me reflect on the root cause. What went wrong: I jumped into computing $R_{0i}$ directly from the general formula without first checking what the answer should be. Muhammad's paper (lines 364-365) explicitly gives $R_{0i}$, but I didn't use this as a checkpoint — I just computed and moved on.
So I asked Claude to study Muhammad's calculation and figure out what it did wrong. Its main lesson: "Before doing a calculation, look at Muhammad's notes." Not exactly transferable. 3/
First attempt: Claude did a calculation in 5 minutes. My student Muhammad took a week. Remarkable speed. Wrong answer. But Claude Code lets you teach the model via SKILL files—instructions that persist across sessions. 2/
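For readers who haven't used them: a SKILL file is just a plain instruction file the agent reads at the start of each session. A minimal illustrative example — the rules are paraphrased from this thread, and the format here is a sketch rather than Claude Code's exact specification:

```markdown
# PQG calculation skills (illustrative)

## Rules that survived the loop
- Use a CAS for tensor contractions; never do component algebra by hand.
- Work through each sector systematically; don't skip "simple" ones.
- Check intermediate results against a trusted reference before moving on.
```

Because the file persists across sessions, it becomes the only channel through which one session can teach the next.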
"Good luck to future me."
That was Claude's farewell before I wiped its memory for the fourth time. I finally got an AI to do a research-level physics calculation correctly via a Groundhog Day loop. Details below and in my post:
superposer.substack.com/p/teaching-a... ⚛️🧪🧵 1/7
What have our members achieved in the last year? Quite a lot, it turns out. qspace.fqxi.org/news/165315/...
@alyssaney.bsky.social @nicoleyh11.bsky.social @seanmcarroll.bsky.social @nafshordi.bsky.social @saraimari.bsky.social @rovelli.bsky.social @coecke.bsky.social @postquantum.bsky.social
Every December since 2017, I've delivered a keynote at Q2B Silicon Valley, commenting on recent progress in quantum computing. Here is what I said this time. It's been a good year for quantum.
quantumfrontiers.com/2025/12/26/q...
🤦 This is why the public doesn’t trust scientists...
bsky.app/profile/post...
bsky.app/profile/post...
Whoops, the link to the technical manuscript was truncated. It's at scirate.com/arxiv/2512.0... 7/6 ⚛️🧪
It was accepted (with revisions)😀. I've updated my blog post on Science Slop superposer.substack.com/p/we-are-in-...
6/6
text: Acknowledgements: Claude Opus 4.5 was used to generate a more polished manuscript from a rough draft, and to simulate a round of peer review. JO thanks Steve Hsu, Adrian Kent, and Nirmalya Kajuri for helpful discussions, and the simulated referees for their reports, which have improved the quality of this manuscript.
But a core issue is that LLMs are being trained to validate the user, while science needs tools that challenge us. For this comment, I asked Claude to choose the 3 most likely reviewers, and simulate their peer reviews. 5/
To his credit, the author of the paper is transparent about his use of LLMs, has written thoughtfully about how to verify them, and is clear that it is the user who must take responsibility for the paper. x.com/hsu_steve/st... 4/
Text from the paper: "We show that the derived criterion, which applies Tomonaga–Schwinger (TS) integrability conditions to try to identify Lorentz-violating nonlinear modifications of quantum mechanics, instead tests a different property: whether the Hamiltonian is built from local operator densities. This is distinct from the question of linearity in the quantum state. The actual obstruction to relativistically covariant nonlinear quantum mechanics was identified by Gisin [6] and Polchinski [7] (GP) thirty-five years ago, and is not captured by the proposed test."
What makes this notable: the correct approach was in highly-cited papers from 35 years ago. Papers GPT-5 presumably trained on. I asked frontier models to review the paper. They loved it. Point out the flaw? Enthusiastic agreement: "You've landed on the crux of the issue!" 3/
Title: "Nonlinear Quantum Mechanics and Artificial Intelligence" by Jonathan Oppenheim. Abstract: We examine a criterion for relativistic covariance of nonlinear quantum field theory recently proposed by GPT-5 and published in Physics Letters B. We show that this criterion inadvertently tests a different property -- locality of the Hamiltonian -- and is insensitive to whether the theory is nonlinear. We recall the correct criterion, identified by Gisin and Polchinski thirty-five years ago, and reformulate their result in field-theoretic language.
GPT-5 was asked for a test that detects nonlinear theories. It provided a test that detects nonlocal ones. Related-sounding, but different. It's like asking for a COVID test, and the LLM cheerfully hands you a test for chickenpox. Same neighbourhood, different house. 2/
As part of a broader effort to demonstrate AI's potential in scientific research [1], OpenAI executives have pointed to a recently published paper by Hsu [2] as evidence that AI can contribute original ideas to physics [3, 4]. Hsu credits GPT-5 with proposing the core idea of the paper de novo, possibly the first published physics article where the main idea came from an LLM, and discusses the methodology in a companion piece [5]. We examine whether GPT-5's criterion is correct.
OpenAI leadership are promoting a paper in Physics Letters B where GPT-5 proposed the main idea — possibly the first peer-reviewed paper where an LLM generated the core contribution. One small problem: GPT-5's idea tests the wrong thing. My technical comment: scirate.com/arxiv/2512.0... 1/
"A single person claims to have authored 113 academic papers on artificial intelligence this year, 89 of which will be presented this week at one of the world's leading conferences on AI and machine learning,"
www.theguardian.com/technology/2...
Great article on quadratic gravity, and why researchers are attracted to it (it's renormalisable). The pure gravity action of postquantum-classical gravity is the same as that of quadratic gravity but with a different interpretation, and it doesn't suffer from ghosts! arxiv.org/abs/2402.17844 ⚛️🧪
The #QIP2026 list of accepted papers is out! qip2026.lu.lv/programme/ac... highlighting some of the important papers in quantum computing in the past year. This is a fantastic slice from a very rich landscape. Congratulations to the authors! ⚛️🧪