#Quantum #QuantumComputing #Physics #TQFT #QuantumResourceTheory #QuantumMagic #Mathematics #Science
Posts by Dr. William Munizzi
📝 The takeaway: The Clifford hierarchy has a topological shadow. Algebra and cohomology tell you which gates a TQFT can reach, different theories come with different magic-producing capabilities, and the obstructions between levels have concrete geometric interpretations.
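For reference, the Clifford hierarchy mentioned here is the standard Gottesman-Chuang tower (textbook material, not a construction of the paper):

```latex
\mathcal{C}_1 = \text{(Pauli group)}, \qquad
\mathcal{C}_{k+1} = \{\, U \;:\; U P U^\dagger \in \mathcal{C}_k \ \text{for all } P \in \mathcal{C}_1 \,\}.
```

For example, the T gate sits at level 3 but not level 2, since T X T† = e^{-iπ/4} S X is Clifford but not Pauli.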
The difference is controlled entirely by the cohomology of the topological data.
⿻ We show that Dijkgraaf-Witten theory with a finite gauge group produces the T gate exactly via a cohomological construction. Strikingly, the same geometric operation produces a Clifford gate in Chern-Simons theory and a non-Clifford gate in Dijkgraaf-Witten theory.
We identify two remaining obstacles to an explicit realization: Dehn surgery and logical leakage cancellation.
We show that SU(2)_3 is the minimal level at which this obstruction is resolved, via branching fusion rules, and the existence of the gate is guaranteed by a density argument on the mapping class group.
🚫 We prove that the Toffoli gate cannot be constructed in SU(2)_1: its fusion algebra can only distinguish parity, and so cannot implement the AND conditional.
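A quick way to see the parity-versus-branching contrast is to write down the SU(2)_k fusion rules (the standard truncated Clebsch-Gordan series; my own illustrative sketch, not code from the paper):

```python
# Sketch: SU(2)_k fusion rules (standard truncated Clebsch-Gordan series).
# My own illustration of the parity-vs-branching point, not code from the paper.

def fuse(j1, j2, k):
    """Admissible fusion channels of spins j1 x j2 in SU(2)_k."""
    j = abs(j1 - j2)
    jmax = min(j1 + j2, k - j1 - j2)
    channels = []
    while j <= jmax:
        channels.append(j)
        j += 1
    return channels

# SU(2)_1 (spins 0, 1/2): every fusion has a single outcome, so measuring
# a fusion channel reveals only parity -- no room for an AND conditional.
print(fuse(0.5, 0.5, k=1))  # [0.0]
# SU(2)_3 (spins 0, 1/2, 1, 3/2): fusion branches into multiple channels.
print(fuse(0.5, 0.5, k=3))  # [0.0, 1.0]
```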
This gate generates non-local magic for nearly all values of its tuning parameter, save for a few Clifford points, and we derive explicit formulas for its non-stabilizing power and its operator entanglement.
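To make the "Clifford points" concrete: assuming the gate in question is the Ising-type interaction U(θ) = exp(iθ Z⊗Z) (a common convention; the paper's parametrization may differ), a few lines of numerics verify that it is Clifford exactly at integer multiples of π/4 and magic-generating elsewhere:

```python
import numpy as np
from itertools import product

# Two-qubit Pauli basis
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
paulis2 = [np.kron(a, b) for a, b in product([I2, X, Y, Z], repeat=2)]

def is_pauli_up_to_phase(M, tol=1e-9):
    # M is proportional to a Pauli iff |tr(P^dag M)| = 4 for some Pauli P
    return any(abs(abs(np.trace(P.conj().T @ M)) - 4) < tol for P in paulis2)

def is_clifford(U):
    # Clifford <=> conjugation sends every Pauli to a Pauli (up to phase)
    return all(is_pauli_up_to_phase(U @ P @ U.conj().T) for P in paulis2[1:])

def ising(theta):
    # U(theta) = exp(i * theta * Z (x) Z), diagonal in the computational basis
    zz = np.array([1, -1, -1, 1])
    return np.diag(np.exp(1j * theta * zz))

for theta in [0.0, np.pi / 8, np.pi / 4, 0.3]:
    print(f"theta = {theta:.4f}  Clifford: {is_clifford(ising(theta))}")
# Clifford only at integer multiples of pi/4; magic-generating otherwise
```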
Three main results:
🍩 We show how the Ising interaction gate can be prepared in Chern-Simons theory by path integration over a disjoint union of handlebodies.
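For readers less familiar with the dictionary: in any TQFT the path integral on a 3-manifold with boundary prepares a state, and disjoint unions map to tensor products (these are the standard Atiyah-Segal axioms, not a result of the paper), which is what makes "path integration over a disjoint union of handlebodies" a state-preparation protocol:

```latex
Z(M) \in \mathcal{H}_{\partial M}, \qquad
\mathcal{H}_{\Sigma_1 \sqcup \Sigma_2} \cong \mathcal{H}_{\Sigma_1} \otimes \mathcal{H}_{\Sigma_2}, \qquad
Z(M_1 \sqcup M_2) = Z(M_1) \otimes Z(M_2).
```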
In a new paper (joint with Howard Schnitzer) we extend this program to non-Clifford gates and show how the magic-generating power of each gate is determined by the topological data of the underlying theory.
Previous work established that stabilizer states and Clifford gates can be prepared by path integration in Chern-Simons theory.
Every useful quantum computer needs magic. We show which topological quantum field theories can generate it, and how the amount is fixed by the algebra of the theory. scirate.com/arxiv/2604....
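For a concrete handle on "magic": one standard quantifier is the stabilizer 2-Rényi entropy of Leone-Oliviero-Hamma, which vanishes exactly on stabilizer states. A minimal numerical sketch (my own illustration; the paper's measure may differ):

```python
import numpy as np
from itertools import product

# Pauli matrices
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def m2(psi):
    """Stabilizer 2-Renyi entropy: zero iff psi is a stabilizer state."""
    n = int(np.log2(len(psi)))
    total = 0.0
    for combo in product([I2, X, Y, Z], repeat=n):
        P = combo[0]
        for Q in combo[1:]:
            P = np.kron(P, Q)
        total += np.real(np.vdot(psi, P @ psi)) ** 4
    return -np.log2(total / 2 ** n)

plus = np.array([1, 1], dtype=complex) / np.sqrt(2)   # stabilizer state
T = np.diag([1, np.exp(1j * np.pi / 4)])              # T gate
print(m2(plus))      # 0.0                -- no magic
print(m2(T @ plus))  # ~0.415 = -log2(3/4) -- magic injected by one T gate
```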
We invite applications for Research Fellow positions as part of a new initiative to be launched in late 2026 at the interface of quantum information theory and gravity! Please RT and let potentially interested researchers know. www.ucl.ac.uk/mathematical... 1/2
📄 arxiv.org/abs/2604.06319
#Physics #QuantumComputing #FaultTolerance #ErrorCorrection #ComputingArchitecture #Science
This philosophy reframes quantum scaling from a race to build one perfect device into a systems-engineering problem that mirrors how classical computing evolved and matured.
For those of you working on scaling or large-scale architecture, how do you view this approach?
Superconducting qubits for fast processing, trapped ions or neutral atoms for memory, photonics for interconnects: each plays to its strengths within a unified architecture.
For RSA-2048 factorization, this modular approach reduces the requirement from 900k physical qubits to 190k, with a runtime under 10 days.
Perhaps most inspiring is the broader implication that there may not be a single "winning" qubit.
The result is striking, yielding up to a 138× reduction in physical qubit overhead and 551× reduction in algorithmic error, compared to a monolithic baseline with comparable runtime.
Instead, Q-NEXUS routes idle quantum data to dedicated memory modules that use different qubit types and error-correcting codes matched to the task.
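In pseudocode terms, the core idea is a router between a fast processing module and a cheaper, longer-lived memory module. A minimal sketch of that control flow (names and structure are hypothetical, my own illustration rather than Q-CTRL's actual software):

```python
# Minimal sketch of the processor/memory routing idea (illustrative only;
# module names, qubit types, and codes are hypothetical).

from dataclasses import dataclass, field

@dataclass
class Module:
    name: str
    qubit_type: str   # e.g. "superconducting" (fast) vs "neutral-atom" (long-lived)
    qec_code: str     # error-correcting code matched to the module's job
    held: set = field(default_factory=set)

@dataclass
class Router:
    processor: Module
    memory: Module

    def park(self, qubit_id: str) -> None:
        """Move an idle logical qubit into cheap, long-lived memory."""
        self.processor.held.discard(qubit_id)
        self.memory.held.add(qubit_id)

    def recall(self, qubit_id: str) -> None:
        """Bring a parked logical qubit back for active computation."""
        self.memory.held.discard(qubit_id)
        self.processor.held.add(qubit_id)

qpu = Module("qpu", "superconducting", "surface code")
qram = Module("qram", "neutral-atom", "qLDPC")
router = Router(qpu, qram)
router.park("q7")    # q7 sits idle during a long modular exponentiation
router.recall("q7")  # retrieved only when next needed
```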
The motivation is compelling. In algorithms like Shor's factoring, qubits can sit idle up to 97% of the time. Holding idle data in expensive, actively error-corrected hardware is enormously wasteful.
Last week Q-Ctrl announced Q-NEXUS, a heterogeneous quantum computing architecture inspired by a familiar idea from classical computing: separating the processor from memory so each component can focus on what it does best. 💡
Scaling quantum computing isn’t just about building better qubits; it’s about designing better architectures. ⚛️
That makes sense, and of course it’s unreasonable to expect a group of randomly sampled experts to adequately cover every area of expertise.
That being said... the admission of being a non-expert, followed by such strong condemnation of our work, was a bit much.
This year’s TQC rejection provided quite the laugh, perhaps I should pursue more interesting topics! 😂
“This is an in-depth numerical investigation of the matter, which is probably interesting for experts on the topic. As a non-expert, however, I find the results rather weak.”
#Physics #Quantum #QuantumComputing #TQC
Would love to hear how others think about this, especially those working at one of the many intersections of AI and science.
#Physics #AI #MachineLearning #Science
This isn't meant to serve as a criticism of AI, just an honest question about the nature of learning models. I suspect someone has already looked into these ideas with more rigor, so relevant references are welcome!
Is creativity just extrapolation along manifolds we don't yet know how to characterize, or is there a fundamentally different process at work here?
Nevertheless, we often seem to reach beyond the limitations of our training set, sometimes even conjecturing about science outside our observable universe or mathematics that lies beyond the scope of our best formalisms.
The boundary is information-theoretic, not computational.
Now, that being said, what genuinely distinguishes human learning in this regard? We are also trained on a finite set of input data, driven by sensory experience, education, conversations, and media.