Posts by Aleksei Udovenko

Abstract. The Ethereum Foundation recently announced the Proximity Prize, which aims to resolve some open problems that play an important role in the design of succinct proof systems. This paper reviews the open problems relevant to the Proximity Prize. We focus on some grand challenges relating to list decoding bounds, proximity gaps, correlated agreement, and mutual correlated agreement, as they relate to proof systems and Reed–Solomon codes. Along the way we survey the known results on these topics.

Open Problems in List Decoding and Correlated Agreement (Gal Arnon, Dan Boneh, Giacomo Fenzi) ia.cr/2026/680
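For readers new to the coding-theory side: a Reed–Solomon codeword is the evaluation of a low-degree polynomial, and the large minimum distance this guarantees is what proximity gaps and list-decoding bounds are ultimately about. A minimal Python sketch (toy parameters chosen here for illustration, not taken from the paper):

```python
# Toy Reed-Solomon code: evaluate degree-<k polynomials at n points mod a prime p.
# Two distinct codewords agree on at most k-1 points (their difference is a
# nonzero polynomial of degree < k), so the minimum distance is n - k + 1.
p, n, k = 97, 12, 4  # illustrative parameters

def rs_encode(msg):
    """Encode a message (k coefficients) as evaluations at x = 0..n-1."""
    assert len(msg) == k
    return [sum(c * pow(x, i, p) for i, c in enumerate(msg)) % p for x in range(n)]

c1 = rs_encode([1, 2, 3, 4])
c2 = rs_encode([5, 0, 3, 4])
agreements = sum(a == b for a, b in zip(c1, c2))
assert agreements <= k - 1  # so the two codewords differ in >= n - k + 1 positions
```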

2 days ago
Springer Nature Discovers MDPI – The Strain on Scientific Publishing Home page for the paper ‘The Strain on Scientific Publishing’ by Mark A Hanson, Dan Brockington, Paolo Crosetto and Pablo Gomez Barreiro

These "discover something" journals from Springer Nature are here to steal your research money, just like those from MDPI and Frontiers (and many others). ⚒️ 🧪

Learn more here: the-strain-on-scientific-publishing.github.io/website/post...

2/2

2 days ago
Abstract. A recent paper by Calderini et al. investigates the use of a CCZ transformation to mask the quadratic central map in a multivariate scheme, providing an instance leading to a system of degree four. A follow-up paper by Caminata et al. presents two methods to reduce the masked system back to a quadratic system. In this work we further study the method based on the quadratic relations between input and output of the masked function, generalizing it and applying it to any CCZ transformation (of any quadratic map). Moreover, we study how the existence of these quadratic relations can be used to determine whether a function can be CCZ-equivalent to a quadratic map and, more generally, whether two functions can be CCZ-equivalent. In fact, this analysis gives us necessary conditions that can be checked even in relatively large dimensions.

Counting and recovering the quadratic relations of a vectorial function (Irene Villa) ia.cr/2026/652

4 days ago
Abstract. In the noisy k-XOR problem, one is given y ∈ 𝔽₂^(M) and must distinguish between the case where y is uniform and the case where y = Ax + e, where A is the adjacency matrix of a k-left-regular bipartite graph with N variables and M constraints, x ∈ 𝔽₂^(N) is random, and e is noise with rate η. Lower bounds in restricted computational models such as Sum-of-Squares and low-degree polynomials are closely tied to the expansion of the underlying constraint graph, leading to conjectures that expansion implies hardness. We show that such conjectures are false by constructing an explicit family of graphs with near-optimal expansion for which noisy k-XOR is solvable in polynomial time.

Our construction combines two powerful directions of work in pseudorandomness and coding theory that have not been previously put together. Specifically, our graphs are based on the lossless expanders of Guruswami, Umans and Vadhan (JACM 2009). Our key insight is that by an appropriate interpretation of the vertices of their graphs, the noisy XOR problem turns into the problem of decoding Reed–Muller codes from random errors. Then we build on a powerful body of work from the 2010s on correcting large amounts of random errors. Putting these together yields our construction.

Concretely, we obtain explicit families for which noisy k-XOR is solvable in polynomial time at constant noise rate η = 1/3, with graphs satisfying M = 2^(O(log²N)), k = (log N)^(O(1)), and (N^(1 − α), 1 − o(1))-expansion. Under standard conjectures on Reed–Muller codes over the binary erasure channel, this extends to families with M = N^(O(1)), k = (log N)^(O(1)), (N^(1 − α), 1 − o(1))-expansion, and polynomial-time algorithms at noise rate η = N^(−c).
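The planted distribution described in the abstract is easy to instantiate. A small sketch, reading "k-left-regular" as each of the M constraints touching exactly k of the N variables (parameters are illustrative, far below the regimes in the paper):

```python
import random
random.seed(0)

# Tiny planted noisy k-XOR instance: y = Ax + e over F2, where each of the
# M constraints (rows of A) involves exactly k of the N variables.
N, M, k, eta = 20, 60, 3, 0.1  # illustrative parameters

A = [random.sample(range(N), k) for _ in range(M)]         # supports of the rows
x = [random.randrange(2) for _ in range(N)]                # planted assignment
e = [1 if random.random() < eta else 0 for _ in range(M)]  # noise at rate eta
y = [(sum(x[j] for j in row) + e[i]) % 2 for i, row in enumerate(A)]

# Sanity check: with the planted x in hand, y disagrees with Ax exactly on e.
disagreements = sum(y[i] != sum(x[j] for j in A[i]) % 2 for i in range(M))
assert disagreements == sum(e)
```

The distinguishing task is to tell such a y apart from a uniform one without knowing x; the paper's point is that for certain explicit expanding A this is easy despite lower bounds in restricted models.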

Expanders Meet Reed–Muller: Easy Instances of Noisy k-XOR (Jarosław Błasiok, Paul Lou, Alon Rosen, Madhu Sudan) ia.cr/2026/664

4 days ago
The Four Color Theorem with Linearly Many Reducible Configurations and Near-Linear Time Coloring We give a near-linear time 4-coloring algorithm for planar graphs, improving on the previous quadratic time algorithm by Robertson et al. from 1996. Such an algorithm cannot be achieved by the known p...

A new proof of the Four Color Theorem has been posted to arXiv.
New proof of the four color theorem
by
Yuta Inoue, Ken-ichi Kawarabayashi, Atsuyuki Miyashita, Bojan Mohar, Carsten Thomassen, Mikkel Thorup
arxiv.org/abs/2603.24880

2 weeks ago
Shor's algorithm is possible with as few as 10,000 reconfigurable atomic qubits Quantum computers have the potential to perform computational tasks beyond the reach of classical machines. A prominent example is Shor's algorithm for integer factorization and discrete logarithms, w...

...we show that Shor's algorithm can be executed at cryptographically relevant scales with as few as 10,000 reconfigurable atomic qubits. ... the runtime for discrete logarithms on the P-256 elliptic curve could be just a few days for a system with 26,000 physical qubits, arxiv.org/abs/2603.28627

1 week ago

I should add to this: winner of best paper award at Eurocrypt 2026!!

As a single author. During his PhD. Incredible achievement! (Pro tip: he will graduate soon-ish 👀)

1 week ago
Securing Elliptic Curve Cryptocurrencies against Quantum Vulnerabilities: Resource Estimates and Mitigations
Ryan Babbush¹, Adam Zalcman¹, Craig Gidney¹, Michael Broughton¹, Tanuj Khattar¹, Hartmut Neven¹, Thiago Bergamaschi¹,², Justin Drake³, and Dan Boneh⁴
¹ Google Quantum AI, Santa Barbara, CA 93111, United States
² Department of Computer Science, University of California Berkeley, Berkeley, CA 94720, United States
³ Ethereum Foundation, Zeughausgasse 7a, 6300 Zug, Switzerland
⁴ Department of Computer Science, Stanford University, Stanford, CA 94305, United States
(Dated: March 30, 2026)

The expected emergence of cryptographically relevant quantum computers (CRQCs) will represent a singular discontinuity in the history of digital security, with wide-ranging impacts. This whitepaper seeks to elucidate specific implications that the capabilities of developing quantum architectures have on blockchain vulnerabilities and potential mitigation strategies. First, we provide new resource estimates for breaking the 256-bit Elliptic Curve Discrete Logarithm Problem over the secp256k1 curve, the core of modern blockchain cryptography. We demonstrate that Shor's algorithm for this problem can execute with either ≤ 1200 logical qubits and ≤ 90 million Toffoli gates or ≤ 1450 logical qubits and ≤ 70 million Toffoli gates. In the interest of responsible disclosure, we use a zero-knowledge proof to validate these results without disclosing attack vectors. On superconducting architectures with 10⁻³ physical error rates and planar connectivity, those circuits can execute in minutes using fewer than half a million physical qubits. We introduce a critical distinction between "fast-clock" (such as superconducting and photonic) and "slow-clock" (such as neutral atom and ion trap) architectures. Our analysis reveals that the first fast-clock CRQCs would enable "on-spend" attacks on public mempool transactions of some cryptocurrencies. We survey major crypto…

> We demonstrate that Shor’s algorithm...can execute with either ≤ 1200 logical qubits and ≤ 90 million Toffoli gates or ≤ 1450 logical qubits and ≤ 70 million Toffoli gates

research.google/blog/safegua...

quantumai.google/static/site-...

1 week ago
The System That Decides What Science Gets Published Is Breaking Down The peer review system that validates scientific research is trapped in a self-defeating cycle. A new mathematical model shows why—and what comes next.

Some first-rate science writing: For this story, @jdrakephd.bsky.social carefully read our recent paper and then we spent a very fun 90 minutes or so talking on Zoom. His article gets right to the heart of our model, explains it clearly, and then explores why it will matter in the future.

1 week ago

Writeup of the crypto-challenge MonoDOOM ETERNAL from #KalmarCTF - A follow-up to last year's MonoDOOM challenge, this time with botched side-channel protection!

jonathke.github.io/monoDOOM-ETE...

1 week ago
Abstract. The Number Field Sieve algorithm and its variants are the best known algorithms to solve the discrete logarithm problem in finite fields. When the extension degree is composite, the Tower variant TNFS is the most efficient. Looking at finite fields with composite extension degrees such as 6 and 12 is motivated by pairing-based cryptography, which does not yet have a good quantum-resistant equivalent. The two most costly steps in TNFS are the relation collection and linear algebra steps. Although the use of order-k Galois automorphisms allows one to accelerate the relation collection step by a factor of k, their use to accelerate the linear algebra step remains an open problem. In previous work, this problem was solved for k = 2, yielding a quadratic acceleration factor of 4.

In this article, we bring a solution for both k = 6 and k = 12. We propose a new construction that allows the use of an order-6 (resp. 12) Galois automorphism in any finite field 𝔽_(p⁶) (resp. 𝔽_(p¹²)), thus accelerating the linear algebra step by approximately a factor of 36 (resp. 144). Moreover, we provide a SageMath implementation of TNFS and our construction, and validate our findings on small examples.
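The Galois automorphisms in question are powers of the Frobenius map a ↦ a^p, whose order equals the extension degree. A self-contained toy illustration in 𝔽_(2⁶) (the paper works over 𝔽_(p⁶) for large p; this sketch only shows the order-6 structure):

```python
# GF(2^6) with modulus x^6 + x + 1 (irreducible over F2); elements are ints
# whose bits are polynomial coefficients. The Frobenius map a -> a^2 is a
# field automorphism of order 6 = [GF(2^6) : GF(2)].
MOD = 0b1000011  # x^6 + x + 1

def gf_mul(a, b):
    """Carry-less multiplication modulo x^6 + x + 1."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0b1000000:
            a ^= MOD
    return r

def frobenius(a):
    return gf_mul(a, a)  # squaring = Frobenius in characteristic 2

a = 0b10  # the element x, whose minimal polynomial has full degree 6
orbit = [a]
for _ in range(5):
    orbit.append(frobenius(orbit[-1]))
assert len(set(orbit)) == 6       # six distinct conjugates x^(2^j), j = 0..5
assert frobenius(orbit[-1]) == a  # applying Frobenius six times is the identity
```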

High-Order Galois Automorphisms for TNFS Linear Algebra (Haetham Al Aswad, Cécile Pierrot, Emmanuel Thomé) ia.cr/2026/560

2 weeks ago

Nature Publishing Group finding more dastardly ways to lock us in, well beyond what I complained about in my letter declining reviewing for them

4 weeks ago

This is the distribution of the number of pages of papers submitted to Crypto 2026. 314 of the 752 papers have at least 50 pages, and 22 have at least 100 pages. Clearly the 30-page limit isn't cutting it.

1 month ago
Abstract. In this work, we present new cryptanalytic attacks on recently proposed, theory-inspired constructions of weak pseudorandom functions (weak-PRFs). We demonstrate attacks on several such designs, showing that the initial security arguments require significant refinement. Methodologically, our approach relies on novel observations about the structure of cyclic matrices, applications of Wagner’s generalized birthday technique, and conversion into polynomial systems over 𝔽₃. These findings highlight the need for a more careful analysis of those weak-PRF candidates.

Cryptanalysis of Two Alternating Moduli Weak PRFs (Kai Hu, Gregor Leander, Håvard Raddum, Arne Sandrib, Aleksei Udovenko) ia.cr/2026/482
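Wagner's generalized birthday technique, which the attack relies on, is easy to demonstrate in its 4-list form: merge pairs of lists on a low-bit collision, then look for a full collision between the merged lists. A toy sketch (16-bit values; real attacks use much larger words and lists):

```python
import random
random.seed(1)

# 4-list generalized birthday (Wagner): find a, b, c, d with a^b^c^d = 0
# using four lists of size 2^(B/2) rather than the 2^B a naive search needs.
B, HALF = 16, 8
MASK = (1 << HALF) - 1
lists = [[random.getrandbits(B) for _ in range(1 << HALF)] for _ in range(4)]

def merge(L1, L2):
    """Pairs from L1 x L2 whose XOR vanishes on the low HALF bits."""
    table = {}
    for v in L2:
        table.setdefault(v & MASK, []).append(v)
    return [(u ^ v, u, v) for u in L1 for v in table.get(u & MASK, [])]

L12 = merge(lists[0], lists[1])   # partial XORs with low HALF bits zero
L34 = merge(lists[2], lists[3])
lookup = {s: (u, v) for s, u, v in L34}
solution = next(((u, v) + lookup[s] for s, u, v in L12 if s in lookup), None)

assert solution is not None       # expected: ~2^HALF candidates on each side
a, b, c, d = solution
assert a ^ b ^ c ^ d == 0
```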

1 month ago
Abstract. It has been a very long-standing open question whether the CFS signature scheme, whose security is basically that of a McEliece scheme based on very high rate binary Goppa codes, could be attacked or not. There was a first cryptanalytic result by Faugère et al. in 2011, consisting in finding a distinguisher for the binary Goppa codes used in this scheme and showing that these codes can be distinguished in polynomial time from a random binary linear code. However, despite numerous cryptanalytic attempts, and even if the original distinguisher has been significantly improved, no attack on the McEliece scheme based on binary Goppa codes has been found so far, except for very peculiar Goppa codes of degree 2. We show here that the Pfaffian modeling used in the distinguishing attack of Couvreur, Mora and Tillich of Asiacrypt 2023 can actually be used, together with a shortening trick and a search for squares in the corresponding ideal, to find a polynomial attack on the CFS scheme based on very high rate binary Goppa codes. This breaks this 25-year-old signature scheme. We demonstrate the effectiveness of this approach by recovering the key of TII McEliece challenges with a claimed key security of up to 210 bits.

An attack on the CFS scheme and on TII McEliece challenges (Magali Bardet, Axel Lemoine, Jean-Pierre Tillich) ia.cr/2026/430

1 month ago

Please welcome the #defcon34 #CTF organizers, Benevolent Bureau of Birds!

You can sample their wares in the #DC34CTF Qualifier Round, May 22-24, 2026.

The Birds are online at bbbirds.org.

Info about our legendary CTF: www.defcon.org/html/links/d....

We hope we'll see you in the arena.

1 month ago
Abstract. The selection of shift polynomials is a pivotal yet challenging step in Coppersmith’s method for computing modular roots of multivariate polynomials. We propose a novel, determinant-based strategy for generating these polynomials, thereby presenting an improved variant of Coppersmith’s method tailored for certain multivariate modular equations. Our approach is first validated on solving the Modular Inversion Hidden Number Problem (MIHNP) and predicting the Inversive Congruential Generator (ICG), where it is shown to outperform prior methods both in theory and in practice. Furthermore, when applied to the Modular Inversion Double Hidden Numbers Problem (MIDHNP), our analysis reveals that MIDHNP is not harder than MIHNP, thereby disproving a conjecture by Boneh et al. (Asiacrypt 2001).

Coppersmith’s Method for Solving Modular Inversion Hidden Number Problem via Determinant-Based Elimination (Zhaopeng Ding, Zhaopeng Dai, Baofeng Wu, Rundong Wang, Yanshuo Zhang) ia.cr/2026/423

1 month ago

About page limits. Why do CS conferences have them? If you look at the length of papers posted to eprint.iacr.org, 96% of them are at most 75 pages. Seems like we should just publish them instead of jumping through hoops to fit the old world of paper.

1 month ago
announcing our €3,8M seed round and more on what's next

today, we're announcing our €3,8M ($4.5M) seed financing round, led by byFounders with participation from Bain Capital Crypto, Antler, Thomas Dohmke (former CEO of GitHub), Avery Pennarun (CEO of Tailscale) among other incredible angels.

read more on what's next: blog.tangled.org/seed

1 month ago
Leave big tech behind! How to replace Amazon, Google, X, Meta, Apple – and more A handful of companies monopolise the web, with unprecedented access to our data. But there are many more ethical – and often distinctively European – alternatives

Leave big tech behind! How to replace Amazon, Google, X, Meta, Apple – and more

1 month ago
Abstract. Computing cube roots in quadratic extensions of finite fields is a subroutine that arises in elliptic-curve point decompression, hash-to-curve and isogeny-based protocols. We present a new algorithm that, for p ≡ 1 (mod 3) (which holds in all settings where 𝔽_(p²) cube roots arise in practice), reduces the 𝔽_(p²) cube root to operations entirely in the base field 𝔽_(p) via the algebraic torus 𝕋₂(𝔽_(p)) and Lucas sequences. We prove correctness in all residuosity cases and implement the algorithm using the gnark-crypto open-source library. Benchmarks on six primes spanning pairing-based and isogeny-based cryptography show 1.6–2.3× speed-ups over direct (addition chain) exponentiations in 𝔽_(p²). We also extend the approach to p ≡ 2 (mod 3) and, more generally, to any odd n-th roots in quadratic towers 𝔽_(p^(2^(k))) with gcd(n, p + 1) = 1.
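For intuition on why base-field cube roots can be cheap: when p ≡ 2 (mod 3), cubing is a bijection on 𝔽_p and the cube root is a single exponentiation. This is the complementary case to the paper's main p ≡ 1 (mod 3) setting, where cube roots need more care; the prime below is illustrative, unrelated to the paper's benchmarks:

```python
# When p ≡ 2 (mod 3), gcd(3, p - 1) = 1, so cubing permutes F_p and the
# inverse exponent d = (2p - 1)/3 works, since
# 3d = 2p - 1 = 2(p - 1) + 1 ≡ 1 (mod p - 1).
p = 101  # 101 ≡ 2 (mod 3), illustrative prime
assert p % 3 == 2
d = (2 * p - 1) // 3

def cube_root(a):
    return pow(a, d, p)

for a in range(1, p):
    assert pow(cube_root(a), 3, p) == a  # every element has a unique cube root
```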

Fast cube roots in Fp2 via the algebraic torus (Youssef El Housni) ia.cr/2026/392

1 month ago
Abstract. Quadratic Boolean functions (that is, Boolean functions of algebraic degree at most 2), bent Boolean functions (i.e. maximally nonlinear Boolean functions in even numbers of variables) and, as we prove in this paper, partially-bent Boolean functions (i.e. affine extensions of bent functions to linear super-spaces), share a strong property: all their restrictions to affine hyperplanes are plateaued (i.e. have a Walsh transform valued in a set of the form {0, ±λ}, where λ is a positive integer called the amplitude). In this paper we determine for any n and k < n the class C_(k)^(n) of those n-variable Boolean functions whose restrictions to all k-dimensional affine subspaces of 𝔽₂^(n) are plateaued (of any amplitude). We characterize partially-bent (resp., quadratic) Boolean functions as those functions that are plateaued on any affine hyperplane (resp., any affine subspace of dimension k, where 3 ≤ k ≤ n − 2, while these are all Boolean functions for 0 ≤ k ≤ 2). For n ≥ 5, each of the following classes of Boolean functions happens then to be strictly included in the next one: quadratic functions, partially-bent functions, the restrictions of partially-bent functions to affine hyperplanes, plateaued functions, the restrictions of plateaued functions to affine hyperplanes, and all Boolean functions. We leave open the two problems of determining exactly what are the third and fifth of these classes (we begin the study of the first of these two classes by giving a non-trivial characterization). Our characterization of partially-bent (resp., quadratic) functions extends to strongly plateaued vectorial functions. We state an open question on vectorial functions that happens to be related to an important one on crooked functions.
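The plateaued condition is concrete and easy to check exhaustively in small dimension: compute the Walsh transform and verify its values lie in {0, ±λ}. A small Python sketch for the quadratic function f(x) = x₀x₁ on 𝔽₂⁴ (an example function chosen here, not taken from the paper):

```python
# Walsh transform W_f(a) = sum over x of (-1)^(f(x) + a.x).
# A function is plateaued iff |W_f| takes only the values 0 and some lambda.
n = 4

def dot(a, x):
    return bin(a & x).count("1") % 2

def f(x):
    return (x & 1) & ((x >> 1) & 1)  # quadratic: f(x) = x0 * x1

def walsh(a):
    return sum((-1) ** (f(x) ^ dot(a, x)) for x in range(1 << n))

spectrum = [walsh(a) for a in range(1 << n)]
amplitudes = {abs(w) for w in spectrum}
assert amplitudes <= {0, 8}  # plateaued with amplitude lambda = 8
```

(By Parseval, the squared Walsh values sum to 2^(2n) = 256, so exactly four of the sixteen coefficients are ±8 and the rest are 0.)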

Determining those Boolean functions whose restrictions to affine spaces are plateaued (Claude Carlet, Darrion Thornburgh) ia.cr/2026/386

1 month ago
4 Mar, Wed 1pm
TCS+ Talk: Sophie Huiberts (CNRS)

18 Mar, Wed 1pm
TCS+ Talk: Christopher Gartland (UNC Charlotte)

8 Apr, Wed 1pm
TCS+ Talk: Rahul Ilango (MIT)

22 Apr, Wed 1pm
TCS+ Talk: Yichuan Wang (UC Berkeley)

The schedule for the upcoming talks (stay tuned for the May speakers!) can be found on our website: exciting times ahead!
sites.google.com/view/tcsplus/

1 month ago
Abstract. We provide a new interpretation of the arithmetic on Hessian Kummer lines using level-3 theta structures. This allows us to break the record for tripling on elliptic curves and their Kummer lines, requiring only 4 multiplications and 4 squarings per tripling for well-chosen curve parameters.

Tripling on Hessian curves via isogeny decomposition (Thomas Decru, Sabrina Kunzweiler) ia.cr/2026/334

1 month ago
Abstract. We study the syndrome decoding problem (SDP) in the presence of side information. The SDP asks, given a binary parity-check matrix H and a syndrome s, to find a low Hamming weight binary error e such that He = s over 𝔽₂. Recent work (Cayrel et al., Eurocrypt ’21) exploits a fault injection attack to reveal syndrome entries over the integers, referred to as perfect hints. Subsequent works considered side-channel scenarios to reveal similar, but noisy, information (approximate hints).

Both types of hints have been shown empirically to allow for solving the SDP once enough of them are available. However, fundamental questions about the impact of these hints on the hardness of the SDP, such as thresholds for a collapse into the polynomial-time regime or how to exploit arbitrary amounts of hints, remain open.

In this work, we show that both types of hints effectively allow one to transform the SDP instance into a soft-decision decoding instance. We then adapt Information Set Decoding (ISD) algorithms, the best known technique to solve generic SDP instances, to this setting. In contrast to previous work, we obtain non-trivial speedups for any amount of available hints, interpolating smoothly between the complexity of standard ISD (no hints) and polynomial time (sufficient hints). Furthermore, our practical simulations show that Hint-ISD achieves the polynomial-time regime generally under fewer hints than previous approaches.

We then provide an explicit bound on the number of hints required to reach the polynomial-time regime. This bound confirms earlier practical observations that higher error weights, such as those found in the McEliece cryptosystem, exhibit higher resistance against hint exposure than schemes using smaller error weights, such as HQC.

Syndrome Decoding with Hints (Letizia D'Achille, Andre Esser, Nicolai Kraus) ia.cr/2026/341

1 month ago
Abstract. In a recent preprint, Grigoriev, Monico, and Shpilrain proposed a digital signature protocol based on the use of matrices over the tropical integer semiring. We show some design flaws of the proposed scheme, together with an efficient attack to forge signatures for an arbitrary message, and a key-recovery attack when given access to a list of honest signatures.
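For readers unfamiliar with the setting: in the tropical (min-plus) semiring, "addition" is min and "multiplication" is ordinary +, which makes matrix products non-invertible in general. A minimal sketch of tropical matrix multiplication (toy matrices, not the attacked scheme):

```python
# Tropical (min-plus) matrix multiplication: entry (i, j) of AB is
# min over t of A[i][t] + B[t][j].
def trop_matmul(A, B):
    n, m, k = len(A), len(B[0]), len(B)
    return [[min(A[i][t] + B[t][j] for t in range(k)) for j in range(m)]
            for i in range(n)]

A = [[0, 3], [2, 1]]
B = [[1, 5], [0, 4]]
C = trop_matmul(A, B)  # [[1, 5], [1, 5]]
```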

Breaking digital signatures from tropical matrix semirings (Alessandro Sferlazza) ia.cr/2026/327

1 month ago

I am very happy to announce that thanks to the hard work of many people (the "MIKE Team"), we now have a working SageMath implementation of MIKE (Module Isogeny Key Exchange).

1 month ago
CHES 2026 Cryptographic Hardware and Embedded Systems

🏆 CHES is currently seeking organisers for the CHES 2026 Challenge: ches.iacr.org/2026/callfor...

This is your opportunity to gamify and spotlight your favorite crypto engineering topic!

1 month ago
Abstract. The unbalanced oil and vinegar signature scheme (UOV) was proposed by Kipnis et al. in 1999 as a multivariate-based scheme. UOV is regarded as one of the most promising candidates for post-quantum cryptography owing to its short signatures and fast performance. Recently, Ran proposed a new key recovery attack on UOV over a field of even characteristic, reducing the security of its proposed parameters. Furthermore, Jin et al. generalized Ran’s attack to schemes over a field of arbitrary characteristic by exploiting the structure of the symmetric algebra. In this work, we propose a new framework for recovering the secret subspace of UOV over a finite field 𝔽_(p^(e)) by generalizing these preceding results.

First, we show that a key recovery against UOV can be successfully performed using the XL algorithm by exploiting the structure of the p-truncated polynomial ring R^((p)) = 𝔽_(p^(e))[x₁, …, x_(n)]/⟨x₁^(p), …, x_(n)^(p)⟩. This result simplifies the description of the attacks proposed by Jin et al. by formulating them in terms of the polynomial ring, independent of the structure of the symmetric algebra.

Second, we generalize this result to polynomial rings of a more general form, namely the p^(ℓ)-truncated polynomial rings R^((p^(ℓ))) for any 1 ≤ ℓ ≤ e. This generalization is enabled by our polynomial-ring description and relaxes the constraints on the solving degree of the XL algorithm over R^((p^(ℓ))) by taking a larger ℓ.

Finally, we consider performing the reconciliation and intersection attacks against UOV using the p^(ℓ)-truncated polynomial rings. In particular, we apply this framework to the intersection attack, which has not been considered in previous analyses. Based on our complexity estimation, we confirm that the optimal complexity of the reconciliation attack using the proposed framework is consistent with that of the symmetric-algebra attack by Jin et al.

We further show that the intersection attack using the proposed framework outperforms the reconciliation attack against the proposed parameters of UOV and reduces the security of multiple parameter sets below their claimed security levels. In addition, our complexity estimate of the reconciliation attack using the proposed framework reduces the security of multiple SNOVA parameter sets compared to previously known attacks.
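The p-truncated ring in the abstract can be sketched with a toy implementation. The representation below (polynomials as exponent-tuple-to-coefficient dicts over F_p, taking e = 1 for simplicity) is an assumption made for illustration, not the paper's construction:

```python
# Toy model of the p-truncated ring R^(p) = F_p[x1, ..., xn]/<x1^p, ..., xn^p>:
# any monomial in which some variable's exponent reaches p vanishes.
p, n = 3, 2

def trunc_mul(f, g):
    # multiply two polynomials given as {exponent-tuple: coefficient} dicts
    h = {}
    for ea, ca in f.items():
        for eb, cb in g.items():
            e = tuple(a + b for a, b in zip(ea, eb))
            if any(x >= p for x in e):
                continue  # x_i^p = 0 in the truncated ring
            h[e] = (h.get(e, 0) + ca * cb) % p
    return {e: c for e, c in h.items() if c}

# (x1^2) * (x1 + x2): x1^3 vanishes, x1^2 * x2 survives
f = {(2, 0): 1}
g = {(1, 0): 1, (0, 1): 1}
h = trunc_mul(f, g)  # {(2, 1): 1}
```

The truncation is what bounds the degrees that can appear, which is how the framework constrains the solving degree of XL.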

Key Recovery Attacks on UOV Using p^l-truncated Polynomial Rings (Hiroki Furue, Yasuhiko Ikematsu) ia.cr/2026/298

1 month ago
Abstract. Two linear codes 𝒞, 𝒞’ over 𝔽_(q) are linearly equivalent if one can be mapped to the other via a monomial transformation. Recovering this monomial from 𝒞 and 𝒞’ is known as the Linear Code Equivalence (LCE) problem.

The most efficient algorithms to solve the LCE problem follow a common framework based on finding low-weight codewords. This framework admits a natural lower bound obtained by assuming that among the found low-weight codewords, a single equivalent codeword pair can be identified and used to reconstruct the monomial without overhead. Whether this lower bound can be achieved by a constructive instantiation has remained an open problem. Existing algorithms all require multiple equivalent pairs for monomial reconstruction, resulting in both concrete and asymptotic gaps to the lower bound.

In this work, we answer in the affirmative the question of whether such an optimal framework instantiation exists. We introduce a canonical labeling technique, a generalization of canonical forms, which allows monomial reconstruction from a single pair of equivalent codewords. Crucially, this labeling procedure, even if not necessarily polynomial-time, can be embedded into the codeword-search framework to identify equivalent codewords and perform the final monomial recovery without overhead. This yields the first framework instantiation that meets its lower bound both asymptotically and concretely, up to negligible tolerance.

For the parameter sets proposed for the LESS signature scheme, an active second-round contender in the NIST PQC standardization process, our analysis reduces the estimated bit security by up to 15 bits.
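The monomial transformations in the abstract can be illustrated concretely: permute the coordinates of every codeword and scale each coordinate by a nonzero field element. All parameters below (code, permutation, scalars) are made up for illustration:

```python
# A monomial transformation applied to a toy [4, 2] linear code over F_5.
from itertools import product

q = 5
G = [[1, 0, 2, 3],
     [0, 1, 4, 1]]        # generator matrix (toy choice, rank 2)
perm = [2, 0, 3, 1]       # coordinate permutation
scale = [1, 2, 3, 4]      # nonzero column scalars

def codewords(G):
    # all F_q-linear combinations of the rows of G
    k, n = len(G), len(G[0])
    return {tuple(sum(m[i] * G[i][j] for i in range(k)) % q
                  for j in range(n))
            for m in product(range(q), repeat=k)}

def monomial(c):
    # permute coordinates, then scale each one
    return tuple(scale[j] * c[perm[j]] % q for j in range(len(c)))

C = codewords(G)
C2 = {monomial(c) for c in C}
# the image is again a code of the same size, linearly equivalent to C
assert len(C2) == len(C) == q ** 2
```

The LCE problem is the inverse task: given only generator matrices for C and its image, recover `perm` and `scale`.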

One Pair to Rule Them All: An Optimal Algorithm for Solving Code Equivalence via Codeword Search (Alessandro Budroni, Andre Esser) ia.cr/2026/268

1 month ago