
Posts by ePrint Updates

Abstract. End-to-end encrypted applications protect user data by ensuring that user secrets are only available on client devices. However, if a user loses all of their devices, they need a way to recover their data using only a short password. To realize a password-based secret recovery system resilient to brute-force attacks, prior works relied on secure hardware or a few non-colluding servers.

In this work, we take a conceptually different approach that distributes trust across the many clients already in the system, while using the server only as an orchestrator without relying on it for privacy. To achieve this, we design and implement Chorus, a secret recovery system that employs ephemeral committees, each consisting of approximately a thousand clients, to provide strong privacy with high scalability. Committees change frequently in Chorus, typically on the order of a few minutes, to severely limit an attacker’s ability to compromise clients on a committee. We design Chorus for unreliable, resource-constrained clients and show that the per-client overhead decreases as more clients join the system.

Assuming each user performs recovery once a year, the expected per-client overhead in Chorus is under 30 s of computation on a mobile device and 13.2 MB of communication, both incurred only once every four months in a configuration with 100M clients, up to 50M of which may be offline and at most 10M may be compromised. To achieve this performance, we contribute two key techniques: (i) a password-based secret recovery scheme that confines expensive committee interactions to infrequent, latency-tolerant operations, and (ii) a non-interactive verifiable secret-sharing scheme that reduces client overhead by two orders of magnitude by delegating computation to the server.
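The committee-based design ultimately rests on threshold secret sharing: a secret is split so that any t of n committee members can reconstruct it, while fewer learn nothing. As an illustrative sketch only (Chorus's actual verifiable secret-sharing scheme is non-interactive and considerably more involved), here is plain t-out-of-n Shamir sharing over a prime field:

```python
import random

P = 2**127 - 1  # a Mersenne prime; the field for this toy example

def share(secret, t, n):
    """Split `secret` into n Shamir shares; any t of them recover it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def recover(shares):
    """Lagrange interpolation at x = 0 (needs Python 3.8+ for pow(d, -1, P))."""
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total
```

Any t shares suffice, so a committee tolerates offline members up to the threshold.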


Chorus: Secret Recovery with Ephemeral Client Committees (Deevashwer Rathee, Emma Dauterman, Allison Li, Raluca Ada Popa) ia.cr/2026/715

2 days ago
Abstract. Polynomial commitment schemes (PCSs) are a fundamental cryptographic primitive that allows a prover to reveal evaluations for a committed polynomial. Motivated by the inefficiency of proof generation for large-scale computations as well as the concerns regarding third-party reliance and quantum threats, a line of recent works has focused on distributing code-based PCS, where the proving workload is distributed among multiple sub-provers to accelerate proof generation, while preserving transparent setup and plausible quantum resilience. However, for M sub-provers generating an evaluation proof for a polynomial of size N, existing solutions either require O(N) total communication among sub-provers, or incur an O(M) overhead in proof size.

In this paper, we introduce Veloz, a novel distribution framework for code-based multilinear PCS, which for the first time achieves communication cost sublinear in N and eliminates the dependence of proof size on the number of sub-provers, without compromising proving speed or security. At its core is a customized proof aggregation method for interleaved codes that combines sub-proofs with minimal communication. We further present two instantiations of Veloz: one based on Reed–Solomon codes, Veloz_(RS), achieves $O(\frac{N}{M}\log{\frac{N}{\log{N}}})$ proving time, $O(\lambda \cdot \frac{\log^{2}{N}}{\log\log{N}} + M\cdot\frac{N}{\log{N}})$ communication, and $O(\lambda \cdot \frac{\log^{2}{N}}{\log\log{N}})$ proof size; the other, based on the fast linear code from Brakedown (Golovnev et al., CRYPTO 2023), Veloz_(Lin), features $O(\frac{N}{M})$ proving time, $O(\lambda \cdot K + M\cdot\frac{N}{K})$ communication, and O(λ ⋅ K) proof size for $K \in [\sqrt[3]{N}, \sqrt{N}]$, while enjoying field agnosticity.

We also implement both schemes in Rust and conduct a comprehensive performance evaluation. The experimental results demonstrate their linear scalability with increasing M. More specifically, for N = 2³⁰ and M = 8, Veloz_(RS) takes 74.8 s for proof generation, achieving a 5.18× speedup compared to running a single prover, while Veloz_(Lin) generates a proof in 26.9 s and achieves a 7.02× speedup.
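As a quick sanity check, the stated speedups imply the single-prover baselines and parallel efficiencies below. This is simple arithmetic on the numbers reported in the abstract, nothing more:

```python
# Back-of-the-envelope check of the reported figures (N = 2**30, M = 8).
M = 8
rs_time, rs_speedup = 74.8, 5.18    # Veloz_RS: distributed proving time (s), speedup
lin_time, lin_speedup = 26.9, 7.02  # Veloz_Lin

rs_single = rs_time * rs_speedup    # implied single-prover baseline (s)
lin_single = lin_time * lin_speedup

rs_eff = rs_speedup / M             # parallel efficiency vs. an ideal M-fold speedup
lin_eff = lin_speedup / M

print(f"RS baseline ~{rs_single:.0f} s, efficiency {rs_eff:.0%}")
print(f"Lin baseline ~{lin_single:.0f} s, efficiency {lin_eff:.0%}")
```

The implied baselines are roughly 387 s and 189 s, so the 8-prover runs recover about 65% and 88% of the ideal M-fold speedup, respectively.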


Veloz: Efficient and Flexible Distribution Framework for Code-Based Polynomial Commitment Scheme (Yuanzhuo Yu, Shi-Feng Sun, Yuncong Zhang, Chenhua Fan, Tianyi Ma, Dawu Gu) ia.cr/2026/714

2 days ago
Abstract. Designing cryptographic hash functions that simultaneously achieve high throughput and strong provable security remains a fundamental challenge in symmetric cryptography. Traditional constructions, such as the Merkle-Damgård (MD) and SPONGE paradigms, inherently suffer from structural limitations: they either expose internal chaining values to length-extension attacks or impose a rigid trade-off between security (capacity) and efficiency (rate), bounding their architectural potential.

In this paper, we introduce a modular design paradigm that structurally decouples a hash function into two independent components with distinct security objectives: (1) a variable-input-length (VIL) compression component optimized for high-speed message absorption, requiring only collision resistance and multiple-preimage resistance; and (2) a fixed-input-length (FIL) finalization component utilizing independent random permutations to ensure indifferentiability from a random oracle. This separation enables the VIL component to maximize processing throughput (approaching the full primitive width) while the FIL component provides robust randomness extraction, effectively mitigating length-extension attacks and achieving tight security bounds.

Leveraging this paradigm, we propose the Rocket family, comprising two instantiations: one based on the JH iteration structure with a wide-pipe design, and one utilizing dual parallel Cipher-Block-Chaining lanes. Both constructions employ pairwise distinct round permutations and achieve superior message processing rates compared to conventional SPONGE-based designs, with Rocket-2 delivering more than 2× the throughput of SHA3-512 for large messages.

For practical instantiation without requiring multiple independent cryptographic primitives, we present a novel domain-separation technique that derives 2^w effectively independent round functions from a single large permutation using counter-based input diversification. While backward queries introduce a heuristic assumption regarding preimage multiplicity (bounded by a negligible failure probability ≤ 2⁻⁵²⁶ for counter size w ≥ 64 and hash length 512 bits), we prove the construction remains sound for practical message lengths up to 2^(w − 8) blocks.
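The counter-prefix idea can be sketched as follows. SHAKE-256 stands in for the large permutation here (it is an extendable-output function, not a permutation, so this is purely illustrative of the diversification pattern, not the paper's construction):

```python
import hashlib

W = 64  # counter width w in bits, matching the w >= 64 regime above

def round_function(i: int, x: bytes, outlen: int = 64) -> bytes:
    """The i-th derived round function: prepend a fixed-width counter to
    the input, then apply one shared primitive.  Distinct counters land in
    disjoint input domains, so the derived functions behave independently.
    SHAKE-256 is a stand-in; the real design diversifies a permutation."""
    assert 0 <= i < 2**W
    prefix = i.to_bytes(W // 8, "big")
    return hashlib.shake_256(prefix + x).digest(outlen)

# Distinct counters yield different outputs on the same input block:
a = round_function(0, b"message block")
b = round_function(1, b"message block")
assert a != b
```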

Finally, to formalize the security-efficiency trade-off, we propose a scale-invariant heuristic metric (H.E.) defined as the product of normalized security level and normalized processing rate. We demonstrate that conventional Merkle-Damgård and SPONGE constructions exhibit H.E. values fundamentally bounded by 1/8, whereas the Rocket constructions achieve H.E. values approaching 1/4 (specifically, 0.227 for Rocket-2 with Keccak-p[1600]), thereby establishing a new Pareto frontier in hash function design.
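The 1/8 bound for sponges can be checked numerically under one natural reading of the metric (an assumption on our part, not the paper's exact definition): a capacity-c sponge offers c/2 bits of security and rate r = b − c, both normalized by the permutation width b.

```python
# H.E.(c) = (c / (2b)) * ((b - c) / b): normalized security times normalized
# rate for a width-b sponge with capacity c.  The product is maximized at
# c = b/2, where it equals exactly 1/8.
b = 1600  # e.g. Keccak-p[1600]
he = lambda c: (c / (2 * b)) * ((b - c) / b)
best = max(range(1, b), key=he)
print(best, he(best))  # c = 800, H.E. = 0.125
```

Any other capacity choice only trades security against rate along this curve, which is why the bound is structural rather than a matter of tuning.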


How to construct even faster and indifferentiable hash functions from random permutations (Liting Zhang, Han Sui, Lei Zhang, Wenling Wu) ia.cr/2026/713

2 days ago
Abstract. We give a public key encryption scheme with plausible quasi-exponential security based on the conjectured intractability of two constraint satisfaction problems (CSPs), both of which are instantiated with a corruption rate of 1 − o(1). First, we conjecture the hardness of a new large alphabet random predicate CSP (LARP-CSP) defined over an arbitrary but strongly expanding factor graph, where the vast majority of predicate outputs are replaced with random outputs. Second, we conjecture the hardness of the standard kXOR problem defined over a random factor graph, again where the vast majority of parity computations are replaced with random bits. In support of our hardness conjecture for LARP-CSPs, we give a variety of lower bounds, ruling out many natural attacks including all known attacks that exploit non-random factor graphs.

Our public key encryption scheme is the first to leverage high-corruption CSPs while simultaneously achieving a plausible security level far above quasi-polynomial. At the heart of our work is a new method for planting cryptographic trapdoors based on the label-extended factor graph for a CSP.

Along the way to achieving our result, we give the first uniform construction of an error-correcting code that has an expanding, low-density generator matrix while simultaneously allowing for efficient decoding from a 1 − o(1) fraction of corruptions.
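A toy generator makes the corruption model for the second problem concrete. The parameters below are illustrative only (real instances use a corruption rate of 1 − o(1) and cryptographically sized dimensions):

```python
import random

def noisy_kxor(n=64, m=512, k=3, corruption=0.9, seed=1):
    """Toy kXOR instance over a random factor graph: each clause is a random
    k-subset of variables whose parity is taken under a planted assignment,
    and a `corruption` fraction of outputs is replaced by uniform bits."""
    rng = random.Random(seed)
    x = [rng.randrange(2) for _ in range(n)]      # planted assignment
    clauses, outputs = [], []
    for _ in range(m):
        support = rng.sample(range(n), k)         # a random k-edge of the factor graph
        parity = sum(x[i] for i in support) % 2
        if rng.random() < corruption:             # corrupt: overwrite with a coin flip
            parity = rng.randrange(2)
        clauses.append(support)
        outputs.append(parity)
    return x, clauses, outputs

x, clauses, outputs = noisy_kxor()
```

With most outputs overwritten, the surviving honest parities are sparse, which is what makes recovering x (or even distinguishing from random) plausibly hard.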


Public Key Encryption from High-Corruption Constraint Satisfaction Problems (Isaac M Hair, Amit Sahai) ia.cr/2026/712

2 days ago
Abstract. Masking is an effective defense against side-channel attacks, yet it remains costly under hardware constraints. The Caliptra Root-of-Trust is a representative case, where its masked ML-DSA implementation incurs about 6× area overhead. We propose a novel first-order masking solution that optimizes Caliptra, achieving significant improvements in area–delay efficiency. Compared to Caliptra’s ML-DSA reduction, our design achieves a 12.1× speedup, reducing LUTs by 86.7% and FFs by 94.5%, while improving area–delay efficiency by 91×. Test Vector Leakage Assessment (TVLA) with over 1,000,000 traces shows no first-order leakage; the design thus satisfies Caliptra’s security requirements while significantly improving implementation efficiency.
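For readers unfamiliar with masking, the share representation such designs operate on can be sketched as follows. This is generic first-order arithmetic masking over the ML-DSA modulus, not the paper's optimized reduction circuit:

```python
import secrets

Q = 8380417  # the ML-DSA prime modulus

def mask(x):
    """First-order additive (arithmetic) masking: x = (x0 + x1) mod Q with
    x0 uniform, so neither share alone carries information about x."""
    x0 = secrets.randbelow(Q)
    return x0, (x - x0) % Q

def refresh(shares):
    """Re-randomize the sharing without ever recombining the secret."""
    r = secrets.randbelow(Q)
    x0, x1 = shares
    return (x0 + r) % Q, (x1 - r) % Q

def unmask(shares):
    return sum(shares) % Q

s = mask(12345)
assert unmask(refresh(s)) == 12345
```

A masked reduction must keep every intermediate value in this two-share form; the cost of doing that in hardware is exactly what the paper's design attacks.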


Drop-In Masked Modular Reduction for ML-DSA: Cutting Side-Channel Cost in the Root-of-Trust (Merve Karabulut, Reza Azarderakhsh) ia.cr/2026/711

5 days ago
Abstract. Threshold signatures distribute trust across multiple parties, eliminating single points of failure and reducing insider and key-exfiltration risks, properties that are increasingly important for high-assurance deployments and recently emphasized by NIST’s Multi-Party Threshold Cryptography (MPTC) initiative. We present a practical t-out-of-n threshold variant and emulation of MAYO, a post-quantum signature candidate in NIST’s call for additional signatures. Our proposal builds upon the threshold MAYO design of Celi, Escudero and Niot (PQCrypto 2025), which we significantly refine to achieve practical performance. To this end, we introduce two algorithmic modifications to MAYO tailored for the distributed setting: (1) Explicit-Salt MAYO, which allows for pre-determined salts to enable a single-round online phase; and (2) Depth-Reduced MAYO, which restructures the signing algorithm to minimize the depth of secret-dependent operations. We then propose a unified protocol framework that integrates these techniques, plus other MPC-specific optimizations, with the goal of minimizing online latency. Finally, we provide a concrete instantiation and local emulation in the dishonest-majority setting, secure against active adversaries. Our emulation shows that threshold signing is practical at typical threshold sizes and amenable to deployment. By releasing an open-source implementation and reporting end-to-end performance, this work offers a concrete reference for the thresholdization of post-quantum signatures. The framework is not limited to MAYO and can be applied to the UOV family of signatures more generally.


Optimizing and Implementing Threshold MAYO (Diego Aranha, Giacomo Borin, Sofia Celi, Guilhem Niot) ia.cr/2026/710

5 days ago
Abstract. Retrieval-augmented generation (RAG) systems critically depend on a vector-retrieval stage that selects relevant documents from a large embedding database. In a RAG-as-a-Service setting, however, clients have no visibility into the server’s proprietary embeddings and index structures, and thus cannot distinguish faithful execution from arbitrary deviations. This motivates service consistency for retrieval: the returned results must be consistent with executing an agreed approximate nearest neighbor search (ANNS) algorithm on a fixed, committed database together with a fixed, committed ANNS index, while revealing nothing beyond the outputs themselves.

We present the first polynomial interactive oracle proof (PIOP) tailored to the widely deployed HNSW ANNS algorithm. Building on this PIOP, we introduce zkRAG, a zero-knowledge, succinct, non-interactive argument for RAG retrieval that enables practical verification. Our design achieves asymptotically optimal online prover efficiency, with prover time linear in the HNSW search trace length, while keeping verification succinct. We introduce several new techniques that may be of independent interest, including a hybrid lookup argument, a highly efficient checker-based PIOP for checking priority-queue updates, and an efficient checker for membership selector vectors. For a benchmark with 10⁶ vectors of dimension 128, single-thread zkRAG proves a typical HNSW query in 50 seconds, over 1000× faster than existing baselines, while keeping verification lightweight, demonstrating the feasibility of efficient zero-knowledge service-consistent RAG retrieval.
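The priority-queue checker enforces, inside a proof, the same consistency condition that a plain replay would check directly. A plain-replay version of that condition (illustrative only; zkRAG's actual checker is a PIOP, not Python):

```python
import heapq

def check_pq_trace(trace):
    """Replay a claimed priority-queue trace of ('push', v) and
    ('pop', claimed_min) steps; return True iff every claimed pop
    really was the minimum at that point in the trace."""
    heap = []
    for op, v in trace:
        if op == "push":
            heapq.heappush(heap, v)
        elif op == "pop":
            if not heap or heapq.heappop(heap) != v:
                return False
    return True

assert check_pq_trace([("push", 5), ("push", 2), ("pop", 2), ("pop", 5)])
assert not check_pq_trace([("push", 5), ("push", 2), ("pop", 5)])
```

HNSW's candidate and result sets are priority queues, so certifying a search trace reduces largely to certifying sequences of exactly these updates.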


zkRAG: Efficiently Proving RAG Retrieval in Zero Knowledge (Yanze Jiang, Xinyang Yang, Xuanming Liu, Yanpei Guo, Jiaheng Zhang) ia.cr/2026/709

5 days ago
Abstract. In this report, we describe how block circulant (BC) codes can serve as an alternative to one-dimensional (1D) and two-dimensional (2D) Reed–Solomon (RS) codes in the Ethereum PeerDAS protocol. We begin by reviewing the implementation of 1D RS codes in PeerDAS and then present efficient encoding and reconstruction algorithms for BC codes, developed within the same implementation framework as the respective 1D RS algorithms. The proposed encoding algorithm also permits a graceful integration of the KZG commitment scheme. Finally, we evaluate and compare the performance of BC codes against both 1D and 2D RS codes in terms of code rate, stopping rate, and the size of the local codes they contain.
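The structural advantage of circulant blocks is that a matrix-vector product reduces to cyclic shifts and additions. A minimal GF(2) sketch of that economy (illustrative; not the PeerDAS encoding itself):

```python
def circulant_mul_gf2(first_row, v):
    """Multiply an n x n circulant matrix, given by its first row, by a
    vector v over GF(2).  Row i is the first row cyclically shifted by i,
    so entry (i, j) is first_row[(j - i) mod n]: the whole product needs
    only index shifts and XORs, with no full matrix ever stored."""
    n = len(first_row)
    assert len(v) == n
    out = []
    for i in range(n):
        acc = 0
        for j in range(n):
            acc ^= first_row[(j - i) % n] & v[j]
        out.append(acc)
    return out
```

A block circulant generator is a grid of such blocks, so encoding inherits this shift-and-XOR structure block by block.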


Block Circulant Codes for Ethereum PeerDAS (Birenjith Sasidharan, Emanuele Viterbo, Dankrad Feist) ia.cr/2026/708

5 days ago
Abstract. An improved low-memory hash function scheme called Alternating Sponge (ASP) is proposed to achieve beyond-birthday-bound security with respect to the capacity c. The scheme redesigns the state layout and permutation mechanism of the double sponge construction, effectively reducing the total state space from 2r + 2c bits to 2r + c bits and increasing the number of bits squeezed per round from r to 2r. The scheme introduces a two-phase serial permutation sequence to update the shared state, achieving a complete state transition without increasing the input size of the permutation function. A security proof is provided within the indifferentiability framework, demonstrating that the scheme achieves security comparable to the double sponge construction while operating with a reduced state size and an increased number of squeezed bits. Overall, the design ensures the compactness and diffusion of each round of processing through layered absorption and alternating permutations, providing a solution for deploying high-security hash functions in memory-constrained environments.
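The stated savings follow directly from the formulas in the abstract. For a Keccak-like parameterization r = 1088, c = 512 (example values chosen here for illustration, not taken from the paper):

```python
# Memory and throughput comparison from the abstract's formulas.
r, c = 1088, 512                  # illustrative rate and capacity, in bits
double_sponge_state = 2*r + 2*c   # 3200 bits of state
asp_state = 2*r + c               # 2688 bits: c bits (16% here) saved
squeeze_gain = (2*r) / r          # twice the output bits per round
print(double_sponge_state, asp_state, squeeze_gain)
```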


Alternating Sponge: A Low-Memory Hash Function with Beyond-Birthday-Bound Security (Ziyang Luo, Yaobin Shen, Hailun Yan, Lei Wang, Dawu Gu) ia.cr/2026/707

5 days ago
Abstract. The Permuted Kernel Problem (PKP), introduced by Shamir, is a computationally hard problem underlying recent post-quantum signature schemes such as PERK, SUSHSYFISH and PKP-DSS. Among these, PERK is a second-round candidate in NIST’s Additional Digital Signature Schemes post-quantum standardization process, SUSHSYFISH appeared at EUROCRYPT 2020, and PKP-DSS was one of the finalists in the CACR competition. The best known attacks on PKP rely on combinatorial meet-in-the-middle techniques, notably the algorithms of Koussa, Macario-Rat, and Patarin (KMP) and Santini, Baldi, and Chiaraluce (SBC), whose complexities remain super-exponential, with memory requirements of the same order as their time complexities.

In this work, we obtain improved cryptanalytic results for PKP that strictly outperform all previously known attacks. In particular, although no parameter sets of PERK v2.2.0 fall below the NIST security levels, we provide the first evidence that secret vector recovery for all PERK v2.2.0 parameter sets can be achieved with complexity below their estimated bit-security levels. We additionally obtain improved bit-complexity estimates for the SUSHSYFISH and PKP-DSS parameter sets. We further introduce a Schroeppel–Shamir style time–memory trade-off in the PKP setting. Although PKP does not admit square-root memory as in the classical subset-sum problem, our adaptation achieves substantial memory reductions while keeping the time complexity close to the best known attacks. Overall, our results provide improved cryptanalytic insight into PKP and refine the current understanding of the concrete security of PKP-based signature schemes.
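The combinatorial pattern behind such attacks (tabulate one half of the search space, then look the other half up) is easiest to see on subset-sum, the problem Schroeppel and Shamir originally addressed. The classical meet-in-the-middle below is a generic illustration of that O(2^(n/2)) time/memory shape, not the KMP or SBC algorithms themselves:

```python
from itertools import combinations

def mitm_subset_sum(vals, target):
    """Classical meet-in-the-middle for subset-sum: tabulate all sums of
    the left half, then for each right-half sum look up target - sum.
    Time and memory are O(2^(n/2)) instead of O(2^n); Schroeppel-Shamir
    style trade-offs then push the memory down further."""
    half = len(vals) // 2
    left, right = vals[:half], vals[half:]
    table = {}
    for k in range(half + 1):
        for idx in combinations(range(half), k):
            table.setdefault(sum(left[i] for i in idx), idx)
    for k in range(len(right) + 1):
        for idx in combinations(range(len(right)), k):
            rsum = sum(right[i] for i in idx)
            if target - rsum in table:
                lidx = table[target - rsum]
                return [left[i] for i in lidx] + [right[i] for i in idx]
    return None

sol = mitm_subset_sum([3, 34, 4, 12, 5, 2], 9)
assert sol is not None and sum(sol) == 9
```

In PKP the "sums" are partial kernel constraints rather than integers, which is why a direct square-root-memory analogue does not carry over.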

Abstract. The Permuted Kernel Problem (PKP), introduced by Shamir, is a computationally hard problem underlying recent post-quantum signature schemes such as PERK, SUSHSYFISH and PKP-DSS. Among these, PERK is a second-round candidate in NIST’s Additional Digital Signature Schemes post-quantum standardization process, SUSHSYFISH appeared at EUROCRYPT~2020, and PKP-DSS was one of the finalists in the CACR competition. The best known attacks on PKP rely on combinatorial meet-in-the-middle techniques, notably the algorithms of Koussa, Macario-Rat, and Patarin (KMP) and Santini, Baldi, and Chiaraluce (SBC), whose complexities remain super-exponential, with memory requirements of the same order as their time complexities. In this work, we obtain improved cryptanalytic results for PKP that strictly outperform all previously known attacks. In particular, although no parameter sets of PERK~v2.2.0 fall below the NIST security levels, we provide the first evidence that secret vector recovery for all PERK~v2.2.0 parameter sets can be achieved with complexity below their estimated bit-security levels. We additionally obtain improved bit-complexity estimates for the SUSHSYFISH and PKP-DSS parameter sets. We further introduce a Schroeppel–Shamir style time–memory trade-off in the PKP setting. Although PKP does not admit square-root memory as in the classical subset-sum problem, our adaptation achieves substantial memory reductions while keeping the time complexity close to the best known attacks. Overall, our results provide improved cryptanalytic insight into PKP and refine the current understanding of the concrete security of PKP-based signature schemes.

Improved Cryptanalysis of the Permuted Kernel Problem with Applications to PERK v2.2.0, SUSHSYFISH and PKP-DSS (Abul Kalam, Sudeshna Karmakar, Arindam Mukherjee, Soumya Sahoo, Santanu Sarkar) ia.cr/2026/706

5 days ago
Abstract. Restricted Syndrome Decoding (ResSD) is a variant of the linear-code decoding problem in which each entry of the error must belong to a fixed small set of values. This problem underlies the security of CROSS, a post-quantum signature scheme that is one of the Round 2 candidates of NIST’s ongoing additional signatures call. We show that solutions to this problem can be deduced from vectors of a particular structure and small norm in newly constructed codes, in both the Hamming and Euclidean metrics. This allows us to reduce Restricted Syndrome Decoding to both code-based (Regular Syndrome Decoding) and lattice-based problems (Closest Vector Problem, List of Short/Close Vectors), increasing the attack surface and providing new insights into the security of ResSD. We evaluate our attacks on CROSS instances both theoretically and experimentally on reduced parameters.

Cross-Paradigm Models of Restricted Syndrome Decoding with Application to CROSS (Étienne Burle, Aleksei Udovenko) ia.cr/2026/705

5 days ago
Abstract. We give efficient formulas to evaluate isogenies of ordinary elliptic curves over finite fields of characteristic 2, extending the odd-characteristic techniques of Hisil–Costello and Renes to binary fields. For odd prime degree ℓ = 2s + 1, our affine product evaluation computes the image x-coordinate using 5sM field multiplications, or 4sM when the kernel points are normalized. We derive an inversion-free variant that evaluates the x-map in projective and twisted Kummer coordinates, allowing carried points to remain projective across successive isogeny steps. Over 𝔽_(2⁵¹¹), microbenchmarks show that the inversion-free projective and twisted variants are faster than Vélu-style x-evaluation when outputs are kept in projective/twisted form, while the affine one-inversion variant is about 4.2× faster.

Fast Isogeny Evaluation on Binary Curves (Gustavo Banegas, Nicolas Sarkis, Benjamin Smith) ia.cr/2026/704

6 days ago
Abstract. In general, public-key searchable encryption schemes require search time linear in the total number of ciphertexts. Xu, Wu, Wang, Susilo, Domingo-Ferrer, and Jin (IEEE Transactions on Information Forensics and Security, 2015) introduced Searchable Public-Key Ciphertexts with Hidden Structure (SPCHS). In SPCHS, ciphertexts associated with the same keyword are linked through a hidden structure that can be revealed using a trapdoor. This enables efficient extraction of matching ciphertexts and achieves optimal asymptotic search efficiency by traversing only the result set. However, a drawback of the SPCHS scheme and its subsequent works is that they are all pairing-based or rely on identity-based encryption (IBE) as a foundational component. To improve efficiency, this paper proposes a generic construction of SPCHS. Our core building block is Non-Interactive Key Exchange (NIKE), and the remaining components are symmetric-key primitives, namely secret-key encryption and a pseudorandom function (PRF). In particular, our search algorithm depends solely on these symmetric-key primitives and is independent of any public-key primitives. We implement the most efficient instantiation of the generic construction using Diffie-Hellman NIKE, an HMAC-based PRF, and AES. Our evaluation demonstrates that the search time of the DH-based instantiation is over 250 times faster than that of the asymmetric pairing-based scheme by Wang et al. (IEEE Transactions on Industrial Informatics, 2020). Moreover, we reduce the ciphertext size by a factor of approximately 20. In addition to this practical contribution of significantly improving efficiency, the proposed generic construction also makes a theoretical contribution by enabling the realization of the first set of post-quantum SPCHS schemes based on lattices, isogenies, and codes.
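The hidden-chain idea can be sketched in a few lines. The toy below (our own names and simplifications, not the paper's construction) assumes the per-keyword trapdoor k_w is already shared — in the paper it would be derived via a Diffie-Hellman NIKE — and uses HMAC-SHA256 as the PRF with a PRF-derived XOR stream standing in for secret-key encryption. The point it illustrates is that search touches only symmetric-key primitives and only the matching ciphertexts:

```python
# Hedged toy sketch of a hidden chain of ciphertexts for one keyword.
# Ciphertexts live at pseudorandom addresses derived from the trapdoor k_w;
# the server walks the chain until the first miss, so search cost is
# proportional to the result set, not the total ciphertext count.
import hmac, hashlib

def prf(key: bytes, msg: bytes) -> bytes:
    return hmac.new(key, msg, hashlib.sha256).digest()

def address(k_w: bytes, i: int) -> bytes:
    return prf(k_w, b"addr" + i.to_bytes(4, "big"))

def toy_encrypt(k_w: bytes, i: int, data: bytes) -> bytes:
    # XOR stream derived from the PRF; decryption is the same operation.
    pad = prf(k_w, b"enc" + i.to_bytes(4, "big"))
    return bytes(a ^ b for a, b in zip(data, pad))

def store(server: dict, k_w: bytes, payloads):
    for i, pt in enumerate(payloads):
        server[address(k_w, i)] = toy_encrypt(k_w, i, pt)

def search(server: dict, k_w: bytes):
    out, i = [], 0
    while (ct := server.get(address(k_w, i))) is not None:
        out.append(toy_encrypt(k_w, i, ct))
        i += 1
    return out

server = {}
store(server, b"trapdoor-for-keyword", [b"doc1", b"doc2"])
print(search(server, b"trapdoor-for-keyword"))  # → [b'doc1', b'doc2']
```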

Quick Draw Queries: Lightweight Searchable Public-key Ciphertexts with Hidden Structures via Non-Interactive Key Exchange (Keita Emura, Toshihiro Ohigashi, Nobuyuki Sugio) ia.cr/2026/703

6 days ago
Abstract. We review the traditional techniques of message authentication from the perspective of Constructive Cryptography, one of the major composable security frameworks. For each of the ‘textbook’ means of message authentication, known results are compiled, and various gaps are addressed. We also point out areas of remaining study, and obstacles to demonstrating composability in those cases. The results can be thought of as a toolkit showing how to realise composable authentication with various cryptographic primitives, with precise statements on the security guarantees and set-up requirements.

A Constructive Treatment of Authentication (Christopher Battarbee) ia.cr/2026/702

6 days ago
Abstract. This paper studies efficient realizations of arithmetic over the binary field 𝔽₂ in nonabelian groups using only intrinsic group operations, namely multiplication and inversion. The constructions rely on commutators to implement Boolean computation within the group structure. Two complementary approaches are presented: a realization of a universal Boolean gate (NAND) and direct realizations of the field operations XOR and AND. These approaches apply to finite nonabelian simple groups and can be implemented using a small number of group operations. Explicit realizations are provided in the alternating groups A₅ and A₆. For the smallest nonabelian simple group A₅, these constructions achieve state-of-the-art efficiency in the number of group operations.

Boolean Arithmetic over 𝔽₂ from Group Commutators (Marc Joye) ia.cr/2026/701

6 days ago
Abstract. GRAFHE is a recently proposed noise-free fully homomorphic encryption scheme based on rewriting systems in symmetric groups. We provide an efficiently computable distinguisher, breaking IND-CPA security.

GRAFHEN is not IND-CPA secure (Remi Geraud-Stewart) ia.cr/2026/700

6 days ago
Abstract. Quantum computing threatens classical public key systems and has motivated NIST’s PQC standardization, and within this process HAWK has been submitted as a lattice-based signature scheme. While prior analyses focused on mathematical security, the consequences of side-channel leakage in practical HAWK implementations are less systematically understood. We address this gap by formalizing discrete Gaussian sampler leakage as HAWK with Hint and studying three leakage categories: (i) full coefficient recovery, (ii) sign-only recovery, and (iii) noisy sign recovery. For each category, including (i) which has been covered in prior work, we design algebraic key recovery algorithms that derive explicit linear relations between leaked information and the secret basis and solve the resulting structured linear systems in polynomial time.

For HAWK-1024, we obtain the following results under the stated threat models. With full coefficient leakage, a single signature suffices for secret key recovery by directly solving the induced linear system, improving prior work that required two signatures under the same conditions. With sign-only leakage, the secret key is recovered using 14 signatures in approximately 100 seconds. With noisy sign leakage, we achieve key recovery using 400 signatures within about 30 seconds at a 10% bit-error rate, and still succeed with roughly 7,000 signatures within about one minute even when the bit-error rate increases to 40%. We implement all attacks and empirically validate them, clarifying how measurement noise affects the required number of signatures.

HAWK with Hint: Algebraic Key Recovery from Side-Channel Leakage (Byoungchan Chi, Changmin Lee, Inhun Lee) ia.cr/2026/699

6 days ago
Abstract. Deduplication for encrypted data reduces the server's storage costs while keeping sensitive data secure. Fuzzy deduplication has been developed for multimedia data, which is deduplicated by similarity rather than exact equality, and hence offers better storage savings than exact deduplication. However, most existing works face security threats such as brute-force guessing attacks and key recovery attacks. We study fuzzy deduplication from the fundamental layer with these security goals in mind. In this work, data similarity is newly defined based on information-entropy theory, so that similarity verification can be performed by the cloud without the assistance of online clients. Moreover, parameter robustness verification is proposed for the first time to fundamentally resist key recovery and guessing attacks. Simulation experiments show that our scheme saves approximately 87% of storage space while maintaining a tamper detection success rate of no less than 97%.

Entropy-based Fuzzy Deduplication with Perfect Resistance to Key Recovery Attack (Zehui Tang, Shengke Zeng, Minfeng Shao) ia.cr/2026/698

6 days ago
Abstract. Ring signatures are cryptographic primitives that enable a user to sign a message on behalf of a group of users in an anonymous manner. The anonymity of the signer is the fundamental security property of ring signatures. However, unrestricted anonymity can be misused, as a malicious signer could use it to send spam or excessive messages. To address this issue, a controlled restriction on signer anonymity is required, which makes such schemes more practical. In this paper, we propose a k-times traceable ring signature scheme that allows the public key of the signer to be traced publicly if the signer exceeds a predetermined signing limit k. The novelty of our construction lies in its reliance on lattice-based assumptions, which ensure post-quantum security. Consequently, the proposed scheme is well-suited for practical deployment in the presence of emerging quantum threats. Our approach achieves competitive performance while providing stronger security guarantees, making it an appropriate candidate for modern cryptographic applications. We also present efficiency analysis and compare our scheme with existing constructions.

Post-Quantum Secure k-Times Traceable Ring Signature (Vishal Pareek, Aditi Kar Gangopadhyay, Sugata Gangopadhyay) ia.cr/2026/697

6 days ago
Abstract. We study key-schedule design under boundary round-key leakage, namely leakage of the first round key, the last round key, or both end round keys. We propose the nonlinear key-schedule RK_i = K ⊕ F(K ⊕ T(i)), where K is the master key, T(i) is a public domain-separation value, and F is a public SPN-based permutation parameterized by its round count N_F.

Under the boundary-leakage model considered in this paper, leakage of one end round key yields an instance of the equation Z = U ⊕ F(U), whereas leakage of both end round keys yields a differential constraint of the form F(U) ⊕ F(U ⊕ Δ) = Γ, where Δ is determined by the two end indices and Γ is derived from the two leaked round-key values. These reductions clarify the nonlinear systems induced by boundary leakage and the absence of a linear elimination route to the master key.

We also evaluate reduced variants of the resulting systems through Gröbner basis experiments, and further examine them by SAT-based key-recovery experiments and right-censored runtime analysis via a Weibull AFT model. Within the tested range, we do not observe degree collapse or unusually strong linear bias. These results provide heuristic support for the view that, under the boundary-leakage model considered here, the tested instantiations of the proposed key-schedule family do not admit an obvious efficient inversion route.
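The abstract's leakage reduction can be checked directly. In the sketch below, F is a stand-in invertible mixing function on 64-bit words, not the paper's SPN-based permutation, and T(i) is an arbitrary toy domain-separation value; the assertion verifies that leaking one end round key RK_i yields exactly an instance of Z = U ⊕ F(U) with U = K ⊕ T(i), independently of the choice of F:

```python
# Sketch of the key schedule RK_i = K xor F(K xor T(i)) and the
# boundary-leakage equation from the abstract. F is a toy stand-in
# permutation, NOT the paper's SPN; the algebra holds for any F.
MASK = (1 << 64) - 1

def F(x: int) -> int:
    # toy public mixing function on 64-bit words (xorshift-style rounds)
    for r in range(4):
        x ^= (x << 13) & MASK
        x ^= x >> 7
        x ^= (x << 17) & MASK
        x = (x + 0x9E3779B97F4A7C15 + r) & MASK
    return x

def T(i: int) -> int:
    # toy public domain-separation value
    return (0xA5A5A5A5A5A5A5A5 * (i + 1)) & MASK

def round_key(K: int, i: int) -> int:
    return K ^ F(K ^ T(i))

# Leaking the end round key RK_i gives Z = RK_i xor T(i) = U xor F(U),
# where U = K xor T(i): a nonlinear equation in U with no linear route to K.
K, i = 0x0123456789ABCDEF, 0
U = K ^ T(i)
Z = round_key(K, i) ^ T(i)
assert Z == U ^ F(U)
print(hex(Z))
```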

A Key Schedule Design and Evaluation under Boundary Round-Key Leakage (Yu Morishima, Hideki Yoshikawa, Masahiro Kaminaga) ia.cr/2026/696

6 days ago
Abstract. Multi-scalar multiplication (MSM), MSM(P⃗, x⃗) = x₁P₁ + ⋯ + xₙPₙ, is a dominant computational kernel in discrete-logarithm–based cryptography and often becomes a bottleneck for verifiers and other resource-constrained clients. We present 2G2T, a simple protocol for verifiably outsourcing MSM to an untrusted server. 2G2T is efficient for both parties: the server performs only two MSM computations and returns only two group elements to the client, namely the claimed result A = MSM(P⃗, x⃗) and an auxiliary group element B. Client-side verification consists of a single length-n field inner product and only three group operations (two scalar multiplications and one group addition). In our Ristretto255 implementation, verification is up to ~300× faster than computing the MSM locally using a highly optimized MSM routine (for n up to 2¹⁸). Moreover, 2G2T enables latency-hiding verification: nearly all verifier work can be performed while waiting for the server’s response, so once (A, B) arrives the verifier completes the check with only one scalar multiplication and one group addition (both independent of n). Finally, despite its simplicity and efficiency, we prove that 2G2T achieves statistical soundness: for any (even unbounded) adversarial server, the probability of accepting an incorrect result is at most 1/q per query, and at most e/q over e adaptive executions, in a prime-order group of size q.
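For readers unfamiliar with the primitive, MSM itself is just the indicated sum. A minimal sketch using the additive group ℤ_q as a stand-in for an elliptic-curve group (so "scalar multiplication" x·P is plain multiplication mod q); this illustrates only the object being outsourced, not the 2G2T protocol:

```python
# Naive MSM over the additive group Z_q (toy stand-in for a curve group).
# On a real group, each x * P term is a scalar multiplication, which is
# why offloading the whole sum to a server is attractive for clients.
q = 2**61 - 1  # a prime group order (toy choice, not Ristretto255's)

def msm(points, scalars, q=q):
    assert len(points) == len(scalars)
    return sum(x * P for x, P in zip(scalars, points)) % q

points  = [3, 141, 5926]
scalars = [2, 7, 18]
print(msm(points, scalars))  # 2*3 + 7*141 + 18*5926 mod q
```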

2G2T: Constant-Size, Statistically Sound MSM Outsourcing (Majid Khabbazian) ia.cr/2026/695

6 days ago
Abstract. In this note we introduce the concept of proximity signatures, where verifiers who can only access a small part of some data would like a guarantee that (a) this data is “close” to a uniquely decodable message (so the message can be decoded from the data via error decoding) and (b) the uniquely decodable message is signed by an associated secret key. This is useful in situations where the message is very large but the verifiers are small devices who only need the guarantee that the message was signed by some specific secret key or set of keys. As a motivating example, we consider the data availability problem. There, users submit large signed pieces of data that together form a larger data matrix. The signatures and integrity of this data must then be checked by nodes who can only download a small proportion of this matrix. We present a construction inspired by linear subspace signatures.

Proximity Signatures (Guillermo Angeris, Kobi Gurkan) ia.cr/2026/694

6 days ago
Abstract. Hamming Quasi-Cyclic (HQC) has been selected by NIST for standardization in the post-quantum landscape. As deployment approaches, implementation security becomes as critical as mathematical hardness. In this work, we demonstrate that source-level constant-time coding is not a standalone guarantee: the compiled binary must inherently preserve this behavior.

We identify a severe compiler-induced vulnerability within the official AVX2-optimized implementation of HQC, despite its claims of being constant-time. Although the C source code relies on secure, mask-based conditional selection, certain compiler optimizations rewrite this logic systematically. This transformation silently introduces secret-dependent control flow into the inner Reed-Muller decoding process, resulting in secret-dependent cache access patterns.

Exploiting this vulnerability, we mount, to the best of our knowledge, the first cache-timing Full-Decryption-style oracle attack against a post-quantum cryptosystem. Using Flush+Reload on shared libraries, an unprivileged co-located adversary can extract fine-grained predicates of the decoder’s internal state. To achieve full key recovery, we develop a novel, reliability-aware Soft Information Set Decoding (Soft-ISD) post-processing framework. Leveraging a GPU-accelerated meet-in-the-middle strategy optimized for heterogeneous platforms (including Apple Silicon), we demonstrate end-to-end secret key recovery for hqc-1 with less than 10 seconds of online trace collection.

Breaking Optimized HQC: The First Cache-Timing Full Decryption Oracle Key-Recovery Attack in Post-Quantum Cryptography (Haiyue Dong, Qian Guo) ia.cr/2026/693

6 days ago
Abstract. (Partially) blind signatures are foundational cryptographic primitives that enable privacy-preserving authentication in digital systems. Blind signatures allow a user to obtain a signature on a message without revealing its content, while partially blind signatures permit the controlled inclusion of agreed-upon public information into the signed message. These primitives are central to applications such as electronic voting, anonymous credentials, and digital cash.

The first isogeny-based construction of (partially) blind signatures, CSI-Otter, was proposed by Katsumata et al. at CRYPTO’23. However, its concurrent security was later broken by Katsumata et al. (CRYPTO’24) and Do et al. (EUROCRYPT’24). These findings imply that CSI-Otter is secure only in sequential settings. Recently, Hanzlik et al. (ASIACRYPT’25) have proposed a novel framework for concurrently secure blind signatures that can be instantiated from isogenies with smaller signature size (but larger public-key and secret-key sizes). It is not yet known whether their construction can be extended to partially blind signatures.

In this paper, we present a new and efficient construction of partially blind signatures based on isogenies with substantially smaller signature and public key sizes than CSI-Otter. This makes our construction the most compact post-quantum partially blind signature scheme known to date. Like CSI-Otter, our scheme uses a small challenge space and therefore achieves security only in the sequential setting. Our design follows the Abe-Okamoto paradigm in the group action setting, building upon the framework introduced by Tessaro and Zhu (EUROCRYPT’22), whose security is based on the Discrete Logarithm Problem. We rigorously prove the security of our scheme in the Algebraic Group Action Model and the Random Oracle Model, under the hardness assumption of the Group Action Discrete Logarithm Problem.

Efficient Partially Blind Signatures from Isogenies (Dung Hoang Duong, Chunpeng Ge, Xuan Thanh Khuc, Willy Susilo) ia.cr/2026/692

6 days ago
Abstract. Zero-knowledge Succinct Non-interactive Arguments of Knowledge (zk-SNARKs) are cryptographic protocols that allow a prover to convince verifiers of the correctness of a statement without revealing any additional information. Recent zk-SNARK constructions have shifted from univariate to multivariate polynomial-based designs, reducing the proving complexity from quasilinear to linear by avoiding costly univariate polynomial interpolation. This shift, however, makes the sumcheck protocol—a core primitive for verifying polynomial relations over the Boolean hypercube—the dominant component in the prover’s workload. Consequently, the iterative nature and intensive intermediate data movement of the sumcheck protocol introduce severe performance bottlenecks in CPU- and GPU-based implementations, especially for large-scale multivariate polynomials.

In this paper, we present PipeSC, a resource-efficient, ASIC-based accelerator for sumcheck. PipeSC combines deep pipelining, modular-multiplier reuse, and a finite-state machine-based dependency scheduler to sustain high utilization of computational resources across phases of the protocol. In addition, we introduce an Equality-MLE generation module that employs hierarchical scheduling and multiplier reuse, yielding a unified hardware substrate shared by multiple proving subroutines.

Against state-of-the-art CPU, GPU, and ASIC implementations, PipeSC delivers substantial speedups over both the GPU and CPU implementations, while improving the area–time product (ATP) compared with the ASIC design. These results show that careful hardware–algorithm co-design and conflict-free scheduling substantially accelerate sumcheck, paving the way for fully integrated zk-SNARK hardware pipelines.
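To make the prover workload concrete, here is a minimal interactive sumcheck over the Boolean hypercube for a three-variable multilinear polynomial. The field, polynomial, and variable count are illustrative choices for this sketch, not taken from the paper.

```python
import random

# Minimal sumcheck: prover convinces verifier that the sum of a multilinear
# polynomial f over {0,1}^n equals the claimed value, one variable per round.
P = 2**61 - 1  # a Mersenne prime field

def f(x):  # example multilinear polynomial in 3 variables
    x1, x2, x3 = x
    return (3*x1*x2 + 5*x2*x3 + 7*x1 + 11) % P

n = 3
claim = sum(f((b1, b2, b3)) for b1 in (0, 1) for b2 in (0, 1) for b3 in (0, 1)) % P
fixed = []

for i in range(n):
    # Prover: restrict f to the i-th variable, summing out the remaining
    # Boolean variables. Multilinearity makes g degree <= 1, so sending
    # g(0) and g(1) determines it.
    def g(X):
        rest = n - i - 1
        total = 0
        for bits in range(2**rest):
            tail = [(bits >> j) & 1 for j in range(rest)]
            total += f(tuple(fixed + [X] + tail))
        return total % P
    g0, g1 = g(0), g(1)
    assert (g0 + g1) % P == claim          # verifier's per-round check
    r = random.randrange(P)
    claim = (g0 + (g1 - g0) * r) % P       # g(r) by linear interpolation
    fixed.append(r)

assert claim == f(tuple(fixed)) % P        # final oracle query to f
print("sumcheck accepted")
```

The per-round restrictions and their data movement are exactly the iterative, dependency-heavy pattern the accelerator pipelines.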

PipeSC: A Resource-efficient and Pipelined Hardware Accelerator for Sumcheck Protocol (Kaixuan Wang, Yifan Yanggong, Xiaoyu Yang, Chenti Baixiao, Lei Wang) ia.cr/2026/691

6 days ago
Abstract. Adaptor signatures extend digital signatures with conditional disclosure capabilities, enabling atomic swaps, payment channels, and other advanced blockchain protocols. Although post-quantum adaptor signatures have been explored under lattice, isogeny, and coding-theoretic assumptions, no constructions have yet been realised from the multivariate quadratic (MQ) family of signatures. Classical algebraic adaptor techniques rely on embedding the witness into signing randomness, which is natural for discrete-log-based schemes but does not apply to MQ signatures such as UOV and MAYO: to the best of our knowledge, MQ signing randomness lacks the algebraic structure needed for witness embedding, and no such algebraic adaptor construction is currently known. This motivates a different approach. We propose MWAS, the first commitment-based adaptor-style construction for MQ signatures, specifically UOV and MAYO from the NIST PQC process, implemented via the Open Quantum Safe library. Our construction uses a lightweight HMAC-SHA256 commitment and a concatenation-based adaptation, supporting a hash-preimage witness relation. We prove correctness, witness hiding, and witness extractability in the ROM under MQ-hardness and PRF assumptions. A prototype implementation on a Raspberry Pi 5 shows pre-signature generation under 0.4 ms for UOV and 0.5–3.3 ms for MAYO across 128–256-bit security levels, with throughput up to 710 ops/s and public key sizes of 1.4 KB (MAYO) to 1.2 MB (UOV). These results indicate that commitment-based MQ adaptor signatures are a viable post-quantum option for settings where hash-preimage witness relations are appropriate.
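The pre-sign / adapt / extract flow of an adaptor signature with a hash-preimage witness relation (Y = SHA256(w)) can be sketched in miniature. In this hypothetical sketch an HMAC stands in for the UOV/MAYO signature, and every function name is illustrative; it only demonstrates the flow and witness extractability, not the MWAS construction itself.

```python
import hashlib
import hmac
import secrets

# Hypothetical adaptor-style flow for the relation Y = SHA256(w).
# sign() is a stand-in MAC playing the role of an MQ signature scheme.
def sign(sk, msg):
    return hmac.new(sk, msg, hashlib.sha256).digest()

def pre_sign(sk, msg, Y):
    # HMAC-SHA256 commitment binding the statement Y to the message,
    # combined by concatenation into the signed input.
    commit = hmac.new(Y, msg, hashlib.sha256).digest()
    return commit, sign(sk, msg + commit)

def adapt(pre_sig, w):
    commit, sig = pre_sig
    return commit, sig, w          # completing the signature reveals w

def extract(pre_sig, full_sig):
    return full_sig[2]             # witness extractability

w = secrets.token_bytes(16)                 # secret witness
Y = hashlib.sha256(w).digest()              # public statement
sk = secrets.token_bytes(32)
pre = pre_sign(sk, b"tx", Y)
full = adapt(pre, w)
assert hashlib.sha256(extract(pre, full)).digest() == Y
print("extracted witness satisfies the relation")
```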

Multivariate Witness-Hiding Adaptor Signatures (Ayush Meshram, Ayush Banerjee) ia.cr/2026/690

6 days ago
Abstract. Ensuring ciphertext indistinguishability is fundamental to cryptographic security, but empirically validating this property in real implementations and hybrid settings presents practical challenges. The transition to post-quantum cryptography (PQC), with its hybrid constructions combining classical and quantum-resistant primitives, makes empirical validation approaches increasingly valuable. By modeling indistinguishability under chosen-plaintext attack (IND-CPA) games as binary classification tasks and training on labeled ciphertext data with binary cross-entropy loss, we study deep neural network (DNN) distinguishers for ciphertext indistinguishability. We apply this methodology to PQC key encapsulation mechanisms (KEMs). We specifically test the public-key encryption (PKE) schemes used to construct examples such as ML-KEM, BIKE, and HQC. Moreover, a novel extension of this DNN modeling for empirical distinguishability testing of hybrid KEMs is presented. We implement and test this on combinations of PQC KEMs with unpadded RSA, RSA-OAEP, and plaintext. Finally, methodological generality is illustrated by applying the DNN IND-CPA classification framework to cascade symmetric encryption, where we test combinations of AES-CTR, AES-CBC, AES-ECB, ChaCha20, and DES-ECB. In our experiments on PQC algorithms, KEM combiners, and cascade encryption, no algorithm or combination of algorithms demonstrates a significant advantage (evaluated via two-sided binomial tests with significance level α = 0.01), consistent with theoretical guarantees that hybrids including at least one IND-CPA-secure component preserve indistinguishability, and with the absence of exploitable patterns under the considered DNN adversary model. 
These results illustrate the potential of using deep learning as an adaptive, practical, and versatile empirical estimator for indistinguishability in more general IND-CPA settings, allowing data-driven validation of implementations and compositions and complementing analytical security analysis.
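The evaluation methodology (the IND-CPA game recast as binary classification, judged by a two-sided binomial test at α = 0.01) can be sketched in miniature. A one-time pad stands in for the tested ciphers and a naive majority-bit rule stands in for the DNN distinguisher; both substitutions are assumptions of this sketch, not the paper's setup.

```python
import math
import random

# IND-CPA game as classification: guess which of two challenge messages was
# encrypted, then test whether accuracy deviates significantly from 1/2.
random.seed(1)
N = 2000
m0, m1 = 0, (1 << 128) - 1      # challenge messages: all-zero vs all-one bits

def encrypt(m):                  # one-time pad over 128 bits
    return m ^ random.getrandbits(128)

def distinguish(ct):             # toy rule: guess b=1 if most bits are set
    return 1 if bin(ct).count("1") > 64 else 0

wins = 0
for _ in range(N):
    b = random.getrandbits(1)
    ct = encrypt(m1 if b else m0)
    wins += (distinguish(ct) == b)

# Two-sided exact binomial test of H0: success probability = 1/2
k = min(wins, N - wins)
p_value = min(1.0, 2 * sum(math.comb(N, i) for i in range(k + 1)) / 2**N)
print(f"accuracy {wins/N:.3f}, p-value {p_value:.3f}")
```

A p-value above the significance level means the distinguisher has no detectable advantage, which is the outcome the paper reports across its PQC, combiner, and cascade experiments.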

Evaluating PQC KEMs, Combiners, and Cascade Encryption via Adaptive IND-CPA Testing Using Deep Learning (Simon Calderon, Niklas Johansson, Onur Günlü) ia.cr/2026/689

1 week ago
Abstract. The Learning with Errors (LWE) problem forms the foundation for numerous post-quantum cryptographic schemes, such as the NIST-selected CRYSTALS-KYBER and CRYSTALS-DILITHIUM. Algebraic analysis of LWE traditionally relies on solving the Arora-Ge system via Gröbner bases, yet its performance is far from satisfactory when only a limited number of samples is available. Meanwhile, recent dual attacks have proven highly effective against concrete LWE-based algorithms. This gap motivates us to investigate whether integrating the technique from dual attacks into algebraic analysis can have a positive effect.

We propose a novel, two-stage algebraic algorithm for LWE. First, dual lattice reduction is applied to transform the original samples into a reduced dimension. From an algebraic perspective, this stage reduces the number of variables at the cost of increasing the noise. Second, instead of solving the classic Arora-Ge system, we introduce a new polynomial construction that exploits the error distribution and solves it via a resultant-based method. When given m = n samples, our two-stage algorithm yields better complexity estimates for CRYSTALS-KYBER than Gröbner bases reported in Steiner’s work (Eurocrypt 2024). As an independent contribution, we show that for LWE with a small secret, applying the resultant-based method directly to the Arora-Ge system provides a provable complexity estimate that achieves an exponential speed-up over the proven bounds established by Steiner.

Finally, we show how various forms of side information—namely, perfect hints, modular hints, and approximate hints—can be systematically incorporated into our two-stage algorithm.
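The Arora-Ge system the paper starts from is easy to state: if every sample's error lies in a known small set E, then each LWE sample (a, b) with b = ⟨a, s⟩ + e (mod q) satisfies the polynomial identity ∏_{e′∈E}(b − ⟨a, s⟩ − e′) ≡ 0 (mod q). The brute-force toy below (parameters chosen purely for illustration) shows these polynomials pinpointing the secret; the paper replaces this exhaustive search with resultant-based elimination at cryptographic scale.

```python
import random
from itertools import product

# Tiny LWE instance with ternary error, solved by checking which candidate
# secrets make every Arora-Ge polynomial vanish.
random.seed(0)
q, n, m = 97, 2, 8
E = (-1, 0, 1)                                   # known error support
s = tuple(random.randrange(q) for _ in range(n)) # secret
samples = []
for _ in range(m):
    a = tuple(random.randrange(q) for _ in range(n))
    e = random.choice(E)
    samples.append((a, (sum(ai*si for ai, si in zip(a, s)) + e) % q))

def arora_ge_vanishes(cand):
    # True iff prod_{e' in E}(b - <a, cand> - e') == 0 mod q for all samples
    for a, b in samples:
        inner = sum(ai*ci for ai, ci in zip(a, cand)) % q
        acc = 1
        for e in E:
            acc = acc * (b - inner - e) % q
        if acc != 0:
            return False
    return True

solutions = [c for c in product(range(q), repeat=n) if arora_ge_vanishes(c)]
print("candidates surviving all Arora-Ge polynomials:", len(solutions))
assert s in solutions
```

With enough samples the surviving set typically shrinks to the true secret alone, which is why the number of available samples governs the attack's feasibility.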

Too Far Behind? Narrowing the Gap with a Dual-Enhanced Two-Stage Algebraic Framework for LWE (Rui-Jie Wang, Hong-Sen Yang, Zhong-Xiao Wang, Qun-Xiong Zheng) ia.cr/2026/688

1 week ago
Abstract. Emerging 6G communication systems impose unprecedented requirements on cryptographic primitives, demanding ultra-high throughput, low latency, and strong resistance to implementation-level attacks. LOL2.0 is a recently proposed stream cipher framework that achieves high software efficiency and strong security in post-quantum settings. While several stream ciphers have been proposed to address performance demands, side-channel-protected hardware implementations capable of sustaining 6G-class throughput remain largely unexplored.

In this work, we present the first side-channel-protected hardware implementation that meets the 6G-class throughput demand. Focusing on the LOL2.0 stream cipher framework, we leverage Time Sharing Masking to achieve first-order security under the glitch-extended probing model. This design realizes full-phase protection covering initialization, keystream generation, and tag generation. To address diverse deployment requirements, we design two masked architectures: a compact variant optimized for area and randomness efficiency, and a fast variant targeting the maximum achievable throughput.

The proposed fast implementations achieve peak throughputs of 183 Gbps and 142 Gbps for the unmasked and masked configurations, respectively. Meanwhile, the compact architecture reduces hardware cost by achieving areas as low as 23.25 kGE and 141.98 kGE in unmasked and masked designs, respectively, while still maintaining competitive throughput. Security is validated through practical side-channel evaluations using Test Vector Leakage Assessment on FPGA platforms. Across up to 100 million measured power traces, no statistically significant first-order leakage is observed for any protected configuration.

Overall, this work realizes side-channel-protected stream cipher hardware that sustains ultra-high throughput, providing a concrete path toward secure cryptographic deployment in future 6G communication systems.
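The leakage-assessment step can be illustrated with a minimal Welch's t-test in the fixed-vs-random TVLA style, using the customary |t| > 4.5 decision threshold. The synthetic Gaussian "traces" and the chosen leak location are assumptions of this sketch, not the paper's FPGA measurements.

```python
import math
import random

# Per-point Welch's t-test between two trace sets; |t| > 4.5 at any point
# is the standard first-order TVLA leakage flag.
random.seed(42)
T, POINTS = 500, 20   # traces per set, sample points per trace

def trace(leak=0.0):
    # Gaussian noise plus an optional data-dependent offset at point 7
    t = [random.gauss(0, 1) for _ in range(POINTS)]
    t[7] += leak
    return t

def welch_t(xs, ys):
    nx, ny = len(xs), len(ys)
    mx, my = sum(xs)/nx, sum(ys)/ny
    vx = sum((x-mx)**2 for x in xs)/(nx-1)
    vy = sum((y-my)**2 for y in ys)/(ny-1)
    return (mx - my) / math.sqrt(vx/nx + vy/ny)

fixed = [trace(leak=0.0) for _ in range(T)]   # "protected": no dependence
randm = [trace(leak=0.0) for _ in range(T)]
leaky = [trace(leak=0.5) for _ in range(T)]   # "unprotected": leaks at point 7

def max_abs_t(A, B):
    return max(abs(welch_t([a[p] for a in A], [b[p] for b in B]))
               for p in range(POINTS))

print("protected   max |t| =", round(max_abs_t(fixed, randm), 2))  # below 4.5
print("unprotected max |t| =", round(max_abs_t(fixed, leaky), 2))  # above 4.5
```

The paper's evaluation applies the same criterion across up to 100 million measured traces rather than synthetic ones.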

High-Throughput Side-Channel-Protected Stream Cipher Hardware for 6G Systems (Yuluan Cao, Cankun Zhao, Bohan Yang, Wenping Zhu, Hanning Wang, Min Zhu, Leibo Liu) ia.cr/2026/687

1 week ago
Abstract. Zero-knowledge proof (ZKP) schemes enable a prover to convince a verifier of the validity of a statement without revealing the underlying secret. These schemes have found extensive applications in secure communications, privacy-preserving transactions and blockchain technologies. However, the computational cost of proof generation remains a major obstacle to practical deployment. Although various acceleration techniques have been proposed, they often rely on specialized hardware that may not be locally available. A promising yet underexplored alternative is to offload computation to a more powerful third party, such as a cloud server, in a secure and efficient manner. Rather than outsourcing the entire proof generation process, selectively offloading the most computationally intensive operations offers greater flexibility and simplicity. In this work, we propose a secure outsourcing scheme for multi-scalar multiplication (MSM), which is the most computationally expensive operation in many widely used ZKP protocols. Our scheme enables users to delegate MSM computations to a server while preserving the confidentiality of the secret inputs (i.e., the scalars) and allowing verification of the server’s output. Our performance analysis shows that the proposed scheme significantly reduces the computational burden on the user while imposing only minimal overhead on the server.
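One simple way to see the outsourcing idea is additive scalar blinding, written multiplicatively over Z_p^* as a stand-in for elliptic-curve point arithmetic. The offline-precomputed unblinding factor and the final consistency check below are illustrative assumptions of this sketch; the paper's actual scheme, including its verification of the server's output, differs.

```python
import secrets

# Toy blinded MSM outsourcing: the user hides scalars s_i behind random
# masks r_i, the server computes the expensive multi-exponentiation on the
# blinded scalars, and the user removes the mask with a precomputed factor.
p = 2**127 - 1                  # Mersenne prime; Z_p^* has order p - 1
q = p - 1
n = 4
g = [secrets.randbelow(p - 2) + 2 for _ in range(n)]   # public bases
s = [secrets.randbelow(q) for _ in range(n)]           # secret scalars

# Offline: precompute a blinding pair (r_i, U = prod g_i^{r_i})
r = [secrets.randbelow(q) for _ in range(n)]
U = 1
for gi, ri in zip(g, r):
    U = U * pow(gi, ri, p) % p

# Online: send blinded scalars; the server never sees any s_i
t = [(si + ri) % q for si, ri in zip(s, r)]
server_out = 1
for gi, ti in zip(g, t):
    server_out = server_out * pow(gi, ti, p) % p       # server's MSM work

result = server_out * pow(U, p - 2, p) % p             # unblind: divide by U

# Demo-only sanity check against direct local computation
direct = 1
for gi, si in zip(g, s):
    direct = direct * pow(gi, si, p) % p
assert result == direct
print("outsourced MSM matches direct computation")
```

The point of the precomputation is that the user's online cost drops to cheap scalar additions and one group division, while the n exponentiations move to the server.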

Secure MSM Outsourcing Computation for Zero-knowledge Proof Generation (Wujie Xiong, Arefeh Rahaei, Sangwon Shin, Xinxin Fan, Taeweon Suh, Veronika Kuchta, Sica Francesco, Weidong Shi, Lei Xu) ia.cr/2026/686

1 week ago