Posts by siva

Abstract. We construct compilers that convert any secure signature scheme into a single-round blind signature scheme. An important property of the construction is that the final blind signature has exactly the same format as the underlying signature scheme, making the blind signature scheme backwards compatible with the underlying scheme. Our compilers make use of (two-key) fully homomorphic encryption and zero-knowledge proofs to ensure unforgeability and blindness of the final scheme. We present three compilers, where the main difference is which party does the bulk of the work: the client, the signer, or both. Along the way we introduce a new notion of verifiable FHE that we call committed verifiable FHE, where the verifier does not see the circuit in the clear.

A Universal Blinder: One-round Blind Signatures from FHE (Dan Boneh, Jaehyung Kim) ia.cr/2026/574

1 month ago 2 1 1 0

Perhaps a naive question: is it just interaction, or does the number of queries also matter in this case? Do the lower bounds hold if one allows for log k queries?

4 months ago 0 0 0 0
Abstract. We prove that SVP_(p) is NP-hard to approximate within a factor of 2^(log^(1 − ε)n), for all constants ε > 0 and p > 2, under standard deterministic Karp reductions. This result is also the first proof that SVP_(p) is NP-hard in a finite ℓ_(p) norm. Hardness for SVP_(p) with p finite was previously only known if NP ⊈ RP, and under that assumption, hardness of approximation was only known for all constant factors. As a corollary to our main theorem, we show that under the Sliding Scale Conjecture, SVP_(p) is NP-hard to approximate within a small polynomial factor, for all constants p > 2.

Our proof techniques are surprisingly elementary; we reduce from a regularized PCP instance directly to the shortest vector problem by using simple gadgets related to Vandermonde matrices and Hadamard matrices.
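For readers who want to see the basic object behind the gadget, here is the standard Vandermonde matrix itself (just the textbook definition, not the paper's specific gadget):

```python
# Standard Vandermonde matrix: V[i][j] = x_i ** j.
# Its defining property: V is invertible exactly when the evaluation points
# x_i are distinct, which is what makes it a natural source of "rigid" gadgets.

def vandermonde(points, cols):
    return [[x ** j for j in range(cols)] for x in points]

V = vandermonde([1, 2, 3], 3)
# V = [[1, 1, 1], [1, 2, 4], [1, 3, 9]]; det = (2-1)(3-1)(3-2) = 2
```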

SVP_(p) is Deterministically NP-Hard for all p > 2, Even to Approximate Within a Factor of 2^(log^(1 − ε)n) (Isaac M Hair, Amit Sahai) ia.cr/2025/2181

4 months ago 7 3 0 0

I didn't realise, it's from last Asiacrypt!

5 months ago 0 0 0 0

Excited to share eprint.iacr.org/2025/1905.pdf, which re-envisions how to use folding/accumulation in succinct proof systems.
We provide a new framework to build folding-based SNARKs by eliminating the need to prove Fiat-Shamir inside circuits and by introducing a high-arity lattice folding scheme.

6 months ago 2 2 1 0
Screenshot of the title and abstract of the paper "Vive Galois! Part 1: Optimal SIMD Packing and Packed Bootstrapping for FHE", by Chris Peikert and Zachary Pepin.

Abridged abstract:

The vast majority of work on the efficiency of lattice-based cryptography, including fully homomorphic encryption (FHE), has relied on cyclotomic number fields and rings.
This is because cyclotomics offer a wide variety of benefits, including good geometrical properties, fast ring arithmetic, and rich homomorphic operations like vectorized (SIMD) operations on "packed" plaintexts, automorphisms, and ring-switching.
However, selecting a suitable cyclotomic that has the desired number and type of plaintext "slots," while also balancing security and efficiency, is a highly constrained problem that often lacks an ideal solution, resulting in wasted SIMD capacity and lost efficiency.

This work provides a suite of tools for instantiating ring-based lattice cryptography to work over *subfields* of cyclotomics, which provide more flexibility and better-fitting parameters for applications.
A particular focus is on realizing FHE with *optimal plaintext packing* and homomorphic SIMD parallelism for *any* plaintext characteristic, along with efficient packed bootstrapping that fully exploits this parallelism.
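The paper itself works with subfields of cyclotomics, but the basic idea behind SIMD "slots" is a Chinese Remainder Theorem decomposition of the plaintext ring. As a deliberately simplified analogue (plain integers rather than ring elements; the moduli are invented for the demo), CRT packing makes a single arithmetic operation act on all slots at once:

```python
from math import prod

def crt_pack(values, moduli):
    """Pack values[i] mod moduli[i] into one residue mod prod(moduli)."""
    M = prod(moduli)
    x = 0
    for v, m in zip(values, moduli):
        Mi = M // m
        x += v * Mi * pow(Mi, -1, m)   # CRT basis element for slot i
    return x % M

def crt_unpack(x, moduli):
    return [x % m for m in moduli]

moduli = [5, 7, 11]                    # pairwise coprime "slots"
a = crt_pack([1, 2, 3], moduli)
b = crt_pack([4, 5, 6], moduli)
# one multiplication acts slot-wise on all packed values:
assert crt_unpack(a * b, moduli) == [(1*4) % 5, (2*5) % 7, (3*6) % 11]
```

In BGV/BFV the same phenomenon comes from the factorization of the cyclotomic (or, here, subfield) plaintext ring modulo the plaintext characteristic; the paper's contribution is making the number and type of those factors fit the application optimally.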

Thrilled to finally share this ⚡𝙣𝙚𝙬 𝙥𝙖𝙥𝙚𝙧⚡ with my (now-graduated!) student Zachary Pepin, which will appear at TCC 2025.

We tackle a frequent inconvenience in BGV/BFV-style homomorphic encryption: getting the desired kind of "SIMD slots" for plaintext packing. 🧵

web.eecs.umich.edu/~cpeikert/pu...

6 months ago 15 4 1 0
Abstract. One-round secure computation is generally believed impossible due to the residual function attack: any honest-but-curious participant can replay the protocol in their head changing their input, and learn, in this way, a new output. Inputless functionalities are among the few that are immune to this problem.

This paper studies one-round, multi-party computation protocols (MPC) that implement the most natural inputless functionality: one that generates a random sample from a fixed distribution. These are called distributed samplers. At Eurocrypt 2022, Abram, Scholl and Yakoubov showed how to build this primitive in the semi-honest model with dishonest majority. In this work, we give a lower bound for constructing distributed samplers with a malicious adversary in the standard model. More in detail, we show that for any construction in the stand-alone model with black-box simulation, even with a CRS and honest majority, the output of the sampling protocol must have low entropy. This essentially implies that this type of construction is useless in applications. Our proof is based on an entropic argument, drawing a new connection between computationally secure MPC, information theory and learning theory.
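The residual function attack is easy to make concrete. In the toy sketch below (the function and the one-message "protocol" are invented for illustration, and are trivially insecure on purpose), Bob holds Alice's single protocol message and replays the computation with inputs of his choice, extracting her input bit by bit:

```python
# Toy illustration of the residual function attack on one-round computation:
# Bob reuses Alice's single fixed message with many inputs of his own.

def f(x, y):
    return x & y              # the function being "securely" computed

alice_input = 0b1010
alice_message = alice_input   # stand-in for Alice's one protocol message

def residual(alice_msg, y):
    # Bob re-runs the protocol locally with a fresh input y
    return f(alice_msg, y)

# Bob learns f(x, y) for every y he likes -- here, recovering x bit by bit:
bits = [residual(alice_message, 1 << i) >> i for i in range(4)]
recovered = sum(b << i for i, b in enumerate(bits))
assert recovered == alice_input
```

An inputless functionality like a distributed sampler gives Bob no input to vary, which is why it evades this attack, and why the lower bound below has to work harder.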

On the Impossibility of Actively Secure Distributed Samplers (Damiano Abram, Serge Fehr, Maciej Obremski, Peter Scholl) ia.cr/2025/1730

7 months ago 1 1 0 0
Abstract. We resolve the Correlated Agreement (CA) problem for Reed-Solomon codes up to the information-theoretic capacity limit by introducing a fundamental change of basis: from the traditional evaluation domain to the syndrome space. Viewed through this “Syndrome-Space Lens,” the problem of proximity testing transforms into a transparent question of linear-algebraic geometry: a single affine line of syndromes traversing a family of low-dimensional subspaces. This new perspective makes a sharp phase transition at the capacity boundary visible, allowing for a complete characterization of the problem’s behavior across all parameter regimes, yielding short, self-contained proofs.

Classification. We establish a precise trichotomy organized by the rank margin Δ := t − d. At the capacity boundary (Δ = 0), the CA premise is information-theoretically vacuous, and we prove that no rigidity can be concluded without imposing additional structure. One step beyond capacity (Δ = 1), the problem enters a “knife-edge” regime where unconditional rigidity does not hold; soundness is recovered either through a combinatorial witness (such as a repeated error support or a small union of supports) or by adding protocol-level structure (such as independent two-fold MCA checks, DEEP/STIR out-of-domain sampling, or a global error locator). For stricter gaps (Δ ≥ 2), unconditional rigidity holds under a simple algebraic condition ((r + 1)k < m + 1), with explicit quantitative bounds.

MCA and Practical Implications. Below capacity (δ < 1 − ρ), the strengthened mutual correlated agreement (MCA) problem reduces to ordinary correlated agreement. MCA holds under the same hypotheses as CA. When folds are generated with independent challenges (e.g., via domain-separated Fiat-Shamir), the per-round security margins add. The model-scoped soundness law is Pr [FA] ≤ q^(−(∑Δ_(i))s), providing a clear and complete rulebook for selecting safe and efficient parameters in FRI/STARK systems. This work bypasses the complex machinery of list-decoding algorithms entirely and resolves the long-standing open problem concerning the gap between the Johnson bound and capacity.
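To get a feel for the stated soundness law Pr[FA] ≤ q^(−(∑Δ_i)s), here is the bookkeeping in the log domain, with entirely made-up parameters (the field size and margins below are illustrative, not taken from the paper):

```python
# Soundness bookkeeping for Pr[FA] <= q^(-(sum Delta_i) * s), in log2.
log2_q = 64          # q = 2^64 (an illustrative field size)
deltas = [2, 2, 3]   # hypothetical per-round rank margins Delta_i
s = 1                # repetition parameter

# per-round margins add when folds use independent challenges:
log2_bound = -log2_q * sum(deltas) * s
# for these made-up parameters: Pr[FA] <= 2^(-448)
```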

The Syndrome-Space Lens: A Complete Resolution of Proximity Gaps for Reed-Solomon Codes (Russell Okamoto) ia.cr/2025/1712

7 months ago 3 2 0 0

In polynomial commitments, aren't evaluation proofs nothing but a reduction from average-case correctness of the PCS to worst-case correctness of the PCS, inspired by LDCs and LTCs?

7 months ago 1 0 0 0

Witness-extended-emulatable SNARKs / polynomial commitments are secure under self-composition, but need not be UC secure, as witness-extended emulation involves rewinding. Am I right here?

8 months ago 0 0 0 0
Abstract. Constrained PRFs are PRFs that allow the generation of constrained keys, which can be used to evaluate the PRF on a subset of the inputs. The PRF is still pseudorandom for an adversary who obtains multiple of these constrained keys, on all inputs where none of the constrained keys allows it to evaluate the PRF. Constrained PRFs are known for some simple constraint classes (such as puncturing or intervals) from one-way functions, but for more powerful constraint classes such as bitfixing the only known constructions need heavy machinery, like indistinguishability obfuscation or multilinear maps.

In this work we show that constrained PRFs (for any constraint class) do not imply key agreement in a black-box way with an oracle separation. The result also applies to delegatable constrained PRFs, where constrained keys can be used to generate other, more restrictive constrained keys.

We show that this result has interesting applications for identity-based cryptography, where users obtain secret keys from a trusted, central authority. Namely, it shows that primitives that allow key agreement in this setting, like identity-based non-interactive key exchange and a weaker variant of identity-based encryption that we introduce in this work, do not imply key agreement and thus can exist even in a world where traditional key agreement and public-key encryption is impossible.

As a side result, we obtain a very simple proof of the classic result by Impagliazzo and Rudich (STOC 89) that separates key agreement from one-way functions. The proof is similar to a proof by Brakerski, Katz, Segev, and Yerukhimovich (TCC 2011), but more general because it applies to key agreement with non-perfect correctness. The proof also shows that Merkle puzzles are optimal, which Barak and Mahmoody (Crypto 2009) have shown before, but our proof is much simpler (and has tighter constants).
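The Merkle puzzles whose optimality is discussed above have a pleasantly short toy realization. The sketch below is illustration-only (the tag format, the XOR "cipher", and all parameters are invented for the demo): Alice publishes N weakly-encrypted puzzles, Bob brute-forces one of them in about N work and announces the id found inside, and an eavesdropper, not knowing which blob holds that id, expects to solve about N/2 puzzles, for roughly N²/2 total work:

```python
import hashlib, secrets

def pad(weak: bytes) -> bytes:
    return hashlib.sha256(b"pad|" + weak).digest()   # 32-byte XOR pad

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

N = 64  # number of puzzles; Eve's expected work ~ N^2/2 vs Bob's ~ N

# Alice: each puzzle hides (tag, id, key) under a weak key from a space of size N.
alice_keys = {pid: secrets.token_bytes(24) for pid in range(N)}
puzzles = []
for pid in range(N):
    weak = secrets.randbelow(N).to_bytes(4, "big")
    payload = b"OK!!" + pid.to_bytes(4, "big") + alice_keys[pid]  # 32 bytes
    puzzles.append(xor(payload, pad(weak)))
secrets.SystemRandom().shuffle(puzzles)   # hide which blob holds which id

# Bob: pick one puzzle, brute-force the weak key (O(N) work), read (id, key).
ct = secrets.choice(puzzles)
for guess in range(N):
    pt = xor(ct, pad(guess.to_bytes(4, "big")))
    if pt[:4] == b"OK!!":                 # correct weak key found
        pid, bob_key = int.from_bytes(pt[4:8], "big"), pt[8:]
        break

# Bob announces pid in the clear; Alice looks up the matching key.
assert bob_key == alice_keys[pid]
```

The quadratic gap between Bob's work and Eve's is exactly what the Barak–Mahmoody result (and the simpler proof here) shows is the best possible from this kind of construction.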


Limits on the Power of Constrained PRFs and Identity-based Cryptography (Roman Langrehr) ia.cr/2025/1338

9 months ago 3 1 0 0
A Fiat–Shamir Transformation From Duplex Sponges We analyze a variant of the Fiat–Shamir transformation based on an ideal permutation. The transformation relies on the popular duplex sponge paradigm, and minimizes the number of calls to the permutat...

We updated our paper on Fiat-Shamir!

We now take a closer look at the gap between what symmetric cryptography has focused on for over 10 years (indifferentiability) and what is actually needed for the soundness of ZKPs and SNARKs (something stronger!).

eprint.iacr.org/2025/536

9 months ago 15 5 2 0
Abstract. Interactive Oracle Proofs (IOPs) form the backbone of some of the most efficient general-purpose cryptographic proof-systems. In an IOP, the prover can interact with the verifier over multiple rounds, where in each round the prover sends a long message, from which the verifier only queries a few symbols.

State-of-the-art IOPs achieve a linear-size prover and a poly-logarithmic verifier but require a relatively large, logarithmic, number of rounds. While the Fiat-Shamir heuristic can be used to eliminate the need for actual interaction, in modern highly-parallelizable computer architectures such as GPUs, the large number of rounds still translates into a major bottleneck for the prover, since it needs to alternate between computing the IOP messages and the Fiat-Shamir hashes. Motivated by this fact, in this work we study the round complexity of linear-prover IOPs.

Our main result is an IOP for a large class of Boolean circuits, with only O(log^(*)(S)) rounds, where log^(*) denotes the iterated logarithm function (and S is the circuit size). The prover has linear size O(S) and the verifier runs in time polylog(S) and has query complexity O(log^(*)(S)). The protocol is both conceptually simpler, and strictly more efficient, than prior linear prover IOPs for Boolean circuits.
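To appreciate how small O(log*(S)) is, here is the iterated logarithm spelled out; it is at most 5 for any circuit size below 2^65536:

```python
# The iterated logarithm log*(n): how many times log2 must be applied
# before the value drops to at most 1.

from math import log2

def log_star(n):
    count = 0
    while n > 1:
        n = log2(n)
        count += 1
    return count

assert log_star(16) == 3          # 16 -> 4 -> 2 -> 1
assert log_star(2 ** 16) == 4     # 65536 -> 16 -> 4 -> 2 -> 1
```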

Linear Prover IOPs in Log Star Rounds (Noor Athamnah, Noga Ron-Zewi, Ron D. Rothblum) ia.cr/2025/1269

9 months ago 1 1 0 0
Abstract. Probabilistically checkable proofs (PCPs) allow encoding a computation so that it can be quickly verified by only reading a few symbols. Inspired by tree codes (Schulman, STOC’93), we propose tree PCPs; these are PCPs that evolve as the computation progresses so that a proof for time t is obtained by appending a short string to the end of the proof for time t − 1. At any given time, the tree PCP can be locally queried to verify the entire computation so far.

We construct tree PCPs for non-deterministic space-s computation, where at time step t, the proof only grows by an additional poly(s,log(t)) bits, and the number of queries made by the verifier to the overall proof is poly(s) ⋅ t^(ϵ), for an arbitrary constant ϵ > 0.

Tree PCPs are well-suited to proving correctness of ongoing computation that unfolds over time. They may be thought of as an information-theoretic analog of the cryptographic notion of incrementally verifiable computation (Valiant, TCC’08). In the random oracle model, tree PCPs can be compiled to realize a variant of incrementally verifiable computation where the prover is allowed a small number of queries to a large evolving state. This yields the first construction of (a natural variant of) IVC in the random oracle model.

Tree PCPs (Tamer Mour, Alon Rosen, Ron Rothblum) ia.cr/2025/1252

9 months ago 1 1 0 0

There are days when math feels scary and confusing.
There are other days when it brings order and peace to a grateful universe.

9 months ago 8 4 1 0
Abstract. Private constrained PRFs are constrained PRFs where the constrained key hides information about the predicate circuit. Although there are many constructions and applications of PCPRF, its relationship to basic cryptographic primitives, such as one-way functions and public-key encryptions, has been unclear. For example, we don’t know whether one-way functions imply PCPRFs for general predicates, nor do we know whether 1-key secure PCPRF for all polynomial-sized predicates imply public-key primitives such as public-key encryption and secret-key agreement.

In this work, we prove the black-box separation between a 1-key secure PCPRF for any predicate and a secret-key agreement, which is the first black-box separation result about PCPRF. Specifically, we prove that there exists an oracle relative to which 1-key secure PCPRFs exist while secret-key agreement does not. Our proof is based on the simulation-based technique proposed by Impagliazzo and Rudich (STOC 89). The main technical challenge in generalizing the simulation-based technique to PCPRF is the issue of the unfaithfulness of Eve’s simulation to the real world, because our oracle is more complicated than a random oracle. We introduce a new technique which we call the “weighting” technique and show how to leverage it to circumvent the issue of unfaithfulness in the proof framework of Impagliazzo and Rudich.

Limits on the Power of Private Constrained PRFs (Mengda Bi, Chenxin Dai, Yaohua Ma) ia.cr/2025/1196

9 months ago 1 1 0 0
Clément Canonne: What is deterministic amplification? (YouTube video by Sydney Mathematical Research Institute - SMRI)

Recently came across this fantastic talk by @ccanonne.github.io on deterministic amplification via expander graphs—elegant ideas, crystal-clear exposition. A real gem!

www.youtube.com/watch?v=3AAU...

9 months ago 14 2 1 0

Does anyone know the maximum size of a subgroup in which discrete log is polytime computable, in the group of units of the following Galois ring?

Z_{p}[X]/(X^{2^{k}}+1)

Or any pointers are also fine.

Thanks in advance.
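Not an answer for this particular Galois ring, but a useful baseline: generic baby-step/giant-step computes discrete logs in any subgroup of order n in O(√n) group operations, and Pohlig–Hellman extends this to any subgroup of smooth order. A minimal sketch (the example group below is chosen arbitrarily for the demo):

```python
from math import isqrt

def bsgs(g, h, n, p):
    """Solve g^x = h (mod p) for x in [0, n), where n is the order of g."""
    m = isqrt(n) + 1
    baby = {pow(g, j, p): j for j in range(m)}          # baby steps g^j
    giant = pow(g, -m, p)                               # g^(-m) mod p
    y = h
    for i in range(m):
        if y in baby:
            return i * m + baby[y]                      # x = i*m + j
        y = (y * giant) % p
    return None

# example: in Z_101^*, g = 2 has order 100; find x with 2^x = 5 (mod 101)
x = bsgs(2, 5, 100, 101)
assert pow(2, x, 101) == 5
```

So any subgroup of polylog-smooth order is "easy" generically; anything beyond that would have to exploit the specific structure of the ring.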

9 months ago 0 0 0 0
Abstract. We study several problems in the intersection of cryptography and complexity theory based on the following high-level thesis.

1)  Obfuscation can serve as a general-purpose worst-case to average-case reduction, reducing the existence of various forms of cryptography to corresponding worst-case assumptions.
2)  We can therefore hope to overcome barriers in cryptography and average-case complexity by (i) making worst-case hardness assumptions beyond P ≠ NP, and (ii) leveraging worst-case hardness reductions, either proved by traditional complexity-theoretic methods or facilitated further by cryptography.

Concretely, our results include:

-   Optimal hardness. Assuming sub-exponential indistinguishability obfuscation, we give fine-grained worst-case to average-case reductions for circuit-SAT. In particular, if finding an NP-witness requires nearly brute-force time in the worst case, then the same is true for some efficiently sampleable distribution. In fact, we show that under these assumptions, there exist families of one-way functions with optimal time-probability security tradeoffs.

Under an additional, stronger assumption – the optimal non-deterministic hardness of refuting circuit-SAT – we construct additional cryptographic primitives such as PRGs and public-key encryption that have such optimal time-advantage security tradeoffs.

-   Direct Product Hardness. Again assuming i𝒪 and optimal non-deterministic hardness of SAT refutation, we show that the “(search) k-fold SAT problem” – the computational task of finding satisfying assignments to k circuit-SAT instances simultaneously – has (optimal) hardness roughly (T/2^(n))^(k) for time T algorithms. In fact, we build “optimally secure one-way product functions” (Holmgren-Lombardi, FOCS ’18), demonstrating that optimal direct product theorems hold for some choice of one-way function family.

-   Single-Input Correlation Intractability. Assuming either i𝒪 or LWE, we show a worst-case to average-case reduction for strong forms of single-input correlation intractability. That is, powerful forms of correlation-intractable hash functions exist provided that a collection of “correlation-finding” problems are hard.

-   Non-interactive Proof of Quantumness. Assuming sub-exponential i𝒪 and OWFs, we give a non-interactive proof of quantumness based on the worst-case hardness of the white-box Simon problem. In particular, this proof of quantumness result does not explicitly assume quantum advantage for an average-case task.

To help prove our first two results, we show along the way how to improve the Goldwasser-Sipser “set lower bound” protocol to have communication complexity quadratically smaller in the multiplicative approximation error ϵ.

Abstract. We study several problems in the intersection of cryptography and complexity theory based on the following high-level thesis. 1) Obfuscation can serve as a general-purpose worst-case to average-case reduction, reducing the existence of various forms of cryptography to corresponding worst-case assumptions. 2) We can therefore hope to overcome barriers in cryptography and average-case complexity by (i) making worst-case hardness assumptions beyond P ≠ NP, and (ii) leveraging worst-case hardness reductions, either proved by traditional complexity-theoretic methods or facilitated further by cryptography. Concretely, our results include:

- Optimal Hardness. Assuming sub-exponential indistinguishability obfuscation, we give fine-grained worst-case to average-case reductions for circuit-SAT. In particular, if finding an NP-witness requires nearly brute-force time in the worst case, then the same is true for some efficiently sampleable distribution. In fact, we show that under these assumptions, there exist families of one-way functions with optimal time-probability security tradeoffs. Under an additional, stronger assumption – the optimal non-deterministic hardness of refuting circuit-SAT – we construct additional cryptographic primitives such as PRGs and public-key encryption that have such optimal time-advantage security tradeoffs.

- Direct Product Hardness. Again assuming i𝒪 and optimal non-deterministic hardness of SAT refutation, we show that the “(search) k-fold SAT problem” – the computational task of finding satisfying assignments to k circuit-SAT instances simultaneously – has (optimal) hardness roughly (T/2^n)^k for time-T algorithms. In fact, we build “optimally secure one-way product functions” (Holmgren-Lombardi, FOCS ’18), demonstrating that optimal direct product theorems hold for some choice of one-way function family.

- Single-Input Correlation Intractability. Assuming either i𝒪 or LWE, we show a worst-case to average-case reduction for strong forms of single-input correlation intractability. That is, powerful forms of correlation-intractable hash functions exist provided that a collection of “correlation-finding” problems are hard.

- Non-interactive Proof of Quantumness. Assuming sub-exponential i𝒪 and OWFs, we give a non-interactive proof of quantumness based on the worst-case hardness of the white-box Simon problem. In particular, this proof of quantumness result does not explicitly assume quantum advantage for an average-case task.

To help prove our first two results, we show along the way how to improve the Goldwasser-Sipser “set lower bound” protocol to have communication complexity quadratically smaller in the multiplicative approximation error ϵ.
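For a quick numeric feel for the direct-product bound quoted above, here is a toy Python sketch (not from the paper; the function names and the unique-witness assumption are mine) that just evaluates the heuristic (T/2^n)^k target: a time-T brute-force search succeeds on one n-variable instance with probability about T/2^n, and the theorem says roughly that no time-T algorithm does much better than that to the k-th power on k instances at once.

```python
from fractions import Fraction

def single_instance_success(T: int, n: int) -> Fraction:
    """Success probability of a T-query brute-force search over 2^n
    assignments, assuming a unique satisfying assignment per instance."""
    return Fraction(min(T, 2**n), 2**n)

def k_fold_success(T: int, n: int, k: int) -> Fraction:
    """The heuristic (T/2^n)^k target for the (search) k-fold SAT problem:
    solving k independent instances simultaneously."""
    return single_instance_success(T, n) ** k

p1 = k_fold_success(T=2**20, n=40, k=1)   # 2^-20 for one instance
p3 = k_fold_success(T=2**20, n=40, k=3)   # 2^-60 for three at once
assert p3 == p1 ** 3
```

The content of the direct product theorem is that this naive multiplication of success probabilities is essentially the best possible, which is nontrivial because an adversary may correlate its work across the k instances.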


Cryptography meets worst-case complexity: Optimal security and more from iO and worst-case assumptions (Rahul Ilango, Alex Lombardi) ia.cr/2025/1087

10 months ago 1 1 0 0

(As mentioned earlier, it further increases the ciphertext dimension, whereas the decryption circuit is only defined for "linear" ciphertexts.)

10 months ago 0 0 0 0

As per my understanding, the leveled FHE given in the BGV paper only produces bounded-noise ciphertexts, and the decryption circuit falls within this level. If one has to do one more multiplication, i.e., if one multiplies ciphertexts, it increases the noise,

10 months ago 0 0 1 0

3) Clearly f_mult can't be the multiplication of two ciphertexts as the bootstrapping operation is defined only for linear ciphertexts, isn't it?

4) And I would like to know how bootstrapping helps with noise reduction?

10 months ago 0 0 1 0

2) how is f_mult gate constructed using the bootstrapping operation

Is it f_mult = Dec(ct1, s) × Dec(ct2, s)? If so, will this circuit be within the level L for the leveled FHE? (For the lower-level ciphertexts)

10 months ago 0 0 1 0

[[\sum_i (c_1 \cdot 2^i) s_i + c_0]_q]_p is the decryption circuit

1) how is the mod p operation performed using the bootstrapping key? And do we treat the decryption circuit as a function over R_q?

10 months ago 0 0 1 0

I'm trying to understand bootstrapping and its use in evaluating arbitrary circuits in the BGV encryption scheme for RLWE.
Here the bootstrapping key is the encryption of the binary decomposition of the secret key s, where s and s_i \in R_q = Z_q[x]/(x^n+1), and the plaintext space is R_p = Z_p[x]/(x^n + 1)

10 months ago 0 0 1 0
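A toy scalar analogue may help make the "decryption is linear in the secret-key bits" point in the thread above concrete. This is an illustrative sketch over the integers mod q only, not the actual ring R_q = Z_q[x]/(x^n+1), and all names and parameter sizes are made up:

```python
# Toy scalar analogue of BGV decryption (illustrative only; the real scheme
# works over R_q = Z_q[x]/(x^n + 1), not over integers).
q, p = 257, 2          # ciphertext and plaintext moduli (toy sizes)

def centered_mod(x: int, m: int) -> int:
    """Reduce x into the centered interval (-m/2, m/2]."""
    r = x % m
    return r - m if r > m // 2 else r

def decrypt_via_bits(c0: int, c1: int, s_bits: list[int]) -> int:
    """Evaluate [[c0 + sum_i (c1 * 2^i) * s_i]_q]_p.

    The inner sum is *linear* in the secret-key bits s_i, which is why this
    circuit can be evaluated homomorphically on encryptions of those bits
    (the bootstrapping key): only additions and known-constant multiples."""
    acc = c0 + sum((c1 * (1 << i)) * b for i, b in enumerate(s_bits))
    return centered_mod(centered_mod(acc, q), p) % p

# Sanity check: "encrypt" m = 1 under secret s = 11 with small noise e = 2.
s, m, e = 11, 1, 2
s_bits = [(s >> i) & 1 for i in range(4)]   # binary decomposition of s
c1 = 123
c0 = (m + p * e - c1 * s) % q               # so c0 + c1*s = m + p*e (mod q)
assert decrypt_via_bits(c0, c1, s_bits) == m
```

Note that the two reductions mod q and mod p are the non-linear part; realizing them as low-depth arithmetic over the plaintext space is exactly the technical work in BGV-style bootstrapping, and this sketch simply computes them in the clear.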
Abstract. We study the problem of combating viral misinformation campaigns in end-to-end encrypted (E2EE) messaging systems such as WhatsApp. We propose a new notion of Hop Tracking Signatures (HTS) that allows for tracing originators of messages that have been propagated on long forwarding paths (i.e., gone viral), while preserving anonymity of everyone else. We define security for HTS against malicious servers.

We present both negative and positive results for HTS: on the one hand, we show that HTS does not admit succinct constructions if tracing and anonymity thresholds differ by exactly one “hop”. On the other hand, by allowing for a larger gap between tracing and anonymity thresholds, we can build succinct HTS schemes where the signature size does not grow with the forwarding path. Our positive result relies on streaming algorithms and strong cryptographic assumptions.

Prior works on tracing within E2EE messaging systems either do not achieve security against malicious servers or focus only on tracing originators of pre-defined banned content.
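To see why succinctness is the interesting requirement here, consider the naive baseline in which every forwarder appends its own tag, so the "signature" grows by one tag per hop. A minimal sketch of that baseline (illustrative only; HMAC stands in for real signatures, the key setup is hypothetical, and this is not the paper's construction):

```python
import hashlib
import hmac

# Naive baseline (NOT the HTS scheme from the paper): each forwarder appends
# a tag over the previous tag and the message, so the "signature" grows
# linearly with the forwarding path. The HTS goal is a succinct signature
# whose size does not grow with the path.
def forward(msg: bytes, chain: list[bytes], key: bytes) -> list[bytes]:
    prev = chain[-1] if chain else b""
    tag = hmac.new(key, prev + msg, hashlib.sha256).digest()
    return chain + [tag]

msg = b"viral claim"
keys = [bytes([i]) * 32 for i in range(5)]   # one stand-in key per hop
chain: list[bytes] = []
for k in keys:
    chain = forward(msg, chain, k)

assert len(chain) == 5                        # size grows with every hop ...
assert all(len(tag) == 32 for tag in chain)   # ... by one 32-byte tag each
```

The paper's negative result says this linear growth is in some sense unavoidable when the tracing and anonymity thresholds differ by exactly one hop; its positive result removes the growth once a larger gap between the thresholds is allowed.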



How to Trace Viral Content in End-to-End Encrypted Messaging (Pedro Branco, Matthew Green, Aditya Hegde, Abhishek Jain, Gabriel Kaptchuk) ia.cr/2025/1052

10 months ago 3 2 0 0
Abstract. In this work, we show that parallel repetition of public-coin interactive arguments reduces the soundness error at an exponential rate even in the post-quantum setting. Moreover, we generalize this result to hold for threshold verifiers, where the parallel repeated verifier accepts if and only if at least t of the executions are accepted (for some threshold t). Prior to this work, these results were known only when the cheating prover was assumed to be classical.

We also prove a similar result for three-message private-coin arguments. Previously, Bostanci, Qian, Spooner, and Yuen (STOC 2024) proved such a parallel repetition result in the more general setting of quantum protocols, where the verifier and communication may be quantum. We consider only protocols where the verifier is classical, but we obtain a simplified analysis and handle the more general setting of threshold verifiers.
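For intuition on the threshold-verifier target: if the k executions behaved independently, a cheating prover with per-execution soundness error ε would convince the threshold verifier with probability Pr[Bin(k, ε) ≥ t]. Proving that adaptive (and here, quantum) provers cannot do meaningfully better than this independent-play bound is exactly what a parallel repetition theorem establishes. A small sketch of that binomial tail (illustrative; not code from the paper):

```python
from fractions import Fraction
from math import comb

def binom_tail(k: int, t: int, eps: Fraction) -> Fraction:
    """Pr[Binomial(k, eps) >= t]: the soundness error a k-fold threshold
    verifier would have if the executions behaved independently."""
    return sum(Fraction(comb(k, j)) * eps**j * (1 - eps)**(k - j)
               for j in range(t, k + 1))

eps = Fraction(1, 2)
# Plain parallel repetition (t = k) drives the error down to eps^k ...
assert binom_tail(k=20, t=20, eps=eps) == eps**20
# ... and a majority-style threshold still decays exponentially, just slower.
assert binom_tail(k=20, t=15, eps=eps) < Fraction(1, 2**4)
```

The hard part of the theorem is that a cheating prover may correlate its answers across the k parallel copies, so the exponential decay has to be proved against such strategies rather than read off from independence.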


Parallel Repetition for Post-Quantum Arguments (Andrew Huang, Yael Tauman Kalai) ia.cr/2025/1027

10 months ago 1 1 0 0
Abstract. Collaborative zk-SNARKs (Ozdemir and Boneh, USENIX’22) are a multiparty variant of zk-SNARKs where multiple, mutually distrustful provers, each holding a private input, jointly compute a zk-SNARK using their combined inputs. A sequence of works has proposed efficient constructions of collaborative zk-SNARKs using a common template that involves designing secure multiparty computation (MPC) protocols to emulate a zk-SNARK prover without making non-black-box use of cryptography. To achieve security against malicious adversaries, these works adopt compilers from the MPC literature that transform semi-honest MPC into malicious-secure MPC.

In this work, we revisit this design template.

• Pitfalls: We demonstrate two pitfalls in the template, which can lead to a loss of input privacy. We first show that it is possible to compute collaborative proofs on invalid witnesses, which in turn can leak the inputs of honest provers. Next, we show that using state-of-the-art malicious security compilers as-is for proof computation is insecure, in general. Finally, we discuss mitigation strategies.

• Malicious Security Essentially for Free: As our main technical result, we show that in the honest-majority setting, one can forego malicious security checks performed by state-of-the-art malicious security compilers during collaborative proof generation of several widely used zk-SNARKs. In other words, we can avoid the overheads of malicious security compilers, enabling faster proof generation.

To the best of our knowledge, this is the first example of non-trivial computations where semi-honest MPC protocols achieve malicious security. The observations underlying our positive results are general and may have applications beyond collaborative zkSNARKs.



Malicious Security in Collaborative zk-SNARKs: More than Meets the Eye (Sanjam Garg, Aarushi Goel, Abhishek Jain, Bhaskar Roberts, Sruthi Sekar) ia.cr/2025/1026

10 months ago 1 1 0 0
Abstract. Introduced by Canetti in 2001, Universal Composability (UC) is a widely adopted security model that enables the specification and proof of security for a broad range of protocols, offering strong security guarantees. At its core lies the universal composition theorem (UC theorem), which ensures that protocols proven secure within the framework remain secure even when deployed in real-world environments with multiple instances of them.

In this work, we present two key contributions. First, we identify several problems with the UC framework, in particular the UC Theorem. They include counterexamples, limitations that make it unusable for important classes of protocols, and weaknesses in its proof. These problems reveal flaws in nearly all the fundamental concepts of UC.

Secondly, we update the main concepts of UC to address these problems. Although these revisions are nontrivial, our updated definitions are intended to stay as closely aligned with the original model as possible, while providing greater simplicity overall. To ensure the validity of these updates, we present a proof of the updated UC theorem, which is more detailed and modular than the original.



Diving Deep Into UC: Uncovering and Resolving Issues in Universal Composability (Céline Chevalier, Éric Sageloli) ia.cr/2025/934

11 months ago 6 3 1 0
Abstract. We analyze the post-quantum security of succinct interactive arguments constructed from interactive oracle proofs (IOPs) and vector commitment schemes. Specifically, we prove that an interactive variant of the BCS transformation is secure in the standard model against quantum adversaries when the vector commitment scheme is collapse binding.

Prior work established the post-quantum security of Kilian’s succinct interactive argument, a special case of the BCS transformation for one-message IOPs (i.e., PCPs). That analysis is inherently limited to one message because the reduction, like all prior quantum rewinding reductions, aims to extract classical information (a PCP string) from the quantum argument adversary. Our reduction overcomes this limitation by instead extracting a that implements an IOP adversary; representing such an adversary classically may in general require exponential complexity.

Along the way we define , which we propose as the definition of collapse binding for vector commitment schemes, eliminating shortcomings of prior definitions.

As an application of our results, we obtain post-quantum secure succinct arguments, in the standard model (no oracles), with the .



Quantum Rewinding for IOP-Based Succinct Arguments (Alessandro Chiesa, Marcel Dall'Agnol, Zijing Di, Ziyi Guan, Nicholas Spooner) ia.cr/2025/947

10 months ago 3 1 0 0