In "An Efficient Quantum Algorithm for Lattice Problems Achieving Subexponential Approximation Factor," the authors claim to give a polynomial-time quantum algorithm for solving the Bounded Distance Decoding (BDD) problem with a subexponential approximation factor on a class of integer lattices. What does this result mean? Does it imply that lattice cryptography is insecure? Is it as important as Peter Shor's quantum algorithm for factoring?
3 Answers
There is no public paper available yet, so this answer is preliminary and based on what was presented in the talk and the follow-up discussion. A full understanding cannot be reached until there is a paper to verify, evaluate, and compare to prior work and known results. However, a good understanding of the situation already seems to be emerging.
The tl;dr is: the special problem the authors address is classically easy to solve using standard lattice algorithms (no quantum needed), as shown in this note. Moreover, the core new quantum step can instead be implemented classically (and much more simply and effectively) as well. So, the work doesn’t show any quantum advantage versus what we already knew how to do classically, nor anything new about what we can do classically. Details follow.
The clause “on a class of integer lattices” is a very important qualifier. The BDD problem the authors address is one where the lattice is “$q$-ary” and generated by a single $n$-dimensional mod-$q$ vector (or a small number of them), the modulus $q \gg 2^{\sqrt{n}}$, and the target vector is within a $\ll 2^{-\sqrt{n}}$ factor of the minimum distance of the lattice. This setting is far from anything that has ever been used in lattice cryptography (to my knowledge), so the result would not have any direct effect on proposed lattice systems. Of course the broader question is whether the techniques could lead to stronger results that do affect lattice crypto.
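Concretely, in my own notation (which may differ from the talk's), for a single generating vector $\mathbf{a} \in \mathbb{Z}_q^n$ the lattice in question is
$$\Lambda_q(\mathbf{a}) = \{\mathbf{x} \in \mathbb{Z}^n : \mathbf{x} \equiv c\,\mathbf{a} \pmod{q} \text{ for some } c \in \mathbb{Z}\},$$
and the BDD target has the form $\mathbf{t} = \mathbf{v} + \mathbf{e}$ for some $\mathbf{v} \in \Lambda_q(\mathbf{a})$ with $\lVert\mathbf{e}\rVert \ll 2^{-\sqrt{n}} \cdot \lambda_1(\Lambda_q(\mathbf{a}))$, where $\lambda_1$ denotes the minimum distance.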
Based on the description given in the talk, several expert attendees believe it’s very likely that the special lattice problem the authors address is already easily solvable using known classical techniques (no quantum needed). UPDATE: this has turned out to be the case, and is substantiated in this note. In other words, the particular form of the BDD problem makes it easy to solve in known and unsurprising ways. The algorithm is merely the standard sequence of LLL basis reduction followed by Babai nearest-plane decoding, but showing that this actually works relies on some deeper (but previously known) properties of LLL than the ones that are usually invoked.
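To make that classical route concrete, here is a toy sketch of my own. It uses tiny parameters, nowhere near the subexponential regime above, and it does not reproduce the deeper LLL analysis the note relies on; it only illustrates the pipeline of building the $q$-ary lattice from one vector, planting a BDD target, LLL-reducing (via the fpylll Python bindings), and decoding with Babai's nearest-plane algorithm.

```python
# Toy illustration only (NOT the regime from the talk: there n is large,
# q >> 2^sqrt(n), and the error is a << 2^-sqrt(n) fraction of the minimum
# distance). Builds the q-ary lattice generated by one vector a mod q, plants
# a BDD target, and recovers the planted error via LLL reduction (fpylll)
# followed by a hand-rolled Babai nearest-plane step.
import numpy as np
from fpylll import IntegerMatrix, LLL

rng = np.random.default_rng(0)
n, q = 8, 2**20
a = rng.integers(0, q, size=n)
a[0] = 1  # normalize the first coordinate to 1 (WLOG when a_1 is invertible mod q)

# Basis of Lambda_q(a) = {x in Z^n : x = c*a (mod q) for some integer c}:
# first row is a itself, the remaining rows are q*e_i for i >= 2.
B = np.diag([1] + [q] * (n - 1)).astype(np.int64)
B[0, :] = a

# Plant a BDD target: a lattice point plus a small error.
c = int(rng.integers(0, q))
v = (c * a) % q                    # a lattice point (congruent to c*a mod q)
e = rng.integers(-3, 4, size=n)    # small planted error
t = v + e

# Step 1: LLL-reduce the basis.
M = IntegerMatrix.from_matrix(B.tolist())
LLL.reduction(M)
R = np.array([[M[i, j] for j in range(n)] for i in range(n)], dtype=float)

# Step 2: Babai nearest-plane decoding against the reduced basis.
def gram_schmidt(rows):
    G = rows.copy()
    for i in range(len(G)):
        for j in range(i):
            G[i] -= (rows[i] @ G[j]) / (G[j] @ G[j]) * G[j]
    return G

def babai_nearest_plane(rows, target):
    G = gram_schmidt(rows)
    w = target.astype(float)
    for i in range(len(rows) - 1, -1, -1):
        w -= np.rint((w @ G[i]) / (G[i] @ G[i])) * rows[i]
    return target - w              # the decoded lattice vector

decoded = babai_nearest_plane(R, t)
print("planted error:  ", e)
print("recovered error:", np.rint(t - decoded).astype(int))  # equal when decoding succeeds
```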
What about the broader question: could the main techniques lead to stronger results that we can't currently obtain classically? It turns out that what the core quantum step accomplishes, the "worst-case to average-case" transformation, can be done classically (and more simply and efficiently) using a well-known randomization technique, often called the "LWE self-reduction" or the "($q$-ary) BDD to LWE reduction." See Section 5 and Theorem 5.3 of this paper and the earlier works cited therein for details.
More precisely, $n$-dimensional $q$-ary BDD for relative distance $\alpha$ (the problem considered by the authors) classically reduces to LWE with error rate $\alpha \cdot O(\sqrt{n})$. While this reduction is not needed to solve the original BDD problem in question (which, as noted above, is already classically easy), it shows that the core new quantum step can be replaced by a classical one that performs at least as well, and likely even better in terms of parameters. This indicates that the main quantum technique probably does not hold any novel or surprising power in this context.
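To illustrate the flavor of that randomization in the single-vector setting from above, here is a rough sketch of my own. It is not the reduction from the cited theorem (which uses discrete Gaussian coefficients and re-randomizes the error with fresh noise), and it only shows the basic shape: from one instance $(\mathbf{a}, \mathbf{t})$ with $\mathbf{t} = c\,\mathbf{a} + \mathbf{e} \bmod q$, it produces fresh LWE-style samples with one-dimensional secret $c$ by taking small random integer combinations of the coordinates.

```python
# Rough illustration of the randomization idea, with toy parameters.
# Each fresh sample (a', b') satisfies b' = c*a' + <z, e> (mod q), i.e. an
# LWE-style sample whose error grew only by roughly the norm of z.
import numpy as np

rng = np.random.default_rng(1)
n, q = 64, 2**20
a = rng.integers(0, q, size=n)            # public vector defining the q-ary lattice
c = int(rng.integers(0, q))               # the unknown coefficient ("secret")
e = rng.integers(-5, 6, size=n)           # small BDD error
t = (c * a + e) % q                       # BDD target, coordinate-wise mod q

def fresh_sample():
    """Produce one LWE-style sample (a', b') with 1-dimensional secret c."""
    z = rng.integers(-2, 3, size=n)       # small combination; stand-in for Gaussian coefficients
    a_new = int(z @ a % q)
    b_new = int(z @ t % q)                # = c*a_new + <z, e>  (mod q)
    return a_new, b_new

a1, b1 = fresh_sample()
err = (b1 - c * a1) % q
if err > q // 2:                          # centered representative in (-q/2, q/2]
    err -= q
print("error of the fresh sample:", err)  # small: bounded by ||z|| * ||e||
```

The error of each fresh sample is an inner product $\langle \mathbf{z}, \mathbf{e}\rangle$, so it grows only by roughly the norm of the combination vector (about $\sqrt{n}$ for Gaussian coefficients), which is, heuristically, where the extra $O(\sqrt{n})$ factor in the error rate above comes from.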
I created a website to crowdsource what is known about algorithms for lattice problems in NP $\cap$ coNP:
Our paper is up:
https://arxiv.org/abs/2201.13450
For the record, here is the timeline we followed:
Until 8/17/21: We went through the classical literature pretty thoroughly. A classical algorithm would also have been fine, since we could feed it back into quantum algorithms. But since the base case is a worst-case version of the well-studied hidden number problem, it seemed reasonable that nothing was known.
8/17/21-9/21/21: We asked a sequence of 5 experts what is known about the problem. In one case we indicated the best classical approach we could find. We received no responses with new information.
9/21/21: The decision was made to go with the special base case with one vector in the talk because (1) it would aid people in solving it, and (2) it's a colloquium talk in a quantum seminar and therefore needs to be accessible to a broad audience. A talk with only lattices or only quantum is already a challenge, and to combine both, well, try it!
9/21/21: Lively discussion about possibilities during the panel, but no algorithms.
9/22/21: We were contacted with a new classical algorithm for the special case and, after we made it clear what our algorithms can do, with a revision outlining how to handle a more general case by analyzing LLL.
1/31/22: No classical answer for our Schnorr bound has been received yet.
Sean
One could give a much longer answer to this question (and I would be quite interested in seeing someone like Chris's perspective), but the following two points probably suffice for a non-specialist.
Approximation Factors: The main way this attack should be seen as (potentially) threatening to lattice-based cryptography is via the possibility of future improvements. The approximation factor it targets (which I believe is $2^{n^{1/2}}$) is large enough that, even if LWE were classically weak in this range, one could still construct things like:
- PKE
- Digital Signatures
- FHE
I.e., most of what you might care about would still exist. So the current attack itself does not really impact the main body of "practical" lattice-based cryptography, although future improvements to the attack are of course possible.
The Possibility of Dequantization: The attack has two parts:
- A (quantum) dimensionality-reduction step (from dimension $n$ to $\sqrt{n}$), and
- a classical step that solves the $\sqrt{n}$-dimensional instance using standard techniques.
A number of people suggested the possibility of dequantizing the first step, via things like taking random linear combinations (say with Gaussian coefficients) of the basis vectors. If this dimension reduction works, one gets BDD with approximation factor $2^{n^{1/2}}$ in polynomial time (I believe).
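Just to make the suggested operation concrete, here is a minimal sketch of my own (rounded Gaussian coefficients standing in for whatever distribution one would actually want); whether a BDD target stays close enough to the resulting sublattice for the classical step to succeed is exactly the question being raised here.

```python
# Minimal sketch of the proposed (hypothetical) dequantization step: replace an
# n-dimensional basis by ~sqrt(n) random small-integer combinations of its rows,
# yielding a lower-rank sublattice to decode in.
import numpy as np

def reduce_dimension(B, rng, sigma=2.0):
    """B: n x m integer basis (rows). Return ~sqrt(n) random combinations of the rows."""
    n = B.shape[0]
    k = int(np.ceil(np.sqrt(n)))
    Z = np.rint(rng.normal(0.0, sigma, size=(k, n))).astype(int)  # rounded Gaussian coefficients
    return Z @ B                      # rows generate a rank-<=k sublattice of L(B)

rng = np.random.default_rng(2)
B = np.diag(np.arange(1, 17))         # toy 16-dimensional lattice basis
S = reduce_dimension(B, rng)
print(S.shape)                        # (4, 16): sqrt(16) = 4 generators in the ambient space
```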
While such a dequantization would make the algorithm itself stronger, it would also make it less concerning in a certain sense. This is because we (classically) have a fairly decent idea of how hard lattice problems are. By this, I am thinking in particular of things like:
- The various "fine-grained" hardness results that exist (say under the Exponential Time Hypothesis, or variants of it),
- the recent lattice sieving lower bounds, and
- hardness results in the presence of pre-processing (for example CVPP hardness results).
Of course these do not rule out all possible attacks, but they should be mentioned as a growing body of formal evidence for the classical hardness of lattice problems. This is to say that the main concerning thing about the linked talk is the existence of a non-trivial quantum speedup; if this is dequantized, we are back in the setting of classical computing, where our understanding of the hardness of lattice problems is better.