
The Learning with Errors (LWE) problem seems like a generalization of the Learning Parity with Noise (LPN) problem, where the latter works over bits. But this also makes LPN seem very closely related to the problem of decoding a random linear code. Is LPN equivalent to the problem of decoding a random linear code? And are there any positive or negative results about the equivalence of LPN and LWE?

kelalaka

1 Answer


Yes, LPN is (essentially by definition) equivalent to the hardness of decoding a random linear code over $\mathbb{F}_2$. No, there are no known reductions between LPN and LWE. LPN is usually believed to be (in some sense) "harder to break" than LWE, simply because far fewer attacks on LPN are known: it seems to have less structure that could be exploited in advanced attacks, but, for the very same reason, it also has far fewer known applications. Still, no formal reduction relating the two is known. It should also be mentioned (thanks to TMM for pointing that out) that the best known attack on LPN runs in time $2^{O(n/\log n)}$ (where $n$ is the dimension), while the best known attack on LWE runs in time $2^{O(n)}$ - hence LWE appears stronger than LPN when considering solely the running time of the best known algorithms.
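To make the first point concrete, here is a minimal sketch (my own illustration, with purely illustrative parameters $n$, $m$, $\tau$) of why an LPN instance is literally a noisy codeword of a random linear code over $\mathbb{F}_2$: the matrix $A$ generates the code, and recovering $s$ from $b = As + e$ is decoding it.

```python
import numpy as np

rng = np.random.default_rng(0)

def lpn_samples(n=128, m=512, tau=0.125):
    """Return (A, b, s, e) with b = A @ s + e over F_2 and e ~ Bernoulli(tau)^m."""
    s = rng.integers(0, 2, size=n)                # secret vector in F_2^n
    A = rng.integers(0, 2, size=(m, n))           # m uniformly random linear equations
    e = (rng.random(m) < tau).astype(np.int64)    # Bernoulli(tau) noise
    b = (A @ s + e) % 2                           # noisy "codeword" of the code {A x : x in F_2^n}
    return A, b, s, e

A, b, s, e = lpn_samples()
# Decoding the random linear code generated by A means finding the codeword
# A @ s mod 2 closest to b in Hamming distance, i.e. recovering the secret s.
print("noise weight:", int(e.sum()), "out of", len(e))
```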

Note that while they clearly seem related, there is a fundamental difference: LWE uses Gaussian noise, while LPN uses Bernoulli noise. The former crucially relies on some "non-black-box" considerations about the field it is instantiated over (since we need to be able to talk about "small" and "big" field elements), while the latter is completely oblivious to the field (i.e., you can define LPN over a field in a way that makes only black-box use of the field).
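For contrast with the LPN samples above, here is a hedged sketch of the LWE side (the parameters $q$ and $\sigma$ are illustrative, not taken from the answer): the error is drawn from a narrow, rounded-Gaussian distribution modulo $q$, so the assumption only makes sense once we can call field elements "small" or "big" - the non-black-box use of the field mentioned above.

```python
import numpy as np

rng = np.random.default_rng(1)
q, n, m, sigma = 3329, 64, 256, 3.2              # illustrative parameters only

s = rng.integers(0, q, size=n)
A = rng.integers(0, q, size=(m, n))
e = np.rint(rng.normal(0.0, sigma, size=m)).astype(np.int64)  # small Gaussian-like errors
b = (A @ s + e) % q                              # LWE-style samples mod q

# "Small" only has a meaning once we pick signed representatives in (-q/2, q/2]:
centered = ((e % q) + q // 2) % q - q // 2
print("max |error|:", int(np.abs(centered).max()), "vs modulus", q)
```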

To elaborate on your sentence "the Learning with Errors (LWE) problem seems like a generalization of the Learning Parity with Noise (LPN) problem, where the latter works over bits": this is not really the case, actually. The natural generalization of LPN to a larger field $\mathbb{F}$ would be obtained by adding to each linear equation a noise term which is a uniformly random field element with some probability $p$, and $0$ with probability $1-p$ (this is the natural generalization of Bernoulli noise over $\mathbb{F}$). LWE, instead, uses a small Gaussian noise everywhere, which is quite different.
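The following small sketch (again my own illustration, with hypothetical parameters $q$ and $p$) shows this alternative "Bernoulli noise over a larger field" model: each equation modulo $q$ receives a uniformly random error with probability $p$ and no error otherwise, a definition that uses the field only in a black-box way.

```python
import numpy as np

rng = np.random.default_rng(2)
q, n, m, p = 3329, 64, 256, 0.125                # hypothetical parameters

s = rng.integers(0, q, size=n)
A = rng.integers(0, q, size=(m, n))

noisy = rng.random(m) < p                        # each equation is noisy with probability p
e = np.where(noisy, rng.integers(0, q, size=m), 0)  # uniform field element if noisy, else 0
b = (A @ s + e) % q

# Unlike the Gaussian case, nothing here refers to elements being "small" or "big":
# the construction only uses field addition and multiplication (black-box use of F_q).
print("fraction of noisy equations:", float(noisy.mean()))
```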

EDIT: to clarify, the last paragraph above is my point of view, not a formal "demonstration" that LWE is not a generalization of LPN. As pointed out by TMM and Chris Peikert in the comments, the Bernoulli distribution over $\mathbb{F}_2$ can technically be seen as a Gaussian distribution - hence LPN can be seen as LWE over $\mathbb{F}_2$ (or conversely, LWE can be seen as the generalization of LPN to larger fields). My point in the last paragraph is that this is not the only possible generalization, and the alternative one I mention seems to me much closer in spirit to the original LPN assumption: under "LPN over large fields with Bernoulli noise", one gets essentially the same implications as with standard LPN (e.g., public-key encryption, but not collision resistance), and the same structural properties (i.e., the best known attacks on this generalization are the natural generalizations of the best known attacks on LPN over $\mathbb{F}_2$). LWE is technically a generalization of LPN to larger fields, but one with important differences compared to the original LPN assumption (different attacks, different (and much more "advanced") implications, non-black-box use of the field, etc.).

Geoffroy Couteau