
I've been reading through MIT's lecture notes on learning with errors here, and I'm trying to understand the reduction from Search LWE to Decision LWE, as described there in Section 2.7, "Algorithm 1".

I cannot seem to understand why we need to repeat the sampling part (guessing the $i$th coordinate and feeding the sample to the discriminator) a polynomial number of times ("for $l = 1, \dots, L$"). Why can't we just take the first guess for which the discriminator outputs $1$, since it is more likely to output $1$ when the guess is correct?

Anon

1 Answer


The assumptions on the decider are weak: it has advantage at least $\epsilon$, which you can imagine to be some small but non-trivial quantity, say $\epsilon = 0.01$.

This is enough to break decision LWE. But the decider used in the reduction needs a stronger property: it must be simultaneously correct for all $n$ coordinates of the secret with good probability.

Concretely, you can think of the reduction as flipping a coin that comes up heads with probability $1/2 + \epsilon$, once for each coordinate. The search reduction needs all of these coins to be heads with good probability, but that only happens with probability $(1/2 + \epsilon)^n$, which is vanishingly small for small $\epsilon$ and large $n$.
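To see just how small, here is a quick back-of-the-envelope computation; the parameters $n = 256$, $\epsilon = 0.01$, and $\delta = 10^{-4}$ are purely illustrative, not taken from the notes:

```python
# Why per-coordinate advantage eps is not enough on its own.
# Illustrative parameters (not from the notes): n = 256, eps = 0.01.
n, eps = 256, 0.01

p_weak = (0.5 + eps) ** n
print(f"Pr[all {n} weak guesses correct] ~ {p_weak:.3e}")  # astronomically small

# After boosting each per-coordinate test to success probability 1 - delta:
delta = 1e-4
p_boosted = (1 - delta) ** n
print(f"Pr[all {n} boosted guesses correct] ~ {p_boosted:.4f}")  # close to 1
```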

In light of this, the reduction first "boosts" the advantage of the decider, i.e., turns a decider with small advantage $\epsilon$ into one with advantage close to $1$. This is precisely the purpose of the loop you are asking about: running the decider on $L$ fresh, independently re-randomized samples and aggregating its answers drives the failure probability down exponentially in $L\epsilon^2$ (by a Chernoff/Hoeffding bound), so polynomially many repetitions suffice to union-bound the error over every coordinate and every guess.
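Here is a minimal sketch of that amplification step. It models the decider as a biased coin (outputting $1$ with probability $1/2 + \epsilon$ when the guess is correct and $1/2$ otherwise), standing in for runs of the real decider on fresh re-randomized LWE samples; `boosted_decider` and all parameters are illustrative, not from the notes:

```python
import math
import random

def boosted_decider(decider, eps, delta):
    """Amplify a decider whose acceptance probability differs by eps
    between the two cases. Returns 1 iff the empirical acceptance
    frequency over L trials exceeds the midpoint threshold 1/2 + eps/2.
    """
    # Hoeffding: Pr[empirical frequency deviates from its mean by eps/2]
    #   <= 2 * exp(-L * eps^2 / 2),
    # so L = ceil(2 * ln(2/delta) / eps^2) trials push the error below delta.
    L = math.ceil(2 * math.log(2 / delta) / eps ** 2)
    freq = sum(decider() for _ in range(L)) / L
    return 1 if freq > 0.5 + eps / 2 else 0

eps = 0.01

# Toy stand-ins for the real decider: on a correct guess it outputs 1
# with probability 1/2 + eps; on a wrong guess, with probability 1/2.
correct_case = lambda: random.random() < 0.5 + eps
wrong_case = lambda: random.random() < 0.5

print(boosted_decider(correct_case, eps, delta=1e-6))  # 1 with prob >= 1 - 1e-6
print(boosted_decider(wrong_case, eps, delta=1e-6))    # 0 with prob >= 1 - 1e-6
```

With $\epsilon = 0.01$ and per-test failure probability $10^{-6}$, this already requires $L \approx 2.9 \times 10^5$ trials, which is why the loop runs a polynomial number of times rather than stopping at the first $1$: any single output of the weak decider is barely better than a coin flip.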

Mark Schultz-Wu