I've been reading through MIT's lecture notes on learning with errors here, and I'm trying to understand the reduction from Search LWE to Decision LWE, described there in Section 2.7 as "Algorithm 1".
I cannot seem to understand why we need to repeat the sampling part (guessing the $i$th coordinate and feeding the sample to the discriminator) a polynomial number of times ("For $l = 1, \dots, L$"). Why can't we just choose the first value for which the discriminator outputs $1$, since it's more likely to output $1$ when the guess is correct?
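
To make sure I'm reading the algorithm correctly, here is a rough Python sketch of how I understand the guess-and-test loop. The names (`fresh_lwe_sample`, `D`) and the exact sample-transformation step are my own paraphrase of the standard reduction, not necessarily the notation used in the notes:

```python
import random

def recover_coordinate(i, fresh_lwe_sample, D, q, L):
    """My reading of Algorithm 1: recover s_i by exhausting guesses g in Z_q.

    fresh_lwe_sample() -> (a, b) with b = <a, s> + e mod q   (search-LWE samples)
    D(a, b) -> 1 if D believes (a, b) is an LWE sample, else 0   (decision oracle)
    Both are hypothetical stand-ins for the objects in the notes.
    """
    best_guess, best_rate = None, -1.0
    for g in range(q):                      # candidate value for s_i
        accepts = 0
        for _ in range(L):                  # the "For l = 1, ..., L" loop I'm asking about
            a, b = fresh_lwe_sample()
            r = random.randrange(q)         # random shift of the i-th coordinate
            a2 = list(a)
            a2[i] = (a2[i] + r) % q
            b2 = (b + r * g) % q            # still a valid LWE sample iff g == s_i (q prime)
            accepts += D(a2, b2)
        rate = accepts / L                  # empirical acceptance probability of D
        if rate > best_rate:
            best_guess, best_rate = g, rate
    return best_guess                       # guess on which D accepts most often
```

In other words, the inner loop seems to be there only to estimate $D$'s acceptance probability for each guess, and my question is why a single run of $D$ per guess isn't enough.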