
Short version: how is it possible to round a continuous Gaussian into a true discrete Gaussian (usually denoted $\mathcal{D}_{\mathbb{Z},\alpha q}$)? The goal is to obtain a reduction from continuous LWE to a true-discrete LWE and combine it with the reduction from $\textsf{GapSVP}_\gamma$ to continuous LWE.

Longer version: in [Reg05], the discrete Gaussian used for the noise (denoted $\bar{\Psi}_\alpha$, or sometimes $\lfloor\mathcal{D}_{\alpha q}\rceil$) is a "strange Gaussian": it is obtained by sampling a continuous Gaussian of parameter $\alpha q$ and rounding it to the nearest integer modulo $q$. They also prove that if you can solve continuous LWE, you can solve $\textsf{GapSVP}_\gamma$. In order to also prove the security of LWE with this strange discrete Gaussian, they give a trivial reduction from continuous LWE to this "strange discrete" LWE: if you can solve the "strange discrete" LWE, you can solve continuous LWE by simply rounding your samples to the nearest integer and calling the discrete oracle on the rounded samples. So the hardness goes like this:

$$\text{strange discrete LWE} \leftarrow \text{continuous LWE} \leftarrow \textsf{GapSVP}_\gamma$$
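To make the rounding step concrete, here is a toy sketch of the first arrow (the representation of samples with $b \in [0,1)$ and the oracle name are my own illustrative assumptions, not notation from [Reg05]):

```python
import math
import random

q = 3329       # example modulus (arbitrary choice for illustration)
alpha = 0.005  # example noise rate

def continuous_lwe_sample(s):
    """Toy continuous LWE sample in the style of [Reg05]: b = <a, s>/q + e mod 1,
    where e is a continuous Gaussian of parameter alpha (std alpha / sqrt(2*pi))."""
    a = [random.randrange(q) for _ in range(len(s))]
    e = random.gauss(0.0, alpha / math.sqrt(2 * math.pi))
    b = (sum(ai * si for ai, si in zip(a, s)) / q + e) % 1.0
    return a, b

def round_sample(a, b):
    """Turn a continuous sample (b in [0,1)) into a "strange discrete" one:
    scale by q and round to the nearest integer mod q, so that the noise
    becomes the rounded Gaussian bar{Psi}_alpha."""
    return a, round(b * q) % q

def solve_continuous_lwe(continuous_samples, strange_discrete_lwe_oracle):
    """The trivial reduction: round every sample and forward it to a
    (hypothetical) oracle for the "strange discrete" LWE problem."""
    rounded = [round_sample(a, b) for (a, b) in continuous_samples]
    return strange_discrete_lwe_oracle(rounded)
```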

However, if I understand correctly, this Gaussian is not a "real" discrete Gaussian, and people prefer to use instead a "true discrete Gaussian", as in [MP12] (I guess it has better mathematical properties when you need more involved statements, like bounds on singular values). But then it is not possible to use the [Reg05] result in the same way to prove the hardness of "true discrete" LWE, since we can no longer turn a continuous distribution into a true discrete one by simple rounding.

So what is the usual way to do this rounding in order to obtain the following reduction? $$\text{true discrete LWE} \leftarrow \text{continuous LWE}$$ The paper [GMPW20] suggests that [Pei10] solves this problem... but I can't find where.

Also, is there a reduction that directly does:

$$\text{true discrete LWE} \leftarrow \textsf{GapSVP}_\gamma$$

without going through the continuous case?

References

[Reg05] On Lattices, Learning with Errors, Random Linear Codes, and Cryptography, Regev.

[MP12] Trapdoors for Lattices: Simpler, Tighter, Faster, Smaller, Micciancio, Peikert.

[GPV08] How to Use a Short Basis: Trapdoors for Hard Lattices and New Cryptographic Constructions, Gentry, Peikert, Vaikuntanathan.

[Pei10] An Efficient and Parallel Gaussian Sampler for Lattices, Peikert.

[MW17] Gaussian Sampling over the Integers: Efficient, Generic, Constant-Time, Micciancio, Walter.

[HSL17] Rounded Gaussians, Hülsing, Lange, Smeets.

[GMPW20] Improved Discrete Gaussian and Subgaussian Analysis for Lattice Cryptography, Genise, Micciancio, Peikert, Walter.

Bonus (unrelated to the above): out of curiosity, what is the current state of the art for sampling from $\mathcal{D}_{\mathbb{Z},\alpha q}$? Is there any exact sampling method? And what is the recommended way to implement this kind of sampling that is both simple to program and not too inefficient in practice? So far I have seen that Section 4.1 of [GPV08] gives a simple rejection sampling method that approximates a true discrete Gaussian. It was later improved in [Pei10], which is a bit more complex. I just came across [MW17]; I still need to check what it actually does.
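For concreteness, here is roughly what the rejection sampler of [GPV08, Section 4.1] looks like over $\mathbb{Z}$; the tail-cut factor $t$ and the parameter names are my own choices, and this sketch is of course neither constant-time nor side-channel safe:

```python
import math
import random

def sample_z(s, c=0.0, t=12):
    """Rejection-sample the discrete Gaussian D_{Z, s, c} with weight
    proportional to rho_{s,c}(x) = exp(-pi * (x - c)^2 / s^2), roughly
    following the SampleZ procedure sketched in [GPV08, Sec. 4.1]:
      1. pick x uniformly in the tail-cut interval [c - t*s, c + t*s],
      2. accept x with probability rho_{s,c}(x), otherwise retry."""
    lo = int(math.floor(c - t * s))
    hi = int(math.ceil(c + t * s))
    while True:
        x = random.randint(lo, hi)
        p = math.exp(-math.pi * (x - c) ** 2 / s ** 2)
        if random.random() < p:
            return x

# Example: a few samples with an illustrative parameter s (think s = alpha*q).
samples = [sample_z(s=8.0) for _ in range(5)]
```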

-- EDIT --

I'm also quite confused about why people like "true discrete Gaussians" so much. People say "the sum of two discrete Gaussians is a discrete Gaussian". Fair enough. But for "strange Gaussians" one could always argue that rounding a continuous Gaussian makes the problem at least as hard as the initial continuous one: in a proof, you could say "let's replace the strange Gaussian with a continuous Gaussian" and analyse the attacks on this modified protocol, and there you do have the nice property that the sum of two continuous Gaussians is a continuous Gaussian. Moreover, strange discrete Gaussians also seem to be more efficient to sample than true discrete Gaussians [HSL17], so what's the benefit? Would you have an example of an application in which a true discrete Gaussian is really required?

For instance, [MP12] uses these true discrete Gaussians, but the search-to-decision reduction from [MP12] is formulated for continuous Gaussians. The only theorem I can see (I did not check the last section, "Applications") that could actually require a true discrete Gaussian is Lemma 2.9, which bounds the singular values of $\mathbf{R}$ (required for correctness). However, that theorem holds for any $\delta$-subgaussian distribution, so I would expect the strange Gaussian to also be $\delta$-subgaussian for some reasonable $\delta$; and since for the true discrete Gaussian they only obtain the value of $C$ empirically, I guess there is a chance that this can also be done for strange Gaussians.

Léo Colisson

2 Answers


Out of curiosity, what is the current state of the art on the sampling over $\mathcal{D}_{\mathbb{Z},\alpha q}$?

This is a fairly involved question to answer. There are a number of competing ways to sample it, which you can roughly divide into:

  1. Techniques that work for any probability distribution (e.g. inversion from a precomputed cumulative table; see the sketch after this list)
  2. Techniques that are specific to the discrete Gaussian
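As an illustration of the first category, here is a minimal sketch of inversion sampling from a precomputed cumulative table, applied to a tail-cut discrete Gaussian (the tail bound and parameter names are my own choices, and this is not constant-time):

```python
import bisect
import math
import random

def build_cdt(s, tail=12):
    """Precompute a cumulative table for a tail-cut discrete Gaussian D_{Z,s}
    (weights proportional to exp(-pi * x^2 / s^2) on [-tail*s, tail*s]).
    The same table-based approach works for any finite discrete distribution."""
    bound = int(math.ceil(tail * s))
    support = list(range(-bound, bound + 1))
    weights = [math.exp(-math.pi * x * x / s ** 2) for x in support]
    total = sum(weights)
    cdf, acc = [], 0.0
    for w in weights:
        acc += w / total
        cdf.append(acc)
    cdf[-1] = 1.0  # guard against floating-point drift
    return support, cdf

def sample_cdt(support, cdf):
    """Inversion sampling: draw u uniform in [0,1) and binary-search the table."""
    u = random.random()
    return support[bisect.bisect_right(cdf, u)]

support, cdf = build_cdt(s=8.0)
samples = [sample_cdt(support, cdf) for _ in range(5)]
```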

Table 1 of [MW17] discusses some sampling methods. Michael Walter's paper *Sampling the Integers with Low Relative Error* also surveys a few of the methods, so it may be a good resource. There are more though. In particular, I also know of:

  1. The polar sampler

  2. The Conditional Density Sampler

There are additionally many tricks one can use to optimize things like rejection sampling. I remember this paper about the NIST PQC candidate Falcon, but I believe there have been more recent attempts to make rejection sampling "constant time" [1] that I do not recall offhand.

There are also quite simple things one can do depending on the particular application. I have seen people mention that, for encryption, $\mathcal{D}_{\mathbb{Z},\sigma}$ is the wrong distribution to look at, and one can instead sample from distributions such as $\mathsf{Binom}(n, p)$ for suitable $p$. This is vaguely like a discrete Gaussian (light-tailed, centered, etc.), but much easier to sample from. This optimization only works for encryption and not signatures though [2].
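For instance, here is a minimal sketch of one such light-tailed substitute, a centered binomial sampled as a difference of Hamming weights (the parameter $\eta$ is my own illustrative choice, not tied to any particular scheme):

```python
import random

def centered_binomial(eta):
    """Sample a small, zero-centered, 'vaguely Gaussian' error term as the
    difference of the Hamming weights of two strings of eta random bits each
    (one simple way to instantiate the binomial idea mentioned above)."""
    a = sum(random.getrandbits(1) for _ in range(eta))
    b = sum(random.getrandbits(1) for _ in range(eta))
    return a - b  # values in [-eta, eta], variance eta/2

errors = [centered_binomial(eta=3) for _ in range(10)]
```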

Is there any exact sampling method?

There are many. Karney's method is probably the closest to what one means by "an exact sampling method", although things like Knuth-Yao sampling also fit that description. One cannot exactly sample from a distribution with infinite support in worst-case constant time, so exact sampling methods are not particularly useful in practice. The particulars of Karney's method mean that, in spite of this, it is "almost" an exact sampling method even in that model. Karney's paper explains this fairly well; for example, here is the abstract:

An algorithm for sampling exactly from the normal distribution is given. The algorithm reads some number of uniformly distributed random digits in a given base and generates an initial portion of the representation of a normal deviate in the same base. Thereafter, uniform random digits are copied directly into the representation of the normal deviate. Thus, in contrast to existing methods, it is possible to generate normal deviates exactly rounded to any precision with a mean cost that scales linearly in the precision. The method performs no extended precision arithmetic, calls no transcendental functions, and, indeed, uses no floating point arithmetic whatsoever; it uses only simple integer operations. It can easily be adapted to sample exactly from the discrete normal distribution whose parameters are rational numbers.

I think the cost of the initial portion is not constant (so the algorithm is not "a constant-time algorithm + pasting uniformly random bits at the end of the output"), but this is still sufficiently different from other "exact" samplers that it is worth pointing out.

[1] "Constant-time" is a misnomer; what one really wants is that the timing distribution is independent of any secrets. I believe the linked Falcon paper (or a different Falcon paper) attempts to formalize this through a notion called "isochronous algorithms" or something along those lines, but this (formal) notion does not yet appear to be widespread.

[2] See this for more details.

Mark Schultz-Wu

To answer your first short question: this is a (very) special case of Theorem 3.1 of Peikert’10. Specifically, use the “$x_2$ is chosen from a continuous Gaussian” variant, let $\Lambda_1+c_1$ be the integer lattice $\mathbb{Z}$, and let $\Sigma, \Sigma_1, \Sigma_2$ be suitable positive reals.
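For intuition, here is a toy sketch of that special case, i.e. randomized rounding of a continuous Gaussian to $\mathbb{Z}$; the concrete parameters and the rejection-based coset sampler are my own illustrative choices, and the output is only statistically close to the target discrete Gaussian when $s_1$ is above the smoothing parameter of $\mathbb{Z}$:

```python
import math
import random

def sample_coset_gaussian(c, s1, t=12):
    """Rejection-sample an integer x with probability proportional to
    exp(-pi * (x - c)^2 / s1^2), i.e. a discrete Gaussian over Z centered at c
    (equivalently, c plus a discrete Gaussian over the coset Z - c)."""
    lo, hi = int(math.floor(c - t * s1)), int(math.ceil(c + t * s1))
    while True:
        x = random.randint(lo, hi)
        if random.random() < math.exp(-math.pi * (x - c) ** 2 / s1 ** 2):
            return x

def randomized_round(s1, s2):
    """Sketch of the special case of [Pei10, Thm 3.1]: add a continuous Gaussian
    x2 of parameter s2 and a discrete Gaussian over the coset Z - x2 of
    parameter s1.  The integer output is statistically close to
    D_{Z, sqrt(s1^2 + s2^2)} when s1 exceeds the smoothing parameter of Z."""
    x2 = random.gauss(0.0, s2 / math.sqrt(2 * math.pi))  # continuous, parameter s2
    return sample_coset_gaussian(c=x2, s1=s1)            # = x2 + D_{Z - x2, s1}

samples = [randomized_round(s1=4.0, s2=8.0) for _ in range(5)]
```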

Regarding why true discrete Gaussians are useful: it is often because they allow us to prove what we want, either for functionality (e.g., nice singular values) or security. For the latter it is not enough to just say that known attacks (seem to) become “more complicated” with rounded Gaussians; we have to prove that the rounding can’t be exploited in any way. For example, many proofs need to properly simulate a specific distribution that is used in the real system. Using true discrete Gaussians often makes this possible, whereas it’s not as clear how to do it with rounded ones.

Chris Peikert