
The NTRU public-key cryptosystem has a number of interesting properties (it is believed to resist attacks by quantum computers, and it has been standardized by several important bodies), but it also has one rather unusual property:

The decryption algorithm does not always work. Sometimes it just gives wrong answers.

Has this been fixed? Is it really a cryptosystem if having the private key is insufficient to decrypt the encrypted messages?

For instance, from Howgrave-Graham et al. (2003) one reads,

“First, we notice that decryption failures cannot be ignored, as they happen much more frequently than one would have expected. If one strictly follows the recommendations of the EESS standard [3], decryption failures happen as often as every $2^{12}$ messages with $N = 139$, and every $2^{25}$ messages with $N = 251$. It turns out that the probability is somewhat lower (around $2^{-40}$) with NTRU products, as the key generation implemented in NTRU products surprisingly differs from the one recommended in [3]. In any case, decryption failures happen sufficiently often that one cannot dismiss them, even in NTRU products.”

  • Nick Howgrave-Graham, Phong Q. Nguyen, David Pointcheval, John Proos, Joseph H. Silverman, Ari Singer, and William Whyte. "The Impact of Decryption Failures on the Security of NTRU Encryption". Advances in Cryptology – CRYPTO 2003, Lecture Notes in Computer Science, vol. 2729, Springer, 2003, pp. 226–246. DOI: 10.1007/978-3-540-45146-4_14
Jack Schmidt

3 Answers


The likelihood of a decryption failure can be made arbitrarily small. IEEE P1363.1 says in appendix A.4.10:

For ternary polynomials with $d$ coefficients equal to $+1$ and the same number equal to $-1$, the probability of a decryption failure is given by [B30]:

$$\operatorname{Prob}_{(q, d, N)}(\text{Decryption fails}) = P_{(d, N)} \left( \frac{q - 2}{6} \right)$$

where

$$P_{(d, N)}(c) = N \times \operatorname{erfc} \left( \frac{c}{\sigma\sqrt{2N}} \right)$$

and

$$ \sigma(d, N) = \sqrt{\frac{8d}{3N}}$$

where $\operatorname{erfc}(x)$ is the complementary Gauss error function.

As a practical example, for the EES1087EP2 parameter set with $N=1087$, $q=2048$, and $d=120$, the failure probability works out to $5.62\cdot10^{-78}$, which is a bit less than $2^{-256}$. Those parameters were chosen for a $256$-bit security level, so exploiting a decryption failure requires at least as much work as breaking any other part of the system.
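As a sanity check, here is a minimal sketch in Python that plugs the EES1087EP2 parameters into the formula above (the helper name ntru_failure_prob is mine, not from the standard):

```python
# Minimal sketch: evaluate the IEEE P1363.1 failure-probability estimate
# quoted above. The function name and structure are illustrative; only
# the formula itself comes from the standard.
import math

def ntru_failure_prob(N, q, d):
    """N * erfc(c / (sigma * sqrt(2N))) with c = (q-2)/6, sigma = sqrt(8d/(3N))."""
    sigma = math.sqrt(8 * d / (3 * N))
    c = (q - 2) / 6
    return N * math.erfc(c / (sigma * math.sqrt(2 * N)))

p = ntru_failure_prob(N=1087, q=2048, d=120)  # EES1087EP2 parameters
print(p)             # ~5.6e-78
print(math.log2(p))  # ~ -256.6, i.e. just below the 2^-256 target
```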

P.S. The [B30] reference is to the paper "Hybrid Lattice Reduction and Meet in the Middle Resistant Parameter Selection for NTRUEncrypt" by P. Hirschhorn, J. Hoffstein, N. Howgrave-Graham, J. Pipher, J. H. Silverman and W. Whyte.

Prashand Gupta

I'm Chief Scientist at Security Innovation, which owns NTRU, and have contributed to the design of NTRUEncrypt and NTRUSign.

The headline answer here is: NTRUEncrypt doesn't inherently require decryption failures; it's a tradeoff, with key and ciphertext size traded off against the decryption failure probability. Parameter sets that give no decryption failures are possible, but ones that allow a small but non-zero decryption failure probability are more efficient.

The most helpful way to understand this is to think of NTRUEncrypt as a lattice cryptosystem. Here, encryption is a matter of selecting a lattice point (which is effectively a random vector mod q) and adding the message (which is a small vector) to it. Decryption is a matter of mapping the ciphertext point back to that lattice point and recovering the message as the difference between the two. Call this lattice point the "masking point", because it's used to mask the message.

Say we have a two-dimensional lattice whose private basis vectors are (5, 0) and (0, 5), and the message vector is defined as having coordinates with absolute value 0 or 1. Then there are 9 possible messages that can be encrypted, and each encrypted message is always closer to the masking point than to any other lattice point. (If the masking point is (10, 15), the possible encrypted message values are (9, 14), (9, 15), (9, 16), ..., (11, 16).)

If we instead allowed the message vector to have coordinates with absolute value 0, 1, or 2, we could encrypt 25 possible messages, and the encrypted message would still be closer to the masking point than to any other point.

However, if we allowed coordinates with absolute value 0, 1, 2, or 3, then although we could encrypt 49 messages, any message with a ±3 as one of its coordinates would be closer to some other lattice point than to the masking point (because 3 rounds in a different direction mod 5 than 2 does).
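Here is a toy sketch in Python of this two-dimensional example (my own illustration, not NTRU itself), where decryption simply rounds each ciphertext coordinate to the nearest multiple of 5:

```python
# Toy 2D version of the example above: the "private basis" is (5,0), (0,5),
# so decryption rounds each ciphertext coordinate to the nearest multiple
# of q = 5 (the masking point) and subtracts it off.
q = 5

def decrypt(ciphertext):
    masking = tuple(q * round(c / q) for c in ciphertext)
    return tuple(c - m for c, m in zip(ciphertext, masking))

masking_point = (10, 15)
for message in [(1, -1), (2, 2), (3, 0)]:
    ciphertext = tuple(a + b for a, b in zip(masking_point, message))
    print(message, decrypt(ciphertext), decrypt(ciphertext) == message)
# (1, -1) and (2, 2) round back correctly; (3, 0) gives ciphertext (13, 15),
# and 13 is closer to 15 than to 10, so decryption fails as described.
```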

What happens in NTRUEncrypt is similar, modulo the differences that you get from moving to higher dimensions. We've defined constraints on the message to be encrypted that ensure that almost all the time, the message will round back to the masking point. We can estimate the probability that the rounding will happen incorrectly and set it to be less than the security level (as Prashand Gupta said). We could also eliminate decryption failures altogether by increasing q, which would correspond to increasing the size of the private basis relative to the message; we don't see a need to do this, because the decryption failure probability is sufficiently low already, and bringing it to 0 would increase q from 2048 to 4096 or 8192, adding N or 2N bits to the size of the ciphertext and key.
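To put rough numbers on that tradeoff, one can re-evaluate the P1363.1 estimate quoted in the first answer with a doubled q (a self-contained sketch under the same assumptions; the helper name is mine):

```python
# Sketch of the q tradeoff described above, using the P1363.1 estimate
# quoted in the first answer: doubling q (2048 -> 4096) drives the
# estimated failure probability from roughly 2^-256 to effectively zero,
# at the cost of one extra bit per coefficient (N more ciphertext/key bits).
import math

def failure_prob(N, q, d):
    sigma = math.sqrt(8 * d / (3 * N))
    c = (q - 2) / 6
    return N * math.erfc(c / (sigma * math.sqrt(2 * N)))

print(failure_prob(1087, 2048, 120))  # ~5.6e-78
print(failure_prob(1087, 4096, 120))  # ~1e-317 or smaller; may underflow to 0.0
```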

William Whyte

Decryption in NTRU is probabilistic; however, for correctly chosen parameters, the chance of a decryption failure is very small. It is not a worry in practice.

PulpSpy