
It is known that the two prime factors $p$ and $q$ of an RSA modulus $n$ should not be too close to each other; otherwise, an attacker may be able to factor the modulus. In other words, $\Delta = \left| p - q \right|$ should not be too small.

However, "too small" is a somewhat subjective measure. "Too small" with respect to what? And why?

I do realize that it is considered best practice, given a modulus $n$ of $k$ bits, to pick $p$ and $q$ as random primes of bit length $k/2$ (see also HAC, Section 8.8, note ii).

On the other hand, Section B.3 of the Digital Signature Standard (DSS) recommends $\Delta > 2^{k/2-100}$ when generating an RSA key. That is, $p$ and $q$ should differ somewhere in their 100 most-significant bits (thanks poncho!), regardless of the bit length $k$. That is consistent with the best practice above, and the check could even be deemed redundant.

However, both approaches seem to imply that a dangerous $\Delta$ is much, much less than $2^{k/2}$. Is there any formal proof of that? And is there any quantification for such dangerous $\Delta$?

3 Answers


This question is only relevant if you choose $p,q$ in a non-standard way. The standard way to choose $p,q$ is to choose them as two independent random $k/2$-bit primes. If you do it the standard way, the question is not relevant: the probability that $|p-q|$ is too small is negligible, and is dominated by the chances of other kinds of failures.

This question would be relevant if you were choosing $p,q$ in some funny way that had an unusually high probability of making $|p-q|$ unusually small. Yes, you can quantify how much easier this makes factoring. For instance, the Fermat factoring method works as follows: for $a=\lceil \sqrt{n} \rceil, \lceil \sqrt{n} \rceil+1, \lceil \sqrt{n} \rceil+2,\dots$, it checks whether $a^2 - n$ is a perfect square; if so, say $a^2 - n = b^2$, it has factored $n = (a-b)(a+b)$.
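For concreteness, here is a minimal Python sketch of Fermat's method as just described (the function name and the toy modulus are mine, purely for illustration):

```python
import math

def fermat_factor(n):
    """Fermat's method: try a = ceil(sqrt(n)), ceil(sqrt(n)) + 1, ...
    until a^2 - n is a perfect square b^2; then n = (a - b)(a + b)."""
    a = math.isqrt(n)
    if a * a < n:              # round up to ceil(sqrt(n))
        a += 1
    while True:
        b2 = a * a - n
        b = math.isqrt(b2)
        if b * b == b2:        # a^2 - n is a perfect square
            return a - b, a + b
        a += 1

# Toy example: close factors are recovered almost immediately.
print(fermat_factor(101 * 113))  # -> (101, 113)
```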

We can analyze the running time of Fermat's method. Let $\epsilon=(p/\sqrt{n}) - 1$ (taking $p$ to be the larger of the two factors, so $\epsilon > 0$), so that $p=\sqrt{n}(1+\epsilon)$ and $q=\sqrt{n}/(1+\epsilon)=\sqrt{n}(1-\epsilon+\epsilon^2-\cdots)$. Fermat's method succeeds when $a=(p+q)/2=\sqrt{n}(1+\epsilon^2/2-\cdots)$. In other words, it requires $\approx \sqrt{n} \epsilon^2/2$ iterations. So, if you want this to take at least $2^{100}$ time, you need $\sqrt{n} \epsilon^2/2 \ge 2^{100}$, or equivalently, $\epsilon \ge 2^{50.5}/n^{1/4}$. Since $|p-q| \approx 2\sqrt{n}\epsilon$, this means we need $|p-q| \ge 2^{51.5} n^{1/4} = 2^{51.5} 2^{k/4}$.
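As a quick sanity check of this estimate, here is a sketch on a toy modulus (sympy's `nextprime` is used only to pick two primes with a known gap; the sizes are arbitrary):

```python
import math
from sympy import nextprime

# Toy primes around 2^20, separated by roughly 2^16.
q = nextprime(2**20)
p = nextprime(2**20 + 2**16)
n = p * q

# Fermat's method starts at a = ceil(sqrt(n)) and succeeds at
# a = (p + q)/2, so the exact iteration count is their difference.
start = math.isqrt(n)
if start * start < n:
    start += 1
actual = (p + q) // 2 - start

# The estimate from the analysis above: sqrt(n) * eps^2 / 2.
eps = p / math.sqrt(n) - 1
predicted = math.sqrt(n) * eps**2 / 2

print(actual, round(predicted))  # the two counts should agree closely
```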

In other words, if you want Fermat factoring to take at least $2^{100}$ time, you need $\Delta$ to be at least $2^{51.5} 2^{k/4}$ (for a 2048-bit modulus, that is $\Delta \ge 2^{563.5}$). For a detailed derivation, see

  • Benne de Weger, "Cryptanalysis of RSA with small prime difference", Applicable Algebra in Engineering, Communication and Computing (AAECC), vol. 13, no. 1, pp. 17–28, 2002. See Section 3 (much of the rest addresses a different question and is not relevant here).

See also the following paper, which says that $n$ can be factored in polynomial time if $|p-q| \le 2^{k/3}$:

For instance, the paper gives an example of a 1024-bit RSA modulus ($k=1024$). It says that if $p$ and $q$ agree in their 171 most significant bits, then you can factor $n$: sharing the top 171 bits of two 512-bit primes means $\Delta < 2^{512-171} = 2^{341} \approx 2^{k/3}$. You can compare this to the DSS requirement $\Delta > 2^{k/2-100} = 2^{412}$, which rules this case out with room to spare.

But again, the right way to make this attack infeasible is to choose $p,q$ independently at random (as is the standard method). And if you choose $p,q$ in the proper way, the threat is rendered infeasible, and you don't need to worry about the size of $|p-q|$. For example, if you want a 2048-bit RSA modulus, choose a random 1024-bit prime $p$, and then independently choose a random 1024-bit prime $q$. Don't worry about their difference; the mathematics says that, with overwhelming probability, they will have a sufficiently large difference.
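In code, the standard method is nothing more than the following sketch (using sympy's `randprime` for brevity; any good prime generator works the same way):

```python
from sympy import randprime

# Standard key generation for a 2048-bit modulus: two independent
# random 1024-bit primes, with no condition on their difference.
p = randprime(2**1023, 2**1024)
q = randprime(2**1023, 2**1024)
n = p * q

# With overwhelming probability |p - q| is itself about 1024 bits,
# vastly larger than the Fermat danger zone, so no check is needed.
print(abs(p - q).bit_length())
```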

I hope that this answers your question sufficiently.


I see that the DSS standard does contain the requirement that you mention. I think it is ill-considered, or perhaps not there for the reason you might think it is. It is true that if you chose an RSA modulus by picking $p$ and $q$ in some crazy way that made it likely $|p-q|$ would be small, then RSA would be insecure (there are factoring methods that can be used to factor $n$ in this circumstance). However, in that case the problem would not be that you chose $p,q$ with a small difference: the problem would be that you failed to generate $p,q$ independently at random. So, don't do that. As long as you do generate $p,q$ properly, you don't need to separately check any condition on $|p-q|$; if $p,q$ are chosen independently at random, the chance that $|p-q|$ is too small is negligible, on the order of $2^{-100}$ for the DSS bound (less than the chance of getting struck by lightning several times in a row, less than the chance of someone factoring your RSA modulus, etc.).

Why does the DSS contain this recommendation? I don't know. I think the recommendation is misguided and unnecessary.

Bottom line: the best answer to your question is to un-ask the question, as it contains some implicit assumptions that are not valid.

D.W.

This recommendation is there specifically to prevent Fermat's factorization method from yielding a factorization. That method can factor a number if its two factors are sufficiently close; this recommendation rules that out as a possibility.

Now, you ask: is such a recommendation reliable? Well, it certainly does prevent that factorization method, and it is pretty cheap, involving a simple comparison; verifying that the code does the right thing when the test rejects your $p$ and $q$ is probably your greatest concern.
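For reference, the test itself is essentially a one-liner; here is a sketch of the FIPS-style check in Python (the function name is mine):

```python
def prime_gap_ok(p: int, q: int, k: int) -> bool:
    """FIPS 186-style check for a k-bit modulus: require
    |p - q| > 2^(k/2 - 100), i.e. p and q must differ
    somewhere in their 100 most significant bits."""
    return abs(p - q) > 2**(k // 2 - 100)

# Usage during key generation: if the check fails, discard the
# candidate and generate a fresh prime. For k = 2048 the threshold
# is 2^924.
assert prime_gap_ok(p=2**1023 + 2**925, q=2**1023 + 1, k=2048)
```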

On the other hand, for RSA-sized numbers, Fermat's method is extremely unlikely to yield a factorization, even if we don't check $\Delta$; it is sufficiently unlikely that an intelligent attacker would try it only if he had a priori reason to believe that $\Delta$ was extremely small; otherwise, he'd be wasting resources that he could have spent running NFS or ECM (factorization methods with much better probability of success).

So, the question comes down to: do you run a cheap test to prevent a factorization method that isn't much of a concern in the first place? Personally, I'd say "no"; the authors of FIPS 186-3 evidently felt differently.

BTW: the recommendation $\Delta > 2^{k/2-100}$ doesn't mean "$p$ and $q$ should differ by a number which is at least 100 bits long"; it means something closer to "$p$ and $q$ should differ somewhere in their 100 most-significant bits".

poncho

If $n = p_1 p_2$, do not choose two primes for which $x = 2\left((p_1+p_2)/2 - \sqrt{n}\right)$ is small; and the bigger $n$ is, the bigger $x$ should be. (Note that $x/2$ is essentially the number of iterations Fermat's method needs, as derived above.)

Dima