
A safe way to generate a 256-bit ECC key from 128 bits of entropy is to use a CSPRNG, according to this answer: https://crypto.stackexchange.com/a/56551/43864

However, it's difficult to find a cross-language, cross-platform CSPRNG that is guaranteed to produce the same output across all implementations.

What would be a safe alternative to using a CSPRNG? Ideally the answer would involve just the use of SHA3-256, for which implementations are widely available.

knaccc

3 Answers


If you need a CSPRNG, there is no safe alternative.

Your assertions that “it's difficult to find a cross-language, cross-platform CSPRNG” and “SHA3-256, for which implementations are widely available” are contradictory: SHA-3 implementations are not very common, whereas every cryptography library provides a CSPRNG. Furthermore, given a hash implementation, it is fairly easy to construct a CSPRNG: use either Hash_DRBG or HMAC_DRBG, both defined by NIST SP 800-90A. Hash_DRBG is slightly faster; HMAC_DRBG is slightly more resistant in case a weakness is found in the hash algorithm.
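
For illustration, here is a minimal HMAC_DRBG sketch in Python over SHA-256, reduced to instantiation and generation (no nonce, reseed counter, prediction resistance, or additional input, all of which SP 800-90A requires for a conforming implementation); the seed value is a placeholder:

```python
# Minimal HMAC_DRBG (NIST SP 800-90A) sketch over SHA-256, deterministic:
# the same seed yields the same output stream in every implementation.
import hmac, hashlib

class HmacDrbg:
    def __init__(self, entropy: bytes, personalization: bytes = b""):
        self.K = b"\x00" * 32
        self.V = b"\x01" * 32
        self._update(entropy + personalization)

    def _update(self, provided: bytes = b"") -> None:
        # HMAC_DRBG_Update: refresh (K, V) from the provided data
        self.K = hmac.new(self.K, self.V + b"\x00" + provided, hashlib.sha256).digest()
        self.V = hmac.new(self.K, self.V, hashlib.sha256).digest()
        if provided:
            self.K = hmac.new(self.K, self.V + b"\x01" + provided, hashlib.sha256).digest()
            self.V = hmac.new(self.K, self.V, hashlib.sha256).digest()

    def generate(self, nbytes: int) -> bytes:
        out = b""
        while len(out) < nbytes:
            self.V = hmac.new(self.K, self.V, hashlib.sha256).digest()
            out += self.V
        self._update()  # back-tracking resistance step after each generate
        return out[:nbytes]

# Seed with the 128-bit secret (placeholder value) and draw 256 bits.
drbg = HmacDrbg(bytes.fromhex("000102030405060708090a0b0c0d0e0f"))
key = drbg.generate(32)
```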


You can just use any extendable-output function like SHAKE128 or HKDF-SHA256 for this purpose. For the specific size of 256 bits, even SHA-256 or SHA3-256 would work just fine: $$k = \operatorname{SHA-256}(\text{‘my application 128-bit key derivation’} \mathbin\Vert p).$$ This is effectively a kind of single-output CSPRNG, which is standard and easy to implement and will behave the same way on all platforms.
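
For instance, a minimal sketch of both options with Python's hashlib; the personalization string and the 128-bit secret $p$ are placeholders:

```python
# Deterministic 256-bit key from a 128-bit seed p, as in the formula above.
import hashlib

p = bytes.fromhex("000102030405060708090a0b0c0d0e0f")  # the 128-bit secret (placeholder)
tag = b"my application 128-bit key derivation"         # arbitrary app-chosen constant

k_sha256 = hashlib.sha256(tag + p).digest()       # 32 bytes, SHA-256 variant
k_shake  = hashlib.shake_128(tag + p).digest(32)  # 32 bytes, SHAKE128 variant
```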

This isn't the US federal government standard procedure from NIST. To approximate one standard procedure, sampling twice the bits you need, you could use SHA-512 instead; for the alternative standard procedure, rejection sampling, you need in principle a CSPRNG with arbitrarily many 256-bit outputs, though in practice you will never need more than one. (Here the benefit of the standard procedure is negligible, but maybe you don't want to take risks when you're already cutting corners, so you keep the auditor's job easy.)
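
If you do want to mimic the rejection-sampling procedure, a counter-based sketch could look like this; the curve order (secp256k1's, purely as an example) and the domain-separation tag are assumptions, and in practice the loop exits on the first iteration with overwhelming probability:

```python
# Rejection sampling: hash with an incrementing counter until the candidate
# is a valid scalar in [1, n-1] for the curve's group order n.
import hashlib

N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141  # example: secp256k1 order

def derive_scalar(p: bytes) -> int:
    ctr = 0
    while True:
        d = hashlib.sha256(b"key derivation" + p + ctr.to_bytes(4, "big")).digest()
        k = int.from_bytes(d, "big")
        if 1 <= k < N:
            return k
        ctr += 1  # rejected: try the next counter value
```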

Caveat: I am taking as a premise that you want to select users' keys from among only $2^{128}$ distinct scalars, and I am not addressing the cost of a multi-target attack with high probability of success against any one of your users. I would not recommend this! At the very least, you should include a unique per-user ID in the hash too, if you insist on secrets with <256 bits of entropy.

Squeamish Ossifrage

A simple, good enough method is to apply SHA-256 to

  • a per-user 128-bit secret full-entropy key
  • a fixed-size constant pepper (constant for a given application, mildly confidential, distributed only on a need-to-know basis; distributing it in code is not ideal, but better than no pepper, and there is little alternative)
  • the user ID (can be variable-length)

and use the result as the 256-bit private key. An academically better option is SHA3-256, or HMAC-SHA-256 with secret∥pepper as the HMAC key, because that's a stronger keyed PRF.
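
A minimal sketch of both variants (the pepper value is hypothetical; the variable-length user ID goes last, so the encoding of the fixed-size fields stays unambiguous):

```python
import hashlib, hmac

PEPPER = b"app-specific pepper, need-to-know"  # hypothetical fixed-size constant

def private_key(secret: bytes, user_id: bytes) -> bytes:
    # plain variant: SHA-256 over the fixed-size fields, then the user ID
    return hashlib.sha256(secret + PEPPER + user_id).digest()

def private_key_hmac(secret: bytes, user_id: bytes) -> bytes:
    # keyed-PRF variant: HMAC-SHA-256 with secret∥pepper as the key
    return hmac.new(secret + PEPPER, user_id, hashlib.sha256).digest()
```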

Introducing the user ID is useful to prevent multi-target attacks (attempting to find a private key matching any of many public keys). Without it, the expected effort to break one random private key among those of $n$ users is $2^{127}/n$ operations (hash, compute the public key, find a match among the $n$ public keys), almost $n$ times lower than the $2^{127}$ operations (hash, compute the public key, compare to the one public key) needed when the user ID is hashed in.

Depending on perspective, what's proposed could be argued to give security equivalent to that of a full 256-bit random private key, or not:

  • The best additional attack enabled is a brute-force search of the 128-bit seed, which requires an expected $2^{127}$ tests, each with a hash and some nontrivial ECC operation. That additional attack does not improve when attacking multiple targets. Other attacks on ECC with a 256-bit public key (Pollard's rho…), which essentially break the discrete logarithm, have comparable expected cost (like $2^{129}$) measured in number of some (other) elementary operations (I'm unsure about their multi-target status).
  • But if we dive deeper: if we want the odds of success for an adversary doing a given amount of work to be below some low residual probability of break (say $\epsilon<2^{-30}$ for $2^{100}$ operations), then we have a problem: brute-force search of the per-user 128-bit secret has $\epsilon\approx2^{100}/2^{128}=2^{-28}$ (not meeting our residual-probability goal), while for Pollard's rho and friends $\epsilon\approx(2^{100})^2/2^{257}=2^{-57}$ (passing with flying colors). And there is the additional, independent issue that perhaps the ECC cost of a brute-force search can be lowered sizably with some precomputation.

If we want an extra level of assurance, we can use an iterated hash function (PBKDF2, bcrypt, scrypt, Argon2, Balloon…) instead of a plain hash and gain significant extra resistance to brute-force search of the per-user 128-bit secret; like +20 bits of security for that attack.
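
For instance, with PBKDF2-HMAC-SHA-256 from Python's standard library, $2^{20}$ iterations give roughly those +20 bits against brute-force search (the iteration count and field layout here are illustrative):

```python
import hashlib

def private_key(secret: bytes, pepper: bytes, user_id: bytes) -> bytes:
    # pepper∥user ID plays the role of the salt; 2**20 iterations ≈ +20 bits
    return hashlib.pbkdf2_hmac("sha256", secret, pepper + user_id, 2**20, dklen=32)
```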

fgrieu