I was wondering what the known metrics are for studying the randomness of a TRNG (besides the NIST tests).
For example, for PUFs, there are known metrics such as uniformity, uniqueness, BER, etc.
2 Answers
The one metric that generically matters in cryptography for a physical entropy source is the min-entropy: the negative base-2 logarithm of the probability of the most likely outcome, in bits. This depends on the physics of the entropy source. As long as it exceeds 256 bits, you can feed a sample through a typical preimage-resistant hash function such as SHAKE256, used as a conditioner, and you will have what is effectively a uniform random string fit for use as cryptographic key material.
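A minimal sketch in Python of both ideas, using a purely hypothetical toy distribution just to make the arithmetic concrete:

```python
import hashlib
import math

def min_entropy_bits(probabilities):
    # Min-entropy: -log2 of the probability of the most likely outcome.
    return -math.log2(max(probabilities))

# Hypothetical distribution over 256 byte values: one value is far more
# likely than it would be under uniformity, the rest share the remainder.
probs = [0.01] + [(1 - 0.01) / 255] * 255
print(min_entropy_bits(probs))   # ~6.64 bits per byte, not 8

def condition(raw_sample: bytes) -> bytes:
    # Conditioner: hash a raw sample carrying >= 256 bits of min-entropy
    # down to a 32-byte string usable as key material.
    return hashlib.shake_256(raw_sample).digest(32)
```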
(Sometimes the physical device is called a TRNG; sometimes the composition of the physical device and the conditioner like SHAKE256 is called a TRNG.)
If your device can't produce a sample with that much min-entropy at once, but it can produce a sequence of IID samples, then you can concatenate them: under independence, the per-sample min-entropies simply add. The result may be much longer than 256 bits, and even if it is very far from uniform in whatever is your favorite measure of statistical distance, what matters for cryptography is only that its min-entropy be at least 256 bits.
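The resulting sample count is just a ceiling division. A toy calculation, reusing the hypothetical 6.64-bit figure from the sketch above:

```python
import math

# Under the IID assumption, min-entropy adds across samples:
# n samples of h bits each carry n * h bits of min-entropy.
h = 6.64                # hypothetical per-sample min-entropy, in bits
n = math.ceil(256 / h)  # samples needed to reach 256 bits
print(n)                # 39
```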
The NIST tests hypothesize various families of probabilistic models for the entropy source, fit parameters based on a sample, and then print the entropy of the models with the fitted parameters. These models are very simple-minded and were designed without knowledge of your device, so they are at best a way to spot-check particularly obvious predictable distributions—so obvious an engineer thought of them without even knowing what your device is. (More details on how ‘entropy tests’ work.)
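To make the "fit a simple model, report its entropy" idea concrete, here is a sketch in the style of the simplest such estimator, the Most Common Value estimate of NIST SP 800-90B §6.3.1 (the constant 2.576 is the 99% normal quantile). The test input below is my own illustration, not from the standard:

```python
import math
from collections import Counter

def mcv_min_entropy(samples):
    # "Fit" the simplest possible model (a single most-likely symbol),
    # take a 99% upper confidence bound on its probability, and report
    # the resulting min-entropy per sample.  The estimator sees only
    # the data, never the physics of the device.
    n = len(samples)
    p_hat = Counter(samples).most_common(1)[0][1] / n
    p_upper = min(1.0, p_hat + 2.576 * math.sqrt(p_hat * (1 - p_hat) / (n - 1)))
    return -math.log2(p_upper)

# A completely predictable repeating pattern still scores ~7.5 bits/byte,
# illustrating how simple-minded such fitted models can be.
print(mcv_min_entropy(bytes(range(256)) * 40))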
Generic measures computed on samples from your device, designed without reference to any model of the physics of your device, have very little value in studying the security of the system. The min-entropy you advertise must be computed from a specific probabilistic model of the physics of the system to give any meaningful confidence in it.
There are two common documents that pertain to TRNGs: US NIST Special Publication 800-90C, Recommendation for Random Bit Generator (RBG) Constructions, and the less sanctimonious German BSI document, A proposal for: Functionality classes for random number generators.
There are all sorts of stochastic measures listed therein, but they boil down to two essential ones:
- The next bit that emerges from the TRNG has a 50.0% chance of being a "1" when measured in the long run. This determination is made via test suites such as ent, diehard and dieharder, TestU01 and others; the appropriate suite is often indicated by the output rate, as some TRNGs only run at 2 kbps or even less. Each suite is composed of smaller individual tests, like counting runs and frequencies, and it is those run and correlation tests, rather than the 50.0% figure alone, that check that the output values are independent of one another. (A minimal sketch of the frequency idea follows this list.)
- That the output sequence doesn't repeat itself when the TRNG is restarted. This is kinda subsumed into (1) above in the most general case if you think about it.
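As promised, a toy sketch of the frequency (monobit) idea that underlies all of the suites mentioned in (1); the 90%-biased test stream is my own illustration:

```python
import math

def monobit_p_value(bits):
    # Under the null hypothesis of independent, unbiased bits, the count
    # of ones is approximately Normal(n/2, n/4).  Returns the two-sided
    # p-value via the normal approximation.
    n = len(bits)
    z = (2 * sum(bits) - n) / math.sqrt(n)
    return math.erfc(abs(z) / math.sqrt(2))

print(monobit_p_value([1] * 900 + [0] * 100))  # ~0: a 90% bias fails decisively
```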
If you accept that random is random, it clearly follows that the output of any working TRNG has to be entirely independent of the internal processes, other than speed. Some people get confused by the distinction between an entropy source and a TRNG. The entropy source is the internal circuit that generates a non-deterministic (random but often auto-correlated) signal. This is then processed, and independent, uniformly distributed bytes are output from the TRNG. At this point, one cannot identify how the bytes were made; they're just plain bytes. You would not be able to differentiate any particular TRNG by even the most detailed inspection of its output.
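As one classic illustration of that processing step (not necessarily what any particular TRNG uses), here is the von Neumann extractor, which turns a biased bit stream into an unbiased one; real conditioners must be stronger, since raw sources are also auto-correlated:

```python
def von_neumann(bits):
    # Scan non-overlapping pairs, emit the first bit of each unequal pair,
    # discard equal pairs.  Removes bias exactly, but only if the raw
    # bits are independent of one another.
    return [a for a, b in zip(bits[0::2], bits[1::2]) if a != b]

# Pairs (1,1), (0,1), (1,0), (0,0) -> emit [0, 1]
print(von_neumann([1, 1, 0, 1, 1, 0, 0, 0]))
```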
As a couple of metrics examples, see the test certificates for the ComScire TRNG used by the online casino bet365 (using some custom tests and diehard), and for the Quantis TRNG (using diehard).