Suppose $S$ is a finite set (such as the outcomes of a die) and I have a sequence $x_0, \dots, x_n$ of elements of $S$. How can I measure the randomness quality of such a sequence, analogously to how one measures the quality of random bits?
1 Answer
The simplest measure is the bias $\epsilon$ of the sequence away from uniformity: $P(x_i = s) = \frac{1}{|S|} \pm \epsilon$ for any $s \in S$. A regular die has 6 possible outcomes, so its cardinality is 6; for a perfectly fair die, $P(x_i = s) = \frac{1}{6} + 0$.
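As an illustration (not part of the original answer), here is a minimal Python sketch that estimates this bias empirically from a sequence of rolls; the function name `empirical_bias` and the default six faces labelled 1..6 are assumptions made for the example.

```python
# A minimal sketch (assumption: six-sided die labelled 1..6): estimate the
# empirical bias epsilon as the largest deviation of any face's observed
# frequency from the uniform probability 1/|S|.
from collections import Counter

def empirical_bias(rolls, faces=(1, 2, 3, 4, 5, 6)):
    counts = Counter(rolls)
    n = len(rolls)
    uniform = 1 / len(faces)
    return max(abs(counts.get(f, 0) / n - uniform) for f in faces)

# A perfectly balanced (artificial) sequence has zero empirical bias.
print(empirical_bias([1, 2, 3, 4, 5, 6] * 100))  # 0.0
```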
NIST, the USA's standards body, aims for $\epsilon = 2^{-64}$ before characterising a sequence as 'fully' random. This creates a measurement problem: it is very difficult to generate and manage the amount of data needed to determine such a small bias accurately.
We therefore resort to statistical tests within acceptable bounds of certainty, such as chi-squared tests. To that end, see How many rolls do I need to determine if my dice are fair? Cryptographic test suites typically use a significance level of 0.1%.
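For concreteness, here is a hedged sketch of such a chi-squared goodness-of-fit test using `scipy.stats.chisquare`; the simulated rolls and the 0.1% threshold are illustrative assumptions, not prescriptions from the answer.

```python
# A minimal sketch: chi-squared goodness-of-fit test for die fairness.
# The simulated rolls stand in for real observations; 0.001 mirrors the
# 0.1% significance level mentioned above.
from collections import Counter
import random

from scipy.stats import chisquare

rolls = [random.randint(1, 6) for _ in range(60_000)]  # stand-in for real die rolls
counts = Counter(rolls)
observed = [counts[f] for f in range(1, 7)]

stat, p_value = chisquare(observed)  # expected counts default to uniform
print(f"chi-squared = {stat:.2f}, p = {p_value:.4f}")
if p_value < 0.001:
    print("Reject fairness: observed frequencies deviate significantly.")
else:
    print("No evidence against fairness at the 0.1% level.")
```

A small p-value means the observed face counts are unlikely under the fair-die hypothesis, which is the kind of evidence the linked question discusses.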