I am trying to understand the calculations for the success probability of finding an unknown key using rainbow tables.
Basically, I have a target success probability and a known keyspace of size $N$, and want to derive the necessary table-building parameters: the number of chains $m$, the length of each chain $t$, and potentially the number of tables $l$ (see below).
The original paper by Oechslin gives the success probability of a single $m \times t$ table for finding a key in a keyspace of $N$ keys as $$ P_{table} = 1 - \prod_{i=1}^{t}\left(1 - \frac{m_i}{N}\right) $$
with $m_i$ recursively defined as $m_1 = m$ and $m_{n+1} = N\left(1-e^{-m_n/N}\right)$.
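For concreteness, here is how I currently evaluate the formula (a minimal Python sketch; the function name and the example parameters are my own, everything else follows the two equations above):

```python
import math

def rainbow_success(N: int, m: int, t: int) -> float:
    """P_table = 1 - prod_{i=1}^{t} (1 - m_i / N) for one m x t rainbow table."""
    p_fail = 1.0    # running product of the (1 - m_i / N) factors
    m_i = float(m)  # m_1 = m
    for _ in range(t):
        p_fail *= 1.0 - m_i / N
        m_i = N * (1.0 - math.exp(-m_i / N))  # m_{n+1} = N(1 - e^{-m_n / N})
    return 1.0 - p_fail

# Toy parameters: N = 2^24 keys, m = 2^16 chains of length t = 2^8,
# so m * t = N before accounting for collisions.
print(rainbow_success(2**24, 2**16, 2**8))
```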
The appendix of the paper gives an explanation of this formula, which I can mostly follow. However, with $m_i$ defined as above, I cannot see how one could derive the parameters $m$ and $t$ except by just evaluating the formula, which is computationally expensive. These posts suggest that one simply fixes $m$ to the available amount of memory and then scales $t$ to reach the targeted success probability. This post links to a paper discussing the problem, but as far as I can tell it assumes perfect tables, and the computations do not get much simpler.
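Following that suggestion, the best I have come up with is a linear scan over $t$ with $m$ fixed. Since each additional column only contributes one more factor to the product, the running failure probability can be carried along instead of re-evaluating the whole formula for every candidate $t$ (again a sketch with my own naming):

```python
import math

def smallest_t(N: int, m: int, target: float, t_max: int = 100_000):
    """Smallest chain length t such that a single m x t rainbow table
    reaches the target success probability, or None if t_max is hit."""
    p_fail = 1.0
    m_i = float(m)
    for t in range(1, t_max + 1):
        p_fail *= 1.0 - m_i / N
        if 1.0 - p_fail >= target:
            return t
        m_i = N * (1.0 - math.exp(-m_i / N))
    return None

# With m fixed by the available memory, how long must chains be for 80% success?
print(smallest_t(2**24, 2**16, 0.80))
```

This is still "just evaluating the function", only done incrementally, so I would be happy to learn of a more direct derivation.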
It is argued that a rainbow table has a success probability similar to that of multiple Hellman tables (each using a different reduction function) whose total size equals that of the single rainbow table. However, I could not find a proof of this claim anywhere.
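The closest I got to checking this is numerically, using Hellman's lower bound for the coverage of a single classic table, $P \ge \frac{1}{N}\sum_{i=1}^{m}\sum_{j=0}^{t-1}\left(1 - \frac{it}{N}\right)^{j+1}$, from his 1980 paper. The parameter split ($t$ classic tables of $m/t$ chains each) follows the comparison in Oechslin's paper; the assumption that the classic tables succeed or fail independently is mine:

```python
def hellman_table_success(N: int, m: int, t: int) -> float:
    """Hellman's 1980 lower bound on the success probability of one
    classic m x t table (valid while m * t <= N)."""
    cov = 0.0
    for i in range(1, m + 1):
        q = 1.0 - i * t / N
        cov += q * (1.0 - q**t) / (1.0 - q)  # closed form of sum_{j=0}^{t-1} q^{j+1}
    return cov / N

N, m, t = 2**24, 2**16, 2**8
# t classic tables with m/t chains each, i.e. the same total memory
# as one m x t rainbow table; failures assumed independent.
p_one_classic = hellman_table_success(N, m // t, t)
p_classic = 1.0 - (1.0 - p_one_classic) ** t
p_rainbow = rainbow_success(N, m, t)  # from the first sketch above
print(p_rainbow, p_classic)
```

This only spot-checks specific parameters, though; it is not the proof I am after.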
Additionally, in Section 4.3 of Oechslin's paper, multiple rainbow tables are used to achieve a higher success probability than with a single table.
- In what way do these multiple rainbow tables differ from each other?
- How does using multiple tables differ from just constructing a single larger table?
- How does using multiple rainbow tables increase the success probability?
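Regarding the last question, my working assumption is that the tables are made independent by using different reduction-function families, so that their failure probabilities simply multiply, along the lines of:

```python
def multi_table_success(p_single: float, l: int) -> float:
    """Overall success probability of l tables, assuming each table's
    failure is independent of the others (my assumption)."""
    return 1.0 - (1.0 - p_single) ** l

# e.g. five tables with 55% individual success each:
print(multi_table_success(0.55, 5))  # ~0.98
```

I could not confirm from the paper whether this independence actually holds for rainbow tables built that way, so corrections to this model are welcome.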