
Let's assume we have an $n$-bit hash function and a $b$-bit partial preimage attack that is faster than brute force. Does this imply a faster than brute force preimage attack on the whole hash?

It seems that it does: if you run the partial preimage attack, which takes time $t < 2^b$, against a single target image, the partial preimage it finds happens to be a full preimage with probability $2^{b-n}$, which beats the roughly $t \cdot 2^{-n}$ success probability of brute force in the same time.

On the other hand, if you run it against $2^{n-b}$ target images you expect to find a full preimage for one of them, and this takes $t \cdot 2^{n-b} < 2^n$ time. However, with that many targets a brute force search is expected to hit one of them after only about $2^b$ evaluations, which is better unless $b > \frac{n}{2}$ and the partial preimage attack is very fast, i.e. $t < 2^{2b-n}$. (Complicating matters further, some of the targets may share the same partial hash; I am not sure how to take that into account.)
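To make the comparison concrete, here is a small back-of-the-envelope calculation (the parameters are arbitrary examples, not tied to any particular hash):

```python
# Illustrative comparison (parameters are made-up examples): expected cost,
# in log2, of finding some full preimage among 2^(n-b) target images via
# (a) the partial preimage attack at cost t per target versus
# (b) plain brute force over inputs.

def log2_costs(n, b, log2_t):
    partial_based = log2_t + (n - b)  # log2( t * 2^(n-b) )
    brute_force = b                   # log2( 2^n / 2^(n-b) )
    return partial_based, brute_force

for n, b, log2_t in [(256, 64, 40), (256, 160, 40), (256, 200, 10)]:
    p, bf = log2_costs(n, b, log2_t)
    winner = "partial-based" if p < bf else "brute force"
    print(f"n={n}, b={b}, log2(t)={log2_t}: 2^{p} vs 2^{bf} -> {winner}")
```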

I am trying to figure out what assumptions can be made about preimage attacks when considering the truncation of an arbitrary $n$-bit secure $n$-bit hash.

otus

3 Answers


It depends on what notion of preimage resistance you mean, and on whether the preimage attack applies only to a tiny class of images or works uniformly on all of them.

Fix a random hash family $H\colon \{0,1\}^{4b} \to \{0,1\}^{2b}$. Define

\begin{equation*} H'(x) = \begin{cases} 0^b \mathbin\Vert H_b(x), & \text{if $x$ starts with $1^{2b}$;} \\ H(x), & \text{otherwise.} \end{cases} \end{equation*}

Here $H_b(x)$ is some $b$-bit truncation of $H(x)$. How does the obvious cheap preimage attack on $H'_b$ translate to the cost of a preimage attack on $H'$?
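As a concrete toy model, here is a sketch of the construction with SHA-256 standing in for the random hash family and $b = 16$ (my own illustrative choices, not part of the definition above), together with the cheaper-than-generic preimage search for images that start with $0^b$:

```python
# Toy model of the construction above (my own parameters): SHA-256 stands in
# for the random hash H, with 2b = 32 output bits and 4b = 64 input bits.
import hashlib
import os

B = 16  # b in bits, so H': {0,1}^(4b) -> {0,1}^(2b) with 2b = 32

def H(x: bytes) -> int:
    assert len(x) == 4 * B // 8
    return int.from_bytes(hashlib.sha256(x).digest()[: 2 * B // 8], "big")

def H_prime(x: bytes) -> int:
    if x.startswith(b"\xff" * (2 * B // 8)):   # x starts with 1^(2b)
        return H(x) & ((1 << B) - 1)           # 0^b || H_b(x), H_b = low b bits
    return H(x)

def cheap_preimage(y: int) -> bytes:
    """Preimage search for an image y starting with 0^b: ~2^b work, not 2^(2b)."""
    assert y >> B == 0
    prefix = b"\xff" * (2 * B // 8)            # force the 0^b || H_b(x) branch
    while True:
        x = prefix + os.urandom(2 * B // 8)
        if H_prime(x) == y:
            return x

y = 0x1234                                     # any image whose top b bits are zero
print(H_prime(cheap_preimage(y)) == y)         # True after ~2^16 trials
```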

  • The everywhere preimage resistance (ePre) of $H'$ is at most half the bits of that of $H$, because there exists an image $y$—namely, any string starting with $0^{b}$—such that there is a known random algorithm $A_y(H')$ making $q$ queries that can find a preimage with probability $O(2^{-b} q)$ much higher than the generic $O(2^{-2b} q)$.

    That is, we have an extremely cheap preimage attack on $H'_b$ for the image $0^b$ which translates to a cheaper-than-generic everywhere preimage attack on $H'$ because there exist images, namely those starting with $0^b$, for which we can find preimages at substantially lower cost than generic.

  • But this has negligible impact on the always preimage resistance (aPre), or just unqualified preimage resistance (Pre). There are random algorithms $A_{H'}(y)$, for always preimage resistance, or $A(H', y)$, for (unqualified) preimage resistance, with conditional probability $O(2^{-b} q)$ of finding preimages given that $y$ lies in a certain class of images, but the probability of being challenged with such an image $y$ is only about $2^{-b}$.

    That is, although we have an extremely cheap preimage attack on $H'_b$ for the image $0^b$, it doesn't help us to find a preimage for a uniform random challenge image because the probability of being challenged with an image that starts with $0^b$ is negligible.

Squeamish Ossifrage

The question assumes that a $b$-bit partial preimage attack produces a random $b$-bit partial preimage each time it is run. This is not a realistic assumption.

We could imagine a $b$-bit partial preimage oracle which deterministically produces just a single $b$-bit partial preimage, never more, never less. Running it multiple times gives us no advantage beyond the first run.

Access to such a partial preimage oracle does not seem to help at all in solving the full preimage problem.
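A toy sketch of such a deterministic oracle (my own illustration with made-up parameters, not part of the argument above):

```python
# Toy illustration (made-up parameters) of a deterministic b-bit partial
# preimage oracle: it always returns the same canonical preimage, so
# repeated queries reveal nothing new that would help find a full preimage.
import hashlib

N, B = 20, 8  # toy n-bit hash, b-bit partial preimages

def h(x: int) -> int:
    digest = hashlib.sha256(x.to_bytes(4, "big")).digest()
    return int.from_bytes(digest, "big") >> (256 - N)   # n-bit toy hash

def partial_oracle(z: int) -> int:
    """Return the first x (in a fixed scan order) whose top b hash bits equal z."""
    x = 0
    while h(x) >> (N - B) != z:
        x += 1
    return x

print(partial_oracle(0x5a) == partial_oracle(0x5a))      # True: same answer each time
```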

Meir Maor

No, it doesn't.

Consider this example from Katz's book: "let $g$ be a one-way function and define $f(x_1, x_2) = (x_1, g(x_2))$, where $|x_1| = |x_2|$. It is easy to show that $f$ is also a one-way function, even though it reveals half its input."

So, even though half of the preimage comes for free, no polynomial-time algorithm can invert $f$.
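A minimal sketch of this example, with SHA-256 standing in for the one-way function $g$ (an illustrative stand-in, not taken from the book):

```python
# Minimal sketch of the textbook example, with SHA-256 standing in for the
# one-way function g (an illustrative choice, not from the book's text).
import hashlib
import os

def g(x2: bytes) -> bytes:
    return hashlib.sha256(x2).digest()

def f(x1: bytes, x2: bytes) -> tuple[bytes, bytes]:
    return (x1, g(x2))        # reveals x1 outright, hides x2 behind g

x1, x2 = os.urandom(16), os.urandom(16)
y = f(x1, x2)
assert y[0] == x1             # half of the preimage is read off for free...
# ...but recovering x2 from y[1] still means inverting g, which stays hard.
```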

ssss1