
Let us recall the Schnorr Protocol, following Chris Peikert's excellent Notes on the Theory of Cryptography.

Protocol. Let $G=\langle g \rangle$ be a cyclic group of order $q$. We consider an arbitrary element $x\in G$ with Discrete Logarithm $w := \log_g(x)$. The input to the Prover $P$ is $(x,w)$, and the Verifier $V$ gets just $x$. The interactive Proof System is defined as follows:

[Protocol figure: $P$ samples $r \stackrel{\$}{\leftarrow} \mathbb{Z}_q$ and sends $z = g^r$; $V$ sends a challenge bit $b \stackrel{\$}{\leftarrow} \{0,1\}$; $P$ replies with $a = r + bw$; $V$ accepts iff $g^a = z \cdot x^b$.]
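For concreteness, here is a minimal runnable sketch of one round of this protocol; the toy group ($p = 23$, $q = 11$, $g = 2$) and the concrete numbers are my own illustrative choices, not from the notes:

```python
# One round of the Schnorr protocol on a toy group: the order-11 subgroup
# of Z_23^* generated by g = 2 (illustrative parameters only).
import random

p, q, g = 23, 11, 2

w = random.randrange(q)      # prover's witness: the discrete log of x
x = pow(g, w, p)             # common input x = g^w

# Prover's first message: a commitment to fresh randomness r.
r = random.randrange(q)
z = pow(g, r, p)

# Verifier's challenge: a single uniformly random bit.
b = random.randrange(2)

# Prover's response.
a = (r + b * w) % q

# Verifier's check: g^a == z * x^b.
assert pow(g, a, p) == (z * pow(x, b, p)) % p
print("accepting transcript:", (z, b, a))
```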

My Question is on the Zero-Knowledge Property for a $\color{red}{\textrm{Malicious Verifier $V^*$}}$.

So, in the same Set of Notes, we define a simulator $S^{V^*}$ as follows:

$\underline{\text{Simulator $S^{V^*}(x)$}}$

REPEAT

  • $b \stackrel{\$}{\leftarrow}\{0,1\} \ ; \ a \stackrel{\$}{\leftarrow} \mathbb{Z}_q$
  • $z\leftarrow g^a x^{-b}$
  • $b' \leftarrow V^*(z)$

UNTIL $(b'=b)$

RETURN $(z,b,a)$
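To make the rewinding concrete, here is a minimal runnable sketch of $S^{V^*}$ against a black-box verifier, on a toy group; the biased, $z$-dependent challenge strategy below is only an illustrative stand-in for a malicious $V^*$, and the script simply compares the empirical transcript distributions of the real and simulated executions:

```python
# Sketch of the simulator S^{V*}(x) against a black-box malicious verifier,
# on a toy group (p = 23, q = 11, g = 2); all concrete choices are illustrative.
import random
from collections import Counter

p, q, g = 23, 11, 2
w = 7                                    # witness known only to the prover
x = pow(g, w, p)                         # common input

def malicious_verifier(z):
    # Some z-dependent, biased challenge distribution (stand-in for V*).
    return 1 if random.random() < (z % 5) / 5 else 0

def real_view():
    # Honest prover interacting with the malicious verifier.
    r = random.randrange(q)
    z = pow(g, r, p)
    b = malicious_verifier(z)
    a = (r + b * w) % q
    return (z, b, a)

def simulator():
    # Rewind V* until its challenge matches the guessed bit b.
    while True:
        b = random.randrange(2)
        a = random.randrange(q)
        z = (pow(g, a, p) * pow(x, q - b, p)) % p   # z = g^a * x^{-b}
        if malicious_verifier(z) == b:
            return (z, b, a)

n = 200_000
real = Counter(real_view() for _ in range(n))
sim = Counter(simulator() for _ in range(n))
support = set(real) | set(sim)
# Empirical statistical distance between the two transcript distributions.
sd = sum(abs(real[t] - sim[t]) for t in support) / (2 * n)
print("empirical statistical distance:", sd)        # close to 0
```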

I would be more than grateful if someone could rigorously show why this distribution, $S^{V^*}(x)$, is indistinguishable from the distribution $\mathrm{VIEW}_{(P,V^*)}^{V^*}(x)$ and what kind of indistinguishability we have.

Notably, I could not find a rigorous Proof for the Malicious Verifier case anywhere on the Internet, only for Honest Verifier Zero-Knowledge (HVZK). Is it so easy that it is simply omitted?

Thanks in advance.

Chris

2 Answers


Let me try to present a solution. Observe that the simulator $S$, given black-box access to $V^*$, must generate a distribution that is indistinguishable from the view of the adversary $V^*$. In particular, $S$ must produce the challenge bit $b$ according to the distribution used by $V^*$. However, $S$ only has oracle access to $V^*$ and therefore does not know this distribution. So our task is to find a way for $S$ to sample from $V^*$'s distribution.

Let's simplify this with a basic probability analysis. Let $D$ be a probability distribution over $\{0,1\}$ such that $p_0$ is the probability of sampling $0$ and $p_1 = 1-p_0$ is the probability of sampling $1$. Consider the following sampling algorithm $A$.

  1. Pick $b \in \{0,1\}$ uniformly at random.
  2. Sample $b' \gets D$.
  3. If $(b = b')$, output $b$.
  4. Else, go to step 1.

Let $P_i$ denote the probability that the algorithm $A$ outputs $i$. Let's calculate $P_0$. Either $A$ outputs $0$ in the first iteration, which happens when it samples $b = 0$ and $b'$ is also $0$, an event of probability $\frac{1}{2} \times p_0$; or it fails in the first iteration and repeats. The probability of failure in the first attempt is $\frac{1}{2} p_1 + \frac{1}{2} p_0$, the two cases being $b=0, b' = 1$ and $b = 1, b' = 0$. Now the interesting part: since the same process repeats after a failure, the probability of eventually outputting $0$ in the subsequent iterations is again $P_0$. Therefore, we get

$$P_0 = \frac{p_0}{2} + \frac{(p_0 + p_1) \times P_0}{2} = \frac{p_0}{2} + \frac{P_0}{2}$$

This implies that $P_0 = p_0$. Similarly, we can show that $P_1 = p_1$.
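As a quick sanity check (my own addition, with an arbitrary choice of $p_0$), a few lines of Python confirm empirically that $A$'s output frequency matches $D$:

```python
# Empirical check that the rejection-style sampler A reproduces an
# unknown distribution D over {0,1} (p0 is an arbitrary illustrative value).
import random

p0 = 0.83                                # Pr[D = 0]

def sample_D():
    return 0 if random.random() < p0 else 1

def A():
    while True:
        b = random.randrange(2)          # step 1: uniform guess
        b_prime = sample_D()             # step 2: sample from D
        if b == b_prime:                 # step 3: output on a match
            return b                     # step 4: otherwise repeat

n = 100_000
freq0 = sum(1 for _ in range(n) if A() == 0) / n
print(f"P_0 ~ {freq0:.3f}  (target p_0 = {p0})")
```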

Observe that $S^{V^*}$ essentially performs this same procedure to simulate the challenge distribution used by $V^*$.

Regarding your question about the type of indistinguishability achieved, the lecture notes mention that

It is relatively easy to show that $S$ reproduces $V^*$’s view, up to negligible statistical distance.

And this is easily verifiable from the above.


Edit: As @lamontap pointed out, it is possible that the malicious verifier $V^*$ may use different distributions depending on its input $z$. The above calculations do not account for this case.

Let $D_z$ denote the distribution that $V^*$ uses on input $z$. Let us first calculate the probability that $(z, b, a)$ is the transcript in the real world. Observe that $z$ is chosen uniformly at random by the prover, $b$ is sampled from $D_z$ by the malicious verifier $V^*$, and $a$ is completely determined by $z, b, x$. Therefore, the probability is

$$\Pr[z \gets G] \times \Pr[b' = b \;|\; b' \gets D_z] = \frac{1}{q} \times p_b^z = \frac{p_b^z}{q}$$

where $p_b^z$ is the probability of sampling $b$ from $D_z$. In the simulation scenario, the simulator generates this triplet with the following probability which we denote as $p_{z,b,a}^S$. Either

  1. it generates this triple in the first iteration, with probability $\frac{1}{2} \times \frac{1}{q} \times p_b^z$. This is because, even though $z = g^a x^{-b}$ is determined by $a$ and $b$, the exponent $a$ is chosen uniformly at random, so $z$ is a uniformly random element of $G$.
  2. or it fails in the first iteration but outputs in the subsequent iterations. We can divide the analysis into two cases. Either the simulator sampled $z$ but $b \neq b'$ or it did not sample $z$ and $b \neq b'$. Therefore, the probability is $$\dfrac{1}{2} \times \dfrac{1}{q} \times (p_0^z + p_1^z)p_{z,b,a}^S + \sum_{y \neq z} \dfrac{1}{2} \times \dfrac{1}{q} \times (p_0^{y} + p_1^y)p_{z,b,a}^S = \dfrac{p_{z,b,a}^S}{2}$$

Combining the above two cases, $$p_{z,b,a}^S = \frac{p_b^z}{2q} + \frac{p_{z,b,a}^S}{2}$$

and this gives us $p_{z,b,a}^S = \frac{p_b^z}{q}$, which is exactly the real-world probability computed above.
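If it helps, here is a small brute-force verification of this computation on a toy group, with an illustrative $z$-dependent distribution $D_z$ (neither choice comes from the lecture notes): it enumerates all transcripts exactly and confirms that the real and simulated distributions coincide.

```python
# Exact enumeration of the real and simulated transcript distributions on a
# toy group (p = 23, q = 11, g = 2) with an illustrative z-dependent D_z.
from fractions import Fraction

p, q, g = 23, 11, 2
w = 7
x = pow(g, w, p)

def p_bz(b, z):
    # Illustrative D_z: Pr[b' = 1 | z] = (z mod 5) / 5.
    p1 = Fraction(z % 5, 5)
    return p1 if b == 1 else 1 - p1

# Real world: Pr[(z, b, a)] = p_b^z / q, with a determined by z, b, w.
real = {}
for r in range(q):
    z = pow(g, r, p)
    for b in (0, 1):
        a = (r + b * w) % q
        real[(z, b, a)] = Fraction(1, q) * p_bz(b, z)

# Simulator: its output is the first-iteration outcome conditioned on success
# (b' = b); each iteration succeeds with probability exactly 1/2.
sim = {}
for a in range(q):
    for b in (0, 1):
        z = (pow(g, a, p) * pow(x, q - b, p)) % p        # z = g^a * x^{-b}
        sim[(z, b, a)] = (Fraction(1, 2) * Fraction(1, q) * p_bz(b, z)) / Fraction(1, 2)

assert real == sim
print("real and simulated transcript distributions match exactly")
```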

Mahesh S R

I think you might be overthinking the result? The key here is that the simulator has a very powerful advantage: it can rewind $V^\ast$, that is, run it on as many inputs as it wants, until the outcome it needs occurs.

With this in mind, note that in the simulated execution $(z,b,a)$ is distributed as:

  • $b$ random
  • $a$ random
  • $z$ constrained to $z = g^ax^{-b}$.

In the real world, where the prover's first message is $z = g^r$ for a uniformly random $r$, the tuple is distributed as:

  • $b$ random
  • $z$ random
  • $a$ constrained to $a = r+bw$.

Both triples have exactly the same distribution. There are different ways to see this, some more formal than others. Intuitively, it's a combination of the following two things:

  1. $z = g^ax^{-b}$ if and only if the discrete log $r$ of $z$ is $a -bw$, that is, $r = a-bw$
  2. Sampling $r$ at random and letting $a = r+bw$ leads to the same distribution for the tuple $(a,r)$ as first sampling $a$ and then letting $r = a-bw$.

If you want to prove this even more formally, it may be useful to look at why the one-time pad is perfectly secure; it is essentially the same proof.
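If it helps, here is a tiny exact check of point 2, with toy parameters of my own choosing:

```python
# Sampling r uniformly and setting a = r + b*w yields the same q equally
# likely (a, r) pairs as sampling a uniformly and setting r = a - b*w
# (q, w and the challenge bit b are illustrative toy values).
q, w, b = 11, 7, 1

real = {((r + b * w) % q, r) for r in range(q)}          # r free, a determined
sim = {(a, (a - b * w) % q) for a in range(q)}           # a free, r determined

assert real == sim
print("same set of equally likely (a, r) pairs in both experiments")
```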

Daniel