
It's my understanding that Electronic Code Book (ECB) mode produces identical cipher text for identical plaintext blocks, which is not a good thing. To get around that, Cipher Block Chaining (CBC) can be used to increase diffusion within the encrypted message: the algorithm takes the previous block's cipher text and XORs it with the next block's plaintext before encrypting. My question is: doesn't that make the beginning of a message easier to decrypt? In my mind, the first block of an encrypted message has a closer relationship to the original message (fewer mathematical permutations involved) than subsequently encrypted blocks.
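
Just to pin down the mechanism I'm asking about, here is my rough sketch of the chaining step in Python (encrypt_block is a placeholder for the actual block cipher; all the names are mine):

    def xor_blocks(a: bytes, b: bytes) -> bytes:
        # XOR two equal-length blocks byte by byte.
        return bytes(x ^ y for x, y in zip(a, b))

    def cbc_encrypt(blocks, iv, encrypt_block):
        # Each plaintext block is XORed with the previous cipher text
        # block (the IV for the very first block) before encryption.
        prev = iv
        out = []
        for p in blocks:
            prev = encrypt_block(xor_blocks(p, prev))
            out.append(prev)
        return out

    # Usage sketch, assuming the pycryptodome package for a real cipher:
    # from Crypto.Cipher import AES
    # aes = AES.new(b"0" * 16, AES.MODE_ECB)
    # cbc_encrypt([b"A" * 16, b"B" * 16], b"\x00" * 16, aes.encrypt)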

Joel B

3 Answers


In the first block, the IV provides the "randomness"; in subsequent blocks, the previous block of cipher text is used instead. Under the assumption that the cipher is not weak and behaves like a pseudorandom permutation, these are basically the same: you XOR something unpredictable onto the plaintext, and then encrypt.

As long as the IV is chosen randomly (and therefore is practically never repeated), there is no weak beginning. If you disregard the randomness and always start a new message with the same value, then yes, the first block can be considered weaker, because an attacker can then distinguish whether two messages have the same beginning.
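
A quick sketch of that distinguishing observation (this assumes the pycryptodome package; the key and messages are toy values):

    import os
    from Crypto.Cipher import AES  # pycryptodome

    key = os.urandom(16)
    fixed_iv = bytes(16)  # the mistake: the same IV for every message

    def first_cbc_block(block: bytes, iv: bytes) -> bytes:
        # First CBC step: encrypt (plaintext block XOR IV).
        ecb = AES.new(key, AES.MODE_ECB)
        return ecb.encrypt(bytes(p ^ i for p, i in zip(block, iv)))

    m1 = b"attack at dawn!!"  # two messages with the same first block
    m2 = b"attack at dawn!!"

    # With a fixed IV, the first cipher text blocks collide and leak
    # that the message beginnings are equal:
    assert first_cbc_block(m1, fixed_iv) == first_cbc_block(m2, fixed_iv)

    # With fresh random IVs, they almost surely differ:
    assert first_cbc_block(m1, os.urandom(16)) != first_cbc_block(m2, os.urandom(16))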

tylo

As long as the IV is chosen correctly, every individual block of the encrypted output will be uniformly random over the set of all bit patterns of the given size. Each block is independent of the clear text, but the blocks are not independent of each other.

The first block of the cipher text is the IV itself, which by construction is uniformly random and independent of the clear text message. Once this block is XORed with the first clear text block, the resulting block is again uniformly random and independent of the clear text.

The output of the XOR is fed through the cipher. Since the cipher performs a permutation of all the possible bit strings of the specific length, it preserves the uniform randomness. (A permutation of a uniformly random distribution is still uniformly random.) Moreover, since the key doesn't depend on the clear text, this step cannot introduce any correlation with the clear text, so the result remains independent of the clear text.

Now you can simply repeat the same steps to see that every individual output block is indeed uniformly random and independent of the clear text.
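
The XOR and permutation steps of this argument can even be checked exhaustively on toy one-byte "blocks" (a sketch only; the shuffled table merely stands in for the block cipher under a fixed key):

    import random

    perm = list(range(256))
    random.shuffle(perm)  # an arbitrary fixed permutation, like the cipher

    plaintext_byte = 0x41
    counts = [0] * 256
    for r in range(256):  # r plays the role of the uniformly random IV
        counts[perm[r ^ plaintext_byte]] += 1

    # Every output value is hit exactly once, so the output is uniform
    # no matter what the plaintext byte was:
    assert all(c == 1 for c in counts)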

Another way to see it is that you can produce the cipher text by randomly choosing any single block instead of the IV. From this single block you can encrypt your way forwards to the end of the clear text to produce all the cipher blocks after the randomly chosen block. You can also work your way backwards to produce the preceding cipher blocks, using an approach very similar to CFB mode.

In the end, the probability distribution of cipher texts is identical regardless of which block you choose first. In certain scenarios, choosing a middle block rather than the first one makes sense as a way of parallelizing the encryption across two CPUs.
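
Here is a sketch of that construction (assuming the pycryptodome package; the function and variable names are mine):

    import os
    from Crypto.Cipher import AES  # pycryptodome

    def xor(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    def cbc_from_chosen_block(key, plaintext_blocks, k, chosen):
        # Fix cipher block k to a randomly chosen value, then work
        # forwards with E (C_i = E(P_i XOR C_{i-1})) and backwards
        # with D (C_{i-1} = D(C_i) XOR P_i). Returns [IV, C_1, ..., C_n].
        ecb = AES.new(key, AES.MODE_ECB)
        n = len(plaintext_blocks)
        c = [None] * (n + 1)
        c[k] = chosen
        for i in range(k + 1, n + 1):  # forwards from the chosen block
            c[i] = ecb.encrypt(xor(plaintext_blocks[i - 1], c[i - 1]))
        for i in range(k, 0, -1):      # backwards from the chosen block
            c[i - 1] = xor(ecb.decrypt(c[i]), plaintext_blocks[i - 1])
        return c

    key = os.urandom(16)
    pt = [b"A" * 16, b"B" * 16, b"C" * 16, b"D" * 16]

    # Choosing a middle block (k = 2) still yields a valid CBC cipher text:
    c = cbc_from_chosen_block(key, pt, 2, os.urandom(16))
    iv, body = c[0], b"".join(c[1:])
    assert AES.new(key, AES.MODE_CBC, iv=iv).decrypt(body) == b"".join(pt)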

kasperd

This is why we use random initialization vectors (IVs) for all such algorithms.
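
In practice that just means generating a fresh IV per message and shipping it alongside the cipher text, along these lines (a sketch assuming the pycryptodome package and a plaintext already padded to the block size):

    import os
    from Crypto.Cipher import AES  # pycryptodome

    def encrypt_message(key: bytes, padded_plaintext: bytes) -> bytes:
        iv = os.urandom(16)  # fresh random IV for every message
        cbc = AES.new(key, AES.MODE_CBC, iv=iv)
        return iv + cbc.encrypt(padded_plaintext)  # IV travels with the cipher text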