I'm just interested in cryptography, so please don't expect me to be an expert. ;) I recently read about AES cache timing attacks and found it very interesting. I read the article Cache-timing attacks on AES by Daniel Bernstein, but I don't seem to understand everything.

  1. How relevant is this to "real-life" network applications? As I understand it, the measurements have to be extremely precise to leak information. Are networks (even LAN networks) fast enough for that? (The author measures the time on the server and sends it to the client.)
  2. The author dedicates a long section to preventing the OS from interrupting an AES computation. But how does an interruption leak any information? Yes, the calculation takes more time than usual, but that extra time doesn't depend on the input, does it?
  3. Assuming this is in fact a problem for network applications: would it be sufficient to wait() until a constant amount of time has passed after encryption, before sending any data over the network?
cooky451

1 Answer

Yes, timing attacks are relevant to real-world implementations of crypto. Yes, as that paper demonstrates, these attacks can be carried out in real life: real networks are fast enough to allow these attacks.

It is also important to understand that some network services do provide timestamps that leak information about how long the operation took on the server; for instance, some TCP stacks will automatically add high-precision timestamps to every packet sent, and a few applications may add timestamps to their packets for their own reasons. This further heightens the risk. If we want AES to be a general-purpose encryption algorithm that is secure for essentially all reasonable uses (and we do), then this is extra motivation for a generic defense that eliminates this attack method.

There are many defenses. The best defense is to ensure that the implementation is constant-time: the amount of time it takes is independent of the value of the key. To stop cache-based attacks as well, you also want to ensure that the sequence of memory addresses read/written is constant (independent of the value of the key).
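To illustrate the second point, here is a hypothetical C sketch (my own, not from the answer) of one common technique: instead of indexing a table directly with a secret byte, scan every entry and keep the wanted one with a branch-free mask, so the sequence of memory accesses is the same for every key.

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical sketch: fetch table[secret] without letting the memory
 * access pattern depend on the secret byte. Every entry is read on
 * every call; the wanted entry is selected with a branch-free mask. */
uint8_t ct_lookup(const uint8_t table[256], uint8_t secret)
{
    uint8_t result = 0;
    for (size_t i = 0; i < 256; i++) {
        uint32_t diff = (uint32_t)i ^ secret;   /* 0 only when i == secret */
        /* mask = 0xFF when diff == 0 (unsigned wrap-around), 0x00 otherwise */
        uint8_t mask = (uint8_t)((diff - 1) >> 8);
        result |= table[i] & mask;
    }
    return result;
}
```

Note the caveat: an optimizing compiler may re-introduce branches or the CPU may still cache-align accesses unevenly, which is why serious constant-time implementations are often written in assembly or use bitslicing rather than table lookups at all.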

Delaying until a constant time (the worst-case execution time) has some issues. It is hard to estimate the worst-case execution time in practice, given the variety of things that can cause execution to take a long time (e.g., a cache miss, a page fault, being pre-empted by the OS, and more). If you take a very conservative estimate, then the estimate will be a very long time and performance will suffer dramatically. If you don't, then there is the risk that the actual time may exceed your constant time, and then you cannot recover: the true duration has already leaked. So, while this is indeed a possible solution strategy to consider, the devil is in the details, and I think it's probably not the most promising one: in most cases, the issues with it will make it unattractive in practice.

D.W.