3

As I understand, PUFs work by using two procedures: generation and reproduction. Generation reads a value $w$ from a fuzzy source and generates a key $R$ and helper data $P$. Then, in the reproduction procedure, it reads $w'$ from the fuzzy source and, using $P$, is able to recover $R$.
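For concreteness, here is a toy sketch of the two procedures using the code-offset construction (this is my own simplification, not the algorithm from the paper below: the repetition code, the SHA-256 key derivation, and the parameter `REP` are all illustrative choices):

```python
import hashlib
import secrets

REP = 5  # repetition factor: each block tolerates up to 2 flipped bits

def encode(bits):
    """Repetition-encode a list of data bits."""
    return [b for b in bits for _ in range(REP)]

def decode(bits):
    """Majority-vote decode a repetition-encoded bit list."""
    return [1 if sum(bits[i:i + REP]) > REP // 2 else 0
            for i in range(0, len(bits), REP)]

def gen(w):
    """Generation: read fuzzy value w (a list of bits), output key R and helper data P."""
    k = len(w) // REP
    c = encode([secrets.randbits(1) for _ in range(k)])  # random codeword
    p = [wi ^ ci for wi, ci in zip(w, c)]                # helper data P = w XOR c
    r = hashlib.sha256(bytes(w)).digest()                # key R = H(w)
    return r, p

def rep(w_prime, p):
    """Reproduction: recover R from a noisy re-reading w' using P."""
    c_noisy = [wi ^ pi for wi, pi in zip(w_prime, p)]    # = c XOR noise
    c = encode(decode(c_noisy))                          # error-correct back to c
    w = [ci ^ pi for ci, pi in zip(c, p)]                # recover original w
    return hashlib.sha256(bytes(w)).digest()
```

As long as $w'$ differs from $w$ in at most two positions per repetition block, `rep` returns the same $R$ that `gen` produced. Note that $P$ reveals nothing about $R$ here only because $c$ is random, which is the intuition behind helper data being public.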

My question is: this seems to assume that $P$ can be stored safely and can be authenticated, since if an attacker changes the stored value $P$, he can force the system to use a different key. How can this be done if you don't have a key in the first place? Or am I missing something?

(I have followed the notation from A Soft Decision Helper Data Algorithm for SRAM PUFs by Maes, Tuyls and Verbauwhede.)

Conrado

2 Answers

4

It is very similar to how we authenticate ourselves to a website. During registration, the website must store enough information to, at some time in the future, convince itself that the person trying to authenticate now is the same person who registered at some time in the past. For online services, this typically involves storing some function of the password. The online service also wants to make sure that the stored information is kept safe (i.e., unauthorized entities cannot read it) and authenticated (i.e., unauthorized parties cannot change it). Otherwise the authentication guarantee breaks down.

With a PUF, it is almost the same. You register the PUF with the system by extracting $R$ and $P$ from $w$. $R$ and $P$ would then be stored in, for example, a database. When the PUF is used again to authenticate, the service would read $w'$, pull $P$ from the database and run the reproduction procedure to get $R'$. It could then pull $R$ from the database and see if $R==R'$. If so, authentication is successful and access is granted.
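A minimal sketch of that enrollment/verification flow, assuming `gen` and `rep` are the generation and reproduction procedures of some fuzzy extractor (stubbed out here as noise-free placeholders; the function and database names are mine):

```python
import hashlib
import hmac
import secrets

def gen(w):
    """Placeholder: a real fuzzy extractor derives (R, P) from fuzzy reading w."""
    p = secrets.token_bytes(16)
    r = hashlib.sha256(p + w).digest()
    return r, p

def rep(w_prime, p):
    """Placeholder: a real reproduction procedure error-corrects w' using P."""
    return hashlib.sha256(p + w_prime).digest()

db = {}  # server-side storage: device id -> (R, P)

def enroll(device_id, w):
    r, p = gen(w)
    db[device_id] = (r, p)  # both R and P stored at registration time

def authenticate(device_id, w_prime):
    r, p = db[device_id]
    r_prime = rep(w_prime, p)          # reproduce R' from the fresh reading and P
    return hmac.compare_digest(r, r_prime)  # constant-time comparison of R == R'
```

With the noise-free placeholders, authentication succeeds exactly when the same reading is presented; a real deployment would swap in an actual fuzzy extractor so that readings close to the enrolled $w$ also succeed.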

How you protect $R$ and $P$ from the generation phase is up to you. It also depends on how paranoid you are, how valuable the service is, your threat model, etc. For some, simply storing them in a database that (hopefully) only they have read access to is fine. Others might want to encrypt and sign the values.

It should be noted that $P$ is a public value. It does not need to be kept secret for the system to be secure. As you noted, however, if someone tampers with $P$, they could trick the system into authenticating the wrong party. Equivalently, if I change the hash stored in a database with login credentials, I can now become that user.

Active Attacker Update

Things become tricky when you allow your adversary to modify $P$. AFAIK, there are no guarantees that the attacker cannot modify $P$ in a devastating way (e.g., to publicly reveal $R$), and I am not aware of any PUF research that mitigates the problem.

Fortunately, PUFs and biometrics are very similar:

  1. Both are noisy sources
  2. Both require fuzzy extraction to handle the noise

Given that, hopefully the following will at least help solve the problem.

In "Robust and Reusable Fuzzy Extractors" by Boyen in the book "Security with Noisy Data", Boyen tackles a similar problem. From the chapter:

Unfortunately, ordinary fuzzy extractors do not address the issue of an active adversary that can modify $P$ maliciously, either on the storage server or while in transit to the user.

I don't fully understand the details of the work yet, so I don't feel it wise to try and describe the algorithms and protocols.

NOTE: It looks like much of that chapter was taken from a paper by Boyen and some of his colleagues titled "Secure Remote Authentication using Biometric Data". I have not read the paper to confirm that all the detail you might be looking for is there, however.

mikeazo
2

You can also think about PUFs in terms of challenges/responses. If I want to authenticate you with a PUF, I need to be in possession of it first. I make a list of challenges and determine, for each, the reading from the PUF for that challenge. The reading won't necessarily be a compact, high-entropy string. So to distil the reading down to a high-entropy value, I use an extractor.

Extractors are fine if the readings are consistent. However, with most PUFs the readings will be noisy and moderately inconsistent. The $P$ value is sometimes called a secure sketch. All it does is help with error correction. The sketch is not specific to PUFs; it is part of a "fuzzy extraction" process that can be applied to a noisy reading from any source.

For each challenge, I record the value extracted from the reading as the correct response. I then send you the PUF. When you want to authenticate, I send a challenge that I haven't used before, you get a reading off the PUF, use the sketch and extractor to derive the response, and send the response. If it matches, I scratch the challenge/response pair off the list and authenticate you for that session.
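The one-time challenge/response bookkeeping might look like this (an illustrative sketch only: `puf_response` is a deterministic stand-in for the physical device, so the noise and the sketch/extraction step are omitted, and all names are mine):

```python
import hashlib
import secrets

def puf_response(puf_secret, challenge):
    """Stand-in for reading the physical PUF; real readings are noisy."""
    return hashlib.sha256(puf_secret + challenge).digest()

class Verifier:
    def __init__(self, puf_secret, n_challenges=100):
        # Enrollment: while still holding the PUF, build a table of
        # challenge/response pairs (CRPs) to use later, one per session.
        self.crps = {}
        for _ in range(n_challenges):
            c = secrets.token_bytes(16)
            self.crps[c] = puf_response(puf_secret, c)

    def issue_challenge(self):
        """Pick an unused challenge to send to the prover."""
        return next(iter(self.crps))

    def check(self, challenge, response):
        """Verify the response; the pair is scratched off even on failure."""
        expected = self.crps.pop(challenge, None)  # one-time use
        return expected is not None and secrets.compare_digest(expected, response)
```

Scratching the pair off on use is what makes a replayed response worthless: the second `check` on the same challenge fails regardless of the response.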

The sketches can be made public. They leak no useful information to an adversary trying to guess the correct response. The adversary will have just as much luck guessing the response with or without the sketch.

There is no threat of the adversary changing the sketch server-side: I have already used the sketch to generate the responses, and I am done with it.

An adversary might change the sketch client-side. This would cause a denial-of-service: a legitimate user will no longer be able to authenticate. That is an accepted limitation of the threat model.

PulpSpy