
This is a more specific version of this other related question of mine. Using, for example, the notation of (Renner 2006), the min- and max-entropies of a source $X$ with probability distribution $P_X$ are defined as $$H_{\rm max}(X) \equiv \log|\{x : \,\, P_X(x)>0\}| = \log|\operatorname{supp}(P_X)|, \\ H_{\rm min}(X) \equiv \min_x \log\left(\frac{1}{P_X(x)}\right) = -\log \max_x P_X(x).$$
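For concreteness, here is a minimal Python sketch of the two quantities above, evaluated on a made-up pmf P (the numbers are only illustrative):

```python
import numpy as np

# A made-up pmf P_X for illustration; any normalized non-negative vector works.
P = np.array([0.5, 0.25, 0.125, 0.125, 0.0])

# H_max: log of the support size, i.e. the number of outcomes with P_X(x) > 0.
H_max = np.log2(np.count_nonzero(P))

# H_min: negative log of the largest probability.
H_min = -np.log2(P.max())

print(H_max)  # log2(4) = 2.0
print(H_min)  # -log2(0.5) = 1.0
```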

As mentioned in the comments of the linked post, $H_{\rm max}(X)$ can be interpreted as the optimal bound for (zero-error) compressibility in the single-shot regime.
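For example, since only the support matters for zero-error encoding of a single draw, $\lceil H_{\rm max}(X)\rceil$ bits always suffice: just label the elements of $\operatorname{supp}(P_X)$ with fixed-length binary strings. A rough sketch along these lines (again with an illustrative pmf):

```python
import math

# Made-up pmf over symbols; only the support matters for zero-error coding.
P = {"a": 0.5, "b": 0.25, "c": 0.125, "d": 0.125, "e": 0.0}

support = [x for x, p in P.items() if p > 0]
code_len = math.ceil(math.log2(len(support)))  # ceil(H_max(X)) bits

# Assign each supported symbol a fixed-length codeword of that many bits.
encode = {x: format(i, f"0{code_len}b") for i, x in enumerate(support)}
decode = {w: x for x, w in encode.items()}

print(code_len)             # 2, since |supp(P_X)| = 4
print(encode["c"])          # '10'
print(decode[encode["c"]])  # 'c', lossless on every outcome in the support
```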

Is there any similar kind of operational interpretation for the min-entropy $H_{\rm min}(X)$, be it in terms of single-shot compressibility or something else? I haven't found anything like this stated directly in the relevant literature, but I might have missed it.

glS

1 Answer

Well, $P_{\max}=\max_{x} P(x)$ is the probability that an optimal guesser, given a single guess at the discrete random variable with pmf $(P(x))_{x \in A}$, succeeds: the best strategy is simply to guess the most probable outcome. The negative log, $H_{\rm min}(X) = -\log P_{\max}$, measures the number of bits of information obtained in that scenario.
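As a quick numerical sanity check, here is a small Monte Carlo sketch (Python, with an arbitrary illustrative pmf) of the guess-the-most-likely-outcome strategy:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative pmf; the optimal single guess is simply the most likely outcome.
P = np.array([0.5, 0.25, 0.125, 0.125])
best_guess = int(P.argmax())

# Monte Carlo estimate of the single-guess success probability.
samples = rng.choice(len(P), size=100_000, p=P)
p_guess = np.mean(samples == best_guess)

print(p_guess)            # ~0.5 = P_max
print(-np.log2(P.max()))  # H_min(X) = 1.0 bit
```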

This is classical; see also Wikipedia:

Claude Shannon's definition of self-information was chosen to meet several axioms:

  • An event with probability 100% is perfectly unsurprising and yields no information.
  • The less probable an event is, the more surprising it is and the more information it yields.
  • If two independent events are measured separately, the total amount of information is the sum of the self-informations of the individual events.

It can be shown that there is a unique function of probability meeting these three axioms (up to the base of the logarithm, i.e. up to a positive multiplicative constant), namely $\log(1/P(x))$.
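For instance, for two independent events with probabilities $p$ and $q$, the joint event has probability $pq$ and $$\log\frac{1}{pq} = \log\frac{1}{p} + \log\frac{1}{q},$$ so the self-informations add, while a certain event ($p=1$) carries $\log 1 = 0$ bits, consistent with the first axiom.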

kodlu
  • that makes sense, thanks. Unfortunately, the wikipedia page is a bit messy right now, confusing quantum and classical results. I guess what I was mostly looking for is some sort of unifying way to interpret $H_{\rm min}$ and $H_{\rm max}$. As of now, $H_{\rm max}$ tells you about compressibility, while $H_{\rm min}$ about guessing the input. Both seem to be trivially optimal bounds though, so maybe this is as much as we can say – glS Sep 03 '22 at 07:18
  • Yes, they are trivially optimal, as you point out. You may find a paper by Merhav interesting; it examines bounds on Shannon entropy for a fixed $p_{\max}$. I cannot find the reference right now, unfortunately. It was published in IEEE Transactions on Information Theory, probably in the 90s or early 00s. – kodlu Sep 03 '22 at 12:03
  • This is it: "Relations Between Entropy and Error Probability", Feder and Merhav, IEEE Transactions on Information Theory, 1994 (https://ieeexplore.ieee.org/document/272494). There are PDFs freely available elsewhere as well. – kodlu Sep 03 '22 at 12:16