Following, for example, the notation of (Renner 2006), the min- and max-entropies of a source $X$ with probability distribution $P_X$ are defined as $$H_{\rm max}(X) \equiv \log|\{x : \,\, P_X(x)>0\}| = \log|\operatorname{supp}(P_X)|, \\ H_{\rm min}(X) \equiv \min_x \log\left(\frac{1}{P_X(x)}\right) = -\log \max_x P_X(x).$$ I would guess that these definitions originally come from taking the Rényi entropies in the limits $\alpha\to0$ and $\alpha\to\infty$, respectively. However, I wonder: is there any other reason to use this definition of $H_{\rm max}$, rather than $$\tilde H_{\rm max}(X) \equiv \max_x \log\left(\frac{1}{P_X(x)}\right) = -\log \min_x P_X(x),$$ where the minimum is taken over the support of $P_X$? Such a definition is clearly closer in spirit to $H_{\rm min}$, and arguably makes the name "max-entropy" more intuitive. It also still satisfies $H_{\rm min}(X)\le H(X)\le \tilde H_{\rm max}(X)$.
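To make the difference concrete, here is a minimal numerical sketch (Python with NumPy; the example distribution is my own arbitrary choice) computing all four quantities, in bits, for a distribution with one very rare outcome:

```python
import numpy as np

# Example distribution with one very unlikely outcome (values chosen arbitrarily).
p = np.array([0.50, 0.30, 0.19, 0.01])

H_shannon = -np.sum(p * np.log2(p))       # Shannon entropy H(X) ~ 1.54 bits
H_min     = -np.log2(p.max())             # min-entropy: -log max_x P_X(x) = 1 bit
H_max     = np.log2(np.count_nonzero(p))  # Renner's max-entropy: log|supp(P_X)| = 2 bits
H_max_alt = -np.log2(p[p > 0].min())      # proposed alternative: -log min_x P_X(x) ~ 6.64 bits

print(H_min, H_shannon, H_max, H_max_alt)
# Both orderings hold: H_min <= H <= H_max and H_min <= H <= H_max_alt.
```

Note how $\tilde H_{\rm max}$ here is determined entirely by the single rarest outcome, while $H_{\rm max}$ depends only on the size of the support; this difference in behavior is what prompts the question.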
Is there a good reason to prefer $H_{\rm max}$ over $\tilde H_{\rm max}$ in the context of single-shot entropies? Would the standard results still hold for this modified quantity, or is there some other obvious reason not to use it?