The following inverts the statements made for the maximum entropy principle case in order to posit a pseudo "minimum entropy principle" case that is simply the polar opposite of the former.
A random variable whose distribution assigns probability $1$ to a single certain outcome is said to have a Shannon entropy of $0$, because its outcome is fully certain.
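As a quick check, using the standard discrete Shannon entropy definition and the usual convention $0 \log 0 = 0$: if one outcome $x^*$ has $p(x^*) = 1$ and every other outcome has probability $0$, then

$$
H(X) \;=\; -\sum_{x} p(x)\log p(x) \;=\; -\,1\cdot\log 1 \;=\; 0.
$$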
In turn, this zero-entropy variable is said to have maximum information in the sense that unexpected outcomes (all outcomes other than the single certain outcome) possess high information a priori. Distributions with such low entropy are undesirable because they encode strong a priori assumptions about the variable's outcomes. In this case, we have strongly assumed that the variable will only ever generate the certain outcome and never deviate from it, perhaps based purely on historical observations of the variable.
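One way to make the a priori claim concrete is through the self-information (surprisal) of an individual outcome, $I(x) = -\log p(x)$. Under the degenerate distribution above, the certain outcome carries no information, while any unexpected outcome becomes arbitrarily informative as its assigned probability shrinks toward zero:

$$
I(x^*) \;=\; -\log 1 \;=\; 0, \qquad I(x) \;=\; -\log p(x) \;\to\; \infty \ \text{ as } p(x) \to 0 .
$$

This is precisely the sense in which a zero-entropy distribution assigns maximal information to surprises.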
But what does a zero/low-entropy variable mean ex post, after the empirical data outcomes, unexpected or not, are actually realized? Does high information still exist in the zero-entropy distribution? Has its information content dissipated somehow? Has its entropy changed from $0$? How, and why?