Let ${0 < p_0 < p_1 <\infty}$ and ${f \in L^{p_0}(X) \cap L^{p_1}(X)}$. Then ${f \in L^p(X)}$ for all ${p_0 \leq p \leq p_1}$, and furthermore we have $$ \displaystyle \| f\|_{L^{p_\theta}(X)} \leq \|f\|_{L^{p_0}(X)}^{1-\theta} \|f\|_{L^{p_1}(X)}^{\theta} $$ for all ${0 \leq \theta \leq 1}$, where the exponent ${p_\theta}$ is defined by ${1/p_\theta := (1-\theta)/p_0 + \theta/p_1}$.
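As a quick numerical sanity check of the inequality, here is a sketch on a finite measure space (counting measure on a handful of points — an illustrative assumption; the statement holds on any measure space):

```python
import numpy as np

# Sample values of |f| on a 10-point space with counting measure
rng = np.random.default_rng(0)
f = rng.random(10) * 5

p0, p1, theta = 1.5, 4.0, 0.3
# Interpolated exponent: 1/p_theta = (1-theta)/p0 + theta/p1
p_theta = 1.0 / ((1 - theta) / p0 + theta / p1)

def lp_norm(values, p):
    # L^p norm with respect to counting measure
    return np.sum(np.abs(values) ** p) ** (1.0 / p)

lhs = lp_norm(f, p_theta)
rhs = lp_norm(f, p0) ** (1 - theta) * lp_norm(f, p1) ** theta
print(lhs <= rhs)  # the interpolation inequality should hold
```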
One can of course prove this statement by Hölder's inequality. Here is another approach suggested in one of Terry Tao's real analysis notes. Consider the log-convexity inequality:
$$ \displaystyle |f(x)|^{p_\theta} \leq (1-\alpha) |f(x)|^{p_0} + \alpha |f(x)|^{p_1} $$ for all ${x}$, where ${0 < \alpha < 1}$ is the quantity such that ${p_\theta = (1-\alpha) p_0 + \alpha p_1}$. By integrating this inequality in ${x}$, one already obtains the claim in the normalised case when $${\|f\|_{L^{p_0}(X)} = \|f\|_{L^{p_1}(X)} = 1}. $$
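Spelling out the integration step in the normalised case (a sketch; here ${\mu}$ denotes the underlying measure on ${X}$):
$$ \int_X |f|^{p_\theta}\, d\mu \leq (1-\alpha) \int_X |f|^{p_0}\, d\mu + \alpha \int_X |f|^{p_1}\, d\mu = (1-\alpha) \cdot 1 + \alpha \cdot 1 = 1, $$
so that ${\|f\|_{L^{p_\theta}(X)} \leq 1 = \|f\|_{L^{p_0}(X)}^{1-\theta} \|f\|_{L^{p_1}(X)}^{\theta}}$, which is the claim in this case.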
Here is my question:
It is said that to obtain the general case, one can multiply the function ${f}$ and the measure ${\mu}$ by appropriately chosen constants so as to reduce to the above normalisation. Could anyone elaborate on how this normalisation is carried out?