
Assume $f_\theta$ is the pdf of a continuous random variable. If we can interchange integration and differentiation, then $$ \begin{align} \mathbb{E}_\theta \left(\frac{\partial}{\partial \theta} \log f_\theta\right) &= \int_{\mathbb{R}^n} \left(\frac{\partial}{\partial \theta} \log f_\theta(x)\right) f_\theta(x)\, dx \\ &= \int_{\mathbb{R}^n} \frac{\partial}{\partial \theta} f_\theta(x)\, dx \\ &= \frac{\partial}{\partial \theta} \int_{\mathbb{R}^n} f_\theta(x)\, dx \\ &= \frac{\partial}{\partial \theta} 1 \\ &= 0, \end{align} $$ where the second equality uses $\frac{\partial}{\partial \theta} \log f_\theta = \frac{1}{f_\theta}\frac{\partial}{\partial \theta} f_\theta$.

But why can we do that? I have tried to use the dominated convergence theorem, but I cannot find a dominating function for $\frac{\partial}{\partial \theta} f_\theta$.
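
To illustrate what a dominating function could look like in a special case (a sketch only, assuming $f_\theta$ is the $N(\theta, 1)$ density and $\theta$ is restricted to a compact interval $[a, b]$, with $c = \max(|a|, |b|)$): $$ \begin{align} \left|\frac{\partial}{\partial \theta} f_\theta(x)\right| &= |x-\theta|\,\frac{1}{\sqrt{2\pi}} e^{-(x-\theta)^2/2} \\ &\le \frac{|x|+c}{\sqrt{2\pi}}\left(e^{-x^2/8} + \mathbf{1}_{\{|x| < 2c\}}\right) =: g(x), \end{align} $$ since $|x| \ge 2c$ implies $(x-\theta)^2 \ge (|x|-c)^2 \ge x^2/4$. Here $g$ is integrable and does not depend on $\theta$, so dominated convergence (applied to the difference quotients) justifies the interchange for this particular family. But the construction seems to depend heavily on the specific form of $f_\theta$.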

  • You will need to specify $f_\theta$ if you're going to look for a bounding function. – Aaron Hendrickson Mar 18 '23 at 23:56
  • If you see this in deriving the maximum likelihood estimates, then it is usually assumed that integration and differentiation of $f_\theta$ can be interchanged. This is one of the regularity conditions for finding MLEs. – ムータンーオ Mar 19 '23 at 04:23
  • @ムータンーオ So is this assumption true for all probability distributions? (side note: this is one statement involved in the Cramér-Rao lower bound) – Tianyi Pan Mar 19 '23 at 18:50
  • This is not true. A simple example is $X \sim \mathrm{Unif}(0, \theta)$, for which you can compute both sides and see that they differ. There are certainly more distributions of this kind, and the corresponding estimators are called superefficient. – ムータンーオ Mar 20 '23 at 04:09
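
Working out the comment's counterexample (a sketch, ignoring the measure-zero boundary point $x = \theta$): for $X \sim \mathrm{Unif}(0, \theta)$ with density $f_\theta(x) = \frac{1}{\theta}\mathbf{1}_{(0,\theta)}(x)$, $$ \begin{align} \mathbb{E}_\theta\left(\frac{\partial}{\partial \theta} \log f_\theta\right) &= \int_0^\theta \left(-\frac{1}{\theta}\right)\frac{1}{\theta}\, dx = -\frac{1}{\theta}, \\ \frac{\partial}{\partial \theta} \int_0^\theta f_\theta(x)\, dx &= \frac{\partial}{\partial \theta} 1 = 0, \end{align} $$ so the two sides disagree and the interchange fails here; the obstruction is that the support of $f_\theta$ depends on $\theta$.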
