I want to find an estimator of the probability of success of an independently repeated Bernoulli experiment, given that we observe exactly $k$ failures before the $r$-th success.
The probability of observing $k$ failures before the $r$-th success is given by the negative binomial distribution:
$$P_p[\{k\}] = {k + r - 1 \choose k}(1-p)^kp^r$$
This yields the log-likelihood function for the observed number of failures $k$:
$$l_k(p) = \log({k + r - 1 \choose k}) + k\log(1-p) + r\log(p)$$
with derivative
$$l_k'(p) = \frac{r}{p} - \frac{k}{1-p}$$
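As a quick sanity check on this derivative, here is a small SymPy sketch (the binomial coefficient is dropped, since it does not depend on $p$ and so vanishes under differentiation):

```python
import sympy as sp

p, k, r = sp.symbols("p k r", positive=True)

# Log-likelihood up to the additive log-binomial term, which is constant in p.
l = k * sp.log(1 - p) + r * sp.log(p)

print(sp.diff(l, p))  # -k/(1 - p) + r/p, matching the derivative above
```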
Setting the derivative to zero gives $r(1-p) = kp$, so it vanishes at $\hat p = \frac{r}{r+k}$. To show that $\hat p$ really is an MLE for $p$, we need to show that it is a maximum of $l_k$. But evaluating the second derivative at this point is pretty messy. Is there an easier way to show that this is in fact an MLE for $p$?
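For what it's worth, a quick numerical check agrees with the critical point. SciPy's `nbinom.pmf(k, r, p)` uses exactly the parameterization above, ${k+r-1 \choose k}(1-p)^k p^r$; the values $r = 5$, $k = 12$ are arbitrary example inputs:

```python
from scipy.stats import nbinom
from scipy.optimize import minimize_scalar

r, k = 5, 12          # example values: 12 failures before the 5th success
p_hat = r / (r + k)   # candidate MLE, here 5/17 ≈ 0.294

# Negative log-likelihood of observing k failures before the r-th success.
neg_log_lik = lambda p: -nbinom.logpmf(k, r, p)

# Numerically maximize the likelihood over p in (0, 1).
res = minimize_scalar(neg_log_lik, bounds=(1e-9, 1 - 1e-9), method="bounded")
print(p_hat, res.x)   # both ≈ 0.2941: the critical point is where the maximum lands
```

This is of course not a proof that $\hat p$ is a maximum, which is exactly what I'm asking about.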