Questions tagged [log-likelihood]

For questions involving the natural logarithm of a likelihood function.

For many applications, the natural logarithm of the likelihood function, called the log-likelihood, is more convenient to work with. Because the logarithm is a monotonically increasing function, the logarithm of a function achieves its maximum value at the same points as the function itself, and hence the log-likelihood can be used in place of the likelihood in maximum likelihood estimation and related techniques. Finding the maximum of a function often involves taking its derivative and solving for the parameter being maximized, and this is often easier when the function being maximized is a log-likelihood rather than the original likelihood function.
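As a brief illustration (a standard identity, not tied to any particular question below): since $\log$ is strictly increasing, the maximizer is unchanged,
$$\hat{\theta} \;=\; \operatorname*{arg\,max}_{\theta}\, L(\theta \mid x) \;=\; \operatorname*{arg\,max}_{\theta}\, \log L(\theta \mid x),$$
so maximizing the log-likelihood $\ell(\theta \mid x) = \log L(\theta \mid x)$ yields the same estimate.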

In particular, many likelihood functions describe the parameters behind a collection of statistically independent observations. In such a situation, the likelihood function factors into a product of individual likelihood functions, and the logarithm of this product is a sum of individual logarithms; the derivative of a sum of terms is usually easier to compute than the derivative of a product. In addition, several common distributions have likelihood functions that contain products of factors involving exponentiation, and the logarithm of such a function is a sum of simpler terms, again easier to differentiate than the original function.
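As a concrete sketch of both points (a standard textbook derivation, using the Poisson model that also appears in the questions below): for independent observations $x_1, \ldots, x_n$ from a $\mathrm{Poisson}(\lambda)$ distribution,
$$L(\lambda) \;=\; \prod_{i=1}^{n} \frac{\lambda^{x_i} e^{-\lambda}}{x_i!}, \qquad \ell(\lambda) \;=\; \log L(\lambda) \;=\; \Big(\sum_{i=1}^{n} x_i\Big) \log \lambda \;-\; n\lambda \;-\; \sum_{i=1}^{n} \log(x_i!).$$
The term $\sum_i \log(x_i!)$ does not involve $\lambda$, so it drops out on differentiation, and setting $\ell'(\lambda) = \frac{1}{\lambda}\sum_{i=1}^{n} x_i - n = 0$ gives the familiar MLE $\hat{\lambda} = \bar{x}$; differentiating the product form of $L(\lambda)$ directly would be considerably messier.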

248 questions
15 votes · 2 answers

How to derive the likelihood and log-likelihood of the Poisson distribution

As the title suggests, I'm really struggling to derive the likelihood function of the Poisson distribution (mostly down to the fact I'm having a hard time understanding the concept of likelihood at all). I've watched a couple of videos and understand…
12 votes · 2 answers

MLE of a discrete random variable

For some reason I am having difficulty understanding how to calculate the MLE of a discrete r.v. The pmf is: $$p(k;\theta) = \left\{\begin{array}{cl} \dfrac{1-\theta}{3}&\text{if } k=0\\[5pt] \dfrac{1}{3}&\text{if }…
xiong • 163
4 votes · 1 answer

Quadratic Approximation for Log-Likelihood Ratio Processes

I'm trying to understand why this quadratic expression can approximate the log-likelihood ratio, and how it is derived: $$\log(\mathrm{LR})=\frac{1}{2}\left(\frac{\mathrm{MLE}-\theta}{S}\right)^2$$ Is this approximated using a Taylor series or…
Ela • 43
4 votes · 0 answers

Convexity of a Log-Likelihood Function

Goal: I would like to prove that the negative log-likelihood function of a sample drawn from a normal distribution is convex. Below is a figure showing an example of such a function: The motivation for this question is detailed at the end of the post. Sketching…
4 votes · 2 answers

Log-likelihood function and MLE for a binomial sample

Let $X_1,X_2,\ldots,X_n$ be a random sample with $X_i \sim \mathrm{Binomial}(m,p)$ for $i=1,\ldots,n$ and $m=1,2,3,\ldots$, and let $p\in (0,1)$. We assume $m$ is known and we are given the data $x_1,\ldots,x_n\in\{0,\ldots,m\}$. Write up the log-likelihood function…
4 votes · 1 answer

Find the MLE of a GLM

(Note: this is not an assignment, but revision of a topic from Cambridge past exam papers.) I have been attempting the question below, and I am struggling with part (b). For (a) it is obvious that the pmf is the same as the Bernoulli and…
3 votes · 2 answers

Why is Maximizing Marginal Log-Likelihood Difficult?

I was reading this tutorial on expectation maximization, and in section 4 the author mentions that it is difficult (impossible?) to differentiate the marginal log-likelihood. I am referencing section 4, where it says: "We note that there is a…
3 votes · 1 answer

Find the conditional MLE of an AR time series

I was given a model $r_t = \phi_0 + \phi_2 r_{t-2} + \epsilon_t$ with $\epsilon_t \sim N(0,\sigma^2)$ and have to derive the likelihood of $(r_3, r_4, \ldots, r_T)$ conditional on $(r_1, r_2)$ and find the $\phi_0$ and $\phi_2$ that maximize the…
3 votes · 0 answers

For this family of transformations, can I find a density function?

Let's consider the family of transformations given by $$g_a(Y)=\begin{cases} \frac{e^{aY}-1}{a} & \text{ for } a\neq 0 \\ Y & \text{ for } a=0 \end{cases}$$ for $Y\in\mathbb{R}$. Analogous to the estimation of the Box-Cox parameter $\lambda$, the…
3 votes · 1 answer

When do sup and a function commute?

For real-valued functions $f,g$, with $f$ weakly increasing and continuous, and $A\subseteq \mathbb{R}$, can we say \begin{align*} \sup_{x\in A}f(g(x))=f\left(\sup_{x\in A}g(x)\right)? \end{align*} I ask because I notice on Wikipedia that the likelihood ratio test statistic is…
3 votes · 1 answer

Finding the MLE for a given density $f(x, \alpha, \beta)$

I'm having trouble with the following example problem on MLE: Let $X = (X_1, \ldots, X_n)$ be a sample of i.i.d. r.v.s with density: $$ g(x) = \frac{\alpha}{x^2}\mathbb{1}_{[\beta, \infty)}(x) $$ where $\beta > 0$. Write $\alpha$ in terms of $\beta$ to…
blahblah • 2,198
3 votes · 0 answers

Why can we treat Cox's partial likelihood as a full likelihood?

I am doing some self-study on Cox regression, and am trying to figure out how we can derive the partial likelihood for the Cox model from the full likelihood. Generally, I know that to get a partial likelihood we can just use proportionality, but…
3 votes · 1 answer

Maximum Likelihood Estimation - demonstrating the equality between the second derivative of the log-likelihood and the product of first derivatives

I am facing a proof problem about maximum likelihood estimation, summarized in this image: I don't know how to prove the following equality between (1) $$\begin{aligned} \operatorname{var}(\hat{\theta})…
3 votes · 1 answer

Example of "The eigenvalues of the data covariance matrix $\Phi^T\Phi$ measure the curvature of the likelihood function"

I am reading PRML, Chapter 3.5.3, screenshot attached. I can follow the derivation and the maths, but I find it hard to understand the meaning of "The eigenvalues of the data covariance matrix $\Phi^T\Phi$ measure the curvature of the likelihood function." Can…
3 votes · 1 answer

How to express a log-likelihood in terms of another log-likelihood?

I am trying to find out how to decompose the conditional log-likelihood of a function into the conditional log-likelihood of its argument plus some remainder terms, i.e., in short: $$\mathcal{L}_E(\hat{E};\sigma) = \mathcal{L}_A(F^{-1}(\hat{E});\sigma) +…