
Let $\{x_i\}$ be identically distributed continuous random variables (not independent in general; say, a stationary AR(1) process).

Define a function $f_b$ depending on a parameter $b \geq 0$, and let $m(b)=\mathbf{E}f_b(x_1)$. It is known that $m(b)$ is a nondecreasing, differentiable function with $m(b) \to \infty$ as $b\to\infty$. I'd like to find a solution of the equation $$ m(b) = 0. $$ It is known that a solution exists.

But the problem is that in general the pdf of $x_i$ is unknown. What I have is a sample of size $n$ from $\{x_i\}$, so I've decided to use the method of moments and solve $$ \frac{1}{n}\sum_{i=1}^n f_b(x_i)=0. $$
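For concreteness, here is a minimal sketch of the approach in Python (the choice $f_b(x) = b - x$, which satisfies the assumptions and has root $b^* = \mathbf{E}x_1$, and the Gaussian AR(1) are hypothetical illustrations, not my actual model):

```python
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(0)

# Simulate a stationary Gaussian AR(1): x_t = rho * x_{t-1} + eps_t
# (hypothetical example of a dependent, identically distributed sequence)
rho, n = 0.5, 10_000
x = np.empty(n)
x[0] = rng.normal() / np.sqrt(1 - rho**2)  # draw from the stationary law
for t in range(1, n):
    x[t] = rho * x[t - 1] + rng.normal()

# Hypothetical f_b(x) = b - x: m(b) = b - E[x_1] is increasing in b,
# tends to infinity, and has root b* = E[x_1] = 0 here.
def g(b, x):
    """Empirical moment function (1/n) * sum_i f_b(x_i)."""
    return np.mean(b - x)

# Solve the sample moment equation g(b) = 0 with a bracketing root-finder.
b_hat = brentq(g, -10.0, 10.0, args=(x,))
print(b_hat)  # close to the true root 0 for large n
```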

Does the method of moments give a consistent solution?

Thanks in advance for your ideas.

Dan
  • Under suitable conditions, the sample mean will converge to the expected value. But estimation is a method to find what already exists but is currently unknown, not to determine the required value of the unknown so that a particular result holds. – Alecos Papadopoulos Jul 27 '14 at 19:33
  • Hi: I don't follow your question but don't use an AR(1) for an example of a continuous dependent RV because $x_t$ is not continuous. – mark leeds Apr 05 '24 at 04:41

1 Answer


First, if the variables are dependent you need to add some condition that "bounds" the dependence; otherwise you have a trivial counterexample: when $x_i = x_1$ for all $i$, the sample average obviously does not converge. An example of such a condition, probably reasonable and pertinent for a non-degenerate AR(1) process, is $\mathrm{cov}(x_i,x_j) \leq \alpha^{|i-j|}$ for some $0 \leq \alpha < 1$.
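To see the contrast concretely, here is a small simulation sketch (the Gaussian innovations and the AR coefficient are assumptions made only for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
n, rho = 100_000, 0.8

# Degenerate case x_i = x_1 for all i: the sample mean equals x_1,
# so it never converges to E[x_1] = 0.
x1 = rng.normal()
print(np.mean(np.full(n, x1)))  # equals x1, not 0

# AR(1) case: cov(x_i, x_j) decays geometrically in |i - j|, an ergodic
# LLN applies, and the sample mean does converge to E[x_1] = 0.
x = np.empty(n)
x[0] = rng.normal() / np.sqrt(1 - rho**2)  # stationary start
for t in range(1, n):
    x[t] = rho * x[t - 1] + rng.normal()
print(np.mean(x))  # close to 0
```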

Given that, if we can apply the law of large numbers, then we know that the random function ${\bf g}(b)=\frac{1}{n}\sum_{i=1}^n f_b(x_i)$ converges (for each fixed $b$) to $m(b)= \mathbb{E}[f_b(x_1)]$.

Does this imply that the root of ${\bf g}(b)=0$ converges to the root of $m(b)=0$? That's a trickier problem, and frankly I'm not sure how to attack it.
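Empirically, at least, the sample root does seem to track the population root in a simple case. Here is a quick sketch (reusing the hypothetical $f_b(x) = b - x$ and Gaussian AR(1) from above, so the true root is $b^* = 0$; this is an illustration, not a proof):

```python
import numpy as np
from scipy.optimize import brentq

def simulate_ar1(n, rho, rng):
    """Stationary Gaussian AR(1) path with unit innovation variance."""
    x = np.empty(n)
    x[0] = rng.normal() / np.sqrt(1 - rho**2)
    for t in range(1, n):
        x[t] = rho * x[t - 1] + rng.normal()
    return x

rng = np.random.default_rng(42)
for n in (100, 1_000, 10_000, 100_000):
    x = simulate_ar1(n, rho=0.8, rng=rng)
    # Root of the empirical moment function g(b) = mean(b - x)
    b_hat = brentq(lambda b: np.mean(b - x), -10.0, 10.0)
    print(n, b_hat)  # the root appears to approach 0 as n grows
```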

leonbloy