
Let $X_1, \cdots, X_n$ be iid from a uniform distribution $U[\theta-\frac{1}{2}, \theta+\frac{1}{2}]$ with $\theta \in \mathbb{R}$ unknown. Show that the statistic $T(\mathbf{X}) = (X_{(1)}, X_{(n)})$ is minimal sufficient but not complete.

I am having trouble proving that it is not complete. My idea is as follows: if I can construct two functions of $T(\mathbf{X})$, say $f(X_{(1)}, X_{(n)})$ and $g(X_{(1)}, X_{(n)})$ with $f \neq g$, and show that both are unbiased estimators of $\theta$, then $T(\mathbf{X}) = (X_{(1)}, X_{(n)})$ cannot possibly be complete. Is this the right approach? I am stuck because I am unsure (1) how to find the expectations/distributions of the order statistics, and (2) how to construct $f$ and $g$.

Any help would be appreciated!


2 Answers


HINT

For $g(X_{(1)},X_{(n)}) = X_{(n)} - X_{(1)},$ can you show $$ E[g(X_{(1)},X_{(n)})] $$ is independent of $\theta?$ Don't calculate it, just think about it.

  • Thanks for the hint, but I am not sure how to proceed. For example, how does the independence of $\theta$ play a role here? Could you finish your argument based on your hint? – elbarto Sep 22 '17 at 02:22
  • What's the definition of complete? – spaceisdarkgreen Sep 22 '17 at 02:35
  • Let $f(t|\theta)$ be a family of pdfs/pmfs for a statistic $T(\mathbf{X})$. The statistic $T(\mathbf{X})$ is a complete statistic if $E(g(T)) = 0$ for all $\theta$ implies $g(T) = 0$ with probability $1$ for all $\theta$.

    Using this definition, I can see where you are heading with your hint: if we can show that $E[g(X_{(1)}, X_{(n)})]=0$ for all $\theta$ but $P(g(X_{(1)}, X_{(n)}) = 0) \neq 1$ for some $\theta$, then we are done. However, I am not sure how to complete the argument for this.

    – elbarto Sep 22 '17 at 02:40
  • Let $a = E(g)$, which is independent of $\theta.$ Then $E(g-a) = 0$ for all $\theta$, but it's easy to see $P(g-a=0) \ne 1.$ – spaceisdarkgreen Sep 22 '17 at 02:43
  • Why is $E(g)$ a constant $a$? And, how do we show that there exists a $\theta$ such that $P(g-a=0) \neq 1$? – elbarto Sep 22 '17 at 02:49
  • "$E(g)$ is independent of $\theta$" is what my hint suggests you show, so given that, my comment tells you how to get to the definition of complete (which is what you were asking). For the second question: if $P(g-a = 0) = 1$, that means that $g$ is essentially not random, always taking the value $a$ (which is of course its expected value). Is that the case here? (Which $\theta$ you pick for this part doesn't matter at all.) – spaceisdarkgreen Sep 22 '17 at 02:56
  • If you were asking for a further hint on the first part, I can only suggest thinking about what the distribution looks like for different values of $\theta.$ I claim there's a simple argument for why a statistic like $X_{(n)}-X_{(1)}$ would always look statistically the same regardless of $\theta.$ By contrast, a statistic like $X_{(n)} + X_{(1)}$ would be affected by $\theta.$ – spaceisdarkgreen Sep 22 '17 at 03:00
  • Ok, I played around and it appears that for, say, $\theta = \frac{1}{2}$, the distribution of $X_{(n)} - X_{(1)}$ follows a beta distribution independent of $\theta$. However, how can I see the independence without doing all this calculation, and, more importantly, for a general $\theta$? – elbarto Sep 22 '17 at 03:46
  • $\theta$ is just a location parameter. It just translates the distribution along the real line. So the differences between the $X$'s don't care about it. If you shifted all $X$'s by theta, it just cancels out and all the differences remain the same. – spaceisdarkgreen Sep 22 '17 at 03:58
  • Ah, I see why it is independent of $\theta$ now, thank you. – elbarto Sep 22 '17 at 05:16
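The location-invariance argument in the comments can also be checked numerically. Below is a quick Monte Carlo sketch (not part of the original thread; the sample size $n = 5$ and the $\theta$ values are arbitrary choices): for $n$ iid draws from $U[\theta-\frac{1}{2}, \theta+\frac{1}{2}]$, the sample range $X_{(n)} - X_{(1)}$ has the same distribution for every $\theta$, with mean $\frac{n-1}{n+1}$ (the mean of a $\mathrm{Beta}(n-1, 2)$ distribution).

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 5, 200_000  # sample size and number of Monte Carlo replications

def range_samples(theta):
    """Draw `reps` samples of size n from U[theta - 1/2, theta + 1/2]
    and return the sample range X_(n) - X_(1) of each."""
    x = rng.uniform(theta - 0.5, theta + 0.5, size=(reps, n))
    return x.max(axis=1) - x.min(axis=1)

# The mean range should be close to (n-1)/(n+1) = 2/3 for every theta,
# since theta only translates the sample and cancels in the difference.
for theta in (-3.0, 0.0, 7.5):
    print(theta, range_samples(theta).mean())
```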

The trick with these kinds of problems is to apply Basu's theorem to get a contradiction. Note that $X_{(n)} - X_{(1)}$ is ancillary for $\theta$. If $(X_{(1)}, X_{(n)})$ were complete, then it would be independent of $X_{(n)} - X_{(1)}$. But this is a contradiction, as knowing $(X_{(1)}, X_{(n)})$ completely determines $X_{(n)} - X_{(1)}$.
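For reference, the counterexample from the other answer can also be written out explicitly (this derivation is not part of the original answer). Writing $X_i = \theta - \tfrac{1}{2} + U_i$ with $U_i \sim U[0,1]$ iid, the range satisfies $X_{(n)} - X_{(1)} = U_{(n)} - U_{(1)} \sim \mathrm{Beta}(n-1, 2)$, so

$$E_\theta\!\left[X_{(n)} - X_{(1)}\right] = \frac{n-1}{n+1} \quad \text{for every } \theta.$$

Hence $g(T) = X_{(n)} - X_{(1)} - \frac{n-1}{n+1}$ satisfies $E_\theta[g(T)] = 0$ for all $\theta$, yet $P_\theta(g(T) = 0) = 0 \neq 1$ since the range is continuous, so $T$ is not complete.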

  • Thanks, Basu's Theorem is a nice way indeed, and combined with @spaceisdarkgreen's answer on the independence from $\theta$, I fully understand it now. I wish I could accept both of your answers; however, given that yours is an "answer" while the other one is a "hint", I will accept yours. – elbarto Sep 22 '17 at 05:17