
For the uniform distribution $U[0,\theta]$, given a random sample of size $n$, I want to find the MP (most powerful) test for

$H_0$ : $\theta$ = $\theta_0$ vs $H_1$ : $\theta$ = $\theta_1$ ($\theta_1$ < $\theta_0$)

The likelihood ratio is $\frac{f(\mathbf{x};\theta_1)}{f(\mathbf{x};\theta_0)} = \frac{\theta_0^n}{\theta_1^n}\,\frac{I(X_{(n)} < \theta_1)}{I(X_{(n)} < \theta_0)}$; let's call this value $S$. Then

$S = \frac{\theta_0^n}{\theta_1^n}$ if $X_{(n)} < \theta_1$, and $S = 0$ if $\theta_1 \le X_{(n)} < \theta_0$.

By the Neyman–Pearson lemma, the test $\phi = 1$ if $S > c$, $\phi = \gamma$ if $S = c$, and $\phi = 0$ if $S < c$, for some nonnegative number $c$ and $\gamma \in [0,1]$ such that $E_{\theta_0}[\phi(X)] = \alpha$ for the given significance level $\alpha$, is the MP test.

So I tried to find $c$ by considering several cases.

  1. $c > \frac{\theta_0^n}{\theta_1^n}$: then $E_{\theta_0}[\phi(X)] = Pr_{\theta_0}(S > c) + \gamma\,Pr_{\theta_0}(S = c) = 0 < \alpha$, so this case is invalid.

  2. $0 < c < \frac{\theta_0^n}{\theta_1^n}$: then $Pr_{\theta_0}(S = c) = 0$, so $E_{\theta_0}[\phi(X)] = Pr_{\theta_0}(S > c) = Pr_{\theta_0}(X_{(n)} < \theta_1) = \frac{\theta_1^n}{\theta_0^n}$, which is in general not equal to $\alpha$. So this case is also invalid.

  3. When $c = 0$: $E_{\theta_0}[\phi(X)] = Pr_{\theta_0}(X_{(n)} < \theta_1) + \gamma\,Pr_{\theta_0}(X_{(n)} \ge \theta_1) = \frac{\theta_1^n}{\theta_0^n} + \gamma\left(1 - \frac{\theta_1^n}{\theta_0^n}\right) = \alpha$. So I can solve for $\gamma$; note that this gives $\gamma \in [0,1]$ only when $\alpha \ge \frac{\theta_1^n}{\theta_0^n}$.

  4. Finally, when $c = \frac{\theta_0^n}{\theta_1^n}$: $E_{\theta_0}[\phi(X)] = \gamma\,Pr_{\theta_0}(X_{(n)} < \theta_1) = \gamma\frac{\theta_1^n}{\theta_0^n} = \alpha$, so $\gamma = \alpha\frac{\theta_0^n}{\theta_1^n}$; this is a valid probability only when $\alpha \le \frac{\theta_1^n}{\theta_0^n}$. So in this case too, I can find $\gamma$.

So I have two candidates for $c$. I derived the MP test from each case, compared their powers under $H_1$, and found that the $c = 0$ test is more powerful. So I chose $c = 0$.
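As a sanity check (my own sketch, not part of the derivation), a short Monte Carlo simulation with illustrative values $\theta_0 = 1$, $\theta_1 = 0.8$, $n = 5$, $\alpha = 0.4$ confirms that the $c = 0$ randomized test has size $\alpha$ and power $1$ under $H_1$ (since $X_{(n)} < \theta_1$ almost surely when $\theta = \theta_1$):

```python
import numpy as np

# Illustrative (assumed) values: theta0 = 1, theta1 = 0.8, n = 5, alpha = 0.4.
# This requires alpha >= (theta1/theta0)^n so that gamma lies in [0, 1].
rng = np.random.default_rng(0)
theta0, theta1, n, alpha = 1.0, 0.8, 5, 0.4
p = (theta1 / theta0) ** n               # Pr_{theta0}(X_(n) < theta1)
gamma = (alpha - p) / (1 - p)            # randomization probability for c = 0

def phi(x_max):
    """Expected value of the randomized decision: 1 if X_(n) < theta1, else gamma."""
    return np.where(x_max < theta1, 1.0, gamma)

reps = 200_000
size  = phi(rng.uniform(0, theta0, (reps, n)).max(axis=1)).mean()
power = phi(rng.uniform(0, theta1, (reps, n)).max(axis=1)).mean()
print(size, power)   # size should be close to alpha; power equals 1
```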

Is this the right procedure? I'm not sure. Or is there a simpler way?

smw1991
  • https://stats.stackexchange.com/q/117844/119261, https://math.stackexchange.com/q/3024228/321264, https://math.stackexchange.com/q/1736322/321264. – StubbornAtom Aug 27 '20 at 18:06

2 Answers


It's been a while, but I thought I'd try to answer this question. Note that I write $X_{(n)}$ as $x_{n:n}$.


Denote the pair of hypotheses $H_0: \theta = \theta_0$ versus $H_1: \theta = \theta_1$ as $A$, where $\theta_1 < \theta_0$. We seek a MP level $\alpha$ test for $A$, which we shall call Test $A$. Denote the alternative pair of hypotheses $H_0^*: \theta = \theta_1 = \theta_0^*$ versus $H_1^*: \theta = \theta_0 = \theta_1^*$, where $\theta_1^* > \theta_0^*$, and the corresponding MP level $\alpha^*$ test as Test $B$.

To control for $\alpha$, we must control the Type I Error for Test $A$. However, this is equivalent to controlling the Type II error for Test $B$, $\beta^*$. That is, we must find $\alpha^*$ such that $\beta^* = \alpha$. To do so, we can set $Q(\theta_1^*) = 1 - \alpha$ and solve for $\alpha^*$. Working from [1], a level $\alpha^*$ MP test for Test $B$ has power \begin{equation} Q(\theta_1^*) = 1 - (1 - \alpha^*)(\theta_0^* / \theta_1^*)^n \end{equation}

Thus, \begin{equation*} 1 - (1 - \alpha^*)(\theta_0^* / \theta_1^*)^n = 1 - \alpha \implies \alpha^* = 1 - \alpha (\theta_0^* / \theta_1^*)^{-n} \end{equation*} Further, by the Neyman Pearson Lemma (Theorem 8.3.1 in [1]) and Example 8.3.4 in [1], we have for a level $\alpha^*$ MP Test $B$, \begin{equation} \mathcal{R} = \{ \boldsymbol{x} = (x_1, \ldots, x_n) \in \mathbb{R}_+^n : x_{n:n} > k = \theta_0^* (1 - \alpha^*)^{1/n} \} \end{equation}

To translate this back to Test $A$, substitute $\theta_1 = \theta_0^*$ and $\alpha^* = 1 - \alpha (\theta_0^* / \theta_1^*)^{-n}$ into $k$. Finally, since $\beta^*$ in Test $B$ corresponds to $\alpha$ in Test $A$, we seek $\mathcal{R}^{c}$. That is, for Test $A$, \begin{equation} \text{Reject } H_0 \iff x_{n:n} < \theta_0 \alpha^{1/n} \end{equation}
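As a quick check of the final rule (with illustrative values of my own choosing), the size works out exactly: $Pr_{\theta_0}(X_{(n)} < \theta_0\alpha^{1/n}) = (\alpha^{1/n})^n = \alpha$, and a simulation agrees:

```python
import numpy as np

# Illustrative (assumed) values, not from the answer: theta0 = 2, n = 4, alpha = 0.05.
rng = np.random.default_rng(1)
theta0, n, alpha = 2.0, 4, 0.05
k = theta0 * alpha ** (1 / n)        # cutoff: reject H0 iff x_{n:n} < k

# Exact size: Pr_{theta0}(X_(n) < k) = (k/theta0)^n = alpha.
size_exact = (k / theta0) ** n

# Monte Carlo estimate of the size under H0.
reps = 200_000
x_max = rng.uniform(0, theta0, (reps, n)).max(axis=1)
size_mc = (x_max < k).mean()
print(size_exact, size_mc)   # both close to 0.05
```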


Remark: I like the above because we can avoid working with $\theta_1$, whereas the OP's $S$ depends on the relationship between $x_{n:n}$ and $\theta_1$.

[1] https://fac.ksu.edu.sa/sites/default/files/marcel.dekker_-probability.and.statistical.inference.pdf (Example 8.4.3, pg. 408-409)

Gauss

This is a simple problem in which it is easy to get bogged down with the usual LR approach. Best to start with a picture.

Suppose $\theta_0 = 1$ and $\theta_1 = 2$. All of the interval $(1, 2)$ is automatically in the rejection region 'for free.' For an $\alpha$ level test you can also put into the rejection region any subinterval of $(0, 1)$ of length $\alpha$. (Or a collection of tiny disjoint intervals with total length $\alpha$.) Then think about the value of $\beta$. Other cases where $\theta_0 < \theta_1$ are similar.
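For concreteness, here is a tiny numeric sketch of that picture in the single-observation case $n = 1$ (my own illustrative choice): rejecting when $X \in (0, \alpha) \cup (1, 2)$ gives size $\alpha$ under $U[0,1]$ and power $(1+\alpha)/2$ under $U[0,2]$:

```python
import numpy as np

# Sketch of the picture for n = 1 (illustrative): theta0 = 1, theta1 = 2.
# Rejection region: (1, 2) comes 'for free', plus the subinterval (0, alpha) of (0, 1).
rng = np.random.default_rng(2)
alpha = 0.1
reject = lambda x: (x < alpha) | (x > 1.0)

reps = 200_000
size  = reject(rng.uniform(0, 1, reps)).mean()   # ~ alpha = 0.1
power = reject(rng.uniform(0, 2, reps)).mean()   # ~ (1 + alpha)/2 = 0.55
print(size, power)
```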

Next, think about the cases in which $\theta_0 > \theta_1$. Finally, now that you know the answers, ponder how to express them using the usual LR approach.

[Estimation and hypothesis testing with uniform distributions is steadfastly pathological.]

BruceET
  • Thanks for your answer. I wonder whether my approach to this problem is wrong or just inefficient. – smw1991 Mar 19 '15 at 07:18
  • Ordinarily, looking at the likelihood ratio is appropriate. But there can be anomalous cases where that approach does not work--especially when the densities have different supports. For a similar very simple problem where all goes well, try using Beta(1,2) as null and Beta(2,1) as alternative. – BruceET Mar 19 '15 at 16:28