Sorry if the question is meaningless, but I have started to learn the concept of the p-value and am confused about how to interpret it: if the actual results are in favour of the hypothesis but the p-value is against it, which statistic should we go with?

For example, let's say I have two variables 'A' and 'B':

My null hypothesis says variable 'A' is greater than 'B', and the alternative hypothesis says variable 'A' is less than or equal to variable 'B'.

Now, based on the test performed, there are two possible scenarios:

  1. A = 120 and B = 100, p-value = 0.02

Based on the above result, variable A > variable B, but the p-value is less than alpha (i.e. 0.05), so can we still reject the null hypothesis that A > B?

  2. A = 100 and B = 110, p-value = 0.11

As the above shows, variable A < variable B, but the p-value is greater than alpha (i.e. 0.05), so can we accept the null hypothesis that A > B?
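
To make the setup concrete, here is a minimal sketch (not part of the original question) of how such a one-sided comparison is usually run in practice, assuming A and B are sample means computed from two arrays of measurements; the data below is made up purely for illustration:

```python
# A minimal sketch, assuming A and B are means of two samples of measurements.
# The one-sided test below takes H0: mean(A) >= mean(B) vs H1: mean(A) < mean(B);
# the sample data is made up purely for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
sample_a = rng.normal(loc=120, scale=15, size=30)  # hypothetical measurements of A
sample_b = rng.normal(loc=100, scale=15, size=30)  # hypothetical measurements of B

# alternative='less' means the alternative hypothesis is mean(A) < mean(B)
t_stat, p_value = stats.ttest_ind(sample_a, sample_b, alternative='less')

alpha = 0.05
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
if p_value < alpha:
    print("Reject H0: evidence that A is smaller than B")
else:
    print("Fail to reject H0: the data does not contradict A >= B")
```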

Bits
  • You can't have $A>B$ as a null hypothesis, although you can e.g. have $A-B\sim N(\mu,\sigma^2)$, because a null hypothesis needs to make calculable predictions. A $p$-value is a $1$- or $2$-tailed probability conditional on the null hypothesis. – J.G. Oct 05 '22 at 09:48

1 Answer

I doubt you can actually encounter the first situation. In this setting, the critical region of your hypothesis test will most likely be of the form

$$\text{RC}_{\alpha}:=\{A<B-c_{\alpha}\}$$

where $c_{\alpha}>0$ depends on the significance level $\alpha$, with $c_{\alpha}\to0$ as $\alpha\to1$ and $c_{\alpha}\to+\infty$ as $\alpha\to0$. The idea is that you reject the hypothesis "$A$ is greater than $B$" only if there is strong evidence that $A$ is smaller than $B$, i.e. if $A$ is not merely smaller but at least $c_{\alpha}$ below $B$. If you ask for a small significance level, e.g. the usual $\alpha=0.05$, the condition is demanding because the gap $c_{\alpha}$ becomes large. Conversely, if you choose a high $\alpha$, you are willing to accept situations in which the gap is small (up to the degenerate case in which $c_{\alpha}=0$).
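
To put rough numbers on this behaviour, here is a small sketch (my own simplification, not from the answer) under the assumption that $A-B$ is normally distributed with a known, made-up standard error; for the usual small significance levels the gap is then $c_{\alpha}=z_{1-\alpha}\cdot\mathrm{se}$, which grows as $\alpha$ shrinks:

```python
# A rough sketch (my own simplification): assume A - B is normal with known
# standard error `se`; for the usual small significance levels the gap is
# c_alpha = z_{1-alpha} * se, which grows as alpha shrinks.
from scipy.stats import norm

se = 10.0  # hypothetical standard error of A - B
for alpha in [0.20, 0.10, 0.05, 0.01, 0.001]:
    c_alpha = norm.ppf(1 - alpha) * se  # reject "A >= B" when A < B - c_alpha
    print(f"alpha = {alpha:>5}: c_alpha = {c_alpha:6.2f}")
```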

In general, the p-value of a test is, given the observed values $A$ and $B$, the significance level $\alpha$ at which the decision switches between rejection and acceptance. In particular, the p-value for this test is the value $\alpha_{*}$ such that $c_{\alpha_{*}}=B-A$. If in your data you see $A>B$, which is exactly what the null hypothesis claims, then you are looking for an $\alpha_{*}$ that yields $c_{\alpha_{*}}<0$, which is impossible. The most you can get is $c_{\alpha}=0$, which happens for $\alpha=1$: in this degenerate case we can say that the p-value is 1. This strange situation arises because there is no need to run a test if your data already agrees with the null hypothesis: you only run a test when you see something that disagrees with it, so that you can ask yourself "did this happen by chance, or is it telling me something?".
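
As a tiny illustration of that "switch" (my own example, reusing the p-value 0.11 from the asker's second scenario): for a fixed observed p-value, the decision to reject flips exactly when $\alpha$ passes the p-value.

```python
# A tiny illustration (not part of the answer): the p-value is the significance
# level at which the decision flips. The value 0.11 is taken from the asker's
# second scenario.
p_value = 0.11
for alpha in [0.01, 0.05, 0.10, 0.11, 0.15, 0.20]:
    decision = "reject H0" if p_value <= alpha else "do not reject H0"
    print(f"alpha = {alpha:.2f}: {decision}")
```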

Indeed, if you find $A<B$ in your data, then you can run your test and obtain some p-value $\alpha_{*}\in(0,1)$, as in your second example. In that case the situation changes: if the p-value is large, the interpretation is that even though $A$ is smaller than $B$, the gap is not large enough and may well have arisen by chance; conversely, if $\alpha_{*}$ is small, say below 0.05, then you have statistical evidence that $A$ really is smaller than $B$ and that this did not happen simply by chance.
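
To attach rough numbers to both of the asker's scenarios, here is a sketch under the same simplifying normal assumption as above (the standard error is again made up): when the observed $A$ is above $B$ the one-sided p-value comes out large, and when $A$ is below $B$ it lands somewhere in $(0,1)$ and gets compared to $\alpha$.

```python
# A sketch under the simplifying assumption that (A - B)/se is standard normal
# at the boundary A = B; `se` is a made-up standard error. The one-sided
# p-value is the lower-tail probability of the observed difference.
from scipy.stats import norm

se = 10.0  # hypothetical standard error of A - B
for a, b in [(120, 100), (100, 110)]:
    p_value = norm.cdf((a - b) / se)  # small only when A is well below B
    verdict = "reject H0: A >= B" if p_value < 0.05 else "do not reject H0: A >= B"
    print(f"A = {a}, B = {b}: p = {p_value:.3f} -> {verdict}")
```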

Coco