How are distribution tables, such as t-tables and z-score tables, calculated? For example, the formula for the two-sample $t$ statistic is:
$t=\frac{m_a-m_b}{\sqrt{\frac{s^2}{n_a}+\frac{s^2}{n_b}}}$
How did they calculate a generic table without knowing the means or the sample sizes?
With the wide availability of reliable statistical software (commercial packages such as SAS, SPSS, and Minitab, and free software such as R and Python), it is becoming increasingly common to use software instead of printed tables. Often the required numerical integration is done afresh for each query; sometimes key parts of the table are stored in memory and retrieved on demand. Here are some examples of probabilities and percentage points for the standard normal and t distributions, computed in R:
pnorm(1.96) # P(Z < 1.96)
## 0.9750021
qnorm(.975) # c such that P(Z < c) = 0.975
## 1.959964
pt(0, 7) # if X ~ T(df=7), find P(X < 0)
## 0.5
qt(.95, 20) # if X ~ T(df=20), find c such that P(X < c) = .95
## 1.724718
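To make "numerical integration done afresh" concrete, here is a minimal sketch (not necessarily the algorithm R uses internally) that reproduces entries like the ones above directly from the density functions. Notice that the only inputs are a cut-off and the degrees of freedom; no sample means or sample sizes appear anywhere, which is exactly why a single generic table can serve every data set.

integrate(dnorm, -Inf, 1.96)                           # area under the standard normal density; should agree with pnorm(1.96)
integrate(function(x) dt(x, df = 20), -Inf, 1.724718)  # area under the t(20) density; should come out near 0.95
uniroot(function(c) pt(c, 20) - 0.95, c(0, 10))$root   # solves P(X < c) = 0.95, recovering qt(.95, 20)

A printed table is just this computation carried out once for a grid of tail probabilities and degrees of freedom; when you later plug in your own means and sample sizes, you are only computing the statistic to compare against those pre-tabulated cut-offs.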