Significance Tests: Yet Another Introduction
The great chess grandmaster Siegbert Tarrasch once said that if you pay careful attention to development, then when the time comes that you need a combination, it will be there. Tarrasch's comment came to mind as I was reevaluating, yet again, the way I introduce hypothesis testing.
Recall the comments in Is Statistics Hard?
Statistical methods are convoluted! In order to show an effect exists,
- statistics begins by assuming there is no effect.
- Prior to collecting data, rules are chosen to decide whether the data are consistent with the assumption of no effect.
- If the data are found to be inconsistent with the assumption, the assumption must be false and there is, in fact, an effect! (The short simulation after this list plays out the logic.)
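To make the convoluted logic concrete, here is a minimal sketch in Python. It assumes a two-sample t-test and the conventional 0.05 cutoff, neither of which is specified in the passage above, and uses numpy and scipy. The no-effect assumption is built in directly: both groups are drawn from the very same population, the decision rule is fixed before the data are seen, and only then is the rule applied.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Step 1: assume there is no effect -- both groups are drawn
# from the very same population.
group_a = rng.normal(loc=100, scale=15, size=30)
group_b = rng.normal(loc=100, scale=15, size=30)

# Step 2: fix the decision rule before looking at the data:
# declare the data inconsistent with "no effect" if p < 0.05.
alpha = 0.05

# Step 3: collect the data and apply the rule.
t_stat, p_value = stats.ttest_ind(group_a, group_b)
if p_value < alpha:
    print(f"p = {p_value:.3f}: data inconsistent with 'no effect'")
else:
    print(f"p = {p_value:.3f}: data consistent with 'no effect'")
```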
Before considering formal hypothesis testing, let's see how those quoted comments relate to confidence intervals. We wished to compare two population means. We did so by constructing a confidence interval for their difference.
The way the game is played, the data are said to be consistent with any value for the population mean difference that is in the CI.
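For concreteness, the interval in question has the familiar form below. This is a sketch, assuming independent samples and a t-based interval with Welch-style standard errors; the passage itself states no formula.

```latex
\[
(\bar{x}_1 - \bar{x}_2) \;\pm\; t^{*}\sqrt{\frac{s_1^2}{n_1} + \frac{s_2^2}{n_2}}
\]
```

Any hypothesized mean difference lying between those two endpoints is declared consistent with the data; anything outside is not.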
Insofar as comparing population means is concerned, a formal test of the hypothesis that the means are equal will be equivalent to checking whether 0 is in the confidence interval (the sketch after the table below demonstrates the equivalence).
| Confidence Intervals | Significance Tests |
| --- | --- |
| construct interval | calculate test statistic |
| is 0 in interval? | is statistic far enough away from 0? |
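Here is a minimal sketch of that equivalence, again in Python with numpy and scipy (my choice of tools, not the original's). The hypothetical data have population means that genuinely differ. A Welch t-test and the matching 95% Welch confidence interval are computed side by side; 0 falls outside the interval exactly when the p-value falls below 0.05.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Two samples whose population means differ by 5.
x = rng.normal(loc=100, scale=15, size=40)
y = rng.normal(loc=105, scale=15, size=40)

# Two-sided Welch t-test of "the population means are equal".
t_stat, p_value = stats.ttest_ind(x, y, equal_var=False)

# Matching 95% Welch confidence interval, computed by hand.
vx, vy = x.var(ddof=1) / len(x), y.var(ddof=1) / len(y)
se = np.sqrt(vx + vy)
df = se**4 / (vx**2 / (len(x) - 1) + vy**2 / (len(y) - 1))
t_crit = stats.t.ppf(0.975, df)
diff = x.mean() - y.mean()
lo, hi = diff - t_crit * se, diff + t_crit * se

print(f"95% CI for the difference: ({lo:.2f}, {hi:.2f})")
print(f"p-value: {p_value:.4f}")

# The two procedures always agree: 0 is outside the interval
# exactly when the test rejects at the 5% level.
rejects = p_value < 0.05
zero_outside = not (lo <= 0 <= hi)
assert rejects == zero_outside
```

Whether 0 sits inside the interval and whether the p-value clears 0.05 are two views of the same calculation, which is the point of the table above.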