7 - Hypothesis testing
from Part II - Estimation
Published online by Cambridge University Press: 05 July 2012
Summary
Customarily, hypothesis testing addresses the problem of deciding whether some effect of interest, such as the curing power of a drug, is supported by observed data, or whether the observations are a random “fluke” rather than a real effect of the drug. To test this formally, a probability hypothesis is set up to represent the effect. This is done by selecting a function of the data, s(xⁿ), called a test statistic, whose distribution can be calculated under the hypothesis. Frequently, parametric probability distributions have a special parameter value that represents pure randomness, such as 1/2 in Bernoulli models for binary data or 0 for the mean of a normal distribution, and the no-effect case can then be represented by a single model f(s(xⁿ); θ₀), called the null hypothesis. The test statistic is, or should be, in effect an estimator of the parameter; if the data cause the test statistic to fall in the tail of its distribution, into a so-called critical region of small probability, say 0.05, the null hypothesis is rejected, indicating by double negation that a real effect is not ruled out. Fisher called the evidence for the rejection of the null hypothesis statistical significance at the selected level, 0.05, which is frequently misused to mean that there is 95 percent statistical significance for accepting the real effect.
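The testing procedure described above can be sketched for the Bernoulli case with θ₀ = 1/2. The following is a minimal illustration, not taken from the chapter: it computes an exact two-sided p-value for n coin tosses under the null hypothesis Binomial(n, 1/2), by summing the probabilities of all outcomes no more likely than the observed count, and rejects when the p-value falls below the 0.05 level. The function name and the 60-out-of-100 example are hypothetical choices for demonstration.

```python
from math import comb

def binomial_two_sided_p(k, n, p0=0.5):
    """Exact two-sided p-value for the null hypothesis p = p0,
    given k successes in n Bernoulli trials.

    Sums the probabilities of all outcomes whose probability under
    Binomial(n, p0) does not exceed that of the observed count k."""
    pmf = [comb(n, i) * p0**i * (1 - p0) ** (n - i) for i in range(n + 1)]
    # Small tolerance guards against floating-point ties.
    return sum(q for q in pmf if q <= pmf[k] + 1e-12)

# Hypothetical example: 60 heads in 100 tosses of a coin.
p_value = binomial_two_sided_p(60, 100)
reject_null = p_value < 0.05
print(p_value, reject_null)
```

Here the p-value is about 0.057, so the test statistic does not quite fall in the critical region at level 0.05 and the null hypothesis of a fair coin is not rejected, even though 60 heads may look suspicious.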
Type: Chapter
Information: Optimal Estimation of Parameters, pp. 83-103. Publisher: Cambridge University Press. Print publication year: 2012.