6 - Interval estimation
from Part II - Estimation
Published online by Cambridge University Press: 05 July 2012
Summary
Now that we know how to estimate real-valued parameters optimally, the question arises of how confident we can be in the estimated result, whose exact specification would require infinite-precision numbers. It is clear that if we repeat the estimation on a new set of data, generated by the same physical machinery, the result will not be the same. It seems that the model class is too rich for the amount of data we have. After all, if we fit Bernoulli models to a binary string of length n, there cannot be more than 2^n properties in the data that we can learn, even if no two strings have a common property. And yet the model class has a continuum of parameter values, each representing a property.
One way to balance the learnable information in the data against the richness of the model class is to restrict the precision of the estimated parameters. However, we should not impose any fixed precision, for example quantizing each parameter to two decimals, because that ignores the fact that the sensitivity of models to changes in the parameters depends on the parameters themselves. The problem is related to statistical robustness, which, while perfectly meaningful, is based on practical considerations such as a model's sensitivity to outliers rather than on any reasonably comprehensive theory. For the model classes of interest in this book, setting the parameter precision amounts to interval estimation.
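The parameter-dependent sensitivity can be made concrete for the Bernoulli class: perturbing the maximum-likelihood estimate by a fixed step costs far more log-likelihood near the boundary of the parameter range than near 1/2, because the Fisher information 1/(θ(1-θ)) grows toward the boundary. A minimal numerical sketch (the function name and the specific values n = 1000 and step 0.01 are illustrative choices, not from the text):

```python
import math

def loglik_drop(theta, delta, n):
    """Drop in Bernoulli log-likelihood (in nats) when the parameter
    theta is perturbed by delta, assuming the observed relative
    frequency of ones in the length-n string equals theta, so that
    theta is the maximum-likelihood estimate."""
    t = theta + delta
    return n * (theta * math.log(theta / t)
                + (1 - theta) * math.log((1 - theta) / (1 - t)))

# The same perturbation costs several times more log-likelihood near
# the boundary, where the Fisher information 1/(theta*(1-theta)) is large:
print(loglik_drop(0.50, 0.01, 1000))  # roughly 0.2 nats
print(loglik_drop(0.95, 0.01, 1000))  # roughly 1.2 nats
```

To second order the drop is n·δ²/(2θ(1-θ)), which makes the dependence explicit: a fixed two-decimal grid is needlessly fine in one region of the parameter space and too coarse in another.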
- Optimal Estimation of Parameters, pp. 70-82. Publisher: Cambridge University Press. Print publication year: 2012.