Book contents
- Frontmatter
- Contents
- Preface
- 1 Probability basics
- 2 Estimation and uncertainty
- 3 Statistical models and inference
- 4 Linear models, least squares, and maximum likelihood
- 5 Parameter estimation: single parameter
- 6 Parameter estimation: multiple parameters
- 7 Approximating distributions
- 8 Monte Carlo methods for inference
- 9 Parameter estimation: Markov Chain Monte Carlo
- 10 Frequentist hypothesis testing
- 11 Model comparison
- 12 Dealing with more complicated problems
- References
- Index
2 - Estimation and uncertainty
Published online by Cambridge University Press: 12 July 2017
Summary
In this chapter we look at estimators and how to use them to characterize a distribution. Relevant to this are the concepts of estimator bias, consistency, and efficiency. I shall discuss measurement models and measurement uncertainty and note the difference between distribution properties and estimates thereof. We will learn about the central limit theorem, how and when we can reduce errors through repeated measurements, and we will see how we can propagate uncertainties.
Estimators
An estimator is something that characterizes a set of data. This is often done in order to characterize the parent distribution, the distribution the data were drawn from. We often distinguish between a point estimator, which is a single value such as the mean or mode, and an interval estimator, which characterizes a range, such as the standard deviation or interquartile range. What we use as an estimator depends on what we want to characterize and what we know about the parent distribution.
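As a concrete illustration (my own sketch with simulated data, not an example from the text), the common point and interval estimators mentioned above can be computed directly with NumPy:

```python
import numpy as np

rng = np.random.default_rng(42)
# Simulated sample of 1000 values drawn from a normal parent distribution
data = rng.normal(loc=10.0, scale=2.0, size=1000)

# Point estimators: single-value summaries of the sample
mean = np.mean(data)
median = np.median(data)

# Interval estimators: summaries of the spread
std = np.std(data, ddof=1)              # sample standard deviation
q1, q3 = np.percentile(data, [25, 75])
iqr = q3 - q1                           # interquartile range

print(f"mean={mean:.2f}, median={median:.2f}, std={std:.2f}, IQR={iqr:.2f}")
```

For a normal parent distribution the interquartile range is about 1.35 times the standard deviation, so the two interval estimators here characterize the same spread on different scales.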
Suppose we want to learn about the distribution of the heights of a particular species of tree, given a sample {x} of the heights of N such trees in a forest. Specifically, we would like to estimate the mean of the parent distribution, which here means the set of all trees in the forest at this time (as this is finite it is sometimes called the parent population). How useful any estimate is depends on how the sample of trees was selected: were they taken from across the entire forest rather than in a particularly shady region? Were the tallest trees missed because they were harder to measure (or have already been cut down)? But let us assume that we have a sample that is representative of the forest in some useful sense. There are potentially many ways we could use this sample to estimate the mean of the parent distribution.
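To make the tree-height example concrete (a sketch under assumed numbers, with a simulated population rather than real forest data), one can draw a sample of N trees from a finite parent population and compare several possible estimators of the parent mean:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated parent population: heights (in metres) of all trees in the forest.
# A gamma distribution gives positive, right-skewed values.
population = rng.gamma(shape=9.0, scale=2.0, size=50_000)
true_mean = population.mean()  # the quantity we want to estimate

# A representative sample of N trees, drawn without replacement
N = 100
sample = rng.choice(population, size=N, replace=False)

# Several possible estimators of the parent mean
estimates = {
    "sample mean": np.mean(sample),
    "sample median": np.median(sample),
    "midrange": 0.5 * (sample.min() + sample.max()),
}
for name, est in estimates.items():
    print(f"{name:14s} {est:6.2f}  (true mean {true_mean:.2f})")
```

Because the population is skewed, the sample median systematically underestimates the parent mean, and the midrange is dominated by the extreme trees in the sample; the sample mean is the natural estimator here, and its properties (bias, consistency, efficiency) are what the concepts in this chapter make precise.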
Practical Bayesian Inference: A Primer for Physical Scientists, pp. 36-54. Publisher: Cambridge University Press. Print publication year: 2017.