Book contents
- Frontmatter
- Contents
- Preface
- 1 Random variables
- 2 Statistical models and inference
- 3 R
- 4 Theory of maximum likelihood estimation
- 5 Numerical maximum likelihood estimation
- 6 Bayesian computation
- 7 Linear models
- Appendix A Some distributions
- Appendix B Matrix computation
- Appendix C Random number generation
- References
- Index
2 - Statistical models and inference
Summary
Statistics aims to extract information from data: specifically, information about the system that generated the data. There are two difficulties with this enterprise. First, it may not be easy to infer what we want to know from the data that can be obtained. Second, most data contain a component of random variability: if we were to replicate the data-gathering process several times we would obtain somewhat different data on each occasion. In the face of such variability, how do we ensure that the conclusions drawn from a single set of data are generally valid, and not a misleading reflection of the random peculiarities of that single set of data?
Statistics provides methods for overcoming these difficulties and making sound inferences from inherently random data. For the most part this involves the use of statistical models, which are like ‘mathematical cartoons’ describing how our data might have been generated, if the unknown features of the data-generating system were actually known. Given values for those unknowns, a decent model could generate data that resembled the observed data, including reproducing its variability under replication. The purpose of statistical inference is then to use the statistical model to go in the reverse direction: to infer the values of the model unknowns that are consistent with the observed data.
Mathematically, let y denote a random vector containing the observed data. Let θ denote a vector of parameters of unknown value. We assume that knowing the values of some of these parameters would answer the questions of interest about the system generating y. So a statistical model is a recipe by which y might have been generated, given appropriate values for θ. At a minimum the model specifies how data like y might be simulated, thereby implicitly defining the distribution of y and how it depends on θ. Often it will provide more, by explicitly defining the p.d.f. of y in terms of θ.
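To make this concrete, here is a minimal illustrative sketch (not taken from the chapter itself) in R, the language used throughout the book. Suppose the model says that each element of y is an independent Poisson count with unknown mean θ. The model then specifies both how data like y could be simulated for a given θ and, explicitly, the p.d.f. of y in terms of θ; the value of θ and the sample size below are arbitrary choices for illustration.

```r
## Hypothetical model: y_i ~ Poisson(theta), i = 1, ..., n, with theta unknown.

theta <- 4.2   ## a trial value for the unknown parameter (illustrative choice)
n <- 50        ## sample size (illustrative choice)

## The model as a simulation recipe: generate data 'like y' for the given theta.
y.sim <- rpois(n, lambda = theta)

## The model written explicitly: the probability function of each observation
## given theta. The product of these terms over i underlies inference about theta.
f.y <- dpois(y.sim, lambda = theta)
head(f.y)
```

Inference then runs this logic in reverse: given the observed y, one asks which values of θ could plausibly have produced it, for example by maximising the product of the `dpois` terms over θ (the likelihood, taken up in Chapter 4).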
Core Statistics, pp. 19–48. Cambridge University Press, 2015.