Book contents
- Frontmatter
- Contents
- Preface
- 1 Probability basics
- 2 Estimation and uncertainty
- 3 Statistical models and inference
- 4 Linear models, least squares, and maximum likelihood
- 5 Parameter estimation: single parameter
- 6 Parameter estimation: multiple parameters
- 7 Approximating distributions
- 8 Monte Carlo methods for inference
- 9 Parameter estimation: Markov Chain Monte Carlo
- 10 Frequentist hypothesis testing
- 11 Model comparison
- 12 Dealing with more complicated problems
- References
- Index
Preface
Published online by Cambridge University Press: 12 July 2017
Summary
Science is fundamentally about learning from data, and doing so in the presence of uncertainty. Uncertainty arises inevitably and unavoidably in many guises. It comes from noise in our measurements: we cannot measure exactly. It comes from sampling effects: we cannot measure everything. It comes from complexity: data may be numerous, high dimensional, and correlated, making it difficult to see structure.
This book is an introduction to statistical methods for analysing data. It presents the major concepts of probability and statistics as well as the computational tools we need to extract meaning from data in the presence of uncertainty.
Just as science is about learning from data, so learning from data is nearly synonymous with data modelling. This is because once we have a set of data, we normally want to identify its underlying structure, something we invariably represent with a model. Fitting and comparing models is therefore one of the cornerstones of statistical data analysis. This process of obtaining meaning from data, and reasoning with it, is what is meant by inference.
Alas, statistics is all too often taught as a set of seemingly unconnected, ad hoc recipes. Having identified what appears to be a relevant statistical test from a menu – according to the properties of your data and your assumptions – you then apply a procedure that delivers some numerical measure of significance. This kind of approach does little to promote your confidence in the result, and it will leave you lost when your assumptions aren't on the menu. My goal in this book is to show that the process of analysing and interpreting data can be done within a simple probabilistic framework. Probability is central to interpreting data because it quantifies both the uncertainty in the data and our confidence in models derived from them. I will show that there are basic principles for tackling problems that are built on a solid probabilistic foundation. Armed with this know-how, you will be able to apply these principles and methods to a wide range of data analysis problems beyond the scope of this book.
This book is aimed primarily at undergraduate and graduate science students. Knowledge of calculus is assumed, but no specific experience with probability or statistics is required. My emphasis is on the concepts and on illustrating them with examples, using both analytical and numerical methods.
Practical Bayesian Inference: A Primer for Physical Scientists, pp. 1–3. Publisher: Cambridge University Press. Print publication year: 2017.