Book contents
- Frontmatter
- Contents
- Preface
- Part I Data and error analysis
- 1 Introduction
- 2 The presentation of physical quantities with their inaccuracies
- 3 Errors: classification and propagation
- 4 Probability distributions
- 5 Processing of experimental data
- 6 Graphical handling of data with errors
- 7 Fitting functions to data
- 8 Back to Bayes: knowledge as a probability distribution
- References
- Answers to exercises
- Part II Appendices
- Part III Python codes
- Part IV Scientific data
- Index
4 - Probability distributions
Published online by Cambridge University Press: 05 June 2012
Summary
Every measurement is in fact a random sample from a probability distribution. In order to judge the accuracy of an experimental result, we must know something about the underlying probability distribution. This chapter treats the properties of probability distributions and gives details about the most common ones. The most important distribution of all is the normal distribution, not least because the central limit theorem tells us that it is the limiting distribution for the sum of many independent random disturbances.
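The central limit theorem mentioned above can be illustrated numerically. The following is a minimal sketch (the specific choice of summing 12 uniform deviates is an illustrative assumption, not taken from the text): the sum of N uniform samples on [0, 1) has mean N/2 and variance N/12, so for N = 12 the sum should be approximately normal with mean 6 and standard deviation 1.

```python
import numpy as np

rng = np.random.default_rng(42)

# Sum N = 12 uniform deviates on [0, 1). The CLT predicts the sum is
# approximately normal with mean N/2 = 6 and variance N/12 = 1.
N, samples = 12, 100_000
sums = rng.random((samples, N)).sum(axis=1)

print(sums.mean())  # close to 6.0
print(sums.std())   # close to 1.0
```

A histogram of `sums` would already be visually indistinguishable from a Gaussian, even though each individual term is drawn from a flat distribution.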
Introduction
Every measurement xi of a quantity x can be considered a random sample from a probability distribution p(x) of x. To analyze random deviations in measured quantities we must know something about this underlying probability distribution, from which the measurement is supposed to be a random sample.
If x can assume only discrete values x = k, k = 1, …, n, then p(k) forms a discrete probability distribution; p(k), often called the probability mass function (pmf), gives the probability that an arbitrary sample has the value k. If x is a continuous variable, then p(x) is a continuous function of x: the probability density function (pdf). Its meaning is: the probability that a sample xi falls in the interval (x, x + dx) equals p(x) dx.
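The pmf/pdf distinction can be sketched in a few lines. This is an illustration under assumed examples (a fair die for the discrete case, the standard normal for the continuous case — neither is prescribed by the passage): a pmf assigns probabilities that sum to 1, while a pdf is a density whose value p(x) only yields a probability when multiplied by an interval width dx.

```python
import math

# Discrete case: a fair die has pmf p(k) = 1/6 for k = 1, ..., 6.
# The probabilities over all outcomes sum to 1.
pmf = {k: 1 / 6 for k in range(1, 7)}
total = sum(pmf.values())  # = 1.0

# Continuous case: the standard normal pdf. p(x) is a density,
# not a probability; it can even exceed 1 for narrow distributions.
def p(x):
    return math.exp(-x ** 2 / 2) / math.sqrt(2 * math.pi)

# The probability that a sample falls in (x, x + dx) is approximately p(x) * dx.
x, dx = 1.0, 1e-4
prob = p(x) * dx
print(total, p(x), prob)
```

Note that for small dx the product p(x) dx is what plays the role the pmf plays in the discrete case; p(x) alone is not a probability.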
- Type: Chapter
- Information: A Student's Guide to Data and Error Analysis, pp. 27–52. Publisher: Cambridge University Press. Print publication year: 2011.