Book contents
- Frontmatter
- Contents
- Preface
- Acknowledgements
- PART I GENESIS OF DATA ASSIMILATION
- PART II DATA ASSIMILATION: DETERMINISTIC/STATIC MODELS
- PART III COMPUTATIONAL TECHNIQUES
- PART IV STATISTICAL ESTIMATION
- 13 Principles of statistical estimation
- 14 Statistical least squares estimation
- 15 Maximum likelihood method
- 16 Bayesian estimation method
- 17 From Gauss to Kalman: sequential, linear minimum variance estimation
- PART V DATA ASSIMILATION: STOCHASTIC/STATIC MODELS
- PART VI DATA ASSIMILATION: DETERMINISTIC/DYNAMIC MODELS
- PART VII DATA ASSIMILATION: STOCHASTIC/DYNAMIC MODELS
- PART VIII PREDICTABILITY
- Epilogue
- References
- Index
16 - Bayesian estimation method
from PART IV - STATISTICAL ESTIMATION
Published online by Cambridge University Press: 18 December 2009
Summary
This chapter provides an overview of the classical Bayesian method for point estimation. The main point of departure of this method from other methods is that it treats the unknown x as a random variable. All prior knowledge about this unknown is summarized in a known prior distribution p(x). If z is the set of observations that contains information about the unknown x, this information is summarized in the form of a conditional distribution p(z|x). The basic idea is to combine these two pieces of information to obtain an optimal estimate of x, called the Bayes estimate.
The Bayesian framework is developed in Section 16.1. Special classes of Bayesian estimators, namely the Bayes least squares estimate leading to the conditional mean (which is also the minimum variance estimate), the conditional mode estimate, and the conditional median estimate, are derived in Section 16.2.
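For reference, the combination step described above is Bayes' rule, and the estimators listed are functionals of the resulting posterior. The following compact statement uses the summary's notation; the estimator labels (LS, mode, med) are introduced here purely for illustration and are not the chapter's own symbols:

```latex
% Posterior: combine the prior p(x) with the observation distribution p(z|x)
p(x \mid z) = \frac{p(z \mid x)\, p(x)}{\int_{\mathbb{R}^n} p(z \mid x)\, p(x)\, dx}

% Bayes estimates as functionals of the posterior
\hat{x}_{\mathrm{LS}}   = \mathrm{E}[x \mid z]          % conditional mean (minimum variance)
\hat{x}_{\mathrm{mode}} = \arg\max_{x}\, p(x \mid z)    % conditional mode
\Pr\!\bigl( x_i \le (\hat{x}_{\mathrm{med}})_i \mid z \bigr) = \tfrac{1}{2}  % conditional median, componentwise
```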
16.1 The Bayesian framework
Let x ∈ ℝn be the unknown to be estimated and z ∈ ℝm be the observations that contain information about x. The distinguishing feature of the Bayesian framework is that it also treats the unknown x as a random variable. It is assumed that a prior distribution p(x) is known; this distribution summarizes our initial belief about the unknown. It is assumed that nature picks a value of x from the distribution p(x) but decides to tease us by not disclosing her choice, thereby defining a game.
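As a concrete illustration of this framework (not taken from the chapter), the sketch below assumes a scalar Gaussian prior x ~ N(mu0, sigma0^2) and a scalar observation z = x + v with Gaussian noise v ~ N(0, r). Under these assumptions the posterior is again Gaussian, so the conditional mean, mode, and median coincide. The names mu0, sigma0, r, and bayes_posterior_gaussian are hypothetical and chosen only for this example:

```python
import numpy as np

def bayes_posterior_gaussian(z, mu0, sigma0, r):
    """Posterior of x given z for a scalar linear-Gaussian model (illustrative assumption).

    Prior:       x ~ N(mu0, sigma0^2)
    Observation: z = x + v,  v ~ N(0, r)
    Returns the posterior mean and variance (here mean = mode = median).
    """
    # Posterior precision is the sum of the prior and data precisions.
    post_var = 1.0 / (1.0 / sigma0**2 + 1.0 / r)
    # Posterior mean is a precision-weighted combination of prior mean and observation.
    post_mean = post_var * (mu0 / sigma0**2 + z / r)
    return post_mean, post_var

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    mu0, sigma0, r = 0.0, 2.0, 0.5            # prior mean/std and observation noise variance
    x_true = rng.normal(mu0, sigma0)          # "nature" draws x from the prior, hides it
    z = x_true + rng.normal(0.0, np.sqrt(r))  # we only see the noisy observation

    x_hat, var_hat = bayes_posterior_gaussian(z, mu0, sigma0, r)
    print(f"true x = {x_true:.3f}, observation z = {z:.3f}")
    print(f"Bayes estimate (conditional mean) = {x_hat:.3f}, posterior variance = {var_hat:.3f}")
```

In this Gaussian setting the posterior mean is a precision-weighted average of the prior mean and the observation, so a noisy observation pulls the estimate only weakly away from the prior belief.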
- Type: Chapter
- In: Dynamic Data Assimilation: A Least Squares Approach, pp. 261-270
- Publisher: Cambridge University Press
- Print publication year: 2006