Book contents
- Frontmatter
- Contents
- Preface
- Acknowledgements
- PART I GENESIS OF DATA ASSIMILATION
- PART II DATA ASSIMILATION: DETERMINISTIC/STATIC MODELS
- PART III COMPUTATIONAL TECHNIQUES
- PART IV STATISTICAL ESTIMATION
- 13 Principles of statistical estimation
- 14 Statistical least squares estimation
- 15 Maximum likelihood method
- 16 Bayesian estimation method
- 17 From Gauss to Kalman: sequential, linear minimum variance estimation
- PART V DATA ASSIMILATION: STOCHASTIC/STATIC MODELS
- PART VI DATA ASSIMILATION: DETERMINISTIC/DYNAMIC MODELS
- PART VII DATA ASSIMILATION: STOCHASTIC/DYNAMIC MODELS
- PART VIII PREDICTABILITY
- Epilogue
- References
- Index
17 - From Gauss to Kalman: sequential, linear minimum variance estimation
from PART IV - STATISTICAL ESTIMATION
Published online by Cambridge University Press: 18 December 2009
Summary
In Chapters 14 through 16 we concentrated on the basic optimality of estimators derived under different philosophies – least sum of squared errors and minimum variance estimates (Chapter 14), maximum likelihood estimates (Chapter 15), and optimality with respect to several key parameters of the posterior distribution, including the conditional mean, mode, and median (Chapter 16). In this concluding chapter of Part IV, we turn to analyzing the structure of a certain class of optimal estimates. For example, we know only that the conditional mean of the posterior distribution is a minimum variance estimate; this mean, in general, could be a nonlinear function of the observations z. This observation brings us to the following structural question: when is a linear function of the observations optimal? Understanding the structural properties of an estimator is extremely important and is a major determinant of the computational feasibility of these estimates.
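As a point of reference for this structural question, the best linear (affine) estimator of x from z has a standard closed form, shown below; when x and z are jointly Gaussian, this linear function coincides with the conditional mean, which is the classical case in which a linear estimator is fully optimal. The means and covariances in this sketch are introduced here purely for illustration and are not notation taken from the chapter summary.

```latex
% Best linear (affine) minimum variance estimator of x given z,
% assuming known means \mu_x, \mu_z and covariances
% \Sigma_{xz} = \mathrm{Cov}(x, z), \Sigma_{zz} = \mathrm{Cov}(z, z):
\hat{x} = \mu_x + \Sigma_{xz}\,\Sigma_{zz}^{-1}\,(z - \mu_z)
```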
In Section 17.1 we derive conditions under which a linear function of the observations defines a minimum variance estimate. We then extend this analysis in Section 17.2 to the sequential framework, where it is assumed that we have two pieces of information about the unknown: (a) an a priori estimate x− with its associated covariance matrix Σ−, and (b) a new observation z with its covariance matrix Σv. We derive conditions under which a linear function of x− and z leads to a minimum variance estimate x+ of the unknown x.
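As a rough illustration of the Section 17.2 setting, the sketch below fuses a prior estimate x− (covariance Σ−) with a new observation z (noise covariance Σv) through a linear update. It assumes a linear observation model z = Hx + v with zero-mean noise v uncorrelated with the prior error; the function name and the operator H are illustrative assumptions, not notation from the book.

```python
import numpy as np

def linear_mv_update(x_prior, P_prior, z, H, R):
    """Fuse a prior estimate with a new observation via a linear
    minimum variance (Kalman-type) update.

    Assumed model (an illustration, not stated on this summary page):
    z = H x + v, with v zero-mean, covariance R, uncorrelated with
    the prior estimation error.

    x_prior : (n,)   prior estimate, x- in the text
    P_prior : (n, n) prior error covariance, Sigma- in the text
    z       : (m,)   new observation
    H       : (m, n) linear observation operator (illustrative)
    R       : (m, m) observation noise covariance, Sigma_v in the text
    """
    innovation = z - H @ x_prior          # part of z the prior does not explain
    S = H @ P_prior @ H.T + R             # innovation covariance
    K = P_prior @ H.T @ np.linalg.inv(S)  # gain minimizing the posterior variance
    x_post = x_prior + K @ innovation     # x+ : a linear function of x- and z
    P_post = (np.eye(x_prior.size) - K @ H) @ P_prior
    return x_post, P_post

# Scalar sanity check: a prior of 1.0 (variance 2.0) fused with a direct
# observation of 2.0 (variance 1.0) should give the precision-weighted
# average (1/2 * 1.0 + 1/1 * 2.0) / (1/2 + 1/1) = 5/3.
x_post, P_post = linear_mv_update(
    np.array([1.0]), np.array([[2.0]]),
    np.array([2.0]), np.array([[1.0]]), np.array([[1.0]]))
print(x_post, P_post)  # approximately [1.6667] and [[0.6667]]
```

Note that the update x+ = x− + K(z − Hx−) is linear in both x− and z, which is exactly the structural form whose optimality conditions the chapter investigates.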
- Type: Chapter
- Information: Dynamic Data Assimilation: A Least Squares Approach, pp. 271–282
- Publisher: Cambridge University Press
- Print publication year: 2006