Book contents
- Frontmatter
- Contents
- List of exercises
- Preface to the series
- Preface
- 1 The subjective interpretation of probability
- 2 Bayesian inference
- 3 Point estimation
- 4 Frequentist properties of Bayesian estimators
- 5 Interval estimation
- 6 Hypothesis testing
- 7 Prediction
- 8 Choice of prior
- 9 Asymptotic Bayes
- 10 The linear regression model
- 11 Basics of Bayesian computation
- 12 Hierarchical models
- 13 The linear regression model with general covariance matrix
- 14 Latent variable models
- 15 Mixture models
- 16 Bayesian model averaging and selection
- 17 Some stationary time series models
- 18 Some nonstationary time series models
- Appendix
- Bibliography
- Index
15 - Mixture models
Published online by Cambridge University Press: 05 June 2012
Summary
The reader has undoubtedly noted that the majority of exercises in previous chapters have included normality assumptions. These assumptions should not be taken as necessarily “correct,” and indeed, such assumptions will not be appropriate in all situations (see, e.g., Exercise 11.13 for a question related to error-term diagnostic checking). In this chapter, we review a variety of computational strategies for extending analyses beyond the textbook normality assumption. The strategies we describe, perhaps ironically, begin with the normal model and proceed to augment it in some way. In the first section, we describe scale mixtures of normals models, which enable the researcher to generalize error distributions to the Student t and double exponential classes (among others). Exercises 15.5 and 15.6 then describe finite normal mixture models, which can accommodate, among other features, skewness and multimodality in error distributions. Importantly, these models are conditionally normal (given values of certain mixing or component indicator variables), and thus standard computational techniques for the normal linear regression model can be directly applied (see, e.g., Exercises 10.1 and 11.8). The techniques described here are thus computationally tractable, flexible, and generalizable. That is, the basic methods provided in this chapter can often be adapted in a straightforward manner to add flexibility to many of the models introduced in previous and future exercises. Some exercises related to mixture models beyond those involving conditional normal sampling distributions are also provided within this chapter.
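As a rough illustration of the scale-mixture idea discussed above, the sketch below simulates from the scale mixture of normals representation of the Student t distribution (draw a Gamma mixing variable, then a conditionally normal error) and from a two-component finite normal mixture. The specific parameter values (degrees of freedom, component means, mixing weight) are illustrative choices, not taken from the chapter's exercises:

```python
import random
import statistics

random.seed(0)
n = 100_000

# --- Scale mixture of normals: Student t errors ---
# If lam ~ Gamma(nu/2, rate = nu/2) and eps | lam ~ N(0, 1/lam),
# then marginally eps ~ t_nu. Here nu = 8 (illustrative).
nu = 8.0
t_draws = []
for _ in range(n):
    lam = random.gammavariate(nu / 2, 2.0 / nu)  # shape, scale = 1/rate
    t_draws.append(random.normalvariate(0.0, 1.0) / lam ** 0.5)

# Sample variance should be close to the t_nu variance, nu/(nu - 2)
t_var = statistics.pvariance(t_draws)

# --- Finite normal mixture: bimodal errors ---
# Component indicator picks N(-2, 1) or N(2, 1) with equal weight
# (illustrative values); conditional on the indicator, the draw is normal.
mix_draws = []
for _ in range(n):
    mu = -2.0 if random.random() < 0.5 else 2.0
    mix_draws.append(random.normalvariate(mu, 1.0))

# Mixture mean is 0 and variance is 1 + 4 = 5 for these choices
mix_var = statistics.pvariance(mix_draws)
```

In both cases the draw is normal *conditional* on a latent quantity (the mixing scale or the component indicator), which is precisely why standard normal-model posterior simulators can be recycled once those latent variables are included in the sampler.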
Bayesian Econometric Methods, pp. 253–280. Publisher: Cambridge University Press. Print publication year: 2007.