Book contents
- Frontmatter
- Dedication
- Contents
- Preface
- 1 Astrostatistics
- 2 Prerequisites
- 3 Frequentist vs. Bayesian Methods
- 4 Normal Linear Models
- 5 GLMs Part I – Continuous and Binomial Models
- 6 GLMs Part II – Count Models
- 7 GLMs Part III – Zero-Inflated and Hurdle Models
- 8 Hierarchical GLMMs
- 9 Model Selection
- 10 Astronomical Applications
- 11 The Future of Astrostatistics
- Appendix A Bayesian Modeling using INLA
- Appendix B Count Models with Offsets
- Appendix C Predicted Values, Residuals, and Diagnostics
- References
- Index
- Plate section
8 - Hierarchical GLMMs
Published online by Cambridge University Press: 11 May 2017
Summary
Overview of Bayesian Hierarchical Models/GLMMs
The Bayesian models we have discussed thus far in the book have been based on a likelihood combined with a prior distribution. The posterior distribution is proportional to the product of the model likelihood and the prior; equivalently, the log-posterior is, up to an additive constant, the sum of the log-likelihood and the log-prior. One of the key assumptions of a likelihood is that each observation contributing to it is independent of the other observations. This assumption goes back to the probability distribution from which the likelihood is derived: the observations described by that probability distribution are assumed to be independent. This criterion is essential to creating and interpreting the statistical models we addressed in the last chapter.
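The relationship between likelihood, prior, and posterior can be made concrete with a small grid approximation. The sketch below is purely illustrative (the simulated data, the known unit variance, and the Normal(0, 10^2) prior are assumptions, not from the book): it evaluates the unnormalized log-posterior as log-likelihood plus log-prior on a grid of candidate means.

```python
import numpy as np

# Hypothetical illustration: posterior for the mean of normal data,
# computed on a grid as (likelihood x prior), i.e. log-lik + log-prior.
rng = np.random.default_rng(42)
y = rng.normal(loc=2.0, scale=1.0, size=50)   # simulated observations

mu_grid = np.linspace(-2.0, 6.0, 1001)        # candidate values of mu

# Log-likelihood: sum of independent normal log-densities (sigma = 1,
# constants dropped) -- independence is what lets us sum the terms.
loglik = np.array([-0.5 * np.sum((y - mu) ** 2) for mu in mu_grid])

# Log-prior: a weakly informative Normal(0, 10^2) prior on mu
logprior = -0.5 * (mu_grid / 10.0) ** 2

# Unnormalized log-posterior; exponentiate and normalize on the grid
logpost = loglik + logprior
post = np.exp(logpost - logpost.max())
post /= post.sum()

post_mean = np.sum(mu_grid * post)            # close to the sample mean
```

With such a diffuse prior and 50 observations the posterior mean sits almost exactly at the sample mean; a tighter prior would pull it toward zero.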
When there is correlation in the data, whether from clustered, nested, or panel-structured data or from time-series autocorrelation, statisticians must adjust the model in order to avoid bias in the interpretation of the parameters, especially their standard errors. In maximum likelihood estimation this is a foremost problem when developing models, and an entire area of statistics is devoted to dealing with excess correlation in data.
In many cases the data being modeled are correlated by virtue of their structure, or are said to be structurally correlated. For instance, we may have data collected as repeated observations on individuals over time, or data belonging to different sources or clusters, e.g., data in which the observations are nested into levels and so are hierarchically structured. Example data are provided in the tables. Table 8.1 gives an example of longitudinal data; note that time periods (Period) are nested within each observation (id). Table 8.2 gives an example of hierarchical, grouped, or clustered data. This type of data is also referred to as cross-sectional data, and it is the data structure most often used for random-intercept models.

We pool the data if the grouping variable, grp, is ignored. However, since values within a group may be more highly correlated with one another than with observations in the pooled data, it may be necessary to adjust the model for the grouping effect. If there is more correlation within groups than between them, the pooled data are likely to be overdispersed. A random-intercept model adjusts for this correlation by giving each group in the data its own intercept. We will describe how this works in what follows.
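The within-group correlation that motivates a random intercept can be demonstrated by simulation. The sketch below is an assumption-laden illustration (the group count, sample sizes, and variance components are invented, and the grouping variable is named grp to echo Table 8.2): it generates clustered data with a random intercept per group and estimates the intraclass correlation, the share of total variance attributable to groups, which pooling would ignore.

```python
import numpy as np

# Hypothetical sketch: clustered data with a per-group random intercept.
rng = np.random.default_rng(1)
n_groups, n_per = 20, 30
sigma_group, sigma_obs = 2.0, 1.0   # between-group and within-group SDs

group_effects = rng.normal(0.0, sigma_group, size=n_groups)
grp = np.repeat(np.arange(n_groups), n_per)   # group label for each row
y = 5.0 + group_effects[grp] + rng.normal(0.0, sigma_obs, size=grp.size)

# Decompose the variance: between-group vs. average within-group
group_means = np.array([y[grp == g].mean() for g in range(n_groups)])
between_var = group_means.var(ddof=1)
within_var = np.mean([y[grp == g].var(ddof=1) for g in range(n_groups)])

# Intraclass correlation: fraction of variance due to group membership.
# Here the true value is 2^2 / (2^2 + 1^2) = 0.8.
icc = between_var / (between_var + within_var)
```

A high intraclass correlation means observations within a group are far from independent, which is exactly the situation where fitting the model with a separate intercept per group, as the book goes on to do in JAGS and Stan, corrects the inference that naive pooling would distort.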
- Bayesian Models for Astrophysical Data: Using R, JAGS, Python, and Stan, pp. 215-261. Publisher: Cambridge University Press. Print publication year: 2017.