Book contents
- Frontmatter
- Contents
- Prologue
- Acknowledgments
- 1 Empirical Bayes and the James–Stein Estimator
- 2 Large-Scale Hypothesis Testing
- 3 Significance Testing Algorithms
- 4 False Discovery Rate Control
- 5 Local False Discovery Rates
- 6 Theoretical, Permutation, and Empirical Null Distributions
- 7 Estimation Accuracy
- 8 Correlation Questions
- 9 Sets of Cases (Enrichment)
- 10 Combination, Relevance, and Comparability
- 11 Prediction and Effect Size Estimation
- Appendix A Exponential Families
- Appendix B Data Sets and Programs
- References
- Index
1 - Empirical Bayes and the James–Stein Estimator
Published online by Cambridge University Press: 05 September 2013
Summary
Charles Stein shocked the statistical world in 1955 with his proof that maximum likelihood estimation methods for Gaussian models, in common use for more than a century, were inadmissible beyond simple one- or two-dimensional situations. These methods are still in use, for good reasons, but Stein-type estimators have pointed the way toward a radically different empirical Bayes approach to high-dimensional statistical inference. We will be using empirical Bayes ideas for estimation, testing, and prediction, beginning here with their path-breaking appearance in the James–Stein formulation.
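As a pointer to what the chapter develops, here is the James–Stein estimator in its simplest form: a sketch assuming N independent observations z_i ~ N(mu_i, 1) whose means are estimated jointly by shrinking toward the origin. The notation is standard rather than drawn from this summary page.

```latex
% James–Stein estimator, simplest form: z_i ~ N(mu_i, 1) independently,
% i = 1, ..., N, with all N means estimated at once by shrinking z toward 0.
\[
  \hat{\boldsymbol{\mu}}^{\mathrm{JS}}
  = \left( 1 - \frac{N - 2}{\lVert \mathbf{z} \rVert^{2}} \right) \mathbf{z},
  \qquad
  \lVert \mathbf{z} \rVert^{2} = \sum_{i=1}^{N} z_i^{2}.
\]
% Stein's inadmissibility result: for N >= 3 this rule has uniformly smaller
% total squared-error risk than the maximum likelihood estimator
% \hat{\boldsymbol{\mu}}^{\mathrm{MLE}} = \mathbf{z}, whatever the true means.
```

The empirical Bayes reading, which is the thread the chapter follows, is that under a N(0, A) prior on each mu_i the Bayes rule multiplies z_i by A/(A+1), and (N-2)/||z||^2 is an unbiased estimate of 1/(A+1) computed from the data themselves.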
Although the connection was not immediately recognized, Stein's work was half of an energetic post-war empirical Bayes initiative. The other half, explicitly named “empirical Bayes” by its principal developer Herbert Robbins, was less shocking but more general in scope, aiming to show how frequentists could achieve full Bayesian efficiency in large-scale parallel studies. Large-scale parallel studies were rare in the 1950s, however, and Robbins' theory did not have the applied impact of Stein's shrinkage estimators, which are useful in much smaller data sets.
All of this has changed in the 21st century. New scientific technologies, epitomized by the microarray, routinely produce studies of thousands of parallel cases (we will see several such studies in what follows), well-suited for the Robbins point of view. That view predominates in the succeeding chapters, though Robbins' methodology is not explicitly invoked until the very last section of the book.
- Type: Chapter
- Information: Large-Scale Inference: Empirical Bayes Methods for Estimation, Testing, and Prediction, pp. 1–14
- Publisher: Cambridge University Press
- Print publication year: 2010