Book contents
- Frontmatter
- Contents
- Prologue
- Acknowledgments
- 1 Empirical Bayes and the James–Stein Estimator
- 2 Large-Scale Hypothesis Testing
- 3 Significance Testing Algorithms
- 4 False Discovery Rate Control
- 5 Local False Discovery Rates
- 6 Theoretical, Permutation, and Empirical Null Distributions
- 7 Estimation Accuracy
- 8 Correlation Questions
- 9 Sets of Cases (Enrichment)
- 10 Combination, Relevance, and Comparability
- 11 Prediction and Effect Size Estimation
- Appendix A Exponential Families
- Appendix B Data Sets and Programs
- References
- Index
Prologue
Published online by Cambridge University Press: 05 September 2013
Summary
At the risk of drastic oversimplification, the history of statistics as a recognized discipline can be divided into three eras:
- The age of Quetelet and his successors, in which huge census-level data sets were brought to bear on simple but important questions: Are there more male than female births? Is the rate of insanity rising?
- The classical period of Pearson, Fisher, Neyman, Hotelling, and their successors, intellectual giants who developed a theory of optimal inference capable of wringing every drop of information out of a scientific experiment. The questions dealt with still tended to be simple (Is treatment A better than treatment B?), but the new methods were suited to the kinds of small data sets individual scientists might collect.
- The era of scientific mass production, in which new technologies typified by the microarray allow a single team of scientists to produce data sets of a size Quetelet would envy. But now the flood of data is accompanied by a deluge of questions, perhaps thousands of estimates or hypothesis tests that the statistician is charged with answering together; not at all what the classical masters had in mind.
Large-Scale Inference: Empirical Bayes Methods for Estimation, Testing, and Prediction, pp. ix–xi. Publisher: Cambridge University Press. Print publication year: 2010.