Book contents
- Frontmatter
- Dedication
- Contents
- Preface
- Acknowledgments
- Notation
- Part I Classic Statistical Inference
- Part II Early Computer-Age Methods
- Part III Twenty-First-Century Topics
- 15 Large-Scale Hypothesis Testing and FDRs
- 16 Sparse Modeling and the Lasso
- 17 Random Forests and Boosting
- 18 Neural Networks and Deep Learning
- 19 Support-Vector Machines and Kernel Methods
- 20 Inference After Model Selection
- 21 Empirical Bayes Estimation Strategies
- Epilogue
- References
- Author Index
- Subject Index
16 - Sparse Modeling and the Lasso
from Part III - Twenty-First-Century Topics
Published online by Cambridge University Press: 05 July 2016
Summary
The amount of data we are faced with keeps growing. From around the late 1990s we started to see wide data sets, where the number of variables far exceeds the number of observations. This was largely due to our increasing ability to measure a large amount of information automatically. In genomics, for example, we can use a high-throughput experiment to automatically measure the expression of tens of thousands of genes in a sample in a short amount of time. Similarly, sequencing equipment allows us to genotype millions of SNPs (single-nucleotide polymorphisms) cheaply and quickly. In document retrieval and modeling, we represent a document by the presence or count of each word in the dictionary. This easily leads to a feature vector with 20,000 components, one for each distinct vocabulary word, although most would be zero for a small document. If we move to bi-grams or higher, the feature space gets really large.
In even more modest situations, we can be faced with hundreds of variables. If these variables are to be predictors in a regression or logistic regression model, we probably do not want to use them all. It is likely that a subset will do the job well, and including all the redundant variables will degrade our fit. Hence we are often interested in identifying a good subset of variables. Note also that in these wide-data situations, even linear models are over-parametrized, so some form of reduction or regularization is essential.
In this chapter we will discuss some of the popular methods for model selection, starting with the time-tested and worthy forward-stepwise approach. We then look at the lasso, a popular modern method that does selection and shrinkage via convex optimization. The least-angle regression (LARS) algorithm ties these two approaches together, and leads to methods that can deliver paths of solutions.
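As a preview (this display is the standard Lagrangian formulation of the lasso, stated here for concreteness rather than quoted from the chapter), the lasso solves a penalized least-squares problem in which a single ℓ1 penalty does both the shrinkage and the selection:

$$
\hat{\beta}(\lambda) \;=\; \arg\min_{\beta_0,\,\beta}\; \frac{1}{2}\sum_{i=1}^{n}\bigl(y_i - \beta_0 - x_i^{\mathsf T}\beta\bigr)^2 \;+\; \lambda \sum_{j=1}^{p} |\beta_j|.
$$

Increasing $\lambda$ shrinks the coefficients toward zero and sets more of them exactly to zero; the solution path $\hat{\beta}(\lambda)$ over a range of $\lambda$ values is what path algorithms such as LARS trace out efficiently.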
Finally, we discuss some connections with other modern big- and wide-data approaches, and mention some extensions.
Forward Stepwise Regression
Stepwise procedures have been around for a very long time. They were originally devised in times when data sets were quite modest in size, in particular in terms of the number of variables. Originally thought of as the poor cousins of “best-subset” selection, they had the advantage of being much cheaper to compute (and in fact possible to compute for large p). We will review best-subset regression first.
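To fix ideas, here is a minimal sketch of the greedy forward-stepwise idea: at each step, add the predictor that most reduces the residual sum of squares when refit jointly with the variables already chosen. The function name and toy data are illustrative, not from the book.

```python
import numpy as np

def forward_stepwise(X, y, max_terms=None):
    """Greedy forward-stepwise regression (illustrative sketch).

    At each step, add the predictor whose inclusion (refit jointly with
    the variables already selected, plus an intercept) gives the lowest
    residual sum of squares. Returns the selected column indices in the
    order they were chosen.
    """
    n, p = X.shape
    max_terms = p if max_terms is None else min(max_terms, p)
    selected, remaining = [], list(range(p))
    for _ in range(max_terms):
        best_rss, best_j = np.inf, None
        for j in remaining:
            cols = selected + [j]
            # Least-squares fit on the candidate subset (with intercept).
            A = np.column_stack([np.ones(n), X[:, cols]])
            beta, *_ = np.linalg.lstsq(A, y, rcond=None)
            rss = np.sum((y - A @ beta) ** 2)
            if rss < best_rss:
                best_rss, best_j = rss, j
        selected.append(best_j)
        remaining.remove(best_j)
    return selected

# Toy usage: 100 observations, 20 predictors, 3 of them truly active.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 20))
y = X[:, 0] - 2 * X[:, 3] + 0.5 * X[:, 7] + rng.standard_normal(100)
print(forward_stepwise(X, y, max_terms=5))
```

Each step refits only on the order of p candidate models rather than searching all 2^p subsets, which is what makes forward stepwise feasible for large p while best-subset selection is not.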
- Type: Chapter
- Information: Computer Age Statistical Inference: Algorithms, Evidence, and Data Science, pp. 298-323. Publisher: Cambridge University Press. Print publication year: 2016.