Book contents
- Frontmatter
- Contents
- Preface
- Prologue: A machine learning sampler
- 1 The ingredients of machine learning
- 2 Binary classification and related tasks
- 3 Beyond binary classification
- 4 Concept learning
- 5 Tree models
- 6 Rule models
- 7 Linear models
- 8 Distance-based models
- 9 Probabilistic models
- 10 Features
- 11 Model ensembles
- 12 Machine learning experiments
- Epilogue: Where to go from here
- Important points to remember
- References
- Index
11 - Model ensembles
Published online by Cambridge University Press: 05 November 2012
Summary
TWO HEADS ARE BETTER THAN ONE – a well-known proverb suggesting that two minds working together can often achieve better results than either on its own. If we read ‘features’ for ‘heads’ then this is certainly true in machine learning, as we have seen in the preceding chapters. But we can often improve things further by combining not just features but whole models, as will be demonstrated in this chapter. Combinations of models are generally known as model ensembles. They are among the most powerful techniques in machine learning, often outperforming single models, although this comes at the cost of increased algorithmic and model complexity.
The topic of model combination has a rich and diverse history, to which we can only partly do justice in this short chapter. The main motivations came from computational learning theory on the one hand, and statistics on the other. It is a well-known statistical intuition that averaging measurements can lead to a more stable and reliable estimate because we reduce the influence of random fluctuations in single measurements. So if we were to build an ensemble of slightly different models from the same training data, we might be able to similarly reduce the influence of random fluctuations in single models. The key question here is how to achieve diversity between these different models. As we shall see, this can often be achieved by training models on random subsets of the data, and even by constructing them from random subsets of the available features.
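The following sketch is not taken from the book; it is a rough illustration of the idea described above, namely obtaining diverse models by training each one on a bootstrap sample of the data and a random subset of the features, and then averaging their predictions by majority vote. The base learner, parameter values and dataset are all illustrative assumptions (scikit-learn decision trees on a synthetic binary classification problem).

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

# Illustrative sketch: an ensemble of trees, each trained on a bootstrap
# sample of the rows and a random subset of the columns, combined by
# majority vote. All parameter choices here are arbitrary examples.
rng = np.random.default_rng(0)
X, y = make_classification(n_samples=500, n_features=20, random_state=0)

n_models, n_sub_features = 25, 8
models, feature_subsets = [], []
for _ in range(n_models):
    rows = rng.integers(0, len(X), size=len(X))            # random subset (bootstrap sample) of the data
    cols = rng.choice(X.shape[1], n_sub_features, replace=False)  # random subset of the features
    tree = DecisionTreeClassifier(max_depth=4, random_state=0)
    tree.fit(X[rows][:, cols], y[rows])
    models.append(tree)
    feature_subsets.append(cols)

# Combine the diverse models: majority vote over their individual predictions
votes = np.stack([m.predict(X[:, cols]) for m, cols in zip(models, feature_subsets)])
ensemble_pred = (votes.mean(axis=0) >= 0.5).astype(int)
print("ensemble training accuracy:", (ensemble_pred == y).mean())
```

Because each model sees a slightly different view of the training data, their individual errors are partly independent, and averaging the votes reduces the influence of any single model's random fluctuations, which is the statistical intuition the chapter builds on.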
Machine Learning: The Art and Science of Algorithms that Make Sense of Data, pp. 330–342. Publisher: Cambridge University Press. Print publication year: 2012.