Book contents
- Frontmatter
- Contents
- Preface
- Prologue: A machine learning sampler
- 1 The ingredients of machine learning
- 2 Binary classification and related tasks
- 3 Beyond binary classification
- 4 Concept learning
- 5 Tree models
- 6 Rule models
- 7 Linear models
- 8 Distance-based models
- 9 Probabilistic models
- 10 Features
- 11 Model ensembles
- 12 Machine learning experiments
- Epilogue: Where to go from here
- Important points to remember
- References
- Index
Epilogue: Where to go from here
Summary
AND SO WE HAVE come to the end of our journey through the ‘making sense of data’ landscape. We have seen how machine learning can build models from features for solving tasks involving data. We have seen how models can be predictive or descriptive; learning can be supervised or unsupervised; and models can be logical, geometric, probabilistic or ensembles of such models. Now that I have equipped you with the basic concepts to understand the literature, there is a whole world out there for you to explore. So it is only natural for me to leave you with a few pointers to areas you may want to learn about next.
One thing that we have often assumed in the book is that the data comes in a form suitable for the task at hand. For example, if the task is to label e-mails, we conveniently learn a classifier from data in the form of labelled e-mails. For tasks such as class probability estimation I introduced the output space (for the model) as separate from the label space (for the data), because the model outputs (class probability estimates) are not directly observable in the data and have to be reconstructed. An area where the distinction between data and model output is much more pronounced is reinforcement learning. Imagine you want to learn how to be a good chess player. This could be viewed as a classification task, but then you would need a teacher to score every single move. In reinforcement learning the only feedback is a delayed reward, such as whether the game was eventually won or lost, and the value of individual moves has to be inferred from that.
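To make the contrast concrete, here is a minimal sketch of tabular Q-learning, a standard reinforcement learning algorithm (this example is not from the book; the 'corridor' environment and all parameter values are illustrative assumptions). The agent receives a reward only when an episode ends, yet it learns a value for every individual move:

```python
import random

# Toy 'corridor' environment: states 0..5, start at state 0.
# Actions: 0 = left, 1 = right. Only reaching state 5 gives reward 1;
# every other step gives reward 0, so feedback is delayed: the agent
# is never told how good any individual move was, only how the episode ends.
N = 6

def step(state, action):
    """Apply an action and return (next_state, reward, done)."""
    next_state = max(0, state - 1) if action == 0 else min(N - 1, state + 1)
    done = (next_state == N - 1)
    return next_state, (1.0 if done else 0.0), done

Q = [[0.0, 0.0] for _ in range(N)]        # tabular action-value estimates
alpha, gamma, epsilon = 0.5, 0.9, 0.1     # learning rate, discount, exploration

def greedy(state):
    """Pick the highest-valued action, breaking ties at random."""
    best = max(Q[state])
    return random.choice([a for a in (0, 1) if Q[state][a] == best])

for episode in range(200):
    state, done = 0, False
    while not done:
        # Epsilon-greedy: mostly exploit current estimates, sometimes explore.
        action = random.randrange(2) if random.random() < epsilon else greedy(state)
        next_state, reward, done = step(state, action)
        # Q-learning update: nudge Q[state][action] towards the target built
        # from the delayed reward and the value of the best next action.
        target = reward + gamma * max(Q[next_state])
        Q[state][action] += alpha * (target - Q[state][action])
        state = next_state

# The learned greedy policy should now move right in every non-terminal state.
print([greedy(s) for s in range(N - 1)])
```

Running this should print `[1, 1, 1, 1, 1]`: the agent has learned to move right in every state, even though no teacher ever scored a single move. This is exactly the sense in which reinforcement learning reconstructs model outputs (move values) that are not directly observable in the data.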
Machine Learning: The Art and Science of Algorithms that Make Sense of Data, pp. 360–362. Cambridge University Press, 2012.