Book contents
- Frontmatter
- Contents
- Preface
- Prologue: A machine learning sampler
- 1 The ingredients of machine learning
- 2 Binary classification and related tasks
- 3 Beyond binary classification
- 4 Concept learning
- 5 Tree models
- 6 Rule models
- 7 Linear models
- 8 Distance-based models
- 9 Probabilistic models
- 10 Features
- 11 Model ensembles
- 12 Machine learning experiments
- Epilogue: Where to go from here
- Important points to remember
- References
- Index
3 - Beyond binary classification
Published online by Cambridge University Press: 05 November 2012
Summary
THE PREVIOUS CHAPTER introduced binary classification and associated tasks such as ranking and class probability estimation. In this chapter we will go beyond these basic tasks in a number of ways. Section 3.1 discusses how to handle more than two classes. In Section 3.2 we consider the case of a real-valued target variable. Section 3.3 is devoted to various forms of learning that are either unsupervised or aimed at learning descriptive models.
Handling more than two classes
Certain concepts are fundamentally binary. For instance, the notion of a coverage curve does not easily generalise to more than two classes. We will now consider general issues related to having more than two classes in classification, scoring and class probability estimation. The discussion will address two issues: how to evaluate multi-class performance, and how to build multi-class models out of binary models. The latter is necessary for some models, such as linear classifiers, that are primarily designed to separate two classes. Other models, including decision trees, handle any number of classes quite naturally.
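As an illustration of the latter point, below is a minimal sketch of the one-vs-rest scheme for turning a binary learner into a k-class classifier: train one binary model per class, treating that class as positive and all other classes as negative, then predict the class whose model scores highest. The `BinaryLearner` here is a hypothetical stand-in (a simple class-mean scorer), not a model from this book.

```python
import numpy as np

class BinaryLearner:
    """Hypothetical binary scorer: compares distances to the two class means."""
    def fit(self, X, y):
        self.pos_mean = X[y == 1].mean(axis=0)
        self.neg_mean = X[y == 0].mean(axis=0)
        return self

    def score(self, X):
        # Higher score means more confidently positive.
        return (np.linalg.norm(X - self.neg_mean, axis=1)
                - np.linalg.norm(X - self.pos_mean, axis=1))

def one_vs_rest_fit(X, y, classes):
    # One binary model per class: that class is positive, the rest negative.
    return {c: BinaryLearner().fit(X, (y == c).astype(int)) for c in classes}

def one_vs_rest_predict(models, X):
    # Predict the class whose binary model assigns the highest score.
    classes = list(models)
    scores = np.column_stack([models[c].score(X) for c in classes])
    return np.array(classes)[scores.argmax(axis=1)]

X = np.array([[0., 0.], [0., 1.], [5., 5.], [5., 6.], [10., 0.], [10., 1.]])
y = np.array([0, 0, 1, 1, 2, 2])
models = one_vs_rest_fit(X, y, classes=[0, 1, 2])
print(one_vs_rest_predict(models, X))  # [0 0 1 1 2 2]
```

An alternative scheme is one-vs-one, which trains a binary model for every pair of classes and combines their predictions by voting; it trains more models, but each on a smaller two-class subset of the data.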
Multi-class classification
Classification tasks with more than two classes are very common. For instance, once a patient has been diagnosed as suffering from a rheumatic disease, the doctor will want to classify him or her further into one of several variants. If we have k classes, the performance of a classifier can be assessed using a k-by-k contingency table. Assessing performance is easy if we are interested in the classifier's accuracy, which is still the sum of the descending diagonal of the contingency table, divided by the number of test instances.
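To make this concrete, here is a small sketch of the accuracy computation on a made-up 3-by-3 contingency table (rows are actual classes, columns are predicted classes); the numbers are illustrative, not taken from an experiment.

```python
import numpy as np

# Rows: actual class; columns: predicted class.
table = np.array([[15,  2,  3],
                  [ 7, 15,  8],
                  [ 2,  3, 45]])

# Accuracy = correctly classified (the descending diagonal) over all instances.
accuracy = np.trace(table) / table.sum()
print(accuracy)  # 0.75
```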
- Type: Chapter
- Information: Machine Learning: The Art and Science of Algorithms that Make Sense of Data, pp. 81–103
- Publisher: Cambridge University Press
- Print publication year: 2012