Book contents
- Frontmatter
- Contents
- List of Symbols
- Acknowledgments
- Part I Overview of Adversarial Machine Learning
- Part II Causative Attacks on Machine Learning
- Part III Exploratory Attacks on Machine Learning
- 7 Privacy-Preserving Mechanisms for SVM Learning
- 8 Near-Optimal Evasion of Classifiers
- Part IV Future Directions in Adversarial Machine Learning
- Part V Appendixes
- Glossary
- References
- Index
8 - Near-Optimal Evasion of Classifiers
from Part III - Exploratory Attacks on Machine Learning
Published online by Cambridge University Press: 14 March 2019
Summary
In this chapter, we explore a theoretical model for quantifying the difficulty of Exploratory attacks against a trained classifier. Unlike in the preceding chapters, the classifier has already been trained, so the adversary can no longer exploit vulnerabilities in the learning algorithm to mistrain it, as we demonstrated in the first part of this book. Instead, the adversary must exploit vulnerabilities that the classifier accidentally acquired from training on benign data (or at least data not controlled by the adversary in question). Most nontrivial classification tasks lead to some form of vulnerability in the classifier. All known detection techniques are susceptible to blind spots (i.e., classes of miscreant activity that fail to be detected), but simply knowing that blind spots exist is insufficient. The principal question is how difficult it is for an adversary to discover a blind spot that is most advantageous to it. We therefore explore a framework for quantifying how difficult it is for the adversary to search for this type of vulnerability in a classifier.
At first, it may appear that the ultimate goal of these Exploratory attacks is to reverse engineer the learned parameters, internal state, or the entire boundary of a classifier to discover its blind spots. However, in this work, we adopt a more refined strategy; we demonstrate successful Exploratory attacks that only partially reverse engineer the classifier. Our techniques find blind spots using only a small number of queries and yield near-optimal strategies for the adversary. They discover data points that the classifier will classify as benign and that are close to the adversary's desired attack instance.
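To make the query-based strategy concrete, the following is a minimal sketch rather than an algorithm from this chapter. It assumes a hypothetical boolean `classify` oracle and a known benign instance, and runs a binary search along the segment between the desired attack instance and that benign instance; with roughly log(1/eps) queries it returns a point the classifier labels benign that lies close to the decision boundary, and hence close to the attack instance along that segment.

```python
import numpy as np

def line_search_evasion(classify, x_attack, x_benign, eps=1e-3):
    """Binary search along the segment from x_attack to x_benign.

    classify(x) -> True if the classifier labels x malicious, False if benign.
    Assumes classify(x_attack) is True and classify(x_benign) is False.
    Returns a benign-labeled point within eps (in the segment parameter)
    of the decision boundary.
    """
    lo, hi = 0.0, 1.0                    # invariant: lo is malicious, hi is benign
    while hi - lo > eps:
        mid = 0.5 * (lo + hi)
        if classify((1 - mid) * x_attack + mid * x_benign):
            lo = mid                     # still detected: move toward x_benign
        else:
            hi = mid                     # evades: move back toward x_attack
    return (1 - hi) * x_attack + hi * x_benign

# Toy usage with a linear classifier (purely illustrative).
w, b = np.array([1.0, 2.0]), -1.0
classify = lambda x: float(np.dot(w, x) + b) > 0   # True = "malicious"
x_adv = line_search_evasion(classify, np.array([3.0, 3.0]), np.array([0.0, 0.0]))
assert not classify(x_adv)
```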
While learning algorithms allow the detection algorithm to adapt over time, real-world constraints on the learning algorithm typically allow an adversary to programmatically find blind spots in the classifier. We consider how an adversary can systematically discover blind spots by querying the filter to find a low-cost (for some cost function) instance that evades the filter. Consider, for example, a spammer who wishes to minimally modify a spam message so it is not classified as spam (here cost is a measure of how much the spam must be modified). By observing the responses of the spam detector, the spammer can search for such a modification using only a small number of queries.
- Type: Chapter
- Information: Adversarial Machine Learning, pp. 199-238
- Publisher: Cambridge University Press
- Print publication year: 2019