Book contents
- Frontmatter
- Contents
- List of Symbols
- Acknowledgments
- Part I Overview of Adversarial Machine Learning
- 1 Introduction
- 2 Background and Notation
- 3 A Framework for Secure Learning
- Part II Causative Attacks on Machine Learning
- Part III Exploratory Attacks on Machine Learning
- Part IV Future Directions in Adversarial Machine Learning
- Part V Appendixes
- Glossary
- References
- Index
3 - A Framework for Secure Learning
from Part I - Overview of Adversarial Machine Learning
Published online by Cambridge University Press: 14 March 2019
Summary
In this chapter we introduce a framework for qualitatively assessing the security of machine learning systems, one that captures a broad set of security characteristics common to many related adversarial learning settings. A rich body of work examines the security of machine learning systems; here we survey prior studies of learning in adversarial environments, attacks against learning systems, and proposals for making systems secure against attack. We identify different classes of attacks on machine learning systems (Section 3.3), categorizing each threat in terms of three crucial properties.
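In the taxonomy of Barreno et al. (2006, 2010) on which this framework builds, these three properties are the attacker's influence (causative vs. exploratory), the type of security violation (integrity vs. availability), and the specificity of the attack (targeted vs. indiscriminate). The following minimal Python sketch, our own illustration rather than code from the book, encodes the axes as enumerations so that a threat can be named as a single point in the taxonomy:

```python
# Illustrative sketch only (our code, not the book's): the three axes of
# the attack taxonomy, encoded so a threat is one point in the taxonomy.
from dataclasses import dataclass
from enum import Enum, auto


class Influence(Enum):
    CAUSATIVE = auto()       # attacker can alter the training data
    EXPLORATORY = auto()     # attacker only probes the learned hypothesis


class SecurityViolation(Enum):
    INTEGRITY = auto()       # harmful instances pass as innocuous (false negatives)
    AVAILABILITY = auto()    # innocuous instances are blocked (false positives)


class Specificity(Enum):
    TARGETED = auto()        # focused on particular instances
    INDISCRIMINATE = auto()  # degrades the learner broadly


@dataclass(frozen=True)
class Threat:
    """One point in the taxonomy, e.g., a causative integrity attack."""
    influence: Influence
    violation: SecurityViolation
    specificity: Specificity


# Example: poisoning a spam filter's training data so that one future
# campaign slips through is causative, integrity, and targeted.
campaign_poisoning = Threat(Influence.CAUSATIVE,
                            SecurityViolation.INTEGRITY,
                            Specificity.TARGETED)
```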
We also present secure learning as a game between an attacker and a defender; the taxonomy determines the structure of the game and its cost model. Further, the taxonomy provides a basis for evaluating the resilience of the systems we describe: by analyzing the threats against a system, we can construct defenses for it. The development of defensive learning techniques is more tentative, but we also discuss a variety of techniques that show promise for defending against different types of attacks.
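To make the game structure concrete, the sketch below (hypothetical interfaces of our own, not the book's formal game) casts a causative attack as a sequential game: the defender commits to a training procedure, the attacker then poisons the training data with knowledge of that procedure, and the outcome is scored under a simple cost model.

```python
# Illustrative sketch only (hypothetical interfaces, not the book's
# formalism): secure learning as a causative game in which the defender
# commits first and the attacker moves knowing the training procedure.
from typing import Callable, List, Tuple

Example = Tuple[List[float], int]     # (feature vector, label)
Model = Callable[[List[float]], int]  # trained classifier


def play_causative_game(
    defender_train: Callable[[List[Example]], Model],
    attacker_poison: Callable[[List[Example]], List[Example]],
    clean_data: List[Example],
    eval_data: List[Example],
) -> float:
    """Return the defender's cost under a simple 0/1 cost model."""
    # Attacker's move: contaminate the training data, knowing how
    # the defender will train.
    training_data = attacker_poison(clean_data)
    # Defender's committed move: train on possibly contaminated data.
    model = defender_train(training_data)
    # Outcome: the cost model scores the trained model; here, the
    # error rate on held-out evaluation data.
    errors = sum(1 for x, y in eval_data if model(x) != y)
    return errors / len(eval_data)
```

An exploratory game differs only in which move the attacker controls: rather than poisoning the training data, the attacker chooses evaluation instances designed to evade the already-trained model.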
The work we present not only provides a common language for thinking and writing about secure learning but also shows how the framework applies to algorithm design and to the evaluation of real-world systems. The framework elicits common themes in otherwise disparate domains, and it has motivated our study of practical machine learning systems as presented in Chapters 5, 6, and 8. These foundational principles for characterizing attacks against learning systems are an essential first step if secure machine learning is to reach its potential as a tool for real systems in security-sensitive domains.
This chapter builds on earlier research (Barreno, Nelson, Sears, Joseph, & Tygar 2006; Barreno, Nelson, Joseph, & Tygar 2010; Barreno 2008).
Analyzing the Phases of Learning
Attacks can occur at each phase of the learning process outlined in Section 2.2. Figure 2.1(a) depicts how data flows through each phase of learning. We briefly outline how attacks against these phases differ.
The Measuring Phase
With knowledge of the measurement process, an adversary can design malicious instances to mimic the measurements of innocuous data. After a successful attack against the measurement mechanism, the system may require expensive reinstrumentation or redesign to accomplish its task.
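As a toy illustration of such mimicry, consider a detector that measures only the presence of a fixed set of monitored words. The sketch below (a hypothetical feature map and word lists of our own, not an example from the book) shows how an attacker who knows the measurement process can craft a malicious message whose measured features match those of innocuous mail:

```python
# Illustrative sketch only (hypothetical feature map, not from the book):
# an attack on the measuring phase. Knowing which features are measured,
# the adversary pads a spam message so its measurements mimic ham.
SPAM_WORDS = {"winner", "free", "cash"}     # words the detector monitors
HAM_WORDS = {"meeting", "report", "lunch"}  # words typical of innocuous mail


def measure(message: str) -> dict:
    """Feature measurement: counts of monitored words only."""
    words = message.lower().split()
    return {
        "spam_hits": sum(w in SPAM_WORDS for w in words),
        "ham_hits": sum(w in HAM_WORDS for w in words),
    }


# The attacker obfuscates monitored spam words and pads with ham words,
# so the measured features look innocuous while the payload survives.
evasive = "y0u are a w1nner claim fr3e ca$h meeting report lunch"
print(measure(evasive))  # {'spam_hits': 0, 'ham_hits': 3}
```

Once attackers defeat the measurement mechanism in this way, retraining the model on the same features cannot help, which is why recovery may require reinstrumenting the system with new measurements.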
Type: Chapter
Information: Adversarial Machine Learning, pp. 29-66
Publisher: Cambridge University Press
Print publication year: 2019