Book contents
- Frontmatter
- Contents
- List of contributors
- Acknowledgements
- PART I Introduction
- PART II Meanings of autonomy and human cognition under automation
- 2 Staying in the loop: human supervisory control of weapons
- 3 The autonomy of technological systems and responsibilities for their use
- 4 Human–machine autonomies
- PART III Autonomous weapons systems and human dignity
- PART IV Risk, transparency and legal compliance in the regulation of autonomous weapons systems
- PART V New frameworks for collective responsibility
- PART VI New frameworks for individual responsibility
- PART VII Conclusion
- Index
4 - Human–machine autonomies
from PART II - Meanings of autonomy and human cognition under automation
Published online by Cambridge University Press: 05 August 2016
Summary
We are responsible for the world of which we are a part, not because it is an arbitrary construction of our choosing but because reality is sedimented out of particular practices that we have a role in shaping and through which we are shaped.
Karen Barad, Meeting the Universe Halfway

[R]esearch and development in automation are advancing from a state of automatic systems requiring human control toward a state of autonomous systems able to make decisions and react without human interaction. DoD will continue to carefully consider the implications of these advancements.

US Department of Defense, Unmanned Systems Integrated Roadmap

This chapter takes up the question of how we might think about the increasing automation of military systems not as an inevitable ‘advancement’ of which we are the interested observers, but rather as an effect of particular world-making practices in which we need urgently to intervene. We begin from the premise that the foundation of the legality of killing in situations of war is the possibility of discrimination between combatants and non-combatants. At a time when this defining form of situational awareness seems increasingly problematic, military investments in the automation of weapon systems are growing. The trajectory of these investments, moreover, is towards the development and deployment of lethal autonomous weapons – that is, weapon systems in which the identification of targets and the initiation of fire is automated in ways that preclude deliberative human intervention. Challenges to these developments underscore the immorality and illegality of delegating responsibility for the use of force against human targets to machines, and the requirements of international humanitarian law that there be (human) accountability for acts of killing. In these debates, the articulation of differences between humans and machines is key.
The aim of this chapter is to strengthen arguments against the increasing automation of weapon systems, by expanding the frame or unit of analysis that informs these debates. We begin by tracing the genealogy of concepts of autonomy within the philosophical traditions that animate artificial intelligence, with a focus on the history of early cybernetics and contemporary approaches to machine learning in behaviour-based robotics.
- Type: Chapter
- Book: Autonomous Weapons Systems: Law, Ethics, Policy, pp. 75–102
- Publisher: Cambridge University Press
- Print publication year: 2016