Book contents
- Frontmatter
- Contents
- Contributors
- Preface
- Part I Introduction
- Part II Neuromorphic robots: biologically and neurally inspired designs
- Part III Brain-based robots: architectures and approaches
- Part IV Philosophical and theoretical considerations
- Part V Ethical considerations
- 14 Ethical implications of intelligent robots
- 15 Toward robot ethics through the ethics of autism
- Index
- References
14 Ethical implications of intelligent robots
from Part V - Ethical considerations
Published online by Cambridge University Press: 05 February 2012
Summary
Introduction
The ethical challenges of robot development were thrust dramatically onto center stage in 1950 with Asimov’s I, Robot, the collection of short stories in which the three “Laws of Robotics” first appeared. The “laws” assume that robots are (or will be) capable of perception and reasoning, that they will have intelligence at least comparable to that of a child, and that they will remain subservient to humans. Thus, the first law reads:
“A robot may not injure a human being, or, through inaction, allow a human being to come to harm.”
Clearly, in these days when military robots are used to kill humans, this law is (perhaps regrettably) obsolete. It nevertheless raises fundamental questions about the relationship between humans and robots, especially when robots are capable of exerting lethal force. Asimov’s law also glosses over the complexity of designing machines with a sense of morality. As one of several possible approaches to controlling their behavior, robots could be equipped with specialized software ensuring that they conform to the “Laws of War” and the “Rules of Engagement” of a particular conflict. After realistic simulation and testing, such software controls might not prevent all unethical behavior, but they could ensure that robots behave at least as ethically as human soldiers do (Arkin, 2009), though many critics still regard this as an inadequate solution.
Today’s military robots can navigate autonomously, but most still depend on a remote human operator to “pull the trigger” that releases a missile or other weapon. Research in neuromorphic and brain-based robotics may hold the key to significantly more advanced artificial intelligence and robotics, perhaps to the point where we would entrust routine attack decisions to the robots themselves. But what moral issues ought we to consider before giving machines the ability to make such life-or-death decisions?
- Type: Chapter
- Information: Neuromorphic and Brain-Based Robots, pp. 323-344
- Publisher: Cambridge University Press
- Print publication year: 2011