Book contents
- Frontmatter
- Contents
- General Introduction
- PART I THE NATURE OF MACHINE ETHICS
- PART II THE IMPORTANCE OF MACHINE ETHICS
- PART III ISSUES CONCERNING MACHINE ETHICS
- Introduction
- 6 What Matters to a Machine?
- 7 Machine Ethics and the Idea of a More-Than-Human Moral World
- 8 On Computable Morality
- 9 When Is a Robot a Moral Agent?
- 10 Philosophical Concerns with Machine Ethics
- 11 Computer Systems
- 12 On the Morality of Artificial Agents
- 13 Legal Rights for Machines
- PART IV APPROACHES TO MACHINE ETHICS
- PART V VISIONS FOR MACHINE ETHICS
- References
8 - On Computable Morality
An Examination of Machines as Moral Advisors
from PART III - ISSUES CONCERNING MACHINE ETHICS
Published online by Cambridge University Press: 01 June 2011
Summary
Introduction
Is humanity ready or willing to accept machines as moral advisors? The use of various sorts of machines to give moral advice and even to take moral decisions in a wide variety of contexts is now under way. This raises some interesting and difficult ethical issues. It is not clear how people will react to this development when they become more generally aware of it. Nor is it clear how this technological innovation will affect human moral beliefs and behavior. It may also be a development that has long-term implications for our understanding of what it is to be human.
This chapter will focus on rather more immediate and practical concerns. If this technical development is occurring or about to occur, what should our response be? Is it an area of science in which research and development should be controlled or banned on ethical grounds? What sort of controls, if any, would be appropriate?
As a first move it is important to separate the question “Can it be done and, if so, how?” from the question “Should it be done?” There are, of course, overlaps and interdependencies between these two questions. In particular, there may be technical ways in which it should be done and technical ways in which it shouldn't be done. For example, some types of artificial intelligence (AI) systems (such as conventional rule-based systems) may be more predictable in their output than other AI technologies.
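To make that contrast concrete, below is a minimal, hypothetical sketch of a conventional rule-based moral advisor. The rule conditions, action descriptions, and advice strings are illustrative assumptions, not anything proposed in the chapter; the point is only that the mapping from input to advice is a fixed, inspectable list of rules, which is why such systems tend to be more predictable in their output than learned or statistical alternatives.

```python
# Hypothetical sketch: a "conventional rule-based" moral advisor.
# Every rule and advice string here is an illustrative assumption.
# Because the rules are applied in a fixed order, the same description
# of an action always yields the same advice, so the output is predictable.

RULES = [
    # (condition on the described action, advice returned when it matches)
    (lambda facts: facts.get("involves_deception", False),
     "Advise against: the action involves deception."),
    (lambda facts: facts.get("breaks_promise", False),
     "Advise against: the action breaks a promise."),
    (lambda facts: facts.get("prevents_serious_harm", False),
     "Advise in favour: the action prevents serious harm."),
]


def advise(facts: dict) -> str:
    """Return moral advice by applying the rules in a fixed order."""
    for condition, advice in RULES:
        if condition(facts):
            return advice
    return "No rule applies: refer the decision to a human."


if __name__ == "__main__":
    print(advise({"involves_deception": True}))
    print(advise({"prevents_serious_harm": True}))
    print(advise({}))
```

Nothing about this sketch settles the "Should it be done?" question; it only illustrates why the technical choice of architecture (fixed rules versus opaque learned models) is itself ethically relevant to how such advisors might be controlled.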
Information
- Type: Chapter
- In: Machine Ethics, pp. 138-150
- Publisher: Cambridge University Press
- Print publication year: 2011