
Artificial Agents in Natural Moral Communities: A Brief Clarification

Published online by Cambridge University Press:  10 June 2021

Daniel W. Tigard*
Affiliation: Institute for History and Ethics of Medicine, Technical University of Munich, 81675 Munich, Germany

*Corresponding author. Email: [email protected]

Abstract

What exactly is it that makes one morally responsible? Is it a set of facts which can be objectively discerned, or is it something more subjective, a reaction to the agent or context-sensitive interaction? This debate gets raised anew when we encounter newfound examples of potentially marginal agency. Accordingly, the emergence of artificial intelligence (AI) and the idea of “novel beings” represent exciting opportunities to revisit inquiries into the nature of moral responsibility. This paper expands upon my article “Artificial Moral Responsibility: How We Can and Cannot Hold Machines Responsible” and clarifies my reliance upon two competing views of responsibility. Although AI and novel beings are not close enough to us in kind to be considered candidates for the same sorts of responsibility we ascribe to our fellow human beings, contemporary theories show us the priority and adaptability of our moral attitudes and practices. This allows us to take seriously the social ontology of relationships that tie us together. In other words, moral responsibility is to be found primarily in the natural moral community, even if we admit that those communities now contain artificial agents.

Type: Response

Copyright: © The Author(s), 2021. Published by Cambridge University Press


References

1. Tigard D. Artificial moral responsibility: How we can and cannot hold machines responsible. Cambridge Quarterly of Healthcare Ethics, forthcoming; available at https://doi.org/10.1017/S0963180120000985.

2. Shoemaker, D. Responsibility from the Margins. New York: Oxford University Press; 2015.

3. That being said, I want to reiterate—as I pointed out in the initial paper—that with my inquiry into responsibility for artificial intelligence, I deviate from Shoemaker’s investigation of natural subjects. Accordingly, I take full responsibility for any unbecoming distortions of his theory.

4. Champagne M. The mandatory ontology of robot responsibility. Cambridge Quarterly of Healthcare Ethics, forthcoming; available at https://doi.org/10.1017/S0963180120000997.

5. Note that using the notion of “extremes” to depict the two views is simply an analytic tool, a way of drawing definite distinctions. I do not believe many theorists hold either of these extremes; instead, it seems more plausible to see the two views along a continuum, or to maintain some combination of them. In any case, it is unclear how my framing of the contrast “misconstrues the relation,” as Champagne writes.

6. For a fuller explanation, see note 2, Shoemaker 2015, at 19–20.

7. Yet, on occasion, he says things like “once the jury has found one guilty, one is (and thus was) guilty” (emphasis in original), indicating support for a more constructivist reading, which helps my case for locating responsibility in our practices.

8. Hence my subtitle: How we can and cannot hold machines responsible. See note 1.

9. Shoemaker, D. Response-dependent responsibility; or a funny thing happened on the way to blame. Philosophical Review 2017;126:481–527.

10. See Tognazzini’s defense of contemporary Strawsonians, in Tognazzini, N. Blameworthiness and the affective account of blame. Philosophia 2013;41:1299–312.

11. Strawson, PF. Freedom and resentment. Proceedings of the British Academy 1962;48:1–25. Considering the enormous impact of Strawson’s work, we see that Champagne is simply mistaken to think the objective view of responsibility has “been around for too long” to be dislodged.

12. Favro M. Mother says 16-month-old son injured by security robot at Stanford shopping center. NBC Los Angeles 12 July 2016; available at https://www.nbclosangeles.com/news/national-international/15-Month-Old-Boy-Injured-By-Robot-at-Stanford-Shopping-Center-386544141.html (last accessed 13 Dec 2019).

13. See, for example, Ren, F. Affective information processing and recognizing human emotion. Electronic Notes in Theoretical Computer Science 2009;225:39–50. Also, Knight, W. Amazon working on making Alexa recognize your emotions. MIT Technology Review 2016.

14. See, for example, Parthemore, J, Whitby, B. Moral agency, moral responsibility, and artifacts: What existing artifacts fail to achieve (and why), and why they, nevertheless, can (and do!) make moral claims upon us. International Journal of Machine Consciousness 2014;6:141–61; also, Breazeal, C, Scassellati, B. How to build robots that make friends and influence people. Proceedings 1999 IEEE/RSJ International Conference on Intelligent Robots and Systems, 17–21 Oct 1999, Kyongju, South Korea; Garreau J. Bots on the ground: In the field of battle (or even above it), robots are a soldier’s best friend. Washington Post 6 May 2007.

15. Champagne, M, Tonkens, R. Bridging the responsibility gap in automated warfare. Philosophy and Technology 2015;28:125–37CrossRefGoogle Scholar; Tigard, D. Taking the blame: appropriate responses to medical error. Journal of Medical Ethics 2019;45:101–5CrossRefGoogle ScholarPubMed. Tigard, D. Taking one for the team: A reiteration on the role of self-blame after medical error. Journal of Medical Ethics 2020;46:342–4.CrossRefGoogle Scholar