
A new problem of evil?

Published online by Cambridge University Press:  14 April 2025

Nesim Aslantatar*
Affiliation:
Department of Philosophy, Indiana University Bloomington, Bloomington, IN, USA

Abstract

This article examines whether artificial intelligence (AI)-driven harm can be classified as moral or natural evil, or whether a new category – artificial evil – is needed. Should AI's harm be seen as a product of human design, so that moral responsibility remains with its creators, or do AI's autonomous actions instead resemble natural evil, where harm arises unintentionally? The concept of artificial evil, combining elements of both moral and natural evil, is presented to better address this dilemma. Just as AI is still a form of intelligence (albeit non-biological), artificial evil would still be evil in the sense that it results in real harm or suffering – it is just that this harm is produced by AI systems rather than by nature or human moral agents directly. The discussion then extends into the realm of defence and theodicy, drawing parallels with the Free Will Defence and asking whether AI autonomy may be justified even if it results in harm, much like human free will. Ultimately, the article calls for a re-evaluation of our ethical frameworks and glossary of terms to address the emerging challenges of AI autonomy and its moral implications.

Type
The Big Question
Copyright
© The Author(s), 2025. Published by Cambridge University Press.
