A new problem of evil?
Published online by Cambridge University Press: 14 April 2025
Abstract
This article examines whether artificial intelligence (AI)-driven harm can be classified as moral or natural evil, or whether a new category – artificial evil – is needed. Should AI’s harm be seen as a product of human design, so that moral responsibility remains with its creators, or do AI’s autonomous actions resemble natural evil, where harm arises without intention? The concept of artificial evil, combining elements of both moral and natural evil, is proposed to address this dilemma. Just as AI is still a form of intelligence (albeit non-biological), artificial evil is still evil in the sense that it results in real harm or suffering; the difference is that this harm is produced by AI systems rather than by nature or by human moral agents directly. The discussion then extends into the realm of defence and theodicy, drawing parallels with the Free Will Defence and asking whether AI autonomy may be justified even when it results in harm, much as human free will is. Ultimately, the article calls for a re-evaluation of our ethical frameworks and vocabulary of terms to address the emerging challenges of AI autonomy and its moral implications.
- Type: The Big Question
- Copyright: © The Author(s), 2025. Published by Cambridge University Press.