
Moral artificial intelligence and machine puritanism

Published online by Cambridge University Press: 04 October 2023

Jean-François Bonnefon*
Affiliation: Toulouse School of Economics, Toulouse, France. [email protected]; https://jfbonnefon.github.io/

Abstract

Puritanism may evolve into a technological variant based on norms of delegation of actions and perceptions to artificial intelligence. Instead of training self-control, people may be expected to cede their agency to self-controlled machines. The cost–benefit balance of this machine puritanism may be less aversive to wealthy individualistic democracies than that of the old puritanism they have abandoned.

Type: Open Peer Commentary
Copyright: © The Author(s), 2023. Published by Cambridge University Press

The authors make a compelling case that puritan morality is a cognitive technology aimed at facilitating cooperative behavior, based on folk beliefs about the importance and trainability of self-control for overcoming temptations. The puritan technology is crude and costly, though. Crude, because puritan morality can be too optimistic in its belief that self-control can be trained, or too confident in the efficacy of its training regimen. Costly, because puritan morality asks a lot of people. It requires them to voluntarily renounce many of the pleasures that the world can offer, and it restricts freedom, particularly that of women, in the name of not creating temptations for others. Given the fragility of this cost–benefit balance, it is perhaps no surprise that puritan morality has fallen out of fashion in wealthy, individualistic democracies that offer abundant access to all sorts of pleasures and put a high value on individual freedom.

Here I suggest that a different form of puritanism may emerge in these wealthy individualistic societies: a technological variant that changes its cost–benefit balance. The key idea is that progress in artificial intelligence has created a new class of agents for our moral psychology to contend with: autonomous, intelligent machines whose decisions can fall in the moral domain. For example, autonomous cars take on the duty of protecting the lives of road users, and recommendation algorithms take on the duty of steering children away from inappropriate content. These machines have a moral duty, and are given a considerable degree of autonomy to perform it. Although they do not always guarantee ethical outcomes (Köbis, Bonnefon, & Rahwan, 2021), machines are paragons of puritan morality, because they do not indulge in anything. Gluttony and lust are unknown to them. They do not dress immodestly, or engage in unruly dance. They do not drink alcohol or consume any other drug. They do not yield to temptation, because they do not experience it, just as the perfect puritan would.

This is indeed one of the first things that people say when arguing about the benefits of autonomous cars (Shariff, Bonnefon, & Rahwan, 2017): autonomous cars are never drunk or under the influence of any substance, they do not look at their phone when driving, and they do not fall asleep at the wheel after a night of partying. In other words, they achieve the cooperative behavior that puritan morality seeks through the perfect display of self-control that puritan morality values. What is more, they may do so with greater efficacy and at lower cost. Greater efficacy, because it may at some point be easier to program a car to drive safely than to train a human to do the same (Shariff, Bonnefon, & Rahwan, 2021). Lower cost, because they remove the need to abstain from alcohol or partying: people no longer need to renounce bodily pleasures, as long as they cede their agency to their car.

This is an example of what I will tentatively call "machine puritanism." Machine puritanism is a moral system in which people are not expected to build or exercise self-control, but are instead expected to cede their agency to self-controlled machines, either through the delegation of their actions or through the delegation of their perceptions. Machine puritanism replaces puritan norms with a novel set of norms that may be less aversive to members of wealthy individualistic democracies, because they promise better outcomes for lower personal effort.

We have already considered one such example of norm substitution: instead of requiring that people abstain from drinking and partying before driving, machine puritanism requires that they always cede their driving decisions to autonomous cars. Other forms of action delegation may involve letting machines speak for us, in order to maintain decency of speech (Hancock, Naaman, & Levy, 2020). Puritan norms would require people to discipline themselves into suppressing emotions like anger or infatuation, so that their speech is free of hostility or innuendo; machine puritanism would give people leave to feel whatever they feel, in exchange for letting machines rewrite their emails, text messages, and social media posts to eliminate every trace of inappropriate speech (Gonçalves et al., 2021). In a more extreme form of this norm, people may be expected to let a machine block their communications if the machine detects that they are in too emotionally aroused a state.

Machine puritanism may include norms of delegated perception, in addition to norms of delegated action. Puritanism requires people to avoid situations in which they could be exposed to arousing stimuli, as well as not to expose others to such stimuli. Machine puritanism would let people do as they please, but give them the option of erasing stimuli from their perception. Instead of refusing to go to a restaurant where alcohol is served, out of fear that they would be tempted to drink, machine puritans could instruct their phone to eliminate the alcohol offerings from the restaurant menu they access through a QR code. Instead of refusing to go to the beach, out of fear of seeing nude bodies, machine puritans could instruct their smart glasses or contacts to blur the bodies of other beachgoers. At some point, the use of such a filter could itself become the norm: why would you elect to see the bodies of others, if your smart contacts can give them privacy?

The wealthy and individualistic democracies of the West, in which puritan norms have been largely abandoned, are also among the first societies in which intelligent machines will become massively available. With this availability will come the possibility of new puritan norms, which will no longer emphasize the training of self-control, but will instead require that we cede control of our perceptions and decisions to these new technological paragons of puritan morality.

Financial support

Bonnefon acknowledges support from grant ANR-19-PI3A-0004, grant ANR-17-EURE-0010, and the research foundation TSE-Partnership.

Competing interest

None.

References

Gonçalves, J., Weber, I., Masullo, G. M., Torres da Silva, M., & Hofhuis, J. (2021). Common sense or censorship: How algorithmic moderators and message type influence perceptions of online content deletion. New Media & Society, 14614448211032310.
Hancock, J. T., Naaman, M., & Levy, K. (2020). AI-mediated communication: Definition, research agenda, and ethical considerations. Journal of Computer-Mediated Communication, 25(1), 89–100.
Köbis, N., Bonnefon, J. F., & Rahwan, I. (2021). Bad machines corrupt good morals. Nature Human Behaviour, 5(6), 679–685.
Shariff, A., Bonnefon, J. F., & Rahwan, I. (2017). Psychological roadblocks to the adoption of self-driving vehicles. Nature Human Behaviour, 1(10), 694–696.
Shariff, A., Bonnefon, J. F., & Rahwan, I. (2021). How safe is safe enough? Psychological mechanisms underlying extreme safety demands for self-driving cars. Transportation Research Part C: Emerging Technologies, 126, 103069.