
Dual-process moral judgment beyond fast and slow

Published online by Cambridge University Press:  18 July 2023

Joshua D. Greene*
Affiliation:
Department of Psychology, Center for Brain Science, Harvard University, Cambridge, MA, USA [email protected] https://www.joshua-greene.net

Abstract

De Neys makes a compelling case that sacrificial moral dilemmas do not elicit competing "fast and slow" processes. But are there even two processes? Or just two intuitions? There remains strong evidence, most notably from lesion studies, that sacrificial dilemmas engage distinct cognitive processes generating conflicting emotional and rational responses. The dual-process theory gets much right, but needs revision.

Type
Open Peer Commentary
Copyright
Copyright © The Author(s), 2023. Published by Cambridge University Press

As a proponent of the dual-process theory of moral judgment (Greene, 2013; Greene, Sommerville, Nystrom, Darley, & Cohen, 2001; Greene et al., 2004, 2008), I had, following Kahneman (2003, 2011), long thought of its dual processes as respectively "fast and slow." De Neys makes a compelling case that this is not so. One might conclude, then, that the dual-process theory of moral judgment should be retired. But that, I believe, would be a mistake. There remains strong evidence that moral dilemmas elicit competing responses that are supported by distinct cognitive systems, that one response is meaningfully characterized as more emotional, and that the other is meaningfully characterized as more rational. We may simply need to drop the idea that one response gets a head start in decision making.

For the uninitiated… we are considering sacrificial moral dilemmas such as the footbridge case, in which one can save five lives by pushing someone in front of a speeding trolley (Thomson, 1985). According to the dual-process theory, the characteristically deontological response (that it's wrong to push) is supported by an intuitive negative emotional response to the harmful action, whereas the characteristically utilitarian judgment (that it's morally acceptable to push) is driven by more deliberative cost–benefit reasoning. What's now in question is whether the utilitarian judgment is in fact more deliberative and less intuitive.

There is accumulating evidence that the utilitarian response is not slower (Bago & De Neys, 2019; Baron & Gürçay, 2017; Cova et al., 2021; Gürçay & Baron, 2017; Koop, 2013; Rosas, Bermúdez, & Aguilar-Pardo, 2019; Tinghög et al., 2016), despite a body of evidence indicating that it is (Greene, Morelli, Lowenberg, Nystrom, & Cohen, 2008; Suter & Hertwig, 2011; Trémolière, De Neys, & Bonnefon, 2012). De Neys argues that such dilemmas simply involve two competing intuitions, and he gives no reason to think that they are driven by distinct processes. And yet, if one looks beyond reaction times and cognitive load, evidence for distinct processes abounds.

I can't properly review this evidence here, but I can describe some highlights. Here I focus on studies of lesion patients, which have produced some of the most dramatic evidence supporting the dual-process theory. Koenigs et al. (2007) showed that patients with damage to the ventromedial prefrontal cortex (VMPFC) are overwhelmingly more likely to make utilitarian judgments compared to healthy participants and brain-damaged controls. What's more, these patients have impaired emotional responses, as demonstrated by skin-conductance data. Similar results with VMPFC patients are reported by Ciaramelli, Muccioli, Làdavas, and Di Pellegrino (2007), Thomas, Croft, and Tranel (2011), and Moretto, Làdavas, Mattioli, and Di Pellegrino (2010). Demonstrating the opposite effect, McCormick, Rosenthal, Miller, and Maguire (2016) show that patients with hippocampal damage are overwhelmingly more likely to give deontological responses, and they provide parallel evidence, using both skin-conductance data and patient self-reports, that these responses are due to dominant emotional responses. Verfaellie, Hunsberger, and Keane (2021) report similar results. (But see Craver et al., 2016.) Finally, and most recently, van Honk et al. (2022) show that damage to the basolateral amygdala (a region implicated in goal-directed decision making) is associated with increased deontological judgment. And here, too, the effects appear to be due to dominant emotional responses. (Note that the basolateral amygdala is distinct from the central-medial amygdala, which is associated with classic affective responses and is what psychologists typically think of as "the amygdala.")

Cushman (2013) has reconceptualized the dual-process theory as a contrast between model-based and model-free algorithms for learning and decision making (Balleine & O'Doherty, 2010; Sutton & Barto, 2018). (See also Crockett, 2013.) Model-based judgment is based on an explicit representation of cause–effect relationships between actions and outcomes, plus values attached to outcomes. Model-free judgment depends, instead, on accessing values attached directly to actions based on prior reinforcement. Cushman and colleagues have since provided compelling evidence that utilitarian judgments are model-based, while deontological judgments are driven by model-free responses (Miller & Cushman, 2013; Patil et al., 2021). Moreover, the model-based/model-free distinction specifically explains why patients with hippocampal damage and basolateral amygdala damage make fewer utilitarian judgments (McCormick et al., 2016; van Honk et al., 2022). As Cushman emphasizes, model-based judgment is not emotion free, as value must be attached to outcomes. But as the patient data indicate, not all emotion is equally emotional.
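To make this contrast concrete, here is a minimal sketch in Python of the two valuation schemes as applied to the footbridge case. Everything in it (the cached action value, the toy outcome model, and all numbers) is an illustrative assumption for exposition, not a parameter from any of the cited studies.

    # Minimal sketch of model-free vs. model-based valuation of the
    # footbridge action. All numbers are illustrative assumptions.

    # Model-free: a value cached directly on the action itself,
    # acquired through prior reinforcement (e.g., harming others has
    # been punished); no outcomes are represented.
    cached_action_values = {"push": -10.0, "do_nothing": 0.0}

    def model_free_value(action):
        return cached_action_values[action]

    # Model-based: an explicit model of cause-effect relations
    # (action -> outcome probabilities) plus values attached to outcomes.
    outcome_model = {
        "push": {"one_dies": 1.0, "five_die": 0.0},
        "do_nothing": {"one_dies": 0.0, "five_die": 1.0},
    }
    outcome_values = {"one_dies": -1.0, "five_die": -5.0}

    def model_based_value(action):
        return sum(p * outcome_values[o]
                   for o, p in outcome_model[action].items())

    for action in ("push", "do_nothing"):
        print(action, model_free_value(action), model_based_value(action))

Under these toy numbers the two systems disagree: the model-free values favor doing nothing (the deontological response), while the model-based values favor pushing (the utilitarian response), reproducing in miniature the conflict the theory posits.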

Putting all of this together, the following picture emerges: Deontological and utilitarian judgments are driven by different processes, as indicated by the contrasting effects of damage to different brain regions. And yet the behavioral data suggest that neither of these processes is reliably faster than the other. Should we say that both responses are intuitive, as De Neys suggests? Yes, in a sense. Both responses come to mind quickly, and further processing is needed to adjudicate between them. (See Shenhav & Greene, 2014, on how these responses may be integrated.) But there is an important sense in which the deontological response is more intuitive. It is based on a feeling that the action is wrong. And, in dilemmas like the footbridge case, this feeling is affected by whether the action involves pushing versus hitting a switch (Bago et al., 2022; Greene et al., 2009). This sensitivity to the physical mechanism of harm is unconscious (Cushman, Young, & Hauser, 2006) and not easy to rationally defend (Greene, 2014, 2017). By contrast, the model-based response is based on an explicit understanding of costs and benefits. This may not require much deliberation, at least when it's just five lives versus one, but it is recognizably rational. Indeed, such judgments are correlated with a range of judgments in non-moral contexts that are unequivocally rational (Patil et al., 2021).
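To illustrate what such adjudication could look like, here is one possible sketch: a weighted combination of the two value signals, with the weights determining which response prevails. The linear form and all numbers are my expository assumptions, not the model of Shenhav and Greene (2014); the values echo the toy numbers in the earlier sketch.

    # Illustrative only: adjudicating between the two competing value
    # signals by weighting them. Numbers and linear form are assumptions.
    v_action = -10.0   # model-free aversion to the act of pushing
    v_outcome = 4.0    # model-based net value of pushing (five saved, one lost)

    def judge(w_action, w_outcome):
        # The action is judged "acceptable" if the weighted sum favors it.
        total = w_action * v_action + w_outcome * v_outcome
        return "acceptable" if total > 0 else "wrong"

    print(judge(0.8, 0.2))  # weight the action value heavily  -> "wrong"
    print(judge(0.2, 0.8))  # weight the outcome value heavily -> "acceptable"

On this toy formulation, both responses are always computed; what varies is how much weight each signal receives downstream.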

All of this suggests that the dual-process theory's fundamental distinction between emotional and rational responses remains intact, but with the surprising twist, supported by De Neys's synthesis, that it's not about fast versus slow.

Competing interest

None.

References

Bago, B., & De Neys, W. (2019). The intuitive greater good: Testing the corrective dual process model of moral cognition. Journal of Experimental Psychology: General, 148(10), 1782.
Bago, B., Kovacs, M., Protzko, J., Nagy, T., Kekecs, Z., Palfi, B., … Matibag, C. J. (2022). Situational factors shape moral judgements in the trolley dilemma in eastern, southern and western countries in a culturally diverse sample. Nature Human Behaviour, 6(6), 880–895.
Balleine, B. W., & O'Doherty, J. P. (2010). Human and rodent homologies in action control: Corticostriatal determinants of goal-directed and habitual action. Neuropsychopharmacology, 35(1), 48–69.
Baron, J., & Gürçay, B. (2017). A meta-analysis of response-time tests of the sequential two-systems model of moral judgment. Memory & Cognition, 45(4), 566–575.
Ciaramelli, E., Muccioli, M., Làdavas, E., & Di Pellegrino, G. (2007). Selective deficit in personal moral judgment following damage to ventromedial prefrontal cortex. Social Cognitive and Affective Neuroscience, 2(2), 84–92.
Cova, F., Strickland, B., Abatista, A., Allard, A., Andow, J., Attie, M., … Zhou, X. (2021). Estimating the reproducibility of experimental philosophy. Review of Philosophy and Psychology, 12(1), 9–44.
Craver, C. F., Keven, N., Kwan, D., Kurczek, J., Duff, M. C., & Rosenbaum, R. S. (2016). Moral judgment in episodic amnesia. Hippocampus, 26(8), 975–979.
Crockett, M. J. (2013). Models of morality. Trends in Cognitive Sciences, 17(8), 363–366.
Cushman, F. (2013). Action, outcome, and value: A dual-system framework for morality. Personality and Social Psychology Review, 17(3), 273–292.
Cushman, F., Young, L., & Hauser, M. (2006). The role of conscious reasoning and intuition in moral judgment: Testing three principles of harm. Psychological Science, 17(12), 1082–1089.
Greene, J. (2013). Moral tribes: Emotion, reason, and the gap between us and them. Penguin.
Greene, J. D. (2014). Beyond point-and-shoot morality: Why cognitive (neuro)science matters for ethics. Ethics, 124(4), 695–726.
Greene, J. D. (2017). The rat-a-gorical imperative: Moral intuition and the limits of affective learning. Cognition, 167, 66–77.
Greene, J. D., Cushman, F. A., Stewart, L. E., Lowenberg, K., Nystrom, L. E., & Cohen, J. D. (2009). Pushing moral buttons: The interaction between personal force and intention in moral judgment. Cognition, 111(3), 364–371.
Greene, J. D., Morelli, S. A., Lowenberg, K., Nystrom, L. E., & Cohen, J. D. (2008). Cognitive load selectively interferes with utilitarian moral judgment. Cognition, 107(3), 1144–1154.
Greene, J. D., Nystrom, L. E., Engell, A. D., Darley, J. M., & Cohen, J. D. (2004). The neural bases of cognitive conflict and control in moral judgment. Neuron, 44(2), 389–400.
Greene, J. D., Sommerville, R. B., Nystrom, L. E., Darley, J. M., & Cohen, J. D. (2001). An fMRI investigation of emotional engagement in moral judgment. Science, 293(5537), 2105–2108.
Gürçay, B., & Baron, J. (2017). Challenges for the sequential two-system model of moral judgement. Thinking & Reasoning, 23(1), 49–80.
Kahneman, D. (2003). A perspective on judgment and choice: Mapping bounded rationality. American Psychologist, 58(9), 697.
Kahneman, D. (2011). Thinking, fast and slow. Macmillan.
Koenigs, M., Young, L., Adolphs, R., Tranel, D., Cushman, F., Hauser, M., & Damasio, A. (2007). Damage to the prefrontal cortex increases utilitarian moral judgements. Nature, 446(7138), 908–911.
Koop, G. J. (2013). An assessment of the temporal dynamics of moral decisions. Judgment and Decision Making, 8(5), 527.
McCormick, C., Rosenthal, C. R., Miller, T. D., & Maguire, E. A. (2016). Hippocampal damage increases deontological responses during moral decision making. Journal of Neuroscience, 36(48), 12157–12167.
Miller, R., & Cushman, F. (2013). Aversive for me, wrong for you: First-person behavioral aversions underlie the moral condemnation of harm. Social and Personality Psychology Compass, 7(10), 707–718.
Moretto, G., Làdavas, E., Mattioli, F., & Di Pellegrino, G. (2010). A psychophysiological investigation of moral judgment after ventromedial prefrontal damage. Journal of Cognitive Neuroscience, 22(8), 1888–1899.
Patil, I., Zucchelli, M. M., Kool, W., Campbell, S., Fornasier, F., Calò, M., … Cushman, F. (2021). Reasoning supports utilitarian resolutions to moral dilemmas across diverse measures. Journal of Personality and Social Psychology, 120(2), 443.
Rosas, A., Bermúdez, J. P., & Aguilar-Pardo, D. (2019). Decision conflict drives reaction times and utilitarian responses in sacrificial dilemmas. Judgment & Decision Making, 14(5), 555–564.
Shenhav, A., & Greene, J. D. (2014). Integrative moral judgment: Dissociating the roles of the amygdala and ventromedial prefrontal cortex. Journal of Neuroscience, 34(13), 4741–4749.
Suter, R. S., & Hertwig, R. (2011). Time and moral judgment. Cognition, 119(3), 454–458.
Sutton, R. S., & Barto, A. G. (2018). Reinforcement learning: An introduction. MIT Press.
Thomas, B. C., Croft, K. E., & Tranel, D. (2011). Harming kin to save strangers: Further evidence for abnormally utilitarian moral judgments after ventromedial prefrontal damage. Journal of Cognitive Neuroscience, 23(9), 2186–2196.
Thomson, J. J. (1985). The trolley problem. Yale Law Journal, 94(6), 1395–1415.
Tinghög, G., Andersson, D., Bonn, C., Johannesson, M., Kirchler, M., Koppel, L., & Västfjäll, D. (2016). Intuition and moral decision-making – The effect of time pressure and cognitive load on moral judgment and altruistic behavior. PLoS ONE, 11(10), e0164012.
Trémolière, B., De Neys, W., & Bonnefon, J. F. (2012). Mortality salience and morality: Thinking about death makes people less utilitarian. Cognition, 124(3), 379–384.
van Honk, J., Terburg, D., Montoya, E. R., Grafman, J., Stein, D. J., & Morgan, B. (2022). Breakdown of utilitarian moral judgement after basolateral amygdala damage. Proceedings of the National Academy of Sciences, 119(31), e2119072119.
Verfaellie, M., Hunsberger, R., & Keane, M. M. (2021). Episodic processes in moral decisions: Evidence from medial temporal lobe amnesia. Hippocampus, 31(6), 569–579.