Moral Psychology and the Unity of Morality
Published online by Cambridge University Press: 16 January 2015
Abstract
Jonathan Haidt's research on moral cognition has revealed that political liberals moralize mostly in terms of Harm and Fairness, whereas conservatives moralize in terms of those plus loyalty to Ingroup, respect for Authority, and Purity (or IAP). Some have concluded that the norms of morality encompass a wide variety of subject matters with no deep unity. To the contrary, I argue that the conservative position is partially debunked by its own lights. IAP norms’ moral relevance depends on their tendency to promote welfare (especially to prevent harm). I argue that all moral agents, including conservatives, are committed to that claim at least implicitly. I then argue that an evolutionary account of moral cognition partially debunks the view that welfare-irrelevant IAP norms have moral force. Haidt's own normative commitments are harmonized by this view: IAP norms are more important than liberals often realize, yet morality is at bottom all about promoting welfare.
- Type: Research Article
- Copyright © Cambridge University Press 2015
References
1 Haidt, Jonathan, The Righteous Mind: Why Good People Are Divided by Politics and Religion (New York, 2012).
2 In Haidt's surveys, ‘very conservative’ and ‘very liberal’ are usually poles on a linear spectrum of preliminary self-identification. I suggest the labels are best understood as names for two general propensities in moralizing, rather than as political categories. These categories – along with the labels ‘Ingroup’, ‘Authority’ and ‘Purity’ – are vague and probably multifaceted. For example, different kinds of conservatives might moralize about sexual and ceremonial impurities, respectively. However, here there is no need to disambiguate further than Haidt does.
3 As confirmed so far in at least eleven cultures. See Graham, Jesse, Nosek, B. A., Haidt, Jonathan, Iyer, Ravi, Koleva, Spassena and Ditto, P. H., ‘Mapping the Moral Domain’, Journal of Personality and Social Psychology 101 (2011), pp. 366–85.
4 The MFQ is used extensively in Graham et al., ‘Mapping the Moral Domain’, and can be found in that study's appendix as well as at <http://www.moralfoundations.org/sites/default/files/files/MFQ30.self-scorable.doc>.
5 Haidt, Jonathan, Koller, Silvia Helena and Dias, Maria G., ‘Affect, Culture, and Morality, or Is It Wrong to Eat Your Dog?’, Journal of Personality and Social Psychology 65 (1993), pp. 613–28.
6 There may be various grounds for debate about the MFT model. Kurt Gray and co-authors take issue, arguing that all moral transgressions, even purity violations as understood by conservatives, are implicitly (subconsciously) understood in terms of agents harming something with a mind. See Gray, Kurt, Young, Liane and Waytz, Adam, ‘Mind Perception is the Essence of Morality’, Psychological Inquiry 23 (2012), pp. 104–24. The present article's debunking approach is more resistant to counterexamples and inconvenient empirical results than Gray et al.'s. Even if conservatives’ IAP judgements consistently depart from Gray et al.'s model, this article contends that conservatives are committed to a Harm-based moral framework as a matter of logical implication.
7 Haidt praises the virtues of conservative moral thinking in places such as Righteous Mind, pp. 305–9.
8 See Sinnott-Armstrong, Walter, ‘Is Moral Phenomenology Unified?’, Phenomenology and the Cognitive Sciences 7 (2008), pp. 85–97; and Sinnott-Armstrong, Walter and Wheatley, Thalia, ‘The Disunity of Morality and Why it Matters to Philosophy’, Monist 95 (2012), pp. 355–77.
9 Prinz, Jesse, The Emotional Construction of Morals (Oxford, 2007), p. 206.
10 In this article I will use ‘harm prevention’ and ‘welfare promotion’ as shorthand phrases for the following property of moral rules: their serving to promote welfare, especially by prohibiting and discouraging actions which are harmful or tend to be harmful, and also by recommending or requiring actions which are beneficial or tend to be beneficial.
11 Such as by Keller, Simon, ‘Welfarism’, Philosophy Compass 4 (2009), pp. 82–95.
12 For other treatments of welfarism as neutral on this score, see Keller, ‘Welfarism’, p. 88, and Sumner, L. W., Welfare, Happiness, and Ethics (Oxford, 1996), ch. 7.
13 Debate on that interesting question takes place, for example, in Sinnott-Armstrong, ‘Is Moral Phenomenology Unified?’.
14 There are extremely rare scenarios in which one could triple one's pawns, and lose material, but checkmate one's opponent (or avoid being checkmated). Such cases illustrate the ultimate normative dependence of (3) on (1B). They are also the reason for the qualifier: (3) applies approximately when, and only when, (2) applies.
15 To simplify, I’ll ignore the alternative welfarist thesis that some IAP norms depend on (~H) and other IAP norms depend on a rule like (B) ‘benefit others in circumstance x’.
16 Indeed, we sometimes seem to use a normative sense of ‘harm’ on which A harms B iff A wrongfully puts B in a harmed state. Feinberg, Joel, The Moral Limits of the Criminal Law, Volume 1: Harm to Others (Oxford, 1984), p. 34.
17 This list borrows heavily from Feinberg, Harm to Others, p. 105.
18 Cf. Keller, ‘Welfarism’.
19 This account should be friendly both to common sense and to Haidt, who characterizes harm simply by associating it with a cluster of related concepts. These include (physical and emotional) need, suffering, distress; death; cruelty, unkindness; care, compassion; attachment, nurturance, and tender feelings toward cute stimuli. See e.g. Righteous Mind, pp. 131–4.
20 My categories of ‘vulnerability-’ and ‘desire-grounded’ interests are parallel to what Feinberg calls ‘welfare interests’ and ‘ulterior interests’, respectively (see Harm to Others, p. 37). For similar distinctions, see Rescher, Nicholas, Welfare: The Social Issue in Philosophical Perspective (Pittsburgh, 1972), and Harrosh, Shlomit, ‘Identifying Harms’, Bioethics 26 (2012), pp. 493–8.
21 Feinberg, Harm to Others, p. 37.
22 Feinberg, Harm to Others, pp. 44–5.
23 There are many important questions about the nature of harm which this article cannot and need not settle. A sampling: first, to what extent a desire must meet the six conditions we have listed in order to ground an interest. Second, whether certain entities can be patients of harm – e.g. animals, foetuses, groups of people, organizations, the natural environment, future persons, dead persons, the institution of marriage, etc. Third, whether and how harms are relative to a ‘baseline’ of normality. Fourth, whether we should distinguish being harmed from being offended (e.g. annoyed, embarrassed) or from other negative emotional or hedonic states (e.g. boredom, foul odours).
24 The general thought is that IAP considerations might be a reason in one case but no reason at all in another case, or even a countervailing reason. For discussion of holism about reasons see Dancy, Jonathan, ‘Holism in the Theory of Reasons’, Cogito 6 (1992), pp. 136–8.
25 The basis for this assumption comes from studies which ask conservatives about justifications of moral rules. The best example is Haidt et al., ‘Affect, Culture, and Morality’. Haidt expounds this view in Righteous Mind, as well as ‘The Emotional Dog and its Rational Tail: A Social Intuitionist Approach to Moral Judgment’, Psychological Review 108 (2001), pp. 814–34. This assumption would be worth questioning elsewhere; I am aware of no study in which conservatives are directly asked, ‘but why is it bad to do something that violates that kind of Ingroup/Authority/Purity norm?’
26 According to at least two lines of evidence. First, recent internet surveys. In Haidt and colleagues’ (Graham et al., ‘Mapping the Moral Domain’) survey of people across political ideologies in eleven world regions, every political category on average assigned relevance of between ‘somewhat’ and ‘very’ to considerations relating to Harm (e.g. cruelty, infliction of suffering) and Fairness (e.g. discrimination, denying people their rights). Second, anthropologists have apparently confirmed that harm prohibitions are culturally universal. See Nichols, Shaun, Sentimental Rules (Oxford, 2004), p. 142.
27 The moral/conventional distinction(s) apparently emerge in children as young as 3.5 years, and have been documented in children, adolescents and adults in numerous cultures including Brazil, China, India, Indonesia (preschool children both Muslim and Christian), Israel (both Arab and kibbutz Jewish children), Korea, Ijo children in Nigeria, and Zambia. Autistic children have also been observed to draw the standard moral/conventional distinction normally. For an overview see Nucci, Larry, Education in the Moral Domain (Cambridge, 2001).
28 Knowledgeable readers may at this point recall recent studies showing that IAP norms elicit some or all of the moral attitude profile ((A)–(D)) in certain groups, especially certain conservatives, non-westerners, and others. That is simply a distraction concerning the periphery of the category of norms that elicit ‘moral’ attitudes. There is no question about the point being made in the main text – that harm norms are within the core of that category. The most noteworthy recent criticism of the moral/conventional distinction is Kelly, Daniel, Stich, Stephen, Haley, K. J., Eng, S. J. and Fessler, D. M. T., ‘Harm, Affect, and the Moral/Conventional Distinction’, Mind & Language 22 (2007), pp. 117–31. For an apt response, see Rosas, Alejandro, ‘Mistakes to Avoid in Attacking the Moral/Conventional Distinction’, The Baltic International Yearbook of Cognition, Logic and Communication 7 (2012), pp. 1–10.
29 Regarding the first claim, see Graham et al., ‘Mapping the Moral Domain’; regarding the parenthetical claim, see Haidt et al., ‘Affect, Culture, and Morality’, and cf. Haidt, Righteous Mind, p. 95.
30 For example, see Family Research Council, ‘The Top Ten Harms of Same-Sex Marriage’, pamphlet available at <http://downloads.frc.org/EF/EF11B30.pdf>. Other anecdotes of appeal to harm, and some indirectly suggestive empirical studies, are provided by Gray et al., ‘Mind Perception is the Essence of Morality’, p. 108.
31 Daniel Jacobson presents plausible arguments that the actions probed in Haidt's 2000 ‘dumbfounding’ study are in fact all morally wrong because they are potentially harmful, although not obviously so. Jacobson, Daniel, ‘Moral Dumbfounding and Moral Stupefaction’, Oxford Studies in Normative Ethics, vol. 2 (Oxford, 2012), pp. 289–316.
32 Haidt, Righteous Mind, p. 24.
33 The relevant studies include: Haidt, Jonathan and Hersh, Matthew A., ‘Sexual Morality: The Cultures and Emotions of Conservatives and Liberals’, Journal of Applied Social Psychology 31 (2001), pp. 191–221; and the widely cited but unpublished Jonathan Haidt, Fredrik Björklund and Scott Murphy, ‘Moral Dumbfounding: When Intuition Finds No Reason’, unpublished manuscript, available at <http://www.faculty.virginia.edu/haidtlab/articles/manuscripts/haidt.bjorklund.working-paper.when%20intuition%20finds%20no%20reason.pub603.doc>.
34 That is, the interviewers, trained to play ‘devil's advocate’, ‘did change some people's minds, in the direction for which he was playing devil's advocate, except that on the Heinz story the percentage endorsing Heinz’ theft rose even though the interviewer was in most cases arguing against that position. The percentage of participants who changed their minds averaged 16%, and did not differ significantly across tasks’ (Haidt et al., ‘Moral Dumbfounding’, p. 11, emphasis mine).
35 Such as the arguments provided in Jacobson, ‘Moral Dumbfounding and Moral Stupefaction’.
36 These are examples of IAP norms from Orissa, India, taken from Shweder, Richard, Mahapatra, Manamohan and Miller, Joan G., ‘Culture and Moral Development’, The Emergence of Morality in Young Children, ed. Kagan, J. and Lamb, S. (Chicago, 1987), pp. 1–83Google Scholar.
37 And, complementarily, in the (rare) circumstance in which I will surely in no way harm anyone by violating the rule (~ehf), I have no reason to follow that rule – just as I have no reason not to triple my pawns when it surely won't lead to loss of material or of my chess game.
38 Graham, Jesse, ‘Left Gut, Right Gut: Ideology and Automatic Moral Reactions’ (PhD thesis, University of Virginia, 2010). Graham does note that, as expected, liberals were seen to have stronger preferences than moderates and conservatives for Harm and Fairness over Ingroup, Authority, and Purity.
39 Of course, the most direct support for the ‘especially when’ thesis would come from conducting an experiment directly examining the additive effects on conservatives’ intuitions of combining IAP- with Harm-norm violations. For example, one could ask subjects about (1) torturing a stranger out of contempt versus (2) insulting your father out of contempt versus (3) torturing your father out of contempt – looking for an effect beyond the mere additive effect of (1) + (2).
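For concreteness, here is a minimal sketch of what ‘an effect beyond the mere additive effect’ would amount to statistically: a positive interaction between a Harm-violation factor and an Authority-violation factor in a 2×2 design. The design, rating scale, sample size and effect sizes below are hypothetical illustrations for the proposed experiment, not data or methods from any study cited here.

```python
# Hypothetical illustration (not from the article): detecting a superadditive
# interaction between a Harm violation and an Authority (IAP) violation.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200  # assumed number of ratings per condition

# Simulate wrongness ratings (1-7) for a 2x2 design:
# harm = torturing vs. merely insulting; authority = target is one's father vs. a stranger.
rows = []
for harm in (0, 1):
    for authority in (0, 1):
        # The 0.8 * harm * authority term is the assumed superadditive effect.
        mean = 2.0 + 3.0 * harm + 1.0 * authority + 0.8 * harm * authority
        ratings = np.clip(rng.normal(mean, 1.0, n), 1, 7)
        rows.append(pd.DataFrame({"harm": harm, "authority": authority,
                                  "wrongness": ratings}))
data = pd.concat(rows, ignore_index=True)

# 'harm * authority' expands to harm + authority + harm:authority;
# the harm:authority coefficient is the effect beyond mere additivity.
model = smf.ols("wrongness ~ harm * authority", data=data).fit()
print(model.params)
```

A reliably positive harm:authority coefficient in such a design would favour the ‘especially when’ thesis over a purely additive account.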
40 Evolutionary debunking arguments against moral beliefs have been very popular recently. A good overview is Kahane, Guy, ‘Evolutionary Debunking Arguments’, Noûs 45 (2011), pp. 103–25. I borrow his formula in what follows.
41 It is a side issue whether the welfare of the group must be understood in terms of the welfare of the individuals who comprise it. Either way, welfare is being promoted.
42 Haidt, Jonathan and Kesebir, Selin, ‘Morality’, Handbook of Social Psychology, ed. Fiske, S. T., Gilbert, D. T. and Lindzey, Gardner, 5th edn. (Hoboken, NJ, 2010), pp. 797–832. Two citations have been removed from this quotation.
43 The point remains plausible even if Haidt's evolutionary picture is resisted in a couple of ways. First, some readers might resist the appeal to group selection, saying that IA traits really facilitate individual reproductive success rather than group reproductive success (i.e. population growth) or cultural dominance. However, even if so, individuals would achieve reproductive success by avoiding harms and securing resources and other benefits before and during their reproductive stages of life. So IA norms still turn out to be heuristics for welfare promotion. The debate over group selection, or over the thoroughness or sincerity of humans’ disposition to sacrifice for their groups, turns out to be a distraction from this article's thesis. (Moreover, Haidt and Kesebir argue persuasively that ‘there is now a widespread consensus that cultural group selection occurs’ (‘Morality’, p. 818).)
Second, someone might remind us that welfare protection is not itself the ‘goal’ of evolution; rather, the ‘goal’ is reproductive success in the case of genetic evolution, and something like cultural influence in the case of cultural evolution. However, the protection at least of basic welfare is virtually a necessary condition for reproductive success (as well as for cultural influence). Organisms are prolific to the extent that, before and during their reproductive phases, they are spared death, injury, debilitating pain, etc. and secure resources and other benefits for themselves and their offspring (this is true both for reproductive success and for cultural influence). Furthermore, IA norms just do have respect for and promotion of the welfare of groups and/or their constituents as their explicit goals (to the best of my ethnographic knowledge), with sexual or cultural fertility being construed merely as one important aspect of the group's welfare.
44 Haidt and Kesebir, ‘Morality’, p. 809. Numerous citations have been removed from this quotation.
45 As Haidt and Kesebir explain (‘Morality’, p. 810), ‘there is no evidence that any non-human animal feels shame or guilt about violating such norms – only fear of punishment . . . Humans, in contrast, live in a far denser web of norms, mores, and folkways . . . and have an expanded suite of emotions related to violations, whether committed by others (e.g. anger, contempt, and disgust) or by the self (e.g. shame, embarrassment, and guilt)’.
46 An excellent account of the evolution of disgust is Kelly, Daniel, Yuck! The Nature and Moral Significance of Disgust (Cambridge, Mass., 2011), ch. 2. See also Rozin, Paul, Haidt, Jonathan and McCauley, C. R., ‘Disgust’, Handbook of Emotions, ed. Lewis, M., Haviland-Jones, J. M. and Barrett, L. F., 3rd edn. (New York, 2008), pp. 757–76.
47 See Kelly, Yuck!.
48 Recent important exchanges on moral nihilism have featured Richard Joyce and Sharon Street on one hand, and various detractors on the other. For example, see Joyce, Richard, The Evolution of Morality (Cambridge, Mass., 2006) and Street, Sharon, ‘A Darwinian Dilemma for Realist Theories of Value’, Philosophical Studies 127 (2006), pp. 109–66. Detractors include Tresan, Jon, ‘Question Authority: In Defense of Naturalism without Clout’, Philosophical Studies (2010), pp. 221–38; and Finlay, Stephen, ‘Errors upon Errors: A Reply to Joyce’, Australasian Journal of Philosophy 89 (2011), pp. 535–47.
49 This is one version of Joshua Gert's account of basic harms. See Gert, Joshua, ‘Problems for Moral Twin Earth Arguments’, Synthese 150 (2006), pp. 171–83, at 176. Gert characterizes an aversion as basic just in case the question ‘why are you averse to that item?’ has no helpful answer other than ‘what do you mean, why am I averse?’.
50 Much more is said about this picture in Gert, Joshua, Brute Rationality (Cambridge, 2004) and Gert, Joshua, Normative Bedrock (Oxford, 2012).
51 On the disparate elicitors of disgust, see Kelly, Yuck!
52 On this pair of points, see Gert, Brute Rationality, pp. 136–7.
53 Thanks to an anonymous reviewer for raising this helpful objection.
54 See Haidt, ‘Emotional Dog’. To be clear, I am not outright endorsing Haidt's ‘Social Intuitionism’, one problem with which is that it underemphasizes the role of intuitive rules in framing our moral judgements – a point pressed by such researchers as Susan Dwyer and John Mikhail. For discussion, see Mallon, Ron and Nichols, Shaun, ‘Rules’, The Moral Psychology Handbook, ed. Doris, John (Oxford, 2010), pp. 297–320.
55 Haidt, Righteous Mind, p. 81. See also the rest of Part I of that book.
56 Haidt et al., ‘Affect, Culture, and Morality’; see also Haidt, ‘Emotional Dog’.
57 He calls utilitarianism a ‘one-receptor system’, i.e. one which takes harm to be the supremely important moral concept (Righteous Mind, p. 272).
58 Righteous Mind, p. 272.
59 Being open to (c) is one reason this tolerant welfarism is not consequentialist. I suggest it might be happily wedded with the view that the least advantaged should be made as well off as possible. The locus classicus of this view is Rawls, John, A Theory of Justice (Oxford, 1971).
60 Adam Cureton has argued for a thesis that I take to be ambiguous between these. See Cureton, Adam, ‘Solidarity and Social Moral Rules’, Ethical Theory and Moral Practice 15 (2012), pp. 691–706.
61 Many thanks to audiences at the 2012 Felician Ethics Conference and the 2012 Alabama Philosophical Society. Thanks to Michael Albert, David McNaughton, Tyler Paytas, Preston Werner and Chris Zarpentine for helpful comments. Thanks also to Scott Clifford, Josh Gert, Jon Haidt, Jesse Prinz and Tom Wysocki for helpful conversation or correspondence on this topic.