
Idealization in Moral Understanding: Grasping Less but Acting Better

Published online by Cambridge University Press:  28 March 2025

Maria Waggoner*
Purdue University, West Lafayette, IN, USA

Abstract

Moral understanding has typically been defined as grasping the explanation, q, for some proposition, p, where p states that some action is morally right (or wrong). This article deals with an underdiscussed point within the literature on moral understanding: one’s degree of moral understanding deepens as one grasps more moral reasons, where these reasons include not only those that speak in favor of an action’s moral permissibility but also those that speak against it. I argue for a surprising and important implication of this: having a deep degree of moral understanding can make it harder to carry out the right action. Furthermore, I propose that we should think of our pursuit of moral understanding in a way analogous to how some have thought of scientific understanding: There may be good reasons to fail to appreciate all of the moral reasons that in fact exist; sometimes we should seek a surface-level moral understanding instead of something deeper. Just as idealizations used within science – which can involve deviations from the truth – can help us achieve scientific understanding, so too we might restrict the moral reasons that we seek to grasp in pursuit of moral understanding.

This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2025. Published by Cambridge University Press

1. Introduction

Moral understanding has typically been defined as grasping the explanation, q, for some proposition, p, where p states that some action is morally right (or wrong). The explanation, q, consists of the moral reason(s) that make that action morally right (or wrong). This article deals with an underdiscussed point within the literature on moral understanding: one’s degree of moral understanding deepens as one grasps more moral reasons, where these reasons include not only those that speak in favor of an action’s moral permissibility but also those that speak against it. Consider the following case:

AGING MOTHER: Sue’s aging mother wants to move in with Sue and her family. Her mother asks Sue to extend her an invitation. Sue weighs the moral reasons for and against in deciding what to do. The reason in favor of extending the invitation is that of her mother’s welfare – it would be best for Sue’s mother to spend her remaining years surrounded by her only family; the reason against extending the invitation is that having Sue’s mother live with her and her family would negatively impact Sue’s marriage with her husband, Joe. Sue arrives at the conclusion that she ought to indeed extend an invitation and allow her mother to move in if she wishes.

In this case, there are moral reasons for and against, and Sue grasps and weighs them correctly. Furthermore, while these two moral reasons are the main reasons at play, there are in fact other, more minor reasons: the low-level stress the move could place on Sue and Joe’s children, the negative changes it will bring to Sue’s social life, and lastly, having to adapt to and accommodate Sue’s mother’s meat-eating diet, which likely poses at least a minor moral temptation to Sue and Joe’s children as they try to stick to a vegan lifestyle. Having a deeper degree of moral understanding will involve identifying even these more minor moral reasons – both for and against – and weighing them correctly. We can stipulate that in this case, even when all of these reasons – minor ones included – are taken together, the moral reason to extend the invitation still outweighs the moral reasons against it.
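To make this stipulation vivid, we might attach weights to the reasons (the numbers here are purely illustrative and are mine, not part of the case):

\[ \underbrace{10}_{\text{mother's welfare}} \;>\; \underbrace{6}_{\text{marriage}} + \underbrace{1}_{\text{children's stress}} + \underbrace{1}_{\text{social life}} + \underbrace{1}_{\text{diet}} \;=\; 9. \]

On this assignment, the single reason in favor outweighs the combined reasons against, even once the minor reasons are counted – though grasping the minor reasons adds three further terms that a deep understanding must register.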

Imagine that Sue’s husband, Joe, has a surface-level moral understanding, only grasping the two main reasons, while Sue has a deeper degree of moral understanding, where she identifies and weighs all of the reasons. It has often been presumed that a deeper degree of moral understanding is better: it is something the virtuous person would have; we should want and strive to be like Sue rather than Joe. Additionally, moral understanding is thought to be valuable because it aids us in acting rightly. This article points out that having a deep degree of moral understanding might actually be in tension with this; the Joes might actually have an easier time doing the right thing than the Sues.

In Section 2, I give three reasons why having the deepest degree of moral understanding involves grasping all of the moral reasons – both for and against. Furthermore, I point out why such deep understanding is thought to be valuable.Footnote 1 In Section 3, I flesh out what hangs on this: Insofar as moral understanding consists of affective or motivational states (Callahan 2017; Howard 2018; Slome 2022) or is at least usually tied up with them (Hills 2020), having a greater degree of moral understanding why p will mean that one will oftentimes not only be affectively and/or motivationally engaged with the reasons in favor of p, but also with the reasons that speak against p. An important implication of this is that having a deep degree of moral understanding can make it harder for one to carry out the right action, as one would not only be affectively and/or motivationally impacted by the reasons for p but also by those that speak against p. I suggest that rather than assuming moral understanding always enhances our ability to reliably carry out the right action (Hills 2009), we must also acknowledge possible tradeoffs, namely the ways in which having a deep degree of moral understanding can interfere with our ability to carry out the right action.Footnote 2 In Section 4, I compare our pursuit of understanding in the moral domain to that in the domain of science and suggest that we should think of our pursuit of moral understanding in a way analogous to how some have thought of scientific understanding: Just as idealizations used within science – which can involve deviations from the truth (Potochnik 2017) – are made use of to achieve a kind of scientific understanding, moral understanding might also involve only grasping a subset of all the relevant moral reasons. Yet, in Section 5, I point out some differences between moral and scientific understanding, which show that even if one doesn’t take scientific idealization to involve restricting one’s grasping of scientific reasons, it’s plausible that such restrictions do occur in the moral domain.

2. The nature and value of moral understanding

One way to investigate the nature of moral understanding is to focus on what is required of an explanation: Moral understanding is thought to involve grasping the explanation for why p, where this grasping involves abilities in following and giving explanations. While the literature on moral understanding has left further discussion of what constitutes an ‘explanation’ largely untouched, one of my aims in this section is to begin to illuminate this idea. By asking what is involved in an ‘explanation’, we can further flesh out the nature of moral understanding, and as I’ll suggest, we might see that having a deep degree of moral understanding involves grasping an explanation that involves weighing reasons both for and against. John Broome (2013), for instance, posits that at least some explanations are normative weighing explanations – they involve identifying and grasping pro tanto normative reasons and weighing them against each other. He explains how this works in the following passage:

Suppose you ought to F. If there is a weighing explanation of why, it takes an analogous form [as the explanation for the mechanical weighing of objects using weight pans on a balance]. There is at least one reason for you to F, and there may be one or more reasons for you not to F. Each reason has a weight. The combined weight of the reasons for you to F exceeds the combined weight of the reasons for you not to F. This is why you ought to F (Broome 2013, p. 52).

In the case of weighing explanations, moral understanding will involve grasping an explanation that consists of reasons both for and against and weighing them against each other appropriately. Having a deep degree of moral understanding would involve grasping a thorough explanation – a normative weighing explanation.
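Broome’s weighing schema can be stated compactly (the notation below is my gloss, not Broome’s own formalism). Let $R_F$ be the set of pro tanto reasons for you to F, $R_{\neg F}$ the set of reasons for you not to F, and $w(r)$ the weight of a reason $r$:

\[ \sum_{r \in R_F} w(r) \;>\; \sum_{r \in R_{\neg F}} w(r) \quad \Longrightarrow \quad \text{you ought to } F. \]

Grasping the full weighing explanation then means grasping every term in both sums and their relative magnitudes; a surface-level understanding grasps only the largest term or terms on each side.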

One way of showing why a deep degree of moral understanding might at least sometimes involve identifying and weighing the reasons that speak for and against doing an action is by considering what the value of moral understanding is thought to be. The literature on moral understanding often highlights its value and what is lost when one defers to another’s moral testimony. And from illuminating what the value of moral understanding is, conclusions about its nature are often drawn. Thus, in what follows, I summarize two ways in which moral understanding has been said to be valuable as a way to draw out its nature. If moral understanding is indeed valuable for the reasons described below, then this reveals something about the nature of moral understanding: that it comes in degrees and that the more moral reasons – both for and against – one grasps, the deeper one’s moral understanding. In other words, if one is inclined to think that moral understanding is valuable for the reasons outlined below, then one should embrace a picture of moral understanding that often involves normative weighing explanations, where such understanding is enhanced or deepened when more moral reasons, both for and against, are taken into consideration, weighed against each other, and constitute the explanation grasped.Footnote 3

First, moral understanding has been said to be intrinsically valuable, for when one morally understands, one’s cognitive states mirror moral reality. True beliefs are said to be valuable because their contents mirror or reflect truths about the world. But, if such mirroring is valuable when it comes to having true beliefs, then understanding “must be valuable twice over” (Hills 2015, 679), for through such understanding “you can mirror the structure of the world within the structure of your own thoughts as well as their content” (ibid). Stephen Grimm likewise explains that “the mind of someone who understands mirrors or reflects reality at a deeper level than the mind of someone who merely propositionally knows” (Grimm 2012, 109). When one has moral understanding, it’s not only that one’s beliefs or thoughts mirror what is true; the way that these states are structured or organized also reflects something true about moral reality. If it is right that mirroring the structure of the moral world is valuable in this way, then moral understanding which mirrors the full span of moral considerations – all of the reasons for and against – must be even more valuable. Another way of supporting this point is to consider what several philosophers say in response to the McDowellian silencing thesis, which holds that the virtuous person is to ignore reasons that speak against doing what is morally right. Objections made against the silencing thesis often point to the fact that “virtue can have a cost, and a mark of the wise person is that she recognizes it” (Baxley 2007, 419). There is something valuable about appreciating various reasons, even when it means grasping and appreciating the reasons that run against doing the right thing. To fail to grasp such reasons is to fail to appreciate what is of value. If mirroring is valuable because one’s thoughts and their structure reflect moral reality, then it must also be valuable for one’s thoughts to mirror all that morally matters, both the moral reasons for and those against.

Second, moral understanding has been said to be valuable in another, more instrumental, way: “moral understanding is important in part because being in a position to justify yourself to others is morally important…[for a] core ethical practice is the exchange of reasons” (Hills 2009, 106–7). If one doesn’t grasp the moral reasons for why p, then one will be unable to explain or justify why she did the right thing. She will not be able to engage in the exchange of reasons but will instead put forth unsatisfying answers like “S told me so” or “I don’t know, it just felt right.” Again, if one is inclined to think that being able to justify one’s moral actions to others is a value of moral understanding, then one should also think that a better and more complete justification will involve grasping and giving more reasons, including both those that speak for and against the action in question. Consider the contrast between Sue and Joe when asked by a mutual friend, Peter, whether they are really sure that inviting their mother (in-law) to move in with them is the right thing to do. Peter – who has experienced difficulties with his own mother moving in with his family – wants to make sure they have really thought things through. Peter might ask: “But what about the stress it might cause your kids? Don’t they matter?! You are all vegans, but Sue’s mother isn’t. Don’t you think this will make things extra difficult for your kids?” Joe will not be able to say much more than the simple explanation he has probably already given – perhaps he would simply repeat, “like I said, it’s really important that Sue’s mother spends her remaining days with family.” Sue, on the other hand, seems much better situated to engage in an exchange of reasons with Peter. She might point out ways in which this worry might be mitigated – perhaps asking her mother to eat less meat or using it as a teaching moment for her children. Or maybe Sue would respond by acknowledging that having her mother (and her dietary habits) in the house might set a poor moral example for her children, but that seeing their parents care for a family member in need would provide an even more powerful and important moral example for them. In doing this, Sue can respond to Peter’s question about whether and how the effects on her children matter morally. Yes, they do matter. But in this case, they don’t matter enough to keep Sue and Joe from extending the invitation to Sue’s mother. Sue can provide reasons to Peter as to why this consideration is sufficiently weighty, while Joe cannot. Sue’s explanation, of course, might not be completely satisfying to Peter. Peter might persist, pushing back on Sue’s reasons. To be sure, Sue’s reasons might be unconvincing, or Peter and Sue may eventually come to a standstill. Nonetheless, Sue is better able to justify her decision than Joe. This is because Sue has a deeper degree of moral understanding – she grasps more of the moral reasons than Joe does and can appreciate the reasons that Peter is concerned with.

Thus, if one is sympathetic to thinking that moral understanding is valuable because it (1) mirrors moral reality and (2) enables one to engage in an exchange of moral reasons, then they should think that a deep degree of moral understanding will involve identifying and accurately weighing more reasons, both for and against.

3. How deepening moral understanding may interfere with acting rightly

It’s likely that most would be happy to embrace the idea that moral understanding often involves grasping a weighing explanation, and that a deep degree of moral understanding will involve grasping an explanation that includes many reasons – even those minor ones – for and against.

In this section, I will argue for a surprising and important implication of what has been said thus far: striving for a deep degree of moral understanding – where one tries to identify and weigh a plethora of reasons for and against p – can negatively impact our ability to carry out the right action. In bringing this consequence to light, I hope to show that striving for a deep moral understanding can come with important moral tradeoffs.

To begin, consider how the grasping of reasons within moral understanding is related to the engagement of our affective and/or motivational states. On one end of the spectrum, some have argued that such grasping is itself constituted not only by cognitive appreciation but also by affective/motivational engagement with those reasons (Callahan 2017; Howard 2018; Slome 2022). To morally understand why torturing an animal is wrong, one must cognitively grasp the reasons that make it wrong (e.g. unnecessary suffering is bad; treating an animal with such cruelty disrespects the animal’s inherent dignity), but one must also have certain affective and motivational states concerning those reasons: one must, say, feel bad about causing unnecessary suffering or be motivated to alleviate an animal’s suffering. Callahan gives an account of moral understanding which requires that:

[One not only] cognitively appreciates the support relationships [between the explanans (q) and explanandum (p), but also] exhibit[s] the affective/motivational responses [that are] sensitive to those relationships… [one must] be moved to action or feel the import of explanations (Callahan 2017, 452).

According to this account of moral understanding, the grasping of reasons involves both cognitive and affective/motivational engagement with these reasons. The reasons – and their supporting relation to the action in question – are appreciated cognitively and affectively.

One reason for thinking that moral understanding is constituted by affective states is that some take moral understanding to be different from non-moral understanding. Understanding why magnets stick to the refrigerator is a different sort of thing than understanding why slavery is wrong. The former seems to be a purely cognitive endeavor, while the latter is tied up with our emotions and motivations. Yet, a contrasting view of moral understanding is that it is just a species of the more general genus, understanding why (Hills 2009; 2015). According to this view, the difference between moral understanding and, say, scientific understanding merely lies in the kind of content of p and q; the former involves moral content while the latter involves scientific content. On this view, moral understanding – like other kinds of understanding why – is a purely cognitive state. Nonetheless, even on this cognitive view, moral understanding is still thought to be connected to affective and motivational states. Hills suggests that “[c]ognitive and non-cognitive moral attitudes provide supporting feedback to each other; moral emotions and moral understanding entwine and develop in combination” (Hills 2020, p. 441). So, even if moral understanding is at root a cognitive state and does not itself consist of moral emotions and motivations, it’s plausible that moral understanding will still lead to increased emotional sensitivity and engagement towards p and the underlying explanatory reasons that make p true. And vice versa – being more emotionally sensitive towards p and its explanatory reasons will help one better grasp and achieve moral understanding why p.

Insofar as moral understanding consists of, or is at least closely connected to, affective and motivational states, the following arises: Having a deep degree of moral understanding, such that one grasps various reasons for and against, means that one will at least sometimes feel emotional and/or motivational pulls that go against doing the right action. Insofar as Sue grasps the various additional reasons that speak against extending an invitation to her mother to move in, she is likely to experience some amount of emotional and/or motivational pull against extending the invitation. This will be true despite her understanding why, all things considered, it is right to extend the invitation. And, insofar as one is affectively and/or motivationally pulled toward the reasons that speak against doing the right action, it seems likely that it will be harder to carry it out. Carrying out the right action might prove harder to do since being affectively sensitive to, and motivationally moved by, the reasons that speak against doing the right action might weaken one’s resolve towards doing the right thing. In this vein, Karen Stohr gives the case of an employer who knows that she ought to fire a handful of employees in order to keep her company from going under. Despite knowing that this is the right thing to do, she nonetheless has affective and emotional states which are sensitive to reasons that speak against firing them:

She is anguished by the knowledge that she will be causing them pain and distress…[on the day of relaying the bad news,] she wakes up that morning with an anxious feeling in her stomach…[and] drives to work with a sense of dread… She is grieved at the sight of her employees’ stress, sadness, and anxiety in response to the news (Stohr 2003, 343).

Importantly, Stohr observes that in this case – one where the person is emotionally sensitive to the reasons that speak against performing the right action – “it [is] harder for her to perform the action required of her” (ibid). This is so precisely because the employer grasps the reasons that speak against firing these employees – namely that doing so will “caus[e them] pain” (ibid). Being sensitive to this reason “make[s] the act of firing them hard for her to perform, despite the fact that it is the right thing to do” (ibid). It seems that feeling the pull of (all, or at least more of) the reasons which speak against the right action could weaken one’s resolve, creating something akin to a weakness of will. Despite knowing what – and understanding why – the right thing to do is, one has difficulty bringing oneself to do it.

There is another way that being emotionally sensitive to all of the various reasons – including a potentially long list of those which speak against doing the right action – might interfere with acting rightly: namely, through experiencing regret post-action, leading to a change in one’s behavior in future similar scenarios. Empirical research indicates that experiencing regret after carrying out a particular action makes one less likely to carry out that action in the future (O’Connor et al. 2014; Ratner and Herbst 2005; Zeelenberg and Pieters 2007). O’Connor and colleagues (2014) describe this effect of regret as “adaptive switching.” It is said to be adaptive because one learns from one’s experiences, adjusting one’s behavior accordingly.

Furthermore, emotional engagement with, and sensitivity to, the reasons that speak against doing the action in question seems to play an important role in the degree of regret one feels: Francis and colleagues (2017) found that while both lay persons and trained emergency professionals made the utilitarian choice in a trolley-like case, the lay subjects who experienced high degrees of emotional arousal while making their decision not only had higher scores of empathy but also exhibited greater regret after the fact. Trained emergency professionals – who are plausibly emotionally desensitized to such distressing scenarios – showed lower levels of emotional arousal while making sacrificial, utilitarian decisions, and less regret afterwards. The level of emotional distress, indicative of being affectively sensitive to and motivationally pulled by the reasons that speak against the action being done, seemed to be related to the level of regret one experiences after the fact. And the degree of regret predicts how one will act in future scenarios, with greater regret leading to an increased likelihood of switching to a different action in the future.

Some have also found that registering deontological considerations (as compared to utilitarian ones) involves more affective engagement (Greene et al. 2001; Greene et al. 2004; Petrinovich et al. 1993; Ciaramelli et al. 2007; Koenigs et al. 2007; Mendez et al. 2005). In a situation where the utilitarian action is the right thing to do, and so the utilitarian reasons outweigh the deontological – such as, say, switching the trolley in the Original Trolley Problem – it’s plausible that one would feel emotional arousal during one’s decision and action, and regret afterwards. Despite having done the right thing, feeling this regret may lead one to act differently in future scenarios, whereby one refuses to carry out the utilitarian option, even when it’s the right thing to do. Frechen and Brouwer (2022) found evidence that supports this suggestion. These researchers looked at decision-making across multi-stage scenarios and found that those who made a utilitarian decision initially were more likely to switch, choosing a deontological option in the second scenario.

This body of research shows another way in which being affectively and motivationally sensitive to certain (perhaps only deontological) moral reasons that speak against doing the right action can lead one to carry out the wrong action: One may do the right action initially, but due to being emotionally sensitive to at least certain kinds of reasons that speak against this action, regret arises. This regret prompts one to switch decisions in the future, even if this means switching to the wrong action.

However it exactly occurs, it seems likely that being affectively/motivationally engaged with various reasons that speak against doing the right action will, at least in certain circumstances, make one less likely to actually carry out that action. In short – despite Sue’s deeper moral understanding, Joe might actually have it easier when it comes to following through with doing the right thing – both in this instance and in future instances.

This will be an unwelcome consequence for some, as moral understanding has been thought to enhance one’s ability to carry out the right action, not interfere with it. Hills (2009) explains it like this:

While in principle, your moral instincts might be infallible so that you always instinctively chose the morally right action…in practice, [this] is [not] likely to happen. Moral decisions are often complicated. Moral reasons can be difficult to assess and interact in quite complex ways…[Y]ou will make a good decision only if you have moral understanding…moral understanding is worth having as a means to right action (p. 106).

We see here that moral understanding is thought to increase the epistemic accuracy in one’s moral judgment-making. One who has more moral understanding will be able to grasp more reasons, in a more nuanced way, and have a greater facility in applying them. It seems natural to extend this enhanced epistemic ability to effectively carrying out the decided action: insofar as one is better situated to figure out what to do, one will – all else being equal – be better situated to carry out the right action.

But all else is not equal: Insofar as having greater moral understanding involves grasping more moral reasons – including those that speak against doing the right thing – this can interfere with, rather than strictly enhance, one’s ability to act rightly. This seems even more likely if moral understanding consists of, or even simply implicates, affective/motivational engagement with the reasons that speak against the right action, as one will feel their pull and find it harder to resist them.

This is not to deny that deep moral understanding is a good worth pursuing and an important part of our moral lives. Furthermore, it’s plausible that deepened moral understanding can, in certain respects, make it more likely that one does the right thing. This is because deepened moral understanding can also increase one’s confidence, making it less likely that one will give up one’s belief or judgment that an action is right or wrong in the face of new evidence. When this new evidence would otherwise lead one to carry out the wrong action, persisting in one’s correct moral judgment will aid in acting rightly. Just as Plato argued that knowledge, unlike true opinion, cannot easily be lost, we can reasonably expect something similar when one has a deep degree of moral understanding: one’s moral beliefs might, as a result, be more durable in the face of new evidence, since the reasons grasped in moral understanding secure or tie one’s moral beliefs down.

Nonetheless, we ought to recognize and reckon with the difficulties that may arise as we strive to achieve deep moral understanding. There are costs to having deepened moral understanding, and these tradeoffs must be considered.Footnote 4

4. Taking a lesson from the use of idealizations in scientific understanding

Within the realm of science, it’s been argued that idealizations help us achieve scientific understanding. Idealizations involve deviations from the truth and have been said to be necessary for creating a simplified model that the human mind is capable of grasping (Elgin 2017; Potochnik 2017, 2020). Such deviations from the truth are thought to be necessary because of the cognitive limitations of the human mind; sometimes less (of the truth) is more, at least when it comes to achieving understanding.

One point that has been emphasized within this literature is that those seeking scientific understanding have many aims. Why scientific understanding is being pursued within a particular context – that is, the aim at which such understanding is directed – will influence the ways in which certain idealizations are employed and in what respects the truth is deviated from. So too, I suggest, with moral understanding: Depending on the aim one has in striving to achieve moral understanding, it might be appropriate to make use of idealizations or deviate from the (complete) moral truth in certain ways.

One common aim in achieving moral understanding that has already been highlighted is that of guiding our actions or helping us to do the right thing. When this is our aim, I suggest that we ought to make use of a kind of idealization that is analogous to minimalist idealization in the scientific realm. Minimalist idealization is “the practice of constructing and studying theoretical models that include only the core causal factors which give rise to a phenomenon” (Weisberg 2007, p. 642, italics mine). What is important to note about minimalist idealization is that it is not simply non-causal and accidental features which are excluded from the model, but causal, contributing factors as well. The minimalist idealized model includes only the most important, or ‘core’, causal factors. Weisberg continues: “[T]he key to explanation is a special set of explanatorily privileged causal factors. Minimalist idealization is what isolates these causes and thus plays a crucial role for explanation” (ibid., p. 645). Angela Potochnik (2020) similarly explains how we make use of a kind of minimalist idealization to understand how natural selection selects for a trait. In doing so, we “assume that the population is infinite, which enables drift to be neglected” (p. 939), and importantly, this is so even when drift has influenced the actual outcome. So long as “drift has not swamped natural selection’s causal role” (ibid), we should set it aside and not account for it. In other words, to understand the process of natural selection, we make false assumptions about the population (e.g. that it is infinite) which allow us to omit a feature (e.g. drift) even when that feature plays a causal role. It is only when drift plays a prominent causal role or “swamp[s]” the process that we ought to include it in our model; otherwise we do better by overlooking it.
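Potochnik’s population-genetics example can be put in standard notation (this particular formulation is mine, offered only for illustration). For a haploid population in which allele A has frequency $p$, and the fitnesses of A and a are $w_A$ and $w_a$, the idealized selection-only model gives the next generation’s frequency as:

\[ p' = \frac{p\,w_A}{p\,w_A + (1-p)\,w_a}. \]

Treating $p'$ as fully determined in this way is strictly correct only for an infinite population. In a finite population of size $N$, the realized frequency is a random draw centered on $p'$ with variance on the order of $p'(1-p')/N$ – the drift term that the idealization sets to zero by letting $N$ go to infinity, even though in any actual (finite) population it makes some causal contribution to the outcome.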

Drawing a parallel to scientific understanding, I propose that minimalist idealization ought to sometimes play a role in achieving moral understanding, as achieving moral understanding – for the particular aim of guiding our action – will involve neglecting some moral reasons that are in fact part of the weighing explanation for why p. In particular, those that are not privileged, or do not play a crucial or core role, are to be left out. This is to be done even when these reasons are in fact reasons, or when they do in fact play an explanatory role for why p.

Consider an analogous case, inspired by Sosa’s case of the archer who successfully hits the target because of her skill. Perhaps this skill involves taking into account various features of her environment – the tautness of the bow, the curvature of the arrow, the wind, temperature, barometric pressure, and the degree of solar radiation. But plausibly, were the archer to try to consider all of the physical features that play a causal role in successfully hitting the target, weighing each according to its relative causal importance, this might actually serve as a distraction, interfering with her ability to hit the target. Given the archer’s aim – that of hitting the target – we might think that the archer should engage in some sort of minimalist idealization, focusing on the core causal factors: perhaps just the direction and force of the wind. When asked for an explanation of why she aimed slightly to the left of the target, the archer’s explanation reflects this minimalist idealization. Since she neglects the minor causal factors, her explanation will invoke only the direction and force of the wind.Footnote 5

So too with moral understanding: When we have certain aims – like that of trying to ‘hit the mark’ or do the right thing – we should at least sometimes embrace a kind of minimalist idealization, resulting in a more surface-level degree of moral understanding that neglects moral reasons which are not ‘core’ or ‘privileged.’

5. Challenging the minimalist idealization model of science

In the previous section, it was said that minimalist idealization in the domain of science makes use of privileged causal features in explanations while neglecting more minor features. However, this, by itself, doesn’t mean that the scientist will fail to grasp or understand the features which are ignored. Rather, as some (Grimm 2011; Hazlett 2017; Kvanvig 2009) have argued, it is because the scientist understands that she is using a model – knowing what the omitted features are and grasping why they are left out in this case – that she is capable of using the model in the appropriate ways. Minimalist idealization, then, doesn’t mean that one will fail to know what the more minor causal features are, nor fail to understand what role they play. Rather, the scientist retains her understanding of the full landscape of causal features despite only making use of the core or privileged ones in a given explanation. A similar reading could be given for Sosa’s archer: The archer grasps precisely how all of the causal features – the tautness of the bow, the curvature of the arrow, the wind, temperature, barometric pressure, the degree of solar radiation – contribute to the arrow hitting the target. But at the moment of shooting, she chooses to focus on just the most important feature, that of the wind. That doesn’t mean she is ignorant of the other features or fails to understand how they would contribute to hitting the target.

If this is right, then the analogy to scientific idealization doesn’t necessarily show that doing the right action can be in tension with deep moral understanding. Rather, when one is trying to figure out what to do (and follow through with that decision), one might benefit from only making use of the most significant moral reasons while ignoring more minor ones. But that doesn’t mean that one fails to know what those reasons are or understand the role they play.Footnote 6

There are several things to say in response: First, while some might be of the opinion that the lesser causal features are still grasped and understood (even if temporarily ignored) when one engages in idealization, this is not the unanimous view. In particular, I take it that this is not Potochnik’s view. Rather, her account seems to involve the claim that humans are cognitively limited, such that we cannot – cognitively speaking – take into consideration all the features at once. We are incapable of grasping how all the features interact and explain the phenomena in question; the full explanation is too complicated to understand, given the cognitive hardware that humans have. So, Potochnik’s account gives us something like the following: while the scientist may know her model is missing certain factors, she is nonetheless unable to know what those features are or how they would weigh up against, and interact with, the other more significant features. So too, then, with moral understanding.

Furthermore, even if Potochnik is wrong about our cognitive limitations when it comes to the domain of science, it seems plausible that a worry about cognitive limitations remains for the moral domain. This is because science is theoretical; morality is practical. When it comes to practical and moral decision-making, evidence seems to indicate that our minds are limited in relevant ways: Rudolph and Popp (2007) found that the more information one has, the more ambivalent one becomes about how to proceed. One explanation for this is that we humans are quite bad at weighing the information or evidence we have (Griffin and Tversky 1992). Himma (2007) describes the phenomenon of ‘choice overload’ – that having too many choices is paralyzing – noting that “an abundance of choices means we have to sift through an abundance of information to determine which option is best supported by reasons” (p. 207). But if the issue is that we have difficulty processing and ranking reasons, then we should expect this to manifest in moral understanding too. More recently, Longo and colleagues (2019) found that the more information one is given, the more likely one is to encounter paralysis about how to proceed. This wasn’t because the information was too complicated for subjects to understand, but because knowing all the pros and cons made it difficult to ‘weigh up’ the various competing considerations (p. 766). It seems that by including too many moral and practical considerations, one might lose a grip on the overall judgment or conclusion to draw. Given our cognitive limitations, if we want to have some degree of moral understanding, it might be necessary that less significant moral features never be registered to begin with.

But even if it is granted that, despite our cognitive limitations, we can still know or register all of the moral features without things getting too complicated, there is a further empirical question concerning our ability to ignore or bracket at least some (minor) moral reasons. One important difference between scientific causal features and moral considerations is that the latter are often affectively laden. If we grasp a moral reason – with all its emotional and motivational effects – it might, quite plausibly, still creep into our minds even when we attempt to bracket it, and so still have some impact on our choices and actions.

6. Concluding thoughts and further research

In this article, I have argued that striving to deepen our moral understanding can come with tradeoffs, specifically regarding our ability to carry out the right action. This is surprising, given that enhanced moral understanding is often said to be tightly linked with an increased ability to do the right thing. At least some of the reasons I have given for this unexpected relationship hinge on empirical evidence about human psychology. Thus, future research should further investigate the extent to which human psychology might limit our ability to deepen our moral understanding and, even if we can deepen it, whether doing so comes with important moral tradeoffs that should make us take a second look at when, and to what degree, we should strive to increase our moral understanding.Footnote 7

Footnotes

1 I do not take these arguments to be at odds with prominent accounts of moral understanding; rather, they spell out further details of, and commitments carried by, these accounts.

2 It should be noted that not all accounts of moral understanding face this consequence. Sliwa (2017), for instance, holds that moral understanding is reducible to knowledge of right and wrong, and that a gut feeling or perception-like experience of badness or wrongness is an instance of moral understanding. My argument is only relevant to accounts of moral understanding on which a grasping of explanatory reasons is involved.

3 I don’t take these arguments to be objections to standard accounts of moral understanding; I suspect that most will agree with what I say here. Rather, my intention in this section is simply to flesh out, clarify, and support the idea that a deep degree of moral understanding involves grasping reasons for and against; the deeper the moral understanding, the more reasons on either side are identified and weighed appropriately.

4 The main claim in this article is that deepened moral understanding poses a problem for right action when one grasps that various reasons are relevant to why an action is right or wrong but does not know exactly how to weigh them. In this situation, one might do better in acting rightly if one were to grasp, and weigh correctly, only the more significant moral reasons. However, if one can both grasp the relevance of a large set of reasons and weigh them correctly, then deepened moral understanding will plausibly aid one in right action.

5 I am grateful to Allan Hazlett for suggesting the use of this case.

6 I am grateful to an anonymous reviewer for raising this objection.

7 I am grateful to Adam Waggoner, Allan Hazlett, Stephen Grimm, Eric Wiland, Laura Callahan, and an anonymous reviewer for their helpful feedback on previous versions of this paper.

References

Baxley, A.M. (2007). ‘The Price of Virtue.’ Philosophical Quarterly 99(4), 403–23.
Broome, J. (2013). Rationality Through Reasoning. Chichester, UK: Wiley-Blackwell.
Callahan, L.F. (2017). ‘Moral Testimony: A Re-Conceived Understanding Explanation.’ The Philosophical Quarterly 68(272), 437–59.
Ciaramelli, E., Muccioli, M., Làdavas, E. and Di Pellegrino, G. (2007). ‘Selective Deficit in Personal Moral Judgment Following Damage to Ventromedial Prefrontal Cortex.’ Social Cognitive and Affective Neuroscience 2(2), 84–92.
Elgin, C.Z. (2017). True Enough. Cambridge, MA: MIT Press.
Francis, K.B., Gummerum, M., Ganis, G., Howard, I.S. and Terbeck, S. (2017). ‘Virtual Morality in the Helping Professions: Simulated Action and Resilience.’ British Journal of Psychology 109(3), 442–65.
Frechen, N. and Brouwer, S. (2022). ‘Wait, Did I Do That? Effects of Previous Decisions on Moral Decision-making.’ Journal of Behavioral Decision Making 35(5), e2279.
Greene, J.D., Sommerville, R.B., Nystrom, L.E., Darley, J.M. and Cohen, J.D. (2001). ‘An fMRI Investigation of Emotional Engagement in Moral Judgment.’ Science 293(5537), 2105–8.
Greene, J.D., Nystrom, L.E., Engell, A.D., Darley, J.M. and Cohen, J.D. (2004). ‘The Neural Bases of Cognitive Conflict and Control in Moral Judgment.’ Neuron 44(2), 389–400.
Griffin, D. and Tversky, A. (1992). ‘The Weighing of Evidence and the Determinants of Confidence.’ Cognitive Psychology 24(3), 411–35.
Grimm, S.R. (2012). ‘The Value of Understanding.’ Philosophy Compass 7(2), 103–17.
Grimm, S.R. (2011). ‘Understanding.’ In Bernecker, S. and Pritchard, D. (eds), The Routledge Companion to Epistemology, pp. 84–94. London: Routledge.
Hazlett, A. (2017). ‘Understanding and Structure.’ In Grimm, S. (ed), Making Sense of the World: New Essays on the Philosophy of Understanding, pp. 135–58. Oxford: Oxford University Press.
Hills, A. (2015). ‘Understanding Why.’ Noûs 50(1), 661–88.
Hills, A. (2009). ‘Moral Testimony and Moral Epistemology.’ Ethics 120(1), 94–127.
Hills, A. (2020). ‘Moral Testimony: Transmission Versus Propagation.’ Philosophy and Phenomenological Research 101(2), 399–414.
Himma, K.E. (2007). ‘The Concept of Information Overload: A Preliminary Step in Understanding the Nature of a Harmful Information-Related Condition.’ Ethics and Information Technology 9, 259–72.
Howard, N.R. (2018). ‘Sentimentalism about Moral Understanding.’ Ethical Theory and Moral Practice 21, 1065–78.
Koenigs, M., Young, L., Adolphs, R., Tranel, D., Cushman, F., Hauser, M.D. and Damasio, A.R. (2007). ‘Damage to the Prefrontal Cortex Increases Utilitarian Moral Judgements.’ Nature 446(7138), 908–11.
Kvanvig, J. (2009). ‘Appendix E: Responses to Critics.’ In Haddock, A., Miller, A. and Pritchard, D. (eds), Epistemic Value, pp. 339–51. Oxford: Oxford University Press.
Longo, C., Shankar, A. and Nutall, P. (2019). ‘“It’s Not Easy Living a Sustainable Lifestyle”: How Greater Knowledge Leads to Dilemmas, Tensions and Paralysis.’ Journal of Business Ethics 154, 759–79.
Mendez, M.F., Anderson, E. and Shapira, J. (2005). ‘An Investigation of Moral Judgement in Frontotemporal Dementia.’ Cognitive and Behavioral Neurology 18(4), 193–97.
O’Connor, E., McCormack, T. and Feeney, A. (2014). ‘Do Children Who Experience Regret Make Better Decisions? A Developmental Study of the Behavioral Consequences of Regret.’ Child Development 85(5), 1995–2010.
Petrinovich, L., O’Neill, P. and Jorgensen, M.J. (1993). ‘An Empirical Study of Moral Intuitions: Toward an Evolutionary Ethics.’ Journal of Personality and Social Psychology 64(3), 467–78.
Potochnik, A. (2017). Idealizations and the Aims of Science. Chicago: University of Chicago Press.
Potochnik, A. (2020). ‘Idealizations and Many Aims.’ Philosophy of Science 87(5), 933–43.
Ratner, R.K. and Herbst, K.C. (2005). ‘When Good Decisions Have Bad Outcomes: The Impact of Affect on Switching Behavior.’ Organizational Behavior and Human Decision Processes 96(1), 3–37.
Rudolph, T.J. and Popp, E. (2007). ‘An Information Processing Theory of Ambivalence.’ Political Psychology 28(5), 563–85.
Sliwa, P. (2017). ‘Moral Understanding as Knowing Right from Wrong.’ Ethics 127(3), 521–52.
Slome, E. (2022). ‘Moral Understanding and Reconceived Understanding: A Reply to Callahan.’ Philosophical Quarterly 72(3), 763–70.
Stohr, K. (2003). ‘Moral Cacophony: When Continence Is a Virtue.’ The Journal of Ethics 7(4), 339–63.
Weisberg, M. (2007). ‘Three Kinds of Idealization.’ The Journal of Philosophy 104(2), 639–59.
Zeelenberg, M. and Pieters, R. (2007). ‘A Theory of Regret Regulation 1.0.’ Journal of Consumer Psychology 17(1), 3–18.