Imaginative Motivation
Published online by Cambridge University Press: 01 June 2009
Abstract
This article argues for a certain picture of the rational formation of conditional intentions, in particular deterrent intentions, that stands in sharp contrast to accounts on which rational agents are often not able to form such intentions because of what these enjoin should their conditions be realized. By considering the case of worthwhile but hard-to-form ‘non-apocalyptic’ deterrent intentions (the threat to leave a cheating partner, say), the article argues that rational agents may be able to form such intentions by first simulating psychological states in which they have successfully formed them and then bootstrapping themselves into actually forming them. The article also discusses certain limits imposed by this model. In particular, given the special nature of ‘apocalyptic’ deterrent intentions (e.g. the ones supposedly involved in nuclear deterrence), there is good reason to think that these must remain inaccessible to fully rational and moral agents.
Type: Research Article
Copyright © Cambridge University Press 2009
References
1 See Gregory Kavka, ‘Some Paradoxes of Deterrence’, Journal of Philosophy 75 (1978), pp. 285–302, and Moral Paradoxes of Nuclear Deterrence (Cambridge, 1987). Kavka in fact thought that nuclear deterrence gave rise to a number of different paradoxes of deterrence, but I will consider only what I take to be the most serious such paradox, one involving a tension between agent-rationality and option-rationality. (Daniel Farrell argues that this is in fact the only case that constitutes something akin to a paradox; see his ‘On Some Alleged Paradoxes of Deterrence’, Pacific Philosophical Quarterly 73 (1992), pp. 114–36.) The expression ‘apocalyptic threat’ is Gauthier's (from his ‘Assure and Threaten’, Ethics 104 (1994), pp. 690–721). Gauthier applies it to any deterrent threat ‘that, should it fail, would require [the agent who made the threat] to bring utter disaster on her head’ (p. 719). Note that Gauthier talks of apocalyptic threats, to emphasize the importance of the intention's being made known to the threatened party, but I will use ‘intention’ and ‘threat’ interchangeably.
2 David Gauthier's ‘Deterrence, Maximization, and Rationality’, Ethics 94 (1984), pp. 474–95, is an influential defence of the claim that such deterrent intentions may be entirely rational. Gauthier also argued that if such intentions failed to deter it would be rational to act on them as well (on the grounds that the rationality of forming an intention implies the rationality of acting on the intention, absent a change in the background conditions). For criticism of Gauthier's argument, see Michael Bratman, Intention, Plans, and Practical Reason (Stanford, 1999), pp. 105–6. A rather different picture emerged in Gauthier's ‘Assure and Threaten’, which defends a more complex account of the conditions under which commitment behaviour counts as rational. I describe and criticize Gauthier's account in section 2.
3 In ‘Deterrence and the Fragility of Rationality’, Ethics 106 (1996), pp. 350–77, I present an earlier version of such an account, applied only to the case of apocalyptic deterrent intentions, and argue (wrongly, as I now think) that such ‘intentions’ can be adopted by rational and moral agents in the full knowledge that what they conditionally enjoin is irrational and/or wrong.
4 See P. Pettit and M. Smith, ‘Backgrounding Desire’, Philosophical Review 99 (1990), pp. 565–92, for the difference between an account on which belief and/or desire are foregrounded (in the sense that the agent reasons by focusing on the fact that these are her beliefs and desires) and an account on which they are backgrounded (in the sense that the agent reasons by focusing on the content of these beliefs and desires). I have in mind the backgrounding way of understanding the condition.
5 According to Kavka, ‘[i]t is part of the concept of rationally intending to do something, that the disposition to do the intended act be caused (or justified) in an appropriate way by the agent's view of reasons for doing the act’ (‘Some Paradoxes of Deterrence’, p. 292). See also Michael Bratman's account of the ‘rationality of an agent for her deliberative intentions’ in Intention, Plans, and Practical Reason. Bratman's ahistorical and historical principles both contain the condition that the agent in intending ‘reasonably supposes that [the object of the intention] is at least as well supported by his reasons for action as its relevant, admissible alternatives’ (pp. 84–5). Although Bratman doesn't discuss conditional intentions as such, there is every reason to suppose he would take them to fall under an appropriate extension of this condition.
6 ‘Assure and Threaten’, sect. IX.
7 In ‘Fear and Integrity’, Canadian Journal of Philosophy 38 (2008), pp. 31–49, I suggest how such an account might be extended to unconditional (future-directed) intentions, including the kind of problematic unconditional intentions that feature in Kavka's well-known Toxin Puzzle (‘The Toxin Puzzle’, Analysis 43 (1983), pp. 33–6).
8 For a general argument for the central importance of the emotions in our rational lives, see Michael Stocker (with Elizabeth Hegeman), Valuing Emotions (Cambridge, 1996). The motivational importance of emotions in decision-making is also underscored in important empirical work done by Antonio Damasio and his co-workers. See, for example, Damasio, Descartes' Error: Emotion, Reason, and the Human Brain (New York, 1994) and The Feeling of What Happens (New York, 1999); and Bechara et al., ‘Insensitivity to Future Consequences Following Damage to Human Prefrontal Cortex’, Cognition 50 (1994), pp. 7–15.
9 I am here indebted to Greenspan's ‘Emotional Strategies and Rationality’, Ethics 110 (2000), pp. 469–87, although my emphasis is somewhat different. I have been concerned with the way the intention might be formed, whereas Greenspan seems more concerned with how the agent might bring herself to act on her threat through a rational shift in evaluative perspective. See also Bennett Helm, Emotional Reason (Cambridge, 2001), which attempts to bridge the cognitive-conative divide by, in part, construing emotions as themselves evaluative in nature.
10 For the notion of expressive reasons for action, see, for example, Joseph Raz, The Authority of Law: Essays on Law and Morality (Oxford: Clarendon, 1979), pp. 253–8. What is important in the present use of this idea is that the expressive reason for acting depends for its existence and force on the formation of the intention. The reason was not available for incorporation into intention-independent deliberation about whether to perform the act.
11 For a very different account of such threats and their rationality, see Robert Frank's Passions within Reason: The Strategic Role of the Emotions (New York, 1988).
12 This description will be contentious if the envisaged scenario is a survivable nuclear war (a near-apocalyptic scenario). For the agent issuing the threat may then have as one of her rational and moral goals the conditional goal of ensuring that the attacker doesn't survive intact, to avoid the agent's nation being placed in bondage to a wholly alien way of life (cf. Greenspan, ‘Emotional Strategies and Rationality’, p. 484 n. 24).
13 For an excellent discussion of the distinction, see Gilbert Harman, Reasoning, Meaning and Mind (Oxford, 1999), ch. 2.
14 David Lewis once argued that real-world deterrers (at least those in the U.S.) were a ‘strange’ mixture of good and evil and of the rational and irrational. See his ‘Devil's Bargains and the Real World’, in The Security Gamble: Deterrence Dilemmas in the Nuclear Age, ed. Douglas MacLean (New York, 1984). In conversation, he took this to show that an agent-irrationalist and agent-immoralist view of nuclear threats sets the standards for rationality and morality too high. Lewis's view suggests another way of defending agent-rationalism, although not one I am inclined to accept.
15 Just as ‘X is fragile’ does not mean ‘X will break when struck, no matter what the possible circumstances’, so ‘P is rational’ does not mean ‘P chooses rationally, no matter what the possible circumstances’. Knowing which possible circumstances are relevant is, of course, a difficult matter.
16 Even if the agent does end up leaving her partner, this is still not enough to show that the agent is irrational. It may just signal that the agent's preferences have undergone a sharp, unanticipated shift. She may now see her leaving as something she wants to do to show her disaffection and anger – her action may thus count as expressively rational. By contrast, if the agent leaves her partner but then thoroughly regrets taking this course of action because she continues to identify with her original desires, then we would say that she acted irrationally.
17 I am grateful for helpful critical comments from many colleagues, especially David Braddon-Mitchell, Stewart Candlish, Richard L. Epstein, David Lumsden and Jonathan McKeown-Green.