A Defence Of Constrained Maximization
Published online by Cambridge University Press: 13 April 2010
- Type: Articles
- Dialogue: Canadian Philosophical Review / Revue canadienne de philosophie, Volume 36, Issue 3, Summer 1997, pp. 453–468
- Copyright © Canadian Philosophical Association 1997
Notes
1 I have in mind specifically Gauthier's argument in Morals by Agreement (New York: Oxford University Press, 1986), but this argument evolved from several earlier articles, especially “Reason and Maximization,” Canadian Journal of Philosophy, 4, 3 (March 1975): 411–33. And I think the argument of Morals by Agreement is illuminated by his papers, “Deterrence, Maximization, and Rationality,” in The Security Gamble, edited by D. MacLean (Totowa, NJ: Rowman & Allenheld, 1984), pp. 100–22, and “In the Neighborhood of the Newcomb Predictor,” in Proceedings of the Aristotelian Society (The Aristotelian Society, 1989), vol. 89, pp. 179–94. In this article, I will treat only part of Gauthier's project, showing that some non-maximizing actions are rational. This of course leaves out much of the larger picture of trying to provide a contractarian justification for ethics.
2 Gauthier, Morals by Agreement, p. 82.
3 Ibid., p. 170.
4 Utility will be the same given the plausible assumption that people do not, for instance, go out of their way to punish those who are known to be constrained maximizers.
5 This is the standard reading of the argument. The use of the terms “inherited” and “transferred” is from Derek Parfit, Reasons and Persons (New York: Oxford University Press, 1984), p. 40. But many others interpret the argument in the same way. See, e.g., David Copp, “Contractarianism and Moral Skepticism,” in Contractarianism and Rational Choice, edited by P. Vallentyne (New York: Cambridge University Press, 1991), pp. 196–228; Holly Smith, “Deriving Morality from Rationality,” in Contractarianism and Rational Choice, pp. 229–53; Peter Vallentyne, “Gauthier's Three Projects,” in Contractarianism and Rational Choice, pp. 1–11; and Gregory Kavka, “Responses to the Paradox of Deterrence,” in The Security Gamble, pp. 155–59. Even Gauthier, in “Deterrence, Maximization, and Rationality,” in The Security Gamble, pp. 100–22, sounds like he means this to be the argument (see note 12 below). Edward McClennen takes this to be the standard interpretation, but offers his own revision of the argument in “Constrained Maximization and Resolute Choice,” in The New Social Contract, edited by E. Frankel Paul, F. Miller, and J. Paul (New York: Basil Blackwell, 1988), pp. 95–118.
6 Much of Gauthier's argument, and the responses to it, are put in terms of dispositions. I am translating this into talk of following policies, because I think the debate is made clearer by putting it in these terms. Taking “disposition” to mean something like “steady tendency to conform one's actions to a given policy” is, I believe, true to the spirit of the original arguments.
7 See Kavka, Security Gamble, pp. 156–57, and Parfit, Reasons and Persons, pp. 21–22.
8 Gregory Kavka, Moral Paradoxes of Nuclear Deterrence (New York: Cambridge University Press, 1987), p. 45.
9 Duncan MacIntosh adds that, even if the inheritability principle were true, Gauthier's argument still would not work. Even if it is rational to adopt a c-max policy earlier, it will be rational (utility-maximizing) to re-adopt an s-max policy after you have gained your partner's cooperation, according to MacIntosh's objection. So the rationally adopted policy at that time will be s-max, and the inheritability principle would only show that straightforward maximizing actions were rational. See “Preference's Progress: Rational Self-Alteration and the Rationality of Morality,” Dialogue, 30, 1–2 (1991): 3–32. In section 3, I explain why I do not believe this style of objection undermines my version of Gauthier's argument.
10 Parfit, Reasons and Persons, pp. 5–17.
11 Parfit in fact wants to put his objection more strongly than this, in the form of a dilemma, but I think the dilemma is false when deployed against Gauthier. Adapting Parfit's description of the dilemma from Reasons and Persons, pp. 19–20, it would be deployed against the claim that if a theory of rationality A recommends following the advice of some incompatible theory B, this shows that theory B is a better theory and more likely to be true than theory A. The dilemma is that theory A must either be true or false. If it is true, then theory B must be false since it is incompatible with A. Suppose, then, that theory A is false. If A is false, its recommendations do not support any claim, including the claim that theory B is true. The dilemma appears devastating. But it is effective only when applied to incompatible theories, and s-max and c-max are compatible in a well-defined range of cases. Gauthier distinguishes parametric choice from strategic choice. A parametric choice is one “in which the actor takes his behaviour to be the sole variable in a fixed environment” (Morals by Agreement, p. 21), while a strategic choice is one in which one's choices can affect how others choose to act. C-max and s-max offer the same advice in situations of parametric choice (and in most situations of strategic choice). So if choosing to adopt a policy of c-max is a parametric choice, Parfit's dilemma will be a false one. S-max could be the correct theory of parametric choice even if c-max is the correct theory overall. And there is no reason the choice to follow a policy of constrained maximization could not be a parametric choice. A choice between policies will affect how one interacts with others, but is not necessarily made while interacting with others (this was pointed out to me by Geoffrey Sayre-McCord). Gauthier says that “constrained maximization is a disposition for strategic choice which is parametrically chosen” (Gauthier, Morals by Agreement, p. 183). But even if Parfit's dilemma is false, the more general problem about showing the significance of indirect self-defeat remains.
12 I am not certain whether the strategy I propose here should be taken as a new approach or simply a more charitable reading of Gauthier. The argument as presented in Morals by Agreement leaves room for my reading, and I will point out a passage that seems best explained by this reading. In “Deterrence, Maximization, and Rationality,” however, Gauthier sounds as if he wants to make his argument about the rationality of retaliation depend on something like the claim that rationality can be inherited. But some statements from “In the Neighborhood of the Newcomb Predictor” are highly suggestive of the approach I offer, with Gauthier even borrowing Parfit's talk of a theory's “aim” as I do. I am willing to accept a description of my project in this article either as offering a new strategy or as stressing a line of argument that is already present in Gauthier's work.
13 Conversations with Debra DeBruin helped me see the importance of this approach. Parfit also divides a theory of rationality into parts—roughly an aim, a policy, and a “supremely rational disposition” (see Parfit, Reasons and Persons, p. 8). But Parfit does not acknowledge the possibility of conceptually separating the parts of a theory in the way I propose.
14 Parfit, Reasons and Persons, pp. 7–8.
15 For a discussion of a similar point about consequentialist ethical theories, see Peter Railton, “Alienation, Consequentialism, and the Demands of Morality,” Philosophy and Public Affairs, 13, 2 (Spring 1984): 136–71.
16 Gauthier actually considers only two policies, constrained and straightforward maximization. I will follow his lead in this, although some possible problems have been raised with this simplification. See, e.g., Copp, “Contractarianism and Moral Skepticism”; Smith, “Deriving Morality from Rationality”; or Peter Danielson, “The Visible Hand of Morality,” Canadian Journal of Philosophy, 18, 2 (June 1988): 357–84.
17 Stephen Darwall notes that Gauthier distinguishes between a rational disposition and a rationally chosen disposition, but does not make clear what is different about the two. He thinks that a rational disposition is just a disposition the acquiring of which will maximize an agent's utility, rather than a disposition that itself best satisfies an aim. See Stephen Darwall, “Rational Agent, Rational Act,” Philosophical Topics, 14, 2 (Fall 1986): 33–57.
18 Gauthier, Morals by Agreement, p. 183.
19 The distinction between rationally chosen dispositions and rational dispositions is not merely verbal. In fact, Gauthier cannot, or at least should not, rely on the argument that a constrained maximizing disposition is a rationally chosen disposition. Gauthier takes utility to be satisfaction of current preferences, so even if a constrained maximizing disposition will provide one with more utility over a lifetime than will a straightforward maximizing disposition, the straightforward maximizing theory of choice would not necessarily recommend choosing a constrained maximizing disposition. An agent's current preferences need not include a preference to provide oneself with as much utility as possible over the course of one's entire life. So constrained maximization may not be a rationally chosen disposition for all agents, even if it is a rational disposition. This is another reason the two earlier versions of Gauthier's argument are unsatisfactory.
20 There is another sense in which a rational policy might be thought to diverge from evaluation of rationality, but this sort of divergence does not pose a threat to c-max in particular. What is important, on this line of thinking, is actually achieving an aim as well as possible, so an action is rational only if it actually achieves the theory's aim. An agent might act in accordance with the correct policy yet fail to achieve the aim of the theory as well as possible because events do not occur in the way he or she had every reason to expect they would. One might claim that the agent's action is irrational in such a case because it failed to achieve the aim of the theory as well as some other action would have. The value placed on a certain aim leaves room for this emphasis on actually achieving the aim, and so for a divergence between policy and evaluation. But if this sort of divergence occurs between evaluation and a policy of c-max, it would equally occur between evaluation and a policy of s-max. For that matter, it would be organic to any theory of rationality concerned with achieving an aim, or to consequentialist ethical theories. Though I find it odd to think that unforeseeable twists of fate should matter to the rationality of a choice, the possibility of this sort of divergence between policy and evaluation could pose a challenge for the defender of c-max. But it is not the challenge that concerns the agent when he or she is advocating c-max over s-max.
21 This criticism was offered by Duncan MacIntosh in his referee's comments for this paper. It is similar to MacIntosh's criticism of Gauthier mentioned in note 9 above.
22 I am not denying that changes in the world will sometimes affect what policy best satisfies an aim, and so will also affect what policy is the correct rational policy. If everyone became much less translucent, for instance, it could cause s-max to become the correct rational policy. I am only claiming that if a policy is the best one for situation x, and then the only “change” that occurs is that situation x actually arises, it is perverse to say that this causes the policy to become the wrong policy.
23 Parfit, Reasons and Persons, pp. 21–22.
24 Some will think that these counterintuitive cases count more against s-max than c-max. The difficult cases for constrained maximization arise only when one of the parties involved inaccurately predicts what the other will do. You will not have the opportunity to choose to exploit your partner unless he misperceives what policy you will follow in making your choice. In contrast, problem cases for straightforward maximization arise even if both parties have perfect knowledge of the other's disposition and how he will choose. Straightforward maximization dooms even perfectly perceptive agents to sub-optimal, sometimes tragically sub-optimal, outcomes. Some will think this shows that counterintuitive advice is more organic to s-max than to c-max. Whether this is so depends on one's views about how normative theories should apply to ideal agents and agents in the real world. The issue is too complicated to explore in this article.
25 See Geoffrey Sayre-McCord, “Deception and Reasons to be Moral,” American Philosophical Quarterly, 26, 2 (April 1989): 113–22.
26 For their invaluable help with this paper, I wish to thank Debra DeBruin, Christopher Morris, Bernard Boxill, Michael Resnik, and especially Geoffrey Sayre-McCord.