Interaction Problems for Utility Maximizers
Published online by Cambridge University Press: 01 January 2020
Extract
This essay is arranged in three sections. In the first I consider interaction problems that can frustrate maximizers. My object here is to add, to the kind of case discussed by Gauthier, another in which maximizers would not do well. In the next section I set out conditions under which ‘straight’ or ordinary maximizers could avoid their problems as surely and as easily as could Gauthier's ‘constrained’ maximizers. And in the last section I comment on the relative merits of straight and ‘constrained’ maximization. Attention here is paid first to the suggestion that straight maximizers, in view of their problems, would choose not to be straight maximizers, would indeed choose to be ‘constrained’ maximizers, and that this shows that there is something wrong with straight maximization. Rejecting this inference, I turn in conclusion to the idea that it follows from the ‘incompleteness’ of straight maximization, from the fact that it is not always possible, that there is something wrong with it. This inference too is rejected.
- Type: Research Article
- Copyright: © The Authors 1975
References
* A comment on David Gauthier's “Reason and Maximization,” this journal, vol. IV (March 1975), pp. 411–33. Page references unless otherwise indicated are to that paper. Versions of the paper and this comment were read on June 4, 1974, in Toronto to the Institute on Moral and Social Philosophy sponsored by the Canadian Philosophical Association.
1. ‘Maximizer’ in this essay always means straight maximizer. And ‘straight maximizer’ and ‘utility maximizer’ are used interchangeably; similarly for cognates. Whether or not these words and phrases cover ‘constrained’ maximization (more precisely, whether or not ‘constrained’ maximization in which keeping certain agreements is accorded ‘priority’ is extensionally equivalent to straight maximization, supposing certain ‘premiums’ placed on the keeping of these agreements) is a hard and probably not very important question. We return to it for brief comment in our short second section.
2. See J. H. Sobel, “The Need for Coercion,” in Coercion, eds. Pennock and Chapman (Aldine-Atherton, Chicago/New York, 1972), for a fuller discussion of this case.
3. Here is an argument; only the names of the actions have been changed. Row and Column could reason as follows:
Since we are both utility maximizers, I know we would both do best if we were both to tend the fire. Since you know what I know, you know this. Therefore I know that we both know that we would both do best if we were both to tend the fire. And you know that we both know… This mutuality of knowledge and of advantage ensures that your decision will parallel mine; I may then treat the situation as if my decision were a joint decision. Therefore, as a utility maximizer, I should tend the fire.
David Gauthier, “The Impossibility of Egoism,” Journal of Philosophy, August 15, 1974.
It seems clear that Gauthier cannot consistently endorse this argument, for if good it could be used to resolve the Prisoner's Dilemma in (C,C) despite the presence of dominant strategies converging on (D,D). Note that Gauthier himself uses the argument (more precisely, a ‘condition’ the sole support for which is this argument) to resolve the structure
[payoff matrix not reproduced in this extract]
in the cell (R2, C2) despite the presence of (weakly) dominant strategies that converge on (R1, C1). (See p. 453, “The Impossibility of Egoism.”) It seems that Gauthier cannot consistently use this argument, and it is in any case not a good argument. Whether or not Row can properly treat his decision as a joint decision quite obviously depends entirely on whether or not he has reason to think it would influence Column's. We of course assume for our cases that actions in them are all causally independent. A connected point is this: even though, given the symmetry of the case, it is true that whatever is utility maximizing for Row is so for Column, it is not true that whatever Row decided to do and did, Column would decide to do and do. It should suffice to recall that Row can after all make mistakes. He is perfectly rational and so will not, but he can. And if he were to make a mistake, Column would not ‘follow suit’ but, by the general hypothesis, would do the reasonable thing, which of course is what he expects Row to do. Note that it is no part of our idealizing assumptions that whatever Row did Column would expect it. (See “The Need for Coercion” for a detailed statement of the assumptions for a hyperrational, ideal utility maximizer community, and specifically section 2.131 for consideration of an argument similar to the one that heads this footnote. See Robert Nozick, “Newcomb's Problem and Two Principles of Choice,” in Essays in Honor of Carl G. Hempel, ed. N. Rescher (D. Reidel, Dordrecht, 1970), for further evidence of the importance in decision theory of the idea of causal influence. Utility maximizers, properly conceived (this is not Nozick's own ‘solution’ to the Problem), are concerned with the degrees to which their possible actions figure to influence possible outcomes, not with conditional probabilities of outcomes on actions where these probabilities are based on one's total evidence. Utility maximizers, properly conceived, are in this way, even if not in all ways, still ‘utilitarians’.)
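The point about causal independence can be made concrete. Here is a minimal sketch in Python, under assumptions not drawn from the paper: standard hypothetical Prisoner's Dilemma payoffs, and a Row whose credence in Column's cooperation is fixed, i.e. causally independent of Row's own choice. On those assumptions defection maximizes expected utility at every credence, so Row cannot treat the situation “as if my decision were a joint decision” unless his choice would actually influence Column's.

```python
# Hypothetical Prisoner's Dilemma payoffs for Row (Column's are symmetric).
# These particular numbers are assumed for illustration; any payoffs with
# the usual ordering T > R > P > S would do.
PAYOFF = {
    ("C", "C"): 2,  # R: mutual cooperation
    ("C", "D"): 0,  # S: Row cooperates, Column defects
    ("D", "C"): 3,  # T: Row defects, Column cooperates
    ("D", "D"): 1,  # P: mutual defection
}

def expected_utility(row_act: str, p_column_cooperates: float) -> float:
    """Row's expected utility when Column's act is causally independent of
    Row's, so Row's credence in Column cooperating is the same whichever
    act Row performs."""
    return (p_column_cooperates * PAYOFF[(row_act, "C")]
            + (1 - p_column_cooperates) * PAYOFF[(row_act, "D")])

# For every fixed credence in Column's cooperation, D strictly beats C.
for p in [0.0, 0.25, 0.5, 0.75, 1.0]:
    assert expected_utility("D", p) > expected_utility("C", p)

print("With causal independence, defection dominates at every credence; "
      "the 'joint decision' argument needs influence, not mere correlation.")
```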
4. Does every situation possessed of exactly one equilibrium resolve for hyperrational utility maximizers? No. For example, some structures possessed of unique mixed strategy equilibria in which the mixed strategies are not ‘centroid’ strategies (strategies in which pure strategies are made equally probable) do not resolve for such agents. Thus the structure
[payoff matrix not reproduced in this extract]
has only one equilibrium, namely the strategy pair (2/3 R1, 1/3 R2); (2/3 C1, 1/3 C2). So it is clear that this structure can resolve for hyperrational utility maximizers only in this strategy pair. At least this much would follow from proper postulates for the hyperrational utility maximizer community: some aspects of the theory of what perfect utility maximizers would do under ideal epistemic conditions are clear even if other aspects, including the exact character of proper and complete postulates for their conditions and perfections, are not entirely clear. So Row and Column could employ only the indicated equilibrium strategies; and it is nearly as clear that they could not employ even these strategies, and that the structure, despite its unique equilibrium, would not resolve for hyperrational utility maximizers. The argument is indirect:
Suppose the structure resolves in its equilibrium. Then (2/3 R1, 1/3 R2) is selected by Row's principle and has greater informed expected utility than does any other strategy open to him. Column knows this: in a hyperrational community there is no private relevant information. So Column expects (2/3 R1, 1/3 R2), which means that Column's informed expected utility for each of his strategies, pure or mixed, is -5/3. Column's principle does not single out (2/3 C1, 1/3 C2) or any other strategy, and Column is indifferent as to what strategy he employs. Knowing this, as he would, Row judges that each of Column's strategies is equally probable. But then R1, not the mixed strategy (2/3 R1, 1/3 R2), has greatest informed expected utility and is selected by Row's principle, contrary to the supposition.
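The regress can be checked numerically. Since the structure's payoff matrix is not reproduced in this extract, the sketch below (Python) uses a hypothetical zero-sum matrix constructed only to fit the facts stated in the text: the unique equilibrium is (2/3 R1, 1/3 R2); (2/3 C1, 1/3 C2), Column's informed expected utility there is -5/3, and R1 beats the equilibrium mixture against an equiprobable Column. The particular payoffs are my assumption, not the paper's.

```python
from fractions import Fraction as F

# Hypothetical zero-sum payoffs for Row (Column gets the negation).
# Chosen only to match the facts stated in the text; not the paper's matrix.
ROW_PAYOFF = [[F(3, 2), F(2)],   # R1 against (C1, C2)
              [F(2),    F(1)]]   # R2 against (C1, C2)

def row_eu(row_mix, col_mix):
    """Row's expected utility for mixed strategies over (R1, R2), (C1, C2)."""
    return sum(row_mix[i] * col_mix[j] * ROW_PAYOFF[i][j]
               for i in range(2) for j in range(2))

eq_row = (F(2, 3), F(1, 3))
eq_col = (F(2, 3), F(1, 3))

# Step 1: if Row plays his equilibrium mixture, Column's expected utility
# is -5/3 whatever Column does, so Column's principle selects nothing.
for col in [(F(1), F(0)), (F(0), F(1)), eq_col]:
    assert -row_eu(eq_row, col) == F(-5, 3)

# Step 2: knowing Column is indifferent, Row treats Column's pure strategies
# as equally probable; against that, the pure strategy R1 strictly beats the
# equilibrium mixture, overturning the supposition that Row plays it.
uniform_col = (F(1, 2), F(1, 2))
assert row_eu((F(1), F(0)), uniform_col) > row_eu(eq_row, uniform_col)

print("Regress confirmed: equilibrium play undermines its own premises.")
```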
Not all unique equilibrium structures resolve for hyperrational utility maximizers. Not even all two-person, zero-sum unique equilibrium structures resolve for them. Perhaps all structures with unique pure strategy equilibria resolve for hyperrational agents. That seems right, though it of course does not follow from the fact that these structures could resolve for such agents only in their equilibria. What is needed is an account of how hyperrational maximizers would exploit, if not the pure equilibrium feature itself, then other features that would ‘in this case or that’ attend it. I do not have such an account, though I am responsible for a suggestion, which now seems plainly mistaken, according to which agents could, given weakly dominant strategies, ‘reason by elimination’ to their places in a unique pure strategy equilibrium. (See “The Need for Coercion,” p. 174. The argument I there say hyperrational agents could use would have them beg the question of whether or not they are ‘up to’ the impending situation. Note that my discussion on pp. 173–4 of that paper is, as it happens, not improved by several printer's errors, specifically, seven misplaced or missing ‘negation-bars’.)
5. Gauthier, in “The Impossibility of Egoism,” claims that a certain three-person structure would be impossible for utility maximizers and that all two-person structures are possible. I reject the second conjunct (see structures II and III as well as footnote 3 above), and am unsure about the first. Regarding the first, it seems that the three-person structure Gauthier considers might resolve at its sole equilibrium (its sole equilibrium is a pure strategy equilibrium). Note that his case against that resolution rests on a ‘condition’ according to which situations possessed of more than one equilibrium cannot resolve, for ideally well-informed utility maximizers, in an equilibrium inferior for all agents involved to another equilibrium. (I have omitted presently irrelevant refinements.) And this ‘condition’ rests in turn only on the argument criticized in footnote 3 above. Both the ‘condition’ and its sole ground are, I think, ill-conceived.
6. At least one of the conditions for the existence of a von Neumann and Morgenstern utility function is probably in error because overly stringent. Reflection on this error, on the fact of a wider range of possible rational preference profiles, might lead one to adjust one's preferences so as to exploit this wider range. I have in mind the continuity or unique indifference condition (see p. 416), according to which, given any trio of possible outcomes A, B, and C, wherein C is preferred to B and B is preferred to A, there is a lottery on A and C such that the subject is indifferent between this lottery and B. Suppose a ‘fine discriminator’ violates this condition in the following manner: he prefers B to each lottery on A and C in which the probability of C is less than 1/2, and prefers each lottery in which the probability of C is 1/2 or greater to B. For the subject and this trio there is a ‘turning point’, but there is no indifference point. And this is no ground at all for holding that his preferences over outcomes and lotteries are not perfectly rational. See Robert Nozick, The Normative Theory of Individual Choice (unpublished doctoral dissertation, Princeton University, 1963), p. 139. For a very different criticism of utility maximization as a necessary condition for agent rationality in all cases, see Daniel Ellsberg, “Risk, Ambiguity, and the Savage Axioms,” Quarterly Journal of Economics, 1961. Both criticisms go in different ways to the intrinsic merits of utility maximization as a partial analysis (that is, as an analysis of an ever-present part) of practical rationality, and so these criticisms contrast sharply with the kind developed by Gauthier, based as it is on certain supposed bad consequences (causal, not logical) of utility maximization, more precisely, of agents' being utility maximizers. Compare Gauthier's criticism of straight maximization with D. H. Hodgson's criticisms of act-utilitarianism in Consequences of Utilitarianism (Clarendon Press, Oxford, 1967).
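The shape of the ‘fine discriminator’ counterexample can be set out in a few lines. A minimal sketch in Python: the threshold 1/2 is the one given in the text, while the three-valued encoding of the comparison is my own assumption for illustration.

```python
def compare_to_B(p: float) -> int:
    """The fine discriminator's comparison of the lottery (C with probability
    p, A with probability 1 - p) against the sure outcome B:
    -1 if B is strictly preferred, +1 if the lottery is, 0 if indifferent.
    By construction this never returns 0: there is no indifference point."""
    return 1 if p >= 0.5 else -1

# There is a turning point: preference flips at p = 1/2 ...
assert compare_to_B(0.499) == -1 and compare_to_B(0.5) == 1

# ... yet no lottery is ranked indifferent to B, so the continuity (unique
# indifference) condition of the vNM axioms fails even though the ordering
# over B and all these lotteries is complete and transitive.
assert all(compare_to_B(i / 1000) != 0 for i in range(1001))

print("A turning point without an indifference point: continuity violated.")
```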
7. One can also imagine cases in which constrained maximizers would want jointly to lock themselves into their constrained ways of thought. Could they do this? I used to think they certainly could, but a new sentence in Gauthier's paper makes matters unclear (or worse), at least as regards certain communities of ideally well-informed constrained maximizers, for according to this sentence, “even if an agreement is reached, a constrained maximizer is committed to carrying it out only in a context of mutual expectations… of all parties … that it will be carried out” (p. 426). This proviso may generate, by the ‘after you’ mechanism, problems with keeping agreements, ‘no resolution’ problems, and so render pointless the making of agreements, ‘locking-in’ agreements or any others, at least for ideally well-informed and reasonable constrained maximizers.
8. Straight and constrained maximization again stand together. Recall that constrained maximization is said to coincide with straight maximization ‘in the state of nature’, or as some would say ‘in the non-cooperative game’. (See p. 429.) So constrained maximization is, at least ‘in the state of nature,’ incomplete in just the way that straight maximization is. Nor does the fact that, given conditions for the making of agreements, constrained maximizers have a remedy for their ‘incompleteness problem’ make a difference, since the same can be true of straight maximizers. Gauthier evidently thinks of constrained maximization as always possible (see “The Impossibility of Egoism,” pp. 455–6), but for the reasons just given it cannot be.
9. This assessment, this view of the import of the ‘incompleteness’ of utility maximization, contrasts (of course) with the way in which Gauthier perceives the theoretical position. (See “The Impossibility of Egoism,” concluding Section X.)