The Risks of a Reputation for Toughness: Strategy in Public Goods Provision Problems Modelled by Chicken Supergames
Published online by Cambridge University Press: 27 January 2009
Extract
In this article, versions of the n-person Chicken supergame are applied to the problem of public goods provision. The literature on two-person Chicken suggests that players should build a reputation for toughness in Chicken supergames by making, and sticking to, commitments not to co-operate. By doing this they are able to make the other player more likely to co-operate in future rounds of the game. However, in the n-person and continuous-strategy models examined here there may be risks associated with a reputation for toughness since, under certain conditions, the greater a player's reputation for toughness, the less likely other players are to co-operate in future rounds of the game. The implication is that the chances that vital public goods, such as security and environmental stability, will be provided may be reduced if players falsely generalize from the argument for maintaining a tough reputation in two-person, two-strategy Chicken supergames.
- Copyright © Cambridge University Press 1987
References
1 Kahn, H., On Thermonuclear War (Princeton, N.J.: Princeton University Press, 1977); Schelling, T. C., Arms and Influence (New Haven, Conn.: Yale University Press, 1966); Snyder, G. and Diesing, P., Conflict Among Nations (Princeton, N.J.: Princeton University Press, 1961).
2 The most crucial feature of two-person, two-strategy Chicken is that there are two equilibria, and in both of these equilibria one player co-operates and the other defects. In the Chicken game matrix below the equilibria are starred.

                 Player 2
                 C        D
  Player 1  C   3, 3     2, 4*
            D   4, 2*    1, 1

Suppose that Player 1 can physically commit himself to defection or can convince Player 2 that he is very unlikely to back down from his commitment. Then Player 1 can force Player 2 into co-operation, obtaining for himself a payoff of 4. As discussed below, deterrence theory concerns itself with the question of how credible commitments that are not physically binding must be to deter another player.
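The equilibrium claim in footnote 2 can be checked mechanically. The sketch below assumes the standard ordinal Chicken payoffs (4 best, 1 worst), a concrete instance consistent with the payoff of 4 mentioned in the footnote, and enumerates the pure-strategy Nash equilibria:

```python
# Assumed ordinal Chicken payoffs (4 > 3 > 2 > 1); C = co-operate, D = defect.
payoffs = {
    ("C", "C"): (3, 3),
    ("C", "D"): (2, 4),
    ("D", "C"): (4, 2),
    ("D", "D"): (1, 1),
}

def is_equilibrium(r, c):
    """A cell is a pure-strategy Nash equilibrium if neither player
    can gain by unilaterally switching strategy."""
    p1, p2 = payoffs[(r, c)]
    other_r = "D" if r == "C" else "C"
    other_c = "D" if c == "C" else "C"
    return payoffs[(other_r, c)][0] <= p1 and payoffs[(r, other_c)][1] <= p2

equilibria = [cell for cell in payoffs if is_equilibrium(*cell)]
print(equilibria)  # [('C', 'D'), ('D', 'C')]
```

The only equilibria found are the two asymmetric outcomes in which one player co-operates and the other defects, which is what makes a credible commitment to defection so valuable.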
3 See, for example, Iklé, F. C., How Nations Negotiate (New York: Praeger, 1966), Chap. 6; Raiffa, H., The Art and Science of Negotiation (Cambridge, Mass.: Belknap Press, 1982), p. 13.
4 For a discussion of commitment tactics in budgetary politics see Wildavsky, A., The Politics of the Budgetary Process (New York: Little, Brown, 1964), pp. 101–23. On the importance of maintaining the credibility of medium-term financial strategies see Schelling, T. C., ‘Establishing Credibility: Strategic Considerations’, American Economic Review, LXXII (1982), 77–8.
5 Hardin points out that many industries in the United States constitute privileged groups in Olson's sense; at least one member would be willing to take on the costs of lobbying alone. See Hardin, R., Collective Action (Baltimore, Md.: Johns Hopkins University Press, 1982), p. 31. Also see Olson, M., The Logic of Collective Action (Cambridge, Mass.: Harvard University Press, 1972), pp. 143–8. Neither author realizes that an industry in which more than one firm is willing to bear the costs of lobbying alone involves a game of Chicken between those firms which are willing. Hence the conclusions drawn about the power of concentrated industries do not immediately follow.
6 Taylor, M., Anarchy and Cooperation (New York: Wiley, 1976), pp. 14–15.
7 Hardin, Collective Action, p. 55.
8 See Taylor, M. and Ward, H., ‘Chickens, Whales and Lumpy Goods’, Political Studies, XXX (1982), 350–70, p. 353. See also Lipnowski, I. and Maital, S., ‘Voluntary Provision of a Pure Public Good as the Game of Chicken’, Journal of Public Economics, XX (1983), 381–6.
9 See Schelling, T. C., The Strategy of Conflict (Oxford: Oxford University Press, 1977), Appendix A. See also Kahn, On Thermonuclear War, pp. 282–90.
10 Taylor and Ward, ‘Chickens, Whales and Lumpy Goods’.
11 Taylor, Anarchy and Cooperation. For a discussion of the evolution of co-operation in Prisoners' Dilemma supergames, see Axelrod, R., The Evolution of Cooperation (New York: Basic Books, 1984).
12 The best account is still Schelling, Arms and Influence, Chap. 2. Another useful account is Jervis, R., ‘Bargaining and Bargaining Tactics’, in Pennock, J. R. and Chapman, J. W., eds, Coercion (Nomos XIV) (Chicago: Atherton/Aldine, 1972). The novel idea here is that A's commitment may also alter B's incentives to stand firm or to retreat.
None of the standard accounts provides an understanding of the dynamics of the commitment process. This problem is especially pressing for n-person Chicken. How does the commitment of one player alter the commitments of the other (n – 1)? Is there a bandwagon effect, in which more players commit through time, or a negative bandwagon effect, in which commitments are reversed? Will commitments stabilize within some time limit? There are other complications not considered here. In the pre-game of commitment there is no reason why the same set of strategies should be available to each player. Also, commitment over more than one round of the game may be possible.
13 Taylor, Anarchy and Cooperation. For some results on Chicken supergames which do assume perfect information, see Ward, H., A Behavioural Theory of Bargaining (doctoral dissertation, University of Essex, 1979), Appendix A.
14 Although the literature often assumes infinite supergames, or that players perceive the game as infinite, most games are finite. In many cases players face a game of ruin in which the maximum amount of the good that could be provided shrinks as players fail to co-operate and the game progresses. Eventually players know that they are at ‘the brink’, at which, if they do not co-operate, the good will become unavailable and they will need to co-operate for some considerable time merely to stave off disaster. Many environmental problems have this quality.
15 See Brams, S. J. and Kilgour, M., ‘Optimal Deterrence’, Social Philosophy and Policy, Autumn 1985, 118–35; Zagare, F., ‘Towards a Reformulation of the Theory of Mutual Deterrence’, International Studies Quarterly, XXIX (1985), 155–71. Players have mixed strategies for pre-empting and retaliating.
16 Jervis, R., ‘Deterrence Theory Revisited’, World Politics, XXXI (1979), 289–324. I follow Jervis's divisions between phases of the literature.
17 Schelling, Arms and Influence, Chap. 2.
18 Ellsberg, D., ‘The Theory and Practice of Blackmail’, reprinted in Young, O., ed., Bargaining (Urbana: University of Illinois Press, 1975).
19 Schelling, Arms and Influence, p. 55.
20 Brams, S. and Hessel, M., ‘Threat Power in Sequential Games’, International Studies Quarterly, XXVIII (1984), 23–44, makes this assumption in a wider context, examining all non-zero-sum (2 × 2) games, including Chicken. Each round of the game is sequential: the round terminates when the player with the next move chooses not to change strategy. The game is played repeatedly, and the effects of one player having a credible threat to move the game to a Pareto-inferior outcome are considered. The idea that there are net gains from having credible threats available, even though short-term costs may be involved, is an unanalysed assumption of the model.
21 Jervis, R., Perception and Misperception in International Politics (Princeton, N.J.: Princeton University Press, 1976), p. 58.
22 Jervis, R., Perception and Misperception, Chap. 3.
23 Snyder, ‘Crisis Bargaining’.
24 Wagner, R. H., ‘Deterrence and Bargaining’, Journal of Conflict Resolution, XXVI (1982), 329–58.
25 Suppose that player i's subjective probability p_ij that any player j commits increases at a decreasing rate with the number of times j has committed.
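The property in footnote 25, a subjective probability p_ij that rises at a decreasing rate in the number of past commitments, can be sketched with a hypothetical updating rule; the functional form and the parameters p0 and a are illustrative assumptions, not the author's.

```python
# Hypothetical (not the author's) functional form: p_ij rises toward 1,
# concavely, in the number of times n that j has committed so far.
def p_commit(n, p0=0.2, a=0.5):
    """p_ij(n) = 1 - (1 - p0) * a**n, with 0 < a < 1 and p0 the prior."""
    return 1 - (1 - p0) * a ** n

probs = [p_commit(n) for n in range(4)]
increments = [probs[k + 1] - probs[k] for k in range(3)]
# Successive increments shrink: the probability increases at a decreasing rate.
assert all(increments[k] > increments[k + 1] > 0 for k in range(2))
print([round(p, 3) for p in probs])  # [0.2, 0.6, 0.8, 0.9]
```

Any concave increasing form bounded by 1 would illustrate the same point; this geometric rule is just the simplest choice.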
26 Suppose some fraction of contributions, say f < 1, is returnable if the good is not provided. Then the analysis goes through, replacing c by (1 – f)c for contingencies in which i contributes but the good is not provided.
27 Suppose that only one of the players need contribute to provide the good. Then the equivalent of Condition One is p12p13 > (c1 – k1)/b1. Obviously 1 is more likely to co-operate as p12 or p13 increases.
28 Alternatively, if the sequence started at t = 0, Player 1 would have done worse if he had a higher reputation for toughness based on past behaviour in other games.
29 Wonnacott, T. and Wonnacott, R., Introductory Statistics for Business and Economics, 2nd edn (New York: Wiley, 1977), p. 224.
30 Taylor and Ward, ‘Chickens, Whales and Lumpy Goods’, p. 368.
31 In an extremely interesting article by Bliss and Nalebuff, players can choose the timing of a fixed contribution from a continuum. The authors consider the optimal timing of contribution: how long do you wait before swerving if you wish to maximize your expected utility? This dimension is not considered here, but is important empirically. See Bliss, C. and Nalebuff, B., ‘Dragon Slaying and Ballroom Dancing: The Private Supply of a Public Good’, Journal of Public Economics, XXV (1985), 1–12.
32 As I show in Appendix III, the argument here does not depend upon this assumption. It also holds for provision functions, like that shown by the dotted line in Figure 1, which are ‘roughly logistic’ in shape. Such provision functions are quite empirically plausible for some public goods.
33 Of course, some goods are such that the idea of negative contributions is implausible. In addition, possible levels of contribution are liable to be bounded. By ignoring these complications, we can avoid using truncated distributions. So long as the range of contributions is small, this is of little concern.
34 The analysis is unaffected so long as some proportion of contributions is not returnable.
35 In the n-person case, Player 1 has a normal distribution for the sum of the other players' contributions, derived from normal distributions for each other player. The effects of any other player J's behaviour at (t – 1) on this distribution over the sum of contributions, ceteris paribus, are exactly analogous to those in the two-person case.
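The step in footnote 35 rests on a standard fact: a sum of independent normal random variables is itself normal, with mean and variance equal to the sums of the component means and variances. The belief parameters below are assumed purely for illustration; the sketch checks the analytic moments against a Monte Carlo estimate.

```python
# Assumed beliefs: independent normals N(mu_j, sd_j^2) over each other
# player j's contribution; the implied belief over the sum is normal.
import random

random.seed(0)
beliefs = [(1.0, 0.5), (2.0, 1.0), (0.5, 0.25)]  # illustrative (mean, sd) pairs

sum_mean = sum(mu for mu, _ in beliefs)          # means add
sum_var = sum(sd ** 2 for _, sd in beliefs)      # variances add (independence)

# Monte Carlo check: sample each contribution, sum, and compare moments.
draws = [sum(random.gauss(mu, sd) for mu, sd in beliefs) for _ in range(200_000)]
mc_mean = sum(draws) / len(draws)
mc_var = sum((x - mc_mean) ** 2 for x in draws) / len(draws)

print(round(sum_mean, 2), round(sum_var, 2))  # 3.5 1.31
print(round(mc_mean, 1), round(mc_var, 1))    # close to the analytic values
```

This is why Player 1's distribution over the sum responds to a change in any one player j's behaviour in the same way as in the two-person case: only that player's component mean and variance shift.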
36 Clearly

G_I,t = 1 – F_I,t(s – r_I,t),

where F_I,t is the cumulative distribution function (cdf) of Player 1's distribution over the sum of the other players' contributions. By the symmetry of the cdf around the point (m, ½), where m is the mean of F_I,t, this can be thought of as a reflection of the cdf around a line through the centre of symmetry parallel to the x-axis. To obtain G_I,t as a function of (r_I,t – s) we again reflect about a line parallel to the y-axis passing through the point of symmetry, obtaining a function shaped like the cdf itself. To obtain G_I,t as a function of r_I,t, change the origin of this function, again obtaining a function with the same shape as the cdf.
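For a distribution symmetric about its mean m, the two reflections described in footnote 36 amount to the identity 1 – F(s – r) = F(r – s + 2m): the provision probability, viewed as a function of the contribution r, has the same shape as the cdf itself. The sketch below checks this for the standard normal (m = 0), with an assumed threshold s.

```python
# Numerical check of the reflection argument for the standard normal:
# 1 - F(s - r) should equal F(r - s) when the mean is zero.
import math

def std_normal_cdf(x):
    """Cdf of the standard normal, via the error function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

s = 1.0  # assumed provision threshold
for r in [-1.0, 0.0, 0.5, 2.0]:
    g = 1 - std_normal_cdf(s - r)      # probability the shortfall is covered
    reflected = std_normal_cdf(r - s)  # cdf reflected about both axes, shifted
    assert abs(g - reflected) < 1e-12
print("1 - F(s - r) == F(r - s) for every r checked")
```

So G_I,t, as a function of r_I,t, is an increasing sigmoid of the same shape as the cdf, which is what the footnote's geometric argument establishes.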
37 Although multiple solutions are inconvenient, the argument here does not depend upon U1 being of such a shape as to yield them.
38 Raiffa, The Art and Science of Negotiation, p. 204.
39 This is analogous to Axelrod's work on the Prisoners' Dilemma, summarized in The Evolution of Cooperation. However, Axelrod only considers pairwise interactions, not true n-person games. I suggest that this is inadequate in relation to public goods problems.
40 For experimental work on a game analogous to the n-person case in Section III see van de Kragt, A., Orbell, J. and Dawes, R., ‘The Minimal Contributing Set as a Solution to Public Goods Problems’, American Political Science Review, LXXVII (1983), 112–22. The experimenters did not see the game as Chicken and subjects were not given any opportunity to commit. This game is theoretically analysed in Rapoport, A., ‘Provision of Public Goods and the MCS Experimental Paradigm’, American Political Science Review, LXXIX (1985), 148–55. Rapoport derives results about this case similar to those in Taylor and Ward, ‘Chickens, Whales and Lumpy Goods’, but examines a wider variety of assumptions about players' probability estimates.
41 Taylor and Ward suggest certain restrictions on the utility function if the game is to be a continuous-contribution generalization of Chicken. In particular, U_I must be such that the higher J's known contribution level, the lower I's optimal contribution level, so that J has an incentive to commit so as to force I to contribute more. See Taylor and Ward, ‘Chickens, Whales and Lumpy Goods’, p. 363.
42 For these conditions see Maxwell, E. A., An Analytical Calculus, Vol. IV (Cambridge: Cambridge University Press, 1963), pp. 143–7.