The Rational Timing of Surprise
Published online by Cambridge University Press: 13 June 2011
Abstract
National leaders are frequently surprised by the actions of other governments. This paper explores the structure common to problems involving the use of resources for achieving surprise. Such resources include deception through double agents and through sudden changes in standard operating procedures. Still other resources for surprise include cracked codes, spies, and new weapons. Since surprise is usually possible only by risking the revelation of the means of surprise, in each case the same problem arises: when should the resource be risked and when should it be maintained for a potentially more important event later? A rational-actor model is developed to provide a prescriptive answer to this question. Examining the ways in which actual actors are likely to differ from rational actors leads to several important policy implications. One is that leaders may tend to be overconfident in their ability to predict the actions of their potential opponents just when the stakes get large. Another implication is that, as observational technology improves, the potential for surprise and deception may actually increase.
- Type: Research Article
- Copyright © Trustees of Princeton University 1979
References
1 Masterman, J. C., The Double-Cross System in the War of 1939 to 1945 (New Haven: Yale University Press 1972).
2 Richard M. Bissell, Jr., “The Bissell Philosophy: Minutes of the 1968 ‘Bissell Meeting’ at the Council on Foreign Affairs as Reprinted by the Africa Research Group,” reprinted as Appendix to Marchetti, Victor and Marks, John D., The CIA and the Cult of Intelligence (New York: Knopf 1974), 381–98; Scoville, Herbert, Jr., “Is Espionage Necessary for Our Society?” Foreign Affairs, Vol. 54 (April 1976), 482–95.
3 For a discussion of indices of behavior, see Jervis, Robert, The Logic of Images in International Relations (Princeton: Princeton University Press 1970), 18 and 41–65. See also Goffman, Erving, Strategic Interaction (Philadelphia: University of Pennsylvania Press 1969).
4 Allison, Graham, Essence of Decision (Boston: Little, Brown 1971).
5 The morality of deception is worth at least a footnote. Most of us find deception distasteful. In everyday relations we regard deception as immoral. We don't deceive others around us, and we feel we have a right to expect that they won't deceive us. But not all relationships are of this basically cooperative, trusting type. Wars certainly are not. Whenever a relationship is based on violence, it seems hardly immoral to add some deception. Indeed, it would have been immoral for the British and Americans not to try to deceive Hitler about the location of the D-day attack, for example. Thus, whether deception is moral or not depends upon the nature of the relationship between the two sides. (I would like to thank Alan Donagan for this formulation.) When the relationship has less than total conflict, there is an additional consideration: the very practice of deception tends to make the relationship less trusting and more hostile.
6 Goffman (fn. 3), 69, states that when the stakes become high enough, “nothing (it can be thought) ought to be trusted at all.” I think that even when the stakes are high, something has to be trusted.
7 Dulles, The Craft of Intelligence (New York: Harper & Row 1963), 75.
8 Brown, Anthony Cave, Bodyguard of Lies (New York: Harper & Row 1969), 128. See also Beesly, Patrick, Very Special Intelligence (London: Hamish Hamilton 1977), 901.
9 Cave Brown (fn. 8), 128. Was the success of the German air raid on Coventry a result of the British reaching the opposite decision—namely, to protect Ultra by deliberately not taking advantage of the information it provided? Brown, p. 40, says yes. Howard, Michael, in “The Ultra Variations,” Times Literary Supplement (May 28, 1976), 641–42, and Kahn, David, in “The Significance of Codebreaking and Intelligence in Allied Strategy and Tactics,” summarized in Newsletter of the American Committee on the History of the Second World War, No. 17 (May 1977), 3–4, dispute this and argue that Coventry was not a deliberately martyred city.
10 Aldouby, Zwy and Ballinger, Jerrold, The Shattered Silence: The Eli Cohen Affair (New York: Coward 1971), 226, 325f.
11 Aller, James C., “Electronic Warfare Concept,” Naval War College Review, XXII (May 1970), 75–79.
12 Milovidov, A. S. and Kozlov, V. G., The Philosophical Heritage of V. I. Lenin and Problems of Contemporary War (Moscow: Military Publishing House 1972; trans. ed., Washington, D.C.: U.S. Government Printing Office), 198.
13 We are assuming that the determination of the stakes is independent of the choice whether to use a resource for surprise. Put another way, the stakes are exogenous to the model.
14 Savin, V. Ye., The Basic Principles of Operational Art and Tactics (Moscow: Military Publishing House 1972; trans. ed., Washington, D.C.: U.S. Government Printing Office), 235.
15 Ibid., 236.
16 This strategy assumes that what we are trying to maximize is expected value—i.e., average dollar winnings. It ignores risk aversion and the possibility that we will run out of money to maintain our position while waiting for a good roll.
17 The highest expected value is achieved by waiting for a three or more. This would give an expected value of $8.25 to the game. If we stopped at one or above, we would get on average $7; at two or above, $7.80; at three or above, $8.25; at four or above, $8; at five or above, $6; and at six, a loss of $3.
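The six payoffs quoted in this footnote are all consistent with a game in which each roll of a fair die costs $3.50 and a stopped roll pays $3 per pip; those two parameters are inferred from the figures, not stated in the text. A short sketch, under that assumption, reproducing the expected value for each stopping threshold:

```python
from fractions import Fraction

# Assumed parameters, reverse-engineered from the footnote's figures:
# each roll costs $3.50, and stopping on a face worth f pips pays $3*f.
ROLL_COST = Fraction(7, 2)
PIP_VALUE = 3

def expected_value(threshold):
    """Expected net winnings when we keep rolling a fair die until it
    shows `threshold` or more, then stop and collect the payoff."""
    accepted = range(threshold, 7)                        # faces we stop on
    p_stop = Fraction(len(accepted), 6)                   # chance a roll is accepted
    expected_rolls = 1 / p_stop                           # mean of a geometric distribution
    expected_face = Fraction(sum(accepted), len(accepted))
    return PIP_VALUE * expected_face - ROLL_COST * expected_rolls

for t in range(1, 7):
    print(f"stop at {t} or above: ${float(expected_value(t)):.2f}")
```

The loop prints $7.00, $7.80, $8.25, $8.00, $6.00, and −$3.00 for thresholds one through six, matching the footnote and confirming that waiting for a three or more maximizes expected value.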
18 It can be shown that a threshold decision rule is optimal for the type of model being described in the body of the paper. It can also be shown that for a broader class of model, embodying Markov processes, the optimal decision rule need not be a threshold rule. The proofs are available in the appendix of Axelrod, “The Rational Timing of Surprise,” IPPS Discussion Paper No. 113, available from the author or the Institute of Public Policy Studies, University of Michigan, Ann Arbor, MI 48109.
19 Richardson, Lewis F., Statistics of Deadly Quarrels (Pittsburgh: Boxwood Press; Chicago: Quadrangle Press 1960).
20 Masterman (fn. 1), 168.
21 Ibid., 157f.
22 This is true regardless of the distribution of stakes and the value of the other parameters.
23 See Axelrod (fn. 18) for the proof. If the risk to the resource is constant per year rather than per event, then the discount rate per event will be greater in peacetime, when events are few and far between compared to wartime. With a high discount rate per event in peacetime and the expectation of protracted peace, there would be an incentive in peacetime to exploit the resource to achieve such things as affecting the other side's defense investments and force postures. I am grateful to William R. Harris for this point.
24 The way in which the distribution of stakes affects the optimal threshold is shown in Axelrod (fn. 18).
25 Masterman (fn. 1), 127.
26 Once again it should be emphasized that stakes really need to be evaluated not only in terms of the overall importance of the current event, but also in terms of the relevance of the resource to the event. For example, the stakes for a resource involving tank operations will be low if the current event mainly involves aerial combat rather than armored combat.
27 An important but somewhat tangential point is that, in writing treaties, it is sometimes possible to shape the stages of the implementation so that the stakes are relatively uniform. There would then be no occasion in which one side would have an incentive to surprise the other by breaking the treaty. Treaties dealing with staged withdrawals are of this form, as are some types of staged arms-control agreements. The object in both cases is to design the sequence of events so that at no time do the stakes make it rational to exploit one's resources for surprise.
28 Nicholas Schweitzer, “Bayesian Analysis for Intelligence: Some Focus on the Middle East.” Paper presented at the International Studies Association Annual Convention, Toronto, February 1976.
29 Herzog, Chaim, The War of Atonement (London: Weidenfeld and Nicolson 1975), 52f.; Handel, Michael I., “Perception, Deception and Surprise: The Case of the Yom Kippur War,” Hebrew University of Jerusalem, Jerusalem Papers on Peace Problems, No. 19 (1976), 38.