Drawing on scholarship on uncertainty in economics, in the social sciences, and in philosophy, this article challenges the conventional wisdom that puts risk at the centre of the analysis of decision-making on issues of international security and examines how leaders make decisions under conditions of radical uncertainty. While decision-making under conditions of risk is an important part of understanding some of the big challenges in international security, exploring what leaders do under conditions of uncertainty should receive far greater attention than it has.Footnote 1
The first part of this article explores how people perceive and respond to threats when they find themselves in a world of uncertainty rather than the more familiar one of risk. It examines how leaders hide uncertainty or hide from uncertainty to make problems more tractable.
The second section draws on well-established traditions within pragmatist philosophy to suggest strategies that are better suited to a world of radical uncertainty. Examination of the Biden administration’s strategy in the early years after Russia’s invasion of Ukraine illustrates how strategies drawing on the tenets of pragmatism are well suited to manage uncertainty. In the tradition of John Dewey, I call these strategies ‘learning by doing’.Footnote 2
The final section explores the kinds of strategies leaders should use to respond to perceived threats in a context of radical uncertainty, a defining characteristic of contemporary world politics.Footnote 3 It explores the advantages of pragmatic strategies that look for ‘what works in the world’ as provisional responses to perceived threats through iterative, ongoing processes of experimentation. I argue that these kinds of pragmatist strategies, which have not been commonly used in international politics, are especially appropriate as uncertainty, always present, becomes more important in contemporary international security and political economy.
A pragmatist perspective
Perceiving and responding to threats are evolved practices that have long interested evolutionary biologists, neuroscientists, psychologists, economists, sociologists, historians, political scientists, and philosophers. Different disciplines bring different perspectives, and within disciplines there are often big differences in what scholars think is provisionally adaptive, how they explain maladaptive behaviour, and what they see as remediable. What different disciplines mean by adaptive in world politics is not obvious, not fixed, and almost always subject to contestation.
Pragmatist scholarship contributes to these debates and shifts the analysis of threat perception in several important ways. It joins perception (or knowledge) together with response (action). It does so because pragmatism tells us that the test of knowledge, whether that knowledge is ‘objective’ or ‘subjective’ or a mixture of both, is whether it is useful when it meets the world. It follows that perception and response, knowledge and action, are inextricably linked as people learn through the creation of knowledge what is useful in the world. In a dynamic world, we cannot understand one without understanding the other. It is in this sense that pragmatists – and evolutionary psychologists as well – understand the perception and response to threats as evolved practices that are conjoined.
Pragmatist perspectives are especially valuable under conditions of uncertainty. This article explores threat perception and response under the condition of radical uncertainty, the most intense kind of uncertainty. Pragmatists were and continue to be preoccupied with the challenges created by uncertainty, the opportunities and limits of probability, and the scope for subjective estimates and judgements.Footnote 4 They share a concern about inference and judgement with evolutionary biologists and psychologists as well as with social scientists who have studied threat perception, but they pay special attention to the condition of uncertainty and the responses that it requires.
The special case of radical uncertainty
Understanding the world and crafting strategic responses to the threats that leaders of states or non-governmental organisations perceive is always challenging. It is more difficult to do, however, under conditions of uncertainty than it is under risk. Uncertainty grows out of the incomplete knowledge that is inherent in a dynamic world and the need to make predictions about the future consequences of present actions. In world politics, which has been conceived both as anarchical and as deeply social, that prediction problem can be partly mitigated by well-established norms and deeply embedded practices. These norms and practices make it easier to separate what is unusual from what is routine. They help to create boundaries and structure problems and consequently to reduce some kinds of uncertainty that characterise strategic decision-making. When norms are contested, rules are thin, and practices range widely, however, uncertainty deepens. It becomes more difficult to connect present actions to future consequences and to identify unusual behaviour, much less its unintended consequences. Assessing whether behaviour is threatening becomes harder, and crafting strategic responses consequently becomes more challenging.
This article examines threat perception and strategic response in a world of radical uncertainty, a special case of uncertainty. Different disciplines conceive of radical uncertainty in different ways and deal differently with the challenges it creates. Here, I distinguish uncertainty from risk and then further separate radical uncertainty from other kinds of uncertainty and explore its impact on threat perception and response. My approach to the conceptualisation of radical uncertainty, as I have noted, falls broadly within the philosophical tradition of pragmatism.
As I have observed, uncertainty grows out of incomplete knowledge of a dynamic, ever-changing, interactive, complex world, and the need to make judgements about the connections between present action or inaction and their future consequences.Footnote 5 Uncertainty and risk are often conflated in everyday language, but philosophers and economists especially treat them as distinct. Decision-making under risk occurs when probability distributions are available to estimate the likelihoods of events or of known outcomes as consequences of known options. This is the easy world of ‘known knowns’, where the consequences of the principal options are known, and the occurrence of these consequences is known and predictable or can be estimated because the pattern of their occurrence more or less resembles some probability distribution. Even if the consequences are not known with a probability of 1, they are ‘knowable’ because their likelihood is knowable. These are the problems best suited to classical estimation of probability. Similarly, the likelihoods of threatening events that occur with some regularity are easier for analysts to estimate with a reasonable degree of confidence.
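To make the distinction concrete, the toy sketch below illustrates why decision-making under risk is tractable in a way that decision-making under uncertainty is not. All of the options, payoffs, and probabilities are invented for illustration; none comes from a real case. When the consequences of each option and their probabilities are specified, an expected consequence can be computed mechanically; when the probabilities themselves are unavailable, the same calculation cannot even begin.

```python
# Toy illustration (all numbers invented): decision under risk vs. under uncertainty.

def expected_value(outcomes):
    """Expected value of an option whose outcomes have known probabilities."""
    return sum(p * v for p, v in outcomes)

# 'Known knowns': consequences and their probabilities are both specified,
# so each option can be summarised by a single expected consequence.
blockade = [(0.7, -10), (0.3, -200)]    # (probability, payoff) pairs
negotiate = [(0.9, -30), (0.1, -60)]
print(expected_value(blockade))         # -67.0
print(expected_value(negotiate))        # -33.0

# Under uncertainty the consequences may be listed, but no defensible
# probabilities exist; any number inserted below would be a subjective guess,
# not an estimate drawn from a frequency distribution.
attack_next_decade = [(None, "war"), (None, "status quo")]
```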
In many cases in international politics, the requirements to construct probability distributions (reliable data over time and large numbers of repeated trials) are simply not present.Footnote 6 Here, decision-making does not occur under conditions of risk, but under conditions of uncertainty. The point is elegantly made by John Maynard Keynes, writing in 1937:
By uncertain knowledge I do not mean merely to distinguish what is known for certain from what is only probability. The sense in which I am using the term is that in which the prospect of a European war is uncertain, or the price of copper, or the rate of interest in twenty years hence, or the obsolescence of invention, or the position of private wealth owners in the social system in 1970. About these matters there is no scientific basis to form any calculable probability whatever. We simply do not know.Footnote 7
There is no more difficult sentence for advisers or analysts to say when they are asked by leaders about the likelihood of some future threatening event than ‘I simply do not know’. Even when they are asked in private, most are unwilling to say flat out that they do not know. That answer seems to convey either a lack of knowledge or an absence of confidence or both. Admitting that one does not know is psychologically deeply uncomfortable.
It is puzzling that saying so should be so difficult, because the ability to distinguish between a world of risk and a world of uncertainty itself requires considerable expertise. Analysts require deep knowledge about what is known or knowable and about what is unknowable, not only because what is unknowable stretches far into the future, but because it has occurred only rarely, and under very different conditions, in the past and does not fit within any probability distribution that would generate a meaningful estimate of risk. It also occurs in the context of a complex, interactive, dynamic world. Designing strategy in a world of uncertainty is consequently more difficult than crafting strategy in a world of risk. In all the examples cited by Keynes, the accurate answer is: ‘I don’t know’. In these kinds of situations, decision-makers have to estimate threat under uncertainty. How they do so when they acknowledge that they ‘do not know’ is the focus of this essay.
Like others, I distinguish here between uncertainty and radical uncertainty.Footnote 8 Decision-making under uncertainty occurs when the consequences of options are known, but there are no relevant probability distributions to estimate their likelihood – known unknowns. There is some structure to the problem – the options and their consequences are known or knowable – but the likelihood of these consequences is not.Footnote 9 Were leaders to ask their intelligence analysts what the probability is that Xi Jinping will attack Taiwan in the next decade, the honest answer would be: ‘I do not know’. Leaders face this kind of challenge with many events they would consider threatening.
Keynes was, of course, raising a far more difficult problem. In some worlds, we do not know what the consequences of options are, much less their probabilities. Decision-making under these conditions is even more challenging. Neither the consequences of options nor their likelihoods are known; we are in a world of unknown unknowns. In the large dynamic worlds of entanglement that Peter Katzenstein describes, worlds characterised by what quantum scientists call superpositionality, at the edges we may not even know the state of the world.Footnote 10 This is the condition of radical uncertainty, a condition where states of the world are changing and their attributes change in response to these dynamic movements. We cannot think about these states of the world in probabilistic terms because these states of the world are changing in dynamic and complex ways that we do not fully understand.Footnote 11
In a world of radical uncertainty, how can people estimate threats and the consequences of options? Almost 100 years ago, a vigorous debate about how to handle uncertainty developed between Keynes and his protégé, the young pragmatist philosopher, mathematician, and economist Frank Ramsey. They agreed on the importance of radical uncertainty. Keynes, as we have seen, insisted that the logic of probability could not be applied beyond the situations where its requirements were met. Ramsey, however, made a big and consequential leap. He proposed that ‘subjective probability’, or the subjective judgements that people make even in the face of radical uncertainty, could be calculated in the same way as probabilities. He cautioned, however, that this method, like frictionless planes, was not suited for real-world decision-makers. His followers ignored the caution.Footnote 12 In so doing, they transformed intractable uncertainty into tractable measures of risk and probabilistic reasoning. In a sense, they made uncertainty disappear by quantifying their subjective estimates – or beliefs or intuitions – and then treating the problem as one of risk. Talking about risk and risk management felt much more comfortable than talking about uncertainty.
This move, a very large one taken by those who followed Ramsey and one that he had warned against, enabled the contemporary ‘decision sciences’. As the century progressed, they largely banished uncertainty from the discussion of threat perception and strategy. When people made obvious errors in estimating threats or in choosing ways to respond, errors that were systematically different from the way probabilities ‘should be’ calculated, cognitive psychologists explained these errors as the result of evolved ‘biases’ or ‘heuristics’ that compromised human judgement.Footnote 13 Human beings, they found, were not intuitive estimators of probability. The explanation of failure inhered in the perceiver rather than in the environment. In the language of psychologists, the errors were dispositional rather than situational, and context did not matter much. This kind of argument was developed in controlled laboratory environments but was applied frequently in a context that was radically uncertain. Doing so can be considered a category error. I return to this issue when I examine the many different ways leaders respond to uncertainty.
More recently, radical uncertainty has returned as a focus of concern, perhaps as a result of the series of ‘shocks’, or unexpected events, that in the last two decades destabilised expectations about how the world works. After the attacks on the World Trade Center in 2001 and Russia’s attacks on Ukraine in 2014 and 2022, it became increasingly difficult to argue that any of these threats, even retrospectively, could have been estimated with probabilistic reasoning.Footnote 14 They could not be, because there was no reliable and large set of frequency data for that class of event, nor were the processes that created them regular and stable. Converting uncertainty to risk came at a very high cost. Adding ‘subjective’ as an adjective before probability only served to hide the uncertainty from those analysts and decision-makers responsible for estimating the threat and choosing how to respond.
John Kay and Mervyn King, in their analysis of radical uncertainty, tell the story of President Obama struggling with whether or not to authorise the dispatch of Navy SEALs to Abbottabad to capture Osama bin Laden. He and his advisers had at best very limited information to work with. There were some ‘known’ consequences of the two options of raid or do not raid – the probability, for example, of mechanical failure in the helicopters that would be deployed, a failure that could doom the raid – but there were many more ‘unknowns’. And there was no basis for attaching probabilities to the estimation of an event – whether bin Laden was in that compound. Their description of the conversation around that decision is telling, not only because officials did not acknowledge uncertainty, but because the discussion conflated officials’ estimates of probability with the confidence they had in these estimates:
John, the CIA team leader, was 95% certain that bin Laden was in the compound. But others were less sure. Most placed their probability estimate at about 80%. Some were as low as 40% or even 30%.
The President summed up the discussion. ‘This is 50–50. Look guys, this is a flip of the coin. I can’t base this decision on the notion that we have any greater certainty than that.’ … His summary recognized that he had to make his decision without knowing whether the terrorist leader was in the compound or not. Obama would reflect on that discussion in a subsequent interview: ‘In this situation, what you started getting was probabilities that disguised uncertainty as opposed to actually providing you with more useful information.’Footnote 15
Obama identified a very important part of the problem. Leaders in the room were hiding their uncertainty. The subjective probabilities created false precision about whether a proposition was true or false and masked the radical uncertainty that he faced about the choices and their consequences. What he seems to have missed was that advisers did not distinguish their estimates of the probability that it was indeed bin Laden in the compound from their degree of confidence in those estimates.Footnote 16 Only by forcing an acknowledgement of uncertainty, something leaders do all too infrequently, could the president think through the consequences of authorising the raid that could then fail or succeed or of refusing to authorise the raid at all.
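A small numerical sketch can make visible the distinction that the advisers elided. The numbers and the Beta-distribution representation below are illustrative assumptions, not a reconstruction of the actual intelligence assessments: two advisers can report the same point probability while holding it with very different degrees of confidence, and averaging point estimates erases that difference entirely.

```python
# Illustrative sketch only: the point estimates and 'weights of evidence' below
# are invented, not a reconstruction of the actual intelligence assessments.

def beta_std(mean, weight):
    """Spread of a belief represented as Beta(mean*weight, (1-mean)*weight).
    'weight' is a pseudo-count: the larger it is, the more evidence the
    adviser's estimate rests on, and the tighter the distribution."""
    variance = mean * (1.0 - mean) / (weight + 1.0)
    return variance ** 0.5

# Two advisers both report '80 per cent', but one holds the estimate on far
# more evidence. The point estimate alone cannot tell them apart.
print(round(beta_std(0.8, weight=4), 3))    # 0.179  (thin evidence, wide spread)
print(round(beta_std(0.8, weight=100), 3))  # 0.04   (rich evidence, tight spread)

# Averaging point estimates like those reported in the room (95%, 80%, 80%,
# 40%, 30%) yields a single number, but it carries no information about confidence.
estimates = [0.95, 0.80, 0.80, 0.40, 0.30]
print(round(sum(estimates) / len(estimates), 2))  # 0.65
```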
What do leaders do in a world of uncertainty?
Ramsey identified a century ago the difficulties of precisely measuring degrees of belief and rightly located the problem in human psychology. The problem is not only one of quantification, but also the far deeper challenge that people tend to be deeply uncomfortable with uncertainty because it directly challenges their need for control and the illusion that they can control and predict the consequences of their actions. I identify three broad patterns in what leaders do. The first is the effort they make to mask or hide uncertainty, either by converting the problem to risk and preserving the illusion of control or by denying uncertainty through a repertoire of psychological strategies. In the second pattern, leaders who are faced with uncertainty avoid or delay decisions for as long as possible. In neither of these patterns do leaders engage with uncertainty and the challenges it creates. The third pattern, one that is consistent with pragmatism, is acknowledgement of uncertainty and a process of trial-and-error experimentation that proceeds over time to manage the uncertainty. Closely related is storytelling about possible futures.
Hiding uncertainty
The quantification of beliefs about the probability of the consequences of options created the field of decision sciences and opened the door to hiding uncertainty. When leaders are asked – ‘what is the likelihood of escalation if you block these shipping lanes?’ – they have beliefs about whether escalation will happen. These beliefs are not based on Gaussian probability distributions. They are a product of experience, salient historical memories, deeply embedded stories that they have heard and told over and over again, and intuition. Bayesian models of updating were developed to translate these beliefs into subjective estimates of likelihood that were quantified and then folded into decision sequences that updated ‘prior’ beliefs in response to new information in a continuous process over time.
There are important debates about how valid this elicitation of subjective probabilities is when answers like ‘very likely’ or ‘not at all’ are quantified using language scales.Footnote 17 What is not debatable is that these processes of quantification make uncertainty disappear. Leaders believe that they are now in a world of risk that can be managed and controlled. The decision can be modelled, the results quantified, and the optimising response can be identified. Under conditions of uncertainty, Bayesian models, like other models that rely on subjective expectations of probability, convert uncertainty into risk that can be managed and mitigated. They help decision-makers both to hide the uncertainty from themselves and, in so doing, to become overconfident.
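A minimal sketch of the kind of Bayesian updating described above, with invented numbers, shows both how the machinery works and why it is seductive: the arithmetic is exact and yields a precise-looking posterior, even though every input is a subjective guess rather than a frequency drawn from data.

```python
# Minimal sketch of Bayesian updating, with invented numbers. The verbal belief
# 'escalation is unlikely' is quantified as a prior and then updated when new
# information arrives. The arithmetic is exact, but every input is a subjective
# guess, which is precisely how the underlying uncertainty gets hidden.

def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    """Posterior probability that the hypothesis is true after one observation."""
    numerator = prior * likelihood_if_true
    return numerator / (numerator + (1.0 - prior) * likelihood_if_false)

prior = 0.2                   # 'escalation is unlikely', quantified (assumed)
p_obs_if_escalating = 0.7     # chance of seeing troop movements if escalation is coming
p_obs_if_not = 0.2            # chance of seeing them anyway

posterior = bayes_update(prior, p_obs_if_escalating, p_obs_if_not)
print(round(posterior, 2))    # 0.47: the 'risk' now looks precise and manageable
```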
Overconfidence amplifies the impact of forecasting errors when people make subjective estimates of probability.Footnote 18 People who are overconfident tend, by definition, to be more confident than they are accurate; they exaggerate the true likelihood of an outcome if, of course, that ‘true’ likelihood can be known.Footnote 19 Overconfidence is a tendency towards optimism that neuroscientists locate in specific areas of the brain that block disconfirming evidence.Footnote 20 Accuracy is a function of the ability of people to make predictions and then get feedback so that they can adjust their future estimates.Footnote 21 In a world of dynamic uncertainty, feedback is generally slower and more ambiguous, leaving greater scope for people to reinforce rather than adjust their prior beliefs and avoid revising their estimates. Consequently, decision-makers are likely to remain overconfident far longer in an uncertain world than in worlds where feedback comes quickly and is indisputable.Footnote 22
Putting aside for the moment the challenge of quantifying beliefs, the challenge that Ramsey raised almost a century ago, Bayesian models can be helpful to decision-makers who face a choice under conditions of risk where the problem is well defined, the options and the consequences are known, and there are relevant probability distributions to estimate the likelihood of the consequences. Making intuitive estimates more explicit can be helpful as decision-makers consider how ‘diagnostic’ a new piece of information is and think through in a more reflective way how their estimates should change. That is not the case, however, in a world of uncertainty where none of these conditions is met. This kind of analytic tool then obscures rather than reveals. The usefulness of any tool in the toolkit, whether it works in the world, depends in large part on the world in which it is deployed.
Leaders also use a closely related set of cognitive and emotional strategies to discount and deny uncertainty. The most common is to simplify complex environments by building simple problem frames. President George H. W. Bush famously said when Iraq invaded Kuwait in 1990 that Saddam Hussein was ‘another Hitler’. That kind of simplified reasoning by analogy to frame a complex problem hides the uncertainty and, for the decision-maker, makes it simpler to identify options and their consequences. Leaders select analogies by their emotional valence.Footnote 23 Psychologists have identified what they call the ‘negativity bias’, the principle that ‘negative events are more salient, potent, dominant in combinations and generally more efficacious than positive events’.Footnote 24 People pay more attention to and give more weight to negative elements of their environment. There is evidence of variation by ideological predisposition. Compared to liberals, conservatives tend to register greater physiological responses when processing features of the environment that are negative. They are psychologically, physiologically, and neurologically more sensitive to threats in the environment.Footnote 25 To hide their uncertainty, leaders also choose frames through preferred policy paradigms or draw on their underlying assumptions about the state of the world.Footnote 26 Temporal sensitivity also affects the salience of frames; negative events become disproportionately salient the closer they are in time or distance.Footnote 27
Closely related, evidence suggests that people become uncomfortable when they come face to face with information that suggests a problem is more complex than they had thought. They consequently deny or discount inconsistent information to preserve their pre-existing frames. People have a strong tendency to see what they expect to see based on their existing beliefs and to avoid the discomfort that complexity and uncertainty can create.Footnote 28
Hiding from uncertainty
At times, in the face of uncertainty, leaders choose to do nothing. They can do so through two quite different processes. Doing nothing can be a conscious and reflective process. Aware that they do not know enough, and hoping that, with time, the problem will become clearer and the options better defined, decision-makers actively choose to postpone. The aphorism ‘don’t decide until you have to’ captures the common-sense understanding of waiting in the face of uncertainty. When leaders acknowledge that there is uncertainty about the state of the world, this kind of process can avoid a rush to judgement and serve leaders well, at least for a time.
At other times, in an evolved response driven unconsciously by emotions, leaders simply freeze. Overwhelmed by the fear or anxiety induced by the sense that they have lost control, they are unable to make a decision. They tend to use ambiguous language or veto action so that they do not have to make a choice.Footnote 29 They drift until catastrophe looms and forces a decision or, through sheer good luck, the problem goes away. This kind of response, largely involuntary and unreflective, generally does not serve leaders well.
What should leaders do? What might work better in a world of uncertainty?
How should leaders estimate threats and make choices in a world of radical uncertainty?
To answer these questions, I draw heavily on the rich traditions of pragmatism. There are important differences among pragmatist thinkers, but pragmatism can be defined broadly as a set of dispositions for making sense of the world and for discovering what works in the world.Footnote 30 The two are intimately connected. Pragmatists understand belief as a predisposition to act. The importance of an argument or proposition lies in its practical consequences in the world. Pragmatism treats the world as relational and contingent, a world in which people adjust their habits to contingent circumstances, both to make sense of the world and to understand what works in the world.Footnote 31 Making sense of the world can be understood as an answer to the question, what is going on here? What is the story? How can we arrive at a successful answer to the problem that confronts us?Footnote 32 That successful answer, what works in the world, can be understood as an incremental, iterative, and sequential process of experimentation with options that will go some way to addressing the problem, options that are ‘useful’ rather than ‘optimal’.
Making sense of the world and understanding what works in the world are closely connected. In pragmatist thinking, knowing cannot be separated from doing; pragmatism recognises the ways in which knowledge is embedded in practice. Knowing matters when it has consequences in the world. When we think about the problem of threat perception in global politics through this lens, we see that threat perception matters when the way the threat is perceived and the way leaders respond to it have practical consequences in the world. Threat perception and strategy are intimately linked, inseparable from one another. In much of the literature in international politics, the two are analytically separated. In this pragmatist analysis, they are conjoined.
Other disciplines provide context to the processes of threat perception. Evolutionary psychology is particularly helpful.Footnote 33 Perception is one of the oldest practices of human beings, one that evolved over time in adaptive ways to meet both individual and collective needs. Evolutionary psychologists situate the cognitive patterns and emotions that play such an important role in threat perception within a long time frame. They consider these patterns as evolved adaptations that respond to cues with minimal cognitive effort to solve recurrent problems. These patterns provided the quick adaptive solutions that were necessary to survive and reproduce. Natural selection enabled over time the ‘representational systems able to make predictions about the situation under informational uncertainty from indirect cues’.Footnote 34 In this sense, evolutionary processes are consistent with adaptive, trial-and-error processes of learning-by-doing that in iterative sequences look for ‘useful’ solutions, generate feedback, and start the process again.
Evolutionary arguments are intuitively appealing because they make sense of prevalent cognitive patterns and emotional triggers that are widely documented and considered as ‘errors’ or ‘biases’.Footnote 35 They explain why we are ‘wired’ the way we are, not as a default from a normative model of rationality, but as the consequence of natural selection to survive and reproduce over time. At the individual level, the process is fast, physiological, emotional, and only then conscious and cognitive. Evolutionary psychologists treat cognitive and emotional patterns not as bugs but as adaptive design features that may bring practical benefits even in our complex modern environment.Footnote 36
Pragmatism walks hand in hand with our understanding of these evolved processes of threat perception. To take only one example, some political psychologists suggest that conservative processes of threat perception that lead to ‘overestimating’ threat may be a design feature rather than a bug because, over time, they helped people to survive by avoiding the worst.Footnote 37 In the unforgiving world of some analysts of international politics, where the consequences of anarchy, rather than society, anchor many analyses, these evolutionary arguments resonate. Drawing on error management theory, Johnson argues that, in particular contexts, these evolutionary processes privilege certain kinds of problem frames and judgements to avoid the worst mistakes and are consequently adaptive.Footnote 38 Neville Chamberlain, the prime minister of the United Kingdom before World War II, made an almost-fatal mistake because he and many of his colleagues did not commit the fundamental attribution error. Rather, he underestimated the threat that Hitler’s Germany posed to the United Kingdom. There are better odds of survival, Johnson concludes, if leaders overestimate threats many times but avoid underestimating the one serious threat to their survival. Ours is an unforgiving world where the costs of one serious false negative, he argues, far exceed the costs of numerous false positives.Footnote 39
Pragmatists would reframe the question by asking whether that process of overestimation ‘worked in the world’ under that specific contingency. They would eschew the generalisation that overestimating threat is adaptive, pay a great deal of attention to the specific relationship, the contingencies, and the context, and be more comfortable with the radical uncertainty that precludes many generalisations from holding universally over time.Footnote 40 Much depended on the story Churchill as well as Chamberlain told and the narrative each constructed about Hitler and his intentions. And as history has shown, although Churchill was wrong so many times earlier in his career, he was right this one time that proved to be so overwhelmingly important. The individual narratives of the two leaders did not change much, but collective judgement evaluated the consistency and coherence of the competing narratives as the conditions and the contingencies shifted. Those judgements by leaders of the government in the United Kingdom moved as the context changed.
The shift in the threat perception in the United Kingdom from 1938 to 1940 illustrates the argument that threat perception in international politics is not only an individual, embodied process. It is also a deeply social process where stories are told and shared within communities. Threat perception at the individual level evolved as an adaptive process to enable the mobilisation of the physiological responses to react rapidly in the face of danger. As a community process, threat perceptions are shared through storytelling and narrative to allow communities to mobilise the collective resources they need to respond to what they think constitutes danger.Footnote 41
Storytelling becomes an important part of pragmatist thinking when the stories are about what works – or does not work – in the world.Footnote 42 Stories have a narrative structure that often builds in context, contingency, and relational processes that stretch over time in a narrative arc. Often, stories are the result of shared understandings of how the world works under these conditions in this context. Shared narratives that become deeply embedded over time can reinforce norms and social conventions, provide policy frames, highlight processes of experimentation over time, and generate practical knowledge. It is in this sense that storytelling helps individuals and communities to overcome their deeply rooted fear of loss of control and acknowledge that they are indeed in a world of uncertainty.
Pragmatists do not consider all stories as equivalent. Some stories are more responsive to experience and more open to challenge than others. These stories help make sense of a world that seems threatening, as opposed to simply offering a perspective (this is my story, this is my truth). In this respect, pragmatists differ from political psychologists who identify misperceptions against what is, inevitably, an unclear standard of objective or accurate perception. That standard becomes obvious only ex post and, given the multiple interpretations of threat that are often plausible, is very difficult to articulate ex ante. Stories can be one kind of evidence – narrative evidence, operating in tandem with qualitative and quantitative data. One can see our theories and models as overarching stories, sometimes put in axiomatic language, sometimes not. Theories, that is, are stories about the way the world works. This was Ramsey’s point in his famous paper ‘Theories’. Just as we might start a bedtime story with ‘There was a girl, who …’, our scientific theories start with an existential quantifier ‘There are electrons, which …’. We revise the theory as new evidence comes in.Footnote 43
Logical inference, consistency, and interrogation by peers in the community help to strengthen the internal coherence of stories and their believability. Ultimately their value lies in how useful and helpful they are, in the practical knowledge that they generate.Footnote 44 It is in this sense that widespread understanding of what is threatening is a deeply social and political process that mobilises resources for collective action that works in the world. And the various answers can be evaluated, after the fact, in terms of how well they met the world.
Stories about threats shape the strategies that leaders choose in response. Effective strategy under conditions of radical uncertainty requires experimentation and iterative processes of learning through trial and error and feedback over time and, at different times, the imagining of possible futures. That kind of strategy is uniquely suited to discovery through learning that incrementally reduces uncertainty about what will work, at least for this problem under these conditions in this context. The next section looks at the stories told about Russia’s invasion of Ukraine and the strategy that the United States led in response. Although that kind of strategic process is not without significant risks, it is especially well suited to a world of radical uncertainty. I return to the imagining of possible futures in the conclusion.
Stories of World War III
Radical uncertainty as a strategic asset
Russia set the scene in February 2022 by using radical uncertainty as a strategic asset as it prepared to launch a large-scale invasion of Ukraine. It did so deliberately by raising the possibility that Moscow might use a tactical nuclear weapon. The use of a nuclear weapon could lead to escalation in unpredictable ways with unforeseeable consequences. I use the word ‘possibility’ rather than ‘probability’ deliberately, since that strategy is built on radical uncertainty that leaders then deliberately manipulate to shape the behaviour of their adversaries. It is precisely in its unpredictability that the advantage of the strategy lies.Footnote 45
Thomas Schelling first developed strategies to manipulate what he called risk at a time when both the Soviet Union and the United States had nuclear weapons. Each side worried that any use of a nuclear weapon could provoke a strike by the other that could escalate to all-out nuclear war. If nuclear weapons were not useful for warfighting, they nevertheless could be useful as instruments of coercion that might force the other side to back down. That strategy of coercion worked through the radical uncertainty it created for an adversary.
Schelling’s strategy is more accurately described as one that manipulates not risk but uncertainty. It does so through what he called ‘the threat that leaves something to chance’.Footnote 46 Schelling wrote a confidential memo in 1959 for the RAND Corporation that was only released in 2021.Footnote 47 He made clear that the key to ‘these threats is that, although one may or may not carry them out if the threatened party fails to comply, the final decision is not altogether under the threatener’s control. The threat … has an element of, “I may or I may not, and even I can’t be altogether sure.”’Footnote 48 The final decision is left to ‘chance’. Chance is random and unpredictable. Threats that leave something to chance therefore build in some danger by deliberately designing in uncertainty. Schelling understands the power of the illusion of control and recognises that it is that danger inherent in giving up some control that conveys commitment to an adversary. As we shall see, it is as though, in a changed strategic context, some elements of Schelling’s strategic thinking have found new life in Moscow.
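The logic of the threat that leaves something to chance can be put in toy arithmetic. The numbers below are invented, and this is not Schelling’s own formalisation, only a sketch of the intuition: by surrendering some control, the threatener creates a residual probability of catastrophe that can make defiance unattractive even though retaliation is never certain.

```python
# Toy arithmetic (invented numbers) for a 'threat that leaves something to
# chance': the threatener deliberately gives up some control, so that with
# probability p events escalate to a catastrophic outcome neither side wants.

p_escalation = 0.1        # chance the threatener no longer controls (assumed)
cost_of_war = 1000        # catastrophic cost to the adversary if escalation occurs
value_of_stake = 60       # what the adversary gains by defying the threat

expected_cost_of_defiance = p_escalation * cost_of_war   # 100.0
# Defiance is unattractive once the expected cost exceeds the stake, even though
# the threatener never commits to certain retaliation: 100.0 > 60.
print(expected_cost_of_defiance > value_of_stake)        # True
```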
A year and a half before he attacked Ukraine, Putin told an elaborate story of Russia and Ukraine as one nation, a story steeped in historical grievance and a sense of betrayal by Western powers.Footnote 49 Then, just before the invasion on 24 February 2022, Putin ordered Russia’s strategic forces to an unspecified level of alert and issued a veiled nuclear threat should NATO intervene.Footnote 50 A threat that leaves something to chance, as Schelling developed the concept, derives its leverage principally from what leaders do and how much control they give up, not from what they say. Putin did not do much. He did not order the removal of any nuclear weapons from storage. He did not give up any control. Instead, he used ambiguous language about Russia’s nuclear infrastructure to weakly approximate, through fuzziness, a threat that left something to chance.
Reducing uncertainty: Learning by doing
President Biden and his advisers found themselves in a world of radical uncertainty as the evidence began to mount that Russia was about to invade Ukraine. An invasion would break all the norms and rules of the post–Cold War period. As it became clear that the invasion was going ahead, the uncertainty deepened. Putin’s behaviour was unpredictable, not only because no probability estimates were relevant, but because he was demonstrating his readiness to break all the constraining norms. Economists have long recognised that the usual sequence of preferences preceding and shaping choices may not always hold. At times, choice reveals preferences.Footnote 51 People do not always know their preferences. It is only after they make a choice that they discover them by reasoning backwards from their behaviour. Wartime presidents are no exception. As one astute observer remarked, ‘Putin himself likely does not know what he would do then [if he were losing the war]’.Footnote 52 It may well be that Putin is not a reliable predictor of what he will do because even he does not know how he will feel, should he face any of these contingencies. The United States had no alternative but to develop a strategy to try to manage escalation in the context of these deep uncertainties.
The president began by putting some structure into what was an undefined problem with almost no constraints. Biden first told a story of World War III, a story of a war that could erupt if US forces engaged in combat with Russian forces, combat that could then escalate to nuclear war. Then, in an unprecedented step, he laid out in a published op-ed five boundary conditions to reduce some of the most important uncertainties. The president made public what the United States would do and what it would not do.Footnote 53 First, Biden made clear that ‘we do not seek a war between NATO and Russia’. Second, he made explicit that ‘so long as the United States or our allies are not attacked, we will not be directly engaged in this conflict, either by sending American troops to fight in Ukraine or by attacking Russian forces’. Third, Biden clarified that, despite his revulsion at Russia’s unjust attack, ‘the United States will not try to bring about his [Putin’s] ouster in Moscow’. While the United States would support Ukraine to the fullest extent possible, Biden continued, ‘We are not encouraging or enabling Ukraine to strike beyond its borders’. Biden established a final parameter when he warned Russia explicitly against the use of nuclear weapons: ‘Any use of nuclear weapons on any scale would be completely unacceptable to the United States as well as to the rest of the world and would entail severe consequences.’
Within the framing narrative of escalation that could lead to World War III and the structural parameters the president put in place to define the problem, the Biden administration then put in place a pragmatic strategy of ‘learning by doing’ that tried to further reduce uncertainty in response to Russia’s manipulation of uncertainty. The process of experimentation was incremental and inductive as officials struggled to figure out what would work. Biden early on ruled out NATO enforcement of a no-fly zone over Ukraine – despite brutal Russian attacks on Ukraine’s civilian infrastructure and desperate pleas from Volodymyr Zelensky – because he judged the risk of direct engagement between NATO and Russian pilots to be too high. After that, the administration began incrementally, and at times with considerable delays, to provide Ukraine with increasingly more advanced military equipment. Officials then waited to assess Russia’s reaction. The Russian response was limited to verbal warnings and veiled nuclear threats but no military escalation outside the battlefield in Ukraine.
Early decisions to supply defensive Javelins and Stingers were followed, after Ukrainian civilian infrastructure came under relentless attack, by the decision to provide Kyiv with High Mobility Artillery Rocket Systems. Even though their range was short enough that they could not reach Russian territory, a pattern of probe and wait preceded the decision. Russia railed against the provision of these longer-range rocket systems but took no action against NATO members. Over time, the United States moved by increments, with some of its allies, to supply Bradley and Marder armoured vehicles, then advanced surface-to-air missiles in November and December of 2022. They then pledged to send batteries of the Patriot air defence system that arrived in Ukraine the following April. In January 2023, the United Kingdom, the United States, and Germany decided to supply Challenger, Abrams, and Leopard-2 tanks and then Ground Launched Small Diameter Bombs, rocket-propelled guided bombs with a longer range. Even though Russia had threatened in the past that any attack on Crimea would ‘ignite judgment day’, in May 2023 the United Kingdom, followed by France, sent missiles with a range long enough to reach Crimea. That same month, the Biden administration, acceding to a long-standing Ukrainian request, also authorised the training of Ukrainian pilots on F-16 fighter jets and promised that its allies would deliver these aircraft to Ukraine in the next several months. Consistent with the parameter that NATO not enable Ukraine to attack beyond its borders, Ukrainian Defense Minister Oleksiy Reznikov again confirmed Ukraine’s agreement not to use Western long-range weapons to strike Russian territory.Footnote 54
What had been off limits in February 2022 was on its way or promised to Ukraine 18 months later. So far, Russia has not put any strategic weapons on high alert, although it has moved tactical nuclear weapons to Belarus, which nevertheless remain under Russian control. The pragmatic strategy of incremental learning-by-doing seems to have worked, up until now, in controlling escalation and supplying Ukraine with most of what it needed, although far more slowly than Kyiv would have liked.
Pragmatism in the world
How do we know that what worked in the past is still working? The limits of trial-and-error learning in a world of uncertainty
The pragmatic strategy that the Biden administration used worked, at least in the first two years of the war. The United States and its NATO allies were able to shore up Ukraine’s capacity to defend itself without provoking escalation by Russia. That strategy has by now almost run its course, largely because almost all of Ukraine’s requests for more advanced weapons have been met. Little remains except for permission to use US longer-range missiles and artillery to strike at Russia’s forces that were attacking or preparing to attack Ukraine from inside Russian territory. The United States gave that permission, albeit in a limited and circumscribed way, on 30 May 2024 as Russian forces increased their bombardment of the area near Kharkiv.Footnote 55 And again, as they have in the past, Russian officials issued veiled nuclear threats and warned the United States against ‘miscalculations that could have fatal consequences’. Russia continued to threaten, at times obliquely and at times more explicitly, to retaliate against NATO forces and the United States should they intervene and allow Ukraine to use more advanced weapons on Russian soil. At the end of May, for example, Vladimir Putin issued an oblique warning to small European countries that they are ‘very densely populated’.Footnote 56
This description of how a pragmatic strategy worked, at least in the early phases, in the context of an attack by one nuclear power against a smaller neighbour that is supported by another nuclear power, raises important questions. That the strategy ‘worked’ matters. It matters because it avoided the first trap of converting the choice among options under conditions of radical uncertainty to one of risk and hiding the uncertainty. Nor did the Biden administration fall into the second trap of hiding from the uncertainty. But leaders could know that the strategy worked only after the fact, as time passed. Can they ‘know’ that the strategy is still working? And that it will continue to work?
Ukraine faces enormous challenges as the war goes on, and the United States and its allies will have to continue to craft new strategies to cope with a deepening crisis. The obvious danger is that leaders who use a pragmatic, incremental strategy of experimentation to explore what works, as the Biden administration has, may cross a threshold of escalation without knowing that they are doing so. Several problems stand out.
Although some leaders make their red lines clear, others, especially those who are manipulating uncertainty, deliberately will not do so. Leaders often have incentives to keep that kind of information private or to dissemble. The red lines of others, like their intentions, tend to be opaque. The opacity of an adversary’s intentions is compounded when that adversary has broken established norms and disrupted its past practices. Norm-shattering behaviour that is discontinuous with the past deepens radical uncertainty. In a radically uncertain world, one that is at the edge of danger, there is no reliable way of anticipating an adversary’s future behaviour or estimating its red lines. We simply cannot know.
In a complex world of contingencies that are dynamic, an adversary’s leaders themselves may not know their own red lines until they face a situation and then learn from their response to the situation what their preferences are. Their preferences are ‘revealed’ to them through the choices they make in the moment. And if they themselves do not know their preferences, how can others know them with any confidence?
For all these reasons, knowing whether a strategy that was working in the world yesterday will continue to work today, much less tomorrow, is a wicked problem. Past success in pushing forward, pausing, and then pushing forward again when meeting no resistance, the kind of success the Biden administration has had, tells leaders little about a threshold that they might encounter if conditions change. Consequently, leaders are at risk of over-learning from the past. Nikolai Sokov, a former Russian diplomat, made clear how difficult it is to know in advance what a tipping point will be:
in the absence of very definitively drawn red lines. The trouble is that when you advance in these small kinds of steps, most likely you will not know that you have crossed the red line. So that’s the danger.Footnote 57
The United States devoted considerable effort in the first two years of the war in Ukraine to understanding Russia’s thresholds. But thresholds generally do not remain fixed. They are dynamic and develop as conditions change. Past success, unfortunately, is not a reliable predictor of future success.
How can leaders then manage the radical uncertainty they continue to face even when they use pragmatic, incremental strategies of experimentation? Antony Blinken, the US Secretary of State, when asked about the decision to allow Ukraine to use US missiles to strike Russian troops that were massing just across the border in preparation for attack, answered in language that was deeply pragmatist: ‘So with regard to the use of U.S. arms by Ukraine … the hallmark of our engagement, our support for Ukraine over these more than two years, has been to adapt and adjust as necessary to meet what’s actually going on.’Footnote 58
Reducing uncertainty by storytelling: Imagining possible futures
Storytelling about what works in the world, as I noted earlier, is an important part of pragmatist thinking and, more importantly, how we in fact think about a world only partly of our making. Stories do more than build in context, contingency, and social processes and conventions. They help us to think about the future by imagining different worlds. Whether they are called stories, narratives, or scenarios matters less than the individual and collective processes of imagining possible futures, the contexts and the contingencies that could combine to create these futures, and the pathways that would take us to them.Footnote 59 Imagining the pathways to these possibilities helps to shape understanding of what practical knowledge we need to build, how that knowledge might meet each of these worlds, the signposts to watch for along the pathways, and where and when to begin a process of experimentation, probing, and learning.
Living in a world of uncertainty
A central theme of this article is the importance of acknowledging that, at times, we live in a world of uncertainty. Acknowledging uncertainty is hard to do. It pushes against powerful psychological impulses. Yet only by acknowledging uncertainty, rather than hiding it by denying it or treating it as risk, or by hiding from it through avoidance, can leaders open the door to experimentation and learning and adaptation.
In a world of radical uncertainty, pragmatists as well as evolutionary biologists and psychologists can only know, after the fact, what has worked in the world. There are no easy or obvious solutions to that wicked problem. Leaders can only guard against false certainties from their advisers and overconfidence in themselves. They can do their best to remain open to new information, challenge assumptions and advice, ask what they have not heard, actively create an environment that supports dissent, consult people who are not usually in the room who might think about the problem differently, and engage in continuous thought experiments and storytelling to imagine possible futures and the proposed responses that might work in each of these worlds. All of these are easy to prescribe and hard to do.Footnote 60 And none of them may be enough. Leaders may still fail even with iterative processes of experimentation. In a world of radical uncertainty, however, without processes that are grounded in the search for practical knowledge, failure looms large.
Acknowledgements
I wish to thank Peter Katzenstein, Cheryl Misak, Caleb Pomeroy, and Brian Rathbun for their extraordinarily helpful comments.
Janice Gross Stein is Belzberg Professor of Conflict Management and Founding Director of the Munk School of Global Affairs and Public Policy at the University of Toronto. She is an honorary member of the American Academy of Arts and Sciences and a Senior Scholar at the Kissinger Center at the School of Advanced International Studies at Johns Hopkins University.