The small world's problem is everyone's problem, not a reason to favor CNT over probabilistic decision theory

Published online by Cambridge University Press:  08 May 2023

Daniel Greco*
Affiliation:
Department of Philosophy, Yale University, New Haven, CT 06511-6629, USA. [email protected] https://sites.google.com/site/dlgreco/

Abstract

The case for the superiority of Conviction Narrative Theory (CNT) over probabilistic approaches rests on selective employment of a double standard. The authors judge probabilistic approaches inadequate for failing to apply to “grand-world” decision problems, while they praise CNT for its treatment of “small-world” decision problems. When both approaches are held to the same standard, the comparative question is murkier.

Type
Open Peer Commentary
Copyright
Copyright © The Author(s), 2023. Published by Cambridge University Press

Johnson, Bilovich, and Tuckett marshal evidence from a wide range of sources to make the case that decision-making under radical uncertainty involves essential use of narratives: structured, higher-order representations that include information about the causal, temporal, valence, and analogical structure of the decision problems they represent. They make a compelling case that Conviction Narrative Theory (CNT) deserves a seat at the table in theorizing about decision-making under radical uncertainty. But they go further and claim that CNT is superior to theoretical frameworks in which probabilistic inference and expected utility maximization take center stage. I'll argue that their case rests on selective employment of a double standard. They argue that probabilistic inference is impossible under conditions of radical uncertainty, while the task of selecting the most plausible narrative is possible. In order for the latter task to be possible, however, various heuristics must be used to first winnow down the space of possible narratives to a small set of compact narratives from which the best can tractably be selected. But if such heuristics are allowed to play a role in CNT, they should also be allowed to play a role in probabilistic approaches. Once they are, the asymmetry between CNT and probabilistic approaches with respect to their ability to handle radical uncertainty disappears.

Savage (1954) famously distinguished between "small-world" and "grand-world" decision problems. Small-world decision problems are the ones we encounter in textbooks. In such problems the spaces of possible outcomes, actions, and evidence are highly constrained, and it is possible to calculate which action maximizes expected utility, as well as how this would change were the agent in question to gain various pieces of evidence. Small-world decision problems are tractable because they represent only a tiny subset of the possible distinctions that can be made. Grand-world problems represent an agent facing a vast array of options, considering all logically coherent hypotheses about the outcomes of their choosing those options, and considering the relevance of all their sensory and background information to all of the previous. Nobody has ever written down a grand-world decision problem, and if, per impossibile, someone did, they'd still be unable to solve it. Savage admitted he had little to say about how we identify small-world problems for analysis, and effectively treated the process as a black box.
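To make the contrast concrete, a textbook small-world problem can be solved in a few lines. The sketch below is a generic illustration, not anything from the target article: the states, actions, probabilities, and utilities are all hypothetical, and the point is simply that once the problem has been cut down to this size, expected utility maximization is trivial to compute.

```python
# A toy "small-world" decision problem: two actions, two states of the
# world. All numbers are hypothetical, chosen only for illustration.
states = {"rain": 0.3, "sun": 0.7}  # prior probabilities over states

utility = {  # utility of each (action, state) pair
    ("umbrella", "rain"): 0,
    ("umbrella", "sun"): -1,      # mild nuisance of carrying it
    ("no_umbrella", "rain"): -10, # getting soaked
    ("no_umbrella", "sun"): 2,
}

def expected_utility(action):
    """Probability-weighted average utility of an action across states."""
    return sum(p * utility[(action, s)] for s, p in states.items())

best = max(["umbrella", "no_umbrella"], key=expected_utility)
# best == "umbrella": EU(umbrella) = -0.7 vs. EU(no_umbrella) = -1.6
```

The grand-world version of even this decision, by contrast, would have to enumerate every distinction the agent could draw about the weather, the day's plans, and everything else, which is exactly what nobody has ever written down.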

While Savage distinguished between small- and grand-world decision problems in the context of expected utility theory, essentially the same distinction can be made in other paradigms for thinking about rational choice. The idea that grand-world decision problems are intractable is very closely related to the infamous "frame problem" for artificial intelligence (McCarthy & Hayes, 1981), in which it's not assumed that an agent must select an optimal action by computing expected utilities. In this more general setting, the problem is how an agent identifies a representation of her practical situation – one that contains enough relevant information for solving it to be fruitful, while still being compact enough to be tractably analyzable – as the one to subject to some decision-making rule. This problem arises no less for CNT than for probabilistic approaches to decision-making; the space of possible narratives is vast and unstructured, as is the space of possible items of evidence an agent might compare for consilience with one or another narrative.

While the authors acknowledge the size of the space of possible narratives, they have very little to say about how we manage to identify tractable, small-world problems of narrative comparison; their discussion of how we use explanatory heuristics to evaluate narratives effectively assumes we already have a small set of candidate narratives on the table for evaluation. This argumentative strategy amounts to a double standard. Most of the authors' arguments for the inapplicability of probabilistic approaches to decision-making under radical uncertainty – arguments about the impossibility of enumerating all possible outcomes of our actions, all relevant pieces of evidence, and of coming up with both prior and conditional probabilities for all of the previous – presuppose that probabilistic methods must be applied directly to grand-world decision problems. But when they discuss CNT, they treat the process of how we get from a grand-world problem to a small-world one as a black box; while they mention a necessary role for heuristics, they have nothing to say about how they work, or why similar heuristics couldn't work for probabilistic approaches.

If we ask whether probabilistic approaches can meet the same standard to which the authors hold CNT – being feasibly applicable not directly to grand-world problems, but instead to small-world problems in which some set of heuristics has already identified a tractable set of relevant hypotheses, pieces of evidence, and evaluatively relevant features – the prospects aren't so obviously bleak. It's true that even in small-world problems computing exact posterior probabilities is NP-hard (Cooper, 1990). But it's also well known that there are tractable methods for approximate Bayesian computation (Lintusaari, Gutmann, Dutta, Kaski, & Corander, 2017). It's also true that we generally lack non-arbitrary methods for assigning prior probabilities in small-world problems. But given the fruitfulness of Bayesian, probabilistic models in other areas of cognitive science such as vision (Yuille & Kersten, 2006), where worries about arbitrariness aren't obviously any less applicable, it's premature to assume such challenges can't be met in the case of decision-making.
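To illustrate why approximate methods blunt the tractability worry, here is a minimal sketch of rejection-based approximate Bayesian computation in the style surveyed by Lintusaari et al. (2017). The example is generic and hypothetical – a coin-bias inference with made-up numbers – and is not drawn from the target article; its point is that posterior inference can proceed by simulation alone, without ever evaluating a likelihood exactly.

```python
import random

# Rejection-ABC sketch (toy, hypothetical model): infer the bias of a
# coin from an observed head count. We never compute a likelihood --
# we only need to be able to *simulate* data from a candidate bias.
random.seed(0)

N_FLIPS = 100
OBSERVED_HEADS = 62   # hypothetical observation
TOLERANCE = 2         # accept if simulated count is this close to observed

def simulate(bias):
    """Simulate flipping a coin with the given bias N_FLIPS times."""
    return sum(random.random() < bias for _ in range(N_FLIPS))

accepted = []
while len(accepted) < 500:
    bias = random.random()  # draw from a uniform prior over [0, 1]
    if abs(simulate(bias) - OBSERVED_HEADS) <= TOLERANCE:
        accepted.append(bias)  # keep biases that reproduce the data

# The accepted draws approximate the posterior over the bias.
posterior_mean = sum(accepted) / len(accepted)
```

The accepted samples concentrate near the observed frequency (0.62 here), and tightening the tolerance trades computation for accuracy – exactly the kind of principled approximation that makes exact NP-hard inference beside the point for small-world problems.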

None of this is to undermine the authors' positive case for CNT. Nor is it to say that all of their comparative arguments for the superiority of CNT over probabilistic approaches rely on the double standard I describe above; the discussion of “digitization” in section 7.2 is immune from the criticism I've offered here. Still, argumentative double standards should be avoided; neither CNT nor probabilistic approaches to decision-making seem particularly well-suited to answering the exceedingly difficult question of how we identify small-world problems for analysis. Perhaps CNT provides a better account of how we handle such small-world problems once we've identified them, but if so, that claim isn't supported by evidence for the infeasibility of applying probabilistic methods directly to grand-world problems.

Acknowledgements

None.

Financial support

None.

Competing interest

None.

References

Cooper, G. F. (1990). The computational complexity of probabilistic inference using Bayesian belief networks. Artificial Intelligence, 42(2–3), 393–405.
Lintusaari, J., Gutmann, M. U., Dutta, R., Kaski, S., & Corander, J. (2017). Fundamentals and recent developments in approximate Bayesian computation. Systematic Biology, 66(1), e66–e82.
McCarthy, J., & Hayes, P. J. (1981). Some philosophical problems from the standpoint of artificial intelligence. In Webber, B. L. & Nilsson, N. J. (Eds.), Readings in artificial intelligence (pp. 431–450). Morgan Kaufmann. https://doi.org/10.1016/B978-0-934613-03-3.50033-7
Savage, L. J. (1954). The foundations of statistics. Wiley Publications in Statistics.
Yuille, A., & Kersten, D. (2006). Vision as Bayesian inference: Analysis by synthesis? Trends in Cognitive Sciences, 10(7), 301–308.