Johnson et al. make a compelling case that what they call conviction narratives play an important role in human thought and decision-making. But are such narratives primarily products of more basic cognitive representations and processes, which are constructed in the moment to justify and persuade?
Humans continually tell stories about themselves and others, whether in conversation, courts of law, or literature. But, of course, they generate not merely stories, but external representations of many kinds: diagrams, maps, sketches, descriptions, theories, hypotheses, regulations, algorithms, mathematical proofs, logical formalisms, and many more.
A central and classic question for the behavioral and brain sciences is which of these representations, if any, provide useful analogs of the internal representations underlying thought. Do animals, including humans, represent spatial layouts using “mental maps” (Tolman, Reference Tolman1948)? Is mental imagery underpinned by a distinctively pictorial style of representation (Kosslyn, Reference Kosslyn1980; Pylyshyn, Reference Pylyshyn1981)? Does the generation and understanding of natural language involve the translation to and from an internal language of thought (Dennett, Reference Dennett and Dennett1978; Fodor, Reference Fodor1975)? Is our naïve understanding of other people, or the external world, rooted in internally represented theories of folk psychology or naïve physics (Bloom, Reference Bloom2005)? Similarly, researchers have long wondered whether aspects of everyday knowledge are organized into standardized, perhaps story-like, scripts (e.g., Schank & Abelson, Reference Schank and Abelson1977) – a viewpoint that Johnson et al. explore in great depth.
How can we tell? As the classic debates mentioned above suggest, there is no single accepted and definitive approach. But one direct approach is to ask what it is, and what it is not, possible to represent using different types of representation. Suppose that a person judges London to be west of Tokyo, Tokyo to be west of San Francisco, and San Francisco to be west of London. This immediately implies that these judgments cannot be directly read off from a flat 2D map-like internal representation of the geography of the world – because it is impossible to represent these relationships on a 2D map. Of course, those judgments might be read off from an internal representation that, like the earth itself, has the form of a sphere; or they might be stored in a purely symbolic database.
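As a minimal sketch of why (the notation here is mine, not the target article's): suppose a flat map assigns each city a horizontal coordinate x, with "A is west of B" read off as x(A) < x(B). The three judgments would then jointly require

\[
x(\mathrm{London}) < x(\mathrm{Tokyo}) < x(\mathrm{San\ Francisco}) < x(\mathrm{London}),
\]

and hence x(London) < x(London), a contradiction; no assignment of flat-map coordinates can satisfy all three judgments at once.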
Consider an ingenious study by Muryy and Glennerster (Reference Muryy and Glennerster2021), in which people navigate a virtual reality environment in which "wormholes" have been created that are inconsistent with Euclidean 3D space. If people represent the local environment using Euclidean spatial maps, then they should be unable to represent such wormholes, and should therefore be unable to represent such environments using their mental maps, with the result that their navigation should be heavily impeded. In practice, though, people are able to move around these strange spaces successfully, apparently oblivious to their contradictions with the assumptions of Euclidean space. Or consider the visual representation of so-called "impossible objects," such as the Penrose triangle (Penrose & Penrose, Reference Penrose and Penrose1958). If vision, or mental imagery, generally involves conjuring up an iconic pictorial representation in 3D space, then such a process should be confounded by impossible objects, because no consistent representation can be found. In practice, the visual system is able to detect such inconsistencies only after considerable scrutiny.
Turning to decision-making, the assumption that outcomes are assigned cardinal-valued utilities similarly implies that intransitive preferences (preferring A to B, B to C, and C to A) should be impossible to represent, and should be observed empirically due only to unstable preferences or noise. But many have argued that choices can often be systematically intransitive, and hence cannot be mediated by any kind of utility representation over possible outcomes (e.g., Tsetsos et al., Reference Tsetsos, Moran, Moreland, Chater, Usher and Summerfield2016).
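The underlying logic can be stated in a line (again, the notation is mine): if strict preference were read off from a real-valued utility function u over outcomes, then preferring A to B, B to C, and C to A would require

\[
u(A) > u(B), \qquad u(B) > u(C), \qquad u(C) > u(A),
\]

which, by the transitivity of > on the real numbers, gives u(A) > u(A), a contradiction; no fixed utility assignment over the outcomes can generate such a cycle, which is why systematically intransitive choices cannot be mediated by a utility representation.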
In each of these cases, the choice of basic representational units determines what can, and what cannot, be represented – and in each of these cases, observed behavioral flexibility seems to go beyond what might be expected if the mind were working with maps, images, or utilities.
If, with Johnson et al., we see conviction narratives as representational units, then it is natural to ask: What representational assumptions do narrative representations embody? Which pieces of knowledge or types of decisions should it not be possible to represent (or represent easily) using narratives?
Taking the narrative story at face value, we might expect that people should not be able to represent narratives which have "plot holes" or contradictions – precisely because they should not have a natural story-like representation. Yet, as with impossible figures, plot holes and contradictions often go undetected. In The Big Sleep, the murder of the chauffeur is famously both unexplained and seemingly inexplicable – Raymond Chandler admitted that he himself had no idea who was responsible (Herman, Reference Herman1997). Thus, a gaping plot hole escaped the notice of many readers and of the author himself. But the same point arises, of course, for much simpler cases. The "Moses illusion" (Erickson & Mattson, Reference Erickson and Mattson1981) asks people how many animals of each kind Moses took onto the ark; "two" is often the ready answer, although, of course, there is no biblical or other story in which Moses took any animals onto an ark (it was Noah).
These sorts of cases are not necessarily problematic for Johnson et al.'s analysis – but I think they would pose a challenge for any model in which narratives form the building blocks of knowledge. If narratives are the building blocks of knowledge, then those fundamental narratives surely need to be coherent in their own terms.
An alternative viewpoint is that the narratives are not building blocks at all, but are the results of cognitive processing [just as we might see visual imagery and indeed visual perception as a result of symbolic computation, rather than as arising from a distinctively pictorial mode of representation (Pylyshyn, Reference Pylyshyn2002, Reference Pylyshyn2003)]. Rather, we might see the mind as an inveterate story-spinner that continually generates narratives, moment-by-moment, to make sense of the world around us (Chater, Reference Chater2018). So, for example, consider observing Heider and Simmel's (Reference Heider and Simmel1944) celebrated animation of an "interaction" in which an aggressive large triangle chases a small triangle and a circle. We find ourselves projecting a story-like interpretation, in which the small triangle is initially defending the circle, but is forced aside; the circle "escapes" and locks the large triangle in a confined room (denoted by a rigid rectangle); the circle and small triangle keep themselves hidden; the furious large triangle ends up "smashing up" the room. Our tendency to populate the world with narratives involving plans, intentions, friendships, and hostilities is, indeed, ubiquitous. But more evidence may be required to justify Johnson et al.'s proposal that narratives play a key role in mental representation.
Financial support
This work was supported by the ESRC Network for Integrated Behavioural Science [grant number ES/K002201/1].
Competing interest
None.