Quilty-Dunn et al. adduce evidence for the psychological reality of languages-of-thought (LoTs) from a wide range of empirical domains. Their case inherits support from each domain, while depending on none. This is a powerful way to make such a case. Their article is, moreover, timely. It is a most welcome antidote to the steady rise in antirepresentationalist sentiment in many philosophy of cognitive science circles in recent years. Overarching theories of cognition that eschew any role for computational procedures applied to structured symbols are not serious contenders unless and until they adequately account for detailed empirical information of the sort discussed by Quilty-Dunn et al.
My impression of their article is thus strongly positive. My aim here is to supplement their case in two ways: first, by drawing attention to a distinct empirical rationale for LoTs; and second, by situating LoT research within a broader framework that promises to shed light on the evolution of LoTs.
LoT-based architectures often place much lower demands on physical resources than alternative architectures. Symbols are constructed in a combinatorial fashion, and their sequence properties play a role in individuating them; this allows for efficient representation. Additionally, the meaning of a (complex) symbol is a function of that symbol's parts together with their mode of composition, so that symbols, to some extent, analytically decompose their referents. Such symbols allow for the use of compact computational procedures (as opposed to, say, lookup tables). Together, these properties can reduce the demand on physical resources (e.g., neurons) by orders of magnitude.
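To make the arithmetic behind the first point concrete, here is a minimal sketch (my own illustration, not a model drawn from the works cited below) comparing how many physically distinct vehicles are needed to represent N states atomically versus combinatorially:

```python
import math

def atomic_vehicles(n_states: int) -> int:
    """One dedicated physical vehicle (e.g., a cell or cell assembly) per state."""
    return n_states

def combinatorial_vehicles(n_states: int, alphabet_size: int = 2) -> int:
    """Vehicles needed when states are encoded as ordered strings over a small
    alphabet, so that part identity plus position individuates each symbol."""
    return math.ceil(math.log(n_states, alphabet_size))

for n in (10, 10_000, 10_000_000):
    print(f"{n:>10} states: {atomic_vehicles(n):>10} atomic vs "
          f"{combinatorial_vehicles(n):>3} combinatorial vehicles")
```

Distinguishing ten million states atomically takes ten million dedicated vehicles; combinatorially, roughly two dozen binary elements suffice. That is the "orders of magnitude" saving at issue.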
These points have been most forcefully argued by Gallistel and colleagues (Gallistel, 1990, 2008; Gallistel & King, 2011), often with examples drawn from animal cognition. A good case is the caching behavior of western scrub jays. These birds are estimated to encode the locations of thousands of caches (Clayton & Krebs, 1995). Moreover, for each location, they encode what was cached, when it was cached, and whether they were watched while caching it (their caches are often pilfered) (Clayton, Yu, & Dickinson, 2001). Additionally, they make flexible use of this information (e.g., to retrieve cached items in an efficient way) (Clayton et al., 2001). Arguably, scrub jays could not physically realize the requisite symbols and computations except by instantiating an LoT. And even if they could, an LoT architecture might still have been selectively favored for its increased economy. Brain tissue is expensive, after all.
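For a sense of how such structured records might look, here is a hypothetical sketch (my own illustration, not Clayton and colleagues' model; the items and decay horizons are invented) of a cache record whose reusable parts can be recombined across thousands of memories and queried by a compact procedure rather than a lookup table:

```python
from dataclasses import dataclass

@dataclass
class CacheRecord:
    """Hypothetical LoT-style record: a small stock of reusable symbolic parts,
    recombined across thousands of individual cache memories."""
    location: tuple[int, int]  # where the item was cached
    item: str                  # what was cached (e.g., "wax worm", "peanut")
    hours_ago: int             # when it was cached
    was_observed: bool         # whether a potential pilferer was watching

def retrieval_priority(record: CacheRecord) -> float:
    """A compact procedure defined over the record's structure, as opposed to a
    lookup table pairing every possible situation with a canned response."""
    decay_horizon = {"wax worm": 30.0, "peanut": 2000.0}  # invented, for illustration
    urgency = record.hours_ago / decay_horizon.get(record.item, 1000.0)
    return urgency + (0.5 if record.was_observed else 0.0)

caches = [
    CacheRecord(location=(3, 7), item="wax worm", hours_ago=20, was_observed=False),
    CacheRecord(location=(9, 1), item="peanut", hours_ago=20, was_observed=True),
]
print(max(caches, key=retrieval_priority).item)  # -> wax worm (perishable, so retrieved first)
```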
But how might such symbol systems evolve in the first place? Progress on this question can be made using the "sender–receiver framework," which is inspired by the signaling games first presented by David Lewis (1969). At its simplest, a signaling game features a sender, who can observe the variable state of the world and send a signal (but cannot act), and a receiver, who can observe the signal (but not the world) and act. Acts have consequences for both sender and receiver, and both have preferences regarding which act should be done when. Lewis showed that, given certain conditions (e.g., rationality, common interest, common knowledge), informative signaling can arise and stabilize. Decades later, these games were revisited by Skyrms, who showed how Lewis's constraints could be significantly relaxed (Skyrms, 1995, 2004, 2010). Indeed, Skyrms showed that even completely mindless agents can evolve informative signaling under many conditions.
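A minimal simulation conveys the flavor of Skyrms's result. The sketch below is illustrative only: the common-interest payoffs and the simple Roth-Erev reinforcement rule are standard modeling choices, not claims about any particular organism. A sender and receiver with no rationality or foresight nonetheless tend to converge on informative signaling:

```python
import random

N = 3  # number of world states = number of signals = number of acts

# Each row is an "urn" of weights; choices are made by weighted sampling.
sender = [[1.0] * N for _ in range(N)]    # sender[state][signal]
receiver = [[1.0] * N for _ in range(N)]  # receiver[signal][act]

def draw(weights):
    return random.choices(range(N), weights=weights)[0]

def play_round(learn=True):
    state = random.randrange(N)       # sender observes the state (but cannot act)
    signal = draw(sender[state])      # ...and sends a signal
    act = draw(receiver[signal])      # receiver observes only the signal, and acts
    success = (act == state)          # common interest: both do well on a match
    if learn and success:             # Roth-Erev reinforcement: simply add weight
        sender[state][signal] += 1.0  # to whatever just worked
        receiver[signal][act] += 1.0
    return success

for _ in range(20_000):
    play_round()
accuracy = sum(play_round(learn=False) for _ in range(1_000)) / 1_000
print(accuracy)  # typically well above the 1/3 chance rate; partial pooling can also occur
```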
Skyrms's generalization of the Lewis model allows us to apply that model within organisms, not just between them (Godfrey-Smith, 2014; Planer, 2019; Planer & Godfrey-Smith, 2021). Two cognitive mechanisms (or one and the same cognitive mechanism at different times) can serve as sender and receiver in a Lewis–Skyrms-style setup. And this allows us to see (with the aid of the theory and results that have grown up around signaling games in recent decades) how signaling systems, including rather complex ones, can arise and stabilize in brains over phylogenetic and ontogenetic timeframes. This includes systems that are plausibly conceived of as LoTs (Planer, 2019).
Using the sender–receiver framework, Planer and Godfrey-Smith (2021) present a taxonomy of signs displaying different forms of structure (Table 1). Unfortunately, there is not scope here to go through the details of this taxonomy. Suffice it to say that it is organized around two tripartite distinctions among signs, namely, atomic-composite-combinatorial and nominal-organized-encoding, which are envisaged as plausible, incremental evolutionary pathways. On this taxonomy, an LoT is a sign system (used in cognition) that is simultaneously combinatorial and encoding. As a combinatorial sign system, it contains signs that are constructed out of other signs belonging to the system (and hence there is sharing of parts across signs), and the order of a sign's parts matters to how the sign functions in communication and/or computation. As an encoding sign system, it has a systematic principle (or set of such principles) that assigns meaning to complex signs based not only on the identity of their parts but also on where those parts occur in the sign (and so particular locations within a complex sign carry meaning). It is combinatoriality that allows for maximally efficient representation, and encoding principles that allow for the use of compact, efficient algorithms. These properties are very close to those Quilty-Dunn et al. call "discrete constituency" and "role-filler independence" (while "predicate–argument structure" [target article, sect. 2, para. 9] can be understood as a special case of encoding). A toy illustration of these two properties is sketched after Table 1.
Table 1. Taxonomy of signs based on their formal and semantic structure
Adapted from Planer and Godfrey-Smith (2021).
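As a toy illustration of what "combinatorial" and "encoding" amount to (my own example, not Planer and Godfrey-Smith's formalism), consider a two-slot sign system in which the interpretation of a complex sign depends on both the identity and the position of its parts:

```python
# Toy encoding sign system: complex signs are ordered pairs of reusable parts,
# and a single principle interprets them by part identity plus slot position.
PREDICATES = {"A": "food-at", "B": "predator-at"}    # slot 0: kind of fact
ARGUMENTS = {"x": "north site", "y": "south site"}   # slot 1: location

def interpret(sign: str) -> str:
    predicate, argument = sign[0], sign[1]   # position within the sign carries meaning
    return f"{PREDICATES[predicate]}({ARGUMENTS[argument]})"

print(interpret("Ax"))  # food-at(north site)
print(interpret("By"))  # predator-at(south site)
# The same parts recombine freely ("Ay", "Bx"), which makes the system combinatorial;
# and because interpretation depends on where a part sits, it is also encoding.
```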
A final methodological point. The sender–receiver framework is closely associated with a family of formal signaling models. Although the orientation toward sign use that the framework fosters is not inherently formal (Planer & Godfrey-Smith, 2021), these models are very useful, for they make it tractable to test ideas about the emergence of various forms of structure. Research on the evolution of LoTs can no doubt benefit from these formal tools. Most obviously, signaling models might be used to investigate whether, and under what conditions, Quilty-Dunn et al.'s six core properties indeed cluster (or form subclusters). Additionally, such models might be used to test the idea that LoTs evolve at interfaces between other systems, as interface systems can be naturally modeled as intermediaries in so-called signaling chains.
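As a sketch of the second suggestion, the basic reinforcement model shown earlier extends naturally to a signaling chain, with an intermediary standing at the interface between an "observing" system and an "acting" system. Again, this is an illustrative toy under simple assumptions rather than a worked-out proposal:

```python
import random

N = 2  # kept small so the three-agent chain reliably coordinates
sender = [[1.0] * N for _ in range(N)]    # sender[state][signal_1]
middle = [[1.0] * N for _ in range(N)]    # middle[signal_1][signal_2]: the interface system
receiver = [[1.0] * N for _ in range(N)]  # receiver[signal_2][act]

def draw(weights):
    return random.choices(range(N), weights=weights)[0]

def play_round(learn=True):
    state = random.randrange(N)
    s1 = draw(sender[state])      # the sender signals only to the intermediary
    s2 = draw(middle[s1])         # the intermediary re-signals to the receiver
    act = draw(receiver[s2])
    success = (act == state)
    if learn and success:         # all three links are reinforced on joint success
        sender[state][s1] += 1.0
        middle[s1][s2] += 1.0
        receiver[s2][act] += 1.0
    return success

for _ in range(50_000):
    play_round()
print(sum(play_round(learn=False) for _ in range(1_000)) / 1_000)  # typically well above 0.5
```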
Financial support
This research received no specific grant from any funding agency, commercial, or not-for-profit sectors.
Competing interest
None.