
A tale of two histories: Dual-system architectures in modular perspective

Published online by Cambridge University Press:  18 July 2023

John Zerilli*
Affiliation:
Old College, University of Edinburgh, Edinburgh, UK [email protected] https://www.law.ed.ac.uk/people/dr-john-zerilli

Abstract

I draw parallels and contrasts between dual-system and modular approaches to cognition, the latter standing to inherit the same problems De Neys identifies in the former. Although these two literatures rarely come into contact, I offer one example of how he might gain theoretical leverage on the details of his "non-exclusivity" claim by paying closer attention to the modularity debate.

Type
Open Peer Commentary
Copyright
Copyright © The Author(s), 2023. Published by Cambridge University Press

The cleavage between thinking that's fast, intuitive, and stereotyped and thinking that's slow, effortful, and fluid is a defining feature of contemporary dual-system accounts. However, a parallel and largely independent tradition in cognitive science posits domain-specific cognitive systems or "modules" (Chomsky, 1980; Fodor, 1983; Marr, 1976; Mountcastle, 1957, 1978). In the canonical formulation, the existence of modules is thought to hinge on the difference between "central" and "peripheral" operations, where only the latter qualify as modular (Fodor, 1983; cf. Carruthers, 2006; Chomsky, 2018; Sperber, 1994, 2002). Peripheral systems encompass both sensory (input) and motor (output) systems, including those storing procedural knowledge and skill routines. They are characterised by a roster of diagnostic features similar to those commonly ascribed to the fast and intuitive "system 1" within dual-system accounts – in particular, a degree of informational encapsulation, automaticity, and introspective opacity. The main difference is that, with modules being domain-specific, one doesn't encounter an all-purpose "peripheral module," akin to system 1, that's set against the central system/"system 2." Instead, there are at least as many modules as there are input and output systems, and potentially separate modules for acquired skills (Karmiloff-Smith, 1992). Furthermore, being peripheral, the operations of modules map imperfectly onto system 1 functions, with some possible overlap for skills. But even then, in dual-system accounts, the skills in question are more likely to be cognitive biases and rational heuristics – something more like intellectual habits – than perceptuo-motor and procedural skills. Perhaps ironically, the dual-system view has more in common with theories of "massive modularity," in that both view central operations as carved into stereotyped modes of functioning dependent on context (Barrett & Kurzban, 2006). Both dual-system and modular theories are, in turn, distant cousins of the much older physiological division of the nervous system into the central ("voluntary") and peripheral ("autonomic"/"involuntary") nervous systems. According to the physiological classification, brain and spinal cord constitute the central nervous system, meaning that, counterintuitively, modular (peripheral) operations, being largely cortically controlled, fall under the central nervous system, not the peripheral one.
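To fix the architectural contrast in mind, a minimal toy schematic may help. It is purely illustrative: every function and module name below is my own invention rather than a feature of any published model, and both literatures are of course far richer than a few stubs can convey.

```python
from typing import Callable, Dict

# Dual-system picture: one all-purpose fast path set against one slow path.
def system1(stimulus: str) -> str:
    """Fast, automatic, stereotyped response to any stimulus whatsoever."""
    return f"gut response to {stimulus!r}"

def system2(stimulus: str) -> str:
    """Slow, effortful, deliberate response to any stimulus whatsoever."""
    return f"reasoned response to {stimulus!r}"

# Modular picture: many domain-specific processors, each restricted to its
# own kind of input; there is no all-purpose "peripheral module".
MODULES: Dict[str, Callable[[str], str]] = {
    "vision": lambda s: f"visual parse of {s!r}",
    "audition": lambda s: f"auditory parse of {s!r}",
    "motor": lambda s: f"motor routine for {s!r}",
}

def module_response(domain: str, stimulus: str) -> str:
    """A module fires only for stimuli in its own domain."""
    return MODULES[domain](stimulus)

print(system1("approaching ball"))
print(module_response("vision", "approaching ball"))
```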

Some philosophers have thought that if peripheral operations are "fast, cheap, and out of control" they will be less vulnerable to epistemically corrosive top-down/doxastic influences (Machery, 2015; Zeimbekis & Raftopoulos, 2015). Indeed, epistemic worries lay partly behind the traditional effort among modularists to show that perception isn't cognitively penetrable – that a visual module, for example, cannot access central information, such as an agent's beliefs and desires, and so operates without interference from what the agent believes or wants the world to be like (Fodor, 1983, 1984). This sort of informational encapsulation amounts to a more pronounced version of the system 1/system 2 distinction, albeit one pitting perceptuo-motor tasks against system 2. De Neys's non-exclusivity model, for its part, predicts that system 2 responses are available to system 1 – itself a highly suggestive claim that runs counter to the modularist's contention about the cognitive impenetrability of perception. For instance, De Neys speculates that "intuitive logical reasoning would serve to calculate a proxy of logical reasoning, but not actual logical reasoning" (target article, sect. 4.2, para. 3). One compelling explanation for this feat is that the brain is able to execute quick, largely involuntary, and reliable routines by exploiting some of the same hardware – and information – that runs the slower (more deliberate) routines. If that's true, and generalises to perceptual systems, the epistemic worry would either dissolve (optimistically) or diminish (more likely), because perceptual systems would then still be fast, cheap, and out of control, and hence less vulnerable to interference from central information, despite having access to that information (i.e., despite being cognitively penetrable). But more importantly for De Neys (and whether or not the idea generalises to perceptual systems), it would offer him a promising source of corroborating detail for his non-exclusivity framework: System 1 might generate system 2 responses efficiently and reliably because it has access to system 2 information! As it happens, a proposal along these lines finds support in some of the (anti)modularity literature, which suggests that perceptual systems do have access to central information.
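A minimal sketch may make the suggestion vivid, on assumptions that go well beyond anything De Neys commits to: a fast routine and a slow routine consult the same stored information, with the fast routine returning only a cheap proxy signal (a feeling of validity) while the slow routine works out the actual conclusion. The names and rules below are invented for illustration only.

```python
from typing import Dict, Optional, Tuple

# One shared store of information, accessible to both routines.
RULES: Dict[Tuple[str, str], Optional[str]] = {
    ("all A are B", "x is A"): "x is B",   # valid inference
    ("some A are B", "x is A"): None,      # nothing follows
}

def slow_deliberate(premises: Tuple[str, str]) -> str:
    """'System 2': effortful derivation of the actual conclusion."""
    conclusion = RULES.get(premises)
    return conclusion if conclusion else "no valid conclusion"

def fast_proxy(premises: Tuple[str, str]) -> str:
    """'System 1': a cheap check over the SAME store, yielding only a
    validity signal rather than a worked-out conclusion."""
    return "feels valid" if RULES.get(premises) else "feels off"

premises = ("all A are B", "x is A")
print(fast_proxy(premises))       # quick signal: feels valid
print(slow_deliberate(premises))  # deliberate answer: x is B
```

The point of the sketch is simply that nothing prevents the cheap routine from consulting the very information the expensive routine uses, which is all the non-exclusivity claim appears to need.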

For example, evidence of widespread neural "reuse" or "recycling" demonstrates that the neural communities subserving even our most evolutionarily ancient transduction systems also subserve central systems; and it's also likely that transduction dynamics can sometimes be activated by the same domain-general nodes that yield central system dynamics (Anderson, 2010, 2014; Dehaene, 2005). Both findings are significant, because overlapping neural systems are likely to share information (Pessoa, 2016). Further evidence that fast routines can indeed be built out of the elements of slower ones comes from research showing that visual processing integrates memories and prior expectations – which feature in slower, classically central, operations – implying that some perceptual processes have access to central information, despite being fast, automatic, and reflex-like (Chanes & Barrett, 2016; Munton, 2022). Take a simple example.

Maple Syrup: A bottle of "Hamptons Maple Syrup" on my kitchen benchtop struck me as "Hampton's Maple Syrup" for quite some time until one day I realised there was no apostrophe. In fact, for some of the time there was an apostrophe, but it had been expertly occluded by my partner, an amateur lithographer, who gets a kick out of altering labels on household food items when he's bored.

Maple Syrup seems as good an example as any of the cognitive penetration of perceptual experience, and it's the cumulative force of multiple bouts of misremembering what I had previously seen, on top of heavily weighted priors, that plausibly accounts for it. The penetration is fast, automatic, and not readily susceptible to central revision. Crucially, it illustrates that fast and frugal dynamics can sometimes underwrite perceptual fidelity without the added requirement that perception be cognitively impenetrable – after all, there normally is an apostrophe on bottles of Hampton's maple syrup! Contextual disambiguations like this are probably ubiquitous (e.g., incorrectly seeing “agnostic” instead of “agonistic” in a context where the former would be more typical, such as an article about religious beliefs in America).
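For readers who prefer the point in Bayesian dress, a toy calculation can stand in for the informal gloss above. The numbers are fabricated for illustration and carry no empirical weight; the only claim is that a heavily weighted prior for the familiar apostrophised label can swamp weak, glanced-at evidence from the actual bottle, so the fast percept comes out as "Hampton's."

```python
def posterior(prior_apostrophe: float,
              p_glance_given_apostrophe: float,
              p_glance_given_none: float) -> float:
    """P(label has apostrophe | glanced evidence), by Bayes' rule."""
    joint_yes = prior_apostrophe * p_glance_given_apostrophe
    joint_no = (1 - prior_apostrophe) * p_glance_given_none
    return joint_yes / (joint_yes + joint_no)

# A strong prior from years of apostrophised labels, weighed against noisy
# evidence that only weakly favours the apostrophe-free reading.
p = posterior(prior_apostrophe=0.95,
              p_glance_given_apostrophe=0.3,
              p_glance_given_none=0.7)
print(f"P(apostrophe | glance) = {p:.2f}")  # ~0.89: the label reads "Hampton's"
```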

Obviously, De Neys can afford to be noncommittal on the epistemic issues surrounding perception. But the fallout from this debate may offer just the lead he needs to gain a tighter understanding of how his non-exclusivity proposal might work.

Financial support

No funding was received to assist with the preparation of this manuscript. The author has no relevant financial or non-financial interests to disclose.

Competing interest

None.

References

Anderson, M. L. (2010). Neural reuse: A fundamental organizational principle of the brain. Behavioral and Brain Sciences, 33, 245–313. https://doi.org/10.1017/S0140525X10000853
Anderson, M. L. (2014). After phrenology: Neural reuse and the interactive brain. MIT Press. https://doi.org/10.7551/mitpress/10111.001.0001
Barrett, H. C., & Kurzban, R. (2006). Modularity in cognition: Framing the debate. Psychological Review, 113, 628–647. https://doi.org/10.1037/0033-295X.113.3.628
Carruthers, P. (2006). The architecture of the mind: Massive modularity and the flexibility of thought. Oxford University Press. https://doi.org/10.1093/acprof:oso/9780199207077.001.0001
Chanes, L., & Barrett, L. F. (2016). Redefining the role of limbic areas in cortical processing. Trends in Cognitive Sciences, 20, 96–106.
Chomsky, N. (1980). Rules and representations. Columbia University Press.
Chomsky, N. (2018). Two notions of modularity. In de Almeida, R. G. & Gleitman, L. R. (Eds.), On concepts, modules, and language: Cognitive science at its core (pp. 25–40). Oxford University Press.
Dehaene, S. (2005). Evolution of human cortical circuits for reading and arithmetic: The "neuronal recycling" hypothesis. In Dehaene, S., Duhamel, J. R., Hauser, M. D., & Rizzolatti, G. (Eds.), From monkey brain to human brain (pp. 133–157). MIT Press.
Fodor, J. (1983). The modularity of mind: An essay on faculty psychology. MIT Press.
Fodor, J. (1984). Observation reconsidered. Philosophy of Science, 51, 23–43.
Karmiloff-Smith, A. (1992). Beyond modularity: A developmental perspective on cognitive science. MIT Press.
Machery, E. (2015). Cognitive penetrability: A no-progress report. In Zeimbekis, J. & Raftopoulos, A. (Eds.), The cognitive penetrability of perception: New philosophical perspectives (pp. 59–74). Oxford University Press.
Marr, D. (1976). Early processing of visual information. Philosophical Transactions of the Royal Society B, 275, 483–524.
Mountcastle, V. (1957). Modality and topographic properties of single neurons of cat's somatic sensory cortex. Journal of Neurophysiology, 20, 408–434. https://doi.org/10.1152/jn.1957.20.4.408
Mountcastle, V. (1978). An organizing principle for cerebral function: The unit module and the distributed system. In Edelman, G. & Mountcastle, V. B. (Eds.), The mindful brain (pp. 7–50). MIT Press.
Munton, J. (2022). How to see invisible objects. Noûs, 56, 343–365.
Pessoa, L. (2016). Beyond disjoint brain networks: Overlapping networks for cognition and emotion. Behavioral and Brain Sciences, 39, 22–24.
Sperber, D. (1994). The modularity of thought and the epidemiology of representations. In Hirschfeld, L. A. & Gelman, S. A. (Eds.), Mapping the mind (pp. 39–67). Cambridge University Press.
Sperber, D. (2002). In defense of massive modularity. In Dupoux, E. (Ed.), Language, brain, and cognitive development (pp. 47–57). MIT Press.
Zeimbekis, J., & Raftopoulos, A. (Eds.) (2015). The cognitive penetrability of perception: New philosophical perspectives. Oxford University Press.