Morin presents an interesting, novel argument about the lack of writing systems based on ideographs rather than language. Much of the argument, especially regarding the role of standardization in supporting systematicity, seems both correct and not incompatible with the dominance of language-based writing. Morin proposes two explanations for why there has not been a self-sufficient, systematic graphic code: the learning account and the standardization account. He dismisses the first and champions the second. But the learning account – the notion that humans are not equipped to learn large numbers of graphic symbol–meaning pairings without relying on language – cannot be dismissed so easily and is, in fact, compatible with a role for standardization.
In his target article, Morin notes that there are limits to glottography, such that the linguistic information an orthography encodes is not limited to phonological information. (It is for this reason that we refer to a “language constraint” and not “glottography” in identifying constraints on written language; Perfetti, 2003; Perfetti & Harris, 2013.) But the converse is evidently true: There are limits to our ability to bypass speech and reliably access meaning directly from graphic symbols. The strongest evidence for this learning account exists in the form of the millions of individuals worldwide who, because of biological constraints imposed by deafness or dyslexia, cannot easily form connections between written letter strings and spoken words. For these individuals, the most expeditious route to literacy, were it a viable one, would be to bypass language entirely in forming a reading network. Processing letter strings as unitary graphic symbols that map directly to meaning – that is, as ideographs – would unlock written text for clinical populations who otherwise struggle to access it.
This does not happen. Despite enormous social and economic pressures and abundant opportunity, deaf and dyslexic individuals have tremendous difficulty reading. The average deaf student finishes secondary education reading at the level of a primary-aged child (Lederberg, Schick, & Spencer, 2013), and individuals diagnosed with dyslexia in childhood continue to display poor reading and writing skills throughout life (Reis, Araújo, Morais, & Faísca, 2020). In fact, a preponderance of evidence indicates that whole-word reading instruction – a method that essentially encourages children to treat writing as an ideography – provides fertile ground for dyslexia to flourish (Perfetti & Harris, 2019).
In his description of the limits of glottography, Morin cites a recent article in which we review evidence that some percentage of deaf and dyslexic individuals become proficient readers by deemphasizing phonology in the formation of their neural reading networks (Hirshorn & Harris, 2022). Although this is true, the subset of deaf and dyslexic individuals who manage to forge these atypical pathways is distressingly small, and neuroimaging evidence has yet to reveal individuals who omit phonological areas entirely from the reading network. Moreover, we proposed in our review article that those deaf and dyslexic individuals who attain literacy by constructing phonology-deemphasized networks may represent a minority of the larger population with exceptional visual memory capacity (Hirshorn & Harris, 2022). The probability that humans would have stumbled upon or cultivated a graphic symbol–meaning network that excludes language entirely, in the absence of the current pressures that compel predisposed members of clinical populations to develop a limited version of it, therefore seems low, even if there were no language for writing to parasitize.
What to make, then, of Morin's rejection of the learning account? Curiously, his dismissal of it is premised on his dismissal of the motor theory of speech perception, which, among other things, purports to account for children's learning to speak more easily than their learning to read. The reader may or may not find Morin's criticisms of motor theory persuasive – if not, there are plenty of other criticisms of it on offer (see, e.g., Hickok, 2014). In any case, it is not clear how the viability of the learning account is tethered to the viability of motor theory. Rejection of motor theory's claim that humans have an innate sensitivity to speech segments, or mentally represent speech as articulatory gestures, does not entail acceptance of the claim that we can easily learn large numbers of graphic symbol–meaning pairings without parasitizing neural speech networks. Morin's argument against the learning account relies heavily on rejecting the assumption that the human mind is “ill-equipped” (target article, sect. 5.5, para. 1) to memorize large numbers of form–meaning pairings. On Morin's argument, this assumption is undermined because it “wrongly predicts that full-blown sign languages cannot evolve” (target article, sect. 5.5, para. 1). This, too, seems to be a non sequitur. Instead, sign languages can evolve but are not able to replace language-based writing. They find their evolutionary niche without crowding out the dominant species.
We do agree that standardization is a factor in developing writing systems. Morin's argument is that “spoken or signed codes, being easier to standardize, install a lock-in situation where other types of codes are less likely to evolve” (target article, sect. 6.1, para. 6). The question is, why are they easier to standardize? Language-based writing, whether its graphs map to speech segments, whole syllables, morphemes, or whole words, provides standard graph–language pairs in cognitively manageable numbers that can generate an infinity of messages with relatively small means.
Financial support
This research received no specific grant from any funding agency, commercial, or not-for-profit sectors.
Competing interest
None.