Morin contends that, of all possible codes humans and perhaps other intelligent agents may invent, adapt, and employ for communication, spoken or signed languages have the greatest ecoevolutionary fitness because they are both self-sufficient and general tools of expression, with the rare notable exception of flourishing language-linked graphic codes, or writing systems. To justify the biocultural dominance of language over ideography, Morin emphasizes the unique capacity of language to optimize the limiting trade-off between self-sufficiency (i.e., use of language without the need to annotate messages through auxiliary codes) and generality (i.e., use of language to produce and comprehend an indefinite variety of affective prosody and propositional linguistic content). But, as the author also recognizes, the capacity of language to optimize this trade-off may differ with protolanguage and language sophistication, as exemplified by the contrasting independent evolution of ancient languages in China, Egypt, Mesoamerica, and Mesopotamia. Poorly standardized and narrowly specialized codes lacking prominent traits of glottographic writing, such as Mesopotamian protocuneiform (Damerow, Reference Damerow2006; Schmandt-Besserat, Reference Schmandt-Besserat2007), failed to inspire later languages and corresponding writing systems of robust versatility and longevity. On this and additional evidence, the degree of graphic-code standardization and specialization appears to reciprocally drive the effectiveness, fitness, or optimality of languages, with proliferating apex writing systems representing more than just a few dimensions of their successful languages, rather than serving as mere notations for phonetics, affect, semantics, or structure (Clark, Reference Clark2017b, Reference Clark2018). Morin applies his somewhat compelling “specialization hypothesis” (target article, sect. 4, para. 1) to conclude that graphic codes, particularly ideographies, can be self-sufficient or general, but not both, as with mathematical notations and mnemonic pictographs, respectively. Significantly, the exclusion properties of this hypothesized condition are suggestive of the Gödelian incompleteness theorems (Clark, Reference Clark2018; Clark & Hassert, Reference Clark and Hassert2013; Gödel, Reference Gödel1931; Kreisel, Reference Kreisel and Schoeman1967), under which formal axiomatic systems, including languages and graphic codes, must exist within a universe of graded logicomathematical consistency (i.e., all theorems are true, syntactically well-formed propositions of the system) and completeness (i.e., all true, syntactically well-formed propositions of the system are theorems).
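To make these conditions explicit, the consistency and completeness definitions just given can be set out schematically. The following LaTeX block is only an illustrative formalization: the predicates Prov_S (provable in system S) and True_S (true, syntactically well-formed proposition of S) are shorthand introduced here, not notation from the target article or the cited sources.

```latex
% Illustrative formalization only; S is a formal system and \varphi a
% syntactically well-formed proposition of S. Prov_S and True_S are
% shorthand predicates introduced for exposition.
\begin{align*}
\text{Consistency (soundness):}\quad & \forall \varphi\,\bigl(\mathrm{Prov}_S(\varphi) \rightarrow \mathrm{True}_S(\varphi)\bigr) \\
\text{Completeness:}\quad & \forall \varphi\,\bigl(\mathrm{True}_S(\varphi) \rightarrow \mathrm{Prov}_S(\varphi)\bigr) \\
\text{G\"odelian trade-off:}\quad & \text{for sufficiently expressive, effectively axiomatized } S, \\
& \text{both conditions cannot hold in full strength at once.}
\end{align*}
```

On this schematic reading, specialization aligns with the consistency (soundness) side and generality with the completeness side, which is the continuum along which the next paragraph locates ideographies.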
The Gödelian incompleteness theorems, much like the predictions of the specialization hypothesis, prove the mutual exclusiveness of strong axiomatic consistency and strong axiomatic completeness. Ideographies and other axiomatic constructs may nonetheless be located logicomathematically along a continuum of consistency and completeness, allowing an ideography, for example, to be only strongly consistent and specialized, only strongly complete and general, or both weakly consistent and specialized and weakly complete and general (Gödel, Reference Gödel1931; Kreisel, Reference Kreisel and Schoeman1967). Contrary to Morin's perspective, the Gödelian framework supports the conjecture that some ideographies, under weaker consistency–completeness conditions, may demonstrate optimality comparable to that of languages and their writing systems, and that they should therefore occupy a larger place in the history of human language origins and evolution. Why this is not readily evident after millennia of human language and writing development is perhaps the truer “puzzle of ideography,” and one that indeed merits focused attention. In contrast to Morin's brief speculations on old-to-new technology influences, the main constraints on widespread, effective ideography innovation and use have arguably been its inherent combinatorial grapholinguistic complexity, its computational encoding–decoding complexity, and the difficulty of its technological rendering and deployment (Clark, Reference Clark and Floares2012, Reference Clark2014, Reference Clark2015, Reference Clark2018), each of which relates to Morin's learning and specialization accounts. Traditional communication barriers associated with the complexity of pictograms and other ideographs are now being trivialized by modern advances in interoperable mobile digital devices, such as smartphones and tablets, smart wearables (e.g., smart glasses), and smart mirrors (De Buyser, De Coninck, Dhoedt, & Simoens, Reference De Buyser, De Coninck, Dhoedt and Simoens2016; Lee et al., Reference Lee, Kim, Hwang, Chung, Jang, Seo and Hwang2020; Miotto, Danieletto, Scelza, Kidd, & Dudley, Reference Miotto, Danieletto, Scelza, Kidd and Dudley2018). Artificial intelligence/machine learning (AI/ML)-powered virtual technologies enable communicants to easily generate, exchange, interpret, store, and adapt ideographic messages well beyond simple stylized emojis, in real time, in person or at a distance, and within and across populations, cultures, and generations of users, promoting the emergence and transition of ideographic languages that are both self-sufficient and general (Clark, Reference Clark2017a, Reference Clark2020).
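As a toy illustration of the encoding–decoding asymmetry just described, the short Python sketch below contrasts a glottographic code, whose small fixed symbol inventory can spell any word, with an ideographic codebook that needs one dedicated sign per concept. The inventories, signs, and function names are hypothetical placeholders, not material from the commentary or the target article.

```python
# Toy contrast between glottographic and ideographic encoding demands.
# All inventories and vocabularies below are hypothetical illustrations.

GLOTTOGRAPHIC_SYMBOLS = set("abcdefghijklmnopqrstuvwxyz")  # ~26 signs spell any word

# An ideographic codebook must grow with the lexicon: one sign per concept.
IDEOGRAPHIC_CODEBOOK = {
    "sun": "☉",
    "water": "💧",
    "walk": "🚶",
    # ...one entry per additional concept the code must express
}

def encode_glottographic(word: str) -> list[str]:
    """Spell a word by reusing a small, fixed symbol inventory."""
    return [ch for ch in word.lower() if ch in GLOTTOGRAPHIC_SYMBOLS]

def encode_ideographic(concepts: list[str]) -> list[str]:
    """Map each concept to its dedicated sign; fails for uncoded concepts."""
    missing = [c for c in concepts if c not in IDEOGRAPHIC_CODEBOOK]
    if missing:
        raise KeyError(f"No ideograph for: {missing}")  # generality gap
    return [IDEOGRAPHIC_CODEBOOK[c] for c in concepts]

if __name__ == "__main__":
    print(encode_glottographic("sunrise"))       # reuses letters already learned
    print(encode_ideographic(["sun", "water"]))  # works only for coded concepts
    print(len(GLOTTOGRAPHIC_SYMBOLS), "vs.", len(IDEOGRAPHIC_CODEBOOK), "signs to learn")
```

The point is simply that the encoder's and learner's burden scales with the lexicon for an ideography but stays roughly constant for a glottography, which is one way to read the learning and specialization constraints noted above.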
Morin's fragmentary views on technology-assisted human performance may be further examined and refined by studying contemporary languages and how they originate, evolve, and devolve through varying combinations of seamless, secure digital technology integration and trustworthy digital ideographic standardization and translation (Clark, Reference Clark2014, Reference Clark2017a, Reference Clark2017b, Reference Clark2020; Roff, Reference Roff2020). Digital technology standardization and trustworthiness remain actively debated concerns for the discipline and industry of communications, motivating joint stakeholder caucuses (e.g., government, corporate, and consumer) to devise and enforce laws, policies, and practices that advance technological capabilities in accordance with fundamental human values, principles, rights, and duties (e.g., Glikson & Woolley, Reference Glikson and Woolley2020; National Academies of Sciences, Engineering, and Medicine, 2021). On-device AI/ML capabilities, and more data- and compute-intensive capabilities managed off-device (e.g., wireless cloud-computing resources and services connected to mobile devices), now enable an enormous range of communication possibilities unattainable with past technologies, such as analog landline telephone networks, wire telegraph systems, printed paper and bound books, and inscribed clay tablets. Normative technology protections are intended to safeguard users against the many deliberate and incidental harms arising from human–human, human–machine, and machine–machine interactions. User safety and wellbeing may be threatened by inadequate operational specifications and malfunction, user error and abuse, and self-generative AI/ML biases and exploitation, among other technological and ethical dangers. Technology-free language use, of course, carries its own safety risks for communicants. Ecoevolutionary pressures that force the development of honest language use for meaningful, reliable communication also conserve language vulnerabilities to eavesdropping, deceit, and propaganda within and across systematics boundaries, from microbes to humans (Clark, Reference Clark2014). Language manipulation is nonetheless often amplified by the computational and expressive power, flexibility, and accessibility of digital communication tools, encouraging standards for disambiguating and labeling honesty and deception, including, but not limited to, exchanges involving digital user identity filters, content authenticators, cultural translators, and user consent in extended-reality platforms. Although state-of-the-art digital technologies allocate unprecedented resources for evolving ideographies into adaptive living languages, the same technologies impose unprecedented user hazards that are mitigated only by unprecedented colloquial and bureaucratic societal norms. These digital-age norms may facilitate and/or impede natural language origins and evolution in ways never before observed in human history, regardless of whether ideographies serve as the bases of languages.
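As one concrete reading of the “content authenticators” mentioned above, the minimal Python sketch below attaches and verifies a keyed hash over an ideographic message using only the standard library; the shared key, message, and helper names are hypothetical illustrations and do not reflect any standard discussed in the commentary.

```python
# Minimal content-authentication sketch using Python's standard library.
# The shared key, message, and helper names are hypothetical illustrations.
import hashlib
import hmac

SHARED_KEY = b"example-shared-secret"  # placeholder; real systems negotiate keys securely

def sign_message(message: str, key: bytes = SHARED_KEY) -> str:
    """Return a hex HMAC-SHA256 tag binding the message to the key holder."""
    return hmac.new(key, message.encode("utf-8"), hashlib.sha256).hexdigest()

def verify_message(message: str, tag: str, key: bytes = SHARED_KEY) -> bool:
    """Recompute the tag and compare in constant time to detect tampering."""
    expected = sign_message(message, key)
    return hmac.compare_digest(expected, tag)

if __name__ == "__main__":
    ideographic_message = "☉ 💧 🚶"  # a toy ideographic utterance
    tag = sign_message(ideographic_message)
    print(verify_message(ideographic_message, tag))        # True: authentic
    print(verify_message(ideographic_message + "⚠", tag))  # False: altered content
```

A keyed hash of this kind covers only message integrity and origin between holders of a shared key; public-key signatures or provenance metadata would be needed to address the broader identity, translation, and consent concerns raised above.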
Financial support
This research received no specific grant from any funding agency, commercial, or not-for-profit sectors.
Competing interest
None.