The investigation of sign languages has profoundly altered our understanding of the nature of language. Sign languages have hierarchical structure identical to that of spoken languages despite the fact that they are gestured and watched. Specifically, sign languages have a multilayered and interrelated linguistic organization: a lexicon with sublexical structure (i.e., phonology), grammatical and derivational morphology, syntax, and semantics (Klima & Bellugi, 1979; Liddell, 1980; Padden, 1988; Stokoe, Casterline, & Croneberg, 1965; Supalla & Newport, 1978). Using the architecture of sign language, psycholinguistic research has discovered that the human mind packs and unpacks meaning in sign language via its linguistic structure, in a fashion identical to how speakers produce and perceive meaning from spoken language (for a review, see Emmorey, 2002). Consistent with these linguistic and psychological discoveries, developmental research has found that the milestones of sign language acquisition by children exposed to it from birth are no different from those for spoken language (Mayberry & Squires, 2006; Meier, 1987; Petitto, 1991; Reilly, McIntire, & Bellugi, 1990). Given the linguistic, psycholinguistic, and developmental parallels between sign and spoken language, it comes as no surprise that the neurocortical underpinnings of sign language are highly similar to those of spoken language (MacSweeney et al., 2002; Petitto et al., 2000).
Sign language research has thus unearthed a key cognitive principle. Linguistic structure, its acquisition, processing, and neurocortical representation transcend sensory–motor modality. The structure and processing of language are properties of the human mind as it communicates with other minds. The properties of the sensory and motor systems clearly provide the raw materials from which language builds its architecture, but they do not determine its form, function, acquisition, or social and neurocortical use.
Sign language research can potentially yield another critical insight into the human mind, which is the role of early linguistic experience in language acquisition. How does early linguistic experience affect the trajectory of language acquisition over the life span? The present paper focuses on the role of early linguistic experience in second-language (L2) learning. We ask here whether and how age of acquisition (AoA) of the first language (L1) affects the outcome of L2 learning. We also ask whether sensory–motor modality is a relevant factor in the transfer of linguistic skills from the L1 to the L2. Specifically, we ask whether a visual L1 can support subsequent learning of a spoken L2. Before summarizing a series of three experiments conducted in two languages, it is necessary for us to consider the unique circumstances of sign language acquisition vis-à-vis spoken language acquisition.
THE BIOLOGICAL AND CULTURAL CONTEXTS OF SIGN LANGUAGE ACQUISITION
The age of onset of L1 exposure is quite different for babies born with and without normal hearing. Babies born with hearing are immersed in spoken language from birth and even before. Age of onset of L1 exposure is thus homogeneous for children who hear normally. We consequently know little about the effects of delayed L1 exposure on the outcome of L1 acquisition, except for the most aberrant and atypical of child-rearing circumstances, such as the case of Genie (Curtiss, 1977). By contrast, age of first exposure to an accessible L1 is highly variable for children born with severe or profound hearing impairments. Because their hearing loss exceeds the sound level of conversational speech, which remains distorted even when amplified, they are isolated from the language spoken to and around them. More than 90% of children born deaf have parents who hear normally and know no sign language, which means that these children are not exposed to sign language from birth (Schein & Delk, 1974).
A host of additional social and cultural factors serve to further delay first exposure to an accessible L1 for the deaf child. For example, the age when the child's hearing loss is detected varies widely, as does the age when the child and family receive special intervention. Once special services are available to the child and family, the goal is typically focused on audition and speech training, omitting exposure to sign language. Listening to and lipreading speech, even with powerful hearing aids or cochlear implants, provide insufficient linguistic detail for the child to acquire spoken language spontaneously as an L1 within the normal developmental time frame. Children who are deaf frequently must demonstrate that they are unable to acquire language via audition alone before they are exposed to sign language. The growing linguistic paradox in North America is that infants who hear normally are enthusiastically exposed to signs at an early age, whereas those who cannot hear appear to have lost the privilege because of beliefs that sign language impedes the deaf child's language acquisition (Padden & Humphries, 2005; Signing with your baby, 2006).
Biological and cultural factors can thus converge to create a period of prolonged linguistic isolation for many young children born deaf. This complex situation means that age of exposure to an accessible L1 is heterogeneous when the language is signed. We use the heterogeneous timing in the onset of L1 exposure to answer a series of questions about the effects of delayed L1 acquisition on the outcome of language acquisition in general. The questions we ask are, first, does the age when accessible L1 exposure begins make a difference? What are the effects of age of L1 exposure on the ability to comprehend language? If age of L1 exposure affects language comprehension, how does it affect comprehension of various syntactic structures? Second, do age of L1 exposure effects extend beyond the syntactic level of language to phonological processing and word identification? Third, and equally important, are the effects of delayed L1 exposure on linguistic outcome similar to, or different from, documented AoA effects on L2 learning? The most common means by which researchers have investigated AoA effects on language outcome is the L2 context, because L2 learning is highly common and is characterized by a variable age of onset (Birdsong, 1996). Here we take a novel and more finely tuned approach: we manipulate AoA for the L1 and ask how it affects the outcome of L2 learning. Fourth, and finally, we ask what effects delayed L1 exposure has on L2 literacy development.
AoA EFFECTS ON AMERICAN SIGN LANGUAGE (ASL) SYNTAX
Children acquiring sign language, like those acquiring spoken language, begin mastery of syntactic structures with simple sentences and eventually acquire more complex ones. Thus, we might expect AoA to interact with syntax such that the effects are more apparent for complex and later acquired structures than for simpler and earlier acquired structures. To test the hypothesis, we used a grammaticality judgment paradigm to assess syntactic proficiency in ASL (Boudreault & Mayberry, 2006). We tested a variety of ASL syntactic structures including (a) simple sentences using subject–verb–object (SVO) order with noninflecting verbs; (b) negative sentences with either a negative sign inserted in the verb phrase or a negative headshake produced simultaneously with the verb; (c) inflecting verbs where the beginning and ending loci of the verb are inflected for case, person, and number; (d) questions created with either the wh-signs WHO or WHAT given at the end of the sentences or with an accompanying question facial expression and without a wh-sign; (e) subject–subject relative clauses with the embedded clause marked via facial expression or sign marker; and (f) sentences with classifier predicates. Children exposed to ASL from birth initially use word order rather than verb inflection to indicate subject and object. They negate verb phrases with negative signs prior to using negative facial expressions. They use wh-movement at somewhat older ages and master the full ASL classifier system in the preschool years (for a review, see Mayberry & Squires, 2006).
Onset of exposure to ASL in this study was defined as the age when the participant was first in the company of peers who were deaf and used ASL, that is, an immersion situation. This is a memorable event for our participants and a reliable indicator of first ASL exposure. The adults we test are severely or profoundly deaf from birth and unable to navigate everyday life solely through speech, listening, and lipreading; they have used ASL as a primary language for 10 years or more and have normal nonverbal intelligence.
AoA exerted significant effects on ASL syntactic proficiency, as Figure 1 shows. As AoA increased, grammaticality judgment accuracy decreased (Boudreault & Mayberry, 2006). ASL syntactic performance also tended to decrease with increased syntactic complexity regardless of AoA. The exception was ASL classifier constructions, structures that are not yet well described by linguists. This pattern shows that complex structures are more difficult to process in sign languages, just as they are in spoken languages.
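As an illustration of how such an AoA trend can be summarized, the sketch below fits a simple linear regression of grammaticality judgment accuracy on AoA. All values (AoA groups, accuracy scores) are hypothetical placeholders, not the data from Boudreault and Mayberry (2006); the point is only the form of the analysis.

```python
# Minimal sketch (hypothetical data): quantifying an AoA effect on grammaticality
# judgment accuracy as a simple linear trend. The numbers are illustrative
# placeholders, not the published results.
import numpy as np

# Hypothetical age of ASL acquisition (years) and judgment accuracy (proportion correct)
aoa = np.array([0, 0, 5, 5, 9, 9, 13, 13])
accuracy = np.array([0.95, 0.93, 0.88, 0.86, 0.80, 0.78, 0.70, 0.68])

# Least-squares slope: change in proportion correct per year of delayed exposure
slope, intercept = np.polyfit(aoa, accuracy, deg=1)
r = np.corrcoef(aoa, accuracy)[0, 1]

print(f"slope = {slope:.3f} per year, intercept = {intercept:.2f}, r = {r:.2f}")
# A negative slope and a strong negative correlation would mirror the pattern in
# Figure 1: the later the AoA, the lower the grammaticality judgment accuracy.
```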
We thus observe that AoA affects the outcome of syntactic learning of ASL, just as AoA has been found to affect the syntactic outcome of L2 spoken language learning in some cases (Johnson & Newport, 1989). However, the AoA effect is inconsistent for L2 learning, and appears to vary as a function of the linguistic relationship of the L1 and the L2, in addition to the amount of education undertaken in the L2 (Birdsong & Molis, 2001; Flege, Yeni-Komshian, & Liu, 1991). Note that in the ASL grammaticality judgment study, participants first exposed to ASL at the oldest ages performed at near chance levels on the more complex wh-questions and relative clause structures. Because ASL was the primary language used by the participants for many years (and they were not fluent in spoken language), this finding suggests the AoA effects we found here are unique to delayed L1 learning.
By definition, L2 learning entails complete L1 acquisition begun in early childhood. Children born deaf often arrive at the ASL acquisition task with only a modicum of spoken language acquisition. In previous work, we found that AoA exerts greater effects on the outcome of ASL acquisition when it begins after little or no previous language acquisition than when ASL is learned as a second language (Mayberry, 1993, 1994). In the next study we turned to another language, English, to further probe this complex question with new learner groups, including learners who hear normally. The question we ask is whether delayed L1 acquisition affects the outcome of L2 learning.
DELAYED L1 EFFECTS ON L2 SYNTAX
Using grammaticality judgment, we assessed English syntactic proficiency while controlling AoA for both the L1 and the L2 (Mayberry & Lock, 2003). We accomplished this with four groups of participants, hearing or deaf, who had contrasting linguistic experience in early life with respect to L1 exposure. All the groups had similar AoA for the L2, which was English in all cases. Adults with normal hearing who had acquired English as a mother tongue served as the control group. The experimental comparison was among the three groups who did not acquire English from birth. One L2 group learned English as an L2 in school subsequent to acquiring a different spoken language in early life (Urdu, German, Spanish, or French). The other L2 group learned English as an L2 in school after having acquired a sign language, ASL, in early life. All participants in this group were severely or profoundly deaf, and their parents, who were deaf, signed to them from infancy. The critical group was also severely or profoundly deaf, but they had hearing parents who knew no sign language; they were first exposed to ASL in school. The main characteristic of this group with respect to L1 exposure is that they acquired little functional spoken language prior to ASL exposure.
The question is whether this delay in exposure to an accessible language during early life affects the outcome of subsequent language learning. The results of our previous experiments showed this to be the case for ASL. The question we ask with this experiment is whether a similar outcome characterizes English learning.
For the English experiment, we selected syntactic structures ranging from simple to complex, structures that typically developing children acquire early and late over the course of English acquisition. The structures we tested included (a) monoclause SVO sentences with present tense, (b) dative structures, (c) conjoined clauses, (d) full, nonreversible passive sentences, and (e) subject–subject relative clauses. The two L2 groups of English learners performed remarkably similarly on the grammaticality judgment task, as Figure 2 shows; indeed, their performance is nearly identical. The two groups' performance converges even though one group had normal hearing and the other was deaf from birth, and even though one group's native languages were spoken and the other group's native language was signed. Despite these radical differences in both the linguistic structure and the sensory–motor modality of their early language experience, the groups whose L1 exposure began in early infancy showed similar L2 syntactic proficiency in English. When both L1 and L2 AoA are held constant, no apparent differences in L2 syntactic processing arise (Mayberry & Lock, 2003).
Now we consider the third group. If AoA is the sole factor determining L2 outcome, then the performance of the group who began to learn English at the same age as the other groups should also be similar. Recall that the critical contrast between the third group and the other two L2 groups is that the former experienced a marked absence of accessible language exposure in early life. Speech input was insufficiently robust in their early lives to enable them to acquire spoken language either spontaneously or through daily instruction. Hence, they arrived at the ASL learning task in later childhood with only a modicum of L1 acquisition. We observe that their performance on the English grammaticality judgment task is significantly below that of the L2 learner groups, as Figure 2 shows. Their performance is at near-chance levels for the more complex wh-question and relative clause structures.
The unique language acquisition circumstances caused by congenital deafness thus show that AoA is not the only, or even the most important, factor in L2 syntactic outcome. These findings corroborate those of other L2 studies (Birdsong & Molis, 2001; Flege et al., 1991). Instead, this situation reveals that the timing of L1 exposure in early life affects the outcome of all subsequent language learning, both the L1 and the L2, independent of sensory–motor modality (Mayberry, Lock, & Kazmi, 2002).
These findings require that we narrow the definition of a critical, or sensitive, period for language to the L1 (Lenneberg, 1967; Penfield & Roberts, 1959). Moreover, these findings have far-reaching implications for L2 acquisition. L1 and L2 acquisition are clearly interdependent: severely delayed L1 acquisition significantly affects the outcome of L2 syntactic learning as well. Thus, the unique situation of childhood deafness demonstrates the strong link between L1 and L2 learning. We return to the theoretical question of how L1 and L2 acquisition may be linked below, after exploring AoA effects on another domain of language.
Linguistic structure is multilayered and hierarchical, so a remaining question about AoA effects on sign language, as either the L1 or the L2, is whether the effects are located only in syntactic processing. In the case of spoken L2 outcome, AoA has been shown to affect phonological knowledge and processing, in addition to its widely investigated syntactic effects. Could the same be true in the case of sign language?
AoA EFFECTS ON PHONOLOGICAL AND LEXICAL PROCESSING IN ASL
The breakthrough discovery about sign language structure was that signs have parts, or sublexical structure (Stokoe et al., 1965). In contrast to gestures, whether or not they accompany speech, every sign consists of a set of articulation units: handshape, orientation, place of articulation on the body, and movement. Signs are highly specified at the sublexical level in a way that gestures, being holistic in formation and meaning, are not (McNeill, 1992). Signs thus have the linguistic structure of spoken words.
Nearly all work demonstrating the psychological reality of ASL sublexical structure has come from production studies. For example, the phonological structure of signs explains the mistakes made by fluent adults in “slips of the hand,” where the sublexical features of adjacent signs influence one another (Klima & Bellugi, 1979). In short-term memory errors, partial remembering entails recall of most, but not all, of the phonological units of a target sign (Bellugi, Klima, & Siple, 1975). The sign errors made by young children over the course of acquisition show systematic sensitivity to, and mastery of, the ASL phonological system (Marentette & Mayberry, 2000). Finally, the paraphasic errors made by aphasic signers are faithful to the phonological structure of ASL signs (Corina, 2000). Unlike pictures or gestures, the form of signs is highly specified in the unitized, systematic fashion characteristic of phonological systems. Signers' production errors demonstrate that the mind uses these phonological units to express sign meaning. Thus, phonology is an important part of the signer's mental toolkit for language.
Turning now to AoA effects, we ask whether and how AoA affects the outcome of phonological knowledge and processing in ASL. It is clear that ASL has a complex and detailed phonological system. However, the phonological effects of AoA on L2 learning have been documented in the context of an acoustic and oral speech signal. Is it possible for AoA to affect phonological processing when the signal is visual and signed? In other words, is it easier to recognize the meanings of words in the L2 when they are watched rather than listened to?
Our previous findings suggest that the answer is no. Using a sentence memory paradigm to investigate AoA effects on ASL outcome, we found a marked tendency for phonological substitution errors to increase in tandem with increased AoA (Mayberry & Eichen, 1991). For example, one participant substituted the sign SLEEP for the sign AND in the target sentence, “I ate too much turkey and potato,” as Figure 3 shows. At first glance the error seems bizarre. The sign SLEEP is semantically unrelated to the sign AND, and even the lexical categories differ: the target sign is a closed-class word, a conjunction, whereas the error is a verb. On closer inspection, however, we see that there is significant phonological overlap between the two signs; they can be said to visually rhyme. The two signs are a phonological minimal pair in ASL because they share all phonological features except one, place of articulation.
These phonological errors are suggestive of difficulty processing the phonological structure of signs. This was borne out by a negative correlation between the commission of phonological errors on a narrative shadowing task and narrative comprehension (Mayberry & Fischer, 1989). This finding suggests that, although sign is visual, a possible locus of AoA effects on ASL processing may be at the phonological level.
Using a primed lexical decision paradigm, we asked whether delayed L1 acquisition differentially affects phonological processing in the service of lexical recognition in sign language (Mayberry & Witcher, 2006). In the lexical decision task, the participant decides with a button press whether the target is a real sign or not, that is, whether the sign is part of the ASL lexicon. Half the targets were phonologically possible nonsigns that we created by altering a single articulatory feature of extant ASL signs. The remaining, real-sign targets were either unrelated to the prime, and thus served as the control condition, or phonologically related to the prime, sharing three of four phonological parameters. These phonological primes visually rhymed with the following targets, as do the target and error signs shown in Figure 3.
We again investigated the contrast between delayed L1 acquisition and L2 acquisition, but here the target language was ASL. The critical contrast is between two test groups. One was delayed L1 ASL learners, who were first exposed to ASL in late childhood and whose spoken language acquisition prior to ASL exposure was minimal. The other was classic L2 learners, who were normally hearing, had acquired English as an L1 in early life, and had used ASL for more than 10 years. Two control groups were born severely or profoundly deaf: one acquired ASL in early life from their deaf parents (native L1 learners); the other acquired ASL in early childhood at school.
As AoA increased, the time needed to recognize signs also increased. More revealing is the finding that AoA affected how the groups engaged in phonological processing during lexical access. At intervals less than 330 ms, phonological overlap between the prime and target facilitated the sign recognition of both the native and early childhood learners of ASL. They recognized signs faster when the signs were phonologically related. This finding suggests spreading phonological activation in the mental lexicon during word recognition when ASL is acquired early in life (Mayberry & Witcher, 2006).
By contrast, the two groups who learned ASL after early childhood, both L1 and L2 learners, showed phonological inhibition at the brief interval. They recognized signs more slowly when the signs were phonologically related. This finding indicates that the ability to use phonological structure to identify signs is indeed linked to early childhood language exposure. Learners who begin to learn ASL after early childhood are clearly sensitive to phonological structure in ASL, but phonological structure hinders rather than helps their sign recognition.
We were also interested in observing AoA effects on sign recognition when the interval between the prime and target was lengthened to a full second, roughly three times the brief interval. In this condition, there is more time to consider the phonological structure of the prime before seeing the target. Again, we observe differential effects of delayed L1 acquisition on language processing, here at the level of phonological structure. Even with the threefold increase in the interval between prime and target, the delayed L1 learners of ASL continued to show inhibition related to phonological structure. Such prolonged phonological inhibition was not evident in the sign recognition of the L2 ASL learners, however. With the increased time between signs, the sign recognition of the L2 ASL learners now resembled that of the early L1 learners of ASL; the phonological inhibition the L2 learners showed at the brief interval had fully dissipated. Strikingly, the phonological inhibition shown by the late L1 learners persisted over an entire second (Mayberry & Witcher, 2006).
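To make the logic of the priming analysis concrete, the sketch below computes a phonological priming effect (unrelated minus related reaction time) for each group at each prime–target interval; positive values indicate facilitation and negative values indicate inhibition. All reaction times and group labels are invented for illustration and are not the results of Mayberry and Witcher (2006).

```python
# Minimal sketch (hypothetical data): summarizing phonological priming effects in a
# primed lexical decision design like the one described above. Reaction times (ms)
# are invented placeholders, not the published means.

# mean RT to real-sign targets by group, prime-target interval (ms), and prime type
rt = {
    "native_L1":  {330: {"related": 640, "unrelated": 675}, 1000: {"related": 650, "unrelated": 680}},
    "early_L1":   {330: {"related": 655, "unrelated": 690}, 1000: {"related": 660, "unrelated": 690}},
    "late_L2":    {330: {"related": 720, "unrelated": 700}, 1000: {"related": 700, "unrelated": 705}},
    "delayed_L1": {330: {"related": 780, "unrelated": 740}, 1000: {"related": 790, "unrelated": 750}},
}

for group, by_interval in rt.items():
    for interval, cond in by_interval.items():
        # positive effect = facilitation (related primes speed recognition);
        # negative effect = inhibition (related primes slow recognition)
        effect = cond["unrelated"] - cond["related"]
        label = "facilitation" if effect > 0 else "inhibition"
        print(f"{group:10s}  interval {interval:4d} ms  priming effect {effect:+4d} ms  ({label})")
```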
The lexical processing results show, first, that AoA effects are found at the single word level in sign language. AoA effects on language processing are not restricted to syntax but occur at the level of the single sign as well. AoA affects phonological processing in sign language, just as it affects phonological processing in spoken language. Just as speakers must recognize and unpack the phonological structure of spoken words to understand them, signers must also recognize and unpack the phonological structure of signed words to understand them. Clearly, AoA has a significant impact on the knowledge and processing of the phonological structure of language. This is true regardless of whether the language is visual and signed in the case of sign language, or auditory and spoken in the case of spoken language.
The findings further demonstrate a critical difference in AoA effects for the L1 compared to the L2 at the lexical level. Signers exposed to ASL in early childhood show evidence that phonology is an organizing structure in their mental lexicon for ASL. By contrast, signers exposed to ASL after early childhood show evidence that phonological structure is not the means by which their mental lexicon is organized; the phonological structure of signs is a stumbling block in their recognition of sign meaning. The key difference between the L2 ASL learners and the delayed L1 learners of ASL is that the former are able to circumvent the problem in some way whereas the latter cannot. Early L1 exposure appears to tune the visual system for phonology, just as it tunes the auditory system for the same linguistic structure (Werker & Tees, 1984).
To summarize our results thus far, we observe multiple, differential effects of AoA on L2 learning in contrast to L1 acquisition. Differential AoA effects on the L1 and the L2 are found at the syntactic, phonological, and lexical levels of linguistic structure. L1 exposure throughout early life appears to bolster subsequent L2 outcome. The question remains as to whether this phenomenon extends to reading development. Simply put, does early and rich L1 exposure facilitate later L2 reading development?
DELAYED L1 EFFECTS ON L2 READING DEVELOPMENT
A common requirement of bilingualism is literacy in the L2. Unlike infant language acquisition, reading the L2 can serve as a major source of input for the L2 learner. A major difference between online language comprehension and reading comprehension is that the former entails a dynamic and rapidly changing signal, whereas reading entails fixed visual symbols. The language learner has more control over reading words and sentences than listening to, or watching, them in spoken or signed language. Might we thus observe AoA effects on L1 and L2 development to be reduced in the reading task? At the same time, written languages represent spoken languages. Some theories of reading development posit that the reader must be able to speak the language represented in the written text in order to comprehend it. These theories would predict that readers who are deaf and do not speak would have difficulty with L2 literacy. Note, however, that this prediction is at odds with our findings to date. Indeed, our findings suggest an alternative hypothesis. If L1 acquisition serves to scaffold L2 learning, then early L1 acquisition should scaffold L2 reading also, even when the L1 is a visual language.
To determine if this is the case, we grouped ASL signers according to their performance on an ASL grammaticality judgment task as being of either high or low proficiency (Boudreault & Mayberry, 2006). High proficiency suggests near-native control of ASL grammar. Low proficiency suggests poor control of ASL grammar and is indicative of delayed L1 acquisition. We administered a set of reading tests to the participants to obtain grade-equivalent reading levels. Note that a correspondence between ASL grammatical skill and English reading ability is an example of visual bilingualism.
The results were striking. Grouping the deaf signers by ASL grammatical skill produced a bimodal distribution of English reading achievement with no overlap between the groups. Specifically, the mean English reading grade achievement of the group with high ASL grammatical skill was at the post high school level. By contrast, the mean English reading achievement of the group with low ASL grammatical skill was between Grades 3 and 4 (Chamberlain, 2002; Chamberlain & Mayberry, 2007).
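A minimal sketch of how such a grouping analysis might look is given below: it compares grade-equivalent reading scores for a high and a low ASL-proficiency group and checks whether the two distributions overlap. The scores are invented placeholders, not the data from Chamberlain and Mayberry (2007).

```python
# Minimal sketch (hypothetical data): checking for the kind of non-overlapping,
# bimodal split in reading achievement described above. Grade-equivalent scores
# are invented placeholders, not the published results.
high_asl = [11.5, 12.0, 12.8, 13.0, 12.4]   # assumed grade equivalents, high ASL proficiency
low_asl = [3.2, 3.5, 3.8, 4.0, 3.4]         # assumed grade equivalents, low ASL proficiency

mean_high = sum(high_asl) / len(high_asl)
mean_low = sum(low_asl) / len(low_asl)
no_overlap = min(high_asl) > max(low_asl)   # True when the two score ranges do not overlap

print(f"high-ASL mean reading grade: {mean_high:.1f}")
print(f"low-ASL mean reading grade:  {mean_low:.1f}")
print(f"distributions overlap: {not no_overlap}")
```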
Thus, we observe that early L1 acquisition can lead to successful L2 learning, even when the early L1 is sign language and the subsequent L2 is reading a spoken language. This is a clear example of visual bilingualism. Strong L1 skills in a visual language can scaffold strong L2 skills in a visual representation of a spoken language.
INSIGHTS INTO THE NATURE OF L2 LEARNING
We return now to the questions with which we began this experimental odyssey. How does early linguistic experience affect the trajectory of language acquisition over the lifespan? How does AoA of the L1 affect the outcome of the L2? Is sensory–motor modality a relevant factor in the transfer of linguistic skills from the L1 to the L2? Can a visual L1 support subsequent learning of a spoken L2?
First, we see that AoA exerts a strong and lifelong effect on the outcome of L1 acquisition. These effects are apparent at the syntactic, lexical, and phonological levels of language. Second, we see that AoA exerts a small effect on the outcome of the L2, all factors being equal, but only when the L2 is learned subsequent to L1 acquisition in early life. Third, these linguistic relations transcend sensory–motor modality. Early acquisition of sign language as the L1 supports later learning of a spoken language as the L2 (in its written form). Likewise, early acquisition of a spoken language as the L1 supports later learning of a sign language as an L2. Thus, AoA is not a determining factor in L2 outcome. However, AoA is a critical factor in L1 outcome. Delayed exposure to an accessible L1 in early life leads to incomplete acquisition of all subsequently learned languages. The deleterious effects of delayed L1 exposure are apparent at all levels of linguistic structure, namely, syntax, phonology, and the lexicon. Early language acquisition not only bestows facility with the linguistic structure of the L1, but it also bestows the ability to learn linguistic structure throughout life.
In sum, these findings about the nature of AoA effects on L1 and L2 development are consistent with the major contribution of sign language research to cognitive science: language structure and processing are products of the human mind and not of the sensory–motor modality through which language is sent and received. Our experimental findings build on this principle by showing that the timing of L1 acquisition, independent of sensory–motor modality, is a critical factor in creating the lifelong ability to learn language.
The research reported here was supported by grants from the Natural Sciences and Engineering Research Council of Canada (Grant 171239) and the Social Sciences and Humanities Research Council of Canada (Grant 410-2004-1775). Preparation of this paper was supported by funds from the Division of Social Sciences of the University of California, San Diego. We thank the deaf communities of Montreal, Ottawa, Edmonton, and Halifax for invaluable assistance in the work.