
A parametric approach to the acquisition of syntax

Published online by Cambridge University Press:  19 July 2021

William SNYDER*
Affiliation: University of Connecticut

*Address for correspondence: Department of Linguistics, University of Connecticut, 365 Fairfield Way, Storrs, CT 06269, USA. Email: [email protected]

Abstract

Three case studies, using longitudinal records of children's spontaneous speech, illustrate what happens when a child's syntax changes. The first, examining acquisition of English verb-particle constructions, shows a near-total absence of commission errors. The second, examining acquisition of prepositional questions in English or Spanish, shows that children (i) may go as long as 9 months producing both direct-object questions and declaratives with prepositional phrases, before first attempting a prepositional question; and (ii) at some point, abruptly begin producing prepositional questions that are correctly formed for the target language. The third case study shows that in a child acquiring English, the onset of verb-particle constructions occurs almost exactly when that child begins using novel noun-noun compounds. After a discussion of the implications for the nature of syntactic knowledge, and for the mechanisms by which it is acquired, two examples are presented of as-yet untested acquisitional predictions of parametric proposals in the syntax literature.

Type
Special Issue Article
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives licence (http://creativecommons.org/licenses/by-nc-nd/4.0), which permits non-commercial re-use, distribution, and reproduction in any medium, provided that no alterations are made and the original article is properly cited. The written permission of Cambridge University Press must be obtained prior to any commercial use and/or adaptation of the article.
Copyright
Copyright © The Author(s), 2021. Published by Cambridge University Press

Introduction

Here I will present some of the reasons to take a parametric approach to children's acquisition of syntax. To begin, I will review three case studies that offer clear insights into what happens when a child's syntax undergoes a change. The key characteristics will be that changes are decisive, additive, and interconnected. This combination of characteristics, in my view, calls for an explanation within some version of a Principles-and-Parameters (P&P) framework. In fact, I will argue that a fully satisfactory explanation will also require a suitable parametric format, and a suitable approach to parameter setting. In this context I will discuss what counts as a “parameter” from the perspective of language acquisition, and indicate the types of acquisitional predictions that typically follow from any parametric hypothesis. I will then provide two concrete examples of testable (but as-yet untested) predictions of parametric hypotheses in the literature.

Syntactic change in childhood

In this section I briefly summarize three case studies, drawn from my own work, that offer some insight into what happens when a child's grammar changes.

Changes are decisive: verb-particle combinations in English

The first case study is drawn from Chapter 4 of Snyder (2007). There, much as here, the project was to see exactly what happens in a child's spontaneous speech, when the child's syntax undergoes a change. The test domain was the English verb-particle construction, which is illustrated in (1-2) using the particle up.

  1. (1)

    1. a. Mike finished his dinner up.

    2. b. Mike finished up his dinner.

    3. c. Mike finished it up.

    4. d. *Mike finished up it.

  2. (2)

    1. a. Sue has lifted the box up.

    2. b. Sue has lifted up the box.

    3. c. *Sue has the box up + lifted.

    4. d. Susi hat die Kiste auf + gehoben. [German]

      Susi has the box up + lifted

      ‘Susi has lifted the box up.’

A great many of the world's languages, when expressing the meaning of (1) or (2), disallow the presence of any direct counterpart to the English particle up. This can be seen clearly in translations of (2a), where the particle is a morpheme serving to emphasize upward motion, and is separated from its associated verb by the direct object (the box). For example, in French, one might supplement the verb lever ‘to lift’ with the affix re-, as in relever ‘to lift up’, and thereby create a stronger sense of upward motion; yet there is no way to insert a direct object between the verb and the affix (e.g., *lever la boîte re, lit. ‘lift the box RE’). Indeed, no matter how one goes about constructing it, and no matter what morpheme is used to emphasize an upward direction of motion, French disallows anything directly comparable to the verb-particle construction in (2a). Given this variation across languages, the child must rely on the input to determine whether verb-particle constructions are possible in her target language.

Moreover for a child acquiring English, even after she decides that verb-particle constructions are possible, there will be opportunities for error when she is working out their precise syntax; and mistakes in the syntax will lead to distinctive error-patterns in her speech. For example, if she is using simple analogical reasoning over the forms in (1a-c), then her grammar will allow the erroneous form in (1d). Alternatively, if she strictly limits her syntactic hypotheses to a smaller set of options provided by Universal Grammar (UG), she might – at least in principle – be protected from errors like (1d), but there should still be the possibility of errors like (2c), which is surely UG-compatible, because (2d) is possible in German. (In fact, (2d) is one of several verb-particle configurations that are absent from present-day English, but attested in languages that are closely related.)

In sum, the domain of verb-particle constructions provides ample opportunity for a child to go wrong; and at each decision point, a wrong turn will give her a grammar that is noticeably different from the target, one that yields not only errors of omission (which could also have non-grammatical causes) but errors of commission, where morphemes are assembled in ways that simply cannot happen in the target language.

The final, crucial ingredient for this case study was that English verb-particle constructions are used quite frequently, not only by adults in their child-directed speech, but also by children once they acquire them. This means that a high-quality longitudinal corpus of spontaneous-speech samples, from a child acquiring English, is a rich source of evidence about what exactly happens as the child masters this domain of English syntax.

Snyder (2007) performed a fine-grained analysis of early verb-particle combinations in the spontaneous speech of the child Sarah (Brown, 1973; MacWhinney, 2000). Among the CHILDES corpora for English, Sarah's had the smallest average gap between recordings (7.4 days), and included a substantial period when she was not yet combining verbs with particles. The study began with an a-priori enumeration of the logically possible error types, and then developed a search strategy that would detect the maximum possible number of those errors, if they ever occurred. The main findings are illustrated in Figures 1 and 2, which show intransitive verbs and transitive verbs separately, and which provide the frequencies (per thousand child utterances) of both adult-like uses and errors of commission, in the combined recordings from each one-month interval.

Figure 1. Sarah's production of intransitive verb-particle, by age.

Figure 2. Sarah's production of transitive verb-DP-particle, by age.

The change in Sarah's syntax was decisive. First, we see an abrupt transition from a period with almost no adult-like uses of a verb-particle combination, and no attempts that resulted in an error of commission (i.e., from 27 through 29 months), to a period with adult-like uses occurring on a regular basis (from 30 months onward). The FRU (‘First of Regular Uses’; Snyder, 2007) for the intransitive V-Particle construction occurred at the age of 30 months: that was the point when Sarah first produced an example that was followed quickly (within a month) by additional examples, involving different lexical items. As shown by Stromswold (1996), FRU correlates closely with other measures of acquisition used in spontaneous-speech studies. It has the advantage of being an exceptionally sensitive measure, and thus crediting the child with knowledge as soon as credit is warranted.
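
To make the FRU criterion concrete, the following minimal sketch (in Python; not part of the original study) scans a chronologically ordered list of attested uses and returns the age of the first use that is followed, within a month, by a further use built on a different lexical item. The function name, the one-month window parameter, and the toy data are illustrative assumptions, not the actual search procedure applied to Sarah's corpus.

```python
from typing import List, Optional, Tuple

def find_fru(uses: List[Tuple[float, str]], window: float = 1.0) -> Optional[float]:
    """Return the age (in months) of the First of Regular Uses (FRU), if any.

    `uses` is a chronologically sorted list of (age_in_months, lexeme) pairs,
    one pair per attested use of the construction.  A use counts as the FRU
    when it is followed, within `window` months, by at least one further use
    built on a different lexical item (a simplified version of the criterion).
    """
    for i, (age, lexeme) in enumerate(uses):
        for later_age, later_lexeme in uses[i + 1:]:
            if later_age - age > window:
                break                      # past the one-month window
            if later_lexeme != lexeme:
                return age                 # regular use: a different lexical item soon after
    return None

# Toy data: one isolated early use, then a cluster of uses with different verbs.
print(find_fru([(27.2, "fall"), (30.0, "pick"), (30.3, "put"), (30.6, "take")]))  # 30.0
```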

The FRU for the transitive (V-DP-Particle) structure likewise occurred at 30 months. In Figures 1 and 2 we can see that these FRUs were both followed by a few months of relatively stable production, and then (around 33 months) by a sharp increase in frequency. Thus, in both cases the FRU was the first harbinger of an “explosion” coming a few months later.

Second, during the period when verb-particle combinations were first entering Sarah's syntactic repertoire, she produced hardly any errors of commission. This is especially clear because the search method was explicitly designed to err on the side of over-counting such errors: anything that could possibly be a commission error was counted as such. During the full age range covered in Figures 1 and 2 (from 27 to 34 months of age), Sarah produced a total of 10,233 recorded utterances, including 102 utterances in which she attempted a verb-particle construction. Of these 102 attempts, 32 contained some form of error. Yet, at least 29 of the 32 errors were omission errors, and may therefore have resulted from performance factors, for example, rather than from the temporary adoption of an incorrect grammar.

Thus, much more telling than omission errors are commission errors, and in total there were exactly three candidates:

  1. (3)

    1. a. Age 30 months: took my eye on .

    2. b. Age 32 months: put back hm .

    3. c. Age 34 months: I xx go downed .

In (3a), Sarah used a specific verb-particle combination (took + on) in a context where it would not ordinarily be used by an adult. Nonetheless, aside from this lexical choice, there is nothing wrong with the utterance. In (3b), the material transcribed as hm could perhaps have been an attempt at the personal pronoun him (and if so this was an error of commission, because an adult speaker would have placed the pronoun before the particle). Yet, the transcriber may well have been correct in treating hm as some type of filler syllable, rather than a pronoun, in which case the child simply made an error of omission (by leaving out the direct object). The most clear-cut error of commission is (3c), where Sarah used the intransitive verb go with the particle down, and apparently placed the past-tense inflection marker -ed on the particle, rather than using a past-tense form of the verb. Hence there were at most three genuine commission errors involving particles in this sample of Sarah's speech, and quite possibly there were fewer than three.

Moreover, each of the error-types represented in (3) was (at most) a one-time occurrence. Given the difficulties inherent in transcription of child speech, it would be unwise to take any single utterance as meaningful unless it is part of a larger pattern, involving multiple, distinct utterances recorded during the same time period. Hence, there is precious little to suggest that Sarah's spontaneous production of verb-particle combinations was ever guided by an incorrect syntax.

The case study thus illustrates a property that I have found to hold quite generally – namely, that changes are decisive: the child makes an abrupt transition from essentially no uses, to adult-like uses, and produces few if any commission errors along the way. The sharpness of the transition may be more or less evident, depending on how frequently the diagnostic structure is used once it is acquired. After the FRU, the frequency sometimes starts out low, and remains low for a period of two or three months (as it does in Figures 1–2), but eventually increases to the same frequency found when the child is older – or perhaps to an even higher frequency, followed by a decrease to the stable-state frequency. (The latter pattern is what Brown & Hanlon, 1970, p. 33, described as a “brief infatuation.”) Once the structure enters the child's repertoire, it remains part of the repertoire; it does not, for example, disappear and reappear intermittently.

During the transition from almost never producing a given structure to using it routinely, the scarcity of commission errors in the child's spontaneous speech indicates the child does NOT go through a period of trying out a range of possible grammars, and then gradually homing in on the correct option (as would be expected, for example, under the Variational-Learning model of Yang, 2002). The types of commission errors we would expect in that case are simply absent. Instead we see what Maratsos (1998) calls ‘underground’ acquisition: the child works underground, and identifies the correct grammatical option for her target language, before she ventures to deploy that option (or any other) in her spontaneous speech. In my own work (e.g., Snyder, 2007) I refer to this pattern as ‘Grammatical Conservatism’: until the child has identified the correct grammatical basis for a new syntactic structure, she either avoids that structure entirely (by using a circumlocution, or simply changing the subject), or she omits the portions of the structure that she is uncertain about. Thus in children's spontaneous speech, omission errors are common, but commission errors are rare.

An important (and remarkable) implication of the child's decisiveness is that syntax acquisition must be deterministic, in the sense of Berwick (1985): instead of trying out potentially incorrect options and then backtracking from them when necessary, the child waits until she knows the correct option, and then she commits to it; the decision is irrevocable. Crucially, anything short of deterministic learning would predict, contrary to fact, the presence of numerous commission errors in the child's spontaneous speech. In the next section we will see further evidence that children acquire syntax deterministically.

Changes are additive: prepositional questions in English and Spanish

The second case study, drawn from Sugisaki and Snyder (2003, 2006), concerns prepositional questions (‘P-questions’) in English and Spanish. In both languages, formation of a wh-question normally involves movement of a wh-expression to the specifier position of an interrogative C. In English, if the wh-expression is the complement of a P, then that P is normally “stranded” (left behind) when wh-movement occurs, as shown in (4a). In more formal registers, and especially in written English, it is sometimes also possible to “pied-pipe” (carry along) the P, as in (4b), but this sounds a bit odd in conversational English (as indicated by the ‘??’).

  1. (4)

    1. a. What is she playing with?

    2. b. ??With what is she playing?

In Spanish, however, when a wh-expression is the complement of a P, pied-piping of the P is obligatory, as illustrated in (5).

  1. (5)

    1. a. ¿Con qué está jugando? [Spanish]

      with what is playing

      ‘What is she playing with?’

    2. b. *¿Qué está jugando con?

In Sugisaki and Snyder, we chose to examine P-questions because we wanted to know more about the relationship of spontaneous speech to syntactic knowledge, in children who had not yet identified the relevant syntactic properties of their target language. To address this issue, prior research had sometimes focused on errors of omission, like subject omissions (i.e., ‘null subjects’) in the speech of children acquiring English. Yet, an omission error is difficult to interpret. Did it result from a non-target grammar? Or from a limitation on working memory? Or perhaps from a gap in the child's lexicon?

In contrast, if a child makes an error of commission – putting words together in a way that is disallowed in the target language (and that could not have resulted from simple omission) – there is much greater clarity. When a child is engaged in ordinary spontaneous speech, neither a lack of sufficient processing resources, nor a gap in the lexicon, is likely to make her put overt lexical items in the wrong places. But if she produces a P-question under the guidance of an incorrect grammar, the result is almost certain to be lexical items in the wrong places.

Thus, the point of looking at P-questions was to find out if children ever use potentially incorrect syntactic options to guide the assembly of their spontaneous utterances. In particular, if a child knows about direct-object wh-questions in English/Spanish (i.e., if she has lexical knowledge of at least one wh-pronoun, and she knows about wh-movement), but she does not yet fully understand how the target language forms P-questions, what will she do? Will she employ a “default option” for the syntax of P-questions, such as pied-piping (which, across languages, is far more common than P-stranding)? Will she alternate among the various cross-linguistic options? Or will she remain silent? To find out we selected longitudinal corpora of spontaneous-speech samples from 10 children acquiring English, and from four children acquiring Spanish.

What we discovered was that the children in our study never made a single error of commission. When children acquiring English first began producing P-questions, they used P-stranding, just like adults; and when children acquiring Spanish first began producing P-questions, they used pied-piping, just like adults. This is the pattern expected of a deterministic learner.

A possible objection, however, is the following. Perhaps, when the children did not yet know how to form a P-question in their target language, they simply lacked knowledge of wh-words and wh-movement altogether. If so, for the period when they did not yet know how to form a P-question, perhaps they could not have produced a commission error either.

Did the FRUs for direct-object (DO) questions and P-questions actually occur close together, as expected under this scenario? To assess this possibility, we first located each child's FRU for DO-questions. Indeed, for one of the children acquiring English, the FRUs for DO-questions and P-questions occurred in the same transcript. But for the other nine children acquiring English, and for all four of the children acquiring Spanish, the FRU for DO-questions occurred earlier than the FRU for P-questions.

Next we checked to see if children, by the point of their FRU for DO-questions, were already producing the declarative counterparts to P-questions – that is, declarative sentences in which the VP contained a PP, as in She's playing with a truck. (This would be the declarative counterpart to the P-question What is she playing with?) Indeed, for every child (in both languages), such utterances were present even before the FRU for DO-questions. Hence, at the point when the children began using DO-questions, they had all the conceptual and lexical resources necessary to ask a P-question.

Finally, for each child whose corpus permitted it, we performed a statistical assessment of the gap between the FRU for DO-questions, and the FRU for P-questions. Specifically, we performed a binomial test, based on the relative frequency of DO-questions and P-questions in speech samples that were recorded slightly later, after the child's FRU for P-questions had occurred. (In three cases the corpus ended before the child had begun using P-questions; those corpora were set aside.) For example, the child Abe (Kuczaj, 1977; MacWhinney, 2000) produced 11 DO-questions in his corpus prior to his FRU for P-questions. In transcripts slightly later than the FRU for P-questions, when Abe asked either a DO-question or a P-question, 58.3% of the time it was a DO-question. We took as our null hypothesis (i) that P-questions were grammatically available to Abe as early as direct-object questions, and (ii) that his general likelihood of wanting to produce a P-question (versus a DO-question) was constant over the period examined. Under this null hypothesis, the probability that Abe would, simply by chance, produce a run of 11 or more DO-questions before the first P-question, is given by p = (.583)^11 = .00264**. Note that a Bonferroni correction might be warranted here, because a total of 14 children's corpora were examined in our study. Yet, even with this correction, the gap in Abe's corpus is significant: p = 14(.583)^11 = .037*.
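
The arithmetic of this test is easy to reproduce. The short Python sketch below (illustrative only, with the figures for Abe hard-coded) computes the uncorrected and Bonferroni-corrected probabilities reported above.

```python
# Gap test for Abe (sketch): under the null hypothesis that P-questions were
# available all along, and that the relative frequency of DO- vs. P-questions
# was constant, the chance of 11 straight DO-questions before the first
# P-question is simply the DO-share raised to the length of the run.
p_do = 0.583        # post-FRU share of DO-questions, i.e. DO / (DO + P)
run_length = 11     # DO-questions recorded before the first P-question
n_children = 14     # corpora examined, for a Bonferroni correction

p_uncorrected = p_do ** run_length
p_corrected = min(1.0, n_children * p_uncorrected)

print(f"uncorrected p = {p_uncorrected:.5f}")  # about .0026
print(f"corrected p   = {p_corrected:.3f}")    # about .037
```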

In this manner we determined that four of the children (all of them acquiring English) had a gap that was statistically significant (Bonferroni-corrected p < .05). For these four children, the gaps ranged from 2.0 months to 9.0 months (mean: 5.2 months). In other words, starting from a point when the child had both DO-questions and the declarative counterparts to P-questions, these children went for a period lasting up to 9 months without attempting a P-question (or at least, without attempting it often enough for a single example to be captured in the child's recorded speech). This was despite the fact that the children were producing numerous DO-questions (range: 11 to 48; mean: 29.8) in the same recordings.

This case study provides especially strong support for the view that the child's acquisition of syntax is a deterministic process. The first case study demonstrated that a child (Sarah) showed no sign of making any of the numerous possible “wrong turns” as she acquired verb-particle constructions, but one might object that she also acquired verb-particle constructions relatively early (by the age of 30 months). Perhaps at ages prior to 30 months, the lack of verb-particle combinations would not have been a serious limitation on her expressive power anyway. If so, perhaps there was little impetus for her even to produce this type of utterance, prior to mastering its syntax.

Objections of this type do not apply to the case study in this section. In the four children who exhibited a statistically significant gap, there was a lengthy period (lasting up to nine months) when the child was producing the declarative counterparts to both DO-questions and P-questions, and producing a substantial number of DO-questions, but never producing P-questions of any sort. Consider the case of Abe (who, among the four children with a significant gap, had one of the shortest, lasting just over two months). As noted above, Abe produced 11 DO-questions during the gap. After the gap had ended, for every 11 uses of a DO-question, he was producing roughly 8 uses of a P-question (i.e., on occasions when either a DO-question or a P-question was produced, 41.7% of the time it was a P-question; .417 ≈ .421 = 8/(11+8)). Hence, we can estimate that during the portions of the gap when Abe's speech was being recorded, there were approximately eight occasions when he would have produced a P-question if he had been able to. Instead, he produced neither an adult-like P-question, nor an erroneous attempt at a P-question, on any of those occasions.
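
The estimate of “approximately eight occasions” follows from the same constant-relative-frequency assumption; a brief sketch (illustrative only, with Abe's figures hard-coded):

```python
do_during_gap = 11          # DO-questions Abe produced during the gap
p_share_after_gap = 0.417   # post-gap share of P-questions among (DO + P) questions

# Expected P-questions during the gap, if the post-gap ratio had held earlier:
expected_p = do_during_gap * p_share_after_gap / (1 - p_share_after_gap)
print(round(expected_p, 1))  # about 7.9, i.e. roughly eight missed occasions
```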

The implication, at least for these four children, is that the recording sessions during the gap must have included numerous situations of the same types that would later lead them to ask a P-question (i.e., once they knew how to construct one correctly). During the gap, however, the children refrained from even attempting it. Thus we now have even stronger evidence that, when it comes to syntax, the child is a genuinely deterministic learner. Given the lack of commission errors during these sometimes enormous gaps (up to nine months), there must have been no possibility of the child simply making a temporary, provisional choice (so as to allow one form or another of P-question), and then backtracking from it later, if necessary. As far as I can tell, anything short of deterministic learning would have yielded commission errors that are simply absent from these children's spontaneous-speech data.

Finally, the evidence in this section provides an especially clear illustration that syntactic changes are ‘additive’. As language acquisition proceeds, a child gains an ever-larger repertoire of syntactic structures. At each point along the way, the child has a grammar permitting a proper subset of the structures available in the target language. The child's structures are all compatible with the target grammar, but the adult's structures are not all available to the child. As a result there will exist periods of ineffability, as was seen in this case study: four of the children, for a period of up to nine months, had a grammar allowing them to ask DO-questions, but providing no way at all for them to ask a P-question. Eventually the children expanded their syntactic repertoire, by adding the (correct) structure for P-questions.

Changes are interconnected: particles and compound-formation in English

The third and final case study, based on findings reported in Snyder (1995, et seq.), once again concerns verb-particle constructions, but this time in relation to compound-word formation. Research in comparative syntax (notably, Neeleman & Weerman's (1993) work on Dutch, and LeRoux's (1988) work on Afrikaans) suggests that these two phenomena – particles and compounding – are somehow connected. In Snyder (1995), I therefore conjectured that, cross-linguistically, the availability of verb-particle constructions would pattern with the availability of some particular type of compound word-formation. When I conducted a small-scale cross-linguistic survey, this conjecture received support: in every language that I checked, if separable verb-particle constructions (along the lines of English lift the box up) were acceptable to my consultants, then a particular type of compound-word formation was acceptable too.

Specifically, all the languages with a separable verb-particle construction shared the property of allowing speakers to invent new ‘bare-stem’ compounds whenever needed, in the same way that a speaker might put words together to create a new sentence; I refer to this as CREATIVE compounding (or more precisely, creative endocentric bare-stem compounding). For example, an English-speaker is free to create a compound like college lab space committee. Listeners might not have heard this particular compound before, but, as long as they have the relevant background knowledge, they will automatically interpret it to mean something along the lines of ‘committee making decisions about the allocation of space for laboratories at a college’.

In contrast, many of the world's languages permit nothing of this sort. In French, for example, any comparable string of nouns – regardless of word order – is incomprehensible (e.g., *faculté laboratoire espace comité, literally ‘college lab space committee’; *comité espace laboratoire faculté, literally ‘committee space lab college’). The expression [N college [N [N lab space] committee]] is an example of RECURSIVE compounding: lab and space form a compound, which then serves as a modifier in the larger compound [N [N lab space] committee], which in turn serves as the head of a still larger compound. On the basis of a cross-linguistic survey, Namiki (1994) has proposed the generalization that a language allows recursive compounding if and only if it allows creative compounding.

Creative compounding is distinct from the simple existence of lexicalized compounds. For example, much as English has a lexicalized compound frog man, with the meaning of ‘underwater diver’, French has a lexicalized compound homme grenouille (literally ‘man frog’), meaning ‘underwater diver’. In fact, French has a considerable number of these lexicalized compounds, but there is a major difference from English. Given a suitable context, the English compound frog man can take on an unlimited number of additional meanings (e.g., ‘man who conducts scientific research on frogs’, ‘man who collects stone carvings of frogs’, ‘man who is seeking to purchase a frog-shaped coffin’); the French compound homme grenouille can never receive any of these interpretations, ever.

In sum, my cross-linguistic survey indicated that creative compounding of the type found in English is restricted to a limited number of languages. Any language that disallowed creative compounding also disallowed separable verb-particle constructions. This generalization is formulated as a one-way implication, because my cross-linguistic diagnostics for verb-particle combinations required separability of the verb and the particle: for example, by an intervening direct object. In two of the languages in the sample, Japanese and ASL, there was nothing available that satisfied this diagnostic. Aside from those two cases, however, the languages allowing creative compounding also had a separable verb-particle construction.

In a series of works I have tested a closely related prediction for acquisition: among children acquiring English, any child who knows that English permits separable verb-particle combinations should also know that English permits creative compounding (i.e., creative endocentric bare-stem compounding). In previous work (Snyder, 2007, chapter 5), I selected 19 high-quality longitudinal CHILDES corpora for children acquiring English. For each child, my assistants and I determined the FRU for the V-DP-Particle construction (i.e., with the verb and particle separated by a direct object), and the FRU for creative (i.e., novel) bare-stem endocentric compounding. To count as novel (i.e., the child's own creation), a compound could not be a lexicalized form (e.g., toothbrush, apple juice), and it could not be a form that had been used (by any speaker) at any point earlier in the corpus. One potential difficulty was that novel compounds are generally used (by adults) much less frequently than verb-particle combinations; but fortunately, when children first acquired compounding they treated it as if it were a new toy. Thus for a time they produced novel compounds fairly frequently, and locating an FRU was not difficult.

The results are shown graphically in Figure 3. There was an unusually strong linear relationship between the ages of onset for compounding and for particles (r = 0.937, t(17) = 11.1, p < .0001). The coefficient of determination, r^2 = 0.880, indicates that fully 88% of the observed variability can be explained by the best-fit linear model, which closely approximates an identity function. Thus, despite considerable variability across children, each child began using novel compounds at more or less the same point in time as s/he began using the V-DP-Particle construction.
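
For readers who wish to check the statistics, the reported t-value and coefficient of determination follow directly from r and the number of children; the sketch below (not from the original study) shows the standard computation. Small discrepancies (e.g., .878 versus the reported .880) simply reflect rounding of r.

```python
from math import sqrt

r, n = 0.937, 19            # reported correlation and number of children (19 corpora)
df = n - 2                  # degrees of freedom for a Pearson correlation

t = r * sqrt(df) / sqrt(1 - r ** 2)   # t-statistic for H0: rho = 0
print(f"t({df}) = {t:.1f}")           # about 11.1
print(f"r^2 = {r ** 2:.3f}")          # about 0.878 (reported as 0.880)
```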

Figure 3. Scatter plot, with best-fit linear trendline, showing each child's age (in years) at the FRU of creative N-N compounding, versus age at FRU of V-DP-Particle.

If we think in terms of “prerequisites” (such as grammatical and lexical information) that a child must acquire, before a new structure becomes part of her linguistic repertoire, then the findings in Figure 3 support the view that compounding and particles have a prerequisite in common. Under this scenario we expect that the point in time when a given child acquires this shared prerequisite will become the minimum age of onset for both compounding and particles. Yet, explaining the high degree of association evident in Figure 3 appears to call for an even stronger link: specifically, for both compounding and the verb-particle construction, the shared prerequisite must be the one that the child acquires last. Otherwise it seems unlikely that the ages would be as tightly clustered near the identity function.

At this point the skeptical reader might object. Developmental correlations can exist all too easily between aspects of language that are not closely connected, simply because the child is moving rapidly from a state of not knowing how to say anything, to a state of knowing how to say a great many different things. Hence it would be advisable to apply a partial correlation procedure, and thereby check whether the observed degree of association between compounding and particles actually goes beyond what we would expect anyway, on these more general, developmental grounds.

For this purpose I used MLU as a rough index of a child's general linguistic development. I first determined each child's MLU in the transcript containing her FRU of the V-DP-Particle construction; I then took the mean, MLU = 1.919 morphemes, as a proximal developmental landmark for the V-DP-Particle construction. To estimate the age when each child was first developmentally ready to begin using the V-DP-Particle construction, I located the first transcript in which the child's MLU reached or exceeded 1.919 morphemes, and took the child's age from that transcript. The resulting ages were my developmental (or ‘MLU-based’) predictions of the point when the V-DP-Particle construction should begin to appear.
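
A minimal sketch of this MLU-based prediction procedure is given below, assuming per-transcript (age, MLU) records; the function name, the landmark value as a default parameter, and the toy data are illustrative only.

```python
from typing import List, Optional, Tuple

def mlu_based_prediction(transcripts: List[Tuple[float, float]],
                         landmark: float = 1.919) -> Optional[float]:
    """Age (in months) of the first transcript whose MLU reaches the landmark;
    this serves as the 'developmentally predicted' age of onset for the
    V-DP-Particle construction.  `transcripts` is a chronological list of
    (age_in_months, MLU) pairs."""
    for age, mlu in transcripts:
        if mlu >= landmark:
            return age
    return None

# Toy per-transcript data for one hypothetical child:
print(mlu_based_prediction([(26.0, 1.62), (27.1, 1.85), (28.0, 1.95)]))  # 28.0
```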

As desired, the ages predicted by the MLU-based measure correlated quite well with the actual ages of FRUs for V-DP-Particle (r = .8690); this indicated that the MLU-based measure was a good proxy for whatever role the child's rate of general linguistic development plays in determining the point at which V-DP-Particle utterances begin to appear. Next I applied a partial-correlation procedure, and thereby removed the portion of the variance (in the age of FRU for particles) that could be explained by the MLU-based measure, in order to see how well the remaining, unexplained variance could be explained by the putative connection between particles and novel compounding. What I found was that even after the variance explainable in terms of MLU had been removed, there was still a robust (partial) correlation between the age of FRU for particles, and the age of FRU for novel compounding: r_(Part,Comp·MLU) = .799, t(16) = 5.31, p < .001. Based on the coefficient of determination, r_(Part,Comp·MLU)^2 = .638, compounding accounted for 63.8% of the variance that could not be explained by the MLU-based measure.
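
The partial correlation used here is the standard first-order partial correlation, sketched below. Note that the zero-order correlation between the compounding ages and the MLU-based measure is not reported in the text, so the value used in the example call is purely hypothetical; the reported partial correlation (.799) reflects the actual corpus values.

```python
from math import sqrt

def partial_r(r_xy: float, r_xz: float, r_yz: float) -> float:
    """First-order partial correlation of x and y, controlling for z:
    r_xy.z = (r_xy - r_xz*r_yz) / sqrt((1 - r_xz**2) * (1 - r_yz**2))."""
    return (r_xy - r_xz * r_yz) / sqrt((1 - r_xz ** 2) * (1 - r_yz ** 2))

r_part_comp = 0.937   # particles ~ compounding (reported)
r_part_mlu = 0.869    # particles ~ MLU-based measure (reported)
r_comp_mlu = 0.90     # compounding ~ MLU-based measure (hypothetical value)

r_partial = partial_r(r_part_comp, r_part_mlu, r_comp_mlu)
print(f"partial r = {r_partial:.3f}, variance share = {r_partial ** 2:.3f}")
```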

Hence, the findings in Figure 3 seem to call for an explanation that includes a shared grammatical prerequisite, rather than merely a developmental association. I also checked whether the MLU-based measure could account for any of the variance in particle ages that was not explained by the ages for compounding. In fact, it could (r_(Part,MLU·Comp) = .52, t(16) = 2.44, p = .0267), explaining 27.1% of the remaining variance. Hence it is plausible that developmental factors, such as limits on processing capacity, can also influence the timing of a child's FRU for V-DP-Particle.

In Snyder (1995) and subsequent works (e.g., Snyder, 2001, 2011, 2016), I refer to the shared grammatical prerequisite as the positive (or ‘marked’) setting of ‘The Compounding Parameter’ (TCP). While the details of the proposed parameter have changed a bit over the years, the key idea has always been that ‘[+TCP]’ corresponds to availability (in the given language) of a particular interpretive operation; and that this operation is needed in the semantic interpretation both of novel compounds and of complex predicates such as verb-particle combinations. In contrast, lexicalized compounds, which are also found in [-TCP] languages like French, can be interpreted by means of the same look-up mechanism that is used to retrieve the lexical semantics of non-compound words.

For present purposes, the crucial point is simply that changes in the child's syntax, and perhaps in the child's grammar more broadly, are interconnected: in the present case, English verb-particle combinations and creative compounding are acquired together, as a package. Under my proposal, this is because acquisition of the English verb-particle construction entails acquisition of information much more abstract than this individual construction. When this information is acquired, we see the near-simultaneous arrival of structures as seemingly disparate as particles and compounds.

Remarks on methodology

The careful reader will have noticed a heavy reliance on corpus analysis in the case studies presented above. This pattern reflects some particular advantages of longitudinal corpora for investigations of the acquisitional time course. Indeed, as discussed at some length in Snyder (2007), different methods of studying child language acquisition are by no means equivalent and interchangeable; instead, each method has its own profile of strengths and weaknesses.

More specifically, the major strengths of longitudinal corpora include the facts that:

  1. (i) naturalistic observation has an absolute minimum of task demands, which makes spontaneous-speech corpora an especially valuable source of information about one- and two-year-olds; and

  2. (ii) a longitudinal corpus provides by far the best available record of an individual child's acquisitional time course.

At the same time, some major weaknesses of longitudinal corpora are that:

  1. (i) transcription of a child's speech can be challenging (and hence, one must be careful not to place too much weight on any single utterance in a transcript);

  2. (ii) the child's intended meaning is sometimes unclear (and is never under the control of the investigator); and

  3. (iii) the age of acquisition (i.e., the age at which a child's grammar undergoes a particular change) can be estimated reliably only if the grammatical change results in a fairly abrupt change in the child's verbal behavior. Such a change is typically observable only when it affects linguistic expressions that children use frequently.

This last point is perhaps the most concerning one for research on the acquisition of syntax, because it substantially reduces the number of syntactic diagnostics available to the investigator.

When we turn our attention to elicited production (EP), we find a near-complementary set of strengths and weaknesses. Major strengths include the facts that:

  1. (i) low-frequency structures can be readily investigated,

  2. (ii) the intended meaning of a child's utterance is under the experimenter's control, and

  3. (iii) in children who are sufficiently mature to handle the task demands of EP, if a child knows a given point of grammar, a well-designed EP study will be able to demonstrate this.

Major weaknesses of EP are that:

  1. (i) the point in time when the child can first handle the task demands may arrive well after the grammatical point of interest has already been acquired; and (crucially)

  2. (ii) when the child is not yet adult-like on the grammatical point of interest, the child's responses need to be interpreted with considerable caution.

This final point bears some discussion.

Snyder (2007, chapter 6), drawing on evidence from Yamane, Pichler, and Snyder (1999), provides a side-by-side comparison of EP versus spontaneous-speech data, for English wh-questions of quantity and possession. Questions of these types occur sufficiently often in spontaneous speech, once a child has acquired them, to make this a fair comparison. Strikingly, the EP data included numerous error patterns that were entirely absent from children's spontaneous speech, even when the latter children were producing the exact same question-types, and even if one considered the entire period covered in the child's longitudinal corpus. Furthermore, in EP data the error patterns produced by a single child, within a single testing session, were sometimes strikingly inconsistent from one experimental item to the next. From these considerations it seems clear that children who have not yet committed themselves to the target option, for a given point of grammar, are likewise not committed to any particular non-target option; they are better characterized simply as undecided.

This is not to say that the child's non-target productions in EP are without interest – for example, they might reflect, at least in part, some of the grammatical possibilities that the child is actively considering, at a point when acquisition is still in progress. Yet, for purposes of testing predictions about the time course of acquiring the target grammar, the most dependable and stable results in EP appear to come when children actually arrive at the target. Hence, if one wishes to test a prediction of concurrent acquisition using EP, one will want to classify each child in the study as either reliably adult-like, or not yet reliably adult-like, on each of the grammatical structures that are predicted to arrive concurrently. The exact productions of a child who is not yet adult-like should not play any critical role.

In sum, whenever possible one seeks converging evidence from multiple methodologies. Yet, at least in my own judgement, we do not currently have experimental methods that yield fully reliable evidence about the grammar of an individual child, when the child is in the age range of 18 to 30 months. This means that many important findings from spontaneous-speech corpora cannot (at present) be cross-checked with experimental studies of comprehension or elicited production. Nonetheless, anytime one discovers that older children (in the age range of 3–4 years) have not yet mastered a given point of grammar, one can (and should) seek converging evidence from multiple methodologies, including both experimental and observational approaches.

Explaining the ‘key characteristics’ with principles and parameters

In this section I will show how the three key characteristics of syntactic change described above – namely, that changes are decisive, additive, and interconnected – can be captured and explained within a Principles-and-Parameters (P&P) model of acquisition. As we will see, a fully successful explanation will require us not only to posit the existence of parameters, but to identify a suitable parametric ‘format’, and a suitable approach to parameter-setting. Fodor's (1998) ‘Parsing to Learn’ model will be discussed briefly as an example of a learning model with the desired characteristics.

The P&P framework

When we take a parametric approach to the child's acquisition of syntax, we are first and foremost taking the position that syntax is tightly constrained. In a P&P model of syntax, the principles are syntactic characteristics that we expect to hold true in every natural language; and the parameters are hypothetical points of permitted, but narrowly restricted, variation. In adopting a parametric approach we do not exclude the possibility that certain aspects of syntax will turn out to be idiomatic, or lexically restricted; but we do commit ourselves to the view that a non-trivial portion of the native speaker's syntactic knowledge can be characterized accurately in terms of more general (i.e., language-wide) principles and parameters.

A syntactic theory framed within the P&P architecture leads to strong predictions, both for the patterns of syntactic variation across languages, and for the patterns of syntactic change during language acquisition. In fact, the third ‘key characteristic’, interconnectedness, is precisely what we expect to see during acquisition, if the child is tacitly setting the value of an abstract parameter. For example, in terms of the third case study, if there is a parameter linking together the verb-particle construction and creative endocentric compounding, then it makes perfect sense that children acquiring English will, as we saw, begin producing these structures as part of a cluster, in close temporal proximity. (In fact, when we find reliable clustering effects of this kind, we might reasonably wonder how non-parametric approaches would be able to account for them.)

Parametric format, and approach to parameter setting

The remaining ‘key characteristics’ – namely, that syntactic changes are decisive and additive – do not follow automatically from a P&P model of syntax. To see why, suppose we are working within a classic “switchbox” model of syntactic parameters (in the spirit of Chomsky, 1981, 1986), where the parameters are akin to electrical switches, and where the grammar does not even function unless every switch is properly set to one of its permitted values. Let's further suppose that we have a binary parameter (call it the ‘P Parameter’): when the complement to P undergoes wh-movement, the value of the P parameter determines whether the P is required to move along with its complement (as in Spanish), or is permitted to stay behind (as in English).

Now consider the situation of a learner who has not yet determined the correct value. If the learner's production systems are guided by the current settings in the switchbox, and the grammar does not even “work” unless every parameter is set to a legitimate option, then for the learner to have any use of the grammar at all, it will be necessary to make a guess – quite possibly an incorrect guess – about the P Parameter's value. An incorrect guess will lead to clear-cut commission errors (relative to a target language with the opposite setting), whenever the child decides to ask a P-question. Indeed, if we also consider wh-in-situ languages like Japanese, where both the P and its complement normally remain in their base positions, then there are even more ways for the child's early P-questions to be incorrect, relative to a particular target language.

The point here is that we can easily imagine a P&P model of syntax that makes it impossible for the learner to “reserve judgement” on the value of any parameter – or more precisely, where the only way to reserve judgement (e.g., on the P Parameter) is not to set it, and the effect of not setting it is to render the grammar unavailable altogether, even for DO-questions. Yet, as we saw in the second case study, when children acquiring English/Spanish are still reserving judgement on the correct syntax for P-questions, they are nonetheless producing well-formed DO-questions.

Thus, the existence of decisive, additive changes means we need a P&P model of a particular type. The model needs to permit the learner to reserve judgement on a given grammatical question – and therefore not have access to any linguistic structure that depends on a specific answer to that question – while simultaneously permitting the learner to use her current grammatical knowledge to build structures that are fully licensed by the choices that have already been made. Only a model of this kind will be able to account for periods when the child is, for example, constructing DO-questions correctly but never even attempting a P-question.

This is really an issue of parametric format: our model of grammar needs to employ subset parameters. A subset parameter, in the relevant sense of the term, has an initial, “unmarked” value that permits only a proper subset of the grammatical structures that will become available if, at a later point in acquisition, a “marked” value of the parameter is adopted. In the present case we might (for example) postulate two separate binary parameters, each of which is a subset parameter. One of the parameters will say (in effect), “P-questions with pied-piping {ARE, ARE NOT} allowed” (where ARE NOT is the unmarked option). The other parameter will say, “P-questions with P-stranding {ARE, ARE NOT} allowed.” Thus, the child will initially have a grammar disallowing both types of P-question, but potentially allowing DO-questions. At some point, if the child discovers that her language uses P-stranding (or uses pied-piping) to form P-questions, she will set the corresponding parameter to its marked value.
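
The subset logic can be made concrete with a toy illustration (Python; purely schematic, and the structure names and dictionary representation are my own, not a claim about the actual parametric format): both parameters start at their unmarked values, which license DO-questions but no P-questions, and setting a marked value only ever adds structures.

```python
# Toy illustration of two subset parameters for P-questions.
grammar = {
    "pied_piping_allowed": False,   # unmarked value
    "p_stranding_allowed": False,   # unmarked value
}

def licensed_question_types(g: dict) -> set:
    types = {"DO-question"}                    # available under either setting
    if g["pied_piping_allowed"]:
        types.add("P-question (pied-piping)")
    if g["p_stranding_allowed"]:
        types.add("P-question (P-stranding)")
    return types

print(licensed_question_types(grammar))
# {'DO-question'}: the child can ask DO-questions but no P-questions at all.

grammar["p_stranding_allowed"] = True          # the English-type marked setting
print(licensed_question_types(grammar))
# Now also licenses P-stranding questions; nothing previously licensed is lost.
```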

Under this scenario, there will be a question as to whether there exist any languages in which the two parameters are both set to their marked values (so that pied-piping and P-stranding can be used interchangeably). In principle it could turn out that adult English-speakers have such a language, insofar as the pied-piping option is available, at least marginally, in certain registers. Alternatively (and perhaps more plausibly), it could turn out that the parameter-settings of adult English-speakers really only permit P-stranding, and the limited use of pied-piping results (somehow) from formal schooling. If in fact no languages could be found in which the pied-piping and P-stranding options are fully interchangeable, we would need to consider the possibility that co-occurrence of these two marked values within a single grammar is somehow blocked.

Crucially, if we adopt a theory of natural-language grammars in which the permitted points of grammatical variation all have a subset-superset character, it will become possible for the child's grammar to undergo a decisive, additive change (i.e., whenever a parameter's value is changed from a subset option to a superset option), but we will need something more in order to guarantee that the child is a deterministic learner. In fact, we will need two things: First, it must be logically possible for a learner to know with certainty, based on some type of input received, that the target language requires a particular incremental addition to the grammar (i.e., that a specific parameter must be set to a particular, marked value). Second, whatever the procedure is that the child is (tacitly) following during language acquisition, it will need to ensure that the child makes a given incremental addition to the grammar if, and only if, it is definitely required by the target language.

A compatible model of parameter setting

The view that children acquire syntax deterministically has enormous implications. In particular, I believe it means the child must come to the task of syntax acquisition with at least two specific forms of innate guidance: (i) a universal set of syntactic options (perhaps in the form of structure-building operations) that any given language might, or might not, adopt as part of its grammar; and (ii) for each syntactic option, a method of detecting its use, in the linguistic productions of an interlocutor (such as a parent).

For the latter, the child would ideally have a single, general method that applies in the same way across the full set of syntactic options. This is the objective in Fodor's (1998, et seq.) Structural Trigger Learning (STL) model. Indeed, STL provides the best available example of a learning model that could be fully compatible with the evidence presented in the case studies.

The heart of STL is ‘learning by parsing’. Adults clearly have some form of mental parser. According to STL, children use the same parser to discover the syntax of their target language. For an adult, words of the ambient language reach the parser one by one, and there can be periods of temporary ambiguity, when the parser cannot yet choose conclusively between different structural options. In this situation, people quickly choose a single option and forget the others. Nonetheless, it is simple to set a ‘flag’ in memory to record the fact that a point of ambiguity occurred.

STL uses this ‘flagged serial parsing’ to set syntactic parameters. The parameters are expressed as ‘treelets’. A treelet is an annotated fragment of a syntactic tree, just prior to phonological interpretation. Treelets function as interlocking “building blocks” that are assembled to create syntactic structures for an unlimited number of sentences. The syntax (i.e., parameter settings) of a particular language is encoded as a set of treelets.

STL performs parsing in terms of treelets. At the outset, the parser is supplied with a “super-grammar” of treelets; this corresponds to the full set of treelets that are innately available to a child. The learner runs this parser on each new input sentence. When there is an unambiguous parse, the learner can safely conclude that all treelets employed in the parse are part of the target language. Thus, learning will be deterministic as long as the learner adds a new treelet to her collection of “confirmed” treelets if, and only if, it occurs in an unambiguous parse. The addition of a confirmed treelet is, in effect, the re-setting of a subset-superset parameter from its unmarked, to its marked, value.Footnote 1
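
A drastically simplified sketch of the STL learning step may help to fix ideas. Here a “parse” is represented simply as the set of treelets it uses, and the learner confirms treelets only when the input admits exactly one parse; the treelet names and the toy inputs are hypothetical and purely illustrative, not part of Fodor's actual formulation.

```python
from typing import FrozenSet, List, Set

Treelet = str   # toy stand-in for an annotated tree fragment

def stl_update(confirmed: Set[Treelet],
               parses: List[FrozenSet[Treelet]]) -> Set[Treelet]:
    """One (toy) STL learning step: `parses` lists every analysis that the
    super-grammar assigns to the current input sentence, each represented as
    the set of treelets it employs.  Treelets are confirmed only when the
    parse is unambiguous; an ambiguous input triggers no learning."""
    if len(parses) == 1:
        return confirmed | set(parses[0])   # deterministic, irrevocable addition
    return confirmed                        # ambiguity: flag it, learn nothing

confirmed: Set[Treelet] = set()
confirmed = stl_update(confirmed, [frozenset({"wh-move", "pied-pipe"}),
                                   frozenset({"wh-in-situ"})])           # ambiguous
confirmed = stl_update(confirmed, [frozenset({"wh-move", "P-strand"})])  # unambiguous
print(sorted(confirmed))   # ['P-strand', 'wh-move']
```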

Types of acquisitional predictions

The key characteristics of syntactic change discussed above have important implications for the project of deriving acquisitional predictions from specific syntactic hypotheses. In particular, the properties of being decisive and additive mean that any change in the child's syntax has the character of a rapid transition between two stable states. The child goes from not having, to having, a particular “structure-building option.” Once that option has been added, the child does not retreat from it.

One important consequence is that the appearance of a new type of utterance (i.e., an FRU) in the child's spontaneous speech indicates that the child has actually made a commitment to a grammar sanctioning that utterance type; the child is not simply taking a fun-filled romp through grammatical options that will later be abandoned. A second important consequence is that the child's grammatical basis for the new type of utterance is almost certainly correct, in the sense of being the same as the adult's. If the child were merely approximating the adult's grammar, using a mixture of correct and incorrect grammatical options, the result would almost surely be systematic errors of commission in the child's speech, persisting until the child (somehow) managed to retreat from the incorrect choices. Instead, children's spontaneous speech contains astonishingly few of the logically possible commission errors, and different children are strikingly similar in the few types of commission errors that they do make (chiefly optional infinitives and morphological overregularization errors).Footnote 2 Taken together, these two consequences (commitment and correctness) provide the basis for a “linking theory” connecting proposed parameters of cross-linguistic variation to predictions about the time course of acquisition.

Concurrent acquisition

At this point in the discussion I need to address the question of what exactly counts as a parameter. For present purposes, the definition can be quite broad:

  1. (8) A grammatical parameter…

    1. a. Ranges over a finite set of discrete values;

    2. b. needs to be assigned a particular value by the child, in the course of language acquisition;

    3. c. is involved in determining which <form, meaning> pairings are acceptable to an adult speaker;

      and

    4. d. is abstract (e.g., not intrinsically tied to a single linguistic form, but rather able to influence the acceptability of multiple, superficially unrelated forms).

Probably anything with the combination of properties in (8) should be regarded as a parameter. This characterization certainly covers the classic “switchbox” conception of parameters discussed in Section 2, but it also extends quite readily (for example) to syntactic frameworks in which cross-linguistic variation is captured with morphosyntactic features on functional heads (e.g., Chomsky, 1995, et seq.). In the latter case, we are still clearly dealing with a parametric system, as long as (i) there is a finite inventory of functional heads, each of which might, or might not, be available in a given language; and (ii) there is a finite inventory of morphosyntactic features, each of which might, or might not, be available (in the given language) for assignment to a particular functional head. Each of the “choice points” is a parameter.

To take another example, the characterization in (8) readily extends to Optimality Theoretic (OT) approaches to grammatical variation (Prince & Smolensky, 2004). An OT theory posits a universal set of violable constraints (CON), and expresses the grammar of a particular language as a total ranking of CON. Yet, as noted by Tesar and Smolensky (2000), the set of grammars that can be distinguished in an OT system with n constraints can always be distinguished by a set of n(n-1)/2 binary, switchbox-style parameters, each of which specifies the dominance relation between two particular constraints. Thus, even if one adopts the notational scheme of a constraint ranking, the information expressed by a particular state of the system translates directly into a finite number of binary choices about pairwise rankings; and each of those choices is parametric, in the sense of (8).
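
Tesar and Smolensky's observation is easy to illustrate: a total ranking of n constraints carries exactly the information of n(n-1)/2 binary dominance choices. The small sketch below (with arbitrarily chosen constraint names) performs the recoding.

```python
from itertools import combinations

def ranking_as_binary_parameters(ranking):
    """Recode a total ranking of OT constraints (highest-ranked first) as
    n*(n-1)/2 binary pairwise-dominance 'parameters'."""
    position = {c: i for i, c in enumerate(ranking)}
    return {(a, b): position[a] < position[b]     # True iff a dominates b
            for a, b in combinations(sorted(ranking), 2)}

params = ranking_as_binary_parameters(["Faith", "NoCoda", "Onset"])
print(len(params))   # 3 constraints -> 3 pairwise parameters
print(params)        # ('Faith', 'NoCoda'): True means Faith >> NoCoda
```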

Of course, differences between parametric formats still matter. For example, choosing the format of a constraint ranking favors particular approaches to parameter setting (such as a constraint-demotion algorithm). But in terms of deriving acquisitional predictions, there is an important commonality that cuts across formats: any particular array of parametric values will make a specific set of surface structures grammatically available; and a change in the value of a single parameter (even if it is expressed by a change in the relative ranking of two OT constraints, for example) can add multiple, distinctive structures to that set at a single point in time.

This commonality gives rise to three main types of parametric prediction. The first is a prediction of concurrent acquisition. Suppose, for example, that in a particular language (say, English), there are two distinct surface forms (maybe Perceptual Reports, such as John saw Bob leave; and ‘Put’-locatives, such as Mary put the book on the table) that have a grammatical prerequisite in common (e.g., the positive value of the small clause parameter, or SCP, proposed in Snyder, 2012; see also Snyder & Stromswold, 1997). Now, if syntactic change is decisive and additive, we expect that at some point or another, every child acquiring English will make the decisive, additive change of setting SCP to ‘+’. At that point, both Perceptual Reports and ‘Put’-locatives will enter the child's repertoire of grammatically well-formed surface structures, and the child will suddenly start using both structures in her spontaneous speech…. Right?

Well, perhaps. But we need to be careful. Strictly speaking, the prediction is that a child acquiring English will add the Perceptual-Report and ‘Put’-locative structures to her repertoire concurrently if, for both structures, the positive setting of SCP is the last prerequisite that the child needs to acquire. Suppose, however, that a child sets SCP to ‘+’ at a point in time when she has not yet acquired any verbs (put, place, set, …) with the necessary lexical properties to appear in a ‘Put’-locative. If the child already knows the lexical items (e.g., see) that she needs in order to build certain examples of a Perceptual Report, then that structure might become “available” to her, and start to appear in her speech, substantially earlier than ‘Put’-locatives.

Fortunately, it often turns out that the lexical items needed for the structures of interest have already been appearing in the child's speech for some time, in other types of structure (or even in a non-target version of the structure of interest, with certain key elements omitted); and if not, it could still happen that the missing lexical items occur frequently in the child's parental input, in which case they might be acquired very rapidly, once the child is ready to use them. Nonetheless, a prudent course of action is to identify as many as possible of the prerequisites (whether lexical or syntactic) for each structure of interest, and mention them in one's statement of the prediction. For example, “In a child who is already using suitable lexical items in other contexts (i.e., verbs of perception like see, and verbs of placement like put), we expect Perceptual Reports and ‘Put’-locatives to enter the child's repertoire concurrently.”
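As a rough illustration of this precaution, the following Python sketch simply checks when the relevant lexical items first appear in a child's speech, before any comparison of FRUs is attempted. The (age, utterance) corpus format and the toy data are hypothetical, standing in for real longitudinal transcripts.

```python
from typing import List, Optional, Set, Tuple

def first_use(corpus: List[Tuple[float, str]], words: Set[str]) -> Optional[float]:
    """Earliest age (in years) at which any of `words` occurs in the child's speech."""
    ages = [age for age, utterance in corpus
            if words & set(utterance.lower().split())]
    return min(ages) if ages else None

# Toy data, purely illustrative:
corpus = [
    (2.0, "I see doggie"),
    (2.1, "put shoe on"),                      # verb attested, structure still incomplete
    (2.4, "I saw Bob leave"),                  # Perceptual Report
    (2.5, "Mary put the book on the table"),   # 'Put'-locative
]

print(first_use(corpus, {"see", "saw", "sees"}))   # 2.0
print(first_use(corpus, {"put", "place", "set"}))  # 2.1
```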

Ordered acquisition

The second type of parametric prediction is ordered acquisition. If the lexical and grammatical prerequisites (or at least the "late-acquired" prerequisites) for a particular surface form (call it ‘A’) are a proper subset of the prerequisites for another surface form, ‘B’, then in children acquiring a language permitting both A and B, we predict that a child will either add A prior to B, or add A and B to her repertoire concurrently (the latter if, for both A and B, the last-acquired prerequisite is one of the shared prerequisites); no child should ever add A later than B. In other words, it should be impossible for a child to know the full set of prerequisites for B without also knowing the full set of prerequisites for A. Thus, a prediction of ordered acquisition follows much the same logic as cumulative complexity, in the sense of Brown and Hanlon (1970, p. 13).

Mysterious acquisition

The third and final type of parametric prediction is what I will term mysterious acquisition. Suppose there are two surface forms in the target language, A and B, that have the same prerequisites (or at least the same “late-acquired” prerequisites). Suppose, moreover, that B literally never occurs in child-directed speech. Nonetheless, we predict that a child who knows all the prerequisites for A will, as if by magic, also have a grammar permitting structure B (as might be demonstrated, for example, with a test of comprehension or elicited production) – even though she has never heard B.

An example of mysterious acquisition can be found in Xu and Snyder (2017). There, three-year-olds acquiring English were shown to know that English permits a "restitutive" reading of again, despite the lack of any such reading in many of the world's languages, and despite the complete absence of direct evidence for restitutive again in a 100,000-utterance sample of child-directed speech. The proposed explanation was that availability of restitutive again follows from English having the positive setting of TCP, as had been argued on independent grounds in Beck and Snyder (2001).

Another good example of mysterious acquisition can be found in Isobe's (2005) work on head-internal relative clauses (HIRCs) in Japanese. Isobe demonstrated that HIRCs are exceedingly rare in child-directed Japanese. Yet, according to Cole's (1987) analysis, they should be grammatically possible in any language that has both OV word order and null pronouns, both of which are early-acquired properties of Japanese. Isobe showed that despite the extreme paucity of direct evidence in their input, three- and four-year-olds were quite capable of understanding HIRCs.

In sum, when a syntactic theory posits any form of parametric variation meeting the description in (8), it usually entails strong predictions for the time course of child language acquisition – predictions of concurrent, ordered, and/or mysterious acquisition. In all three cases, the prediction is a direct consequence of positing at least one point of grammatical variation that has simultaneous consequences for multiple, distinct surface structures. The next two sections provide concrete examples of the first two types of prediction (concurrent and ordered acquisition), in the form of testable (but so far untested) acquisitional predictions of parametric proposals in the linguistics literature.

First example: a prediction of concurrent acquisition

A proposed analysis of prenominal possessors

Schoorlemmer and Rooryck (2017) observe that all present-day languages in the Germanic family permit prenominal possessors marked with -s, as illustrated for English and Dutch in (9).

Yet, the precise syntax is subject to several points of variation. This is illustrated in (10), where English is contrasted with a Dutch variety that the authors refer to as “Dutch-A.” First, where English permits the prenominal -s possessor to be a full phrase (DP) like the old man in (10a), Dutch-A limits it to a word-level category (e.g., proper name, title) like Jan in (9b).

Second, where English permits a cardinal number (two) to intervene between the possessive marker -s and the head N in (11a), Dutch-A prohibits this.

Based on additional evidence from German, Icelandic, Norwegian, Swedish, and Danish, Schoorlemmer and Rooryck arrive at the generalization in (12).

In other words, there is a bidirectional relationship: the possessive marker -s can be preceded by a phrasal possessor if, and only if, the same marker can be followed by a cardinal number. (Danish, English, Norwegian, and Swedish allow both; Dutch-A, German, and Icelandic allow neither.)

Now, (12) is stated as a generalization over the Germanic languages, and is formulated in terms of the specific Germanic morpheme -s. Yet, Schoorlemmer and Rooryck argue that if (12) is correct, it plausibly follows in some way from universal properties of human language. Hence, the authors propose that the morpheme -s in Germanic is an instance of a possessive D, and that UG provides two ways for a possessive D to become associated with a specific possessor. In both cases the expression denoting the possessor originates in the specifier position of a phrase that they term ‘PossP’, located deeper in the DP structure:

In languages of the English type (13b), the specifier of PossP undergoes phrasal movement into the specifier position of the possessive D. In languages like Dutch-A, however, the specifier of PossP undergoes head movement, and adjoins to D, as in (13c). Head movement is possible if, and only if, the specifier is a monomorphemic element, which under the assumptions of Bare Phrase Structure (Chomsky, 1995, et seq.) can function as either a phrase or a head. Hence, the possessor Jan in (9b) can undergo head movement and adjoin to the possessive D, but the phrase de oude man ‘the old man’ in (10b) cannot.

The pattern in (11) follows from the structural position of a cardinal number within DP:

The cardinal number two/twee in (11) is taken as the head of a proposed Cardinality Phrase (‘CardP’), which (when present) intervenes between the possessive D and the PossP. In languages of the English type (14b), the possessor moves across this head to the specifier of possessive D by means of phrasal movement. In languages like Dutch-A, however, which require the possessor to combine with possessive D through head movement, the intervening cardinal number (Card) blocks the movement, as in (14c). Hence, (11b) is ungrammatical.

Predictions for language acquisition

On the view that a child's acquisition of syntax is a deterministic process, and that a child is grammatically conservative in spontaneous speech, it should (at least in principle) be possible to test the proposed parameter using longitudinal corpora of spontaneous speech from children acquiring a language with prenominal possessors. In particular, if a child is acquiring a language of the English type, we expect there to be a specific point in time when the child determines that the -s morpheme is a possessive D of the type that attracts a phrasal possessor DP to its specifier position. At that point in time, as long as the child's grammar already permits prenominal cardinals (e.g., the three books), the child's grammar should simultaneously begin to permit phrasal possessors (as in the man's bike) and post-possessor cardinal numbers (as in Mary's two bikes).

With a collection of ten or more longitudinal corpora for English, it may be possible to test this prediction with a correlation test, by locating each child's FRU for each of the two types of DP. The strong prediction of Schoorlemmer and Rooryck's parametric proposal is that, across children, the age of FRU for phrasal possessors will be significantly correlated with the age of FRU for post-possessor cardinal numbers. A potential problem, however, is that the frequency of one or both types of possessive DP in children's spontaneous speech might be too low to permit the identification of a clear FRU. (In other words, each of the early uses might be separated from the next clear use by more than a month.)
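A minimal sketch of how such an analysis might run is given below, using invented FRU ages rather than real corpus data. The FRU criterion implemented here (the first use that is followed by another clear use within a month) is only one plausible operationalization, and a real analysis would likely need to control for each child's overall rate of language development.

```python
from typing import List, Optional
from scipy.stats import pearsonr

def fru(ages_of_use: List[float], window: float = 1/12) -> Optional[float]:
    """First of Repeated Uses: the earliest use that is followed by another
    use within `window` years (one plausible operationalization)."""
    ages = sorted(ages_of_use)
    for earlier, later in zip(ages, ages[1:]):
        if later - earlier <= window:
            return earlier
    return None

example_uses = [2.30, 2.55, 2.60, 2.72]   # ages of attested uses for one child
print(fru(example_uses))                  # 2.55 (2.30 is an isolated early use)

# Hypothetical FRU ages (in years), one value per child for each DP type:
fru_phrasal_possessor = [2.6, 2.9, 3.1, 2.4, 3.3, 2.8, 3.0, 2.7, 3.2, 2.5]
fru_post_possessor_cardinal = [2.7, 2.9, 3.0, 2.5, 3.4, 2.8, 3.1, 2.6, 3.3, 2.6]

r, p = pearsonr(fru_phrasal_possessor, fru_post_possessor_cardinal)
print(f"Pearson r = {r:.2f}, p = {p:.4f}")
```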

If the relevant forms do turn out to be too sparse for clear FRUs to be identified, an alternative approach is to use elicited production (EP). Provided there are plenty of children who do not acquire the relevant types of possessive DP until an age when elicitation has become an age-appropriate task, it should be possible to use a well-designed EP experiment to assess, for each of a number of children, whether their grammar permits phrasal possessors, and whether their grammar permits post-possessor cardinal numbers. Also, in the context of an EP experiment it will be possible to use a pre-test to exclude any children whose conceptual development (e.g., in the area of number) is not yet sufficient to support the types of DP that are of interest. For the children who pass the pre-test and complete the EP experiment, it will then be possible to apply a standard contingency test, such as a Fisher Exact Test, to evaluate the prediction that any given child should either succeed on both, or fail on both. Failure on the task might take the form of a child using circumlocutions of some type, or of a child failing to produce a minimum number of the elicited items, or quite possibly of a child producing errors of commission. As noted above, children's elicited speech (unlike spontaneous speech) often contains errors of commission, as well as occasional target-like forms, when the child does not yet know the point of grammar being tested.
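A minimal sketch of the contingency analysis, using invented counts purely for illustration, might look as follows. The parametric prediction is that the off-diagonal cells (children who pass one condition but fail the other) should stay close to zero, apart from noise due to performance factors.

```python
from scipy.stats import fisher_exact

# Invented counts: rows = phrasal possessor (pass / fail),
# columns = post-possessor cardinal (pass / fail).
table = [[14, 1],
         [2, 13]]

odds_ratio, p = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.1f}, p = {p:.5f}")
```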

Second example: a prediction of ordered acquisition

Case-marking in causatives

In many of the world's languages, one can take an existing verb, add a morphological affix, and obtain a new verb that requires one more argument than the original verb did. Causativization is one such process. In Japanese (15a), the verb katta ‘bought’ takes two arguments, a subject (the Agent) and a direct object (the Theme). The direct object bears the accusative case-marker -o; the subject bears either the nominative case-marker -ga or, in the main clause, the topic-marker -wa.

In (15b), the causative morpheme -(s)ase- has been added to create the verb kaw-ase-ta ‘made-buy’. This new verb takes three arguments: a subject (the Cause), an indirect object (the Agent), and a direct object (the Theme). The presence of an extra argument means that an additional type of case-marking is needed. Japanese uses dative case: the subject of the original verb becomes a dative-marked indirect object.

Baker (1988) has proposed a theory of morphological affixes like the causative, and one of the key ideas is that the case morphology used with causativized verbs is predictable from the case-marking options that are available, in the given language, for use with triadic verbs like ‘give’. For example, the use of dative-marking on the Agent in (15b) is predictable from the fact that dative-marking is used on the Goal argument (Hanako) in (16).

In contrast, in a language like Kinyarwanda, where triadic verbs assign accusative case to both the indirect object (Goal) and the direct object (Theme), the counterpart to (15b) employs accusative case-marking on both the indirect object (Agent) and direct object (Theme).

Predictions for language acquisition

Baker's account of the case-marking patterns found with causativized verbs leads us to expect an acquisitional ordering effect. When a child is acquiring the grammar of a language with morphologically derived causative verbs, if the child knows the information required for the production of causativized transitive verbs (as in 15b), then, according to Baker's account, the child necessarily knows the corresponding case-marking options used for the Goal and Theme arguments of a triadic verb (as in 16). The knowledge of case-marking with triadic verbs is a proper subset of the knowledge needed for the construction of a causativized transitive verb. The latter, in addition, requires information about the causative morpheme itself.

Of the languages with morphological causatives that Baker discusses, the only ones for which there currently exist publicly available longitudinal corpora are Japanese and French. French resembles Japanese in the sense that it employs the dative case-marker (à) for both the Goal argument of a triadic verb, and the Agent argument of a causativized transitive. Hence, the acquisitional predictions will be the same as for Japanese. (One notable difference, however, is that the French causative morpheme faire is morphologically separate from, though adjacent to, the causativized verb.)

The acquisitional prediction of Baker's proposal is an ordering effect. Of the three logically possible orders of acquisition, at most two should occur. One possibility is that a child will first acquire the morphological case-marking associated with triadic verbs, and then at a later point will work out the identity (and morpho-phonemic properties) of the causative morpheme. In this case, the FRU of correctly case-marked arguments with triadic verbs is expected to occur at an earlier age than the FRU of causativized transitives. A second possibility is that the child will first acquire the information about the causative morpheme, and then at a later point acquire the morphological case-marking options for triadic verbs. In this case the FRU of causativized transitives should occur at approximately the same age as the FRU of case-marking with triadic verbs. Under this scenario, for a period of time prior to the onset of causativized transitives and triadic verbs, the child's grammar will be expected to allow causativized intransitives. Across languages, the logical subject of a causativized intransitive is simply marked with accusative case, regardless of how causativized transitives work.

What should not be possible is for a child to have the FRU of case-marking with triadic verbs significantly later than the FRU of causativized transitives. Under Baker's account, if a child knows how to construct a causativized transitive, she necessarily knows how to do case marking with triadic verbs. Hence, discovery of even a single longitudinal corpus from a child who unequivocally shows such a pattern will be sufficient evidence to call Baker's account into serious question. For a single-subject case-study of this kind, a statistical argument could be built using a binomial test, based on the number of examples of causativized transitives that the child produced prior to the FRU of triadic verbs, together with an estimate of the relative frequency of causativized transitives versus triadic verbs after the FRU of the latter had finally occurred.
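A minimal sketch of that binomial argument, with invented counts standing in for the corpus frequencies, might look like this.

```python
from scipy.stats import binomtest

# Invented counts from a single longitudinal corpus:
k_causative_before = 12    # causativized transitives before the FRU of triadic verbs

# Relative frequency of causativized transitives among the two construction
# types, estimated from the period after the FRU of triadic verbs:
n_causative_after, n_triadic_after = 40, 160
p_causative = n_causative_after / (n_causative_after + n_triadic_after)  # 0.2

# Under the null hypothesis that both constructions were already available,
# how surprising is it that the first 12 relevant utterances were all causatives?
result = binomtest(k_causative_before, n=k_causative_before,
                   p=p_causative, alternative="greater")
print(f"p = {result.pvalue:.6f}")   # here, 0.2 ** 12
```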

In contrast, acquisitional support for Baker's account could take the form of data from a collection of longitudinal corpora for children acquiring a language with morphological causatives. With a sufficient number of corpora, the FRUs for the two structures of interest should yield a significant contrast by paired t-test: whenever there is a substantial difference in the ages for the two FRUs, the FRU for triadic verbs will consistently be the earlier one.
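A minimal sketch of the group analysis, again with invented FRU ages, is given below. The prediction is one-sided: triadic-verb case-marking should be earlier than, or concurrent with, causativized transitives.

```python
from scipy.stats import ttest_rel

# Invented FRU ages (in years) for children acquiring Japanese:
fru_triadic_case_marking = [2.1, 2.3, 2.0, 2.5, 2.2, 2.4, 2.1, 2.6]
fru_causativized_transitive = [2.6, 2.8, 2.3, 3.0, 2.5, 2.9, 2.4, 3.1]

# Paired t-test; convert to a one-sided p-value for the predicted direction
# (triadic-verb case-marking earlier, i.e., negative t).
t, p_two_sided = ttest_rel(fru_triadic_case_marking, fru_causativized_transitive)
p_one_sided = p_two_sided / 2 if t < 0 else 1 - p_two_sided / 2
print(f"t = {t:.2f}, one-sided p = {p_one_sided:.4f}")
```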

Conclusions

The evidence from the three case studies reviewed here leads me to believe that children's acquisition of syntax is a deterministic process, in which changes to the grammar are decisive, additive, and interconnected. If correct, this has important implications for both the nature of syntactic knowledge and the mechanisms of its acquisition. Furthermore, it provides the basis for a “linking theory” connecting proposals in comparative syntax to testable predictions in acquisition. Thus, the parametric approach offers a single theoretical framework encompassing not only the particulars of child language acquisition, but also the intricate linguistic judgements of adult speakers, and the amazing patterns of cross-linguistic variation.

Competing Interests

The author declares none.

Footnotes

1 Note that for STL to get off the ground, the learner must already know a certain number of lexical items and their syntactic categories. Also, the version of STL sketched here, in which parameters are instantiated directly as treelets, is one of two possibilities discussed in Fodor (1998).

2 A possible objection to the idea of crediting the child with the same grammatical basis as the adult comes from language change: when a language is undergoing certain types of syntactic change, it seems that at some point, children must be adopting a grammatical option that is unavailable in their parents' grammar. For some discussion and a proposal, please see Snyder (2018).

References

Baker, M. C. (1988). Incorporation: a theory of grammatical function changing. Chicago: University of Chicago Press.
Beck, S., & Snyder, W. (2001). The Resultative Parameter and restitutive again. In Féry, C. & Sternefeld, W. (Eds.), Audiatur vox sapientiae: a festschrift for Arnim von Stechow (pp. 48–69). Berlin: Akademie Verlag.
Berwick, R. C. (1985). The acquisition of syntactic knowledge. Cambridge, MA: MIT Press.
Brown, R. (1973). A first language: the early stages. Cambridge, MA: Harvard University Press.
Brown, R., & Hanlon, C. (1970). Derivational complexity and order of acquisition in child speech. In Hayes, J. R. (Ed.), Cognition and the development of language (pp. 155–207). New York: Wiley.
Chomsky, N. (1981). Lectures on government and binding. Dordrecht: Foris.
Chomsky, N. (1986). Knowledge of language: its nature, origin, and use. New York: Praeger.
Chomsky, N. (1995). The minimalist program. Cambridge, MA: MIT Press.
Cole, P. (1987). The structure of internally headed relative clauses. Natural Language & Linguistic Theory, 5(2), 277–302.
Fodor, J. D. (1998). Unambiguous triggers. Linguistic Inquiry, 29(1), 1–36.
Isobe, M. (2005). Language variation and child language acquisition: laying ground for evaluating parametric proposals. Doctoral dissertation, Keio University, Tokyo.
Kuczaj, S. (1977). The acquisition of regular and irregular past tense forms. Journal of Verbal Learning and Verbal Behavior, 16, 589–600.
LeRoux, C. (1988). On the interface of morphology and syntax. Stellenbosch Papers in Linguistics #18. South Africa: University of Stellenbosch.
MacWhinney, B. (2000). The CHILDES project: tools for analyzing talk (3rd ed.). Mahwah, NJ: Lawrence Erlbaum Associates.
Maratsos, M. (1998). The acquisition of grammar. In Kuhn, D., Damon, W., Siegler, R., & Lerner, R. M. (Eds.), Handbook of Child Psychology (Vol. 2), Cognition, Perception, and Language (pp. 421–466). Hoboken, NJ: Wiley.
Namiki, T. (1994). Subheads of compounds. In Chiba, S. (Ed.), Synchronic and diachronic approaches to language: a festschrift for Toshio Nakao on the occasion of his sixtieth birthday (pp. 269–285). Tokyo: Liber Press.
Neeleman, A., & Weerman, F. (1993). The balance between syntax and morphology: Dutch particles and resultatives. Natural Language & Linguistic Theory, 12(3), 433–475.
Prince, A., & Smolensky, P. (2004). Optimality Theory: constraint interaction in generative grammar. Malden, MA: Blackwell.
Schoorlemmer, E., & Rooryck, J. (2017). On Germanic numerals and possessives. Poster presented at DP@60, MIT, Cambridge, MA.
Snyder, W. (1995). Language acquisition and language variation: the role of morphology (Doctoral dissertation). Distributed by MIT Working Papers in Linguistics, Cambridge, MA.
Snyder, W. (2001). On the nature of syntactic variation: evidence from complex predicates and complex word-formation. Language, 77(2), 324–342.
Snyder, W. (2007). Child language: the parametric approach. Oxford: Oxford University Press.
Snyder, W. (2011). Children's grammatical conservatism: implications for syntactic theory. In Danis, N., Mesh, K., & Sung, H. (Eds.), Proceedings of the 35th Annual Boston University Conference on Language Development (Vol. 1, pp. 1–20). Somerville, MA: Cascadilla Press.
Snyder, W. (2012). Parameter theory and motion predicates. In Demonte, V. & McNally, L. (Eds.), Telicity, change, and state: a cross-categorial view of event structure (OSTL 39) (pp. 279–299). Oxford: Oxford University Press.
Snyder, W. (2016). How to set the Compounding Parameter. In Perkins, L., Dudley, R., Gerard, J., & Hitczenko, K. (Eds.), Proceedings of the 6th Conference on Generative Approaches to Language Acquisition North America (pp. 122–130). Somerville, MA: Cascadilla Press.
Snyder, W. (2018). On the child's role in syntactic change. In Sengupta, G., Sircar, S., Raman, M. G., & Balusu, R. (Eds.), Perspectives on the architecture and acquisition of syntax: essays in honor of R. Amritavalli (pp. 235–242). Singapore: Springer Nature.
Snyder, W., & Stromswold, K. (1997). The structure and acquisition of English dative constructions. Linguistic Inquiry, 28(2), 281–317.
Stromswold, K. (1996). Analyzing children's spontaneous speech. In McDaniel, D., McKee, C., & Cairns, H. S. (Eds.), Methods for assessing children's syntax (pp. 23–54). Cambridge, MA: MIT Press.
Sugisaki, K., & Snyder, W. (2003). Do parameters have default values? Evidence from the acquisition of English and Spanish. In Otsu, Y. (Ed.), Proceedings of the Fourth Tokyo Conference on Psycholinguistics (pp. 215–237). Tokyo: Hituzi Syobo.
Sugisaki, K., & Snyder, W. (2006). Evaluating the variational model of language acquisition. In Deen, K. U., Nomura, J., Schulz, B., & Schwartz, B. D. (Eds.), Proceedings of the Inaugural Conference on Generative Approaches to Language Acquisition – North America, Honolulu, HI (Vol. 2; University of Connecticut Occasional Papers in Linguistics 4) (pp. 354–352).
Tesar, B., & Smolensky, P. (2000). Learnability in Optimality Theory. Cambridge, MA: MIT Press.
Xu, T., & Snyder, W. (2017). There and back again: an acquisition study. Language Acquisition, 24(1), 3–26.
Yamane, M., Pichler, D. C., & Snyder, W. (1999). Subject-object asymmetries and children's left-branch violations. In Greenhill, A., Littlefield, H., & Tano, C. (Eds.), Proceedings of the 23rd Annual Boston University Conference on Language Development (pp. 732–740). Somerville, MA: Cascadilla Press.
Yang, C. D. (2002). Knowledge and learning in natural language. Oxford: Oxford University Press.
Figure 1. Sarah's production of intransitive verb-particle, by age.

Figure 2. Sarah's production of transitive verb-DP-particle, by age.

Figure 3. Scatter plot, with best-fit linear trendline, showing each child's age (in years) at the FRU of creative N-N compounding, versus age at FRU of V-DP-Particle.