
Left node blocking

Published online by Cambridge University Press:  07 July 2017

TIMOTHY OSBORNE*
Affiliation:
Zhejiang University, China
THOMAS GROẞ*
Affiliation:
Aichi University, Japan
Author’s address: School of International Studies, Zhejiang University, Zijinggang Campus, East Building 5, Hangzhou, Zhejiang Province 310058, China
Author’s address: Aichi University, Hiraike-cho 4-60-6, Nakamura-ku, Nagoya-shi 453-8777, Japan

Abstract

This article investigates a particular phenomenon of coordination that delivers important clues about the nature of syntactic structures. We call this phenomenon left node blocking – the designation is a play on the related concept of right node raising. Left node blocking provides insight into how syntactic structures are produced and processed. The dependency grammar analysis of the left node blocking phenomenon put forth here focuses on roots in coordinated strings. By acknowledging roots, it is possible to discern what coordination is revealing about syntactic structures. In particular, coordination delivers evidence for relatively flat structures.

Research Article

This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.

Copyright © Cambridge University Press 2017

1 The coordination diagnostic

The use of coordination as a diagnostic for constituents can be traced at least as far back as Chomsky (1957: 36), who wrote that ‘the possibility of conjunction offers one of the best criteria for the initial determination of phrase structure’. Ever since then, the willingness to employ coordination as a test for constituents has been unrivaled by the other diagnostics that are typically used (clefting, pseudoclefting, proform substitution, answer fragments, etc.). There is, however, a serious problem with coordination as a diagnostic for constituents: it is too liberal; it identifies too many strings as constituents, and many of these strings are not corroborated as constituents by other tests. While its liberal nature is widely acknowledged as a problem for accounts of constituent structure, the willingness to employ coordination as a diagnostic for constituents is unyielding. Most introductory textbooks on syntax and linguistics continue to use coordination as a means of demonstrating the presence of constituents in sentences.[2]

The greater problem facing coordination as a diagnostic for constituents can be broken down into (at least) three sub-problems, each sub-problem being associated with a particular phenomenon of coordination:

The following three data sets illustrate these three sub-problems:

The instances of gapping appear to involve non-initial conjuncts that are not constituents. Similarly, the conjuncts of right node raising (RNR) also do not appear to qualify as constituents, and the same is true of the conjuncts of non-constituent coordination (NCC). For each of these three phenomena, the associated literature is extensive.[3] However, most accounts cling to the assumption that coordination is in fact operating on phrase structure constituents.[4] On such accounts, apparent non-constituent conjuncts like those in (1)–(3) actually involve constituents in one sense or another. The accounts appeal to ellipsis in some way, or if they are in the tradition of Categorial Grammar, they assume flexible constituent structure (e.g. Steedman 1985, 1990, 1991; Dowty 1988).[5]

Taken together, the three types of data just enumerated motivate our rejection of coordination as a diagnostic for constituents in the standard sense. We assume instead that coordination is a mechanism that operates on strings, and we therefore pursue a theory that takes the notion of string coordination (Hudson 1988, 1989; Heringer 1996: 198–210; Osborne 2006) as its centerpiece. The coordination mechanism coordinates strings, whereby these strings may or may not qualify as constituents.

While we reject coordination as a test for identifying constituents in the standard sense, we think there is an aspect of coordination that does in fact provide guidance about the nature of constituent structure. This aspect is now illustrated with the following failed attempts at coordination:

Each of these examples is unacceptable on the reading indicated. On this reading, the material appearing to the immediate left of the coordinate structure is shared by the conjuncts. Examples (4a, f, g, l) are acceptable on the alternative readings that lack sharing of the pre-modifier to the immediate left of the initial conjunct.

We now introduce the term left node blocking (LNB) to denote the phenomenon illustrated with (4a–o): sharing by the conjuncts of certain material that appears to the immediate left of a coordinate structure is blocked. The term itself is a play on the term right node raising – more on the choice of terminology below. The LNB phenomenon illustrated with examples (4a–o) has been acknowledged to varying extents by some of the sources listed, although the analyses offered are not coherent accounts of the whole phenomenon. In this respect, we think the potential of LNB to shed light on the nature of syntactic structures has not been realized. This article focuses on LNB, demonstrating that it supports the more traditional analyses of sentence structure, those that viewed syntactic structures as relatively flat before strict binarity of branching took hold in generative circles.

The structures that we adopt here as the basis for exploring LNB and coordination more generally are actually even flatter than those of traditional phrase structure systems. We pursue an approach to syntax that is primarily dependency-based. Dependency structures are flatter than traditional phrase structures because they lack a finite VP constituent. Our dependency analysis of the sentence Sam has been buying beer is as follows:

The conventions in this tree are those that we prefer for representing dependency structures graphically. The words themselves appear in the tree as node labels. The string of words at the bottom of the tree is rendered just as it would be in standard written English, maintaining most of the standard orthographic conventions. The words in the tree itself are then given in a way that matches the words below and thus allows exact vertical alignment. For instance, sentence-initial capitalization is maintained.

The pertinent aspect of this structure in (5a) is the absence of a finite VP constituent: has been buying beer is not a complete subtree (i.e. not a constituent) in (5a). The corresponding phrase structure tree would likely include a finite VP constituent. Rendering the sentence in terms of phrase structure while maintaining the tree-drawing conventions employed in (5a), we obtain the following tree:

The finite VP has been buying beer now corresponds to a complete subtree. This phrase structure tree is more layered – five layers in (5b) but only four in (5a). The message delivered in this article is that the relatively flat structure shown as (5a) is consistent with what the LNB phenomenon reveals about syntactic structures. The greater message, then, is that flat structures – be they dependency or phrase structures – are consistent with the LNB phenomenon illustrated with examples (4a–o).

This article lays out a theory of coordination that accounts for the LNB phenomenon. Section 2 establishes more clearly just when LNB occurs. Section 3 presents key ideas and concepts that are needed for the dependency grammar (DG) account of LNB, introducing the root and constituent notions. Section 4 presents the constraint on the coordination mechanism that predicts the LNB data; this constraint is called the Principle of Full Clusivity (PFC). Section 5 discusses the manner in which the PFC reveals how syntactic structures are being produced and processed in time (left to right). Section 6 examines apparent counterexamples to the PFC that involve the ellipsis of gapping. Section 7 acknowledges counterexamples to the PFC that involve prepositions in preposition-stranding languages. Section 8 illustrates and discusses the extent to which the LNB phenomenon is consistent with relatively flat structures, but problematic for more layered ones. Section 9 extends the coverage to data from German and Japanese. Section 10 concludes the article.

2 The LNB phenomenon

The coordination mechanism is quite flexible; just how flexible is illustrated using the following sentence, for which we provide the dependency structure:

Most of the distinct strings present in sentence (6) can be coordinated:

Acceptability increases in certain cases if the appropriate intonation contour is used. Some of the indicated conjuncts that do not qualify as constituents can perhaps be analyzed in terms of RNR (examples (7k, l, n, p)). Other such conjuncts, however, cannot; they are more in line with an analysis in terms of NCC (examples (7h, i, j, m)).

Examples (7a–q) deliver a sense of just how permissive coordination actually is. In fact, an initial impression might be that almost every distinct string present in (6) can be coordinated. This impression is, however, inaccurate. There are six words in (6), which means there are 21 ($=6+5+4+3+2+1$) distinct strings present. Examples (7a–q) above demonstrate that 17 of these distinct strings can be coordinated. What about the other four? There are in fact four strings in the original sentence that cannot be coordinated, and these four, presented in (7r–u) below, are the type of data that this contribution focuses on; they involve LNB:[7]

Each of these attempts at coordination fails. For some as yet unidentified reason, the possessive determiner her is blocked from being shared if the verb gave is included in the initial conjunct. But why should this be the case? How can this trait of coordination, i.e. LNB, be described and explained in a coherent manner? These questions are particularly vexing in light of the fact that shared material appearing to the immediate right of the conjuncts is not restricted in this way. Analogous failed attempts at coordination with shared material to the right of the coordinate structure do not occur, as RNR-type data demonstrate, e.g. [He sat on] and [she crouched under] the table, [He has a picture] and [she has a portrait] of you.

The term we have chosen to denote the phenomenon, i.e. left node blocking (LNB), is, as stated above, a play on the term right node raising (RNR). But LNB is crucially unlike RNR insofar as ‘LNB’ denotes failed attempts at coordination, whereas ‘RNR’ denotes successful attempts. Certain material that immediately precedes the coordinate structure cannot be shared, whereas there is no analogous restriction limiting the material that immediately follows the coordinate structure. The stance taken and the message developed in this article are that the principle of coordination responsible for the LNB phenomenon is an important clue about the nature of syntactic structures in general. An understanding of the principle reveals that the behavior of coordination is actually consistent with an approach to syntax that takes syntactic structures to be relatively flat. In fact, LNB challenges those phrase structure analyses that take syntactic structures to be more layered.

The goal now is to identify, describe, and understand the principle of syntax that accounts for the LNB phenomenon.

3 Central concepts

The following subsections present some key aspects of our dependency theory of coordination. These aspects are consistent in important ways with the approach developed by Osborne (2008). When our account differs from Osborne’s, however, we point out the differences.

3.1 Theoretical preliminaries

In order to produce an account of the LNB phenomenon, we must first establish some more general aspects of the current approach to coordination and grammar. The following points summarize the central assumptions:

These points are consistent with established DG accounts of coordination in important ways (see Hudson 1988, 1989, 1990; Lobin 1993; Heringer 1996; Groß 1999; Osborne 2006, 2008). We agree with Hudson (1988: 315, 1990: 404–421) concerning the first point above: dependency syntax can understand coordination in a way that is not available to phrase structure syntax. Dependency syntax can augment its analyses with constituencies, whereas it is difficult to see how phrase structure syntax could augment its view of sentence structure with DG dependencies. Note that Groß’ (1999) and Osborne’s (2006, 2008) accounts are three-dimensional; the conjuncts of coordinate structures are arranged along the third dimension (z-axis). Our account here, in contrast, is two-dimensional (x-axis and y-axis only).

The understanding of coordination as represented by the four points above is now sketched using the following tree conventions:

These trees illustrate key aspects of our approach to coordination. The roots (see the next section) of coordinated strings are matched using the horizontal connectors. The brackets are included in the trees to help indicate the extent of the coordinate structure. The coordinator is attached to the word that immediately follows it (or to the word that immediately precedes it, as in the Japanese examples below). Dependencies reach into the nearest conjunct only; thus, arrived in (8a), for example, is connected directly to Tom alone, not to Fred as well. The nested coordinate structure in (8c) is particularly important, since it shows the manner in which phrase structure is assumed. The words Tom and Fred form a constituent together, and this constituent is then coordinated with Bill, forming the greater constituent Tom and Fred, or Bill. A purely dependency account of coordination would have difficulty accommodating nesting of this sort.

There are a few empirical considerations that support the approach to coordination just sketched. Having the dependencies reach into the nearest conjunct only preserves the valence traits of valence carriers. Consider, for instance, how coordination would impact the understanding of valence if dependencies reached into all the conjuncts, e.g.

This analysis (of example (8a) above) attaches both Fred and Tom to arrived – instead of to just Tom. Such an analysis is similar to the original account of coordination proposed by Tesnière (1959/2015: Chapters 136, 143). It incurs the problem of misrepresenting the valence of the verb arrive. As an intransitive verb, arrive accepts just one subject valent, not two. The analysis shown in (8a) accurately reflects this aspect of arrive’s valence, whereas the analysis in (8a$^{\prime}$) misrepresents arrive’s valence, since it suggests that arrive can take two subject valents.

A second observation that supports having dependencies reach into the nearest conjunct only concerns subcategorization restrictions. The subcategorization characteristics are less strict for conjuncts that are further away from the head words, e.g.

Sentence (9b) is unacceptable because about cannot take a that-clause as its complement. Sentence (9c) demonstrates in this respect that if the that-clause is further removed from about, appearing as the non-initial conjunct of a coordinate structure, it can in fact occur. Extending the dependency from about to dissatisfaction alone accommodates this loosening of the subcategorization restriction, since the dependency between about and the that-clause becomes indirect.

The loosening of subcategorization restrictions also occurs in the case of multiple complements. A transitive verb can be coordinated with a ditransitive verb, whereby the relevant complements all appear outside the coordinate structure, e.g.

The object NP the beer is shared by the coordinated verbs in (10a). The complement PP to Bill, in contrast, is a dependent of the second verb sent only. Sentence (10b) illustrates that shared material should appear outside of the coordinate structure, and sentence (10c) demonstrates that neither complement need be shared. Extending the dependencies into the nearest conjunct only accommodates these facts. If the analysis extended the dependency from to Bill to both sent and purchased, the valence potential of purchased would be misrepresented, *We purchased the beer to Bill.

A third observation that supports having dependencies reach into the nearest conjunct only has to do with agreement. At times agreement obtains with the nearest conjunct only, e.g.

Examples (11a–b) illustrate that if agreement obtains between one of two coordinated determiners and a noun, then it should be congruent with the determiner that is closest to the noun. Similarly, while the acceptability judgments for examples (12a–b) are not robust, the verb appears to prefer agreement with the closer of the two subject nouns. These facts are easy to accommodate if the dependency that reaches into the coordinate structure extends into the nearest conjunct only, as shown.

3.2 Roots and constituents

Strings contain roots. The root concept is defined over strings of words organized in terms of dependencies as follows:

For similar definitions of this concept, see Hudson (1990: 412), Pickering & Barry (1993: 891), and Osborne (2008: 1134). The following dependency structure is used to illustrate roots:

The arrow dependency edges identify in bed and on weekends as adjuncts. The arrow indicates that semantic selection is pointing from the adjunct towards its governor; the adjunct is semantically selecting its governor, not vice versa. In employing the arrow in this manner, we are following others (e.g. Tesnière 1959/2015: Chapter 21; Engel 1994: 44; Jung 1995: 111–116; Eroms 2000: 85–86; Mel’čuk 2003: 193), although the exact visual convention employed across these sources varies in important ways.

Any string that one picks out from (13) has at least one root, and some strings have more than one. To provide examples, some distinct strings in (13) and their root(s) are listed in Table 1. The flatter a syntactic structure, the more roots there are in the strings that make up that structure.

Table 1 Examples of roots in strings.

Coordinated strings can contain two, three, or even more roots. The roots are matched across the conjuncts. The roots in coordinated strings are marked here and in many of the examples below using bold script:

In this case, the roots you and Susan, yesterday and on, and in and in are matched across the conjuncts. The entire structure is rather flat. This flatness of structure is important for the account of LNB.

Next, a definition of the constituent is needed. Constituents are identified by a number of heuristics, diagnostics for constituents being the most important of these. Once a given constituent analysis has been arrived at and the corresponding tree representation produced, then the constituent can be defined in a theory-neutral manner over both dependency and phrase structure trees as follows:

This definition of the constituent is similar to the definition that one finds in phrase structure grammars when the constituent is defined over phrase structure trees: all the material that a given node exhaustively dominates is a constituent (see Keyser & Postal 1976: 34; Atkinson, Kilby & Roca 1982: 161; Haegeman & Guéron 1999: 51; van Valin 2001: 117; Poole 2002: 35; Kroeger 2005: 40; Tallerman 2005: 136; Carnie 2010: 37; Sportiche et al. 2014: 47). This understanding of constituents has also been used by certain DGs (e.g. Hays 1964: 520; Hudson 1984: 92; Starosta 1988: 105; Hellwig 2003: 603; Osborne 2008: 1126; Anderson 2011: 92). According to the definition, any complete subtree is a constituent. This fact makes the definition theory-neutral with respect to the dependency vs. phrase structure distinction. The definition identifies constituents in both dependency syntax and phrase structure syntax.

Worth noting is that traditionally many DGs have not acknowledged constituents. They do, however, acknowledge complete subtrees, so the issue is merely a terminological distinction, not a substantive one. We are using the term constituent here in part because the account will be applied to phrase structures further below, and phrase structures acknowledge constituents, of course.

According to the definition and the dependency tree, there are eight distinct constituents (including the whole) in sentence (13) above: Susan, romance, bed, weekends, romance novels, in bed, on weekends, and Susan reads romance novels in bed on weekends. Seven of these strings appear as complete subtrees and the eighth string is the whole. Acknowledging roots and constituents in dependency structures makes it possible to identify the principle of coordination that is responsible for the LNB phenomenon.
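To make the two notions concrete, the following sketch may help; it is ours and purely illustrative, not part of the article’s formal apparatus. It encodes the dependency analysis of (13) as a simple position-to-governor map (the encoding and the function names are our assumptions) and computes the roots of arbitrary strings as well as the complete subtrees, i.e. the constituents:

# A minimal illustrative sketch (ours): the dependency analysis of (13)
# 'Susan reads romance novels in bed on weekends' as a position-to-governor map.
words = ['Susan', 'reads', 'romance', 'novels', 'in', 'bed', 'on', 'weekends']
heads = {0: 1, 1: None, 2: 3, 3: 1, 4: 1, 5: 4, 6: 1, 7: 6}

def roots(span):
    # A word is a root of a string if its governor is not part of that string.
    return [i for i in sorted(span) if heads[i] not in span]

def subtree(i):
    # All positions that position i dominates, including i itself.
    out, changed = {i}, True
    while changed:
        changed = False
        for j, h in heads.items():
            if h in out and j not in out:
                out.add(j)
                changed = True
    return out

print([words[i] for i in roots({0, 1})])     # 'Susan reads'       -> ['reads']
print([words[i] for i in roots({2, 3, 4})])  # 'romance novels in' -> ['novels', 'in']

# Every complete subtree counts as a constituent; the sketch recovers exactly
# the eight constituents of (13) enumerated above.
constituents = {frozenset(subtree(i)) for i in heads}
print(len(constituents))                     # -> 8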

4 The Principle of Full Clusivity

Osborne (2008) produces a restriction that predicts the LNB data; he calls it the restriction on external sharing. We simplify Osborne’s restriction somewhat, and express and rename it in a way that is more transparent.[8] We call the restriction the Principle of Full Clusivity:

The following data provide an overview of the types of structures where this principle limits the strings that can be coordinated. The underline is used henceforth to mark the constituent relevant to the PFC:

Each of these examples is unacceptable because the initial conjunct cuts into the underlined constituent. Observe that the underlined constituent in examples (15a–c) precedes the finite verb and is therefore a predependent of the verb. In examples (15d–e), in contrast, the underlined constituent follows the verb and is therefore a postdependent of the verb. What this means is that LNB is not limited to appearing in one particular area of a clause.

An initial tree for example (15a) is now given in (15a$^{\prime}$) to illustrate how the PFC should be understood:

The relevant constituent is underlined. The constituent the man precedes the root arrived of the initial conjunct. The example is unacceptable because the initial conjunct cuts into this constituent, instead of including or excluding it entirely. In other words, the man is a constituent that precedes the roots arrived and left of the coordinate structure, and this constituent consists of both the shared material the and the non-shared material man. The example is fine if the conjuncts are extended leftward to include the definite article, [The man arrived] and [the woman left], or shortened rightward to exclude the entire NP, The man [arrived] and [left].
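To make the logic concrete, the sketch below is again ours and purely illustrative rather than the article’s own formulation of the principle. Applied to example (15a), it flags a violation when some constituent lying entirely before the root of the initial conjunct contains both shared material to the left of the coordinate structure and material inside the initial conjunct:

# Illustrative PFC check for (15a) *'The [man arrived] and [woman left]'.
# Only the material up to the end of the initial conjunct is encoded
# (a simplification assumed by this sketch).
words = ['The', 'man', 'arrived']
heads = {0: 1, 1: 2, 2: None}   # The -> man, man -> arrived, arrived = root

def subtree(i):
    # All positions dominated by position i, including i itself.
    out, changed = {i}, True
    while changed:
        changed = False
        for j, h in heads.items():
            if h in out and j not in out:
                out.add(j)
                changed = True
    return out

def violates_pfc(initial_conjunct):
    # True if a constituent that precedes the conjunct root is cut into,
    # i.e. only partly included in the initial conjunct.
    conjunct = set(initial_conjunct)
    shared = set(heads) - conjunct   # material to the left of the coordinate structure
    conjunct_root = min(i for i in conjunct if heads[i] not in conjunct)
    for i in heads:
        c = subtree(i)
        if max(c) < conjunct_root and c & conjunct and c & shared:
            return True
    return False

print(violates_pfc({1, 2}))      # *The [man arrived] and [woman left]    -> True
print(violates_pfc({0, 1, 2}))   # [The man arrived] and [the woman left] -> False
print(violates_pfc({2}))         # The man [arrived] and [left]           -> False

The last two calls correspond to the two repairs just mentioned: extending the conjuncts leftward to include the definite article, or shortening them rightward to exclude the entire NP.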

The illustrations of examples (15b–e) continue with examples (15b$^{\prime}$–e$^{\prime}$). Example (15e$^{\prime}$) is split in two due to space limitations on the page:

These trees show the manner in which the underlined constituent is cut into by the initial conjunct; it contains material that is inside the coordinate structure as well as material that is outside of the coordinate structure. The PFC disallows sharing of this sort. If, however, the conjuncts are extended leftward, each example becomes good, e.g. [A man with long hair arrived] and [a man with short hair left]. Or if the conjuncts are shortened rightward, each example also becomes good, e.g. A man with long hair [arrived] and [left].

One might object that examples like (15a–e) are unacceptable because attempts have been made to coordinate non-constituent strings, not because there is any sort of restriction on constituents preceding the roots of the conjuncts. Indeed, the LNB phenomenon can be misconstrued as motivating the widespread use of coordination as a diagnostic for constituents, since it appears as though (15a–e) are unacceptable precisely because the coordinated strings are not constituents. To accept such reasoning, however, one has to overlook and/or play down the importance of the various cases illustrated above in Sections 1 and 2, the cases where coordination is possible even though the coordinated strings do not qualify as constituents in surface syntax. In contrast, the PFC accounts for cases like (15a–e) without forcing one into the untenable position that sees coordination operating on constituent strings only.

To establish a fuller sense of the work that the PFC does, examples (4a–o) from the introduction are repeated here as (16a–o). The underlines and bold script continue to identify the relevant constituent and the roots of the conjuncts. Due to space limitations, we refrain from producing the trees for these examples:

In each of these cases, the underlined constituent is cut into by the coordinate structure, violating the PFC. Recall that examples (16a, f, g, l) are acceptable on the alternative readings that lack sharing of the pre-modifier to the immediate left of the initial conjunct.

5 Production and processing in time

An aspect of the LNB phenomenon that Osborne (2008) does not address concerns its motivation. What does the PFC actually accomplish? The discussion in this section demonstrates that the principle exists to smooth the production and processing of coordinate structures and of syntactic structures more generally.

Coordinate structures are produced and processed in time (earlier to later). This trait is evident in the fact that only constituents preceding the conjunct roots are restricted. The very nature of RNR data demonstrates that the PFC is not a mirror image rule, as mentioned above in Section 2. This fact challenges approaches to syntax that see syntactic structures being produced bottom-up (for instance, in terms of Merge) as well as those theories of syntax that are multistratal. If syntactic structures were being generated bottom-up, or if an underlying level of syntax existed that excludes linear order, we would expect the restriction on sharing to affect material following the conjuncts of coordinate structures in the same manner that it does material preceding the conjuncts, but that is not what we encounter.

The LNB phenomenon reveals how constituents are perceived by language users as they produce and process syntactic structures. As soon as a constituent is encountered in production and processing, subparts of that constituent become inaccessible to sharing. The constituent as a whole can be shared, but not some subpart of it. The point is illustrated first with head-final structures:[9]

The capital letters represent words. At the point in production and processing immediately after A appears as shown in (17a), there is no indication as to whether A is a constituent or not. At the point when B appears as shown in (17b), however, A is acknowledged as a constituent, and as such it can be shared by conjuncts that follow (e.g. his [wife] and [friend]).

At the point in production and processing immediately after C appears as shown in (17c), AB is acknowledged as a constituent and can be shared (e.g. his wife’s [friend] and [boss]). Crucially, however, A alone cannot be shared because it is a subpart of a constituent that has already been perceived (e.g. *his [wife’s friend] and [sister’s boss]). By the time D appears as shown in (17d), ABC is acknowledged as a constituent and can be shared (e.g. His wife’s friend [arrived] and [sat down]), but A alone cannot be shared (e.g. *His [wife’s friend arrived] and [sister’s boss left]), nor can AB alone be shared (e.g. *His wife’s [friend arrived] and [boss left]).

Examples (17a–d) illustrate the manner in which constituents are produced and processed for a head-final structure. When a head-initial structure is involved, the situation is much different:

In the left-to-right production and processing of this example, the only constituent that is perceived is the entirety when (18d) is reached. This fact means that each of the indicated strings can be coordinated: [Stop] and [start] doing it; [Stop whining] and [start thinking] about this!; Stop [whining about this] and [complaining about that]!; ?Stop whining [about this] and [about that]!; Stop whining about [this] and [that]!; Stop [whining] and [complaining] about this![10]

A comparison of these cases reveals an important distinction across head-initial and head-final structures. Head-initial structures are much more permissive for coordination. In contrast, the PFC is a major limitation on the coordinate structures that can occur when head-final structures are present. Interestingly, when syntactic structures are flat, i.e. neither head-initial nor head-final, we see that the PFC again limits the coordinate structures that can occur, e.g.

In this case, no constituent can be perceived until the structure in (19c) is reached, at which point B is acknowledged as a constituent. Note that BC is not yet perceived as a constituent in (19c) because the next node that appears might be a dependent of C and as such, it would render BC a non-constituent. As soon as D appears, however, BC is acknowledged as a constituent, which blocks B alone from being shared (e.g. *Eat this [donut today] and [pastry tomorrow]!). Observe that AB is not acknowledged as a constituent at any point, which means it can be coordinated (e.g. ?He was [for your] but [against my] proposal yesterday).

The main point developed and illustrated in this section has been that the PFC limits the strings that can be coordinated when head-final structures and/or flat structures are involved. When head-initial structures are present, however, the principle exercises no influence over the coordinate structures that can occur.

6 Interference from gapping

A question the reader might be contemplating concerns the fact that the LNB phenomenon has gone largely unnoticed until now: Why has the PFC been overlooked? The answer to this question is in two parts, each part having to do with a type of data that obscures the true nature of the LNB phenomenon: cases involving gapping and cases involving prepositions. This section addresses the first of these two areas, i.e. the role of gapping.

The following example appears to contradict the PFC:[11]

The bracketing in (20a) is in terms of NCC. However, the underlined constituent to read Chomsky precedes the root since, so that constituent should be included in or excluded from the coordinate structure entirely, but it obviously is not. Thus, the PFC appears to be making an incorrect prediction in this case. Sentence (20b), which is an instance of gapping, provides a clue about sentence (20a). The similarity across (20a) and (20b) suggests that sentence (20a) may actually involve gapping, which would mean it is not an instance of normal string coordination. The message delivered in the following subsections is that this is indeed the case. That is, example (20a) actually does not contradict the PFC because it involves gapping.

6.1 External vs. internal sharing

Before directly addressing the role that gapping plays in the current account of LNB, the general distinction between string and gapping coordination must be established. This distinction is understood in terms of the position of the shared material with respect to the conjuncts of the coordinate structure. The conjunct-external sharing of string coordination behaves differently from the conjunct-internal sharing of gapping in interesting ways. The main distinction is illustrated with the following two sentences. A light font shade is used henceforth to mark gapped material.

The material in italics each time can be interpreted as shared by the conjuncts of the coordinate structure. When this material can be construed as appearing outside the coordinate structure as in (21a), string coordination (usually) obtains, whereas when this material must be construed as appearing inside (the initial conjunct of) the coordinate structure as in (21b), gapping obtains. Gapping is a form of ellipsis, as the text in light gray in (21b) is intended to indicate.

The distinction between the conjunct-external sharing of string coordination and the conjunct-internal sharing of gapping is supported by empirical differences that have been acknowledged and discussed to varying degrees in numerous places (e.g. Kuno 1976; Klein 1981; van Oirsouw 1987; Hudson 1989; Wesche 1995; McCawley 1998; Zoerner & Agbayani 2000; Osborne 2006, 2008). Four of these differences are: subject–verb agreement, distinct pronoun forms, the (im)possibility of redundancy, and distinct readings. Each of these differences will now be illustrated in turn.

6.1.1 Subject–verb agreement

A number of linguists have observed (e.g. Wesche 1995: 139; McCawley 1998: 285; Zoerner & Agbayani 2000: 551; Osborne 2006: 45, 2008: 118) that subject–verb agreement is strict with conjunct-external sharing:

The (a) sentences are marginal or unacceptable because the finite verb cannot agree with each of the subjects simultaneously (have vs. has and am vs. are). The (b) sentences are much better because the finite verb does agree with each of the subjects. When the conjunct-internal sharing of gapping occurs, in contrast, agreement becomes more lenient:

Despite the mismatching forms of the finite verb across the conjuncts (have vs. has and are vs. am), the conjunct-internal sharing of gapping is possible.

6.1.2 Distinct pronoun forms

Some linguists have noted that conjunct-external sharing is strict concerning pronoun forms, whereas conjunct-internal sharing is lenient in this respect (e.g. Hudson 1989: 63; Zoerner & Agbayani 2000: 550; Osborne 2008: 117–118). Given conjunct-external sharing, the subject form of the pronoun can be obligatory, e.g.

The conjunct-internal sharing of gapping, in contrast, is more flexible, allowing the object forms of the pronouns to serve as subjects:[12]

Despite the fact that her and them are functioning as subjects, these pronouns are in their object forms. This is a trait associated with ellipsis in general, for instance with answer fragments: Who helped? – Me./*I. vs. *Me did./I did.

6.1.3 (Im)possibility of redundancy

Conjunct-external sharing allows some redundancy for the sake of emphasis:

The repetition of watermelon and beans is possible in these cases. In contrast, various linguists (e.g. Kuno 1976: 309; Klein 1981: 73; van Oirsouw 1987: 218; Hudson 1989: 67; Osborne 2006: 46, 2008: 1118) have noted that when gapping occurs, this redundancy is less acceptable:

The conjunct-internal sharing of gapping is dubious in the (b) sentences due to the redundancy of watermelon and today. When this redundancy is removed, as in the (c) sentences, gapping is fine. There is thus a contrast requirement on the remnants of gapping: the remnants should stand in contrast to the parallel constituents in the antecedent clause.

6.1.4 Distinct readings

The reading associated with conjunct-external sharing can be distinct from that of conjunct-internal sharing:

The reading indicated in (32a) is that of string coordination; the answer is Yes or No. The reading indicated in (32b), in contrast, is that of gapping; the answer is Coffee today or Tea yesterday. These distinct readings are accommodated based upon the difference between conjunct-external and -internal sharing.

While the difference between conjunct-external and -internal sharing provides a basis for the distinction between gapping and string coordination, there is a large gray area that has to be acknowledged concerning flexibility of analysis. This gray area is seen in examples (33a–b), where both analyses are possible for one and the same sentence:

The analysis in terms of string coordination assumes small conjuncts, whereas the gapping analysis assumes large conjuncts in such a manner that the finite verb is included in the coordinate structure. Since it is often possible to extend the conjuncts leftward to include the finite verb, the gapping analysis often competes with the analysis in terms of string coordination. Indeed, the possibility to assume gapping at times is important for the account of LNB, a point that will be established shortly.

While a pause and/or the addition of an additive particle (e.g. too, also, as well) can promote the gapping reading/analysis, in principle both analyses are often plausible. But consider examples (26)–(27) in this regard. Those examples are repeated here as (34)–(35), although we now add the (c) sentences showing the gapping analyses:

The pertinent question in these cases concerns the failure of gapping. Gapping allows the object forms of the pronouns to function as subjects, as demonstrated with examples (28)–(29), yet gapping is blocked in (34c) and (35c). Why? The reason, we believe, is that when just an auxiliary verb is gapped, acceptability decreases to begin with, e.g. ?[He has started] and [she/her has finished], ?[You are staying], but [they/them are going]. Gapping/stripping prefers to elide entire predicates, i.e. it prefers to elide the auxiliary together with the full verb that it accompanies, e.g. [He has started], and [she/her has started, too].

A related observation is that there is a general preference for string coordination over gapping due to its lower cognitive load, since string coordination does not involve ellipsis. What this means is that if other things are equal, the reading in terms of string coordination is preferred, e.g.

Although not impossible, the gapping reading indicated in (36b) is unlikely, the reading of string coordination indicated in (36a) being preferred. Such a gapping reading becomes more plausible, however, if semantic factors (and intonation) promote it, e.g.

In this case, semantic factors tip the scale in favor of the gapping reading.

This section has established the distinction between string and gapping coordination. String coordination involves conjunct-external sharing, whereas gapping has the shared material appearing inside the initial conjunct. The empirical differences across the two types of sharing are accommodated in terms of ellipsis. Gapping is a form of ellipsis, whereas string coordination does not involve ellipsis. When both string coordination and gapping appear to be possible, string coordination is preferred due to the lower cognitive load. The importance of the distinction is that certain apparent counterexamples to the PFC, e.g. example (20a) above, allow an analysis in terms of the ellipsis of gapping and therefore do not actually challenge the current account of LNB.

6.2 The specificity effect

The following three sentences are similar to sentence (20a) above insofar as they appear to contradict the PFC:

The string analyses indicated with the brackets, underlines, and bold script suggest that these coordinate structures should violate the PFC. In each case, the underlined constituent precedes a root in the initial conjunct. The relevant insight concerning such cases has to do with gapping, of course. The analyses indicated are in fact incorrect; these sentences actually involve gapping.

The plausibility of the gapping analysis for such cases starts to become visible with the following acceptability contrast:

Due to the presence of the subject in the non-initial conjunct each time, these examples necessarily involve the ellipsis of gapping, as indicated with the lighter font shade. The acceptability contrast across the (a) and (b) sentences is due to a specificity effect. The gapping mechanism is incapable of including part of a specified NP in the gap. When the NP is not specified as in the (a) sentences, the gap can cut into it, but when the NP is fully specified as in the (b) sentences, the gap cannot cut into it.

Returning to examples (38)–(40), we see that the same specificity effect shows up in those cases. Sentences (38)–(40) are repeated here as the (a) examples in (44)–(46), but this time the analyses shown are in terms of gapping; the (b) examples are added to draw attention to the specificity effect:

The same acceptability contrast across the (a) and (b) sentences is present in these cases. In other words, the (b) sentences here in (44)–(46) are dubious for the same reason that the (b) sentences in (41)–(43) are dubious. The attempts to elide part of a fully specified NP mostly fail.

To summarize the insight, certain apparent counterexamples to the PFC are in fact not counterexamples, but rather they involve the gapping mechanism. These counterexamples look like string coordination, but in fact they involve the ellipsis of gapping. That being so, example (20a) above, given here again as (47), actually receives the following analysis:

On this gapping analysis, the PFC does not come into play. The only constituent that precedes the roots of the coordinate structure is I, and I is included in the coordinate structure entirely (it cannot be cut into, of course).

6.3 String vs. gapping NCC

The previous section has demonstrated that apparent counterexamples to the PFC are due to interference from gapping. In fact, the possibility of gapping raises a general concern about the analysis of many instances of NCC, e.g.

Based on the examples and discussion in the previous two sections, both of these analyses seem possible. This fact raises the difficult question concerning the nature of NCC in general. Should all putative cases of NCC be analyzed in terms of gapping? In other words, perhaps the string analysis of NCC shown in (48a) is in fact never available. The discussion now focuses on this possibility, demonstrating that it is incorrect. The string analysis shown in (48a) is actually accurate. Gapping is a last resort that is accessed in those cases where the PFC (or some other principle of semantics or syntax) blocks string coordination.

The competing analyses given with (48a–b) have been debated in recent years. A number of accounts argue that all instances of NCC actually involve ellipsis (e.g. Beavers & Sag 2004, Yatabe 2012, Sailor & Thoms 2013) – although these accounts vary in important ways and not one of them specifically appeals to the ellipsis mechanism of gapping. Other accounts argue that ellipsis is not involved at all (e.g. Mouret 2006, Levine 2011). The discussion here now reiterates and expands on the point made above in Section 6.1, namely that when other things are equal, string coordination is preferred over gapping.

A couple of considerations demonstrate that non-gapping NCC is real, that is, there are cases of NCC that do not submit to an ellipsis analysis (in terms of gapping or otherwise). Section 6.1 already presented a couple of these cases – see examples (30)–(31) and (34)–(35) – and the point is further strengthened by the observations that follow. A gapping analysis is not available in the following sentences involving coordinated postdependents of a noun:

Gapping is not available in these cases because the coordinate structure precedes the finite verb each time.[14] A gapping analysis would have to assume that the gap can precede its filler, and this is not generally how gapping is understood in English. Since gapping is not available here, a non-gapping NCC account is necessary.

Another consideration demonstrating that non-gapping NCC exists is evident in the meaning contrast across the following sentences:

Levine (2011) points out that on the ellipsis approach indicated in (50b), sentences like these two should be truth-conditionally identical, but they are of course not: we can imagine a situation in which sentence (50a) is false, but sentence (50b) true. A similar type of data is next:

The semantic mismatch across these two sentences is apparent. The ellipsis analysis incorrectly predicts these two sentences to mean the same thing.[15]

The aspect of the mismatch illustrated with examples (50)–(51) that is particularly telling concerns true instances of gapping. Namely, the mismatch does not obtain when gapping is indisputably present:

The instances of gapping in (52b) and (53b) mean the same thing as their non-gapping counterparts (52a) and (53a). Apparently, the semantic mismatch illustrated with examples (50)–(51) does not obtain when gapping really occurs.[16]

The examples just considered demonstrate that there are instances of NCC that do not submit to a gapping account. The conclusion, then, is that non-gapping NCC does indeed exist. When both analyses would seem to be possible, as with examples (48a–b) above, the non-gapping analysis is preferred because of its lower cognitive load.

7 Preposition stranding

A second type of data that challenges the PFC occurs with prepositions in the few languages that allow preposition stranding (e.g. English, Swedish, and some varieties of Welsh). The PFC incorrectly predicts the following sentence to be unacceptable:

The initial conjunct cuts into the constituent to Denver, which should constitute a violation of the PFC. Sentence (54) is, however, acceptable. The importance of examples like (54) in syntactic theorizing is evident in Pesetsky’s (1995) analysis of such cases. Pesetsky takes such data as evidence for his unorthodox analysis of constituent structure in terms of cascades. Cascades are strictly right-branching structures. The right-branching is such that the conjuncts form constituents. Pesetsky (1995: 176) produces the following data in this area – the underlines and bold script are our additions:

These cases should also constitute violations of the PFC and therefore be unacceptable. The fact that they are acceptable is a problem for the current understanding of LNB, and they do appear to support Pesetsky’s cascades.

Osborne (2008: 142–144) addresses data like (54)–(55). He points out that such contradictory data do not occur in most other languages. Frazier, Potter & Yoshida (2012) and Sailor & Thoms (2013) also address such data (in a somewhat different context); they point out that there is a systematic distinction in this area across languages that do and do not allow preposition stranding. Such conjuncts can share a preposition to their immediate left only in languages that allow preposition stranding. Since most other languages do not allow preposition stranding, the data that directly contradict the PFC are limited cross-linguistically. This observation is relevant for the conclusion Pesetsky draws based upon data like (55a–c). His cascades are motivated by a type of data that occur in a limited set of languages, but they are contradicted by analogous cases in a larger set of languages.

The extent to which the repetition of the preposition is obligatory or strongly preferred in other languages is illustrated with the following data from Osborne (2008: 1143–1144):

These examples are now consistent with the PFC. The repetition of the preposition in the (b) sentences extends the conjuncts leftward to include the entire PP – see Frazier et al. (2012: 145) and Sailor & Thoms (2013) for similar data from many other languages.

The importance of the fact that a certain type of data from a limited set of languages contradicts the PFC should not be overestimated. Such contradictory data do generate a basic question, though, namely: What is responsible for this contrast across the languages that do and do not allow preposition stranding? Consider in this regard that a similar challenge arises with long-distance dependencies. Languages like English that allow preposition stranding can extract the object of a preposition, whereas languages like German that do not generally allow preposition stranding cannot do this, e.g.

The same principle that allows English to ignore the prohibition on the extraction of the object of a preposition can also allow English to ignore the PFC when the relevant shared material is just a preposition. Apparently, prepositions in preposition-stranding languages are transparent to various principles of syntax more generally, the PFC being one of these principles.

A related observation that points to the importance of preposition stranding for addressing counterevidence against the PFC like examples (54)–(55) is apparent when one considers where preposition stranding can and cannot occur. A fronted PP does not allow preposition stranding, and similarly, a fronted PP cannot violate the PFC:

The unacceptability of examples (61a–b) is addressed in terms of the position of the preposition before the main verb; preposition stranding is apparently only possible if the preposition appears in its canonical position after the main verb and as a postdependent. Similarly, a preposition can contradict the PFC only if the PP appears in its canonical position after the main verb as a postdependent; if the PP has been fronted and thus become a predependent as in (62a–b), in contrast, PFC violations are unacceptable. Note that while example (62d), which does not violate the PFC, seems quite marginal, we think it is better than examples (62a–b), which do violate the PFC.

To restate the point, we concede that certain cases involving prepositions challenge the current account of the LNB phenomenon. The answer to this challenge emphasizes that such exceptions occur only with postdependent prepositions in the limited set of languages that allow preposition stranding, and the challenge therefore has to do with the theory of preposition stranding and how it relates to the PFC. Whatever aspect of prepositions allows preposition stranding, this aspect can also supersede the PFC.

8 Contra layered structures

Most of the discussion above has focused on the LNB phenomenon using the relatively flat structures of DG for the analysis. A key point to be established now is that the root concept and the PFC are consistent with flat structures in general, be these structures dependency or phrase structure. Problems arise, however, if one assumes a more layered analysis. The more layered analyses also incur a greater cost, since the account of coordinate structures in terms of roots becomes more complex.

To transition the current account of LNB to phrase structure syntax, it is first necessary to identify roots in phrase structures. An alternative definition of the root is needed to this end, one that identifies the same words as roots in phrase structure syntax. The definition of the root produced above is repeated here, followed by a phrase structure version of it:

These definitions can identify the same words as roots across dependency and phrase structures.[17]

The extent to which the two definitions can identify the same words as roots is illustrated across the following three trees. Tree (63a) is a dependency tree; tree (63b) is the direct translation of (63a) (dependency $\rightarrow$ phrase structure); and tree (63c) is similar to (63b), but a finite VP constituent has been added:

Because it is the direct translation of tree (63a), tree (63b) is unusual insofar as it lacks the traditional finite VP constituent; it takes the entirety to be one big VP instead. The traditional finite VP constituent does appear in (63c), which has the familiar S $\rightarrow$ NP VP division. Tree (63c) is similar to the analyses of sentence structure that one encountered in syntax textbooks before strict binarity of branching took hold in generative circles.

Consider next how trees (63a–c) fare with respect to the following pair of sentences:

According to the definitions of the root, all three trees take the words boy and hug as roots of the string the boy a hug. This means that the PFC makes the same correct prediction over all three trees; they therefore all correctly predict (63e) to be unacceptable, since the boy is a constituent that precedes the root hug in each tree. This situation generates the following question: if all three trees are making a correct prediction, which of them should be preferred? The answer to this question can be chosen based on the preferences of the grammarian at hand.

While examples (63a–e) suggest that the current account of LNB can be adopted into phrase structure grammar, difficulties emerge for phrase structure systems when the structural analyses become more layered. First, however, consider the following relatively flat analysis of a sentence that contains a post-verb adjunct:

The PFC correctly predicts (64b) to be unacceptable. The constituent her brother precedes the root today and the initial conjunct cuts into this constituent.

The relatively flat analysis shown in (64a) is, however, unusual. A more layered analysis, as shown in (64c) below, is likely to be preferred, one in which the adjunct appears above the object in the hierarchy of structure:

Details aside, the basic structural analysis shown here of the clause Sue visited her brother today is the preferred analysis in many textbooks. A left-branching VP is assumed in order to accommodate certain facts (associated with coordination and other diagnostics). With respect to the PFC, this structure no longer makes the correct prediction, however. Observe the underlined constituent visited her brother (V$^{\prime}$) in (64c). This constituent precedes the root today and is cut into by the initial conjunct, which means there should be a violation of the PFC, but there is not; the sentence in (64c) is perfect. The tree (64c) therefore demonstrates that the PFC is incompatible with more layered structures.

This section has considered the extent to which the current account of LNB can be adapted for phrase structures. If the phrase structures that one assumes are relatively flat, the current account can indeed be adopted into phrase structure grammar with little difficulty. But if more layered structures are assumed (i.e. ‘taller’ structures), the account in terms of roots can no longer be adopted in a straightforward manner.

9 More evidence for flatter structures

Before concluding this paper, we now consider data from two other languages, German and Japanese. These data are important for two reasons: the Japanese data suggest that the PFC is in force in languages that are far removed from English, and a confounding factor that obscures the presence of the PFC occurs in both languages. The pertinent insight in this area is that both languages allow rising/raising in terms of scrambling or fronting. The flatter structures that result from scrambling and fronting are then consistent with what the PFC predicts.

9.1 German

The LNB phenomenon is present in German, and its distribution is predicted by the PFC, as suggested by the following examples:

These data are similar to their English counterparts, as the translation each time suggests. The one exception is the translation of (67), where the preposition need not be repeated – see Section 7. Therefore, based upon these cases, the distribution of LNB in German is close to what it is in English.

There is, though, an important difference between English and German concerning the distribution of LNB. This difference has to do with the (in)ability to scramble constituents: German allows scrambling, English does not. With the ability to scramble in mind, examine the following two competing analyses of a German subordinate clause:

The structural analysis given in (70a) makes an inaccurate prediction. On that analysis, the example should be ungrammatical due to a violation of the PFC: the underlined constituent Bier kaufen ‘buy beer’ is cut into by the coordinate structure. The structural analysis given in (70b), in contrast, does not view Bier kaufen as a constituent, which means there is no violation of the PFC. The (b) analysis assumes rising (Groß & Osborne 2009, Osborne 2014); the constituent Bier rises to attach to kann, which dominates its governor kaufen. The dashed dependency edge indicates rising, and the $_{\text{g}}$ subscript marks the governor (here kaufen) of the risen constituent (here Bier).

Most of the time, rising results in flatter structures, and these flatter structures are congruent with the current approach to coordination, which takes the PFC as its centerpiece. In the case of (70), rising obviates what would otherwise be a PFC violation (the (a) analysis). Note that the rising analysis shown in (70b) receives independent motivation, i.e. motivation independent of coordination. The following (b) analysis of (what would otherwise be) a discontinuity due to scrambling supports the concept of rising:

The (a) analysis shows a projectivity violation, i.e. a discontinuity, and the (b) analysis shows how projectivity violations can be analyzed according to Groß & Osborne (2009). Rising occurs, resulting in a flatter structure, and this flatter structure can avoid PFC violations.

An outstanding issue concerns the necessity of rising. The rising shown in (71b) is necessary to avoid the projectivity violation shown in (71a). The pertinent question in the current context, though, is whether rising always occurs in the topological middle field (Mittelfeld) of German (and Dutch). The analysis shown in (70b) suggests that rising occurs in the middle field even when there is no discontinuity to avoid. The nature of so-called cross-serial dependencies in Dutch (and German) is one source of support for this assumption. Without rising, cross-serial dependencies, which are the unmarked word order in Dutch, would result in non-projective structures on a large scale. Furthermore, rising usually results in flatter structures, and these flatter structures can be easier to process insofar as they reduce center embedding.
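The projectivity notion appealed to here can likewise be made explicit. The sketch below is again illustrative only; it uses the standard definition under which an arc from a head to a dependent is projective if every word between the two is dominated by that head. The scrambled clause dass dieses Buch keiner gelesen hat ‘that no one has read this book’ and the two attachment options for the scrambled object are schematic stand-ins introduced here, not the article's numbered example (71).

```python
# Illustrative sketch: projectivity over a word-to-head encoding of a
# dependency tree. An arc is projective if every word between head and
# dependent is (transitively) dominated by the head.

def dominates(heads, h, d):
    """True if position h transitively dominates position d."""
    while d is not None:
        d = heads[d]
        if d == h:
            return True
    return False

def is_projective(heads):
    for dep, head in heads.items():
        if head is None:
            continue
        lo, hi = sorted((dep, head))
        if any(not dominates(heads, head, m) for m in range(lo + 1, hi)):
            return False
    return True

# Schematic scrambled clause 'dass dieses Buch keiner gelesen hat'
# ('that no one has read this book'); positions 0-5.
words = ["dass", "dieses", "Buch", "keiner", "gelesen", "hat"]
no_rising = {0: None, 5: 0, 3: 5, 4: 5, 2: 4, 1: 2}  # Buch attaches to its governor gelesen
rising    = {0: None, 5: 0, 3: 5, 4: 5, 2: 5, 1: 2}  # Buch rises to attach to hat

print(is_projective(no_rising), is_projective(rising))  # False True
```

Under the first attachment, the arc from gelesen to Buch crosses keiner, which gelesen does not dominate, so the structure is non-projective; under the risen attachment to hat, no arc is crossed. This mirrors the (a)/(b) contrast described for (71).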

This section has demonstrated that the LNB phenomenon is present in German just as it is in English. The one area where LNB is perhaps unexpectedly absent has to do with the ability of German to scramble constituents via rising. Rising in the German middle field results in flatter structures that obviate PFC violations. The next section demonstrates that rising also occurs in Japanese in a similar manner, obviating PFC violations.

9.2 Japanese

This section examines a strictly head-final language, namely Japanese. As stated in Section 5, the PFC predicts that coordination in head-final languages is more restricted than in languages with head-initial or flat structures. The rising mechanism can, however, also circumvent PFC violations in Japanese, just as it can in German. Since coordination in Japanese is expressed with substantial variation, the data below are limited to cases where an element expressing coordination can or must appear twice, i.e. after each conjunct. The coordinative devices used below are -to...(-to) (closed listing), -ka...-ka (disjunctive), and -mo...-mo (inclusive). We also restrict the analysis to nominal conjuncts.

Coordinating constituents in Japanese is straightforwardly possible. The following example provides orientation; it illustrates a standard case of coordinated constituents:

This example illustrates the strictly head-final nature of syntactic structures in Japanese. The coordinated strings Beikoku-o tabi-shita koto ‘traveled to America’ and Chuugoku-e itta koto ‘went to China’ are constituents.

Example (72) does not involve sharing in the sense relevant for the account of LNB. Since the PFC constrains the sharing of predependent constituents in head-final structures, however, it should be possible to test whether the PFC is in force in Japanese. The following examples suggest that it is:

Example (73) shows that the NP Mie-o ‘Mie[-acc]’ cannot be shared. (74) shows that the superlative ichiban cannot be shared by attributive adjectives in different conjuncts. (75) shows that the adverb hayaku ‘fast’ cannot be shared by the verbs hashiru ‘run’ and tobu ‘fly’ residing in different conjuncts. Finally, (76) demonstrates that the attributive NP Doitsu-no ‘German(y)-gen’ cannot be shared by the nouns dansei ‘man’ and josei ‘woman’ in different conjuncts. In each case, the predependent forms a constituent together with the next word, which is always part of the first conjunct; this creates a situation in which the constituent formed by the predependent and its governor is cut into by the coordinate structure.

The failed attempts at sharing in examples (73)–(76) should be compared with the following examples, which show successful attempts at sharing:

The acceptable examples (77)–(80) differ from (73)–(76) insofar as the shared constituents in (77)–(80), which are underlined, are fully excluded from the coordinate structure each time. The shared reading is therefore predictably available in each case.

There are exceptions that appear to challenge the PFC, though, and these exceptions are structurally similar to the German examples discussed in the previous section, e.g.

Sentence (81) is ambiguous between reading (81a), which fully includes the constituent Chuugoku-no josei-ni furareta ‘rejected by a Chinese woman’ in the coordinate structure, and reading (81b), which excludes the first part of that constituent from the coordinate structure. Reading (81b), however, should not be available according to the PFC.

In order to address the (b) reading of (81), we first direct attention to one well-established fact: discontinuities (i.e. long-distance dependencies) have been a well-studied phenomenon of Japanese syntax at least since Saito (1985). Japanese fronting involving a discontinuity is illustrated with the next example, taken from Saito (2010). The conventions employed to address the discontinuity appear again here (dashed dependency edge and $_{\text{g}}$ subscript; see examples (70)–(71) above).

In (82a) the allative complement Teruabibu-e ‘to Tel Aviv’ (boldface) resides inside the subordinate clause. (82b) shows that this complement can be fronted. It is not necessary to understand the subtleties of fronting (and other types of discontinuities) in Japanese; in the current context it suffices to acknowledge that this operation is available in Japanese. The analysis shown in (82b) is again that of Groß & Osborne (2009) and Osborne (2014). Rising is assumed; the head (omowanakatta) of the fronted constituent (Teruabibu-e) is not its governor (iku). The dashed dependency edge again marks the risen constituent, and the $_{\text{g}}$ subscript marks the governor of the risen constituent.

Returning to example (81), which is now given as (83), the (b) reading becomes more tractable once one acknowledges the possibility of rising. The structural difference between the two readings is shown next:

In (83a) the expression Chuugoku-no josei-ni ‘by a Chinese woman’ is a dependent of its governor furareta ‘rejected’. In that constellation, a shared reading is illicit. (83b) gives our analysis of the shared reading.Footnote [18] As the tree shows, Chuugoku-no josei-ni ‘by a Chinese woman’ rises to attach to Nihonjin(-to) ‘Japanese (and)’. The PFC violation is thus obviated.

The phenomenon has not gone unnoticed. Yatabe (2001) calls constellations such as the one shown in (83b) ‘left-node raising’, and puts forth a number of arguments in favor of acknowledging raising in such cases. However, Yatabe does not realize that the structural difference between examples such as (83a) and (83b) is accompanied by overt intonation signals. Consider the two associated readings yet again, given here as (83a $^{\prime }$ ) and (83b $^{\prime }$ ):

Two intonation signals and one prosodic signal are involved. If the first four words of (83) are pronounced as one intonation phrase (marked by curly brackets as in (83a $^{\prime }$ )) followed by a rising intonation (marked by the arrow) on the coordinator, followed by a major break (marked by two vertical lines), then the (a) reading applies. If, however, only the first two words are pronounced as one intonation phrase followed by rising intonation on the last syllable, followed by a break, then the (b) reading applies.

An open issue concerns the acceptability contrast between cases such as (73)–(76), where rising apparently cannot occur, and cases such as (83) (= (81)), where rising can occur (given the appropriate intonation and prosodic cues). This is not the place to explore the theory of discontinuities in Japanese. Whatever the explanation for the acceptability contrast, it lies more with the theory of discontinuities and rising than with the theory of coordination.

In summary, LNB occurs in Japanese just as it does in English and German, and the PFC helps predict when and where it appears. Apparent counterexamples have to do with the ability to rise/raise constituents; risen/raised constituents can be marked by intonation and prosody. The result of rising/raising is flatter structures, and without these flatter structures, the PFC would make incorrect predictions. The upshot is therefore again that relatively flat structures are consistent with the LNB phenomenon and with the account thereof in terms of the PFC.

10 Conclusion

This article has examined an aspect of coordination that has not been explored in detail until now. This aspect is the observation that while coordination is capable of coordinating most strings, there are in fact attempts at coordination that fail, and these failed attempts have the shared material to the immediate left of the coordinate structure. The phenomenon has been named left node blocking (LNB) – the term is a play on the related concept of right node raising (RNR). LNB occurs when the initial conjunct of a coordinate structure cuts into a constituent that appears on its left side. The account of LNB presented here has appealed to the production and processing of syntactic structures in time (from earlier to later). Constituents that appear in online production and processing of sentence structure are acknowledged as constituents as soon as possible. Once a constituent is acknowledged, that constituent can be shared as a whole by the conjuncts of a coordinate structure that follow it, but no subpart of the constituent alone can be shared. Data from German and Japanese support the hypothesis that this phenomenon is not language-specific but rather general.

The discussion also considered whether the dependency account of LNB developed here can be adopted into phrase structure syntax. It can be, if the phrase structures assumed are entirely endocentric, but doing so comes at a cost. The definitions of the root concept and the formulation of the PFC must be adapted. Furthermore, the phrase structures assumed must be relatively flat, since the PFC makes inaccurate predictions if more layered, i.e. taller, phrase structures are assumed.

Footnotes

[1]

The research presented in this article was funded by the Ministry of Education of the People’s Republic of China, Grant #15YJA74001. The content has benefitted greatly from feedback provided by three anonymous Journal of Linguistics referees.

For ease of reference, the main abbreviated forms that appear time and again in this article are listed here together: DG = dependency grammar, LNB = left node blocking, NCC = non-constituent coordination, PFC = Principle of Full Clusivity, RNR = right node raising.

2 The following syntax, linguistics, and grammar books all use coordination as a diagnostic for identifying constituents: Baker 1978: 269–276, 1989: 419–427; Radford 1981: 59–60, 1988: 75–78, 1997: 104–107, 2004: 70–71; Atkinson et al. 1982: 172–173; Akmajian et al. 1990: 152–153; Borsley 1991: 25–30; Cowper 1992: 34–37; Napoli 1993: 159–161; Ouhalla 1994: 17; Roberts 1997: 12; Haegeman & Guéron 1999: 27; Fromkin et al. 2000: 160–162; Lasnik 2000: 11; Lobeck 2000: 61–63; Börjars & Burridge 2001: 27–31; van Valin 2001: 113–114; Huddleston, Payne & Peterson 2002: 1348–1349; Poole 2002: 31–32; Adger 2003: 125–126; Sag, Wasow & Bender 2003: 30; Kroeger 2005: 91, 218–219; Tallerman 2005: 144–146; Haegeman 2006: 89–92; Payne 2006: 162; Kim & Sells 2008: 22; Carnie 2010: 115–156, 125, 2013: 99–100; Quirk et al. 1985/2010: 46–47; Sobin 2011: 31–32; Sportiche et al. 2014: 62–68.

3 We do not list the important works that explore gapping, stripping, RNR, and NCC here, since such a list would be too long. Many sources are, though, cited below when they become more directly relevant. We would, however, like to point out that the DG tradition began exploring coordination early. Tesnière (1959 [2015: 325–360]) discussed cases of gapping, stripping, and RNR (but using a much different nomenclature) long before these phenomena were acknowledged by linguists in Anglo-American circles.

4 We use the term phrase structure in the sense of ‘not dependency structure’. Another term that is often employed in this regard is constituency, i.e. phrase structure = constituency. We are hence grouping together Categorial Grammar, Transformational Grammar, Government and Binding theory, the Minimalist Program, Head-driven Phrase Structure Grammar, Lexical-Functional Grammar, etc. as phrase structure grammars; they stand in opposition to dependency grammars (Word Grammar, Meaning–Text Theory, Functional Generative Description, etc.).

5 If constituent structure were flexible, then this flexibility would be independently verifiable. It is, however, not independently verifiable. Consider example (3b) and standard tests for constituents in this regard. If the string Bill to the store in (3b) could qualify as a constituent as predicted by an approach that assumes flexible constituent structure, then we would expect other diagnostics beyond coordination to identify it as a constituent. But other diagnostics do not do this. Bill to the store cannot be topicalized: *…and Bill to the store they sent; it cannot serve as the pivot of a cleft sentence: *It was Bill to the store that they sent; it cannot be focused in a pseudocleft sentence: *Bill to the store is who they sent; it cannot appear as an answer to a question containing a single question word: Who did they send? – *Bill to the store; it cannot be replaced by a single proform: They sent him (him $\neq$ Bill to the store). Without independent evidence confirming that Bill to the store can be a constituent, the approach in terms of flexible constituent structure is circular.

6 The arrow dependency edge marks today as an adjunct. See example (13) below.

7 The unacceptable examples (7r–u) attempt to share the determiner her. Note in this regard that the same sort of unacceptability contrast would obtain if the shared material were an attributive adjective, e.g. *Lazy [children should not] and [adults must not] eat candy. In this regard, the LNB phenomenon does not shed light on the DP vs. NP debate. The traditional NP analysis of noun phrases is assumed throughout this article.

8 Osborne (2008: 1136) formulates his restriction as follows:

9 Our DG distinguishes between root and head. The root of a given constituent is the one word in that constituent that is not immediately dominated by any other word in that constituent. The head of a given constituent, in contrast, is the one word outside of that constituent that immediately dominates that constituent. In this regard, structures (17b–d) are both root-final and head-final, and structures (18b–d) are both root-initial and head-initial.

10 The following example could be added, although it is of dubious acceptability:

The strong marginality of this example is probably due to the unnecessary repetition of about.

11 Many thanks to Sylvain Kahane for providing example (20a) and drawing attention to the challenge it poses to the PFC.

12 The subject forms of these pronouns are also allowed: [He started this evening], and [she started yesterday morning]; [You are staying today] and [they are staying tomorrow].

13 The other gapping analysis would of course be as follows: ?[Fred saw Larry today], and [Fred saw Bill yesterday]. This analysis is also implausible because it would be synonymous with (36a) but would, because it implicates ellipsis, be cognitively more expensive than (36a).

14 Note that examples like [Jim eating mushrooms on Monday] and [Fred eating mushrooms on Tuesday] were both surprising occurrences do not constitute counterexamples to this point. The gerund in such cases is verb-like enough to allow gapping.

15 Meaning mismatches associated with the large conjunct analysis in (51b) have long been acknowledged and discussed in the literature on coordination, at least since Dik (1968: 74–92).

16 One might object here that there is in fact a crucial mismatch that does occur with gapping. This mismatch occurs when negation is present:

This acceptability contrast has to do with the fact that gapping cannot elide a negation. This is a particular trait of the gapping mechanism that does not bear on examples like (52)–(53). Note that example (ia) is fine on the reading where the negation in the initial conjunct scopes over both conjuncts.

17 However, strictly endocentric constituents must be assumed for the phrase structures: dependency by its very nature necessarily sees all syntactic structure as endocentric, whereas phrase structure can distinguish between endocentric and exocentric structures.

18 Another hypothesis appears possible. Chuugoku-no josei-ni ‘by a Chinese woman’ could be left-dislocated, and the verbs in the respective conjuncts could each dominate a zero anaphor. We reject this possibility, though, because in Japanese anaphors are overt in cases of left-dislocation.

References

Adger, David. 2003. Core syntax: A minimalist approach. Oxford: Oxford University Press.
Ágel, Vilmos, Eichinger, Ludwig M., Eroms, Hans-Werner, Hellwig, Peter, Heringer, Hans Jürgen & Lobin, Henning (eds.). 2003. Dependency and valency: An international handbook of contemporary research, vol. 1. Berlin: Walter de Gruyter.
Akmajian, Adrian, Demers, Richard A., Farmer, Ann K. & Harnish, Robert M. 1990. An introduction to language and communication, 3rd edn. Cambridge, MA: The MIT Press.
Anderson, John M. 2011. The substance of language, vol. 1: The domain of syntax. Oxford: Oxford University Press.
Atkinson, Martin, Kilby, David & Roca, Iggy. 1982. Foundations of general linguistics, 2nd edn. London: Unwin Hyman.
Baker, C. L. 1978. Introduction to generative-transformational syntax. Englewood Cliffs, NJ: Prentice-Hall.
Baker, C. L. 1989. English syntax. Cambridge, MA: The MIT Press.
Beavers, John & Sag, Ivan A. 2004. Coordinate ellipsis and apparent non-constituent coordination. Proceedings of the 11th International Conference on Head-driven Phrase Structure Grammar, 48–69. Stanford, CA: CSLI Publications.
Börjars, Kersti & Burridge, Kate. 2001. Introducing English grammar. London: Arnold.
Borsley, Robert D. 1991. Syntactic theory: A unified approach. London: Edward Arnold.
Carnie, Andrew. 2010. Constituent structure, 2nd edn. Oxford: Oxford University Press.
Carnie, Andrew. 2013. Syntax: A generative introduction, 3rd edn. Malden, MA: Wiley-Blackwell.
Chomsky, Noam. 1957. Syntactic structures. The Hague: Mouton.
Cowper, Elizabeth A. 1992. A concise introduction to syntactic theory: The government-binding approach. Chicago, IL: The University of Chicago Press.
Dik, Simon C. 1968. Coordination: Its implications for the theory of general linguistics. Amsterdam: North-Holland.
Dowty, David. 1988. Type raising, functional composition, and non-constituent conjunction. In Oehrle, Richard T., Bach, Emmon & Wheeler, Deirdre (eds.), Categorial grammars and natural language structures, 153–197. Dordrecht: D. Reidel.
Engel, Ulrich. 1994. Syntax der deutschen Sprache, 3rd edn. Berlin: Erich Schmidt Verlag.
Eroms, Hans-Werner. 2000. Syntax der deutschen Sprache. Berlin: de Gruyter.
Frazier, Michael, Potter, David & Yoshida, Masaya. 2012. Pseudo noun phrase coordination. Proceedings of the 30th West Coast Conference on Formal Linguistics (WCCFL 30), 142–152. Somerville, MA: Cascadilla Proceedings Project.
Fromkin, Victoria A. (ed.), Bruce P. Hayes, Susan Curtiss, Anna Szabolcsi, Tim Stowell, Edward P. Stabler, Dominique Sportiche, Hilda Koopman, Patricia A. Keating, Pamela Munro, Nina Hyams & Donca Steriade. 2000. Linguistics: An introduction to linguistic theory. Malden, MA: Blackwell.
Groß, Thomas M. 1999. Theoretical foundations of dependency syntax. München: Iudicium.
Groß, Thomas M. & Osborne, Timothy. 2009. Toward a practical dependency grammar theory of discontinuities. SKY Journal of Linguistics 22, 43–90.
Haegeman, Liliane. 2006. Thinking syntactically: A guide to argumentation and analysis. Malden, MA: Blackwell.
Haegeman, Liliane & Guéron, Jacqueline. 1999. English grammar: A generative perspective. Oxford: Blackwell.
Hays, David G. 1964. Dependency theory: A formalism and some observations. Language 40, 511–525.
Hellwig, Peter. 2003. Dependency Unification Grammar. In Ágel et al. (eds.), 593–635.
Heringer, Hans Jürgen. 1996. Deutsche Syntax: Dependentiell. Tübingen: Stauffenburg.
Huddleston, Rodney, Payne, John & Peterson, Peter. 2002. Coordination and supplementation. In Rodney Huddleston & Geoffrey K. Pullum et al., The Cambridge grammar of the English language, 1273–1362. Cambridge: Cambridge University Press.
Hudson, Richard. 1984. Word Grammar. Oxford: Basil Blackwell.
Hudson, Richard. 1988. Coordination and grammatical relations. Journal of Linguistics 24, 303–342.
Hudson, Richard. 1989. Gapping and grammatical relations. Linguistics 25, 57–94.
Hudson, Richard. 1990. An English Word Grammar. Oxford: Basil Blackwell.
Jung, Wha-Young. 1995. Syntaktische Relationen im Rahmen der Dependenzgrammatik. Hamburg: Helmut Buske Verlag.
Keyser, Samuel Jay & Postal, Paul M. 1976. Beginning English grammar. New York: Harper & Row.
Kim, Jong-Bok & Sells, Peter. 2008. English syntax: An introduction. Stanford: CSLI Publications.
Klein, Wolfgang. 1981. Some rules of regular ellipsis. In Klein, Wolfgang & Levelt, Willem J. M. (eds.), Crossing the boundaries in linguistic studies: Studies presented to Manfred Bierwisch, 51–78. Dordrecht: Reidel.
Kohrt, Manfred. 1976. Koordinationsreduktion und Verbstellung in einer generativen Grammatik des Deutschen (Linguistische Arbeiten 41). Tübingen: Max Niemeyer.
Kroeger, Paul. 2005. Analyzing grammar: An introduction. Cambridge: Cambridge University Press.
Kuno, Susumu. 1976. Gapping: A functional analysis. Linguistic Inquiry 7, 300–318.
Lasnik, Howard, Depiante, Marcela & Stepanov, Arthur. 2000. Syntactic structures revisited: Contemporary lectures on classic transformational theory. Cambridge, MA: The MIT Press.
Levine, Robert D. 2011. Linearization and its discontents. Proceedings of the 18th International Conference on Head-driven Phrase Structure Grammar, 126–146. Stanford, CA: CSLI Publications.
Lobeck, Anne. 2000. Discovering grammar: An introduction to English sentence structure. New York: Oxford University Press.
Lobin, Henning. 1993. Koordinationssyntax als prozedurales Phänomen. Tübingen: Narr.
McCawley, James D. 1998. The syntactic phenomena of English, 2nd edn. Chicago, IL: The University of Chicago Press.
Mel’čuk, Igor A. 2003. Levels of dependency description: Concepts and problems. In Ágel et al. (eds.), 188–229.
Mouret, François. 2006. A phrase structure approach to argument cluster coordination. Proceedings of the 13th International Conference on Head-driven Phrase Structure Grammar, 247–267. Stanford, CA: CSLI Publications.
Napoli, Donna Jo. 1993. Syntax: Theory and problems. New York: Oxford University Press.
Neijt, Anneke. 1980. Gapping: A contribution to sentence grammar. Dordrecht: Foris.
van Oirsouw, Robert. 1987. The syntax of coordination. New York: Croom Helm.
Osborne, Timothy. 2006. Shared material and grammar: A dependency grammar theory of non-gapping coordination. Zeitschrift für Sprachwissenschaft 25, 39–93.
Osborne, Timothy. 2008. Major constituents: And two dependency grammar constraints on sharing in coordination. Linguistics 46, 1109–1165.
Osborne, Timothy. 2014. Type 2 rising: A contribution to a DG account of discontinuities. In Gerdes, Kim, Hajičová, Eva & Wanner, Leo (eds.), Dependency linguistics: Recent advances in linguistic theory using dependency structures, 273–298. Amsterdam: John Benjamins.
Ouhalla, Jamal. 1994. Transformational grammar: From rules to principles and parameters. London: Edward Arnold.
Payne, Thomas E. 2006. Exploring language structure: A student’s guide. Cambridge: Cambridge University Press.
Pesetsky, David. 1995. Zero syntax: Experiencers and cascades. Cambridge, MA: The MIT Press.
Phillips, Colin. 2003. Linear order and phrase structure. Linguistic Inquiry 34, 37–90.
Pickering, Martin & Barry, Guy. 1993. Dependency categorial grammar and coordination. Linguistics 31, 855–902.
Poole, Geoffrey. 2002. Syntactic theory. New York: Palgrave.
Quirk, Randolph, Greenbaum, Sidney, Leech, Geoffrey & Svartvik, Jan. 1985/2010. A comprehensive grammar of the English language. New Delhi: Dorling Kindersley. [Authorized licensed reprint of the original]
Radford, Andrew. 1981. Transformational syntax: A student’s guide to Chomsky’s Extended Standard Theory. Cambridge: Cambridge University Press.
Radford, Andrew. 1988. Transformational grammar: A first course. Cambridge: Cambridge University Press.
Radford, Andrew. 1997. Syntactic theory and the structure of English: A minimalist approach. Cambridge: Cambridge University Press.
Radford, Andrew. 2004. English syntax: An introduction. Cambridge: Cambridge University Press.
Roberts, Ian. 1997. Comparative syntax. London: Arnold.
Sag, Ivan A., Wasow, Thomas & Bender, Emily M. 2003. Syntactic theory, 2nd edn. Stanford: CSLI Publications.
Sailor, Craig & Thoms, Gary. 2013. On the non-existence of non-constituent coordination and non-constituent ellipsis. Proceedings of the 31st West Coast Conference on Formal Linguistics (WCCFL 31), 361–370. Somerville, MA: Cascadilla Proceedings Project.
Saito, Mamoru. 1985. Some asymmetries in Japanese and their theoretical implications. Ph.D. dissertation, MIT.
Saito, Mamoru. 2010. Semantic and discourse interpretation of the Japanese left periphery. In Erteschik-Shir, Nomi & Rochman, Lisa (eds.), The sound patterns of syntax, 140–173. Oxford: Oxford University Press.
Sobin, Nicholas. 2011. Syntactic analysis: The basics. Malden, MA: Wiley-Blackwell.
Sportiche, Dominique, Koopman, Hilda & Stabler, Edward. 2014. An introduction to syntactic analysis and theory. Malden, MA: Wiley Blackwell.
Starosta, Stanley. 1988. The case for Lexicase: An outline of Lexicase grammatical theory. London: Pinter Publishers.
Steedman, Mark. 1985. Dependency and coordination in the grammar of Dutch and English. Language 61.3, 523–568.
Steedman, Mark. 1990. Gapping as constituent coordination. Linguistics and Philosophy 13.2, 207–263.
Steedman, Mark. 1991. Phrase structure and coordination in a combinatory grammar. In Baltin, Mark & Kroch, Anthony (eds.), Alternative conceptions of phrase structure, 201–231. Chicago, IL: The University of Chicago Press.
Tallerman, Maggie. 2005. Understanding syntax, 2nd edn. London: Hodder Education.
Tesnière, Lucien. 1959. Éléments de syntaxe structurale. Paris: Klincksieck.
Tesnière, Lucien. 2015 [1959]. Elements of structural syntax, translated by Timothy Osborne & Sylvain Kahane. Amsterdam: John Benjamins.
van Valin, Robert. 2001. An introduction to syntax. Cambridge: Cambridge University Press.
Wesche, Birgit. 1995. Symmetric coordination: An alternative theory of phrase structure (Linguistische Arbeiten 332). Tübingen: Niemeyer.
Wilder, Chris. 1997. Some properties of ellipsis in coordination. In Alexiadou, Artemis & Hall, T. Alan (eds.), Studies in Universal Grammar and typological variation, 59–106. Amsterdam: John Benjamins.
Yatabe, Shûichi. 2001. The syntax and semantics of left-node raising in Japanese. In Flickinger, Dan & Kathol, Andreas (eds.), Proceedings of the 7th International Conference on Head-driven Phrase Structure Grammar, 325–344. Stanford, CA: CSLI Publications.
Yatabe, Shûichi. 2012. Comparison of the ellipsis-based theory of non-constituent coordination with its alternatives. Proceedings of the 19th International Conference on Head-driven Phrase Structure Grammar, 454–474. Stanford, CA: CSLI Publications.
Zoerner, Cyril & Agbayani, Brian. 2000. Unifying left-peripheral deletion, gapping, and pseudogapping. In Okrent, Akira & Boyle, John (eds.), Proceedings of CLS 36, vol. 1: The Main Session, 549–561. Chicago, IL: Chicago Linguistic Society.