Book contents
- Frontmatter
- Contents
- Contributors
- Preface
- 1 Architectures and Mechanisms in Sentence Comprehension
- Part I Frameworks
- Part II Syntactic and Lexical Mechanisms
- 6 The Modular Statistical Hypothesis: Exploring Lexical Category Ambiguity
- 7 Lexical Syntax and Parsing Architecture
- 8 Constituency, Context, and Connectionism in Syntactic Parsing
- Part III Syntax and Semantics
- Part IV Interpretation
- Author Index
- Subject Index
8 - Constituency, Context, and Connectionism in Syntactic Parsing
Published online by Cambridge University Press: 03 October 2009
Summary
Introduction
As is evident from the other chapters in this book, ambiguity resolution is a major issue in the study of the human sentence processing mechanism. Many of the proposed models of ambiguity resolution involve the combination of multiple soft constraints, both local and contextual. Finding the best solution to multiple soft constraints is exactly the kind of problem that connectionist networks are good at solving, and several models of ambiguity resolution have incorporated them for this reason. The difficulty has been that standard connectionist networks lack the representational power to capture some central properties of natural language (Fodor and Pylyshyn, 1988; Fodor and McLaughlin, 1990; Hadley, 1994). In particular, standard connectionist networks cannot represent constituency. Thus they cannot capture generalizations over constituents, and in learning they cannot generalize what they have learned from one constituent to another. Since regularities across constituents are fundamental and pervasive in all natural languages, any computational model that predicts no such pattern of regularities cannot be adequate as a complete model of sentence processing. To address this inadequacy without losing the advantages of connectionist networks, their representational power needs to be extended.
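To make the notion of combining soft constraints concrete, here is a minimal sketch of weighted constraint combination for a two-way ambiguity. The constraint names, weights, and probabilities are hypothetical illustrations, not parameters of any model cited in this chapter.

```python
# Minimal sketch: resolving a two-way ambiguity by combining soft constraints.
# All constraint names, weights, and probabilities below are hypothetical.

def combine_constraints(constraints):
    """Each constraint supplies a probability distribution over the two
    readings plus a weight reflecting its reliability. The combined
    support for each reading is the weighted average, renormalized."""
    support = [0.0, 0.0]
    total_weight = sum(w for w, _ in constraints)
    for weight, probs in constraints:
        for i, p in enumerate(probs):
            support[i] += weight * p
    return [s / total_weight for s in support]

# Hypothetical constraints on an NP-object vs. clause-complement ambiguity,
# as in "The athlete realised her goals ..."
constraints = [
    (0.5, [0.8, 0.2]),  # lexical bias of the verb: prefers an NP object
    (0.3, [0.3, 0.7]),  # plausibility of the NP as a clause subject
    (0.2, [0.4, 0.6]),  # discourse context
]
print(combine_constraints(constraints))  # ≈ [0.57, 0.43]
```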
This chapter discusses exactly such an extension to connectionist networks. Temporal synchrony variable binding (Shastri and Ajjanagadde, 1993) gives connectionist networks the ability to represent constituency, and thus to capture and learn generalizations over constituents.
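To illustrate the basic mechanism, here is a minimal sketch of phase-based binding, under the assumption that a processing cycle is divided into discrete phases and that a role unit and a filler unit are bound when they fire in the same phase. The class and names below are illustrative inventions, not Shastri and Ajjanagadde's formalism.

```python
# Minimal sketch of temporal synchrony variable binding (in the spirit of
# Shastri and Ajjanagadde, 1993; all names here are illustrative).
# A processing cycle is divided into discrete phases; a role unit and a
# filler unit that fire in the same phase are thereby bound together.

NUM_PHASES = 4  # how many bindings can be simultaneously active

class SynchronyNetwork:
    def __init__(self):
        self.firing = {}  # unit name -> phase in which it fires

    def bind(self, role, filler, phase):
        """Bind role to filler by making both fire in the same phase."""
        assert 0 <= phase < NUM_PHASES
        self.firing[role] = phase
        self.firing[filler] = phase

    def filler_of(self, role):
        """Recover a binding: whichever units fire in phase with the role."""
        phase = self.firing[role]
        return [u for u, p in self.firing.items()
                if p == phase and u != role]

net = SynchronyNetwork()
# Two constituents of "the dog chased the cat", each in its own phase.
net.bind("subject", "dog", phase=0)
net.bind("object", "cat", phase=1)
print(net.filler_of("subject"))  # ['dog']
print(net.filler_of("object"))   # ['cat']
```

Because every constituent is represented by the same role units, differing only in firing phase, whatever the network learns about those units applies to each constituent alike, which is how this extension lets a network capture and learn generalizations over constituents.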
- Type: Chapter
- Information: Architectures and Mechanisms for Language Processing, pp. 189-210
- Publisher: Cambridge University Press
- Print publication year: 1999