Book contents
- Frontmatter
- Contents
- Figures
- Tables
- Contributors
- Part I About This Book
- Part II Models of Neural and Cognitive Processing
- 2 Light and Deep Parsing: A Cognitive Model of Sentence Processing
- 3 Decoding Language from the Brain
- 4 Graph Theory Applied to Speech: Insights on Cognitive Deficit Diagnosis and Dream Research
- Part III Data Driven Models
- Part IV Social and Language Evolution
- Index
- References
2 - Light and Deep Parsing: A Cognitive Model of Sentence Processing
from Part II - Models of Neural and Cognitive Processing
Published online by Cambridge University Press: 30 November 2017
Abstract
Humans process language quickly and efficiently, despite the complexity of the task. However, classical language-processing models do not account well for this feature. In particular, most of them are based on an incremental organization, in which the process is homogeneous and consists of building, step by step, a precise syntactic structure from which an interpretation is calculated. In this chapter, we present evidence that contradicts this view and show that language processing can be achieved at varying levels of precision. Often, processing remains shallow, leaving interpretation greatly underspecified.
We propose a new language-processing architecture, involving two types of mechanisms. We show that, in most cases, shallow processing is sufficient and deep parsing is required only when faced with difficulty. The architecture we propose is based on an interdisciplinary perspective in which elements from linguistics, natural language processing, and psycholinguistics come into play.
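The chapter develops this architecture in the sections that follow; purely as an illustrative aid, the sketch below shows one way a shallow-by-default, deep-on-demand control loop could be organized. Everything here is an assumption for illustration: the function names (shallow_parse, deep_parse, is_difficult), the toy difficulty signal, and the chunking heuristic are invented and are not the authors' implementation.

```python
# Illustrative sketch only: a hypothetical dispatcher that stays shallow by
# default and escalates to deep parsing when a difficulty signal fires.
# Names, data structures, and the difficulty heuristic are all invented.

from dataclasses import dataclass
from typing import Any, List


@dataclass
class Analysis:
    tokens: List[str]
    depth: str        # "shallow" or "deep"
    structure: Any    # flat chunks for shallow analyses, a tree for deep ones


def shallow_parse(tokens: List[str]) -> Analysis:
    """Group tokens into flat chunks; interpretation stays underspecified."""
    chunks = [tokens[i:i + 3] for i in range(0, len(tokens), 3)]
    return Analysis(tokens, "shallow", chunks)


def deep_parse(tokens: List[str]) -> Analysis:
    """Stand-in for a full syntactic analysis producing a precise structure."""
    tree = ("S", tokens)  # placeholder for a real parse tree
    return Analysis(tokens, "deep", tree)


def is_difficult(tokens: List[str]) -> bool:
    """Toy difficulty signal: here, only sentence length triggers deep parsing."""
    return len(tokens) > 12


def process(sentence: str) -> Analysis:
    # Shallow by default; deep parsing only when the difficulty signal fires.
    tokens = sentence.split()
    return deep_parse(tokens) if is_difficult(tokens) else shallow_parse(tokens)
```

In practice a difficulty signal would be far richer (ambiguity, garden paths, low-frequency constructions); the point of the sketch is only the two-route control flow.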
Introduction
How humans process language quickly and efficiently remains largely unexplained. The main difficulty is that, although many disciplines (linguistics, psychology, computer science, and neuroscience) have addressed this question, it is difficult to describe language as a global system. Typically, no linguistic theory entirely explains how the different sources of linguistic information interact. Most theories, and therefore most descriptions, capture only partial phenomena, without providing a general framework bringing together prosody, pragmatics, syntax, semantics, etc. For this reason, many linguistic theories still consider language organization as modular: linguistic domains are studied and processed separately, and their interaction is implemented at a later stage. As a consequence, the lack of a general theory of language accounting for its different aspects makes it difficult to elaborate a global processing architecture.

This problem has direct consequences for natural language processing: the classical architecture relies on a series of subtasks (segmenting, labeling, identifying structures, interpreting, etc.). This organization imposes a more or less strictly sequential view of language processing, in which words in particular are treated as the core of the system. Such a view does not account for the fact that language is based on complex objects, made of different and heterogeneous sources of information, interconnected at different levels, whose interpretation cannot always be computed compositionally (with each information domain transferring a subset of information to another).
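To make the criticized organization concrete, the following is a minimal sketch of such a strictly sequential pipeline, with each stage completing before the next begins and word tokens as the units passed along. The stage bodies are toy stand-ins, not a real NLP toolkit and not the authors' system.

```python
# Minimal sketch of the "classical" sequential architecture: segment, then
# label, then build a structure, then interpret. All stage implementations
# are dummies used only to show the control flow.

from typing import List, Tuple


def segment(text: str) -> List[str]:
    """Split the input into word tokens."""
    return text.split()


def label(tokens: List[str]) -> List[Tuple[str, str]]:
    """Assign a (dummy) part-of-speech label to every token."""
    return [(tok, "X") for tok in tokens]


def build_structure(tagged: List[Tuple[str, str]]) -> Tuple[str, List[Tuple[str, str]]]:
    """Build a (flat, dummy) syntactic structure over the labeled tokens."""
    return ("S", tagged)


def interpret(structure: Tuple[str, List[Tuple[str, str]]]) -> str:
    """Derive an interpretation compositionally from the finished structure."""
    _, tagged = structure
    return " ".join(tok for tok, _ in tagged)


def classical_pipeline(text: str) -> str:
    # Strictly sequential: each stage consumes the complete output of the
    # previous one, with no feedback between levels.
    return interpret(build_structure(label(segment(text))))
```

Each stage here depends only on the finished output of the previous one, which is exactly the homogeneity the chapter argues against: the pipeline has no way to stop early with a shallow, underspecified analysis.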
- Type: Chapter
- Information: Language, Cognition, and Computational Models, pp. 27-52
- Publisher: Cambridge University Press
- Print publication year: 2018