Book contents
- Frontmatter
- Contents
- Figures
- Tables
- Contributors
- Part I About This Book
- Part II Models of Neural and Cognitive Processing
- Part III Data Driven Models
- 5 Putting Linguistics Back into Computational Linguistics
- 6 A Distributional Model of Verb-Specific Semantic Role Inferences
- 7 Native Language Identification on EFCAMDAT
- 8 Evaluating Language Acquisition Models: A Utility-Based Look at Bayesian Segmentation
- Part IV Social and Language Evolution
- Index
5 - Putting Linguistics Back into Computational Linguistics
from Part III - Data Driven Models
Published online by Cambridge University Press: 30 November 2017
Abstract
Almost all practitioners of natural language processing make a crucial error that also besets much of Chomsky's argument about the poverty of the stimulus in first language learning, namely that we can discover all we need to know about language by examining sufficiently large quantities of it. The error is to ignore the crucial function of language in referring to things in the real and imaginary worlds. Speakers and hearers appeal to this in many ways. Translators rely on it to provide information that they must supply in the target text but which is only implicit in the source. Reference is one of several properties of language that relate parts of a text or a discourse that may not be adjacent to one another. To the extent that linguists are concerned with how people use language for communication, they are interested in processes in people's heads and, to the extent that they are concerned with processes, they are interested in computation. It is this, rather than engineering feats like machine translation, that gives computational linguistics its special role.
Explicit and Implicit Information
Alfred Charles William Harmsworth, 1st Viscount Northcliffe (1865–1922), was the owner of two British newspapers, the Daily Mail and the Daily Mirror. He is credited with first pointing out that “Dog Bites Man” is an unlikely headline, whereas “Man Bites Dog” might herald a newsworthy story. Modern journalists are referring to the same phenomenon when they point out that no one ever writes a story about a plane because it did not crash. The point is obvious to journalists and to ordinary citizens seeking to avoid newsworthy flights. But it is missed by the adherents of Noam Chomsky's linguistic theories and by practitioners of a branch of linguistic engineering that has come to be known as natural language processing (NLP). This is curious because, but for an interest in language, these two groups have little in common.
Chomsky argues against the behaviorist view of language acquisition espoused, for example, by Skinner, on the grounds that even very large quantities of raw linguistic data contain neither the kind nor the quantity of information that would enable a general learning device to acquire everything that it would have to know to use the system as humans do.
- Type: Chapter
- Information: Language, Cognition, and Computational Models, pp. 101–117
- Publisher: Cambridge University Press
- Print publication year: 2018