Book contents
- Frontmatter
- Contents
- List of contributors
- Preface
- Acknowledgments
- 1 Introduction
- 2 The formal foundations of AI
- 3 Levels of theory
- 4 Programs and theories
- 5 The role of representations
- Thinking machines: Can there be? Are we?
- Evolution, error, and intentionality
- 6 The role of programs in AI
- 7 Rational reconstruction as an AI methodology
- 8 Is AI special in regard to its methodology?
- 9 Does connectionism provide a new paradigm for AI?
- 10 The role of correctness in AI
- 11 Limitations on current AI technology
- 12 Annotated bibliography on the foundations of AI
- Index of names
Evolution, error, and intentionality
Published online by Cambridge University Press: 03 May 2010
Summary
The foundational problem of the semantics of mental representation has been perhaps the primary topic of philosophical research in cognitive science in recent years, but progress has been negligible, largely because the philosophers have failed to acknowledge a major but entirely tacit difference of outlook that separates them into two schools of thought. My task here is to bring this central issue into the light.
The Great Divide I want to display resists a simple, straightforward formulation, not surprisingly, but we can locate it by retracing the steps of my exploration, which began with a discovery about some theorists' attitudes towards the interpretation of artifacts. The scales fell from my eyes during a discussion with Jerry Fodor and some other philosophers about a draft of a chapter of Fodor's Psychosemantics (Fodor, 1987). The chapter in question, “Meaning and the World Order,” concerns Fred Dretske's attempts (1981, especially chapter 8; 1985; 1986) to solve the problem of misrepresentation. As an aid to understanding the issue, I had proposed to Fodor and the other participants in the discussion that we first discuss a dead simple case of misrepresentation: a coin-slot testing apparatus on a vending machine accepting a slug. “That sort of case is irrelevant,” Fodor retorted instantly, “because after all, John Searle is right about one thing; he's right about artifacts like that. They don't have any intrinsic or original intentionality – only derived intentionality.”
- Type: Chapter
- Information: The Foundations of Artificial Intelligence: A Sourcebook, pp. 190-212. Publisher: Cambridge University Press. Print publication year: 1990.