Book contents
- Frontmatter
- Contents
- Contributors
- Introduction
- 1 Conceptual analysis of abduction
- 2 Knowledge-based systems and the science of AI
- 3 Two RED systems – abduction machines 1 and 2
- 4 Generalizing the control strategy – machine 3
- 5 More kinds of knowledge: Two diagnostic systems
- 6 Better task analysis, better strategy – machine 4
- 7 The computational complexity of abduction
- 8 Two more diagnostic systems
- 9 Better task definition, better strategy – machine 5
- 10 Perception and language understanding
- Appendix A Truth seekers
- Appendix B Plausibility
- Extended Bibliography
- Acknowledgments
- Index
8 - Two more diagnostic systems
Published online by Cambridge University Press: 08 October 2009
Summary
In chapter 7 abduction stumbled. Our powerful all-purpose inference pattern, perhaps the basis for all knowledge from experience, was mathematically proved to be intractable (or at any rate deeply impractical under ordinary circumstances). How can this be? Apparently we make abductions all the time in ordinary life and science, and successfully. Explanation-seeking processes not only finish in reasonable time, they also get the right answers. Correct diagnosis is possible, even practical. (Or perhaps skepticism is right after all: knowledge is impossible, and correct diagnosis is an illusion.)
Maybe there is no deep question raised by those mathematical results. Perhaps all they are telling us is that we do not always get the right answer. Sometimes our best explanation is not the “true cause” (ways this can occur are systematically described in chapter 1). Sometimes we cannot find a best explanation in reasonable time, or we find one but do not have enough time to determine whether it is unique. Maybe knowledge is possible after all, but it is a hit-or-miss affair. Yet if knowledge is possible, how can we succeed in making abductions without being defeated by incompatible hypotheses, cancellation effects, and too-close confidence values?
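The question can be made concrete with a toy model. The following is a minimal, hypothetical Python sketch, not the RED- or PEIRCE-style machinery discussed in this book: hypotheses carry plausibility scores, some pairs are mutually incompatible, and a brute-force search for the best composite explanation can end with two candidates whose confidence values are too close to call one uniquely best. All names and numbers are illustrative assumptions.

```python
# Toy sketch of abduction as best-explanation search (illustrative only).
from itertools import combinations

# hypothesis -> (findings it would explain, plausibility score); made-up values
hypotheses = {
    "h1": ({"f1", "f2"}, 0.80),
    "h2": ({"f2", "f3"}, 0.75),
    "h3": ({"f1", "f3"}, 0.79),
}
incompatible = {frozenset({"h1", "h2"})}  # h1 and h2 cannot both hold
findings = {"f1", "f2", "f3"}             # observations to be explained

def consistent(combo):
    """No pair of mutually incompatible hypotheses appears in the composite."""
    return not any(pair <= set(combo) for pair in incompatible)

def covers(combo):
    """The composite explains every finding."""
    return findings <= set().union(*(hypotheses[h][0] for h in combo))

def score(combo):
    """Crude composite confidence: product of member plausibilities."""
    s = 1.0
    for h in combo:
        s *= hypotheses[h][1]
    return s

# Brute-force enumeration of all consistent, covering composites -- exponential
# in the number of hypotheses, which is the cost the chapter-7 results warn about.
candidates = [c for r in range(1, len(hypotheses) + 1)
              for c in combinations(hypotheses, r)
              if consistent(c) and covers(c)]
ranked = sorted(candidates, key=score, reverse=True)

best = ranked[0]
print("best explanation:", best, "score:", round(score(best), 3))
if len(ranked) > 1 and score(best) - score(ranked[1]) < 0.05:
    print("runner-up", ranked[1], "is too close to call the best one unique")
```

On this toy data the best composite (h1 with h3, score 0.632) beats the runner-up (h2 with h3, score 0.593) by less than the chosen margin, so the search terminates but cannot confidently claim a unique best explanation.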
Whether or not knowledge is possible, we can build diagnostic systems able to achieve good performance in complex domains. This chapter presents two such systems and also includes a special section on how a kind of learning can be fruitfully treated as abduction. A fuller response to the complexity results is given in chapter 9.
- Type: Chapter
- Information: Abductive Inference: Computation, Philosophy, Technology, pp. 180–201
- Publisher: Cambridge University Press
- Print publication year: 1994