
Computer Reliability and Public Policy: Limits of Knowledge of Computer-Based Systems*

Published online by Cambridge University Press:  13 January 2009

James H. Fetzer
Affiliation:
Philosophy, University of Minnesota, Duluth

Extract

Perhaps no technological innovation has so dominated the second half of the twentieth century as has the introduction of the programmable computer. It is quite difficult, if not impossible, to imagine how contemporary affairs—in business and science, communications and transportation, governmental and military activities, for example—could be conducted without the use of computing machines, whose principal contribution has been to relieve us of the necessity for certain kinds of mental exertion. The computer revolution has reduced our mental labors by means of these machines, just as the Industrial Revolution reduced our physical labors by means of other machines.

Type
Research Article
Copyright
Copyright © Social Philosophy and Policy Foundation 1996

References

1 Davis, William S., Fundamental Computer Concepts (Reading, MA: Addison-Wesley, 1986), p. 2.

2 Kleene, Stephen C., Mathematical Logic (New York: John Wiley and Sons, 1967), ch. 5.

3 Downing, Douglas and Covington, Michael, Dictionary of Computer Terms (Woodbury, NY: Barron's, 1986), p. 117. On the use of the term “heuristics” in the field of artificial intelligence, see Barr, Avron and Feigenbaum, Edward A., The Handbook of Artificial Intelligence, vol. I (Reading, MA: Addison-Wesley, 1981), pp. 28–30, 58, 109.

4 Examples of expert systems may be found in Barr, Avron and Feigenbaum, Edward A., The Handbook of Artificial Intelligence, vol. II (Reading, MA: Addison-Wesley, 1982).

5 A discussion of various kinds of expert systems may be found in Fetzer, James H., Artificial Intelligence: Its Scope and Limits (Dordrecht, The Netherlands: Kluwer Academic Publishers, 1990), pp. 180–91.

6 Buchanan, Bruce G. and Shortliffe, Edward H., eds., Rule-based Expert Systems: The MYCIN Experiments of the Stanford Heuristic Programming Project (Reading, MA: Addison-Wesley, 1984).

7 Ibid., p. 74. The number “.6” represents a “certainty factor” (CF), which, on a scale from -1 to 1, indicates how strongly the claim has been confirmed (CF > 0) or disconfirmed (CF < 0); see also note 34 below.
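
By way of illustration, the following sketch shows how factors on that scale are typically combined; it is not part of the original note, and the combination rule is assumed from the MYCIN literature rather than quoted from it, with purely hypothetical figures.

    # Minimal sketch (Python), assuming the MYCIN-style rule for combining two
    # certainty factors: confirming factors reinforce one another, disconfirming
    # factors reinforce one another, and mixed evidence is rescaled by the weaker factor.
    def combine_cf(cf1: float, cf2: float) -> float:
        if cf1 > 0 and cf2 > 0:
            return cf1 + cf2 * (1 - cf1)
        if cf1 < 0 and cf2 < 0:
            return cf1 + cf2 * (1 + cf1)
        return (cf1 + cf2) / (1 - min(abs(cf1), abs(cf2)))

    print(combine_cf(0.6, 0.4))   # 0.76: two confirming rules strengthen the claim
    print(combine_cf(0.6, -0.4))  # about 0.33: disconfirming evidence weakens it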

8 Inference to the best explanation is also known as “abductive inference.” See, for example, Fetzer, James H. and Almeder, Robert, Glossary of Epistemology/Philosophy of Science (New York: Paragon House, 1993); and especially Peng, Yun and Reggia, James, Abductive Inference Models for Diagnostic Problem-Solving (New York: Springer-Verlag, 1990).

9 Barr and Feigenbaum, The Handbook of Artificial Intelligence, vol. II, p. 189. The tendency has been toward the use of measures of subjective probability in lieu of CFs; see note 34 below.

10 Buchanan and Shortliffe, eds., Rule-based Expert Systems, p. 4.

11 Ibid., p. 5.

12 On the project manager, see, for example, Whitten, Neal, Managing Software Development Projects (New York: John Wiley and Sons, 1989).

13 Criteria for the selection of domain experts are discussed by Waterman, D. A., A Guide to Expert Systems (Reading, MA: Addison-Wesley, 1986).

14 The term “traditional” occurs here in contrast to the (far weaker) “artificial intelligence” conception of knowledge, in particular. On the traditional conception, see, for example, Scheffler, Israel, Conditions of Knowledge (Chicago, IL: University of Chicago Press, 1965). On the use of this term in AI, see especially Fetzer, Artificial Intelligence, ch. 5, pp. 127–32.

15 See Fetzer, James H., Scientific Knowledge (Dordrecht, The Netherlands: D. Reidel, 1981), ch. 1.

16 The origins of distinctions between analytic and synthetic knowledge can be traced back to the work of eighteenth- and nineteenth-century philosophers, especially David Hume (1711–1776) and Immanuel Kant (1724–1804). Hume drew a distinction between knowledge of relations between ideas and knowledge of matters of fact, while Kant distinguished between knowledge of conceptual connections and knowledge of the world. While it would not be appropriate to review the history of the distinction here, it should be observed that it has enormous importance in many philosophical contexts. For further discussion, see Ackermann, Robert, Theories of Knowledge (New York: McGraw-Hill, 1965); Scheffler, Conditions of Knowledge; and Fetzer and Almeder, Glossary. For a recent defense of the distinction, see Fetzer, Artificial Intelligence, pp. 106–9; and especially Fetzer, James H., Philosophy of Science (New York: Paragon House, 1993), chs. 1 and 3.

17 Thus, if the “many” who are honest were a large proportion of all the senators, then that degree of support should be high; if it were only a modest proportion, then it should be low; and so on. If the percentage were, say, m/n, then the support conferred upon the conclusion by those premises would presumably equal m/n. See, for example, Fetzer, Scientific Knowledge, Part III; Fetzer, Philosophy of Science, chs. 4–6; and note 34 below.
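
To make the arithmetic concrete, here is a toy sketch with hypothetical figures (not drawn from the note itself): the degree of support simply tracks the proportion of honest senators.

    # Illustration only: support conferred on "Senator X is honest" equals m/n.
    for m, n in [(90, 100), (55, 100)]:
        print(f"m/n = {m}/{n} -> support = {m / n}")
    # prints 0.9 for 90 of 100 honest senators, 0.55 for 55 of 100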

18 On the total-evidence condition, see Hempel, Carl G., Aspects of Scientific Explanation (New York: The Free Press, 1965), pp. 53–79.

19 This is a pragmatic requirement that governs inductive reasoning.

20 For further discussion, see, for example, Fetzer, Philosophy of Science, ch. 1.

21 See, for example, Hempel, Carl G., “On the Nature of Mathematical Truth” and “Geometry and Empirical Science,” both of which are reprinted in Feigl, Herbert and Sellars, Wilfrid, eds., Readings in Philosophical Analysis (New York: Appleton-Century-Crofts, 1949), pp. 222–37 and 238–49.

22 Thus, as Einstein observed, to the extent to which the laws of mathematics refer to reality, they are not certain; and to the extent to which they are certain, they do not refer to reality—a point I shall pursue.

23 For further discussion, see, for example, Fetzer, Scientific Knowledge, pp. 14–15.

24 The differences between stipulative truths and empirical truths are crucial for understanding computer programming.

25 Davis, Fundamental Computer Concepts, p. 20. It should be observed, however, that some consider the clock to be convenient for but not essential to computer operations.

26 Ibid., p. 189. There are languages and machines that permit the representation of numbers of arbitrary size through the concatenation of 32-bit words, where limitations are imposed by memory resources.
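
As a rough illustration of the contrast (the numbers are arbitrary and the sketch is not part of the original note), the following compares arithmetic confined to a single fixed 32-bit word with arithmetic allowed to span as many words as memory permits, which is how arbitrary-precision integers behave.

    # Minimal sketch: a sum that overflows one unsigned 32-bit word is exact
    # when the representation may grow across additional words.
    WORD = 2 ** 32
    a, b = 3_000_000_000, 2_500_000_000
    print((a + b) % WORD)   # 1205032704: the sum wraps around within a single 32-bit word
    print(a + b)            # 5500000000: exact, limited only by available memory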

27 Nelson, David, “Deductive Program Verification (A Practitioner's Commentary),” Minds and Machines, vol. 2, no. 3 (August 1992), pp. 283–307; the quote is from p. 289. On this and other grounds, Nelson denies that computers are properly described as “mathematical machines” and asserts that they are better described as “logic machines.”

28 Up to ten billion times as large, according to Markoff, John, “Flaw Undermines Accuracy of Pentium Chips,” New York Times, November 24, 1994, pp. C1–C2. As Markoff illustrates, the difficulty involves division:

Problem:

4,195,835 – [(4,195,835 ÷ 3,145,727) × 3,145,727] = ?

Correct Calculation:

4,195,835 – [(1.3338204) × 3,145,727] = 0

Pentium's Calculation:

4,195,835 – [(1.3337391) × 3,145,727] = 256
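
The arithmetic is easy to reproduce in ordinary double-precision floating point; the sketch below (an illustration added here, not part of Markoff's article) evaluates the same expression, which comes out at roughly zero rather than 256.

    # Markoff's test expression evaluated on a correctly functioning divider.
    x, y = 4_195_835, 3_145_727
    quotient = x / y
    print(quotient)          # about 1.3338204 on correct hardware
    print(x - quotient * y)  # about 0; the flawed Pentium divider yielded 256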

29 The remark is attributed to William Kahan of the University of California at Berkeley by Markoff, “Flaw Undermines Accuracy,” p. C1. A number of articles discussing the problem have since appeared, including Markoff, John, “Error in Chip Still Dogging Intel Officials,” New York Times, December 6, 1994, p. C4; Flynn, Laurie, “A New York Banker Sees Pentium Problems,” New York Times, December 19, 1994, pp. C1–C2; Markoff, John, “In About-Face, Intel Will Swap Flawed Pentium Chip for Buyers,” New York Times, December 21, 1994, pp. A1 and C6; and Markoff, John, “Intel's Crash Course on Consumers,” New York Times, December 21, 1994, p. C1.

30 Including a security loophole with Sun Microsystems that was acknowledged in 1991, as Markoff observes in “Flaw Undermines Accuracy,” p. C2.

31 Hockenberry, John, “Pentium and Our Crisis of Faith,” New York Times, December 28, 1994, p. A11; Lewis, Peter H., “From a Tiny Flaw, a Major Lesson,” New York Times, December 27, 1994, p. B10; and “Cyberscope,” Newsweek, December 12, 1994. Another example of humor at Intel's expense: Question: What's another name for the “Intel Inside” sticker they put on Pentiums? Answer: A warning label.

32 Hockenberry, “Pentium and Our Crisis of Faith,” p. A11.

33 As M. M. Lehman has observed, another—often more basic—problem can arise when changes in the world affect the truth of assumptions on which programs are based—which leads him to separate (what he calls) S-type and E-type systems, where the latter but not the former are subject to revision under the control of feedback. See, for example, Lehman, M. M., “Feedback, Evolution, and Software Technology,” IEEE Software Process Newsletter, April 1995, for more discussion.

34 Fetzer, Scientific Knowledge, p. 15. Other problems not discussed in the text include determining the precise conditions that must be satisfied for something to properly qualify as “scientific knowledge” (by arbitrating among inductivist, deductivist, and abductivist models, for example), and the appropriate measures that should be employed in determining degrees of evidential support (by accounting for the proper relations between subjective, frequency, and propensity interpretations of probability), a precondition for the proper appraisal of “certainty factors” (CFs), for example. These issues are pursued in Fetzer, Scientific Knowledge, and Fetzer, Philosophy of Science.

35 See, for example, Davis, Fundamental Computer Concepts, pp. 110–13.

36 Marcotty, Michael and Ledgard, Henry E., Programming Language Landscape, 2d ed. (Chicago, IL: Science Research Associates, 1986), ch. 2.

37 See Fetzer, James H., “Philosophical Aspects of Program Verification,” Minds and Machines, vol. 1, no. 2 (May 1991), pp. 197–216.

38 See ibid., p. 202.

39 See Fetzer, James H., “Program Verification: The Very Idea,” Communications of the ACM, vol. 31, no. 9 (September 1988), p. 1057.

40 Smith, Brian C., “Limits of Correctness in Computers,” Center for the Study of Language and Information, Stanford University, Report No. CSLI–85–35 (1985); reprinted in Dunlop, Charles and Kling, Rob, eds., Computerization and Controversy (San Diego, CA: Academic Press, 1991), pp. 632–46. The passage quoted here is found on p. 638 (emphasis in original).

41 Smith, “Limits,” p. 639. As Smith also observes, computers and models themselves are “embedded within the real world,” which is why the symbol for “REAL WORLD” is open in relation to the box, which surrounds the elements marked “COMPUTER” and “MODEL.”

42 Smith, “Limits,” p. 638 (emphasis added).

43 See Fetzer, Philosophy of Science, pp. xii–xiii.

44 Smith, “Limits,” p. 639.

45 Indeed, on the deductivist model of scientific inquiry, which has been advocated especially by Karl R. Popper, even the adequacy of scientific theories is deductively checkable by comparing deduced consequences with descriptions of the results of observations and experiments, which are warranted by perceptual inference. This process is not a symmetrical decision procedure, since it can lead to the rejection of theories in science but not to their acceptance. The failure to reject on the basis of severe tests, however, counts in favor of a theory. See Popper, Karl R., Conjectures and Refutations (New York: Harper and Row, 1968). On the deductivist model, see Fetzer, Philosophy of Science. The construction of proofs in formal sciences, incidentally, is also an asymmetrical procedure, since the failure to discover a proof does not establish that it does not exist.

46 The use of the term “propensity” is crucial here, since it refers to the strength of the causal tendency. The general standard being employed may be referred to as the propensity criterion of causal relevance. See, for example, Fetzer, Scientific Knowledge, and Fetzer, Philosophy of Science, for technical elaboration.

47 The use of the term “frequency” is crucial here, since it refers to the relative frequency of an attribute. The general standard being employed may be referred to as the frequency criterion of statistical relevance. See, for example, Salmon, Wesley C., Statistical Explanation and Statistical Relevance (Pittsburgh, PA: University of Pittsburgh Press, 1971). But Salmon mistakes statistical relevance for explanatory relevance.

48 Strictly speaking, in the case of propensities, causal relations and relative frequencies are related probabilistically. See, for example, Fetzer, Scientific Knowledge, and Fetzer, Philosophy of Science.

49 Even when the chemical composition, the manner of striking, and the dryness of a match are causally relevant to its lighting, that outcome may be predicted with deductive certainty (when the relationship is deterministic) or with probabilistic confidence (when the relationship is indeterministic) only if no other relevant properties, such as the presence or absence of oxygen, have been overlooked. For discussion, see, for example, Fetzer, James H., “The Frame Problem: Artificial Intelligence Meets David Hume,” International Journal of Expert Systems, vol. 3, no. 3 (1990), pp. 219–32; and Fetzer, James H., “Artificial Intelligence Meets David Hume: A Response to Pat Hayes,” International Journal of Expert Systems, vol. 3, no. 3 (1990), pp. 239–47.

50 Laws of nature are nature's algorithms. See Fetzer, “Artificial Intelligence Meets David Hume,” p. 239. A complete theory of the relations between models of the world and the world would include a defense of the abductivist model of science as “inference to the best explanation.”

51 Smith thus appears to have committed a fallacy of equivocation by his ambiguous use of the phrase “theory of the model-world relationship.”

52 Smith, “Limits,” p. 640.

53 Whitten, Managing Software Development Projects, p. 13.

54 Jacky, Jonathan, “Safety-Critical Computing: Hazards, Practices, Standards, and Regulations,” The Sciences, September/October 1989; reprinted in Dunlop and Kling, Computerization and Controversy, pp. 612–31.

55 Ibid., p. 617.

56 As M. M. Lehman has observed (in personal communication with the author), specifications are frequently merely partial models of the problem to be solved.

57 Hoare, C. A. R., “An Axiomatic Basis for Computer Programming,” Communications of the ACM, vol. 12, no. 10 (October 1969), pp. 576–80 and 583; the quotation may be found on p. 576.

58 Smith, “Limits,” pp. 639–43. Other authors have concurred. See, for example, Borning, Alan, “Computer System Reliability and Nuclear War,” Communications of the ACM, vol. 30, no. 2 (February 1987), pp. 112–31; reprinted in Dunlop and Kling, Computerization and Controversy.

59 For further discussion, see, for example, Boehm, B. W., Software Engineering Economics (New York: Prentice-Hall, 1981).

60 See Fetzer, “Program Verification,” pp. 1056–57.

61 See Fetzer, James H., “Author's Response,” Communications of the ACM, vol. 32, no. 4 (April 1989), p. 512.

62 Hoare, “An Axiomatic Basis for Computer Programming,” p. 579.

63 See Fetzer, “Program Verification.”

64 Cohn, Avra, “The Notion of Proof in Hardware Verification,” Journal of Automated Reasoning, vol. 5, no. 2 (June 1989), p. 132.

65 Littlewood, Bev and Strigini, Lorenzo, “The Risks of Software,” Scientific American, November 1992, p. 65.

66 Ibid., pp. 65–66 and 75.

67 Shepherd, David and Wilson, Greg, “Making Chips That Work,” New Scientist, May 13, 1989, pp. 61–64.

68 Ibid., p. 62.

70 Littlewood and Strigini, “The Risks of Software,” p. 75.

71 Hoare, C. A. R., “Mathematics of Programming,” BYTE, August 1986, p. 116.

72 David Parnas, quoted in Suydam, William E., “Approaches to Software Testing Embroiled in Debate,” Computer Design, vol. 24, no. 21 (November 15, 1986), p. 50.

73 Littlewood and Strigini, “The Risks of Software,” p. 63.

74 For an introduction to chaos theory, see Gleick, James, Chaos: Making a New Science (New York: Penguin Books, 1988).

75 See Fetzer, “The Frame Problem,” pp. 228–29. Predictions based upon partial and incomplete descriptions of chaotic systems are obviously fallible—their reliability would appear to be unknowable.

76 Sometimes unverifiable or incorrect programs can even be preferred; see Savitzky, Stephen, “Technical Correspondence,” Communications of the ACM, vol. 32, no. 3 (March 1989), p. 377. These include cases where a verifiably incorrect program yields a better performance than a verifiably correct program as a solution to a problem—where the most important features of a successful solution may involve properties that are difficult or impossible to formalize—and other kinds of situations.

77 For further discussion, see Fetzer, Scientific Knowledge, and Fetzer, Philosophy of Science.

78 Lehman, M. M., “Uncertainty in Computer Application,” Communications of the ACM, vol. 33, no. 5 (May 1990), p. 585 (emphasis added).

79 Nelson, “Deductive Program Verification” (supra note 27), p. 292.

80 Donald Gillies has informed me that Hoare now advocates this position.

81 The prospect of having to conduct statistical tests of nuclear weapons, space shuttle launches, etc., suggests the dimensions of the problem.

82 See, for example, Jacky, “Safety-Critical Computing,” pp. 622–27.

83 Ibid., p. 627. An excellent and accessible discussion of problems involving computer systems that affect many important areas of life is Peterson, Ivars, Fatal Defect: Chasing Killer Computer Bugs (New York: Random House/Times Books, 1995).