Book contents
- Frontmatter
- Contents
- Preface
- Contributors
- 1 The Complexity of Algorithms
- 2 Building Novel Software: the Researcher and the Marketplace
- 3 Prospects for Artificial Intelligence
- 4 Structured Parallel Programming: Theory meets Practice
- 5 Computer Science and Mathematics
- 6 Paradigm Merger in Natural Language Processing
- 7 Large Databases and Knowledge Re-use
- 8 The Global-yet-Personal Information System
- 9 Algebra and Models
- 10 Real-time Computing
- 11 Evaluation of Software Dependability
- 12 Engineering Safety-Critical Systems
- 13 Semantic Ideas in Computing
- 14 Computers and Communications
- 15 Interactive Computing in Tomorrow's Computer Science
- 16 On the Importance of Being the Right Size
- References
- Index
11 - Evaluation of Software Dependability
Published online by Cambridge University Press: 10 December 2009
Summary
On Disparity, Difficulty, Complexity, Novelty – and Inherent Uncertainty
It has been said that the term software engineering is an aspiration, not a description. We would like to be able to claim that we engineer software in the same sense that we engineer an aero-engine, but most of us would agree that this is not currently an accurate description of our activities. My suspicion is that it never will be.
From the point of view of this essay – i.e. dependability evaluation – a major difference between software and other engineering artefacts is that the former is pure design. Its unreliability is always the result of design faults, which in turn arise from human intellectual failures. The unreliability of hardware systems, on the other hand, has until recently tended to be dominated by random physical failures of components – the consequences of the ‘perversity of nature’. Over the years, reliability theories have been developed that have successfully allowed systems to be built to high reliability requirements, and the final system reliability to be evaluated accurately. Even for pure hardware systems, without software, however, the very success of these theories has more recently highlighted the importance of design faults in determining the overall reliability of the final product. Conventional hardware reliability theory does not address this problem at all.
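The conventional hardware reliability theory contrasted here can be sketched in its simplest textbook form: independent components with constant (exponential) failure rates, combined in series so that the system fails as soon as any component fails. The failure rates below are hypothetical illustrations, not values from the text, and real hardware analyses involve far richer models; this is a minimal sketch of the kind of calculation that has no counterpart for design faults.

```python
import math

def series_reliability(failure_rates, t):
    """Probability that a series system of independent components
    survives to time t, each component having a constant failure
    rate (exponential lifetime). The system works only if every
    component works, so the rates simply add:
        R(t) = exp(-(lambda_1 + ... + lambda_n) * t)
    """
    return math.exp(-sum(failure_rates) * t)

# Hypothetical component failure rates, in failures per hour.
rates = [1e-5, 2e-5, 5e-6]

# Probability the system survives a 1000-hour mission.
print(series_reliability(rates, 1000.0))
```

The point of the contrast is that each lambda here can be estimated from physical testing of many identical components; a design fault in software offers no analogous population of identical parts to measure.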
In the case of software, there is no physical source of failures, and so none of the reliability theory developed for hardware is relevant. We need new theories that will allow us to achieve required dependability levels, and to evaluate the actual dependability that has been achieved, when the sources of the faults that ultimately result in failure are human intellectual failures.
- Type: Chapter
- Information: Computing Tomorrow: Future Research Directions in Computer Science, pp. 198–216. Publisher: Cambridge University Press. Print publication year: 1996.