Book contents
- Frontmatter
- Contents
- Preface
- Acknowledgements
- 1 Introduction
- 2 Metrics of performance
- 3 Average performance and variability
- 4 Errors in experimental measurements
- 5 Comparing alternatives
- 6 Measurement tools and techniques
- 7 Benchmark programs
- 8 Linear-regression models
- 9 The design of experiments
- 10 Simulation and random-number generation
- 11 Queueing analysis
- Appendix A Glossary
- Appendix B Some useful probability distributions
- Appendix C Selected statistical tables
- Index
Preface
Published online by Cambridge University Press: 15 December 2009
Summary
“Education is not to reform students or amuse them or to make them expert technicians. It is to unsettle their minds, widen their horizons, inflame their intellects, teach them to think straight, if possible.”
Robert M. Hutchins

Goals
Most fields of science and engineering have well-defined tools and techniques for measuring and comparing phenomena of interest and for precisely communicating results. In the field of computer science and engineering, however, there is surprisingly little agreement on how to measure something as fundamental as the performance of a computer system. For example, the speed of an automobile can be readily measured in some standard units, such as meters traveled per second. The use of these standard units then allows the direct comparison of the speed of the automobile with that of an airplane, for instance. Comparing the performance of different computer systems has proven to be not so straightforward, however.
The problems begin with a lack of agreement in the field on even the seemingly simplest of ideas, such as the most appropriate metric to use to measure performance. Should this metric be MIPS, MFLOPS, QUIPS, or seconds, for instance? The problems then continue with many researchers obtaining and reporting results using questionable and, in many cases, incorrect methodologies. Part of this lack of rigor in measuring and reporting performance results stems from the fact that tremendous advances have been made in the performance of computers in the past several decades using an ad hoc, ‘seat-of-the-pants’ approach. Thus, there was little incentive for researchers to report results in a scientifically defensible way. Consequently, these researchers never taught their students sound scientific methodologies to use when conducting their own experiments.
Measuring Computer Performance: A Practitioner's Guide, pp. xi-xiv. Publisher: Cambridge University Press. Print publication year: 2000.