6 - System Performance
Published online by Cambridge University Press: 29 September 2009
Summary
As was indicated in Chapter 1, there is a prima facie case for supposing that parallel computers can be both more powerful and more cost-effective than serial machines. The case rests upon the twin supports of increased computing power and, just as importantly, improved structure, both in the mapping to specific classes of problem and in such parameters as processor-to-memory bandwidth.
This chapter concerns what is probably the most contentious area of the field of parallel computing – how to quantify the performance of these allegedly superior machines. There are at least two significant reasons why this should be difficult. First, parallel computers, of whatever sort, are attempts to map structures more closely to some particular type of data or problem. This immediately invites the question – on what set of data and problems should their performance be measured? Should it be only the set for which a particular system was designed, in which case how can one machine be compared with another, or should a wider range of tasks be used, with the immediate corollary – which set? Contrast this with the accepted view of the general-purpose serial computer, where a few convenient acronyms such as MIPS and MFLOPS (see Section 6.1.3) purport to tell the whole story. (That they evidently do not do so casts an interesting sidelight on our own problem.)
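To make the contrast concrete, the serial-computer metrics mentioned above are simple rate calculations. The following sketch shows how MIPS and MFLOPS are conventionally defined; the counts and timing used are illustrative values only, not measurements of any particular machine, and the function names are chosen here for clarity rather than taken from the text.

```python
# Conventional definitions of the two rate metrics named in the text.
# All inputs below are hypothetical, purely for illustration.

def mips(instructions_executed: int, seconds: float) -> float:
    """Millions of instructions executed per second."""
    return instructions_executed / (seconds * 1e6)

def mflops(floating_point_ops: int, seconds: float) -> float:
    """Millions of floating-point operations per second."""
    return floating_point_ops / (seconds * 1e6)

# A hypothetical run: 5e8 instructions, of which 2e8 are
# floating-point operations, completed in 4 seconds.
print(mips(500_000_000, 4.0))    # 125.0 MIPS
print(mflops(200_000_000, 4.0))  # 50.0 MFLOPS
```

The single-number character of these metrics is exactly what the chapter questions: neither figure says anything about the problem structure or memory bandwidth on which a parallel machine's advantage depends.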
The second reason concerns the economic performance, or cost-effectiveness, of parallel systems.
- Type: Chapter
- Information: Parallel Computing: Principles and Practice, pp. 193-223
- Publisher: Cambridge University Press
- Print publication year: 1994