5 - Processor Design
Published online by Cambridge University Press: 29 September 2009
Summary
The second substantially technical issue in the design of parallel computers concerns the complexity of each of the processing elements. To a very significant extent this will reflect the target functionality of the system under consideration. If a parallel computer is intended for use in scientific calculations, then in all likelihood floating-point operations will be a sine qua non and appropriate units will be incorporated. On the other hand, a neural network might be most efficiently implemented with threshold logic units which don't compute normal arithmetic functions at all. Often, the only modifier on such a natural relationship will be an economic or technological one. Some desirable configurations may be too expensive for a particular application, and the performance penalty involved in a non-optimum solution may be acceptable. Alternatively, the benefits of a particularly compact implementation may outweigh those of optimum performance. There may even be cases where the generality of application of the system may not permit an optimum solution to be calculated at all. In such cases, a variety of solutions may be supportable.
We will begin our analysis of processor complexity by considering the variables which are involved, and the boundaries of those variables.
Analogue or digital?
Although almost all modern computers are built from digital circuits, this was not always, and need not necessarily be, so. The original, overwhelming reason for using digital circuits was the vastly improved noise immunity which could be obtained, particularly important when using high-precision numbers.
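The noise-immunity argument can be illustrated with a small simulation, not drawn from the text itself: an analogue value passed through a chain of stages accumulates noise at every stage, whereas a digital signal is re-thresholded (regenerated) at each stage, so small perturbations are discarded and only the fixed quantisation error remains. The stage counts, noise level, and 16-bit word length below are illustrative assumptions.

```python
import random

def analogue_chain(value, stages, noise, rng):
    # Each analogue stage adds a small Gaussian noise term;
    # the errors accumulate over the whole chain.
    for _ in range(stages):
        value += rng.gauss(0.0, noise)
    return value

def digital_chain(bits, stages, noise, rng):
    # Each digital stage carries only 0/1 levels; thresholding at
    # each stage regenerates the signal, so noise well below the
    # threshold never accumulates.
    for _ in range(stages):
        bits = [1 if (b + rng.gauss(0.0, noise)) > 0.5 else 0 for b in bits]
    return bits

rng = random.Random(42)          # fixed seed for repeatability
x = 0.3141592653589793

# Analogue transmission: error grows with the number of stages.
drift = abs(analogue_chain(x, 1000, 0.01, rng) - x)

# Digital transmission: quantise x to 16 bits, send each bit
# through the same noisy chain, then reassemble the value.
n = int(x * (1 << 16))
bits = [(n >> i) & 1 for i in range(16)]
received = digital_chain(bits, 1000, 0.01, rng)
recovered = sum(b << i for i, b in enumerate(received)) / (1 << 16)

print("analogue drift:", drift)
print("digital error: ", abs(recovered - x))
```

With noise far below the logic threshold, the digital error is bounded by the 2^-16 quantisation step regardless of chain length, while the analogue error grows with the square root of the number of stages, which is essentially the trade-off the passage describes.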
Parallel Computing: Principles and Practice, pp. 166-192. Publisher: Cambridge University Press. Print publication year: 1994.