Book contents
- Frontmatter
- Contents
- List of Figures
- Preface
- Acknowledgments
- Glossary of Notations
- 1 Introduction
- 2 Information Dispersal
- 3 Interconnection Networks
- 4 Introduction to Parallel Routing
- 5 Fault-Tolerant Routing Schemes and Analysis
- 6 Simulation of the PRAM
- 7 Asynchronism and Sensitivity
- 8 On-Line Maintenance
- 9 A Fault-Tolerant Parallel Computer
- Bibliography
- Index
7 - Asynchronism and Sensitivity
Published online by Cambridge University Press: 03 October 2009
Summary
The Lacedæmonians [advanced] slowly and to the music of
many flute-players […], meant to make them advance evenly,
stepping in time, without breaking their order, as large
armies are apt to do in the moment of engaging.
—Thucydides

The routing algorithms in Chapter 5 can be converted into efficient asynchronous algorithms by replacing the global clock with a synchronization scheme based on message passing. We also demonstrate that the asynchronous fsra has low sensitivity to variations in link and processor speeds.
Introduction
The assumption of synchronism often greatly simplifies the design of algorithms, be they sequential or parallel. Many computation models — for example, RAM (“Random Access Machine”) [5] in the sequential setting and PRAM in the parallel setting — assume the existence of a global clock. But this assumption becomes less desirable as the number of processors increases. For one thing, a global clock introduces a single point of failure. A global clock also restrains each processor's degree of autonomy and renders the machine unable to exploit differences in running speed [42, 192], limiting the overall speed, so to speak, to that of the “slowest” component instead of the “average” one, thus wasting cycles. Tight synchronization also limits the size of the parallel computer, since it takes time to distribute the clock signal to the whole system [316].
Our routing schemes in Chapter 5 proceed in epochs and assume synchronism. In fact, the very definition of ECS assumes a global clock to synchronize epochs. We show in this chapter that with synchronization done via message passing, ECSs can be made asynchronous without loss of efficiency and without global control. Much work has been done in this area; see, for example, [31, 32, 33].
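To make the idea of epoch synchronization without a global clock concrete, the following is a minimal sketch, not the book's actual scheme: processors on a hypothetical ring topology each run a fixed number of epochs, announce the end of each epoch to their neighbors by message, and block until both neighbors have announced the same epoch. The names `N`, `EPOCHS`, `processor`, and the token format are all illustrative assumptions.

```python
import collections
import queue
import threading

N = 4        # hypothetical processor count (ring topology); not from the book
EPOCHS = 3   # hypothetical number of routing epochs

inboxes = [queue.Queue() for _ in range(N)]   # one message inbox per processor
log = []                                      # (pid, epoch) records, in real-time order
log_lock = threading.Lock()

def processor(pid):
    """Run EPOCHS epochs, synchronizing with ring neighbors purely by messages."""
    left, right = (pid - 1) % N, (pid + 1) % N
    pending = collections.Counter()  # buffers tokens from a neighbor one epoch ahead
    for epoch in range(EPOCHS):
        # ... this epoch's routing work would go here ...
        with log_lock:
            log.append((pid, epoch))
        # announce completion of this epoch to both neighbors
        inboxes[left].put(epoch)
        inboxes[right].put(epoch)
        # block until both neighbors have announced this same epoch;
        # a faster neighbor can be at most one epoch ahead, so buffer its token
        while pending[epoch] < 2:
            pending[inboxes[pid].get()] += 1

threads = [threading.Thread(target=processor, args=(i,)) for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Note the design property this local scheme buys: no processor can start epoch e before both of its neighbors have finished epoch e - 1, so neighbors stay within one epoch of each other without any global controller, and a processor sends its tokens before waiting, so the scheme cannot deadlock.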
Information Dispersal and Parallel Computation, pp. 110–122. Publisher: Cambridge University Press. Print publication year: 1993.