Book contents
- Frontmatter
- Contents
- List of Figures
- List of Tables
- Preface
- 1 Concurrent Processes
- 2 Basic Models of Parallel Computation
- 3 Elementary Parallel Algorithms
- 4 Designing Parallel Algorithms
- 5 Architectures of Parallel Computers
- 6 Message-passing Programming
- 7 Shared-memory Programming
- Solutions to Selected Exercises
- Glossary
- References
- Index
6 - Message-passing Programming
Published online by Cambridge University Press: 06 January 2017
INTRODUCTION
One type of parallel processing is distributed computing. It can be conducted in integrated computers with distributed memory, or in clusters, which are systems of homogeneous or heterogeneous networked computers. In distributed computing the tasks communicate via communication channels (or links). The channels form an interconnection network between processors or computers. The processors or computers, which are the vertices of the network, perform computing tasks and also send and receive messages.
In this chapter we explore how to implement parallel programs consisting of tasks that cooperate with each other by message passing. Parallel programs should be written in a suitable programming language. Probably the only language developed specifically to describe parallel computing with message passing was occam. This language, proposed by May et al. at the Inmos company, was based on the CSP notation (an acronym for Communicating Sequential Processes) defined by Hoare. In the 1980s occam was used as the programming language for transputers, large-scale integration systems each combining a processor and four communication channels. As computer hardware developed, it turned out that occam, due to certain weaknesses and restrictions, was insufficient for describing distributed computing. Nowadays, such computations are often carried out using the C or Fortran languages augmented with functions supporting the cooperation of parallel processes. The most popular libraries of such functions are PVM (Parallel Virtual Machine) and MPI (Message Passing Interface).
The PVM library was developed at Oak Ridge National Laboratory. It permits the creation and execution of parallel programs in heterogeneous networks consisting of sequential and parallel computers. Another popular and largely universal library used to build distributed programs is MPI. It can be applied together with the OpenMP interface in computers with distributed memory (see Section 5.4.3), in particular in clusters composed of multicore processors or SMP nodes (see Sections 5.4.2 and 5.4.3, and Chapter 7). The library is highly portable, enabling scalable programs to be built for applications where achieving high computational performance is essential.
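To give a flavour of the style of programming covered in this chapter, the following minimal sketch (illustrative, not taken from the book) shows message passing in C with MPI: process 0 sends a short greeting to process 1 using the standard point-to-point calls MPI_Send and MPI_Recv. It assumes an MPI implementation is installed and the program is launched with at least two processes.

```c
/* A minimal MPI message-passing sketch: process 0 sends a greeting
   to process 1, which receives and prints it. Assumes at least two
   processes; any further processes do nothing. */
#include <mpi.h>
#include <stdio.h>
#include <string.h>

int main(int argc, char *argv[]) {
    int rank;
    char message[32];

    MPI_Init(&argc, &argv);               /* initialize the MPI environment */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* rank identifies this process */

    if (rank == 0) {
        strcpy(message, "Hello from process 0");
        /* blocking send of the string (including '\0') to process 1, tag 0 */
        MPI_Send(message, strlen(message) + 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        /* blocking receive of a matching message from process 0 */
        MPI_Recv(message, sizeof message, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        printf("Process 1 received: %s\n", message);
    }

    MPI_Finalize();                       /* shut down MPI */
    return 0;
}
```

A typical build and run, assuming the usual wrapper tools, is mpicc hello.c -o hello followed by mpirun -np 2 ./hello.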
Introduction to Parallel Computing, pp. 214–242. Publisher: Cambridge University Press. Print publication year: 2017.