Book contents
- Frontmatter
- Contents
- List of Figures
- List of Tables
- Preface
- 1 Concurrent Processes
- 2 Basic Models of Parallel Computation
- 3 Elementary Parallel Algorithms
- 4 Designing Parallel Algorithms
- 5 Architectures of Parallel Computers
- 6 Message-passing Programming
- 7 Shared-memory Programming
- Solutions to Selected Exercises
- Glossary
- References
- Index
4 - Designing Parallel Algorithms
Published online by Cambridge University Press: 06 January 2017
Summary
STEPS OF DESIGNING
As we stated in Section 3.1, a parallel algorithm is composed of a number of algorithm components. These components are intended to solve the subproblems into which the original problem has been divided. Normally, the components cooperate with each other using intermediate results of computation, as well as synchronize their actions. The desired output of the parallel algorithm is a composition of the results computed by the algorithm components. Since the subproblems are solved by components of a parallel program called tasks, we will use the terms subproblem and task interchangeably.
The process of designing a parallel algorithm consists of four steps:
□ decomposition of a computational problem into tasks that can be executed simultaneously, and development of sequential algorithms for individual tasks;
□ analysis of computation granularity;
□ minimizing the cost of the parallel algorithm;
□ assigning tasks to processors executing the parallel algorithm.
The following sections characterize these activities in more detail. In addition, we present some general methods for designing parallel algorithms, such as the method of data parallelism, the method of functional parallelism, the method of task pool, the method of master and slaves, and the method of pipelining. These methods differ from each other with respect to the ways of exploiting potential parallelism in the computations that solve the problem, as well as with respect to the ways of assigning tasks to processors for execution.
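As an illustration of one of these methods, the task pool, the sketch below keeps subproblems in a shared queue from which a fixed set of workers repeatedly draw work until the pool is empty. This is a minimal Python rendering, not the book's notation; the function names and the placeholder computation (squaring each item) are assumptions made for the example, and threads stand in for the processors.

```python
import queue
import threading

def worker(tasks, results):
    # Each worker repeatedly takes a task from the shared pool
    # until the pool is empty, then terminates.
    while True:
        try:
            item = tasks.get_nowait()
        except queue.Empty:
            return
        results.put(item * item)   # placeholder computation: square the item

def run_task_pool(items, n_workers=3):
    tasks, results = queue.Queue(), queue.Queue()
    for it in items:
        tasks.put(it)
    workers = [threading.Thread(target=worker, args=(tasks, results))
               for _ in range(n_workers)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    out = []
    while not results.empty():
        out.append(results.get())
    return sorted(out)   # completion order is nondeterministic
```

Because workers take tasks on demand, the pool balances load automatically when tasks have unequal running times, which is the main appeal of this method over a fixed static assignment.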
PROBLEM DECOMPOSITION
4.2.1 Types of Decomposition
Dividing a computational problem into tasks, called decomposition or partitioning, can be done in several ways. We will discuss them using examples.
Example 4.1 Evaluation of test results
Let us examine the problem of evaluating multiple-choice test results. Suppose that the test results are written on n sheets, where each sheet contains m questions with marked answers.
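Since each sheet can be scored independently of the others, the problem decomposes naturally into n tasks, one per sheet. The sketch below illustrates this decomposition in Python; the answer key, the function names, and the use of a thread pool in place of separate processors are assumptions made for the example.

```python
from concurrent.futures import ThreadPoolExecutor

ANSWER_KEY = ["b", "a", "d", "c"]   # hypothetical key for m = 4 questions

def score_sheet(sheet):
    # One task: count the marked answers on a single sheet
    # that agree with the answer key.
    return sum(marked == correct
               for marked, correct in zip(sheet, ANSWER_KEY))

def score_all(sheets, workers=4):
    # Each of the n sheets is an independent task; map distributes
    # the tasks among workers and collects scores in sheet order.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(score_sheet, sheets))
```

For example, `score_all([["b", "a", "d", "c"], ["a", "a", "d", "d"]])` scores two sheets in parallel, yielding a full score for the first sheet and two points for the second.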
Introduction to Parallel Computing, pp. 125-174. Publisher: Cambridge University Press. Print publication year: 2017.