Book contents
- Frontmatter
- Contents
- Preface
- Acknowledgments
- 1 Scientific Computing and Simulation Science
- 2 Basic Concepts and Tools
- 3 Approximation
- 4 Roots and Integrals
- 5 Explicit Discretizations
- 6 Implicit Discretizations
- 7 Relaxation: Discretization and Solvers
- 8 Propagation: Numerical Diffusion and Dispersion
- 9 Fast Linear Solvers
- 10 Fast Eigensolvers
- A C++ Basics
- B MPI Basics
- Bibliography
- Index
7 - Relaxation: Discretization and Solvers
Published online by Cambridge University Press: 05 October 2013
Summary
In this chapter we present discretizations for mixed initial value/boundary value problems (IVP/BVP) and the relaxation iterative solvers associated with such discretizations. The analogy between iterative procedures and evolution equations, especially of parabolic (diffusion) type, was recognized about two centuries ago, but a rigorous connection was not established until the mid-1950s.
In the following, we first consider various mixed discretizations, and subsequently we derive some of the most popular iterative solvers. Our emphasis is on parallel computing: a good algorithm is not simply one that converges fast but one that is also parallelizable. The Jacobi algorithm is such an example. Forgotten for years in favor of the Gauss-Seidel algorithm, which converges twice as fast for about the same computational work, it was rediscovered during the past two decades because it is trivially parallelizable; today it is used mostly as a preconditioner for multigrid methods. The Gauss-Seidel algorithm, although faster on a serial computer, is not parallelizable unless a special multicolor ordering is employed, as we explain in Section 7.2.4. Building on these two basic algorithms, we present the multigrid method, which exploits their good convergence properties in a smart, adaptive way.
On the parallel computing side, we introduce three new commands: MPI_Gather, MPI_Allgather, and MPI_Scatter. Both MPI_Gather and MPI_Allgather gather information from a collection of processes, whereas MPI_Scatter distributes data from one process to a collection of processes. In addition to providing syntax and usage information, we demonstrate how the gathering functions apply to a parallel implementation of the Jacobi method.
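As an illustration of how the gathering functions fit into a parallel Jacobi solver, here is a minimal sketch (an assumption on our part, not the book's implementation) in which each rank updates a contiguous block of unknowns for the same 1D Poisson system and then uses MPI_Allgather so that every rank holds the full updated vector for the next sweep. The global size n is assumed divisible by the number of processes.

```cpp
#include <mpi.h>
#include <cstdio>
#include <vector>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const int n = 8;                  // global unknowns (assumption)
    const int nl = n / size;          // unknowns owned by this rank
    const double h = 1.0 / (n + 1);
    std::vector<double> x(n, 0.0), xloc(nl, 0.0);

    for (int k = 0; k < 500; ++k) {
        // Local Jacobi update of this rank's block, reading the full old x.
        for (int i = 0; i < nl; ++i) {
            int g = rank * nl + i;    // global index of local unknown i
            double left  = (g > 0)     ? x[g - 1] : 0.0;
            double right = (g < n - 1) ? x[g + 1] : 0.0;
            xloc[i] = 0.5 * (h * h + left + right);
        }
        // Every rank contributes its nl entries; every rank receives all of x.
        MPI_Allgather(xloc.data(), nl, MPI_DOUBLE,
                      x.data(),    nl, MPI_DOUBLE, MPI_COMM_WORLD);
    }

    // MPI_Gather would instead collect the blocks on a single root:
    //   MPI_Gather(xloc.data(), nl, MPI_DOUBLE,
    //              x.data(),    nl, MPI_DOUBLE, 0, MPI_COMM_WORLD);
    // and MPI_Scatter performs the inverse, distributing blocks from the root.

    if (rank == 0) std::printf("x[n/2 - 1] = %g\n", x[n / 2 - 1]);
    MPI_Finalize();
    return 0;
}
```

MPI_Allgather is the natural choice here because Jacobi needs neighboring values from the previous iterate on every rank; in practice one would exchange only the block boundaries with point-to-point messages, but the collective version keeps the sketch short.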
- Type: Chapter
- Information: Parallel Scientific Computing in C++ and MPI: A Seamless Approach to Parallel Algorithms and their Implementation, pp. 347–411. Publisher: Cambridge University Press. Print publication year: 2003