Book contents
- Frontmatter
- Contents
- Preface
- PART I MACHINES AND COMPUTATION
- PART II LINEAR SYSTEMS
- 4 Building Blocks – Floating Point Numbers and Basic Linear Algebra
- 5 Direct Methods for Linear Systems and LU Decomposition
- 6 Direct Methods for Systems with Special Structure
- 7 Error Analysis and QR Decomposition
- 8 Iterative Methods for Linear Systems
- 9 Finding Eigenvalues and Eigenvectors
- PART III MONTE CARLO METHODS
- APPENDIX: PROGRAMMING EXAMPLES
- References
- Index
4 - Building Blocks – Floating Point Numbers and Basic Linear Algebra
Published online by Cambridge University Press: 12 December 2009
Summary
Many problems in scientific computation can be solved by reducing them to problems in linear algebra, and this turns out to be an extremely successful approach. Linear algebra problems often have a rich mathematical structure, which gives rise to a variety of highly efficient and well-optimized algorithms. Consequently, scientists frequently consider linear models, or linear approximations to nonlinear models, simply because the machinery for solving linear problems is so well developed.
Basic linear algebraic operations are so fundamental that many current computer architectures are designed to maximize the performance of linear algebraic computations. Even the list of the 500 fastest computers in the world (maintained at www.top500.org) ranks systems by their performance on the HPL benchmark, which solves a dense system of linear equations.
In 1973, Hanson, Krogh, and Lawson described the advantages of adopting a standard set of basic routines for problems in linear algebra. These routines, now known as the Basic Linear Algebra Subprograms (BLAS), are typically divided into three hierarchical levels: level 1 BLAS consists of vector–vector operations, level 2 BLAS of matrix–vector operations, and level 3 BLAS of matrix–matrix operations. The BLAS have been standardized with an application programming interface (API), which allows hardware vendors, compiler writers, and other specialists to provide programmers with highly optimized kernel routines adapted to specific architectures. Profiling tools indicate that many scientific computations spend most of their time in the sections of code that call the BLAS, so even small improvements in the BLAS can yield substantial speedups.
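To make the three levels concrete, the sketch below (not taken from the book) calls one representative routine from each level through the standard CBLAS C interface: cblas_daxpy (level 1, vector–vector), cblas_dgemv (level 2, matrix–vector), and cblas_dgemm (level 3, matrix–matrix). The data are arbitrary placeholders, and any conforming BLAS implementation (e.g. the reference CBLAS or OpenBLAS) can be linked in.

```c
/* A minimal sketch: one representative routine from each BLAS level,
 * called through the standard CBLAS interface (cblas.h).
 * Build (illustrative): cc blas_levels.c -lcblas   or   -lopenblas
 */
#include <stdio.h>
#include <cblas.h>

int main(void)
{
    double x[3] = {1.0, 2.0, 3.0};
    double y[3] = {4.0, 5.0, 6.0};

    /* Level 1 (vector-vector): y <- 2.0*x + y */
    cblas_daxpy(3, 2.0, x, 1, y, 1);

    /* Level 2 (matrix-vector): y <- A*x, with A a 3x3 matrix
     * stored in row-major order */
    double A[9] = {1.0, 0.0, 0.0,
                   0.0, 2.0, 0.0,
                   0.0, 0.0, 3.0};
    cblas_dgemv(CblasRowMajor, CblasNoTrans, 3, 3,
                1.0, A, 3, x, 1, 0.0, y, 1);

    /* Level 3 (matrix-matrix): C <- A*B */
    double B[9] = {1.0, 1.0, 1.0,
                   1.0, 1.0, 1.0,
                   1.0, 1.0, 1.0};
    double C[9];
    cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                3, 3, 3, 1.0, A, 3, B, 3, 0.0, C, 3);

    for (int i = 0; i < 9; i++)
        printf("C[%d] = %g\n", i, C[i]);
    return 0;
}
```

The level 3 routines are where optimized implementations concentrate their architecture-specific tuning: a matrix–matrix multiply performs O(n^3) work on O(n^2) data, so a well-blocked dgemm can keep data in cache and approach a machine's peak floating point rate.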
Type: Chapter
Information: An Introduction to Parallel and Vector Scientific Computation, pp. 103–125
Publisher: Cambridge University Press
Print publication year: 2006