Book contents
- Frontmatter
- Contents
- Preface to the Second Edition
- Preface to the First Edition
- 1 Algorithms and Computers
- 2 Computer Arithmetic
- 3 Matrices and Linear Equations
- 4 More Methods for Solving Linear Equations
- 5 Regression Computations
- 6 Eigenproblems
- 7 Functions: Interpolation, Smoothing, and Approximation
- 8 Introduction to Optimization and Nonlinear Equations
- 9 Maximum Likelihood and Nonlinear Regression
- 10 Numerical Integration and Monte Carlo Methods
- 11 Generating Random Variables from Other Distributions
- 12 Statistical Methods for Integration and Monte Carlo
- 13 Markov Chain Monte Carlo Methods
- 14 Sorting and Fast Algorithms
- Author Index
- Subject Index
- References
4 - More Methods for Solving Linear Equations
Introduction
The previous chapter dwelled on the fundamental methods of matrix computations. In this chapter, more specialized methods are considered. The first topic is an alternative approach to solving general systems of equations – full elimination (often the method taught in beginning linear algebra courses), which has some advantages whenever the inverse is required. Next, our goal is reducing the effort in solving equations by exploiting the structure of a matrix. One such structure is bandedness, and the Cholesky factorization of a banded positive definite matrix is then applied to time-series computations, cutting the work from O(n³) to O(n). Next is the Toeplitz structure, also arising in time-series analysis, where the work can be reduced to O(n²) in a more general setting. Sparse matrix methods are designed to exploit unstructured patterns of zeros and so avoid unneeded work. Finally, iterative methods are discussed, beginning with iterative improvement.
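To make the banded claim concrete, here is a minimal Python sketch (the function name and NumPy-based setup are illustrative, not from the text) of the simplest banded case: a symmetric positive definite tridiagonal system. Its Cholesky factor is bidiagonal, so the factor-and-solve costs O(n) operations rather than the O(n³) of a dense factorization.

```python
import numpy as np

def tridiag_cholesky_solve(d, e, b):
    """Solve Ax = b in O(n) for symmetric positive definite tridiagonal A.

    Illustrative sketch: A has main diagonal d (length n) and
    sub/superdiagonal e (length n - 1); its Cholesky factor L is
    lower bidiagonal, with diagonal l and subdiagonal m.
    """
    n = len(d)
    l = np.empty(n)        # diagonal of L
    m = np.empty(n - 1)    # subdiagonal of L
    l[0] = np.sqrt(d[0])
    for i in range(1, n):
        m[i - 1] = e[i - 1] / l[i - 1]          # matches A[i, i-1]
        l[i] = np.sqrt(d[i] - m[i - 1] ** 2)    # matches A[i, i]
    # Forward solve L y = b.
    y = np.empty(n)
    y[0] = b[0] / l[0]
    for i in range(1, n):
        y[i] = (b[i] - m[i - 1] * y[i - 1]) / l[i]
    # Back solve L^T x = y.
    x = np.empty(n)
    x[-1] = y[-1] / l[-1]
    for i in range(n - 2, -1, -1):
        x[i] = (y[i] - m[i] * x[i + 1]) / l[i]
    return x
```

The same idea extends to any bandwidth w: each elimination step touches only the w entries nearest the diagonal, giving O(nw²) work in place of O(n³).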
Full Elimination with Complete Pivoting
Gaussian elimination creates an upper triangular matrix, column by column, by adding multiples of a row to the rows below it and placing zeros below the diagonal of each column. An alternative is to place zeros throughout that column – above the pivot as well as below it – with the pivot position itself scaled to equal one, so that the matrix is reduced all the way to the identity rather than merely to a triangle.
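A minimal Python sketch of full elimination with complete pivoting may help fix the idea. The function name is hypothetical, and the bookkeeping (how the column swaps are recorded and undone) is one reasonable choice rather than the text's own implementation: at each step, the largest remaining entry in the trailing submatrix is moved to the pivot position by a row swap and a column swap, the pivot row is scaled so the pivot equals one, and that column is zeroed both above and below the pivot.

```python
import numpy as np

def full_elim_complete_pivot(A, b):
    """Solve Ax = b by full elimination with complete pivoting (sketch)."""
    A = np.array(A, dtype=float)
    b = np.array(b, dtype=float)
    n = A.shape[0]
    perm = np.arange(n)                    # records column (variable) swaps
    for k in range(n):
        # Complete pivoting: largest |entry| in the trailing submatrix.
        sub = np.abs(A[k:, k:])
        i, j = divmod(int(sub.argmax()), n - k)
        i, j = i + k, j + k
        if A[i, j] == 0.0:
            raise np.linalg.LinAlgError("matrix is singular")
        A[[k, i], :] = A[[i, k], :]        # row swap (with b)
        b[[k, i]] = b[[i, k]]
        A[:, [k, j]] = A[:, [j, k]]        # column swap (reorders unknowns)
        perm[[k, j]] = perm[[j, k]]
        b[k] /= A[k, k]                    # scale pivot row so pivot = 1
        A[k, :] /= A[k, k]
        for r in range(n):                 # zero the column above AND below
            if r != k:
                b[r] -= A[r, k] * b[k]
                A[r, :] -= A[r, k] * A[k, :]
    x = np.empty(n)
    x[perm] = b                            # undo the column permutation
    return x
```

Because column swaps reorder the unknowns, the permutation must be recorded and inverted at the end. Complete pivoting spends O(n³) comparisons on the searches, over and above the arithmetic, in exchange for strong numerical stability.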