Book contents
- Frontmatter
- Contents
- List of illustrations
- Preface
- 1 Introduction
- 2 Problems, algorithms, and solutions
- 3 Transformation of problems
- Part I Linear simultaneous equations
- Part II Non-linear simultaneous equations
- 6 Case studies
- 7 Algorithms
- 8 Solution of the case studies
- Part III Unconstrained optimization
- Part IV Equality-constrained optimization
- Part V Inequality-constrained optimization
- References
- Index
7 - Algorithms
Published online by Cambridge University Press: 03 December 2009
Summary
In Chapter 5, we introduced triangular factorization and forwards and backwards substitution as a direct algorithm for solving linear equations. In a finite number of steps (assuming infinite precision arithmetic) we could solve the linear equations exactly. We have already remarked in Section 2.4.1.2 that no direct algorithm exists for general non-linear simultaneous equations.
Our approach to solving non-linear simultaneous equations will be iterative. We will start with an initial guess of the solution and try to successively improve on the guess. We will continue until the current iterate becomes sufficiently close to the solution according to a suitable criterion. In general, we will not be able to provide an exact solution, even if we could calculate in infinite precision. The theme of iterative progress towards a solution will recur throughout the rest of the book.
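The iterative strategy described above can be sketched in a few lines. The following Python fragment is a hypothetical illustration, not from the book: `update`, the tolerance, and the iteration cap are placeholder choices, and the stopping criterion used here (closeness of successive iterates) is just one of the suitable criteria the text alludes to.

```python
import math

def iterate(update, x0, tol=1e-8, max_iter=100):
    """Start from an initial guess x0 and repeatedly apply `update`
    until successive iterates differ by less than tol (or until
    max_iter iterations have been performed)."""
    x = x0
    for _ in range(max_iter):
        x_next = update(x)
        if abs(x_next - x) < tol:  # stopping criterion: iterates sufficiently close
            return x_next
        x = x_next
    return x

# Illustrative use: fixed-point iteration for x = cos(x)
root = iterate(math.cos, 1.0)
```

Note that, consistent with the remarks above, the loop returns an approximate answer once the criterion is met; it does not (and in general cannot) produce an exact solution in finitely many steps.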
In Section 7.1, we discuss an iterative algorithm called the Newton–Raphson method. Then, in Section 7.2, we introduce some variations on the basic Newton–Raphson method. In Section 7.3 we discuss local convergence of the iterates to a solution, while in Section 7.4 we discuss global convergence. Finally, we discuss sensitivity and large change analysis in Section 7.5.
The key issues discussed in this chapter are:
- approximating non-linear functions by a linear approximation about a point,
- using the linear approximation to improve our estimate of the solution of the non-linear equations and then re-linearizing about the improved solution in an iterative algorithm,
- convergence of the sequence of iterates produced by repeated re-linearization,
- variations that reduce computational effort, and
- sensitivity and large change analysis.
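The first three issues can be illustrated with the scalar case. The sketch below is a hypothetical one-dimensional Newton–Raphson iteration, assuming the derivative is available in closed form; the function, derivative, tolerance, and starting point are all placeholder choices for illustration, not taken from the book.

```python
def newton_raphson(f, fprime, x0, tol=1e-10, max_iter=50):
    """Solve f(x) = 0 by repeated linearization: at each iterate x,
    approximate f by its tangent line at x and step to the root of
    that linear approximation."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:          # iterate is (approximately) a solution
            return x
        x = x - fx / fprime(x)     # root of the linearization; re-linearize here
    return x

# Illustrative use: solve x**2 - 2 = 0, i.e. approximate sqrt(2)
root = newton_raphson(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.0)
```

Each pass of the loop performs one linearization and one update, so the sequence of iterates it produces is exactly the object whose convergence Sections 7.3 and 7.4 analyze.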
- Type: Chapter
- Information: Applied Optimization: Formulation and Algorithms for Engineering Systems, pp. 285–333
- Publisher: Cambridge University Press
- Print publication year: 2006