Book contents
- Frontmatter
- Contents
- Introduction
- Acknowledgments
- Part one Reachable sets and controllability
- Part two Optimal control theory
- 7 Linear systems with quadratic costs
- 8 The Riccati equation and quadratic systems
- 9 Singular linear quadratic problems
- 10 Time-optimal problems and Fuller's phenomenon
- 11 The maximum principle
- 12 Optimal problems on Lie groups
- 13 Symmetry, integrability, and the Hamilton-Jacobi theory
- 14 Integrable Hamiltonian systems on Lie groups: the elastic problem, its non-Euclidean analogues, and the rolling-sphere problem
- References
- Index
7 - Linear systems with quadratic costs
Published online by Cambridge University Press: 07 October 2009
Summary
Minimizing the integral of a quadratic form over the trajectories of a linear control problem, known as the linear quadratic problem, was one of the earliest optimal-control problems (Kalman, 1960). Rather than limit our attention to the positive-definite case, as is usually done in the control-theory literature, we shall consider the most general situation for which the question is well posed. The minimal assumptions under which this problem is treated reveal a rich theory that derives from the classic heritage of the calculus of variations and yet is sufficiently distinctive to describe new phenomena outside the scope of the classic theory. As such, this class of problems is a natural starting point for optimal control theory.
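The problem described above admits a standard formulation; the following sketch (notation assumed, not quoted from the chapter) states the linear quadratic problem in its usual form, where the chapter's general setting relaxes the definiteness assumptions on the cost matrices:

```latex
% Linear quadratic problem: minimize a quadratic cost over the
% trajectories of a linear control system.
\begin{aligned}
&\text{minimize}\quad
  \frac{1}{2}\int_0^T \bigl( x(t)^{\top} Q\, x(t)
    + u(t)^{\top} R\, u(t) \bigr)\, dt \\
&\text{subject to}\quad
  \dot{x}(t) = A\, x(t) + B\, u(t), \qquad x(0) = x_0 .
\end{aligned}
% The control-theory literature usually assumes Q \succeq 0 and
% R \succ 0; the chapter treats the most general well-posed case.
```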
This chapter contains a derivation of the “maximum principle” for this class of problems. The curves that satisfy the maximum principle are called extremal curves. The class of problems for which the Legendre condition holds is called “regular.” In the regular case, the maximum principle determines a single Hamiltonian, and the optimal solutions are the projections of the integral curves of the corresponding Hamiltonian vector field. The projections of these extremal curves remain optimal up to the first conjugate point.
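In the regular case the maximization over the control can be carried out explicitly. The computation sketched below is the standard one under the assumptions of the preceding formulation, not a quotation from the chapter:

```latex
% Hamiltonian of the LQ problem; the Legendre condition here is R > 0.
H(x, p, u) = p^{\top}(A x + B u)
  - \tfrac{1}{2}\bigl( x^{\top} Q x + u^{\top} R u \bigr).

% Maximizing over u: \partial H / \partial u = B^{\top} p - R u = 0
% yields the unique maximizing control
u^{*} = R^{-1} B^{\top} p,

% and substituting back gives the single quadratic Hamiltonian
H(x, p) = p^{\top} A x
  + \tfrac{1}{2}\, p^{\top} B R^{-1} B^{\top} p
  - \tfrac{1}{2}\, x^{\top} Q x,

% whose Hamiltonian vector field
%   \dot{x} = \partial H / \partial p, \quad
%   \dot{p} = -\partial H / \partial x
% generates the extremal curves whose projections are optimal
% up to the first conjugate point.
```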
Problems in the subclass for which the Legendre condition is not satisfied are called “singular.” For singular problems, the maximum principle determines an affine space of quadratic Hamiltonians and a space of linear constraints. The resolution of the corresponding constrained Hamiltonian system reveals a generalized optimal synthesis consisting of turnpike-type solutions. The complete description of these solutions makes use of higher-order Poisson brackets and is sufficiently complex to merit a separate chapter.
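For illustration, consider the extreme singular case $R = 0$ (a sketch under that assumption, not the chapter's general treatment). The Hamiltonian is then affine in the control, so the maximum condition determines constraints on the adjoint vector rather than a control law:

```latex
% With R = 0 the Hamiltonian is affine in u:
H(x, p, u) = p^{\top}(A x + B u) - \tfrac{1}{2}\, x^{\top} Q x,
% so maximality over unbounded u forces the linear constraint
B^{\top} p(t) \equiv 0 .

% Along the adjoint flow \dot{p} = -A^{\top} p + Q x,
% differentiating the constraint gives further conditions:
\frac{d}{dt}\bigl(B^{\top} p\bigr)
  = B^{\top}\bigl(-A^{\top} p + Q x\bigr) = 0 .
% Repeated differentiation (equivalently, higher-order Poisson
% brackets of the constraint with H) produces the chain of
% conditions that resolves the constrained Hamiltonian system.
```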
- Type: Chapter
- Book: Geometric Control Theory, pp. 199–227
- Publisher: Cambridge University Press
- Print publication year: 1996