Book contents
- Frontmatter
- Contents
- Preface
- 1 Linear filtering theory
- 2 Optimal stochastic control for linear dynamic systems with quadratic payoff
- 3 Optimal control of linear stochastic systems with an exponential-of-integral performance index
- 4 Non linear filtering theory
- 5 Perturbation methods in non linear filtering
- 6 Some explicit solutions of the Zakai equation
- 7 Some explicit controls for systems with partial observation
- 8 Stochastic maximum principle and dynamic programming for systems with partial observation
- 9 Existence results for stochastic control problems with partial information
- References
- Index
8 - Stochastic maximum principle and dynamic programming for systems with partial observation
Published online by Cambridge University Press: 16 September 2009
Summary
Introduction
The stochastic maximum principle and dynamic programming are among the main methods of stochastic control theory. It is also possible to develop such methods for partially observable systems. This is to be expected a priori, since the stochastic control problem with partial observation can be reduced to a stochastic control problem with complete observation, with the reservation that the fully observed system to be controlled is not finite dimensional but infinite dimensional. With this remark in mind, we see that we are led to use the maximum principle or dynamic programming for an infinite-dimensional system. The situation is very similar to that for systems governed by partial differential equations.
We shall not attempt to cover all possible cases in one theorem. Our approach will be to reduce the problem to the control, with full observation, of a stochastic PDE, namely the Zakai equation. This equation will be formulated as a differential equation in a Hilbert space, using variational techniques (see Section 4.7). In this framework, it is then natural to use the same variational techniques to derive necessary conditions of optimality. One advantage of this approach is that it is mostly analytic. On the other hand, the case of unbounded coefficients (for instance the linear quadratic case) is not easily covered in this formulation without substantial technical transformations. However, the result is formally applicable without any difficulty.
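To fix ideas, the reduction described above can be sketched in standard notation (the symbols below are illustrative and not defined in this excerpt): the unnormalized conditional density $q(t)$ of the state given the observations satisfies the Zakai equation, and the original partially observed cost rewrites as a functional of $q$ alone:

```latex
% Zakai equation for the unnormalized conditional density q(t,x)
% (illustrative notation: A(u) is the generator of the signal,
%  h the observation function, y the observation process)
dq(t) = A^*(u_t)\, q(t)\, dt + h\, q(t)\, dy_t , \qquad q(0) = \pi_0 .

% The partially observed cost
%   J(u) = E \int_0^T l(x_t, u_t)\, dt + E\, \Phi(x_T)
% becomes a fully observed cost for the q-dynamics:
\tilde J(u) = E \int_0^T \bigl( q(t),\, l(\cdot, u_t) \bigr)\, dt
            + E\, \bigl( q(T),\, \Phi \bigr),
```

where $(\cdot,\cdot)$ denotes the pairing of $q$ with functions of the state variable. The controlled state is now $q$ itself, which lives in a function (Hilbert) space; this is the source of the infinite-dimensional character of the problem.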
The present developments improve previous results of mine (see Bensoussan 1983) and simplify the derivation. Besides the application to the LQG problem, we consider the situation of the separation principle and obtain it in a new way, through the stochastic maximum principle.
Type: Chapter
Information: Stochastic Control of Partially Observable Systems, pp. 268-325. Publisher: Cambridge University Press. Print publication year: 1992.