
Dynamic programming principle for stochastic recursive optimal control problem with delayed systems

Published online by Cambridge University Press:  16 January 2012

Li Chen
Affiliation:
Department of Mathematics, China University of Mining Technology, Beijing 100083, P.R. China. [email protected]
Zhen Wu
Affiliation:
School of Mathematics, Shandong University, Jinan 250100, P.R. China; [email protected]

Abstract

In this paper, we study one kind of stochastic recursive optimal control problem for systems described by stochastic differential equations with delay (SDDEs). In our framework, both the dynamics of the system and the recursive utility depend on the past path segment of the state process in a general form. We establish the dynamic programming principle for this kind of optimal control problem and show that the value function is the viscosity solution of the corresponding infinite-dimensional Hamilton-Jacobi-Bellman partial differential equation.
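For orientation, the following is a schematic formulation of the type of problem described above; the notation (coefficients b, σ, f, Φ, delay δ, backward semigroup G) is generic and not taken verbatim from the paper. The controlled state solves an SDDE whose coefficients depend on the past path segment, the recursive utility is generated by a backward stochastic differential equation, and the value function is defined on initial path data:

\[
\begin{aligned}
& dX(s) = b\big(s, X_s, u(s)\big)\,ds + \sigma\big(s, X_s, u(s)\big)\,dW(s), \qquad s \in [t, T],\\
& X_t = \varphi, \qquad X_s := \{X(s+\theta) : \theta \in [-\delta, 0]\} \quad \text{(past path segment)},\\
& -dY(s) = f\big(s, X_s, Y(s), Z(s), u(s)\big)\,ds - Z(s)\,dW(s), \qquad Y(T) = \Phi(X_T),\\
& J(t, \varphi; u) := Y(t), \qquad V(t, \varphi) := \sup_{u \in \mathcal{U}[t,T]} J(t, \varphi; u).
\end{aligned}
\]

Under this notation the dynamic programming principle reads, for small \(\tau > 0\),
\[
V(t,\varphi) = \sup_{u \in \mathcal{U}[t, t+\tau]} G^{t,\varphi;u}_{t, t+\tau}\Big[ V\big(t+\tau, X^{t,\varphi;u}_{t+\tau}\big) \Big],
\]
where \(G^{t,\varphi;u}_{t, t+\tau}[\cdot]\) denotes the backward semigroup generated by the BSDE with driver f, in the sense of Peng's generalized dynamic programming principle.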

Type
Research Article
Copyright
© EDP Sciences, SMAI, 2012

