
Determination of a controllable set for a controlled dynamic system

Published online by Cambridge University Press:  17 February 2009

Shige Peng
Affiliation:
Department of Mathematics, Fudan University, Shanghai 200433, China.
Jiongmin Yong
Affiliation:
Department of Mathematics, Fudan University, Shanghai 200433, China.

Abstract


The controllable set of a controlled ordinary differential dynamic system, relative to a given target set, is defined. Under certain reasonable conditions, the controllable set is characterised as a level set of the unique viscosity solution of a Hamilton-Jacobi-Bellman equation. The result is applied to determine the asymptotically stable set of nonlinear autonomous differential equations.
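In the standard minimum-time formulation this characterisation can be sketched as follows; the symbols $f$, $U$, $K$, $T$, and $v$ below are illustrative and need not match the paper's exact notation or hypotheses.

```latex
Consider the controlled system $\dot x(t) = f(x(t), u(t))$ with control
constraint $u(t) \in U$ and closed target set $K \subset \mathbb{R}^n$,
and let
\[
  T(x) \;=\; \inf_{u(\cdot)} \inf\{\, t \ge 0 \;:\; x(t; x, u) \in K \,\}
\]
denote the minimum time needed to steer the state from $x$ into $K$.
The Kru\v{z}kov transform $v(x) = 1 - e^{-T(x)}$ is bounded even where
$T = \infty$ and, under suitable conditions, is the unique viscosity
solution of the Hamilton--Jacobi--Bellman equation
\[
  v(x) \;+\; \sup_{u \in U} \bigl\{ -f(x,u) \cdot Dv(x) \bigr\} \;-\; 1
  \;=\; 0 \quad \text{in } \mathbb{R}^n \setminus K,
  \qquad v = 0 \ \text{on } \partial K.
\]
The controllable set is then recovered as the sub-level set
\[
  \mathcal{R} \;=\; \{\, x \;:\; T(x) < \infty \,\}
             \;=\; \{\, x \;:\; v(x) < 1 \,\}.
\]
```

This is the formulation used in the minimum-time literature (e.g. Bardi's boundary value problem for the minimum time function); the paper's precise conditions and level-set characterisation may differ in detail.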

Type
Research Article
Copyright
Copyright © Australian Mathematical Society 1991
