
Nonzero-sum games for continuous-time Markov chains with unbounded discounted payoffs

Part of: Game theory

Published online by Cambridge University Press:  14 July 2016

Xianping Guo*
Affiliation:
CINVESTAV and Zhongshan University
Onésimo Hernández-Lerma**
Affiliation:
CINVESTAV
* Postal address: Department of Mathematics, Zhongshan University, Guangzhou, 510275, PR China. Email address: [email protected]
** Postal address: Department of Mathematics, CINVESTAV-IPN, A. Postal 14-740, México DF 07000, Mexico. Email address: [email protected]

Abstract


In this paper, we study two-person nonzero-sum games for continuous-time Markov chains with discounted payoff criteria and Borel action spaces. The transition rates are possibly unbounded, and the payoff functions might have neither upper nor lower bounds. We give conditions that ensure the existence of Nash equilibria in stationary strategies. For the zero-sum case, we prove the existence of the value of the game, and also provide a recursive way to compute it, or at least to approximate it. Our results are applied to a controlled queueing system. We also show that if the transition rates are uniformly bounded, then a continuous-time game is equivalent, in a suitable sense, to a discrete-time Markov game.
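The equivalence claimed in the last sentence rests on uniformization: when the transition rates are uniformly bounded by some constant Λ, the continuous-time model with discount rate α corresponds to a discrete-time model with transition kernel P = I + Q/Λ and discount factor Λ/(α + Λ). The sketch below illustrates this in the simplest single-player (controlled Markov chain) special case; the rate matrices, rewards, and discount rate are illustrative numbers chosen here, not data from the paper. The fixed point of discrete-time value iteration is checked against the continuous-time optimality equation αV(i) = max_a [r(i,a) + Σ_j q(i,j,a)V(j)].

```python
import numpy as np

# Illustrative data (not from the paper): two actions, three states.
alpha = 0.1                      # continuous-time discount rate
# Transition rate matrices q[a, i, j]: rows sum to zero,
# off-diagonal entries are nonnegative.
q = np.array([
    [[-2.0,  1.5,  0.5],
     [ 1.0, -3.0,  2.0],
     [ 0.5,  0.5, -1.0]],
    [[-1.0,  0.5,  0.5],
     [ 2.0, -4.0,  2.0],
     [ 1.0,  1.0, -2.0]],
])
r = np.array([[1.0, 0.0],        # reward r(i, a)
              [2.0, 1.0],
              [0.0, 3.0]])

# Uniformization constant: a bound on the total exit rates.
Lam = np.max(-q[:, np.arange(3), np.arange(3)])
P = np.eye(3) + q / Lam          # discrete-time kernels, one per action
beta = Lam / (alpha + Lam)       # equivalent discrete discount factor

# Value iteration on the equivalent discrete-time model.
V = np.zeros(3)
for _ in range(500):
    Q_sa = r.T / (alpha + Lam) + beta * (P @ V)   # shape (actions, states)
    V = Q_sa.max(axis=0)

# The discrete fixed point also solves the continuous-time optimality
# equation alpha*V(i) = max_a [ r(i,a) + sum_j q(a,i,j) V(j) ].
resid = alpha * V - (r.T + q @ V).max(axis=0)
print(np.abs(resid).max())       # small up to value-iteration error
```

The same change of variables underlies the game-theoretic statement, with the inner maximization replaced by the minimax value of a matrix game at each state; this sketch only verifies the discount-factor correspondence in the one-player case.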

Type
Research Papers
Copyright
© Applied Probability Trust 2005 
