
Applications of Borovkov's Renovation Theory to Non-Stationary Stochastic Recursive Sequences and Their Control

Published online by Cambridge University Press:  01 July 2016

Eitan Altman*
Affiliation: INRIA
Arie Hordijk**
Affiliation: Leiden University

* Postal address: INRIA, 2004 Route des Lucioles, BP93, 06902 Sophia-Antipolis Cedex, France.
** Postal address: Department of Mathematics and Computer Science, Leiden University, P.O. Box 9512, 2300 RA Leiden, The Netherlands.

Abstract

In this paper we investigate the stability of non-stationary stochastic processes, which arise typically in applications of control. The setting is that of stochastic recursive sequences, which allows us to construct on one probability space stochastic processes that correspond to different initial states and even to different control policies; it does not require any Markovian assumptions. A natural stability criterion for such processes is that the influence of the initial state disappears after some finite time; in other words, starting from different initial states, the process couples after some finite time with the same limiting (not necessarily stationary or ergodic) stochastic process. We investigate this as well as other types of coupling, and present conditions under which they occur uniformly in some class of control policies. We then use the coupling results to establish new theoretical aspects in the theory of non-Markovian control.
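By way of illustration (not taken from the paper), the following minimal Python sketch simulates a simple stochastic recursive sequence, the Lindley-type waiting-time recursion W_{n+1} = max(0, W_n + xi_n), driven by one common noise sequence but started from two different initial states. The helper names lindley_path and coupling_time are hypothetical; the script merely reports the first index after which the two trajectories coincide, the kind of forward coupling (forgetting of the initial state after a finite time) that the abstract refers to.

import numpy as np

def lindley_path(w0, xi):
    """Stochastic recursive sequence W_{n+1} = max(0, W_n + xi_n)
    (a Lindley-type waiting-time recursion) driven by the noise xi."""
    w = np.empty(len(xi) + 1)
    w[0] = w0
    for n, x in enumerate(xi):
        w[n + 1] = max(0.0, w[n] + x)
    return w

def coupling_time(path_a, path_b):
    """First step from which the two paths agree for the rest of the
    simulated horizon; None if they still differ at the final step."""
    disagree = np.flatnonzero(path_a != path_b)
    if disagree.size == 0:
        return 0
    tau = int(disagree[-1]) + 1
    return tau if tau < len(path_a) else None

rng = np.random.default_rng(0)
# Increments with negative mean (service exceeds arrivals on average),
# the classical sufficient condition for stability of this recursion.
xi = rng.exponential(1.0, size=10_000) - rng.exponential(1.5, size=10_000)

# Same driving sequence, two different initial states: once the paths
# meet they stay together, so the influence of the initial state is
# forgotten after the reported (random but finite) coupling time.
print("coupling time:", coupling_time(lindley_path(0.0, xi),
                                      lindley_path(50.0, xi)))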

Type: General Applied Probability
Copyright: © Applied Probability Trust 1997

