Perturbation theory for unbounded Markov reward processes with applications to queueing
Published online by Cambridge University Press: 01 July 2016
Abstract
Consider a perturbation in the one-step transition probabilities and rewards of a discrete-time Markov reward process with an unbounded one-step reward function. A perturbation estimate is derived for the finite-horizon and average reward functions. Results from [3] are hereby extended to the unbounded case. The analysis is illustrated for one- and two-dimensional queueing processes by an M/M/1 queue and an overflow queueing model with an error bound in the arrival rate.
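The perturbation effect the abstract describes can be illustrated with a minimal numerical sketch for the M/M/1 example: perturb the arrival rate and compare the resulting average reward (here, the mean queue length under a truncated stationary distribution). This is only an illustration of the phenomenon, not the paper's bound; the function name, truncation level `N`, and parameter choices are assumptions for the example.

```python
def mm1_avg_queue_length(lam, mu, N=200):
    """Mean queue length of an M/M/1 queue truncated at N customers.

    The stationary distribution of the (truncated) M/M/1 chain is
    pi_n proportional to rho**n with rho = lam / mu; the average
    reward for the reward function r(n) = n is the mean queue length.
    """
    rho = lam / mu
    weights = [rho ** n for n in range(N + 1)]
    Z = sum(weights)
    return sum(n * w for n, w in zip(range(N + 1), weights)) / Z

# Baseline arrival rate vs. a small perturbation in the arrival rate.
base = mm1_avg_queue_length(0.50, 1.0)   # rho = 0.5, mean ~ rho/(1-rho) = 1.0
perturbed = mm1_avg_queue_length(0.51, 1.0)
print(perturbed - base)                  # small shift, of the order of the perturbation
```

For these values the shift in average reward stays of the same order as the perturbation in the arrival rate, which is the qualitative behaviour the perturbation estimates in the paper make precise (with explicit constants, and for unbounded reward functions).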
Type: Research Article
Copyright © Applied Probability Trust 1988
References
[2] Van Doorn, E. A. (1984) On the overflow process from a finite Markovian queue. Performance Eval. 4, 233–240.
[3] Van Dijk, N. M. and Puterman, M. L. (1988) Perturbation theory for Markov reward processes with applications to queueing systems. Adv. Appl. Prob. 20, 79–98.
[4] Hinderer, K. (1978) On approximate solutions of finite-stage dynamic programs. In Dynamic Programming and its Applications, ed. Puterman, M. L., Academic Press, New York.
[5] Kemeny, J. G., Snell, J. L. and Knapp, A. W. (1966) Denumerable Markov Chains. Van Nostrand, Princeton, NJ.
[6] Meyer, C. D. Jr. (1980) The condition of a finite Markov chain and perturbation bounds for the limiting probabilities. SIAM J. Alg. Disc. Math. 1, 273–283.
[7] Schweitzer, P. J. (1968) Perturbation theory and finite Markov chains. J. Appl. Prob. 5, 401–413.
[8] Tijms, H. C. (1986) Stochastic Modelling and Analysis: A Computational Approach. Wiley, New York.
[9] Whitt, W. (1978) Approximations of dynamic programs, I. Math. Operat. Res. 3, 231–243.