
Potentially unlimited variance reduction in importance sampling of Markov chains

Published online by Cambridge University Press:  01 July 2016

Sigrún Andradóttir*
Affiliation:
University of Wisconsin-Madison
Daniel P. Heyman*
Affiliation:
Bellcore
Teunis J. Ott*
Affiliation:
Bellcore
* Postal address: Department of Industrial Engineering, University of Wisconsin-Madison, 1513 University Avenue, Madison, WI 53706, USA. (Current address: School of Industrial and Systems Engineering, Georgia Institute of Technology, Atlanta, Georgia 30332, USA.)
** Postal address: Bellcore, 445 South Street, Morristown, NJ 07962, USA.

Abstract

We consider the application of importance sampling in steady-state simulations of finite Markov chains. We show that, for a large class of performance measures, there is a choice of the alternative transition matrix for which the ratio of the variance of the importance sampling estimator to the variance of the naive simulation estimator converges to zero as the sample path length goes to infinity. Obtaining this ‘optimal’ transition matrix involves computing the performance measure of interest, so the optimal matrix cannot be computed in precisely those situations where simulation is required to estimate steady-state performance. However, our results show that alternative transition matrices of the form Q = P + E/T, where P is the original transition matrix and T is the sample path length, can be expected to provide good results. Moreover, we provide an iterative algorithm for obtaining alternative transition matrices of this form that converge to the optimal matrix as the number of iterations increases, and present an example that shows that spending some computer time iterating this algorithm and then conducting the simulation with the resulting alternative transition matrix may provide considerable variance reduction when compared to naive simulation.
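The estimator described above can be sketched in a few lines. The following is a minimal Python illustration, not the paper's algorithm: it simulates a path under an alternative transition matrix Q, weights observations by the cumulative likelihood ratio of P against Q, and builds Q = P + E/T from a perturbation matrix E with zero row sums. The function name, the two-state example, and the particular choice of E are illustrative assumptions.

```python
import numpy as np

def is_estimate(P, Q, f, x0, T, rng):
    """Importance-sampling estimate of the steady-state mean of f.

    Simulates a path of length T under the alternative transition
    matrix Q, and weights each observation f(x_t) by the cumulative
    likelihood ratio prod_s P[x_{s-1}, x_s] / Q[x_{s-1}, x_s].
    With Q = P every weight is 1 and this reduces to the naive
    sample-path average under P.
    """
    n = P.shape[0]
    x, lr, total = x0, 1.0, 0.0
    for _ in range(T):
        y = rng.choice(n, p=Q[x])      # next state drawn under Q
        lr *= P[x, y] / Q[x, y]        # update likelihood ratio
        total += lr * f[y]
        x = y
    return total / T

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    P = np.array([[0.9, 0.1],
                  [0.2, 0.8]])         # original chain (illustrative)
    f = np.array([0.0, 1.0])           # performance measure: time in state 1
    T = 1000
    # Perturbation with zero row sums, so Q = P + E/T is stochastic
    # for T large enough that all entries stay non-negative.
    E = np.array([[-0.5, 0.5],
                  [0.5, -0.5]])
    Q = P + E / T
    print(is_estimate(P, Q, f, 0, T, rng))
```

Note that Q must remain a valid transition matrix: because E has zero row sums, the rows of Q = P + E/T still sum to one, and for sufficiently large T all entries are non-negative.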


Type
General Applied Probability
Copyright
Copyright © Applied Probability Trust 1996 


References

Andradóttir, S., Heyman, D. P. and Ott, T. J. (1993) Variance reduction through smoothing and control variates for Markov chain simulations. ACM Trans. Model. Comput. Simul. 3, 167–189.
Andradóttir, S., Heyman, D. P. and Ott, T. J. (1995) On the choice of alternative measures in importance sampling with Markov chains. Operat. Res. 43, 509–519.
Bratley, P., Fox, B. L. and Schrage, L. E. (1987) A Guide to Simulation. 2nd edn. Springer, Berlin.
Bucklew, J. A., Ney, P. and Sadowsky, J. S. (1990) Monte Carlo simulation and large deviations theory for uniformly recurrent Markov chains. J. Appl. Prob. 27, 44–59.
Courtois, P. J. (1977) Decomposability, Queueing and Computer System Applications. Academic Press, New York.
Frater, M. R., Lennon, T. M. and Anderson, B. D. O. (1991) Optimally efficient estimation of the statistics of rare events in queueing networks. IEEE Trans. Automatic Control 36, 1395–1405.
Glynn, P. W. and Iglehart, D. L. (1989) Importance sampling for stochastic simulations. Management Sci. 35, 1367–1392.
Keilson, J. and Wishart, D. M. G. (1964) A central limit theorem for processes defined on a finite Markov chain. Proc. Camb. Phil. Soc. 60, 547–567.
Kemeny, J. G. and Snell, J. L. (1960) Finite Markov Chains. Van Nostrand, Toronto.
Ott, T. J. (1993) A ‘double sequence’ central limit theorem for processes defined on a finite Markov chain. Working paper.
Parekh, S. and Walrand, J. (1989) A quick simulation method for excessive backlogs in networks of queues. IEEE Trans. Automatic Control 34, 54–66.
Sadowsky, J. S. (1991) Large deviations theory and efficient simulation of excessive backlogs in a GI/GI/m queue. IEEE Trans. Automatic Control 36, 1383–1394.
Schweitzer, P. J. (1968) Perturbation theory and finite Markov chains. J. Appl. Prob. 5, 401–413.
Shahabuddin, P., Nicola, V. F., Heidelberger, P., Goyal, A. and Glynn, P. W. (1988) Variance reduction in mean time to failure simulations. Proc. 1988 Winter Simul. Conf. ACM/IEEE, New York.
Stewart, W. J. and Wu, W. (1992) Numerical experiments with iteration and aggregation for Markov chains. ORSA J. Comput. 4, 336–356.