
Automated State-Dependent Importance Sampling for Markov Jump Processes via Sampling from the Zero-Variance Distribution

Published online by Cambridge University Press: 30 January 2018

Adam W. Grace*
Affiliation: The University of Queensland
Dirk P. Kroese**
Affiliation: The University of Queensland
Werner Sandmann***
Affiliation: University of Derby

* ** Postal address: School of Mathematics and Physics, The University of Queensland, Brisbane 4072, QLD, Australia.
*** Postal address: School of Computing and Mathematics, University of Derby, Derby DE22 1GB, UK.

Abstract


Many complex systems can be modeled via Markov jump processes. Applications include chemical reactions, population dynamics, and telecommunication networks. Rare-event estimation for such models can be difficult and is often computationally expensive, because typically many (or very long) paths of the Markov jump process need to be simulated in order to observe the rare event. We present a state-dependent importance sampling approach to this problem that is adaptive and uses Markov chain Monte Carlo to sample from the zero-variance importance sampling distribution. The method is applicable to a wide range of Markov jump processes and achieves high accuracy, while requiring only a small sample to obtain the importance parameters. We demonstrate its efficiency through benchmark examples in queueing theory and stochastic chemical kinetics.
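To make the likelihood-ratio mechanics behind importance sampling concrete, the following minimal Python sketch estimates a rare overflow probability for an M/M/1 queue, one of the simplest Markov jump processes. It uses the classical static change of measure that swaps the arrival and service rates (as in standard rare-event simulation texts), not the adaptive state-dependent scheme the paper develops; the function name and parameters are illustrative only.

import math
import random

def mm1_overflow_is(lam=1.0, mu=2.0, K=20, n_runs=100_000, seed=1):
    """Estimate P(M/M/1 queue hits level K before emptying, from state 1)
    by importance sampling under the rate-swapped measure.
    Since the total jump rate lam + mu is the same in every state x >= 1,
    swapping lam and mu leaves the holding times unchanged, so the
    likelihood ratio depends only on the embedded jump chain."""
    rng = random.Random(seed)
    p = lam / (lam + mu)   # up-jump probability, original measure
    q = mu / (lam + mu)    # up-jump probability, tilted measure
    total = 0.0
    for _ in range(n_runs):
        x, logw = 1, 0.0   # current state and log likelihood ratio
        while 0 < x < K:
            if rng.random() < q:       # up-jump, sampled under the tilt
                x += 1
                logw += math.log(p / q)
            else:                      # down-jump
                x -= 1
                logw += math.log((1.0 - p) / (1.0 - q))
        if x == K:                     # rare event observed: add its weight
            total += math.exp(logw)
    return total / n_runs

if __name__ == "__main__":
    est = mm1_overflow_is()
    r = 2.0   # mu / lam
    exact = (1.0 - r) / (1.0 - r ** 20)   # gambler's-ruin formula
    print(f"IS estimate: {est:.3e}, exact: {exact:.3e}")

Under the swapped measure the embedded walk drifts upward (up-jump probability 2/3), so roughly half the runs reach K = 20 and contribute a weight, whereas under the original measure the event occurs with probability about 9.5 × 10⁻⁷ per run, making crude Monte Carlo impractical.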

Type: Research Article
Copyright: © Applied Probability Trust
