A sequential Monte Carlo approach to computing tail probabilities in stochastic models. (English) Zbl 1246.60042

In complex stochastic models, it is often difficult to evaluate probabilities of events. In cases where an analytical approach is impossible, Monte Carlo methods provide a practical alternative.
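To make the difficulty concrete, the following is a minimal sketch (an illustration by the reviewer, not taken from the paper; the Gaussian random walk, the threshold, and the function name `direct_mc` are assumptions) of the direct Monte Carlo approach to a tail probability. Since each sample hits the event with probability \(\alpha\) itself, the sample size must grow like \(1/\alpha\) to control the relative error, which is what makes rare events expensive for plain Monte Carlo:

```python
import random

def direct_mc(n_samples, threshold, n_steps=10, seed=0):
    """Crude Monte Carlo estimate of P(S_n >= threshold) for a
    standard Gaussian random walk S_n = Y_1 + ... + Y_n."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_samples):
        s = sum(rng.gauss(0.0, 1.0) for _ in range(n_steps))
        if s >= threshold:
            hits += 1
    return hits / n_samples

# Relative variance of the estimate is (1 - alpha)/(m * alpha),
# so m must be of order 1/alpha for a useful answer.
est = direct_mc(100_000, threshold=10.0)
```

Here the event \(\{S_{10} \geq 10\}\) has probability of order \(10^{-3}\), so roughly a hundred thousand samples are needed just to see it a few dozen times.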
In the present paper, the authors introduce a sequential importance sampling and resampling (SISR) procedure to attain a weaker form of asymptotic optimality, namely, logarithmic efficiency. The SISR procedure to compute probabilities of rare events is closely related to interacting particle systems and dynamic importance sampling methods. By making use of martingale representations of the sequential Monte Carlo estimators, it is shown how resampling weights can be chosen to yield logarithmically efficient Monte Carlo estimates of large deviation probabilities for multidimensional Markov random walks.
The following is a short description of the contents of the paper.
In the introduction, the problem of calculating the probabilities of rare events is presented, and a brief review of stochastic methods for solving it is given.
In section 2, random variables \(Y_{1}, Y_{2}, \dots, Y_{n}\) on a probability space \((\Omega, {\mathcal F},P)\), and the \(\sigma\)-fields they generate, are considered. For the computation of the probability \(\alpha = P(Y_{n} \in \Gamma)\), a direct Monte Carlo method based on i.i.d. random vectors is proposed; it yields the quantity \(\hat{\alpha}_{D}\) as an estimate of \(\alpha\). An SISR procedure is then introduced to construct the quantity \(\hat{\alpha}_{B}\) as a further Monte Carlo estimate of \(\alpha\); here, an important role is played by the so-called resampling weights. The martingale decomposition of \(\hat{\alpha}_{B} - \alpha\) is obtained and used to estimate the standard error of \(\hat{\alpha}_{B}\). In addition, a third estimator \(\hat{\alpha}_{R}\) of \(\alpha\) is constructed, and the martingale representation of \(\hat{\alpha}_{R} - \alpha\) is given.
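The flavor of such an SISR estimate can be sketched as follows. This is an illustrative toy, not the authors' exact procedure: the model (a standard Gaussian walk), the choice of resampling weights \(e^{\theta y}\) with tilt \(\theta \approx b/n\), and all names are assumptions mirroring textbook exponential tilting. Particles evolve under the original law, resampling with these weights steers survivors toward the rare event, and each particle's accumulated weight is divided out at the end, which keeps the estimator unbiased:

```python
import math
import random

def sisr_tail_prob(m, n, b, theta, seed=0):
    """Toy SISR estimate of alpha = P(S_n >= b) for a standard
    Gaussian random walk, using resampling weights exp(theta * y)."""
    rng = random.Random(seed)
    particles = [(0.0, 0.0)] * m   # (partial sum S, accumulated log-weight)
    log_norm = 0.0                 # log of the product of mean resampling weights
    for t in range(n):
        moved, ws = [], []
        for s, lg in particles:
            y = rng.gauss(0.0, 1.0)        # propagate under the original law
            if t < n - 1:                  # resampling weight applied at this step
                moved.append((s + y, lg + theta * y))
                ws.append(math.exp(theta * y))
            else:                          # no resampling after the final step
                moved.append((s + y, lg))
                ws.append(1.0)
        if t < n - 1:
            log_norm += math.log(sum(ws) / m)
            # multinomial resampling proportional to the weights
            moved = rng.choices(moved, weights=ws, k=m)
        particles = moved
    # divide out each survivor's accumulated weight to remove the bias
    return (math.exp(log_norm) / m) * sum(
        math.exp(-lg) for s, lg in particles if s >= b)

est = sisr_tail_prob(m=2000, n=10, b=10.0, theta=1.0)
```

With \(m = 1\) the correction collapses to the plain indicator \(1\{S_n \geq b\}\), which makes the unbiasedness easy to check by hand; the interesting content of the paper is precisely how to choose the resampling weights so that the variance, not just the mean, behaves well.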
In section 3, the logarithmic efficiency of SISR for Monte Carlo computation of small tail probabilities is studied. The notion of asymptotic optimality of an importance sampling measure is recalled. The resampling weights for the SISR estimate \(\hat{\alpha}_{B}\) are chosen, and two conditions, in terms of exceedance probabilities, which yield logarithmic efficiency are presented. In example 1, a concrete realization of the SISR procedure and its logarithmic efficiency are developed. In subsection 3.1, a heuristic principle for efficient SISR procedures is studied. In theorem 1, it is shown that the estimates \(\hat{\alpha}_{B}\) and \(\hat{\alpha}_{R}\) of \(\alpha\) are logarithmically efficient. The heuristic principle is used in example 2 to construct a logarithmically efficient SISR procedure. In theorem 2, resampling weights for logarithmically efficient simulation of \(\hat{\alpha}_{B}\) and \(\hat{\alpha}_{R}\) are provided. The SISR procedure is extended to Markov additive processes in subsection 3.2. Example 3 extends example 1 to Markov additive processes, and theorem 3 extends theorems 1 and 2 to this setting. In theorem 4, under a given assumption, a choice of resampling weights that provides logarithmic efficiency is presented. In subsection 3.3, it is discussed how the basic idea of examples 1 and 2 can be extended to more general rare events and more general stochastic sequences.
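For orientation, the two efficiency notions contrasted here can be stated as follows (standard definitions from the rare-event simulation literature, paraphrased rather than quoted from the paper): for an unbiased estimator \(\hat{\alpha}\) of a vanishing probability \(\alpha\),

```latex
% bounded relative error (the stronger, often unattainable notion):
\limsup_{\alpha \to 0} \frac{\mathbb{E}\,\hat{\alpha}^{2}}{\alpha^{2}} < \infty ,
\qquad
% logarithmic efficiency (the weaker notion attained in this paper):
\lim_{\alpha \to 0} \frac{\log \mathbb{E}\,\hat{\alpha}^{2}}{\log \alpha} = 2 .
```

Since \(\mathbb{E}\,\hat{\alpha}^{2} \geq \alpha^{2}\) always holds, logarithmic efficiency says the second moment decays at the best possible exponential rate, even if subexponential factors are lost.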
In section 4, two examples are used to illustrate theorems 1 and 4.
The paper finishes with an appendix, in which some useful equalities are proved.


60F10 Large deviations
65C05 Monte Carlo methods
60J22 Computational methods in Markov chains
60K35 Interacting random processes; statistical mechanics type models; percolation theory
65C20 Probabilistic models, generic numerical methods in probability and statistics

