
On convergence properties of the Monte Carlo EM algorithm. (English) Zbl 1329.62287

Jones, Galin (ed.) et al., Advances in modern statistical theory and applications. A Festschrift in honor of Morris L. Eaton. Beachwood, OH: IMS, Institute of Mathematical Statistics (ISBN 978-0-940600-84-3). Institute of Mathematical Statistics Collections 10, 43-62 (2013).
Summary: The Expectation-Maximization (EM) algorithm is a popular method for computing maximum likelihood estimates (MLEs) in problems with missing data. Each iteration of the algorithm formally consists of an E-step: evaluate the expected complete-data log-likelihood given the observed data, with the expectation taken at the current parameter estimate; and an M-step: maximize the resulting expression to find the updated estimate. Conditions that guarantee convergence of the EM sequence to a unique MLE were found by Boyles and Wu in 1983. In complicated models for high-dimensional data, it is common to encounter an intractable integral in the E-step. The Monte Carlo EM algorithm of Wei and Tanner works around this difficulty by instead maximizing a Monte Carlo approximation to the appropriate conditional expectation. Convergence properties of Monte Carlo EM have been studied most notably by Chan and Ledolter in 1995 and by Fort and Moulines in 2003.
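The paper's own worked examples are not reproduced here; as a purely illustrative sketch of the two algorithms, consider a two-component Gaussian mixture with known unit variances, where the component labels are the missing data. The model, variable names, and the growing Monte Carlo sample-size schedule below are assumptions of this sketch, not taken from the paper.

# Illustrative sketch only (not from the paper): EM and Monte Carlo EM for a
# two-component Gaussian mixture with known unit variances; the latent
# component labels play the role of the missing data.
import numpy as np

rng = np.random.default_rng(0)

# Simulated observed data from the mixture 0.4 * N(-2, 1) + 0.6 * N(3, 1).
n = 500
z_true = rng.random(n) < 0.6
y = np.where(z_true, rng.normal(3.0, 1.0, n), rng.normal(-2.0, 1.0, n))

def e_step(y, pi, mu0, mu1):
    # Exact E-step: posterior probability that each observation comes from component 1.
    d0 = (1 - pi) * np.exp(-0.5 * (y - mu0) ** 2)
    d1 = pi * np.exp(-0.5 * (y - mu1) ** 2)
    return d1 / (d0 + d1)

def m_step(y, w):
    # M-step: maximize the expected complete-data log-likelihood given the weights w.
    pi = w.mean()
    mu0 = np.sum((1 - w) * y) / np.sum(1 - w)
    mu1 = np.sum(w * y) / np.sum(w)
    return pi, mu0, mu1

# Ordinary EM: alternate the exact E-step and the M-step.
pi, mu0, mu1 = 0.5, -1.0, 1.0
for _ in range(50):
    w = e_step(y, pi, mu0, mu1)
    pi, mu0, mu1 = m_step(y, w)
print("EM:  ", round(pi, 3), round(mu0, 3), round(mu1, 3))

# Monte Carlo EM: replace the exact conditional expectation with an average over
# m simulated completions of the missing labels, drawn from their conditional law.
pi, mu0, mu1 = 0.5, -1.0, 1.0
for t in range(50):
    p = e_step(y, pi, mu0, mu1)      # conditional label probabilities
    m = 100 + 20 * t                 # growing Monte Carlo sample size (an assumed schedule)
    z = rng.random((m, n)) < p       # m draws of the latent labels
    w = z.mean(axis=0)               # Monte Carlo estimate of E[z | y]
    pi, mu0, mu1 = m_step(y, w)
print("MCEM:", round(pi, 3), round(mu0, 3), round(mu1, 3))

Because the expected complete-data log-likelihood is linear in the labels here, averaging the simulated labels and then maximizing is exactly the MCEM update; the increasing Monte Carlo sample size nods to the sample-size issue discussed below.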
The goal of this review paper is to provide an accessible but rigorous introduction to the convergence properties of EM and Monte Carlo EM. No previous knowledge of the EM algorithm is assumed. We demonstrate the implementation of EM and Monte Carlo EM in two simple but realistic examples. We show that if the EM algorithm converges, it converges to a stationary point of the likelihood, and that the rate of convergence is linear at best. For Monte Carlo EM we present a readable proof of the main result of K. S. Chan and J. Ledolter [J. Am. Stat. Assoc. 90, No. 429, 242–252 (1995; Zbl 0819.62069)], and state without proof the conclusions of G. Fort and E. Moulines [Ann. Stat. 31, No. 4, 1220–1259 (2003; Zbl 1043.62015)]. An important practical implication of Fort and Moulines’s result concerns the determination of Monte Carlo sample sizes in MCEM; we provide a brief review of the literature on that problem [J. G. Booth and J. P. Hobert, J. R. Stat. Soc., Ser. B, Stat. Methodol. 61, No. 1, 265–285 (1999; Zbl 0917.62058); B. S. Caffo et al., J. R. Stat. Soc., Ser. B, Stat. Methodol. 67, No. 2, 235–251 (2005; Zbl 1075.65011)].
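For orientation, the linear-rate claim is usually expressed as follows (a standard textbook formulation, not quoted from the paper): writing the EM update as $\theta^{(t+1)} = M(\theta^{(t)})$ and linearizing $M$ around a fixed point $\theta^*$,
\[
\theta^{(t+1)} - \theta^* \approx DM(\theta^*)\,\bigl(\theta^{(t)} - \theta^*\bigr),
\qquad
DM(\theta^*) = I_{\mathrm{mis}}(\theta^*)\, I_{\mathrm{com}}(\theta^*)^{-1},
\]
where $I_{\mathrm{com}}$ and $I_{\mathrm{mis}}$ denote the complete- and missing-information matrices; convergence is therefore linear, with rate governed by the largest eigenvalue of $DM(\theta^*)$, the fraction of missing information.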
For the entire collection see [Zbl 1319.62004].

MSC:

62H30 Classification and discrimination; cluster analysis (statistical aspects)
62F10 Point estimation
65C05 Monte Carlo methods
65C60 Computational problems in statistics (MSC2010)
62-02 Research exposition (monographs, survey articles) pertaining to statistics