The ECME algorithm: A simple extension of EM and ECM with faster monotone convergence. (English) Zbl 0812.62028

Summary: A generalisation of the ECM algorithm [X. L. Meng and D. B. Rubin, ibid. 80, No. 2, 267-278 (1993; Zbl 0778.62022)], which is itself an extension of the EM algorithm, can be obtained by replacing some CM-steps of ECM, which maximise the constrained expected complete-data loglikelihood function, with steps that maximise the correspondingly constrained actual likelihood function. This algorithm, which we call the ECME algorithm (for Expectation/Conditional Maximisation Either), shares with both EM and ECM their stable monotone convergence and basic simplicity of implementation relative to competing faster-converging methods. Moreover, ECME can have a substantially faster convergence rate than either EM or ECM, measured using either the number of iterations or actual computer time. There are two reasons for this improvement.
First, in some of ECME's maximisation steps the actual likelihood is being conditionally maximised, rather than a current approximation to it, as with EM and ECM. Second, ECME allows faster-converging numerical methods to be used on only those constrained maximisations where they are most efficacious. Illustrative ECME algorithms are presented with both closed-form and iterative CM-steps, which demonstrate the faster rate of convergence and the associated easier assessment of convergence. Also, theoretical expressions are presented and illustrated regarding the rate of convergence of ECME. Finally, relationships with Markov chain Monte Carlo methods are discussed.
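A standard illustration of ECME in the literature is maximum-likelihood estimation for the t-distribution with unknown degrees of freedom: the location and scale are updated by a CM-step on the expected complete-data loglikelihood (as in EM/ECM), while the degrees of freedom are updated by maximising the actual observed-data likelihood. The following is a minimal univariate sketch, not the authors' implementation; the function names are ours, and a crude grid search stands in for a proper one-dimensional optimiser in the "Either" step.

```python
import math
import random

def t_loglik(x, mu, sigma, nu):
    """Observed-data loglikelihood of a univariate Student-t sample."""
    c = (math.lgamma((nu + 1) / 2) - math.lgamma(nu / 2)
         - 0.5 * math.log(nu * math.pi) - math.log(sigma))
    s = sum(math.log1p(((xi - mu) / sigma) ** 2 / nu) for xi in x)
    return len(x) * c - (nu + 1) / 2 * s

def ecme_t(x, nu0=10.0, iters=20):
    """ECME for the univariate t: (mu, sigma) via a CM-step on the
    expected complete-data loglikelihood, nu via the actual likelihood."""
    n = len(x)
    mu = sum(x) / n
    sigma = math.sqrt(sum((xi - mu) ** 2 for xi in x) / n)
    nu = nu0
    trace = [t_loglik(x, mu, sigma, nu)]
    for _ in range(iters):
        # E-step: conditional expectations of the latent gamma weights
        w = [(nu + 1) / (nu + ((xi - mu) / sigma) ** 2) for xi in x]
        # CM-step 1: maximise the expected complete-data loglikelihood
        # jointly in (mu, sigma), holding nu fixed
        mu = sum(wi * xi for wi, xi in zip(w, x)) / sum(w)
        sigma = math.sqrt(sum(wi * (xi - mu) ** 2
                              for wi, xi in zip(w, x)) / n)
        # CM-step 2 ("Either"): maximise the ACTUAL loglikelihood in nu;
        # including the current nu among the candidates preserves monotonicity
        candidates = [0.5 * k for k in range(1, 101)] + [nu]
        nu = max(candidates, key=lambda v: t_loglik(x, mu, sigma, v))
        trace.append(t_loglik(x, mu, sigma, nu))
    return mu, sigma, nu, trace
```

Because CM-step 1 is a full M-step of the expected complete-data loglikelihood and CM-step 2 maximises the actual loglikelihood directly, each iteration cannot decrease the observed-data loglikelihood, which is the monotone-convergence property the summary refers to.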


62F10 Point estimation
65C99 Probabilistic methods, stochastic differential equations


Zbl 0778.62022