
Learning mixtures by simplifying kernel density estimators. (English) Zbl 1269.94015
Nielsen, Frank (ed.) et al., Matrix information geometry. Selected papers based on the presentations at the Indo-French workshop on matrix information geometries (MIG): Applications in sensor and cognitive systems engineering, Palaiseau, France, February 23–25, 2011. Berlin: Springer (ISBN 978-3-642-30231-2/hbk; 978-3-642-30232-9/ebook). 403-426 (2013).
Summary: Gaussian mixture models are a widespread tool for modeling a wide variety of complex probability density functions. They can be estimated by various means, often using expectation-maximization or kernel density estimation. In addition to these well-known algorithms, new and promising stochastic modeling methods include Dirichlet process mixtures and \(k\)-maximum likelihood estimators. Most of these methods, including expectation-maximization, lead to compact models but may be expensive to compute. Kernel density estimation, on the other hand, yields large models which are computationally cheap to build. In this chapter we present new methods to obtain high-quality models that are both compact and fast to compute, by simplifying the kernel density estimator. The simplification is a clustering method based on \(k\)-means-like algorithms. Like all \(k\)-means algorithms, our method relies on divergence and centroid computations, and we use two different divergences (and their associated centroids). Along with the description of the algorithms, we describe the pyMEF library, a Python library designed for the manipulation of mixtures of exponential families. Unlike most other existing tools, this library allows one to use any exponential family instead of being limited to a particular distribution. This genericity allows one to rapidly explore the different available exponential families in order to choose the ones best suited for a particular application. We evaluate the proposed algorithms by building mixture models on examples from a bioinformatics application. The quality of the resulting models is measured in terms of log-likelihood and of Kullback-Leibler divergence.
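To make the simplification step concrete, the following is a minimal NumPy sketch, not the pyMEF API (the names simplify_kde and mixture_pdf are hypothetical), restricted to the special case of a one-dimensional Gaussian KDE with a common bandwidth \(h\). In this case \(\mathrm{KL}(N(a,h^2)\,\|\,N(b,h^2)) = (a-b)^2/(2h^2)\), so the divergence-based \(k\)-means reduces to an ordinary Lloyd-style \(k\)-means on the component means, and the KL centroid is the arithmetic mean of the assigned means; the chapter itself treats general exponential families and two different divergences.

    import numpy as np

    def simplify_kde(samples, h, k, n_iter=50, seed=0):
        """Compress an n-component Gaussian KDE (one component per sample,
        common bandwidth h) into a k-component mixture, Lloyd style."""
        samples = np.asarray(samples, dtype=float)
        rng = np.random.default_rng(seed)
        centers = rng.choice(samples, size=k, replace=False).astype(float)
        for _ in range(n_iter):
            # Assignment: nearest centroid in KL divergence, which for
            # equal-variance Gaussians is nearest in squared distance.
            labels = np.argmin((samples[:, None] - centers[None, :]) ** 2, axis=1)
            # Centroid update: the KL centroid of equal-variance Gaussians
            # is the arithmetic mean of their means.
            for j in range(k):
                if np.any(labels == j):
                    centers[j] = samples[labels == j].mean()
        weights = np.bincount(labels, minlength=k) / len(samples)
        return weights, centers  # model: sum_j weights[j] * N(centers[j], h^2)

    def mixture_pdf(x, weights, means, h):
        """Density of the Gaussian mixture sum_j weights[j] * N(means[j], h^2)."""
        x = np.asarray(x, dtype=float)
        z = (x[:, None] - means[None, :]) / h
        return (weights * np.exp(-0.5 * z ** 2)).sum(axis=1) / (h * np.sqrt(2.0 * np.pi))

For instance, a 500-point KDE drawn from two well-separated Gaussians can be compressed to two components with simplify_kde(x, h=0.3, k=2), yielding a model that is evaluated in \(O(k)\) rather than \(O(n)\) per query point, which is the compactness/speed trade-off the summary describes.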
For the entire collection see [Zbl 1252.94003].
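The evaluation criteria mentioned in the summary can be sketched in the same illustrative spirit (again not pyMEF code; this reuses mixture_pdf from the sketch above): the average log-likelihood of held-out data under the simplified model, and a simple Monte Carlo estimate of KL(KDE \(\|\) simplified model) obtained by sampling from the KDE.

    import numpy as np

    def avg_log_likelihood(x, weights, means, h):
        """Mean log-density of held-out points x under a Gaussian mixture."""
        return float(np.mean(np.log(mixture_pdf(x, weights, means, h))))

    def kl_kde_vs_model(samples, weights, means, h, n_mc=10_000, seed=1):
        """Monte Carlo estimate of KL(KDE || simplified mixture)."""
        samples = np.asarray(samples, dtype=float)
        rng = np.random.default_rng(seed)
        # Sample from the KDE: pick a data point, then add N(0, h^2) noise.
        x = rng.choice(samples, size=n_mc) + rng.normal(0.0, h, size=n_mc)
        kde_weights = np.full(len(samples), 1.0 / len(samples))
        p = mixture_pdf(x, kde_weights, samples, h)  # KDE density at x
        q = mixture_pdf(x, weights, means, h)        # simplified-model density
        return float(np.mean(np.log(p) - np.log(q)))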

MSC:
94A12 Signal theory (characterization, reconstruction, filtering, etc.)
62H30 Classification and discrimination; cluster analysis (statistical aspects)
62P30 Applications of statistics in engineering and industry; control charts
92C40 Biochemistry, molecular biology
94A17 Measures of information, entropy
Software:
Mixmod; PyMix; pyMEF