zbMATH — the first resource for mathematics

Latent variable models. (English) Zbl 0948.62043
Jordan, Michael I. (ed.), Learning in graphical models. Proceedings of the NATO ASI, Ettore Majorana Centre, Erice, Italy, September 27 - October 7, 1996. Dordrecht: Kluwer Academic Publishers. NATO ASI Series. Series D. Behavioural and Social Sciences. 89, 371-403 (1998).
Summary: A powerful approach to probabilistic modelling involves supplementing a set of observed variables with additional latent, or hidden, variables. By defining a joint distribution over visible and latent variables, one obtains the corresponding distribution of the observed variables by marginalization. This allows relatively complex distributions to be expressed in terms of more tractable joint distributions over the expanded variable space. One well-known example of a hidden-variable model is the mixture distribution, in which the hidden variable is the discrete component label. In the case of continuous latent variables we obtain models such as factor analysis. The structure of such probabilistic models can be made particularly transparent by giving them a graphical representation, usually in terms of a directed acyclic graph, or Bayesian network.
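The mixture example above can be sketched concretely: marginalizing out a discrete component label k turns a joint p(x, k) into the mixture density p(x) = Σ_k p(k) p(x|k). The weights, means, and variances below are illustrative assumptions, not values from the text.

```python
# Marginalization in a latent variable model: a 1-D Gaussian mixture,
# where the hidden variable is the discrete component label k.
import numpy as np

def gaussian_pdf(x, mean, var):
    """Density of N(mean, var) evaluated at x."""
    return np.exp(-0.5 * (x - mean) ** 2 / var) / np.sqrt(2 * np.pi * var)

def mixture_pdf(x, weights, means, variances):
    """p(x) = sum_k p(k) p(x | k): the component label is marginalized out."""
    return sum(w * gaussian_pdf(x, m, v)
               for w, m, v in zip(weights, means, variances))

weights = [0.3, 0.7]     # p(k); must sum to 1
means = [-2.0, 1.5]      # component means (illustrative)
variances = [1.0, 0.5]   # component variances (illustrative)

# The marginal is a valid density: it integrates to 1 over x.
xs = np.linspace(-10.0, 10.0, 2001)
density = mixture_pdf(xs, weights, means, variances)
total = density.sum() * (xs[1] - xs[0])  # Riemann-sum approximation
print(round(total, 3))
```

The relatively complex (here, bimodal) marginal p(x) is thus expressed via a simple joint over the expanded space of (x, k).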
We provide an overview of latent variable models for representing continuous variables. We show how a particular form of linear latent variable model can be used to provide a probabilistic formulation of the well-known technique of principal components analysis (PCA). By extending this technique to mixtures, and hierarchical mixtures, of probabilistic PCA models we are led to a powerful interactive algorithm for data visualization. We also show how the probabilistic PCA approach can be generalized to nonlinear latent variable models leading to the generative topographic mapping algorithm (GTM). Finally, we show how GTM can itself be extended to model temporal data.
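The probabilistic formulation of PCA mentioned above admits a closed-form maximum-likelihood solution in terms of the eigendecomposition of the sample covariance (Tipping and Bishop's result): the noise variance is the average of the discarded eigenvalues, and the weight matrix is built from the leading eigenvectors. A minimal sketch, with synthetic data and an assumed latent dimension q:

```python
# Sketch of the closed-form ML solution for probabilistic PCA.
# The observation model is x = W z + mu + eps, eps ~ N(0, sigma^2 I),
# so the model covariance is C = W W^T + sigma^2 I.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: n points in d = 5 dimensions with q = 2 dominant directions.
d, q, n = 5, 2, 500
W_true = rng.normal(size=(d, q))
X = rng.normal(size=(n, q)) @ W_true.T + 0.1 * rng.normal(size=(n, d))

# Eigendecomposition of the sample covariance, eigenvalues descending.
S = np.cov(X, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(S)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# ML noise variance: average of the d - q discarded eigenvalues.
sigma2 = eigvals[q:].mean()

# ML weight matrix (determined up to an arbitrary rotation; taken as identity).
W = eigvecs[:, :q] * np.sqrt(eigvals[:q] - sigma2)

# The fitted model covariance matches the total variance of the data exactly.
C = W @ W.T + sigma2 * np.eye(d)
print(np.allclose(np.trace(C), np.trace(S)))
```

Because trace(W Wᵀ) = Σ_{i≤q}(λ_i − σ²) and the noise term contributes dσ², the model covariance reproduces the full trace of S; the mixture and hierarchical-mixture extensions described in the summary fit several such local PPCA models jointly.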
For the entire collection see [Zbl 0889.00024].

62H25 Factor analysis and principal components; correspondence analysis