
A hierarchical Bayesian model for learning nonlinear statistical regularities in nonstationary natural signals. (English) Zbl 1092.93614

Summary: Capturing statistical regularities in complex, high-dimensional data is an important problem in machine learning and signal processing. Models such as principal component analysis (PCA) and independent component analysis (ICA) make few assumptions about the structure in the data and have good scaling properties, but they are limited to representing linear statistical regularities and assume that the distribution of the data is stationary. For many natural, complex signals, the latent variables often exhibit residual dependencies as well as nonstationary statistics. Here we present a hierarchical Bayesian model that is able to capture higher-order nonlinear structure and represent nonstationary data distributions. The model is a generalization of ICA in which the basis function coefficients are no longer assumed to be independent; instead, the dependencies in their magnitudes are captured by a set of density components. Each density component describes a common pattern of deviation from the marginal density of the pattern ensemble; in different combinations, they can describe nonstationary distributions. Adapting the model to image or audio data yields a nonlinear, distributed code for higher-order statistical regularities that reflect more abstract, invariant properties of the signal.
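The two-layer generative structure described in the summary can be made concrete with a short sketch. The Python snippet below is a minimal illustration, not the authors' implementation: it assumes a linear basis A (as in ICA), sparse Laplacian priors on both layers, and an exponential link from density-component activations to coefficient scales; the dimensions, priors, and variable names are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)

    D, K, M = 64, 64, 10              # observed dims, basis functions, density components (assumed)

    A = rng.standard_normal((D, K))   # linear basis, as in standard ICA
    B = rng.standard_normal((K, M))   # density components: each column is a pattern of
                                      # deviation in the coefficients' log-scales

    def sample_signal(v=None):
        """Draw one sample from the hierarchical model (illustrative)."""
        if v is None:
            v = rng.laplace(size=M)    # sparse higher-order causes (assumed prior)
        lam = np.exp(B @ v)            # per-coefficient scales; different combinations of
                                       # density components yield different (nonstationary)
                                       # coefficient distributions
        s = lam * rng.laplace(size=K)  # coefficients are no longer independent: their
                                       # magnitudes covary through the shared v
        x = A @ s                      # observed signal, linear in the coefficients
        return x, s, v

    x, s, v = sample_signal()

Under these assumptions, learning would amount to adapting A and B to data such as image patches by maximizing the likelihood under the two-layer prior; the fitted density components then encode the higher-order, nonstationary regularities the summary describes.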

MSC:

93E35 Stochastic learning and adaptive control
94A12 Signal theory (characterization, reconstruction, filtering, etc.)

Software:

Bubbles
