Mixtures of regressions with predictor-dependent mixing proportions. (English) Zbl 1284.62467

Summary: We extend the standard mixture of linear regressions model by allowing the mixing proportions to be modeled nonparametrically as functions of the predictors. This framework permits more flexible modeling of the mixing proportions than the fully parametric mixture-of-experts model, which we also discuss. We present an EM-like algorithm for estimating the new model. Simulations demonstrate that the nonparametric approach can provide a better fit than the parametric approach in some instances and can validate, and thus reinforce, the parametric approach in others. Finally, we analyze and interpret two real data sets using the new method.
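The EM-like algorithm mentioned in the summary alternates two steps: weighted least squares fits of the component regressions given the current posterior responsibilities, and a nonparametric update of the predictor-dependent mixing proportions obtained by kernel-smoothing those responsibilities. Below is a minimal Python sketch of such an iteration, assuming a two-component model with a single predictor, normal errors, and a Gaussian kernel with a fixed bandwidth; the function em_mixreg_np, its defaults, and the initialization are illustrative assumptions, not the authors' implementation.

import numpy as np

def em_mixreg_np(x, y, n_components=2, bandwidth=0.5, n_iter=100, seed=0):
    # EM-like estimation of a mixture of linear regressions whose mixing
    # proportions vary smoothly with the predictor x (illustrative sketch).
    rng = np.random.default_rng(seed)
    n = len(x)
    X = np.column_stack([np.ones(n), x])   # design matrix with intercept
    # Random initial responsibilities; serious use needs careful initialization.
    r = rng.dirichlet(np.ones(n_components), size=n)
    betas = np.zeros((n_components, 2))
    sigmas = np.ones(n_components)
    # Pairwise Gaussian kernel weights, reused to smooth the responsibilities.
    K = np.exp(-0.5 * ((x[:, None] - x[None, :]) / bandwidth) ** 2)
    for _ in range(n_iter):
        for k in range(n_components):
            # M-step: weighted least squares and weighted residual scale.
            w = r[:, k]
            betas[k] = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
            resid = y - X @ betas[k]
            sigmas[k] = np.sqrt(np.sum(w * resid ** 2) / np.sum(w))
        # Nonparametric mixing proportions: a Nadaraya-Watson-type kernel
        # average of the responsibilities, evaluated at each observed x.
        pi = (K @ r) / K.sum(axis=1, keepdims=True)
        # E-step: posterior responsibilities from the component normal
        # densities (the 1/sqrt(2*pi) constant cancels in the normalization).
        dens = np.empty((n, n_components))
        for k in range(n_components):
            resid = y - X @ betas[k]
            dens[:, k] = pi[:, k] / sigmas[k] * np.exp(-0.5 * (resid / sigmas[k]) ** 2)
        r = dens / dens.sum(axis=1, keepdims=True)
    return betas, sigmas, pi

A toy run on data where the first component's prevalence grows linearly in x:

rng = np.random.default_rng(1)
x = rng.uniform(0, 1, 400)
z = rng.uniform(0, 1, 400) < x                      # P(component 1 | x) = x
y = np.where(z, 1 + 2 * x, 4 - 3 * x) + 0.2 * rng.normal(size=400)
betas, sigmas, pi = em_mixreg_np(x, y, bandwidth=0.2)

Bandwidth selection and initialization, both of which matter in practice, are deliberately left naive in this sketch.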

MSC:

62J12 Generalized linear models (logistic models)
