An analytical toast to wine: using stacked generalization to predict wine preference. (English) Zbl 07260692
Summary: Due to the intricacies surrounding taste profiles, one’s view of good wine is subjective. It is therefore advantageous to have a more objective, data-driven way to assess wine preferences. Motivated by a previous study that modeled wine preferences using machine learning algorithms, this work presents an ensemble approach to predict a wine sample’s quality level given its physicochemical properties. Results show the proposed framework outperforms many sophisticated models, including the one recommended by the motivating study. Moreover, the proposed framework offers a simple variable-importance strategy, applied to both simulated and real data, that gives insight into the relevance of the predictor variables. Given the predictive power of ensembles, especially when they are interpretable, practitioners can use this approach to provide an accurate and inferential perspective towards demystifying wine preferences.
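The entry reproduces no code, but the pipeline the summary describes (level-0 learners whose out-of-fold predictions feed a level-1 combiner, in the sense of Wolpert’s stacked generalization [44]) is easy to sketch. Below is a minimal, illustrative Python/scikit-learn version on the UCI wine quality data [26]; the specific base learners (a support vector machine [6] and a random forest [20]), the ridge meta-learner (in the spirit of regularized stacking [31]), and permutation importance as a stand-in for the paper’s variable-importance strategy are assumptions for illustration, not the authors’ exact configuration.

```python
# Illustrative stacked generalization on the UCI red-wine data [26].
# All modeling choices here are assumptions, not the paper's setup.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor, StackingRegressor
from sklearn.inspection import permutation_importance
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

URL = ("https://archive.ics.uci.edu/ml/machine-learning-databases/"
       "wine-quality/winequality-red.csv")
data = pd.read_csv(URL, sep=";")  # 11 physicochemical inputs plus a quality score
X, y = data.drop(columns="quality"), data["quality"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

# Level-0 learners; their out-of-fold predictions (cv=5) become the
# level-1 features, so the ridge combiner never sees in-sample fits.
stack = StackingRegressor(
    estimators=[
        ("svm", make_pipeline(StandardScaler(), SVR())),
        ("rf", RandomForestRegressor(n_estimators=200, random_state=1)),
    ],
    final_estimator=Ridge(),
    cv=5,
).fit(X_tr, y_tr)
print("held-out R^2:", round(stack.score(X_te, y_te), 3))

# Permutation importance as one simple variable-importance strategy:
# shuffle each predictor and measure the drop in held-out score.
imp = permutation_importance(stack, X_te, y_te, n_repeats=10, random_state=1)
for name, mean in sorted(zip(X.columns, imp.importances_mean),
                         key=lambda t: -t[1])[:5]:
    print(f"{name}: {mean:.3f}")
```

The `cv=5` argument is the load-bearing choice: fitting the combiner on in-sample base-model predictions leaks training information, the selection bias discussed in [41].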
MSC:
62 Statistics
68 Computer science
References:
[1] A. Abbasi et al., MetaFraud: A meta-learning framework for detecting financial fraud, MIS Q. 36 (2012), 1293-1327.
[2] R. R. Bouckaert and E. Frank, Evaluating the replicability of significance tests for comparing learning algorithms, in Advances in Knowledge Discovery and Data Mining, Springer, Berlin, Heidelberg, 2004, 3-12.
[3] L. Breiman, Stacked regressions, Mach. Learn. 24 (1996), 49-64. · Zbl 0849.68104
[4] X. Cai, A. Huang, and S. Xu, Fast empirical Bayesian lasso for multiple quantitative trait locus mapping, BMC Bioinform. 12 (2011), 211.
[5] R. Caruana et al., Ensemble selection from libraries of models, in Proceedings of the Twenty-first International Conference on Machine Learning, ACM, New York, NY, 2004, 18-25.
[6] P. Cortez, Data mining with multilayer perceptrons and support vector machines, in Data Mining: Foundations and Intelligent Paradigms, Springer, Berlin, Heidelberg, 2012, 9-25. · Zbl 1231.68195
[7] P. Cortez et al., Modeling wine preferences by data mining from physicochemical properties, Decis. Support Syst. 47 (2009), 547-553.
[8] P. Cortez and M. J. Embrechts, Using sensitivity analysis and visualization techniques to open black box data mining models, Inform. Sci. 225 (2013), 1-17.
[9] J. Duke, Tongues taste for better wine, 2014, available at http://classic.scopeweb.mit.edu/articles/robo-tongues-taste-for-better-wine/.
[10] S. Džeroski and B. Ženko, Is combining classifiers with stacking better than selecting the best one? Mach. Learn. 54 (2004), 255-273. · Zbl 1101.68077
[11] S. E. Ebeler, Linking flavor chemistry to sensory analysis of wine, in Flavor Chemistry, Springer, Boston, MA, 1999, 409-421.
[12] J. H. Friedman, Multivariate adaptive regression splines, Ann. Stat. 19 (1991), 1-67. · Zbl 0765.62064
[13] G. D. Garson, Interpreting neural-network connection weights, AI Expert 6 (1991), 46-51, available at http://dl.acm.org/citation.cfm?id=129449.129452.
[14] A. E. Hoerl and R. W. Kennard, Ridge regression: Biased estimation for nonorthogonal problems, Technometrics 12 (1970), 55-67. · Zbl 0202.17205
[15] G. Hu et al., Classification of wine quality with imbalanced data, in 2016 IEEE International Conference on Industrial Technology (ICIT), IEEE, Taipei, Taiwan, 2016, 1712-1717.
[16] A. Huang and D. Liu, EBglmnet: Empirical Bayesian lasso and elastic net methods for generalized linear models. R package version 4.1, 2016, available at https://CRAN.R-project.org/package=EBglmnet.
[17] A. Huang and D. Liu, EBglmnet vignette, 2016, available at http://cran.fhcrc.org/web/packages/EBglmnet/vignettes/EBglmnet_intro.pdf.
[18] A. Huang, S. Xu, and X. Cai, Empirical Bayesian lasso-logistic regression for multiple binary trait locus mapping, BMC Genet. 14 (2013), 5.
[19] A. Huang, S. Xu, and X. Cai, Empirical Bayesian elastic net for multiple quantitative trait locus mapping, Heredity 114 (2015), 107-115.
[20] S. Janitza, G. Tutz, and A.-L. Boulesteix, Random forest for ordinal responses: Prediction and variable selection, Comput. Stat. Data Anal. 96 (2016), 57-73. · Zbl 06918563
[21] A. Karatzoglou et al., kernlab - An S4 package for kernel methods in R, J. Stat. Softw. 11 (2004), 1-20.
[22] M. Kuhn, Building predictive models in R using the caret package, J. Stat. Softw. 28 (2008), 1-26.
[23] M. Kuhn, J. Wing, S. Weston, A. Williams, C. Keefer, A. Engelhardt, T. Cooper, Z. Mayer, B. Kenkel, the R Core Team, M. Benesty, R. Lescarbeau, A. Ziem, L. Scrucca, Y. Tang, and C. Candan, caret: Classification and Regression Training. R package version 6.0-71, 2016, available at https://cran.r-project.org/web/packages/caret/index.html.
[24] T. Larkin and D. McManus, Impact of analytics and metalearning on predicting geomagnetic storms: Risk to global telecommunications, in Data Analytics 2016, The Fifth International Conference on Data Analytics, S. Bhulai and I. Semanjski, Eds., IARIA, Wilmington, DE, 2016, 8-13.
[25] M. LeBlanc and R. Tibshirani, Combining estimates in regression and classification, J. Amer. Stat. Assoc. 91 (1996), 1641-1650. · Zbl 0881.62046
[26] M. Lichman, UCI machine learning repository, 2013, available at http://archive.ics.uci.edu/ml.
[27] J. Mendes-Moreira et al., Ensemble approaches for regression: A survey, ACM Comput. Surv. (CSUR) 45 (2012), 1-40. · Zbl 1293.68234
[28] C. Petersohn, Temporal video segmentation, Jörg Vogt Verlag, Dresden, Germany, 2010.
[29] E. C. Polley, S. Rose, and M. J. van der Laan, Targeted learning: Causal inference for observational and experimental data, Springer, New York, 2011.
[30] R Core Team, R: A language and environment for statistical computing, R Foundation for Statistical Computing, Vienna, Austria, 2020, available at https://www.R-project.org/.
[31] S. Reid and G. Grudic, Regularized linear models in stacked generalization, in Multiple Classifier Systems, Springer, Berlin, Heidelberg, 2009, 112-121.
[32] W. N. Venables and B. D. Ripley, Modern applied statistics with S, Springer, New York, NY, 2002. · Zbl 1006.62003
[33] G. Santafe, I. Inza, and J. A. Lozano, Dealing with the evaluation of supervised classification algorithms, Artif. Intell. Rev. 44 (2015), 467-508.
[34] P. Schmitt, US to gain 16m wine drinkers by 2025, 2015, available at https://www.thedrinksbusiness.com/2015/11/us-to-get-16m-extra-wine-drinkers-by-2025/.
[35] M. J. Shaw et al., Knowledge management and data mining for marketing, Decis. Support Syst. 31 (2001), 127-137.
[36] J. Sill, G. Takács, L. Mackey, and D. Lin, Feature-weighted linear stacking, arXiv preprint arXiv:0911.0460, 2009.
[37] D. V. Smith and R. F. Margolskee, Making sense of taste, Sci. Amer. 16 (2006), 84-92.
[38] R. Tibshirani, Regression shrinkage and selection via the lasso, J. R. Stat. Soc. Ser. B 58 (1996), 267-288. · Zbl 0850.62538
[39] K. M. Ting and I. H. Witten, Issues in stacked generalization, J. Artif. Intell. Res. 10 (1999), 271-289. · Zbl 0915.68075
[40] C.-F. Tsai and Y.-F. Hsu, A meta-learning framework for bankruptcy prediction, J. Forecast. 32 (2013), 167-179. · Zbl 1397.91012
[41] S. Varma and R. Simon, Bias in error estimation when using cross-validation for model selection, BMC Bioinform. 7 (2006), 91.
[42] R. Vilalta and Y. Drissi, A perspective view and survey of meta-learning, Artif. Intell. Rev. 18 (2002), 77-95.
[43] I. H. Witten and E. Frank, Data mining: Practical machine learning tools and techniques, Morgan Kaufmann, San Francisco, CA, 2005. · Zbl 1076.68555
[44] D. H. Wolpert, Stacked generalization, Neural Netw. 5 (1992), 241-259.
[45] A. Woodie, Outsmarting wine snobs with machine learning, 2015, available at https://www.datanami.com/2015/02/20/outsmarting-wine-snobs-with-machine-learning/.
This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible without claiming the completeness or perfect precision of the matching.