
Ensembling classification models based on phalanxes of variables with applications in drug discovery. (English) Zbl 1454.62408

Summary: Statistical detection of a rare class of objects in a two-class classification problem can pose several challenges. Because the class of interest is rare in the training data, there is relatively little information in the known class response labels for model building. At the same time, the available explanatory variables are often moderately high-dimensional. In the four assays of our drug-discovery application, compounds are active or not against a specific biological target, such as lung cancer tumor cells, and active compounds are rare. Several sets of chemical descriptor variables from computational chemistry are available to classify the active versus inactive class; each set can have up to thousands of variables characterizing the molecular structure of the compounds. The statistical challenge is to make use of the richness of the explanatory variables in the presence of scant response information. Our algorithm divides the explanatory variables into subsets adaptively and passes each subset to a base classifier. The various base classifiers are then ensembled to produce one model to rank new objects by their estimated probabilities of belonging to the rare class of interest. The essence of the algorithm is to choose the subsets such that variables in the same group work well together; we call such groups phalanxes.
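To make the ensembling step concrete, the following Python sketch fits one base classifier per variable group and averages the predicted rare-class probabilities across groups. The paper provides no code, and its main contribution is the adaptive construction of the phalanxes, which is not shown here; the group indices, the random-forest base classifier, and the function name are illustrative assumptions, not the authors' implementation.

    # Minimal sketch of an ensemble of phalanxes: one base classifier per
    # variable group, with rare-class probabilities averaged across groups.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier  # assumed base classifier

    def ensemble_of_phalanxes(X_train, y_train, X_new, phalanxes):
        """phalanxes: list of column-index lists, one per variable group.

        The paper forms these groups adaptively so that variables within a
        phalanx work well together; here they are taken as given.
        """
        scores = []
        for cols in phalanxes:
            clf = RandomForestClassifier(n_estimators=500, random_state=0)
            clf.fit(X_train[:, cols], y_train)
            # Probability of the rare (active) class, assumed coded as 1.
            scores.append(clf.predict_proba(X_new[:, cols])[:, 1])
        # Average over phalanxes; rank new compounds by this score.
        return np.mean(scores, axis=0)

New compounds would then be ranked by the averaged score, prioritizing those most likely to belong to the rare active class.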

MSC:

62P10 Applications of statistics to biology and medical sciences; meta analysis
62H30 Classification and discrimination; cluster analysis (statistical aspects)

References:

[1] Bolton, R. J. and Hand, D. J. (2002). Statistical fraud detection: A review. Statist. Sci. 17 235-249. · Zbl 1013.62115 · doi:10.1214/ss/1042727940
[2] Breiman, L. (1996a). Bagging predictors. Machine Learning 24 123-140. · Zbl 0858.68080
[3] Breiman, L. (1996b). Out-of-bag estimation. Technical report, Dept. Statistics, Univ. California, Berkeley, Berkeley, CA.
[4] Breiman, L. (2001). Random forests. Machine Learning 45 5-32. · Zbl 1007.68152 · doi:10.1023/A:1010933404324
[5] Breiman, L., Friedman, J. H., Olshen, R. A. and Stone, C. J. (1984). Classification and Regression Trees. Chapman & Hall/CRC, Boca Raton, FL. · Zbl 0541.62042
[6] Bruce, C. L., Melville, J. L., Pickett, S. D. and Hirst, J. D. (2007). Contemporary QSAR classifiers compared. J. Chem. Inf. Model. 47 219-227.
[7] Burden, F. R. (1989). Molecular identification number for substructure searches. J. Chem. Inf. Comput. Sci. 29 225-227.
[8] Carhart, R. E., Smith, D. H. and Venkataraghavan, R. (1985). Atom pairs as molecular features in structure-activity studies: Definition and applications. J. Chem. Inf. Comput. Sci. 25 64-73.
[9] Chawla, N. V., Bowyer, K. W., Hall, L. O. and Kegelmeyer, W. P. (2002). SMOTE: Synthetic minority over-sampling technique. J. Artificial Intelligence Res. 16 321-357. · Zbl 0994.68128
[10] Chen, C., Liaw, A. and Breiman, L. (2004). Using random forest to learn imbalanced data. Technical report, Dept. Statistics, Univ. California, Berkeley, Berkeley, CA.
[11] Deng, H. and Runger, G. (2013). Gene selection with guided regularized random forest. Pattern Recognition 46 3483-3489.
[12] Freund, Y. and Schapire, R. E. (1996). Experiments with a new boosting algorithm. In Machine Learning: Proceedings of the Thirteenth International Conference (ICML '96), Bari, Italy, July 3-6, 1996 (L. Saitta, ed.) 148-156. Morgan Kaufmann, San Mateo, CA.
[13] Goodarzi, M., Dejaegher, B. and Vander Heyden, Y. (2012). Feature selection methods in QSAR studies. Journal of AOAC International 95 636-650.
[14] Hastie, T., Tibshirani, R. and Friedman, J. (2009). The Elements of Statistical Learning: Data Mining, Inference, and Prediction, 2nd ed. Springer, New York. · Zbl 1273.62005 · doi:10.1007/978-0-387-84858-7
[15] Hawkins, D. M. and Kass, G. V. (1982). Automatic interaction detection. In Topics in Applied Multivariate Analysis (D. M. Hawkins, ed.) 269-302. Cambridge Univ. Press, Cambridge.
[16] Hughes-Oliver, J. M., Brooks, A. D., Welch, W. J., Khaledi, M. G., Hawkins, D., Young, S. S., Patil, K., Howell, G. W., Ng, R. T. and Chu, M. T. (2012). ChemModLab: A web-based cheminformatics modeling laboratory. In Silico Biology 11 61-81.
[17] Kearsley, S. K., Sallamack, S., Fluder, E. M., Andose, J. D., Mosley, R. T. and Sheridan, R. P. (1996). Chemical similarity using physiochemical property descriptors. J. Chem. Inf. Comput. Sci. 36 118-127.
[18] Liaw, A. and Wiener, M. (2002). Classification and regression by randomForest. R News 2 18-22.
[19] Liu, K., Feng, J. and Young, S. S. (2005). PowerMV: A software environment for molecular viewing, descriptor generation, data analysis and hit evaluation. J. Chem. Inf. Model. 45 515-522.
[20] Meier, L., van de Geer, S. and Bühlmann, P. (2008). The group lasso for logistic regression. J. R. Stat. Soc. Ser. B Stat. Methodol. 70 53-71. · Zbl 1400.62276 · doi:10.1111/j.1467-9868.2007.00627.x
[21] Pearlman, R. S. and Smith, K. M. (1999). Metric validation and the receptor-relevant subspace concept. J. Chem. Inf. Comput. Sci. 39 28-35.
[22] Podder, M., Welch, W. J., Zamar, R. H. and Tebbutt, S. J. (2006). Dynamic variable selection in SNP genotype autocalling from APEX microarray data. BMC Bioinformatics 7 521, 11 pp.
[23] Polishchuk, P. G., Muratov, E. N., Artemenko, A. G., Kolumbin, O. G., Muratov, N. N. and Kuz’min, V. E. (2009). Application of random forest approach to QSAR prediction of aquatic toxicity. J. Chem. Inf. Model. 49 2481-2488.
[24] Rusinko, A. III, Farmen, M. W., Lambert, C. G., Brown, P. L. and Young, S. S. (1999). Analysis of a large structure/biological activity data set using recursive partitioning. J. Chem. Inf. Comput. Sci. 39 1017-1026.
[25] Svetnik, V., Liaw, A., Tong, C., Culberson, J. C., Sheridan, R. P. and Feuston, B. P. (2003). Random forest: A classification and regression tool for compound classification and QSAR modeling. J. Chem. Inf. Comput. Sci. 43 1947-1958.
[26] Tibshirani, R. (1996). Bias, variance, and prediction error for classification rules. Technical report, Dept. Statistics, Univ. Toronto.
[27] Wang, Y. (2005). Statistical methods for high throughput screening drug discovery data. Ph.D. thesis, Dept. Statistics and Actuarial Science, Univ. Waterloo.
[28] Wolpert, D. H. and Macready, W. G. (1999). An efficient method to estimate bagging’s generalization error. Machine Learning 35 41-55. · Zbl 0936.68090 · doi:10.1023/A:1007519102914
[29] Young, S. S. and Hawkins, D. M. (1998). Using recursive partitioning to analyze a large SAR data set. SAR and QSAR in Environmental Research 8 183-193.
[30] Yuan, M. and Lin, Y. (2006). Model selection and estimation in regression with grouped variables. J. R. Stat. Soc. Ser. B Stat. Methodol. 68 49-67. · Zbl 1141.62030 · doi:10.1111/j.1467-9868.2005.00532.x
[31] Zhu, M., Su, W. and Chipman, H. A. (2006). LAGO: A computationally efficient approach for statistical detection. Technometrics 48 193-205. · doi:10.1198/004017005000000643