Classification with asymmetric label noise: consistency and maximal denoising. (English) Zbl 1347.62106

Electron. J. Stat. 10, No. 2, 2780-2824 (2016); corrigendum ibid. 12, No. 1, 1779-1781 (2018).
Summary: In many real-world classification problems, the labels of training examples are randomly corrupted. Most previous theoretical work on classification with label noise assumes that the two classes are separable, that the label noise is independent of the true class label, or that the noise proportions for each class are known. In this work, we give conditions that are necessary and sufficient for the true class-conditional distributions to be identifiable. These conditions are weaker than those analyzed previously, and allow for the classes to be nonseparable and the noise levels to be asymmetric and unknown. The conditions essentially state that a majority of the observed labels are correct and that the true class-conditional distributions are “mutually irreducible”, a concept we introduce that limits the similarity of the two distributions. For any label noise problem, there is a unique pair of true class-conditional distributions satisfying the proposed conditions, and we argue that this pair corresponds in a certain sense to maximal denoising of the observed distributions.
Our results are facilitated by a connection to “mixture proportion estimation”, which is the problem of estimating the maximal proportion of one distribution that is present in another. We establish a novel rate of convergence result for mixture proportion estimation, and apply this to obtain consistency of a discrimination rule based on surrogate loss minimization. Experimental results on benchmark data and a nuclear particle classification problem demonstrate the efficacy of our approach.
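The "mixture proportion estimation" problem described above can be illustrated with a simple plug-in estimator. The sketch below is not the estimator analyzed in the paper; the Gaussian mixture setup, the histogram binning, and the count threshold are all assumptions made for this example.

```python
import numpy as np

# Illustrative setup (an assumption, not from the paper): the contaminated
# distribution F = (1 - kappa) * G + kappa * H, with G and H Gaussian.
rng = np.random.default_rng(0)
kappa_true = 0.3
n = 200_000

h = rng.normal(0.0, 1.0, n)         # sample from H
mask = rng.random(n) < kappa_true   # which draws of F come from H
f = np.where(mask, rng.normal(0.0, 1.0, n), rng.normal(4.0, 1.0, n))

# Plug-in estimate of kappa*(F | H) = inf_S F(S) / H(S), taking the
# infimum over histogram bins instead of over all measurable sets S.
edges = np.linspace(-5.0, 9.0, 60)
f_counts, _ = np.histogram(f, bins=edges)
h_counts, _ = np.histogram(h, bins=edges)
valid = h_counts > 2000             # skip bins where the ratio is too noisy
kappa_hat = np.min(f_counts[valid] / h_counts[valid])
# kappa_hat should be close to kappa_true, with a small downward bias
# from taking a minimum over noisy empirical ratios.
```

Mutual irreducibility of the class-conditional distributions is what makes this infimum identify the true mixing proportion rather than underestimate it.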


62H30 Classification and discrimination; cluster analysis (statistical aspects)
68T10 Pattern recognition, speech recognition


[1] J. M. Adams and G. White. A versatile pulse shape discriminator for charged particle separation and its application to fast neutron time-of-flight spectroscopy., Nuclear Instruments and Methods in Physics Research , 1978.
[2] D. Aldous and P. Diaconis. Strong uniform times and finite random walks., Adv. Appl. Math. , 8(1):69-97, 1987. · Zbl 0631.60065
[3] S. Ambers, M. Flaska, and S. Pozzi. A hybrid pulse shape discrimination technique with enhanced performance at neutron energies below 500 keV., Nuclear Instruments and Methods in Physics Research A , 638:116-121, 2011.
[4] D. Angluin and P. Laird. Learning from noisy examples., Machine Learning , 2:343-370, 1988.
[5] J. Aslam and S. Decatur. On the sample complexity of noise-tolerant learning., Inf. Process. Lett. , 57:189-195, 1996.
[6] P. Bartlett, M. Jordan, and J. McAuliffe. Convexity, classification, and risk bounds., J. American Statistical Association , 101(473):138-156, 2006. · Zbl 1118.62330
[7] G. Blanchard, G. Lee, and C. Scott. Semi-supervised novelty detection., Journal of Machine Learning Research , 11:2973-3009, 2010. · Zbl 1242.68205
[8] A. Blum and T. Mitchell. Combining labeled and unlabeled data with co-training. In, Proceedings of the 11th Annual Conference on Computational Learning Theory , pages 92-100, 1998.
[9] C. Bouveyron and S. Girard. Robust supervised classification with mixture models: Learning from data with uncertain labels., Pattern Recognition , 42:2649-2658, 2009. · Zbl 1175.68313
[10] C. Brodley and M. Friedl. Identifying mislabeled training data., Journal of Artificial Intelligence Research , 131-167, 1999. · Zbl 0924.68158
[11] N. H. Bshouty, S. A. Goldman, H. D. Mathias, S. Suri, and H. Tamaki. Noise-tolerant distribution-free learning of general geometric concepts., J. ACM , 45(5):863-890, 1998. · Zbl 1065.68598
[12] A. Buja, W. Stuetzle, and Y. Shen. Loss functions for binary class probability estimation and classification: Structure and applications, manuscript, available at www-stat.wharton.upenn.edu/~buja, 2005.
[13] N. Cesa-Bianchi, P. Fischer, E. Shamir, and H.-U. Simon. Randomized hypotheses and minimum disagreement hypotheses for learning with noise. In, Proc. Third European Conf. on Computational Learning Theory , pages 119-133, 1997.
[14] V. Denchev, N. Ding, S. V. N. Vishwanathan, and H. Neven. Robust classification with adiabatic quantum optimization. In J. Langford and J. Pineau, editors, Proc. 29th Int. Conf. on Machine Learning , pages 863-870, 2012.
[15] L. Devroye, L. Györfi, and G. Lugosi., A Probabilistic Theory of Pattern Recognition . Springer, 1996. · Zbl 0853.68150
[16] N. Ding and S. V. N. Vishwanathan. \(t\)-logistic regression. In J. Lafferty, C. K. I. Williams, J. Shawe-Taylor, R.S. Zemel, and A. Culotta, editors, Advances in Neural Information Processing Systems 23 , pages 514-522. 2010.
[17] B. Frénay and M. Verleysen. Classification in the presence of label noise: A survey., IEEE Trans. Neural Networks and Learning Systems , 25:845-869, 2014.
[18] S. Jabbari. PAC-learning with label noise. Master’s thesis, University of Alberta, December, 2010.
[19] A. Kalai and R. Servedio. Boosting in the presence of noise., Symposium on Theory of Computing , pages 196-205, 2003. · Zbl 1192.68526
[20] M. Kearns. Efficient noise-tolerant learning from statistical queries., Proceedings of the Twenty-Fifth Annual ACM Symposium on Theory of Computing , pages 392-401, 1993. · Zbl 1310.68179
[21] O. Koyejo, N. Natarajan, P. Ravikumar, and I. Dhillon. Consistent binary classification with generalized performance metrics. In Z. Ghahramani, M. Welling, C. Cortes, N.D. Lawrence, and K.Q. Weinberger, editors, Advances in Neural Information Processing Systems 27 , pages 2744-2752, 2014.
[22] J. Langford. Tutorial on practical prediction theory for classification., J. Machine Learning Research , 6:273-306, 2005. · Zbl 1222.68243
[23] N. Lawrence and B. Schölkopf. Estimating a kernel Fisher discriminant in the presence of label noise., Proceedings of the International Conference in Machine Learning , 2001.
[24] E. Lehmann., Testing Statistical Hypotheses . Wiley, New York, 1986. · Zbl 0608.62020
[25] T. Liu and D. Tao. Classification with noisy labels by importance reweighting., IEEE Transactions on Pattern Analysis and Machine Intelligence , 38(3):447-461, 2016.
[26] P. Long and R. Servedio. Random classification noise defeats all convex potential boosters., Machine Learning , 78:287-304, 2010. · Zbl 1470.68139
[27] N. Manwani and P. S. Sastry. Noise tolerance under risk minimization., IEEE Trans. on Cybernetics , 43(3):1146-1151, 2013.
[28] H. Masnadi-Shirazi and N. Vasconcelos. On the design of loss functions for classification: theory, robustness to outliers, and savageboost. In D. Koller, D. Schuurmans, Y. Bengio, and L. Bottou, editors, Advances in Neural Information Processing Systems 21 , pages 1049-1056. 2009.
[29] L. Mason, J. Baxter, P. Bartlett, and M. Frean. Boosting algorithms as gradient descent. In, Advances in Neural Information Processing Systems 12 , pages 512-518. MIT Press, 2000.
[30] A. Menon, B. Van Rooyen, C. S. Ong, and R. Williamson. Learning from corrupted binary labels via class-probability estimation. In F. Bach and D. Blei, editors, Proc. 32nd Int. Conf. Machine Learning (ICML) , Lille, France, 2015.
[31] M. Mohri, A. Rostamizadeh, and A. Talwalkar., Foundations of Machine Learning . MIT Press, 2012. · Zbl 1318.68003
[32] N. Natarajan, I. S. Dhillon, P. Ravikumar, and A. Tewari. Learning with noisy labels. In, Advances in Neural Information Processing Systems 26 , 2013.
[33] W. Peterson, T. Birdsall, and W. Fox. The theory of signal detectability., Trans. Inst. Radio Engrs., Professional Group on Information Theory , 4(4):171-212, 1954.
[34] U. Rebbapragada and C. Brodley. Class noise mitigation through instance weighting., European Conference on Machine Learning , pages 708-715, 2007.
[35] M. D. Reid and R. C. Williamson. Composite binary losses., J. Machine Learning Research , 11:2387-2422, 2010. · Zbl 1242.62058
[36] S. Sabato and N. Tishby. Multi-instance learning with any hypothesis class., J. Machine Learning Research , 13:2999-3039, 2012. · Zbl 1433.68376
[37] L. L. Scharf., Statistical Signal Processing. Detection, Estimation, and Time Series Analysis . Addison-Wesley, Reading, MA, 1991. · Zbl 1130.62303
[38] C. Scott. Calibrated asymmetric surrogate losses., Electronic Journal of Statistics , 6:958-992, 2012. · Zbl 1335.62108
[39] C. Scott. Notes on weakly supervised learning, 2014. URL, web.eecs.umich.edu/~cscott/wsl.pdf.
[40] C. Scott. A rate of convergence for mixture proportion estimation, with application to learning from noisy labels. In, Proceedings of the 18th International Conference on Artificial Intelligence and Statistics (AISTATS) , 2015.
[41] C. Scott, G. Blanchard, and G. Handy. Classification with asymmetric label noise: Consistency and maximal denoising. In, Proc. Conf. on Learning Theory, JMLR W&CP , volume 30, pages 489-511. 2013a. · Zbl 1347.62106
[42] C. Scott, G. Blanchard, G. Handy, S. Pozzi, and M. Flaska. Classification with asymmetric label noise: Consistency and maximal denoising. Technical Report, 2013b. · Zbl 1347.62106
[43] I. Steinwart and A. Christmann., Support Vector Machines . Springer, 2008. · Zbl 1203.68171
[44] G. Stempfel and L. Ralaivola. Learning SVMs from sloppily labeled data. In, Proc. 19th Int. Conf. on Artificial Neural Networks: Part I , pages 884-893, 2009.
[45] L. Xu, K. Crammer, and D. Schuurmans. Robust support vector machine training via convex outlier ablation., Proceedings of the 21st National Conference on Artificial Intelligence (AAAI) , 2006.
[46] T. Yang, M. Mahdavi, R. Jin, L. Zhang, and Y. Zhou. Multiple kernel learning from noisy labels by stochastic programming. In J. Langford and J. Pineau, editors, Proceedings of the 29th International Conference on Machine Learning (ICML-12) , pages 233-240, New York, NY, USA, 2012. ACM.