Applying support vector machines to imbalanced datasets. (English) Zbl 1132.68523
Boulicaut, J.-F. (ed.) et al., Machine learning: ECML 2004. 15th European conference on machine learning, Pisa, Italy, September 20--24, 2004, Proceedings. Berlin: Springer (ISBN 978-3-540-23105-9/pbk). Lecture Notes in Computer Science 3201. Lecture Notes in Artificial Intelligence, 39-50 (2004).
Summary: Support Vector Machines (SVM) have been extensively studied and have shown remarkable success in many applications. However, the success of SVM is very limited when it is applied to the problem of learning from imbalanced datasets, in which negative instances heavily outnumber the positive instances (e.g. in gene profiling and detecting credit card fraud). This paper discusses the factors behind this failure and explains why the common strategy of undersampling the training data may not be the best choice for SVM. We then propose an algorithm for overcoming these problems which is based on a variant of the SMOTE algorithm by Chawla et al., combined with Veropoulos et al.'s different error costs algorithm. We compare the performance of our algorithm against these two algorithms, along with undersampling and regular SVM, and show that our algorithm outperforms all of them. For the entire collection see [Zbl 1131.68005].
MSC: 68T05 Learning and adaptive systems
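The proposed method combines SMOTE-style oversampling of the minority class with class-dependent misclassification costs in the SVM objective. As a rough illustration of the oversampling half, the sketch below implements the core SMOTE idea (Chawla et al.): new minority samples are synthesized by interpolating between a minority point and one of its k nearest minority-class neighbours. The function name `smote` and its parameters are illustrative, not the authors' code; the different-error-costs half would then be realised by giving the minority class a larger penalty C+ when training the SVM (e.g. via per-class weights in any standard SVM implementation).

```python
import numpy as np

def smote(X_min, n_new, k=5, rng=None):
    """Minimal SMOTE sketch: create n_new synthetic minority samples by
    interpolating between each chosen sample and one of its k nearest
    minority-class neighbours. Illustrative only, not the paper's code."""
    rng = np.random.default_rng(rng)
    n = len(X_min)
    # Pairwise Euclidean distances within the minority class.
    d = np.linalg.norm(X_min[:, None, :] - X_min[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)          # a point is not its own neighbour
    k = min(k, n - 1)
    nn = np.argsort(d, axis=1)[:, :k]    # indices of k nearest neighbours
    base = rng.integers(0, n, size=n_new)                 # base samples
    neigh = nn[base, rng.integers(0, k, size=n_new)]      # chosen neighbours
    gap = rng.random((n_new, 1))         # interpolation factor in [0, 1)
    # Synthetic point lies on the segment between base and neighbour.
    return X_min[base] + gap * (X_min[neigh] - X_min[base])

# Example: synthesize 10 new points from 4 minority samples in the unit square.
X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
X_synth = smote(X, n_new=10, k=2, rng=0)
```

Because each synthetic point is a convex combination of two minority samples, all generated points stay inside the convex hull of the minority class, which is what pushes the learned SVM boundary away from the (oversampled) minority region.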