Chawla, N. V.; Bowyer, K. W.; Hall, L. O.; Kegelmeyer, W. P.
SMOTE: Synthetic minority over-sampling technique. (English) Zbl 0994.68128
J. Artif. Intell. Res. (JAIR) 16, 321-357 (2002).

Summary: An approach to the construction of classifiers from imbalanced datasets is described. A dataset is imbalanced if the classification categories are not approximately equally represented. Real-world datasets are often composed predominantly of "normal" examples with only a small percentage of "abnormal" or "interesting" examples, and the cost of misclassifying an abnormal (interesting) example as a normal example is often much higher than the cost of the reverse error. Under-sampling of the majority (normal) class has been proposed as a good means of increasing the sensitivity of a classifier to the minority class. This paper shows that a combination of our method of over-sampling the minority (abnormal) class and under-sampling the majority (normal) class can achieve better classifier performance (in ROC space) than under-sampling the majority class alone, and better than varying the loss ratios in Ripper or the class priors in Naive Bayes. Our method of over-sampling the minority class involves creating synthetic minority class examples. Experiments are performed using C4.5, Ripper, and a Naive Bayes classifier. The method is evaluated using the area under the Receiver Operating Characteristic curve and the ROC convex hull strategy.

MSC: 68T10 Pattern recognition, speech recognition
Keywords: classifiers; imbalanced datasets
Software: SMOTE; C4.5
Cite: \textit{N. V. Chawla} et al., J. Artif. Intell. Res. (JAIR) 16, 321--357 (2002; Zbl 0994.68128)
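
The core construction behind the synthetic examples is interpolation: each new minority point is placed at a random position on the line segment between a minority example and one of its k nearest minority-class neighbors (the paper uses k = 5). The NumPy sketch below illustrates this step under stated assumptions; the function name, parameters, and the brute-force neighbor search are illustrative, not the authors' implementation.

    import numpy as np

    def smote_sketch(X_min, n_synthetic, k=5, seed=None):
        """Generate n_synthetic points by interpolating minority examples X_min.

        A minimal sketch of SMOTE-style over-sampling: each synthetic point
        lies between a minority example and one of its k nearest minority
        neighbors. Names and the O(n^2) distance computation are illustrative.
        """
        rng = np.random.default_rng(seed)
        n = len(X_min)
        # Pairwise distances among minority examples only.
        d = np.linalg.norm(X_min[:, None, :] - X_min[None, :, :], axis=-1)
        np.fill_diagonal(d, np.inf)                # exclude self as a neighbor
        neighbors = np.argsort(d, axis=1)[:, :k]   # k nearest minority neighbors
        out = np.empty((n_synthetic, X_min.shape[1]))
        for i in range(n_synthetic):
            j = rng.integers(n)                    # pick a minority example
            nb = X_min[rng.choice(neighbors[j])]   # pick one of its k neighbors
            gap = rng.random()                     # uniform position in [0, 1)
            out[i] = X_min[j] + gap * (nb - X_min[j])
        return out

    # Example: double a 20-example minority class in 2-D.
    X_min = np.random.default_rng(0).normal(size=(20, 2))
    X_new = smote_sketch(X_min, n_synthetic=20, k=5, seed=1)

Because the synthetic points are interpolated rather than duplicated, the minority decision region becomes larger and more general instead of tighter around the original examples, which is the effect the summary credits for the improved ROC-space performance.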