## Optimal reduction of solutions for support vector machines.
*(English)*
Zbl 1179.68123

Summary: Although the Support Vector Machine (SVM) is a universal learning machine, its computational cost in the test phase is high when the number of support vectors is large, which greatly limits its practical use. To address this problem, we propose an adaptive genetic algorithm that optimally reduces the solution of an SVM by selecting vectors from the trained support vector solution, such that the selected vectors best approximate the original discriminant function. The method can be applied to SVMs with any general kernel. The size of the reduced set can be chosen adaptively according to the requirements of the task, so the generalization/complexity trade-off can be controlled directly. A lower bound on the number of selected vectors required to recover the original discriminant function can also be determined.
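The summary describes the reduction scheme only at a high level. As an illustrative sketch (not the authors' algorithm), the toy example below builds a synthetic kernel expansion standing in for a trained SVM, then uses a simple genetic algorithm over fixed-size subsets of support-vector indices, refitting coefficients by least squares so the reduced expansion approximates the original discriminant values. All names and parameters here (`rbf`, `ga_select`, the population size, the truncation selection) are assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf(A, B, gamma=0.5):
    # Gaussian (RBF) kernel matrix between row sets A and B
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# A toy "trained" SVM solution: support vectors + dual coefficients.
SV = rng.normal(size=(40, 2))        # hypothetical support vectors
alpha = rng.normal(size=40)          # hypothetical dual coefficients
X_eval = rng.normal(size=(200, 2))   # points where we match the discriminant

K_full = rbf(X_eval, SV)
f_full = K_full @ alpha              # original discriminant values (bias omitted)

def fit_error(idx):
    """Refit coefficients on the selected vectors by least squares and
    return the squared error against the original discriminant values."""
    Ks = K_full[:, idx]
    beta, *_ = np.linalg.lstsq(Ks, f_full, rcond=None)
    return ((Ks @ beta - f_full) ** 2).sum()

def ga_select(n_keep=8, pop=30, gens=60, p_mut=0.1):
    """Simple GA over fixed-size subsets of support-vector indices."""
    popn = [rng.choice(len(SV), n_keep, replace=False) for _ in range(pop)]
    best, best_err = None, np.inf
    for _ in range(gens):
        errs = np.array([fit_error(ind) for ind in popn])
        order = np.argsort(errs)
        if errs[order[0]] < best_err:
            best, best_err = popn[order[0]].copy(), errs[order[0]]
        parents = [popn[i] for i in order[: pop // 2]]  # truncation selection
        children = []
        while len(children) < pop - len(parents):
            a, b = rng.choice(len(parents), 2, replace=False)
            union = np.union1d(parents[a], parents[b])  # crossover: sample from union
            child = rng.choice(union, n_keep, replace=False)
            if rng.random() < p_mut:                    # mutation: swap one index
                out = rng.integers(n_keep)
                child[out] = rng.choice(np.setdiff1d(np.arange(len(SV)), child))
            children.append(child)
        popn = parents + children
    return best, best_err

subset, err = ga_select()
print(f"kept {len(subset)} of {len(SV)} support vectors, squared error {err:.4f}")
```

The fixed subset size mirrors the paper's idea of choosing the reduced-set size up front to control the complexity side of the trade-off; the least-squares refit is one plausible way to score a candidate subset against the original discriminant function.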

### MSC:

68T05 | Learning and adaptive systems in artificial intelligence |

68T10 | Pattern recognition, speech recognition |

### Keywords:

support vector machine; vector correlation; genetic algorithms; optimal solution; discriminant function; pattern recognition

\textit{H.-J. Lin} and \textit{J. P. Yeh}, Appl. Math. Comput. 214, No. 2, 329--335 (2009; Zbl 1179.68123)


### References:

[1] | B.E. Boser, I.M. Guyon, V.N. Vapnik, A training algorithm for optimal margin classifiers, in: Proceedings of the Fifth Annual Workshop on Computational Learning Theory, 1992, pp. 144-152. |

[2] | C.J.C. Burges, Simplified support vector decision rules, in: Proceedings 13th International Conference on Machine Learning, 1996, pp. 71-77. |

[3] | Burges, C. J. C., Geometry and invariance in kernel based methods, in: Advances in Kernel Methods - Support Vector Learning (1999), MIT Press: Cambridge, MA, pp. 86-116 |

[4] | Chakraborty, Biman; Chaudhuri, Probal, On the use of genetic algorithm with elitism in robust and nonparametric multivariate analysis, Austrian Journal of Statistics, 32, 1&2, 13-27 (2003) |

[5] | Cortes, C.; Vapnik, V., Support-vector networks, Machine Learning, 20, 273-297 (1995) · Zbl 0831.68098 |

[6] | Downs, T.; Gates, K.; Masters, A., Exact simplification of support vector solutions, The Journal of Machine Learning Research, 2, 293-297 (2002) · Zbl 1037.68111 |

[7] | J. Guo, N. Takahashi, T. Nishi, A learning algorithm for improving the classification speed of support vector machines, in: Proceedings of 2005 European Conference on Circuit Theory and Design (ECCTD2005), August-September 2005. |

[8] | Goldberg, D. E., Genetic Algorithms in Search, Optimization and Machine Learning (1989), Addison Wesley: New York · Zbl 0721.68056 |

[9] | T. Graepel, R. Herbrich, J. Shawe-Taylor, Generalisation error bounds for sparse linear classifiers, in: Proceedings of the Thirteenth Annual Conference on Computational Learning Theory, 2000, pp. 298-303. |

[10] | T. Joachims, Estimating the generalization performance of an SVM efficiently, in: Proceedings ICML-00, 17th International Conference on Machine Learning, 1999, pp. 431-438. |

[11] | LeCun, Y.; Bottou, L.; Jackel, L.; Drucker, H.; Cortes, C.; Denker, J.; Guyon, I.; Muller, U.; Sackinger, E.; Simard, P.; Vapnik, V., Learning algorithms for classification: a comparison on handwritten digit recognition, Neural Networks, 261-276 (1995) |

[12] | Li, Q.; Jiao, L.; Hao, Y., Adaptive simplification of solution for support vector machine, Pattern Recognition, 40, 3, 972-980 (2007) · Zbl 1119.68151 |

[13] | Liu, C.; Nakashima, K.; Sako, H.; Fujisawa, H., Handwritten digit recognition: bench-marking of state-of-the-art techniques, Pattern Recognition, 36, 2271-2285 (2003) · Zbl 1054.68124 |

[14] | K.J. Lang, M.J. Witbrock, Learning to tell two spirals apart, in: Proceedings of 1989 Connectionist Models Summer School, 1989, pp. 52-61. |

[15] | Ruszczynski, Andrzej P., Nonlinear Optimization (2006), Princeton University Press, p. 160 · Zbl 1108.90001 |

[16] | J. Platt, Sequential minimal optimization: a fast algorithm for training support vector machines, Technical Report, Microsoft Research, Redmond, 1998. |

[17] | Schölkopf, B.; Smola, A., Learning with Kernels (1999), MIT Press: Cambridge |

[18] | Thies, T.; Weber, F., Optimal reduced-set vectors for support vector machines with a quadratic kernel, Neural Computation, 16, 1769-1777 (2004) · Zbl 1089.68113 |

[19] | Vapnik, V. N., The Nature of Statistical Learning Theory (1995), Springer-Verlag: Springer-Verlag New York · Zbl 0934.62009 |

[20] | Vapnik, V. N., Statistical Learning Theory (1998), Wiley: Wiley New York · Zbl 0934.62009 |

This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. In some cases that data have been complemented/enhanced by data from zbMATH Open. This attempts to reflect the references listed in the original paper as accurately as possible without claiming completeness or a perfect matching.