On \(\psi\)-learning. (English) Zbl 1052.62095

Summary: The concept of large margins has been recognized as an important principle in analyzing learning methodologies, including boosting, neural networks, and support vector machines (SVMs). However, this concept alone is not adequate for learning in nonseparable cases. We propose a learning methodology, called \(\psi\)-learning, that is derived from a direct consideration of generalization errors. We provide a theory for \(\psi\)-learning and show that it essentially attains the optimal rates of convergence in two learning examples. Finally, results from simulation studies and from breast cancer classification confirm the ability of \(\psi\)-learning to outperform SVMs in generalization.
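To illustrate the idea behind the methodology summarized above, the sketch below contrasts the convex hinge loss used by SVMs with a \(\psi\)-type nonconvex surrogate of the 0-1 loss. This is a hedged illustration only: the piecewise-linear form and the transition parameter `tau` are assumptions made here for exposition, not necessarily the exact \(\psi\) of the paper. The point it shows is that the hinge loss grows without bound on badly misclassified points, whereas a \(\psi\)-type loss is bounded, so nonseparable or outlying observations cannot dominate the empirical risk.

```python
def hinge_loss(u):
    """SVM hinge loss on the functional margin u = y * f(x).

    Unbounded: the penalty grows linearly as u decreases below 1.
    """
    return max(0.0, 1.0 - u)


def psi_loss(u, tau=1.0):
    """Illustrative psi-type loss (an assumed form, for exposition).

    Matches the scaled 0-1 loss 1 - sign(u) outside (0, tau) and is
    linearly interpolated on (0, tau). Bounded above by 2, so a single
    grossly misclassified point contributes at most a fixed penalty.
    """
    if u >= tau:
        return 0.0                    # correct with sufficient margin
    if u <= 0.0:
        return 2.0                    # misclassified: bounded penalty
    return 2.0 * (1.0 - u / tau)      # transition region


if __name__ == "__main__":
    for u in (-3.0, -0.5, 0.25, 1.0, 2.0):
        print(f"u={u:+.2f}  hinge={hinge_loss(u):.2f}  psi={psi_loss(u):.2f}")
```

On a badly misclassified point (u = -3), the hinge loss charges 4.0 while the \(\psi\)-type loss caps the charge at 2.0; both vanish once the margin exceeds 1. Minimizing such a bounded, nonconvex loss is the computational price paid for tracking the generalization error more directly.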


62M45 Neural nets and related approaches to inference from stochastic processes
62H30 Classification and discrimination; cluster analysis (statistical aspects)
62B10 Statistical aspects of information-theoretic topics
68T05 Learning and adaptive systems in artificial intelligence
62P10 Applications of statistics to biology and medical sciences; meta analysis
Full Text: DOI