
Kernel-based nonlinear discriminant analysis for face recognition. (English) Zbl 1083.68595

Summary: Linear subspace analysis methods have been successfully applied to extract features for face recognition, but because of their linearity they cannot adequately represent the complex, nonlinear variations of real face images, such as changes in illumination, facial expression and pose. In this paper, a nonlinear subspace analysis method, Kernel-based Nonlinear Discriminant Analysis (KNDA), is presented for face recognition; it combines the nonlinear kernel trick with the linear subspace analysis method Fisher Linear Discriminant Analysis (FLDA). First, the kernel trick implicitly maps the input data into a high-dimensional feature space; FLDA is then performed in this feature space, yielding nonlinear discriminant features of the input data. In addition, a geometry-based feature vectors selection scheme is adopted to reduce the computational complexity. A similar nonlinear subspace analysis method is Kernel-based Principal Component Analysis (KPCA), which combines the kernel trick with linear Principal Component Analysis (PCA). Experiments are performed with the polynomial kernel, and KNDA is compared with KPCA and FLDA. Extensive experimental results show that KNDA achieves a higher recognition rate than KPCA and FLDA.
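The two-step scheme described in the summary (implicit kernel mapping, then Fisher discriminant analysis in the feature space) can be sketched as follows. This is a minimal kernel Fisher discriminant in the style of Mika et al. [22], not the authors' exact KNDA: in particular it omits the geometry-based feature vectors selection step, and all function names and the regularization parameter are illustrative assumptions.

```python
import numpy as np

def polynomial_kernel(X, Y, degree=2):
    # k(x, y) = (x . y + 1)^d -- the polynomial kernel used in the paper's experiments
    return (X @ Y.T + 1.0) ** degree

def kernel_fda(X, y, degree=2, reg=1e-6):
    """Kernel Fisher discriminant sketch: kernel trick + FLDA in feature space.

    Returns the expansion coefficients `alphas` (one column per discriminant
    direction, at most C-1 for C classes) and the training data needed to
    project new samples.
    """
    classes = np.unique(y)
    n = X.shape[0]
    K = polynomial_kernel(X, X, degree)          # n x n Gram matrix
    m_star = K.mean(axis=1)                      # kernelized overall mean

    # Between-class (M) and within-class (N) scatter in kernel form
    M = np.zeros((n, n))
    N = np.zeros((n, n))
    for c in classes:
        idx = np.flatnonzero(y == c)
        Kc = K[:, idx]                           # kernel columns of class c
        m_c = Kc.mean(axis=1)                    # kernelized class mean
        d = (m_c - m_star)[:, None]
        M += len(idx) * (d @ d.T)
        H = np.eye(len(idx)) - np.full((len(idx), len(idx)), 1.0 / len(idx))
        N += Kc @ H @ Kc.T                       # within-class centering

    N += reg * np.eye(n)                         # regularize so N is invertible

    # Leading eigenvectors of N^{-1} M give the discriminant coefficients
    evals, evecs = np.linalg.eig(np.linalg.solve(N, M))
    order = np.argsort(-evals.real)
    alphas = evecs[:, order[: len(classes) - 1]].real
    return alphas, X

def project(Z, X_train, alphas, degree=2):
    # Nonlinear discriminant features of new samples Z
    return polynomial_kernel(Z, X_train, degree) @ alphas
```

A new face vector is then classified from its projected features, e.g. by nearest class mean in the discriminant space, mirroring how the discriminant features are used for recognition.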

MSC:

68T10 Pattern recognition, speech recognition

Software:

FERET

References:

[1] Zhao W, Chellappa R, Rosenfeld A, Phillips P J. Face recognition: A literature survey. CS-Tech Report 4167, University of Maryland, 2000.
[2] Guo G D, Li S Z, Chan K. Face recognition by support vector machines. In Proc. Int. Conf. Automatic Face and Gesture Recognition, Grenoble, France, 2000, pp.196–201.
[3] Li S Z, Lu J W. Face recognition using the nearest feature line method. IEEE Trans. Neural Networks, 1999, 10: 439–443.
[4] Liu C J, Wechsler H. A unified Bayesian framework for face recognition. In Proc. Int. Conf. Image Processing, Brisbane, Australia, 1998, pp.151–155.
[5] Turk M, Pentland A. Eigenfaces for recognition. J. Cognitive Neuroscience, 1991, 3(1): 71–86.
[6] Belhumeur P N, Hespanha J P, Kriegman D J. Eigenfaces vs. Fisherfaces: Recognition using class specific linear projection. IEEE Trans. Pattern Analysis and Machine Intelligence, 1997, 19(7): 711–720. · Zbl 05111919
[7] Zhao W, Chellappa R, Phillips P J. Subspace linear discriminant analysis for face recognition. Tech Report CAR-TR-914, Center for Automation Research, University of Maryland, 1999.
[8] Chen L F, Liao H M, Lin J C et al. A new LDA-based face recognition system which can solve the small sample size problem. Pattern Recognition, 2000, 33(10).
[9] Yu H, Yang J. A direct LDA algorithm for high-dimensional data – with application to face recognition. Pattern Recognition, 2001, 34(10): 2067–2070. · Zbl 0993.68091
[10] Huang R, Liu Q S, Lu H Q, Ma S D. Solving the small sample size problem of LDA. In Proc. Int. Conf. Pattern Recognition, Quebec, Canada, 2002, 3: 29–32.
[11] Phillips P J, Moon H, Rizvi S, Rauss P. The FERET evaluation methodology for face recognition algorithms. IEEE Trans. Pattern Analysis and Machine Intelligence, 2000, 22(10): 1090–1104. · Zbl 05112402
[12] Bartlett M S, Lades H M, Sejnowski T. Independent component representations for face recognition. In Proc. SPIE, San Jose, CA, USA, 1998, Vol. 3299, pp.528–539.
[13] Moghaddam B. Principal manifolds and Bayesian subspaces for visual recognition. Tech Report 99-35, Mitsubishi Electric Research Laboratory, 1999.
[14] Baek K, Draper B A, Beveridge J R et al. PCA vs. ICA: A comparison on the FERET data set. http://www.cs.colostate.edu/evalfacerec/papers/cvprip02.pdf.
[15] Osuna E, Freund R, Girosi F. Support vector machines: Training and applications. Tech Report, AI Lab, MIT, 1997.
[16] Heisele B, Ho P, Poggio T. Face recognition with support vector machines: Global versus component-based approach. In Proc. Int. Conf. Computer Vision, Vancouver, Canada, 2001, pp.688–694.
[17] Schölkopf B, Smola A, Müller K R. Nonlinear component analysis as a kernel eigenvalue problem. Neural Computation, 1998, 10: 1299–1319.
[18] Yang M H, Ahuja N, Kriegman D. Face recognition using kernel eigenfaces. In Proc. Int. Conf. Image Processing, Vancouver, Canada, 2000, pp.37–40.
[19] Kim K I, Jung K, Kim H J. Face recognition using kernel principal component analysis. IEEE Signal Processing Letters, 2002, 9(2): 40–42.
[20] Liu Q S, Huang R, Lu H Q et al. Face recognition using kernel based Fisher discriminant analysis. In Proc. Int. Conf. Automatic Face and Gesture Recognition, Washington DC, USA, 2002, pp.197–201.
[21] Baudat G, Anouar F. Generalized discriminant analysis using a kernel approach. Neural Computation, 2000, 12(10): 2385–2404.
[22] Mika S, Rätsch G, Weston J et al. Fisher discriminant analysis with kernels. In Neural Networks for Signal Processing IX, 1999, pp.41–48.
[23] Li Y M, Gong S G, Liddell H. Recognising trajectories of facial identities using kernel discriminant analysis. In Proc. British Machine Vision Conference, 2001, pp.613–622.
[24] Wu Y, Huang T S, Toyama K. Self-supervised learning for object recognition based on kernel discriminant-EM algorithm. In Proc. Int. Conf. Computer Vision, Vancouver, Canada, 2001, pp.275–280.
[25] Baudat G, Anouar F. Kernel-based methods and function approximation. In Proc. Int. Conf. Neural Networks, Washington DC, July 15–19, 2001, pp.1244–1249.
[26] Phillips P J, Wechsler H, Huang J, Rauss P. The FERET database and evaluation procedure for face recognition algorithms. Image and Vision Computing, 1998, 16(5): 295–306.
[27] Yambor W S, Draper B A, Beveridge J R. Analyzing PCA-based face recognition algorithms: Eigenvector selection and distance measures. The 2nd Workshop on Empirical Evaluation in Computer Vision, http://www.cs.colostate.edu/~vision/papers/csueemcv.pdf, 2000.
[28] Freund Y, Schapire R E. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of Computer and System Sciences, 1997, 55(1): 119–139. · Zbl 0880.68103