
Sparsity preserving projections with applications to face recognition. (English) Zbl 1186.68421

Summary: Dimensionality reduction (DR) methods have commonly been used as a principled way to understand high-dimensional data such as face images. In this paper, we propose a new unsupervised DR method called Sparsity Preserving Projections (SPP). Unlike many existing techniques such as Locality Preserving Projections (LPP) and Neighborhood Preserving Embedding (NPE), which preserve local neighborhood information during the DR procedure, SPP aims to preserve the sparse reconstructive relationship of the data, which is achieved by minimizing an L1 regularization-related objective function. The obtained projections are invariant to rotations, rescalings and translations of the data, and, more importantly, they contain natural discriminating information even when no class labels are provided. Moreover, SPP chooses its neighborhood automatically and hence can be used more conveniently in practice than LPP and NPE. The feasibility and effectiveness of the proposed method are verified on three popular face databases (Yale, AR and Extended Yale B) with promising results.
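The summary describes a two-step procedure: first compute, for each sample, sparse reconstruction weights over the remaining samples via an L1-regularized fit; then find projections that preserve those weights through a generalized eigenproblem. A minimal sketch of this idea follows, assuming scikit-learn's `Lasso` as the L1 solver and a small ridge term for numerical stability; this is an illustrative reconstruction, not the authors' implementation.

```python
import numpy as np
from sklearn.linear_model import Lasso
from scipy.linalg import eigh

def spp(X, n_components=2, alpha=0.05):
    """Sketch of Sparsity Preserving Projections.

    X : (n_samples, n_features) data matrix.
    Returns a (n_features, n_components) projection matrix.
    """
    n = X.shape[0]
    S = np.zeros((n, n))
    for i in range(n):
        # Step 1: sparse reconstruction of x_i from all other samples
        # (Lasso stands in for the L1-minimization of the paper).
        idx = [j for j in range(n) if j != i]
        lasso = Lasso(alpha=alpha, max_iter=5000)
        lasso.fit(X[idx].T, X[i])  # columns of X[idx].T are the other samples
        S[i, idx] = lasso.coef_
    # Step 2: preserve the sparse weights by solving the generalized
    # eigenproblem  X^T (S + S^T - S^T S) X w = lam X^T X w
    # and keeping the eigenvectors with the largest eigenvalues.
    M = S + S.T - S.T @ S
    A = X.T @ M @ X
    B = X.T @ X + 1e-6 * np.eye(X.shape[1])  # ridge term for stability
    _, vecs = eigh(A, B)
    return vecs[:, ::-1][:, :n_components]
```

Note that, unlike LPP or NPE, no neighborhood size needs to be specified: the support of each row of `S` (the nonzero Lasso coefficients) plays the role of an automatically chosen neighborhood.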

MSC:

68T10 Pattern recognition, speech recognition

Software:

Yale Face; AR face; PDCO
