
Sparsity and nullity: paradigms for analysis dictionary learning. (English) Zbl 1343.68191

Summary: Sparse models in dictionary learning have been successfully applied to a wide variety of machine learning and computer vision problems and have consequently attracted increased research interest. A related problem based on linear equality constraints, the sparse null space (SNS) problem, first appeared in [T. F. Coleman and A. Pothen, SIAM J. Algebraic Discrete Methods 7, 527–537 (1986; Zbl 0608.65024)] and has since inspired work on sparse basis pursuit. In this paper, we investigate the relation between the SNS problem and the analysis dictionary learning (ADL) problem, and show that the SNS problem plays a central role and can be used to solve dictionary learning problems. Moreover, we propose an efficient sparse null space basis pursuit (SNS-BP) algorithm and extend it to a solution of ADL. Experimental results on synthetic and real-world data validate the performance of our method.
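The SNS problem referred to above asks for a maximally sparse basis of the null space of a given matrix A, i.e., a full-rank matrix N with AN = 0 and as few nonzero entries as possible. As a minimal illustration of the ℓ1-relaxed basis-pursuit idea behind such formulations (a sketch only, not the authors' SNS-BP algorithm), the following Python snippet finds one candidate sparse null vector of A by solving min ||z||_1 subject to Az = 0, with one coordinate pinned to 1 to exclude the trivial solution; the function name sparse_null_vector and the coordinate-pinning normalization are illustrative assumptions, not part of the paper.

import numpy as np
from scipy.optimize import linprog

def sparse_null_vector(A, j):
    """l1-relaxed sparse null vector of A with its j-th entry pinned to 1.

    Solves  min ||z||_1  s.t.  A z = 0,  z[j] = 1
    via the standard split z = u - v with u, v >= 0, cast as a linear program.
    """
    m, n = A.shape
    c = np.ones(2 * n)                        # objective: sum(u) + sum(v) = ||z||_1
    e = np.zeros(n)
    e[j] = 1.0
    A_eq = np.vstack([np.hstack([A, -A]),     # A (u - v) = 0
                      np.hstack([e, -e])])    # (u - v)[j] = 1
    b_eq = np.concatenate([np.zeros(m), [1.0]])
    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, None), method="highs")
    u, v = res.x[:n], res.x[n:]
    return u - v

# Toy example: A is 2 x 5, so its null space is 3-dimensional.
rng = np.random.default_rng(0)
A = rng.standard_normal((2, 5))
z = sparse_null_vector(A, j=0)
print(np.round(z, 4), "residual:", np.linalg.norm(A @ z))

Repeating this for different pinned coordinates and keeping only linearly independent solutions yields a heuristic sparse spanning set of the null space; the paper's SNS-BP algorithm pursues the full sparse basis in a principled way.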

MSC:

68T05 Learning and adaptive systems in artificial intelligence
65F50 Computational methods for sparse matrices
65K05 Numerical mathematical programming methods
90C25 Convex programming

Citations:

Zbl 0608.65024

References:

[1] M. Aharon, M. Elad, and A. Bruckstein, {\it K-SVD: An algorithm for designing overcomplete dictionaries for sparse representation}, IEEE Trans. Signal Process., 54 (2006), pp. 4311-4322. · Zbl 1375.94040
[2] X. Bian and H. Krim, {\it Optimal operator space pursuit: A framework for video sequence data analysis}, in Computer Vision–ACCV 2012, Springer, Berlin, 2013, pp. 760-769. · Zbl 1322.62184
[3] X. Bian and H. Krim, {\it Robust Subspace Recovery via Bi-Sparsity Pursuit}, preprint, arXiv:1403.8067, 2014.
[4] X. Bian, H. Krim, A. Bronstein, and L. Dai, {\it Sparse null space basis pursuit and analysis dictionary learning for high-dimensional data analysis}, in Proceedings of the 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), IEEE Press, Piscataway, NJ, 2015, pp. 3781-3785.
[5] O. Bousquet and L. Bottou, {\it The tradeoffs of large scale learning}, in Advances in Neural Information Processing Systems 20, J. C. Platt, D. Koller, Y. Singer, and S. T. Roweis, eds., Curran Associates, Inc., Red Hook, NY, 2008, pp. 161-168.
[6] E. J. Candès, {\it The restricted isometry property and its implications for compressed sensing}, C. R. Math. Acad. Sci. Paris, 346 (2008), pp. 589-592. · Zbl 1153.94002
[7] E. J. Candès, J. K. Romberg, and T. Tao, {\it Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information.}, IEEE Trans. Inform. Theory, 52 (2006), pp. 489-509. · Zbl 1231.94017
[8] E. J. Candès, J. K. Romberg, and T. Tao, {\it Stable signal recovery from incomplete and inaccurate measurements}, Comm. Pure Appl. Math., 59 (2006), pp. 1207-1223. · Zbl 1098.94009
[9] Y. Chen, R. Ranftl, and T. Pock, {\it Insights into analysis operator learning: From patch-based sparse models to higher order MRFs}, IEEE Trans. Image Process., 23 (2014), pp. 1060-1072. · Zbl 1374.94065
[10] T. F. Coleman and A. Pothen, {\it The null space problem I. Complexity}, SIAM J. Algebraic Discrete Methods, 7 (1986), pp. 527-537, doi:10.1137/0607059. · Zbl 0608.65024
[11] T. F. Coleman and A. Pothen, {\it The null space problem II. Algorithms}, SIAM J. Algebraic Discrete Methods, 8 (1987), pp. 544-563, doi:10.1137/0608045. · Zbl 0642.65028
[12] D. L. Donoho and C. Grimes, {\it Hessian eigenmaps: Locally linear embedding techniques for high-dimensional data}, Proc. Natl. Acad. Sci. USA, 100 (2003), pp. 5591-5596. · Zbl 1130.62337
[13] M. Elad, {\it Sparse and redundant representation modeling: What next?}, IEEE Signal Process. Lett., 19 (2012), pp. 922-928.
[14] M. Elad and M. Aharon, {\it Image denoising via sparse and redundant representations over learned dictionaries}, IEEE Trans. Image Process., 15 (2006), pp. 3736-3745.
[15] M. Elad, M. A. Figueiredo, and Y. Ma, {\it On the role of sparse and redundant representations in image processing}, Proc. IEEE, 98 (2010), pp. 972-982.
[16] Y. C. Eldar and G. Kutyniok, {\it Compressed Sensing: Theory and Applications}, Cambridge University Press, Cambridge, UK, 2012.
[17] E. Elhamifar and R. Vidal, {\it Sparse subspace clustering}, in Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2009), IEEE Press, Piscataway, NJ, 2009, pp. 2790-2797.
[18] J. R. Gilbert and M. T. Heath, {\it Computing a sparse basis for the null space}, SIAM J. Algebraic Discrete Methods, 8 (1987), pp. 446-459, doi:10.1137/0608037. · Zbl 0635.65037
[19] S. Hawe, M. Kleinsteuber, and K. Diepold, {\it Analysis operator learning and its application to image reconstruction}, IEEE Trans. Image Process., 22 (2013), pp. 2138-2150. · Zbl 1373.94161
[20] K. Huang and S. Aviyente, {\it Sparse representation for signal classification}, in Advances in Neural Information Processing Systems 19, B. Schölkopf, J. C. Platt, and T. Hoffman, eds., MIT Press, Cambridge, MA, 2006, pp. 609-616.
[21] K. Kreutz-Delgado, J. F. Murray, B. D. Rao, K. Engan, T.-W. Lee, and T. J. Sejnowski, {\it Dictionary learning algorithms for sparse representation}, Neural Comput., 15 (2003), pp. 349-396. · Zbl 1047.68101
[22] D. A. Levin, Y. Peres, and E. L. Wilmer, {\it Markov Chains and Mixing Times}, American Mathematical Society, Providence, RI, 2009. · Zbl 1160.60001
[23] Z. Lin, M. Chen, and Y. Ma, {\it The Augmented Lagrange Multiplier Method for Exact Recovery of Corrupted Low-Rank Matrices}, preprint, arXiv:1009.5055, 2010.
[24] Z. Lin, R. Liu, and Z. Su, {\it Linearized alternating direction method with adaptive penalty for low-rank representation}, in Advances in Neural Information Processing Systems 24, J. Shawe-Taylor, R. S. Zemel, P. L. Bartlett, F. Pereira, and K. Q. Weinberger, eds., Curran Associates, Inc., Red Hook, NY, 2011, pp. 612-620.
[25] H. Liu, Z. Yang, and Z. Wu, {\it Locality-concept factorization}, in Proceedings of the 22nd International Joint Conference on Artificial Intelligence, Barcelona, Spain, 2011, pp. 1378-1383.
[26] J. Mairal, F. Bach, J. Ponce, and G. Sapiro, {\it Online dictionary learning for sparse coding}, in Proceedings of the 26th Annual International Conference on Machine Learning, ACM, New York, 2009, pp. 689-696. · Zbl 1242.62087
[27] J. Mairal, F. Bach, J. Ponce, G. Sapiro, and A. Zisserman, {\it Supervised dictionary learning}, in Advances in Neural Information Processing Systems 21, D. Koller, D. Schuurmans, Y. Bengio, and L. Bottou, eds., Curran Associates, Inc., Red Hook, NY, 2009, pp. 1033-1040.
[28] S. Nam, M. E. Davies, M. Elad, and R. Gribonval, {\it The cosparse analysis model and algorithms}, Appl. Comput. Harmon. Anal., 34 (2013), pp. 30-56. · Zbl 1261.94018
[29] A. Pothen, {\it Sparse null basis computations in structural optimization}, Numer. Math., 55 (1989), pp. 501-519. · Zbl 0664.65040
[30] T. Randen and J. H. Husoy, {\it Filtering for texture classification: A comparative study}, IEEE Trans. Pattern Anal. Mach. Intell., 21 (1999), pp. 291-310.
[31] S. T. Roweis and L. K. Saul, {\it Nonlinear dimensionality reduction by locally linear embedding}, Science, 290 (2000), pp. 2323-2326.
[32] R. Rubinstein, T. Faktor, and M. Elad, {\it K-SVD dictionary-learning for the analysis sparse model}, in Proceedings of the 2012 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), IEEE Press, Piscataway, NJ, 2012, pp. 5405-5408.
[33] R. Rubinstein, T. Peleg, and M. Elad, {\it Analysis K-SVD: A dictionary-learning algorithm for the analysis sparse model}, IEEE Trans. Signal Process., 61 (2013), pp. 661-677. · Zbl 1393.94416
[34] B. Wen, S. Ravishankar, and Y. Bresler, {\it Structured overcomplete sparsifying transform learning with convergence guarantees and applications}, Int. J. Comput. Vision, 114 (2015), pp. 137-167. · Zbl 1398.94053
[35] M. Yaghoobi, S. Nam, R. Gribonval, and M. E. Davies, {\it Analysis operator learning for overcomplete cosparse representations}, in Proceedings of the 9th European Conference on Signal Processing, IEEE Press, Piscataway, NJ, 2011, pp. 1470-1474.
[36] M. Yaghoobi, S. Nam, R. Gribonval, and M. E. Davies, {\it Constrained overcomplete analysis operator learning for cosparse signal modelling}, IEEE Trans. Signal Process., 61 (2013), pp. 2341-2355.
[37] G. Zhao, G. Wu, Y. Liu, and J. Chen, {\it Texture classification based on completed modeling of local binary pattern}, in Proceedings of the 2011 International Conference on Computational and Information Sciences (ICCIS), IEEE Press, Piscataway, NJ, 2011, pp. 268-271.