
Surveying and comparing simultaneous sparse approximation (or group-lasso) algorithms. (English) Zbl 1213.94044

Summary: We survey and compare different algorithms that, given an overcomplete dictionary of elementary functions, solve the problem of simultaneous sparse signal approximation with a common sparsity profile induced by an \(\ell_p-\ell_q\) mixed norm. This problem is known in the statistical learning community as the group-lasso problem. We gather and detail algorithmic results concerning these two equivalent approximation problems, and we enrich the discussion by establishing relations between several of the algorithms. Experimental comparisons of the detailed algorithms have also been carried out. The main lesson from these experiments is that, depending on the performance measure (computational complexity, sparsity recovery, or mean-square error), greedy approaches and iteratively reweighted algorithms are the most efficient.
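The mixed-norm problem described above can be made concrete for the most common case \(p=2\), \(q=1\): minimize \(\tfrac12\|Y-\Phi B\|_F^2 + \lambda\sum_i \|B_{i,\cdot}\|_2\), so that entire rows of the coefficient matrix \(B\) (one row per dictionary atom, shared across all signals) are driven to zero. A minimal sketch of one solver family the survey covers, proximal gradient descent (ISTA) with group soft-thresholding, is given below; the function names and the choice of ISTA are illustrative assumptions, not the authors' specific method.

```python
import numpy as np

def group_soft_threshold(B, tau):
    # Row-wise soft-thresholding: the proximal operator of the
    # l_{2,1} mixed norm tau * sum_i ||B[i, :]||_2.
    norms = np.linalg.norm(B, axis=1, keepdims=True)
    scale = np.maximum(1.0 - tau / np.maximum(norms, 1e-12), 0.0)
    return B * scale

def group_lasso_ista(Phi, Y, lam, n_iter=1000):
    # Proximal-gradient (ISTA) minimization of
    #   0.5 * ||Y - Phi @ B||_F^2 + lam * sum_i ||B[i, :]||_2,
    # i.e. simultaneous sparse approximation of the columns of Y
    # with a common row-sparsity profile.
    B = np.zeros((Phi.shape[1], Y.shape[1]))
    step = 1.0 / np.linalg.norm(Phi, 2) ** 2  # 1 / Lipschitz const of the gradient
    for _ in range(n_iter):
        grad = Phi.T @ (Phi @ B - Y)          # gradient of the smooth fit term
        B = group_soft_threshold(B - step * grad, step * lam)
    return B
```

For example, with a random Gaussian dictionary and a few active atoms shared across several signals, the recovered coefficient matrix concentrates its row norms on the true support while the remaining rows are shrunk to (exactly) zero by the group thresholding step.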

MSC:

94A12 Signal theory (characterization, reconstruction, filtering, etc.)
