Tibshirani, Ryan J.; Taylor, Jonathan
The solution path of the generalized lasso. (English) Zbl 1234.62107
Ann. Stat. 39, No. 3, 1335-1371 (2011).

Summary: We present a path algorithm for the generalized lasso problem. This problem penalizes the \(\ell_1\) norm of a matrix \(D\) times the coefficient vector, and has a wide range of applications, dictated by the choice of \(D\). Our algorithm is based on solving the dual of the generalized lasso, which greatly facilitates computation of the path. For \(D = I\) (the usual lasso), we draw a connection between our approach and the well-known LARS algorithm. For an arbitrary \(D\), we derive an unbiased estimate of the degrees of freedom of the generalized lasso fit. This estimate turns out to be quite intuitive in many applications.

Cited in 2 Reviews. Cited in 186 Documents.

MSC:
62J05 Linear regression; mixed models
65C60 Computational problems in statistics (MSC2010)
62J99 Linear inference, regression

Keywords: path algorithm; Lagrange dual; LARS; degrees of freedom

Software: R; NESTA; PDCO

Full Text: DOI arXiv

References:
[1] Becker, S., Bobin, J. and Candes, E. J. (2011). NESTA: A fast and accurate first-order method for sparse recovery. SIAM Journal on Imaging Sciences 4 1-39. · Zbl 1209.90265 · doi:10.1137/090756855
[2] Bertsekas, D. P. (1999). Nonlinear Programming. Athena Scientific, Nashua, NH. · Zbl 1015.90077
[3] Best, M. J. (1982). An algorithm for the solution of the parametric quadratic programming problem. CORR Report 82-84, Univ. Waterloo. · Zbl 0464.73039
[4] Boyd, S. and Vandenberghe, L. (2004). Convex Optimization. Cambridge Univ. Press, Cambridge. · Zbl 1058.90049
[5] Bredel, M., Bredel, C., Juric, D., Harsh, G. R., Vogel, H., Recht, L. D. and Sikic, B. I. (2005). High-resolution genome-wide mapping of genetic alterations in human glial brain tumors. Cancer Res. 65 4088-4096.
[6] Centers for Disease Control and Prevention (2009). "Novel H1N1 flu situation update." Available at .
[7] Chen, S. S., Donoho, D. L. and Saunders, M. A. (1998). Atomic decomposition by basis pursuit. SIAM J. Sci. Comput. 20 33-61. · Zbl 0919.94002 · doi:10.1137/S1064827596304010
[8] Cleveland, W., Grosse, E., Shyu, W. and Terpenning, I. (1991). Local regression models. In Statistical Models in S (J. Chambers and T. Hastie, eds.). Wadsworth, Belmont, CA.
[9] Donoho, D. L. and Johnstone, I. M. (1995). Adapting to unknown smoothness via wavelet shrinkage. J. Amer. Statist. Assoc. 90 1200-1224. · Zbl 0869.62024 · doi:10.2307/2291512
[10] Donoho, D. L. and Tanner, J. (2010). Counting the faces of randomly-projected hypercubes and orthants, with applications. Discrete Comput. Geom. 43 522-541. · Zbl 1191.52004 · doi:10.1007/s00454-009-9221-z
[11] Efron, B., Hastie, T., Johnstone, I. and Tibshirani, R. (2004). Least angle regression. Ann. Statist. 32 407-499. · Zbl 1091.62054 · doi:10.1214/009053604000000067
[12] Elad, M., Milanfar, P. and Rubinstein, R. (2007). Analysis versus synthesis in signal priors. Inverse Problems 23 947-968. · Zbl 1138.93055 · doi:10.1088/0266-5611/23/3/007
[13] Evans, L. C. and Gariepy, R. F. (1992). Measure Theory and Fine Properties of Functions. CRC Press, Boca Raton, FL. · Zbl 0804.28001
[14] Friedman, J., Hastie, T., Höfling, H. and Tibshirani, R. (2007). Pathwise coordinate optimization. Ann. Appl. Statist. 1 302-332. · Zbl 1378.90064 · doi:10.1214/07-AOAS131
[15] Golub, G. H. and Van Loan, C. F. (1996). Matrix Computations, 3rd ed. Johns Hopkins Univ. Press, Baltimore, MD. · Zbl 0865.65009
[16] Hastie, T., Rosset, S., Tibshirani, R. and Zhu, J. (2003/04). The entire regularization path for the support vector machine. J. Mach. Learn. Res. 5 1391-1415 (electronic). · Zbl 1222.68213
[17] Hastie, T. and Tibshirani, R. (1990). Generalized Additive Models. Monographs on Statistics and Applied Probability 43. Chapman & Hall, London. · Zbl 0747.62061
[18] Hastie, T. and Tibshirani, R. (1993). Varying-coefficient models. J. Roy. Statist. Soc. Ser. B 55 757-796. · Zbl 0796.62060
[19] Hoefling, H. (2009). A path algorithm for the fused lasso signal approximator. Unpublished manuscript. Available at . · doi:10.1198/jcgs.2010.09208
[20] James, G. M., Radchenko, P. and Lv, J. (2009). DASSO: Connections between the Dantzig selector and lasso. J. R. Stat. Soc. Ser. B Stat. Methodol. 71 127-142. · Zbl 1231.62129 · doi:10.1111/j.1467-9868.2008.00668.x
[21] Kim, S.-J., Koh, K., Boyd, S. and Gorinevsky, D. (2009). \(\ell_1\) trend filtering. SIAM Rev. 51 339-360. · Zbl 1171.37033 · doi:10.1137/070690274
[22] Osborne, M. R., Presnell, B. and Turlach, B. A. (2000). On the LASSO and its dual. J. Comput. Graph. Statist. 9 319-337. · doi:10.2307/1390657
[23] R Development Core Team (2008). R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing, Vienna, Austria. Available at .
[24] Rosset, S. and Zhu, J. (2007). Piecewise linear regularized solution paths. Ann. Statist. 35 1012-1030. · Zbl 1194.62094 · doi:10.1214/009053606000001370
[25] Rudin, L. I., Osher, S. and Fatemi, E. (1992). Nonlinear total variation based noise removal algorithms. Phys. D 60 259-268. · Zbl 0780.49028 · doi:10.1016/0167-2789(92)90242-F
[26] Schneider, R. (1993). Convex Bodies: The Brunn-Minkowski Theory. Encyclopedia of Mathematics and Its Applications 44. Cambridge Univ. Press, Cambridge. · Zbl 0798.52001
[27] She, Y. (2010). Sparse regression with exact clustering. Electron. J. Stat. 4 1055-1096. · Zbl 1329.62327 · doi:10.1214/10-EJS578
[28] She, Y. and Owen, A. B. (2010). Outlier detection using nonconvex penalized regression. Unpublished manuscript. Available at . · Zbl 1232.62068
[29] Stein, C. M. (1981). Estimation of the mean of a multivariate normal distribution. Ann. Statist. 9 1135-1151. · Zbl 0476.62035 · doi:10.1214/aos/1176345632
[30] Tibshirani, R. (1996). Regression shrinkage and selection via the lasso. J. Roy. Statist. Soc. Ser. B 58 267-288. · Zbl 0850.62538
[31] Tibshirani, R., Saunders, M., Rosset, S., Zhu, J. and Knight, K. (2005). Sparsity and smoothness via the fused lasso. J. R. Stat. Soc. Ser. B Stat. Methodol. 67 91-108. · Zbl 1060.62049 · doi:10.1111/j.1467-9868.2005.00490.x
[32] Tibshirani, R. J. and Taylor, J. (2011). Supplement to "The solution path of the generalized lasso". · Zbl 1234.62107
[33] Tseng, P. (2001). Convergence of a block coordinate descent method for nondifferentiable minimization. J. Optim. Theory Appl. 109 475-494. · Zbl 1006.65062 · doi:10.1023/A:1017501703105
[34] Zou, H. and Hastie, T. (2005). Regularization and variable selection via the elastic net. J. R. Stat. Soc. Ser. B Stat. Methodol. 67 301-320. · Zbl 1069.62054 · doi:10.1111/j.1467-9868.2005.00503.x
[35] Zou, H., Hastie, T. and Tibshirani, R. (2007). On the "degrees of freedom" of the lasso. Ann. Statist. 35 2173-2192. · Zbl 1126.62061 · doi:10.1214/009053607000000127

This reference list is based on information provided by the publisher or from digital mathematics libraries.
Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. In some cases these data have been complemented or enhanced with data from zbMATH Open. The list attempts to reflect the references of the original paper as accurately as possible, without claiming completeness or a perfect matching.
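To make the summary concrete, the display below gives a standard statement of the generalized lasso problem and, for the signal-approximation case \(X = I\), a Lagrange dual of the kind that path algorithms of this type exploit. The notation (response \(y\), design matrix \(X\), coefficient vector \(\beta \in \mathbb{R}^p\), penalty level \(\lambda \ge 0\), dual variable \(u\)) is standard rather than quoted from the paper, so this should be read as an illustrative sketch, not as the paper's exact formulation:
\[
\hat\beta \in \operatorname*{arg\,min}_{\beta \in \mathbb{R}^p} \; \tfrac{1}{2}\,\| y - X\beta \|_2^2 \;+\; \lambda\,\| D\beta \|_1 .
\]
For \(X = I\), a standard dual is the box-constrained least squares problem
\[
\hat u \in \operatorname*{arg\,min}_{u} \; \tfrac{1}{2}\,\| y - D^{\mathsf T} u \|_2^2 \quad \text{subject to} \quad \| u \|_\infty \le \lambda ,
\]
with the primal fit recovered as \(\hat\beta = y - D^{\mathsf T}\hat u\); the simple box constraint underlies the piecewise-linear behavior of the solution path in \(\lambda\). Familiar special cases are obtained by choosing \(D\): \(D = I\) recovers the usual lasso, a first-difference matrix gives the fused lasso and total variation denoising, and a second-difference matrix gives \(\ell_1\) trend filtering. For \(D = I\) the number of nonzero coefficients is a well-known unbiased estimate of the degrees of freedom; the result summarized above extends such estimates to arbitrary \(D\).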