## The group Lasso for logistic regression

*(English)* Zbl 1400.62276

Summary: The group lasso is an extension of the lasso that performs variable selection on (predefined) groups of variables in linear regression models. The estimates have the attractive property of being invariant under groupwise orthogonal reparameterizations. We extend the group lasso to logistic regression models and present an efficient algorithm for solving the corresponding convex optimization problem; the algorithm is especially suitable for high dimensional problems and can also be applied to generalized linear models. The group lasso estimator for logistic regression is shown to be statistically consistent even when the number of predictors is much larger than the sample size, provided that the true underlying structure is sparse. We further use a two-stage procedure that aims for sparser models than the group lasso, leading to improved prediction performance in some cases. Moreover, owing to the two-stage nature, the estimates can be constructed to be hierarchical. The methods are applied to simulated data and to real data on splice site detection in DNA sequences.

### MSC:

62P10 | Applications of statistics to biology and medical sciences; meta analysis |

### Keywords:

categorical data; co-ordinate descent algorithm; DNA splice site; group variable selection; high dimensional generalized linear model; penalized likelihood

### Software:

SparseLOGREG

\textit{L. Meier} et al., J. R. Stat. Soc., Ser. B, Stat. Methodol. 70, No. 1, 53--71 (2008; Zbl 1400.62276)


### References:

[1] | Antoniadis, Regularization of wavelet approximations (with discussion), J. Am. Statist. Ass. 96 pp 939– (2001) · Zbl 1072.62561 |

[2] | Bakin, S. 1999 Adaptive regression and model selection in data mining problems. PhD Thesis, Australian National University |

[3] | Balakrishnan, S., Madigan, D. 2006 Algorithms for sparse linear classifiers in the massive data setting. Rutgers University. http://www.stat.rutgers.edu/ madigan/PAPERS/ · Zbl 1225.68148 |

[4] | Bertsekas, Nonlinear Programming (2003) |

[5] | Burge, Computational Methods in Molecular Biology pp 129– (1998) |

[6] | Burge, Prediction of complete gene structures in human genomic DNA, J. Molec. Biol. 268 pp 78– (1997) |

[7] | Cai, Discussion of "Regularization of wavelet approximations" (by A. Antoniadis and J. Fan), J. Am. Statist. Ass. 96 pp 960– (2001) |

[8] | Efron, Least angle regression, Ann. Statist. 32 pp 407– (2004) · Zbl 1091.62054 |

[9] | van de Geer, Recent Advances and Trends in Nonparametric Statistics pp 235– (2003) |

[10] | van de Geer, High-dimensional generalized linear models and the lasso, Ann. Statist. (2007) · Zbl 1138.62323 |

[11] | Genkin, Large-scale Bayesian logistic regression for text categorization, Technometrics 49 pp 291– (2007) |

[12] | Kim, Blockwise sparse regression, Statist. Sin. 16 pp 375– (2006) · Zbl 1096.62076 |

[13] | King, Logistic regression in rare events data, Polit. Anal. 9 pp 137– (2001) |

[14] | Krishnapuram, Sparse multinomial logistic regression: fast algorithms and generalization bounds, IEEE Trans. Pattn Anal. Mach. Intell. 27 pp 957– (2005) |

[15] | Lokhorst, J. 1999 The lasso and generalised linear models. Honors Project, University of Adelaide |

[16] | Meinshausen, Lasso with relaxation, Computnl Statist. Data Anal. 52 pp 374– (2007) |

[17] | Osborne, A new approach to variable selection in least squares problems, IMA J. Numer. Anal. 20 pp 389– (2000) · Zbl 0962.65036 |

[18] | Park, Regularization path algorithms for detecting gene interactions (2006) |

[19] | Park, L1-regularization path algorithm for generalized linear models, J. R. Statist. Soc. B 69 pp 659– (2007) |

[20] | Rosset, Advances in Neural Information Processing Systems pp 1153– (2005) |

[21] | Roth, The generalized lasso, IEEE Trans. Neur. Netwrks 15 pp 16– (2004) |

[22] | Shevade, A simple and efficient algorithm for gene selection using sparse logistic regression, Bioinformatics 19 pp 2246– (2003) |

[23] | Tarigan, Classifiers of support vector machine type with l1 complexity regularization, Bernoulli 12 pp 1045– (2006) · Zbl 1118.62067 |

[24] | Tibshirani, Regression shrinkage and selection via the lasso, J. R. Statist. Soc. B 58 pp 267– (1996) · Zbl 0850.62538 |

[25] | Tibshirani, The lasso method for variable selection in the Cox model, Statist. Med. 16 pp 385– (1997) |

[26] | Tseng, Convergence of a block coordinate descent method for nondifferentiable minimization, J. Optimizn Theory Appl. 109 pp 475– (2001) |

[27] | Tseng, A coordinate gradient descent method for nonsmooth separable minimization (2007) · Zbl 1166.90016 |

[28] | Yeo, Maximum entropy modeling of short sequence motifs with applications to RNA splicing signals, J. Computnl Biol. 11 pp 475– (2004) |

[29] | Yuan, Model selection and estimation in regression with grouped variables, J. R. Statist. Soc. B 68 pp 49– (2006) · Zbl 1141.62030 |

[30] | Zhao, Stagewise lasso (2007) |

This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible without claiming the completeness or perfect precision of the matching.