
Nonnegative adaptive Lasso for ultra-high dimensional regression models and a two-stage method applied in financial modeling. (English) Zbl 1353.62076

Summary: This paper proposes the nonnegative adaptive lasso method for variable selection, both in the classical fixed \(p\) setting (with an OLS initial estimator) and in the ultra-high dimensional setting (with a root-\(n\)-consistent initial estimator). The method extends the adaptive lasso by imposing a nonnegativity constraint on the coefficients. It is shown to enjoy asymptotic unbiasedness, asymptotic normality and variable selection consistency, and its mean squared error decays rapidly. Compared with other procedures, the nonnegative adaptive lasso satisfies the oracle properties and selects the true variables under weaker assumptions. To compute the nonnegative adaptive lasso solution, we extend the multiplicative updates approach. This algorithm is valid in the general framework where the number of regression parameters \(p\) is allowed to be very large. Simulations are performed to illustrate the above results.
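Under the nonnegativity constraint the weighted \(\ell_1\) penalty \(\lambda\sum_j w_j\beta_j\) is linear on the feasible set, so the nonnegative adaptive lasso objective \(\frac{1}{2}\|y-X\beta\|^2+\lambda\sum_j w_j\beta_j\), \(\beta\ge 0\), is a nonnegative quadratic program. The following is a minimal sketch of this reduction, assuming adaptive weights \(w_j = 1/|\hat\beta_{\mathrm{init},j}|^\gamma\) and the multiplicative update rule for nonnegative quadratic programming of Sha and Lin (2007); the function name, the default \(\gamma = 1\), and the fixed iteration count are illustrative choices, not taken from the paper.

```python
import numpy as np

def nn_adaptive_lasso(X, y, lam, beta_init, gamma=1.0, n_iter=500, eps=1e-10):
    """Nonnegative adaptive lasso via multiplicative updates (sketch).

    Solves  min_{beta >= 0}  0.5*||y - X beta||^2 + lam * sum_j w_j * beta_j
    with adaptive weights w_j = 1 / |beta_init_j|^gamma.
    """
    w = 1.0 / (np.abs(beta_init) ** gamma + eps)   # adaptive weights from initial estimator
    A = X.T @ X                                    # quadratic term of the NQP
    b = -X.T @ y + lam * w                         # linear term: penalty is linear on beta >= 0
    Ap = np.maximum(A, 0.0)                        # elementwise positive part of A
    Am = np.maximum(-A, 0.0)                       # elementwise negative part of A
    v = np.ones(X.shape[1])                        # strictly positive starting point
    for _ in range(n_iter):
        a = Ap @ v
        c = Am @ v
        # Sha-Lin multiplicative update for min 0.5*v'Av + b'v, v >= 0
        v = v * (-b + np.sqrt(b ** 2 + 4.0 * a * c)) / (2.0 * a + eps)
    return v
```

Since \(\sqrt{b_i^2+4a_ic_i}\ge|b_i|\), each update keeps the iterate nonnegative, so the constraint is enforced automatically without projection.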
The empirical part studies the constrained index tracking problem in a stock market without short sales. A two-stage method, nonnegative adaptive lasso \(+\) nonnegative LS, is applied to this financial modeling problem. The tracking results indicate that both the nonnegative adaptive lasso and the two-stage method achieve small tracking errors and are successful in asset selection.
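A hedged sketch of such a two-stage procedure, reusing the nn_adaptive_lasso sketch above: the nonnegative adaptive lasso selects the tracking assets, and a nonnegative least squares refit on the selected assets reduces the shrinkage bias of the portfolio weights. The helper name and the support threshold thresh are hypothetical; scipy.optimize.nnls handles the second stage.

```python
import numpy as np
from scipy.optimize import nnls

def two_stage_tracking(X, y, lam, beta_init, thresh=1e-6):
    """Two-stage sketch: nonnegative adaptive lasso for asset selection,
    then a nonnegative LS refit on the selected assets.

    X: asset return matrix (n days x p assets); y: index returns.
    """
    beta1 = nn_adaptive_lasso(X, y, lam, beta_init)   # stage 1: selection (sketch above)
    support = np.flatnonzero(beta1 > thresh)          # assets kept in the portfolio
    beta = np.zeros(X.shape[1])
    beta[support], _ = nnls(X[:, support], y)         # stage 2: NNLS refit on the support
    return beta, support
```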

MSC:

62J07 Ridge regression; shrinkage estimators (Lasso)
62G08 Nonparametric regression and quantile regression
62G20 Asymptotic properties of nonparametric inference
65C60 Computational problems in statistics (MSC2010)
