zbMATH — the first resource for mathematics

Online sufficient dimension reduction through sliced inverse regression. (English) Zbl 07255041
Summary: Sliced inverse regression is an effective paradigm that achieves dimension reduction by replacing high-dimensional covariates with a small number of linear combinations. It does not impose parametric assumptions on the dependence structure. More importantly, such a reduction of dimension is sufficient in that it causes no loss of information. In this paper, we adapt stationary sliced inverse regression to cope with rapidly changing environments. We propose to implement sliced inverse regression in an online fashion. This online learner consists of two steps. In the first step we construct an online estimate of the kernel matrix; in the second step we propose two online algorithms, one motivated by the perturbation method and the other by gradient descent optimization, to perform online singular value decomposition. The theoretical properties of this online learner are established. We demonstrate the numerical performance of this online learner through simulations and real-world applications. All numerical studies confirm that this online learner performs as well as the batch learner.
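For concreteness, batch sliced inverse regression estimates a kernel matrix from the slice means of the standardized covariates and recovers the reduction directions from its top eigenvectors; the gradient-descent-flavored online variant replaces the batch eigendecomposition with incremental Oja-style updates. The sketch below is purely illustrative and is not the authors' implementation: the function names, the slice count `n_slices`, and the step size `eta` are assumptions chosen for the example.

```python
import numpy as np

def sir_kernel(X, y, n_slices=5):
    """Batch SIR: kernel matrix from slice means of standardized X."""
    n, p = X.shape
    mu = X.mean(axis=0)
    Sigma = np.cov(X, rowvar=False)
    # Standardize: Z = (X - mu) @ Sigma^{-1/2}
    evals, evecs = np.linalg.eigh(Sigma)
    inv_sqrt = evecs @ np.diag(evals ** -0.5) @ evecs.T
    Z = (X - mu) @ inv_sqrt
    # Slice on the ordered response; accumulate weighted outer products
    order = np.argsort(y)
    M = np.zeros((p, p))
    for idx in np.array_split(order, n_slices):
        m = Z[idx].mean(axis=0)
        M += (len(idx) / n) * np.outer(m, m)
    return M, inv_sqrt

def sir_directions(X, y, d=1, n_slices=5):
    """Top-d SIR directions, mapped back to the original covariate scale."""
    M, inv_sqrt = sir_kernel(X, y, n_slices)
    _, evecs = np.linalg.eigh(M)          # eigenvalues in ascending order
    B = inv_sqrt @ evecs[:, -d:]          # keep the d leading eigenvectors
    return B / np.linalg.norm(B, axis=0)

def oja_update(w, z, eta=0.01):
    """One Oja (stochastic gradient) step toward the top eigenvector
    of E[z z^T], the kind of incremental eigen-update used in place of
    a batch decomposition when data arrive as a stream."""
    w = w + eta * z * (z @ w)
    return w / np.linalg.norm(w)
```

In a streaming setting, the running mean, covariance, and slice means would likewise be updated recursively as each observation arrives, so that neither step requires revisiting past data.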
68T05 Learning and adaptive systems in artificial intelligence
Full Text: Link
[1] Raman Arora, Andrew Cotter, Karen Livescu, and Nathan Srebro. Stochastic optimization for PCA and PLS. In 50th Allerton Conference on Communication, Control, and Computing, pages 861-868. IEEE, 2012.
[2] Raman Arora, Andy Cotter, and Nati Srebro. Stochastic optimization of PCA with capped MSG. In Advances in Neural Information Processing Systems, pages 1815-1823, 2013.
[3] Andreas Artemiou and Bing Li. On principal components and regression: a statistical explanation of a natural phenomenon. Statistica Sinica, pages 1557-1565, 2009. · Zbl 1191.62106
[4] Akshay Balsubramani, Sanjoy Dasgupta, and Yoav Freund. The fast convergence of incremental PCA. In Advances in Neural Information Processing Systems, pages 3174-3182, 2013.
[5] Bernard Bercu, Thi Mong Ngoc Nguyen, and Jérôme Saracco. On the asymptotic behaviour of the recursive Nadaraya-Watson estimator associated with the recursive sliced inverse regression method. Statistics, 49(3):660-679, 2015. · Zbl 1367.62111
[6] Léon Bottou. Online Learning and Stochastic Approximations. Cambridge University Press, Cambridge, UK, 1998.
[7] Olivier Bousquet and Léon Bottou. The tradeoffs of large scale learning. In Advances in Neural Information Processing Systems, pages 161-168, 2008.
[8] Christos Boutsidis, Dan Garber, Zohar Karnin, and Edo Liberty. Online principal components analysis. In Proceedings of the Twenty-sixth Annual ACM-SIAM Symposium on Discrete Algorithms, pages 887-901. SIAM, 2015. · Zbl 1373.62291
[9] Benoit Champagne. Adaptive eigendecomposition of data covariance matrices based on first-order perturbations. IEEE Transactions on Signal Processing, 42(10):2758-2770, 1994.
[10] Marie Chavent, Stéphane Girard, Vanessa Kuentz-Simonet, Benoit Liquet, Thi Mong Ngoc Nguyen, and Jérôme Saracco. A sliced inverse regression approach for data stream. Computational Statistics, 29(5):1129-1152, 2014. · Zbl 1306.65040
[11] R Dennis Cook. Regression Graphics: Ideas for Studying Regressions through Graphics, volume 482. John Wiley & Sons, 2009. · Zbl 0903.62001
[12] R Dennis Cook and Sanford Weisberg. Discussion of "Sliced inverse regression for dimension reduction" by K.-C. Li. Journal of the American Statistical Association, 86(414):328-332, 1991. · Zbl 1353.62037
[13] Donald L Fisk. Quasi-martingales. Transactions of the American Mathematical Society, 120(3):369-389, 1965. · Zbl 0133.40303
[14] Anant Hegde, Jose C Principe, Deniz Erdogmus, Umut Ozertem, Yadunandana N Rao, and Hemanth Peddaneni. Perturbation-based eigenvector updates for on-line principal components analysis and canonical correlation analysis. Journal of VLSI Signal Processing Systems for Signal, Image and Video Technology, 45(1-2):85-95, 2006.
[15] Tosio Kato. Perturbation Theory for Linear Operators, volume 132. Springer Science & Business Media, 2013. · Zbl 0148.12601
[16] Harold Joseph Kushner and Dean S Clark. Stochastic Approximation Methods for Constrained and Unconstrained Systems, volume 26. Springer Science & Business Media, 2012. · Zbl 0381.60004
[17] Yunwen Lei, Lei Shi, and Zheng-Chu Guo. Convergence of unregularized online learning algorithms. The Journal of Machine Learning Research, 18(1):6269-6301, 2017. · Zbl 06982927
[18] Bing Li. Sufficient Dimension Reduction: Methods and Applications with R. CRC Press, 2018. · Zbl 1408.62011
[19] Bing Li and Shaoli Wang. On directional regression for dimension reduction. Journal of the American Statistical Association, 102(479):997-1008, 2007. · Zbl 05564427
[20] Ker-Chau Li. Sliced inverse regression for dimension reduction. Journal of the American Statistical Association, 86(414):316-327, 1991. · Zbl 0742.62044
[21] Weihua Li, H Henry Yue, Sergio Valle-Cervantes, and S Joe Qin. Recursive PCA for adaptive process monitoring. Journal of Process Control, 10(5):471-486, 2000.
[22] Yanyuan Ma and Liping Zhu. A semiparametric approach to dimension reduction. Journal of the American Statistical Association, 107(497):168-179, 2012. · Zbl 1261.62037
[23] Yanyuan Ma and Liping Zhu. A review on dimension reduction. International Statistical Review, 81(1):134-150, 2013. · Zbl 1416.62220
[24] Julien Mairal, Francis Bach, Jean Ponce, and Guillermo Sapiro. Online learning for matrix factorization and sparse coding. Journal of Machine Learning Research, 11(Jan):19-60, 2010. · Zbl 1242.62087
[25] David Meyer, Evgenia Dimitriadou, Kurt Hornik, Andreas Weingessel, and Friedrich Leisch. e1071: misc functions of the Department of Statistics, Probability Theory Group (formerly: E1071), TU Wien. R package version 1.6-8, 2017.
[26] Ioannis Mitliagkas, Constantine Caramanis, and Prateek Jain. Memory limited, streaming PCA. In Advances in Neural Information Processing Systems, pages 2886-2894, 2013.
[27] Jiazhong Nie, Wojciech Kotlowski, and Manfred K. Warmuth. Online PCA with optimal regret. Journal of Machine Learning Research, 17(173):1-49, 2016. · Zbl 1392.68360
[28] Erkki Oja and Juha Karhunen. On stochastic approximation of the eigenvectors and eigenvalues of the expectation of a random matrix. Journal of Mathematical Analysis and Applications, 106(1):69-84, 1985. · Zbl 0583.62077
[29] Ohad Shamir. Convergence of stochastic gradient descent for PCA. In International Conference on Machine Learning, pages 257-265, 2016.
[30] Robin Sibson. Studies in the robustness of multidimensional scaling: Perturbational analysis of classical scaling. Journal of the Royal Statistical Society, Series B (Methodological), pages 217-229, 1979. · Zbl 0413.62046
[31] Pierre Tarres and Yuan Yao. Online learning as stochastic approximation of regularization paths: Optimality and almost-sure convergence. IEEE Transactions on Information Theory, 60(9):5716-5735, 2014. · Zbl 1360.62192
[32] Manfred K Warmuth and Dima Kuzmin. Randomized online PCA algorithms with regret bounds that are logarithmic in the dimension. Journal of Machine Learning Research, 9(Oct):2287-2320, 2008. · Zbl 1225.68273
[33] Yingcun Xia, Howell Tong, Wai Keung Li, and Li-Xing Zhu. An adaptive estimation of dimension reduction space. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 64(3):363-410, 2002. · Zbl 1091.62028
[34] Lin Xiao. Dual averaging methods for regularized stochastic learning and online optimization. Journal of Machine Learning Research, 11(Oct):2543-2596, 2010. · Zbl 1242.62011
[35] Wenzhuo Yang and Huan Xu. Streaming sparse principal component analysis. In International Conference on Machine Learning, pages 494-503, 2015.
[36] Zhishen Ye and Robert E Weiss. Using the bootstrap to select one of a new class of dimension reduction methods. Journal of the American Statistical Association, 98(464):968-979, 2003. · Zbl 1045.62034
[37] Li-Ping Zhu, Li-Xing Zhu, and Zheng-Hui Feng. Dimension reduction in regressions through cumulative slicing estimation. Journal of the American Statistical Association, 105(492):1455-1466, 2010. · Zbl 1388.62121
This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible without claiming the completeness or perfect precision of the matching.