zbMATH — the first resource for mathematics

Pass-efficient randomized algorithms for low-rank matrix approximation using any number of views. (English) Zbl 1437.65029

65F55 Numerical methods for low-rank matrix approximation; matrix compression
65F15 Numerical computation of eigenvalues and eigenvectors of matrices
68W20 Randomized algorithms
Full Text: DOI arXiv
[1] H. Avron, P. Maymounkov, and S. Toledo, Blendenpik: Supercharging LAPACK’s least-squares solver, SIAM J. Sci. Comput., 32 (2010), pp. 1217–1236, https://doi.org/10.1137/090767911. · Zbl 1213.65069
[2] E. K. Bjarkason, O. J. Maclaren, J. P. O’Sullivan, and M. J. O’Sullivan, Randomized truncated SVD Levenberg–Marquardt approach to geothermal natural state and history matching, Water Resources Res., 54 (2018), pp. 2376–2404.
[3] C. Boutsidis, D. P. Woodruff, and P. Zhong, Optimal principal component analysis in distributed and streaming models, in Proc. 48th Annual ACM Symposium on Theory of Computing, Cambridge, MA, 2016, pp. 236–249. · Zbl 1381.62140
[4] H. M. Bücker, G. Corliss, P. Hovland, U. Naumann, and B. Norris, eds., Automatic Differentiation: Applications, Theory, and Implementations, Lecture Notes in Comput. Sci. Eng. 50, Springer-Verlag, 2006. · Zbl 1084.65002
[5] T. Bui-Thanh, C. Burstedde, O. Ghattas, J. Martin, G. Stadler, and L. C. Wilcox, Extreme-scale UQ for Bayesian inverse problems governed by PDEs, in Proc. International Conference for High Performance Computing, Networking, Storage and Analysis, Salt Lake City, UT, 2012, pp. 1–11.
[6] J. Carrera, A. Alcolea, A. Medina, J. Hidalgo, and L. J. Slooten, Inverse problem in hydrogeology, Hydrogeology J., 13 (2005), pp. 206–222.
[7] K. L. Clarkson and D. P. Woodruff, Low-rank approximation and regression in input sparsity time, J. ACM, 63 (2017), 54. · Zbl 1426.65057
[8] M. B. Cohen, S. Elder, C. Musco, C. Musco, and M. Persu, Dimensionality reduction for k-means clustering and low rank approximation, in Proc. 47th Annual ACM Symposium on Theory of Computing, Portland, OR, 2015, pp. 163–172. · Zbl 1321.68398
[9] T. Cui, K. J. Law, and Y. M. Marzouk, Dimension-independent likelihood-informed MCMC, J. Comput. Phys., 304 (2016), pp. 109–137. · Zbl 1349.65009
[10] T. Cui, J. Martin, Y. M. Marzouk, A. Solonen, and A. Spantini, Likelihood-informed dimension reduction for nonlinear inverse problems, Inverse Problems, 30 (2014), 114015. · Zbl 1310.62030
[11] P. Drineas, I. C. F. Ipsen, E. M. Kontopoulou, and M. Magdon-Ismail, Structural Convergence Results for Approximation of Dominant Subspaces from Block Krylov Spaces, preprint, https://arxiv.org/abs/1609.00671v2, 2017.
[12] P. Drineas, M. W. Mahoney, S. Muthukrishnan, and T. Sarlós, Faster least squares approximation, Numer. Math., 117 (2010), pp. 219–249. · Zbl 1218.65037
[13] N. B. Erichson, A. Mendible, S. Wihlborn, and J. N. Kutz, Randomized nonnegative matrix factorization, Pattern Recognition Lett., 104 (2018), pp. 1–7.
[14] N. B. Erichson, S. Voronin, S. L. Brunton, and J. N. Kutz, Randomized matrix decompositions using R, J. Statist. Softw., 89 (2019), pp. 1–48.
[15] A. Gittens and M. W. Mahoney, Revisiting the Nyström method for improved large-scale machine learning, J. Mach. Learn. Res., 17 (2016), pp. 1–65. · Zbl 1367.68223
[16] R. M. Gower and P. Richtárik, Randomized iterative methods for linear systems, SIAM J. Matrix Anal. Appl., 36 (2015), pp. 1660–1690, https://doi.org/10.1137/15M1025487. · Zbl 1342.65110
[17] A. Griewank and A. Walther, Evaluating Derivatives: Principles and Techniques of Algorithmic Differentiation, 2nd ed., SIAM, 2008, https://doi.org/10.1137/1.9780898717761. · Zbl 1159.65026
[18] M. Gu, Subspace iteration randomization and singular value problems, SIAM J. Sci. Comput., 37 (2015), pp. A1139–A1173, https://doi.org/10.1137/130938700.
[19] N. Halko, P.-G. Martinsson, Y. Shkolnisky, and M. Tygert, An algorithm for the principal component analysis of large data sets, SIAM J. Sci. Comput., 33 (2011), pp. 2580–2594, https://doi.org/10.1137/100804139. · Zbl 1232.65058
[20] N. Halko, P.-G. Martinsson, and J. A. Tropp, Finding structure with randomness: Probabilistic algorithms for constructing approximate matrix decompositions, SIAM Rev., 53 (2011), pp. 217–288, https://doi.org/10.1137/090771806. · Zbl 1269.65043
[21] M. Hinze, R. Pinnau, M. Ulbrich, and S. Ulbrich, Optimization with PDE Constraints, Math. Model. Theory Appl. 23, Springer, 2009.
[22] H. Li, G. C. Linderman, A. Szlam, K. P. Stanton, Y. Kluger, and M. Tygert, Algorithm 971: An implementation of a randomized algorithm for principal component analysis, ACM Trans. Math. Softw., 43 (2017), 28. · Zbl 1391.65085
[23] E. Liberty, F. Woolfe, P.-G. Martinsson, V. Rokhlin, and M. Tygert, Randomized algorithms for the low-rank approximation of matrices, Proc. Natl. Acad. Sci. USA, 104 (2007), pp. 20167–20172. · Zbl 1215.65080
[24] M. W. Mahoney, Randomized algorithms for matrices and data, Found. Trends Mach. Learn., 3 (2011), pp. 123–224.
[25] P.-G. Martinsson, Randomized Methods for Matrix Computations and Analysis of High Dimensional Data, preprint, https://arxiv.org/abs/1607.01649v1, 2016.
[26] P.-G. Martinsson, G. Quintana-Ortí, and N. Heavner, randUTV: A Blocked Randomized Algorithm for Computing a Rank-Revealing UTV Factorization, preprint, https://arxiv.org/abs/1703.00998, 2017.
[27] P.-G. Martinsson, V. Rokhlin, and M. Tygert, A Randomized Algorithm for the Approximation of Matrices, Tech. Report YALEU/DCS/RR-1361, Computer Science Department, Yale University, New Haven, CT, 2006.
[28] P.-G. Martinsson, V. Rokhlin, and M. Tygert, A randomized algorithm for the decomposition of matrices, Appl. Comput. Harmon. Anal., 30 (2011), pp. 47–68. · Zbl 1210.65095
[29] P.-G. Martinsson, A. Szlam, and M. Tygert, Normalized power iterations for the computation of SVD, in Proceedings of the Neural and Information Processing Systems (NIPS) Workshop on Low-Rank Methods for Large-Scale Machine Learning, Vancouver, Canada, 2011.
[30] P.-G. Martinsson and S. Voronin, A randomized blocked algorithm for efficiently computing rank-revealing factorizations of matrices, SIAM J. Sci. Comput., 38 (2016), pp. S485–S507, https://doi.org/10.1137/15M1026080.
[31] X. Meng, M. A. Saunders, and M. W. Mahoney, LSRN: A parallel iterative solver for strongly over- or underdetermined systems, SIAM J. Sci. Comput., 36 (2014), pp. C95–C118, https://doi.org/10.1137/120866580. · Zbl 1298.65053
[32] C. Musco and C. Musco, Randomized block Krylov methods for stronger and faster approximate singular value decomposition, in Advances in Neural Information Processing Systems 28, C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett, eds., Curran Associates, Inc., 2015, pp. 1396–1404.
[33] D. S. Oliver, A. C. Reynolds, and N. Liu, Inverse Theory for Petroleum Reservoir Characterization and History Matching, Cambridge University Press, Cambridge, UK, 2008.
[34] M. J. O’Sullivan, K. Pruess, and M. J. Lippmann, State of the art of geothermal reservoir simulation, Geothermics, 30 (2001), pp. 395–429.
[35] N. Petra, J. Martin, G. Stadler, and O. Ghattas, A computational framework for infinite-dimensional Bayesian inverse problems, part II: Stochastic Newton MCMC with application to ice sheet flow inverse problems, SIAM J. Sci. Comput., 36 (2014), pp. A1525–A1555, https://doi.org/10.1137/130934805. · Zbl 1303.35110
[36] J. R. P. Rodrigues, Calculating derivatives for automatic history matching, Comput. Geosci., 10 (2006), pp. 119–136.
[37] V. Rokhlin, A. Szlam, and M. Tygert, A randomized algorithm for principal component analysis, SIAM J. Matrix Anal. Appl., 31 (2009), pp. 1100–1124, https://doi.org/10.1137/080736417. · Zbl 1198.65035
[38] V. Rokhlin and M. Tygert, A fast randomized algorithm for overdetermined linear least-squares regression, Proc. Natl. Acad. Sci. USA, 105 (2008), pp. 13212–13217. · Zbl 05851018
[39] A. K. Saibaba, Analysis of Randomized Subspace Iteration: Canonical Angles and Unitarily Invariant Norms, preprint, https://arxiv.org/abs/1804.02614, 2018.
[40] T. Sarlós, Improved approximation algorithms for large matrices via random projections, in Proc. 47th Annual IEEE Symposium on Foundations of Computer Science, Berkeley, CA, 2006, pp. 143–152.
[41] M. G. Shirangi, History matching production data and uncertainty assessment with an efficient TSVD parameterization algorithm, J. Petrol. Sci. Eng., 113 (2014), pp. 54–71.
[42] M. G. Shirangi and A. A. Emerick, An improved TSVD-based Levenberg–Marquardt algorithm for history matching and comparison with Gauss–Newton, J. Petrol. Sci. Eng., 143 (2016), pp. 258–271.
[43] R. Tavakoli and A. C. Reynolds, History matching with parameterization based on the singular value decomposition of a dimensionless sensitivity matrix, SPE J., 15 (2010), pp. 495–508.
[44] R. Tavakoli and A. C. Reynolds, Monte Carlo simulation of permeability fields and reservoir performance predictions with SVD parameterization in RML compared with EnKF, Comput. Geosci., 15 (2011), pp. 99–116. · Zbl 1209.86014
[45] J. A. Tropp, A. Yurtsever, M. Udell, and V. Cevher, Fixed-rank approximation of a positive-semidefinite matrix from streaming data, in Proc. 31st International Conference on Neural Information Processing Systems, Long Beach, CA, 2017, pp. 1225–1234.
[46] J. A. Tropp, A. Yurtsever, M. Udell, and V. Cevher, Practical sketching algorithms for low-rank matrix approximation, SIAM J. Matrix Anal. Appl., 38 (2017), pp. 1454–1485, https://doi.org/10.1137/17M1111590. · Zbl 1379.65026
[47] J. A. Tropp, A. Yurtsever, M. Udell, and V. Cevher, More Practical Sketching Algorithms for Low-Rank Matrix Approximation, ACM Report 2018-01, Caltech, Pasadena, CA, 2018. · Zbl 1379.65026
[48] J. Upadhyay, Fast and Space-Optimal Low-Rank Factorization in the Streaming Model with Application in Differential Privacy, preprint, https://arxiv.org/abs/1604.01429v1, 2016.
[49] C. R. Vogel and J. G. Wade, A modified Levenberg–Marquardt algorithm for large-scale inverse problems, in Computation and Control III, Birkhäuser Boston, Boston, MA, 1993, pp. 367–378. · Zbl 0820.93002
[50] C. R. Vogel and J. G. Wade, Iterative SVD-based methods for ill-posed problems, SIAM J. Sci. Comput., 15 (1994), pp. 736–754. · Zbl 0802.65070
[51] S. Voronin and P.-G. Martinsson, RSVDPACK: An Implementation of Randomized Algorithms for Computing the Singular Value, Interpolative, and CUR Decompositions of Matrices on Multi-core and GPU Architectures, preprint, https://arxiv.org/abs/1502.05366v3, 2016.
[52] S. Voronin, D. Mikesell, and G. Nolet, Compression approaches for the regularized solutions of linear systems from large-scale inverse problems, Int. J. Geomath., 6 (2015), pp. 251–294. · Zbl 1338.65110
[53] S. Wang, L. Luo, and Z. Zhang, SPSD matrix approximation via column selection: Theories, algorithms, and extensions, J. Mach. Learn. Res., 17 (2016), pp. 1–49. · Zbl 1360.65130
[54] D. P. Woodruff, Sketching as a tool for numerical linear algebra, Found. Trends Theoret. Comput. Sci., 10 (2014), pp. 1–157.
[55] F. Woolfe, E. Liberty, V. Rokhlin, and M. Tygert, A fast randomized algorithm for the approximation of matrices, Appl. Comput. Harmon. Anal., 25 (2008), pp. 335–366. · Zbl 1155.65035
[56] H. Xiang and J. Zou, Regularization with randomized SVD for large-scale discrete inverse problems, Inverse Problems, 29 (2013), 085008.
[57] W. Yu, Y. Gu, J. Li, S. Liu, and Y. Li, Single-pass PCA of large high-dimensional data, in Proc. 26th International Joint Conference on Artificial Intelligence (IJCAI-17), Melbourne, Australia, 2017, pp. 3350–3356.
[58] W. Yu, Y. Gu, and Y. Li, Efficient randomized algorithms for the fixed-precision low-rank matrix approximation, SIAM J. Matrix Anal. Appl., 39 (2018), pp. 1339–1359. · Zbl 1405.65063
[59] A. Yurtsever, M. Udell, J. A. Tropp, and V. Cevher, Sketchy decisions: Convex low-rank matrix optimization with optimal storage, in Proc. 20th Intl. Conf. on Artificial Intelligence and Statistics, 2017. · Zbl 1379.65026
This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible without claiming the completeness or perfect precision of the matching.