
A practical randomized CP tensor decomposition. (English) Zbl 1444.65016

Authors’ abstract: The CANDECOMP/PARAFAC (CP) decomposition is a leading method for the analysis of multiway data. The standard alternating least squares algorithm for the CP decomposition (CP-ALS) involves a series of highly overdetermined linear least squares problems. We extend randomized least squares methods to tensors and show the workload of CP-ALS can be drastically reduced without a sacrifice in quality. We introduce techniques for efficiently preprocessing, sampling, and computing randomized least squares on a dense tensor of arbitrary order, as well as an efficient sampling-based technique for checking the stopping condition. We also show more generally that the Khatri-Rao product (used within the CP-ALS iteration) produces conditions favorable for direct sampling. In numerical results, we see improvements in speed, reductions in memory requirements, and robustness with respect to initialization.

MSC:

65F99 Numerical linear algebra
15A69 Multilinear algebra, tensor calculus
68W20 Randomized algorithms

References:

[1] E. Acar, C. A. Bingol, H. Bingol, R. Bro, and B. Yener, Multiway analysis of epilepsy tensors, Bioinformatics, 23 (2007), pp. i10–i18.
[2] N. Ailon and B. Chazelle, Approximate nearest neighbors and the fast Johnson-Lindenstrauss transform, in Proceedings of the Thirty-Eighth Annual ACM Symposium on Theory of Computing, STOC '06, 2006, pp. 557–563. · Zbl 1301.68232
[3] H. Avron, P. Maymounkov, and S. Toledo, Blendenpik: Supercharging LAPACK's least-squares solver, SIAM J. Sci. Comput., 32 (2010), pp. 1217–1236. · Zbl 1213.65069
[4] B. W. Bader and T. G. Kolda, Algorithm 862: MATLAB tensor classes for fast algorithm prototyping, ACM Trans. Math. Software, 32 (2006), pp. 635–653. · Zbl 1230.65054
[5] B. W. Bader and T. G. Kolda, Efficient MATLAB computations with sparse and factored tensors, SIAM J. Sci. Comput., 30 (2007), pp. 205–231. · Zbl 1159.65323
[6] B. W. Bader, T. G. Kolda, et al., MATLAB Tensor Toolbox (Version 2.6), available online February 6, 2015.
[7] A. Beutel, P. P. Talukdar, A. Kumar, C. Faloutsos, E. E. Papalexakis, and E. P. Xing, FlexiFaCT: Scalable flexible factorization of coupled tensors on Hadoop, in Proceedings of the 2014 SIAM International Conference on Data Mining, SDM '14, 2014, pp. 109–117.
[8] R. Bro, PARAFAC. Tutorial and applications, Chemom. Intell. Lab. Syst., 38 (1997), pp. 149–171.
[9] E. Candès and B. Recht, Exact matrix completion via convex optimization, Commun. ACM, 55 (2012), pp. 111–119.
[10] J. Carroll and J.-J. Chang, Analysis of individual differences in multidimensional scaling via an n-way generalization of "Eckart-Young" decomposition, Psychometrika, 35 (1970), pp. 283–319. · Zbl 0202.19101
[11] D. Cheng, R. Peng, I. Perros, and Y. Liu, SPALS: Fast alternating least squares via implicit leverage scores sampling, in Proceedings of the 29th Annual Conference on Neural Information Processing Systems, NIPS '16, Curran Associates, Inc., Red Hook, NY, 2016.
[12] M. Chevreuil, R. Lebrun, A. Nouy, and P. Rai, A least-squares method for sparse low rank approximation of multivariate functions, SIAM/ASA J. Uncertain. Quantif., 3 (2015), pp. 897–921. · Zbl 1327.65029
[13] J. H. Choi and S. Vishwanathan, DFacTo: Distributed factorization of tensors, in Proceedings of the 27th Annual Conference on Neural Information Processing Systems, NIPS '14, Z. Ghahramani, M. Welling, C. Cortes, N. Lawrence, and K. Weinberger, eds., Curran Associates, Inc., Red Hook, NY, 2014, pp. 1296–1304.
[14] F. Cong, Q.-H. Lin, L.-D. Kuang, X.-F. Gong, P. Astikainen, and T. Ristaniemi, Tensor decomposition of EEG signals: A brief review, J. Neurosci. Methods, 248 (2015), pp. 59–69.
[15] I. Davidson, S. Gilpin, O. Carmichael, and P. Walker, Network discovery via constrained tensor analysis of fMRI data, in Proceedings of the 19th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '13, 2013, pp. 194–202.
[16] D. L. Donoho and X. Huo, Uncertainty principles and ideal atomic decomposition, IEEE Trans. Inform. Theory, 47 (2001), pp. 2845–2862. · Zbl 1019.94503
[17] P. Drineas, M. W. Mahoney, S. Muthukrishnan, and T. Sarlós, Faster least squares approximation, Numer. Math., 117 (2011), pp. 219–249. · Zbl 1218.65037
[18] A. A. Gorodetsky, S. Karaman, and Y. M. Marzouk, Function-Train: A Continuous Analogue of the Tensor-Train Decomposition, preprint, 2015.
[19] R. A. Harshman, Foundations of the PARAFAC procedure: Models and conditions for an "explanatory" multi-modal factor analysis, UCLA Working Papers in Phonetics, 16 (1970).
[20] F. L. Hitchcock, The expression of a tensor or a polyadic as a sum of products, J. Math. Phys., 6 (1927), pp. 164–189. · JFM 53.0095.01
[21] R. Jaffé, K. M. Cawley, and Y. Yamashita, Applications of excitation emission matrix fluorescence with parallel factor analysis (EEM-PARAFAC) in assessing environmental dynamics of natural dissolved organic matter (DOM) in aquatic environments: A review, in Advances in the Physicochemical Characterization of Dissolved Organic Matter: Impact on Natural and Engineered Systems, ACS Symposium Series 1160, American Chemical Society (ACS), Washington, D.C., 2014, pp. 27–73.
[22] W. Johnson and J. Lindenstrauss, Extensions of Lipschitz mappings into a Hilbert space, in Conference in Modern Analysis and Probability (New Haven, CT, 1982), Contemp. Math. 26, American Mathematical Society, Providence, RI, 1984, pp. 189–206. · Zbl 0539.46017
[23] T. G. Kolda and B. W. Bader, Tensor decompositions and applications, SIAM Rev., 51 (2009), pp. 455–500. · Zbl 1173.65029
[24] J. Li, C. Battaglino, I. Perros, J. Sun, and R. Vuduc, An input-adaptive and in-place approach to dense tensor-times-matrix multiply, in Proceedings of the ACM/IEEE Conference on Supercomputing, SC '15, Austin, TX, 2015.
[25] K. Maruhashi, F. Guo, and C. Faloutsos, MultiAspectForensics: Pattern mining on large-scale heterogeneous networks with tensor analysis, in Proceedings of the 2011 International Conference on Advances in Social Networks Analysis and Mining, ASONAM '11, 2011, pp. 203–210.
[26] R. Motwani and P. Raghavan, Randomized Algorithms, Cambridge University Press, New York, 1995. · Zbl 0849.68039
[27] K. R. Murphy, C. A. Stedmon, D. Graeber, and R. Bro, Fluorescence spectroscopy and multi-way techniques. PARAFAC, Anal. Methods, 5 (2013), pp. 6557–6566.
[28] S. Nene, S. Nayar, and H. Murase, Columbia Object Image Library (COIL-100), Tech. Report CUCS-006-96, Columbia University, New York, NY, 1996.
[29] I. V. Oseledets, Tensor-train decomposition, SIAM J. Sci. Comput., 33 (2011), pp. 2295–2317. · Zbl 1232.15018
[30] E. E. Papalexakis, C. Faloutsos, and N. D. Sidiropoulos, ParCube: Sparse parallelizable tensor decompositions, in Machine Learning and Knowledge Discovery in Databases (European Conference, ECML PKDD 2012), Lecture Notes in Comput. Sci. 7523, Springer, Cham, 2012, pp. 521–536.
[31] A.-H. Phan, P. Tichavský, and A. Cichocki, CANDECOMP/PARAFAC decomposition of high-order tensors through tensor reshaping, IEEE Trans. Signal Process., 61 (2013), pp. 4847–4860.
[32] A.-H. Phan, P. Tichavský, and A. Cichocki, Fast alternating LS algorithms for high order CANDECOMP/PARAFAC tensor factorizations, IEEE Trans. Signal Process., 61 (2013), pp. 4834–4846.
[33] M. Rajih, P. Comon, and R. A. Harshman, Enhanced line search: A novel method to accelerate PARAFAC, SIAM J. Matrix Anal. Appl., 30 (2008), pp. 1128–1147. · Zbl 1168.65313
[34] M. J. Reynolds, A. Doostan, and G. Beylkin, Randomized alternating least squares for canonical tensor decompositions: Application to a PDE with random data, SIAM J. Sci. Comput., 38 (2016), pp. A2634–A2664. · Zbl 1348.65012
[35] V. Rokhlin and M. Tygert, A fast randomized algorithm for overdetermined linear least-squares regression, Proc. Natl. Acad. Sci. USA, 105 (2008), pp. 13212–13217. · Zbl 05851018
[36] N. D. Sidiropoulos, L. De Lathauwer, X. Fu, K. Huang, E. E. Papalexakis, and C. Faloutsos, Tensor Decomposition for Signal Processing and Machine Learning, preprint, 2016.
[37] S. Smith, N. Ravindran, N. Sidiropoulos, and G. Karypis, SPLATT: Efficient and parallel sparse tensor-matrix multiplication, in Proceedings of the 29th IEEE International Parallel & Distributed Processing Symposium, IPDPS '15, 2015.
[38] Z. Song, D. P. Woodruff, and H. Zhang, Sublinear time orthogonal tensor decomposition, in Proceedings of the 29th Annual Conference on Neural Information Processing Systems, NIPS '16, Curran Associates, Inc., Red Hook, NY, 2016.
[39] G. Tomasi and R. Bro, A comparison of algorithms for fitting the PARAFAC model, Comput. Stat. Data Anal., 50 (2006), pp. 1700–1734. · Zbl 1445.62136
[40] L. R. Tucker, Some mathematical notes on three-mode factor analysis, Psychometrika, 31 (1966), pp. 279–311.
[41] A. Vergara, J. Fonollosa, J. Mahiques, M. Trincavelli, N. Rulkov, and R. Huerta, On the performance of gas sensor arrays in open sampling systems using inhibitory support vector machines, Sensors Actuat. B Chem., 185 (2013), pp. 462–477.
[42] N. Vervliet and L. De Lathauwer, A randomized block sampling approach to canonical polyadic decomposition of large-scale tensors, IEEE J. Sel. Top. Signal Process., 10 (2016), pp. 284–295.
[43] Y. Wang, H.-Y. Tung, A. J. Smola, and A. Anandkumar, Fast and guaranteed tensor decomposition via sketching, in Proceedings of the 28th Annual Conference on Neural Information Processing Systems, NIPS '15, Curran Associates, Inc., Red Hook, NY, 2015, pp. 991–999.
[44] D. P. Woodruff, Sketching as a tool for numerical linear algebra, Found. Trends Theor. Comput. Sci., 10 (2014), pp. 1–157.
[45] G. Zhou, A. Cichocki, and S. Xie, Decomposition of Big Tensors with Low Multilinear Rank, preprint, 2014.