Computing dense tensor decompositions with optimal dimension trees. (English) Zbl 1421.68259

Summary: Dense tensor decompositions have been widely used in many signal processing problems, including analyzing speech signals, identifying the localization of signal sources, and many other communication applications. Computing these decompositions poses major computational challenges for the big datasets emerging in these domains. CANDECOMP/PARAFAC (CP) and Tucker are the prominent tensor decomposition formulations in these fields, and the algorithms for computing them rely on two core operations, tensor-times-matrix and tensor-times-vector multiplication, which are executed repeatedly within an iterative framework. Recently, efficient computational schemes based on a data structure called a dimension tree have been employed to significantly reduce the cost of these two operations by storing and reusing partial results that are shared across iterations of these algorithms. This framework was introduced for sparse CP and Tucker decompositions in the literature, and a recent work investigates using an optimal binary dimension tree structure in computing dense Tucker decompositions. In this paper, we investigate finding an optimal dimension tree for both CP and Tucker decompositions. We show that finding an optimal dimension tree for an $$N$$-dimensional tensor is NP-hard for both decompositions, provide faster exact algorithms that find an optimal dimension tree in $$O(3^N)$$ time using $$O(2^N)$$ space for the Tucker case, and extend the algorithm to CP decomposition with the same time and space complexities.
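The $$O(3^N)$$-time, $$O(2^N)$$-space shape of the exact algorithm suggests a dynamic program over subsets of the tensor's modes, where enumerating all submasks of all $$2^N$$ subsets costs $$O(3^N)$$ in total. The sketch below illustrates this subset DP under a purely hypothetical cost model (the cost of materializing the partial result for a mode subset is taken to be the product of its dimensions); the paper's actual cost functions for CP and Tucker are more involved, and `optimal_tree_cost` and `node_cost` are illustrative names, not the authors' API.

```python
from math import inf

def optimal_tree_cost(dims):
    """Cost of a best binary dimension tree over all modes of a tensor.

    Illustrative sketch only: a tree's cost is modelled as the sum, over
    its internal nodes, of a hypothetical per-node cost (the product of
    the dimensions in that node's mode subset).  Enumerating all submasks
    of all masks takes O(3^N) time with an O(2^N) memo table, matching
    the complexities stated in the summary.
    """
    N = len(dims)
    full = (1 << N) - 1

    def node_cost(mask):
        # Hypothetical cost of forming the partial result for this subset.
        c = 1
        for i in range(N):
            if mask & (1 << i):
                c *= dims[i]
        return c

    # Leaves (singleton mode subsets) carry no internal-node cost.
    best = {1 << i: 0 for i in range(N)}

    # Any proper submask is numerically smaller than its mask, so a
    # simple ascending scan respects the DP dependency order.
    for mask in range(1, full + 1):
        if mask in best:  # singletons already initialized
            continue
        b = inf
        sub = (mask - 1) & mask
        while sub:  # enumerate proper nonempty submasks of `mask`
            b = min(b, best[sub] + best[mask ^ sub])
            sub = (sub - 1) & mask
        best[mask] = node_cost(mask) + b
    return best[full]
```

For a 3-mode tensor with dimensions `[2, 3, 4]`, the best split separates the two largest modes first, giving cost `24 + 6 = 30` under this toy model. The same skeleton extends to CP/Tucker by swapping in the appropriate per-node cost function.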

MSC:

 68W40 Analysis of algorithms
 15A69 Multilinear algebra, tensor calculus
 65F30 Other matrix algorithms (MSC2010)
 68P05 Data structures
 68Q17 Computational difficulty of problems (lower bounds, completeness, difficulty of approximation, etc.)
 94A12 Signal theory (characterization, reconstruction, filtering, etc.)