Abstract
By a tensor problem, we mean one in which all input and output data are given (exactly or approximately) in tensor formats, with the number of representation parameters much smaller than the total amount of data. For such problems, it is natural to seek algorithms that work with data only in tensor formats and maintain the same small number of representation parameters; the price is that every operation contaminates its result with an approximation (recompression) error. Since the recompression time is crucial and depends on the tensor formats in use, in this paper we discuss which formats are best suited to making recompression inexpensive and reliable. We present fast recompression procedures with complexity sublinear in the size of the data and propose methods for basic linear algebra operations with all matrix operands in the Tucker format, relying mostly on calls to highly optimized level-3 BLAS/LAPACK routines. We show that for three-dimensional tensors the canonical format can be avoided without any loss of efficiency. Numerical illustrations are given for approximate matrix inversion via the proposed recompression techniques.
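To make the Tucker format concrete, the following minimal sketch (Python/NumPy, not the authors' code) stores a three-dimensional array as a small core plus three factor matrices and performs a simple recompression step by truncated SVDs of the three unfoldings (the HOSVD). The sizes, tolerance, and function names are illustrative assumptions, not the paper's algorithms.

```python
# Minimal Tucker-format sketch for a 3D tensor (illustrative only).
import numpy as np

def unfold(a, mode):
    """Mode-k unfolding of a 3D array into a matrix."""
    return np.moveaxis(a, mode, 0).reshape(a.shape[mode], -1)

def hosvd_truncate(a, eps=1e-8):
    """Compress a full 3D array into Tucker form (core g, factors u1, u2, u3)
    by truncated SVDs of the three unfoldings (HOSVD)."""
    factors = []
    for mode in range(3):
        u, s, _ = np.linalg.svd(unfold(a, mode), full_matrices=False)
        r = max(1, int(np.sum(s > eps * s[0])))   # truncation rank for this mode
        factors.append(u[:, :r])
    # Core: G = A x_1 U1^T x_2 U2^T x_3 U3^T
    g = np.einsum('ijk,ia,jb,kc->abc', a, *factors)
    return g, factors

def tucker_to_full(g, factors):
    """Expand a Tucker-format tensor back to a full 3D array."""
    return np.einsum('abc,ia,jb,kc->ijk', g, *factors)

# Usage: compress an array of multilinear rank <= 3 and check the error.
n, r = 40, 3
u = [np.random.rand(n, r) for _ in range(3)]
a = np.einsum('abc,ia,jb,kc->ijk', np.random.rand(r, r, r), *u)
g, f = hosvd_truncate(a)
print(g.shape, np.linalg.norm(tucker_to_full(g, f) - a) / np.linalg.norm(a))
```

In this sketch the storage drops from n^3 values to r^3 + 3nr representation parameters; the fast recompression procedures of the paper achieve a similar effect without ever forming the full array.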
Additional information
Communicated by W. Hackbusch.
Cite this article
Oseledets, I.V., Savostyanov, D.V. & Tyrtyshnikov, E.E. Linear algebra for tensor problems. Computing 85, 169–188 (2009). https://doi.org/10.1007/s00607-009-0047-6
Keywords
- Multidimensional arrays
- Tucker decomposition
- Tensor approximations
- Low-rank approximations
- Skeleton decompositions
- Dimensionality reduction
- Data compression
- Large-scale matrices
- Data-sparse methods