The Alternating Linear Scheme for Tensor Optimization in the Tensor Train Format

Published: 01 March 2012

Abstract

Recent achievements in the field of tensor product approximation provide promising new formats for the representation of tensors in the form of tree tensor networks. In contrast to the canonical $r$-term representation (CANDECOMP, PARAFAC), these new formats provide stable representations, while the amount of required data is only slightly larger. The tensor train (TT) format [SIAM J. Sci. Comput., 33 (2011), pp. 2295-2317], a simple special case of the hierarchical Tucker format [J. Fourier Anal. Appl., 15 (2009), p. 706], is a useful prototype for practical low-rank tensor representation. In this article, we show how optimization tasks can be treated in the TT format by a generalization of the well-known alternating least squares (ALS) algorithm and by a modified approach (MALS) that enables dynamical rank adaptation. A formulation of the component equations in terms of so-called retraction operators helps to show that many structural properties of the original problems transfer to the micro-iterations, giving what is, to our knowledge, the first stable generic algorithm for the treatment of optimization tasks in the tensor format. For the examples of linear equations and eigenvalue equations, we derive concrete working equations for the micro-iteration steps. Numerical examples confirm the theoretical results concerning the stability of the TT decomposition and of ALS and MALS, but they also show that in some cases high TT ranks are required during the iterative approximation of low-rank tensors, indicating room for improvement.
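
For orientation, the abstract's objects can be fixed in standard TT notation (a minimal sketch; the core symbols $G_\mu$ and the ranks $r_\mu$ are generic conventions, not necessarily the paper's own). A tensor $x \in \mathbb{R}^{n_1 \times \cdots \times n_d}$ is in TT format [25] if

$x(i_1, \ldots, i_d) = G_1(i_1)\, G_2(i_2) \cdots G_d(i_d),$

where each core $G_\mu(i_\mu)$ is an $r_{\mu-1} \times r_\mu$ matrix and $r_0 = r_d = 1$. ALS sweeps over the cores: with all cores except $G_\mu$ held fixed, the optimization reduces to a small subproblem in the entries of $G_\mu$ (the micro-iteration); for a linear system $Ax = b$, this subproblem is again a linear system, obtained by restricting $A$ and $b$ through the retraction operators mentioned above. MALS instead optimizes the contraction of two neighboring cores at once and re-separates the result by a truncated SVD, which is what permits the TT ranks $r_\mu$ to adapt during the sweep.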

References

[1]
G. Beylkin and M. J. Mohlenkamp, Algorithms for numerical analysis in high dimensions, SIAM J. Sci. Comput., 26 (2005), pp. 2133-2159.
[2]
J. D. Carroll and J.-J. Chang, Analysis of individual differences in multidimensional scaling via an n-way generalization of Eckart-Young decomposition, Psychometrika, 35 (1970), pp. 283-319.
[3]
D. Conte and C. Lubich, An error analysis of the multi-configuration time-dependent Hartree method of quantum dynamics, M2AN Math. Model. Numer. Anal., 44 (2010), p. 759.
[4]
M. Espig, Effiziente Bestapproximation mittels Summen von Elementartensoren in hohen Dimensionen, Ph.D. thesis, Universität Leipzig, 2007.
[5]
M. Espig, W. Hackbusch, T. Rohwedder, and R. Schneider, Variational calculus with sums of elementary tensors of fixed rank, Numer. Math., to appear; also available online from http://www.mis.mpg.de/de/publications/preprints/2009/prepr2009-52.html.
[6]
C. Eckart and G. Young, The approximation of one matrix by another of lower rank, Psychometrika, 1 (1936), pp. 211-218.
[7]
A. Falcó and W. Hackbusch, On Minimal Subspaces in Tensor Representations, Found. Comput. Math., to appear.
[8]
L. Grasedyck, Hierarchical singular value decomposition of tensors, SIAM J. Matrix Anal. Appl., 31 (2010), pp. 2029-2054.
[9]
V. de Silva and L.-H. Lim, Tensor rank and the ill-posedness of the best low-rank approximation problem, SIAM J. Matrix Anal. Appl., 30 (2008), pp. 1084-1127.
[10]
E. Hairer, C. Lubich, and G. Wanner, Geometric Numerical Integration: Structure-Preserving Algorithms for Ordinary Differential Equations, 2nd ed., Springer, Berlin, 2006.
[11]
W. Hackbusch and S. Kühn, A new scheme for the tensor representation, J. Fourier Anal. Appl., 15 (2009), pp. 706-722.
[12]
S. Holtz, T. Rohwedder, and R. Schneider, On manifolds of tensors of fixed TT rank, Numer. Math., to appear.
[13]
T. Huckle, K. Waldherr, and T. Schulte-Herbrüggen, Computations in quantum tensor networks, Linear Algebra Appl., to appear.
[14]
A. Kapteyn, H. Neudecker, and T. Wansbeek, An approach to n-mode components analysis, Psychometrika, 51 (1986), p. 269.
[15]
V. A. Kazeev and B. N. Khoromskij, On Explicit QTT Representation of Laplace Operator and Its Inverse, http://www.mis.mpg.de/preprints/2010/preprint2010_75.pdf (2010).
[16]
O. Koch and C. Lubich, Dynamical low-rank approximation, SIAM J. Matrix Anal. Appl., 29 (2007), pp. 434-454.
[17]
O. Koch and C. Lubich, Dynamical low-rank approximation of tensors, SIAM J. Matrix Anal. Appl., 31 (2010), p. 2360.
[18]
T. G. Kolda and B. W. Bader, Tensor decompositions and applications, SIAM Rev., 51 (2009), pp. 455-500.
[19]
P. M. Kroonenberg and J. De Leeuw, Principal component analysis of three-mode data by means of alternating least squares algorithms, Psychometrika, 45 (1980), p. 69.
[20]
C. Lubich, From Quantum to Classical Molecular Dynamics: Reduced Methods and Numerical Analysis, Zürich Lectures in Advanced Mathematics, EMS, 2008.
[21]
K. H. Marti, B. Bauer, M. Reiher, M. Troyer, and F. Verstraete, Complete-graph tensor network states: A new fermionic wave function ansatz for molecules, New J. Phys., 12 (2010), 103008.
[22]
H.-D. Meyer, F. Gatti, and G. A. Worth, The multi-configurational time-dependent Hartree (MCTDH) method, Phys. Rep., 324 (2000), p. 1.
[23]
H.-D. Meyer, U. Manthe, and L. S. Cederbaum, The multi-configurational time-dependent Hartree approach, Chem. Phys. Lett., 165 (1990), p. 73.
[24]
M. J. Mohlenkamp, Musings on multilinear fitting, Linear Algebra Appl. (2011).
[25]
I. Oseledets, Tensor-train decomposition, SIAM J. Sci. Comput., 33 (2011), pp. 2295-2317.
[26]
I. Oseledets, On a new tensor decomposition, Dokl. Math., 427 (2009).
[27]
I. Oseledets, TT Toolbox 1.0: Fast Multidimensional Array Operations in MATLAB, preprint 2009-06, INM RAS, 2009.
[28]
I. V. Oseledets and E. E. Tyrtyshnikov, Tensor tree decomposition does not need a tree, Linear Algebra Appl., submitted.
[29]
I. Oseledets and E. E. Tyrtyshnikov, Breaking the curse of dimensionality, or how to use SVD in many dimensions, SIAM J. Sci. Comput., 31 (2009), pp. 3744-3759.
[30]
T. Rohwedder and A. Uschmajew, Local convergence of alternating schemes for optimisation of convex problems in the TT format, SIAM J. Numer. Anal., submitted.
[31]
E. Schmidt, Zur Theorie der linearen und nichtlinearen Integralgleichungen. I. Teil: Entwicklung willkürlicher Funktionen nach Systemen vorgeschriebener, Math. Ann., 63 (1907), p. 433.
[32]
U. Schollwöck, The density-matrix renormalization group, Rev. Mod. Phys., 77 (2005), p. 259.
[33]
L. R. Tucker, Some mathematical notes on three-mode factor analysis, Psychometrika, 31 (1966), pp. 279-311.
[34]
G. Vidal, Efficient classical simulation of slightly entangled quantum computations, Phys. Rev. Lett., 91 (2003), 147902.
[35]
S. R. White, Density matrix formulation for quantum renormalization groups, Phys. Rev. Lett., 69 (1992), p. 2863.
[36]
http://en.wikipedia.org/wiki/Density_matrix_renormalization_group

    Published In

SIAM Journal on Scientific Computing, Volume 34, Issue 2
    April 2012
    829 pages

    Publisher

    Society for Industrial and Applied Mathematics

    United States

    Author Tags

    1. alternating least squares
    2. density matrix renormalization group
    3. eigenvalue problem
    4. hierarchical tensors
    5. high-dimensional systems
    6. iterative methods for linear systems
    7. matrix product states
    8. optimization problem
    9. tensor decompositions
    10. tensor product approximation
    11. tensor train
