DOI: 10.1145/2339530.2339583
research-article

GigaTensor: scaling tensor analysis up by 100 times - algorithms and discoveries

Published: 12 August 2012

Abstract

Many data are modeled as tensors, or multi-dimensional arrays. Examples include the predicates (subject, verb, object) in knowledge bases, hyperlinks and anchor texts in Web graphs, sensor streams (time, location, and type), social networks over time, and DBLP conference-author-keyword relations. Tensor decomposition is an important data mining tool with applications including clustering, trend detection, and anomaly detection. However, current tensor decomposition algorithms do not scale to large tensors whose mode lengths reach the billions and which contain hundreds of millions of nonzeros: the largest tensors analyzed in the literature have mode lengths only in the thousands and hundreds of thousands of nonzeros.
Consider a knowledge base tensor consisting of about 26 million noun-phrases. The intermediate data explosion problem, associated with naive implementations of tensor decomposition algorithms, would require materializing and storing a matrix whose largest dimension is ≈7 × 10^14; this amounts to roughly 10 Petabytes, or equivalently a few data centers' worth of storage, rendering the naive tensor analysis of this knowledge base practically impossible. In this paper, we propose GigaTensor, a scalable distributed algorithm for large-scale tensor decomposition. GigaTensor exploits the sparseness of real-world tensors and avoids the intermediate data explosion problem by carefully redesigning the tensor decomposition algorithm.
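To see where the ≈7 × 10^14 figure comes from: with two modes of about 26 million noun-phrases each, the naive alternating-least-squares step materializes a Khatri-Rao product with 26M × 26M rows. A back-of-the-envelope check (a sketch only; the 8-byte double-precision values and the per-rank column count are assumptions, since the abstract does not state the decomposition rank):

```python
# Back-of-the-envelope size of the Khatri-Rao product a naive
# CP/PARAFAC ALS step would materialize for this knowledge base tensor.
noun_phrases = 26_000_000          # ~26 million noun-phrases per mode
rows = noun_phrases ** 2           # rows of the materialized Khatri-Rao product
print(f"rows = {rows:.2e}")        # ~6.8e14, matching the ~7 x 10^14 figure

bytes_per_value = 8                # assumption: double precision
one_column_pb = rows * bytes_per_value / 1e15
print(f"one rank-1 column = {one_column_pb:.1f} PB")
```

Even a rank-2 factorization would put this single intermediate at roughly 10 PB, consistent with the abstract's estimate.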
Extensive experiments show that GigaTensor solves problems 100 times larger than existing methods can handle. Furthermore, we employ GigaTensor to analyze a very large real-world knowledge base tensor and present our astounding findings, which include the discovery of potential synonyms among millions of noun-phrases (e.g., the noun 'pollutant' and the noun-phrase 'greenhouse gases').
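The way to avoid the explosion is to never form the Khatri-Rao product at all. As a minimal single-machine sketch of that idea (GigaTensor itself distributes the computation over MapReduce; the function and variable names below are illustrative, not from the paper), the matricized-tensor-times-Khatri-Rao-product at the heart of CP/PARAFAC ALS can be accumulated one nonzero at a time:

```python
import numpy as np

def mttkrp_mode1(coords, vals, B, C, I, rank):
    """Mode-1 MTTKRP, M = X_(1) (C kr B), computed nonzero by nonzero.

    The (J*K)-row Khatri-Rao product of C and B is never materialized:
    each nonzero x_{ijk} contributes v * (B[j] * C[k]) to row i of M,
    so the cost is O(nnz * rank) instead of O(J * K * rank)."""
    M = np.zeros((I, rank))
    for (i, j, k), v in zip(coords, vals):
        M[i] += v * (B[j] * C[k])
    return M

# Tiny usage example on a sparse 4 x 5 x 6 tensor with three nonzeros.
rng = np.random.default_rng(0)
I, J, K, rank = 4, 5, 6, 2
coords = [(0, 1, 2), (3, 0, 5), (2, 4, 1)]   # nonzero positions
vals = [1.0, -2.0, 0.5]                      # nonzero values
B, C = rng.random((J, rank)), rng.random((K, rank))
M = mttkrp_mode1(coords, vals, B, C, I, rank)
```

On a small dense tensor this agrees with the textbook definition `np.einsum('ijk,jr,kr->ir', X, B, C)`; sparseness is exactly what makes the per-nonzero formulation tractable at knowledge-base scale.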

Supplementary Material

JPG File (311a_m_talk_8.jpg)
MP4 File (311a_m_talk_8.mp4)



    Published In

    KDD '12: Proceedings of the 18th ACM SIGKDD international conference on Knowledge discovery and data mining
    August 2012
    1616 pages
    ISBN:9781450314626
    DOI:10.1145/2339530
    Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

    Publisher

    Association for Computing Machinery

    New York, NY, United States

    Author Tags

    1. big data
    2. distributed computing
    3. hadoop
    4. mapreduce
    5. tensor

    Qualifiers

    • Research-article

    Conference

    KDD '12

    Acceptance Rates

    Overall Acceptance Rate 1,133 of 8,635 submissions, 13%

    Cited By

    • (2024) cuFasterTucker: A Stochastic Optimization Strategy for Parallel Sparse FastTucker Decomposition on GPU Platform. ACM Transactions on Parallel Computing 11(2):1-33. DOI: 10.1145/3648094. Online publication date: 8-Jun-2024.
    • (2024) Minimum Cost Loop Nests for Contraction of a Sparse Tensor with a Tensor Network. Proceedings of the 36th ACM Symposium on Parallelism in Algorithms and Architectures, 169-181. DOI: 10.1145/3626183.3659985. Online publication date: 17-Jun-2024.
    • (2024) Distributed-Memory Randomized Algorithms for Sparse Tensor CP Decomposition. Proceedings of the 36th ACM Symposium on Parallelism in Algorithms and Architectures, 155-168. DOI: 10.1145/3626183.3659980. Online publication date: 17-Jun-2024.
    • (2024) Efficient Utilization of Multi-Threading Parallelism on Heterogeneous Systems for Sparse Tensor Contraction. IEEE Transactions on Parallel and Distributed Systems 35(6):1044-1055. DOI: 10.1109/TPDS.2024.3391254. Online publication date: 19-Apr-2024.
    • (2024) Differentially Private Federated Tensor Completion for Cloud–Edge Collaborative AIoT Data Prediction. IEEE Internet of Things Journal 11(1):256-267. DOI: 10.1109/JIOT.2023.3314460. Online publication date: 1-Jan-2024.
    • (2024) The Art of Sparsity: Mastering High-Dimensional Tensor Storage. 2024 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW), 439-446. DOI: 10.1109/IPDPSW63119.2024.00094. Online publication date: 27-May-2024.
    • (2024) ScalFrag: Efficient Tiled-MTTKRP with Adaptive Launching on GPUs. 2024 IEEE International Conference on Cluster Computing (CLUSTER), 335-345. DOI: 10.1109/CLUSTER59578.2024.00036. Online publication date: 24-Sep-2024.
    • (2024) GPUTucker: Large-Scale GPU-Based Tucker Decomposition Using Tensor Partitioning. Expert Systems with Applications 237:121445. DOI: 10.1016/j.eswa.2023.121445. Online publication date: Mar-2024.
    • (2023) Streaming factor trajectory learning for temporal tensor decomposition. Proceedings of the 37th International Conference on Neural Information Processing Systems, 56849-56870. DOI: 10.5555/3666122.3668606. Online publication date: 10-Dec-2023.
    • (2023) Dynamic tensor decomposition via neural diffusion-reaction processes. Proceedings of the 37th International Conference on Neural Information Processing Systems, 23453-23467. DOI: 10.5555/3666122.3667138. Online publication date: 10-Dec-2023.