DOI: 10.1145/2723372.2749432
Research Article

Resource Elasticity for Large-Scale Machine Learning

Published: 27 May 2015

Abstract

    Declarative large-scale machine learning (ML) aims at flexible specification of ML algorithms and automatic generation of hybrid runtime plans, ranging from single-node, in-memory computations to distributed computations on MapReduce (MR) or similar frameworks. State-of-the-art compilers in this context are very sensitive to the memory configuration of the master process and of the MR cluster; different memory configurations can lead to significant performance differences. Interestingly, resource negotiation frameworks like YARN allow us to explicitly request preferred resources, including memory. This capability enables automatic resource elasticity, which not only improves performance but also removes the need for a static cluster configuration, which is always a compromise in multi-tenancy environments. In this paper, we introduce a simple and robust approach to automatic resource elasticity for large-scale ML. It comprises (1) a resource optimizer that finds near-optimal memory configurations for a given ML program, and (2) dynamic plan migration to adapt memory configurations during runtime. These techniques adapt resources according to data, program, and cluster characteristics. Our experiments demonstrate significant improvements of up to 21x without unnecessary over-provisioning, and at low optimization overhead.
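    The resource optimizer described in the abstract can be pictured as a what-if search over candidate memory configurations, where each candidate is costed and the cheapest, least over-provisioned one is chosen. The following is a minimal sketch of that idea under an entirely toy cost model; the function names, memory grid, and cost formula are illustrative assumptions, not SystemML's actual optimizer API.

```python
# Hypothetical sketch of a what-if resource optimizer: enumerate candidate
# memory configurations, cost each one with a toy cost model, and pick the
# cheapest configuration, breaking ties toward the smallest memory footprint
# (i.e., avoiding unnecessary over-provisioning).

from itertools import product

def estimated_runtime(cc_mem_mb: int, mr_mem_mb: int, data_mb: int) -> float:
    """Toy cost model: if the data fits in the client (master) process,
    run in memory; otherwise pay per-job latency for distributed MR work."""
    if data_mb <= 0.7 * cc_mem_mb:            # fits in-memory on the master
        return data_mb / 100.0
    jobs = max(1, data_mb // max(1, int(0.7 * mr_mem_mb)))
    return 20.0 * jobs + data_mb / 50.0       # job latency + distributed cost

def optimize_resources(data_mb: int, grid_mb=(512, 1024, 2048, 4096, 8192)):
    """Grid-enumerate (client, MR-task) memory pairs; among equally cheap
    plans, prefer the one with the smallest total memory footprint."""
    best = None
    for cc, mr in product(grid_mb, repeat=2):
        cost = estimated_runtime(cc, mr, data_mb)
        key = (cost, cc + mr)                 # break cost ties by footprint
        if best is None or key < best[0]:
            best = (key, (cc, mr))
    return best[1]

cc, mr = optimize_resources(data_mb=3000)
print(f"client memory: {cc} MB, MR task memory: {mr} MB")
```

    With this toy model, a 3 GB input selects a large client memory so the program compiles to an in-memory plan, while the MR task memory stays at the grid minimum, illustrating how such a search avoids over-provisioning the part of the configuration that does not affect cost.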




Published In

SIGMOD '15: Proceedings of the 2015 ACM SIGMOD International Conference on Management of Data
May 2015, 2110 pages
ISBN: 978-1-4503-2758-9
DOI: 10.1145/2723372

Publisher

Association for Computing Machinery, New York, NY, United States


      Author Tags

      1. large-scale
      2. machine learning
      3. resource elasticity
      4. resource optimization
      5. what-if analysis


Conference

SIGMOD/PODS '15: International Conference on Management of Data
May 31 - June 4, 2015
Melbourne, Victoria, Australia

Acceptance Rates

SIGMOD '15 paper acceptance rate: 106 of 415 submissions, 26%
Overall acceptance rate: 785 of 4,003 submissions, 20%


