DOI: 10.1145/2339530.2339556
research-article

Intelligible models for classification and regression

Published: 12 August 2012
Abstract

    Complex models for regression and classification have high accuracy, but are unfortunately no longer interpretable by users. We study the performance of generalized additive models (GAMs), which combine single-feature models called shape functions through a linear function. Since the shape functions can be arbitrarily complex, GAMs are more accurate than simple linear models. But since they do not contain any interactions between features, they can be easily interpreted by users.
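
    In standard GAM notation (our rendering of the structure described above, not text taken from the paper), the model combines the shape functions additively through a link function:

    ```latex
    % g is the link function (identity for regression, logit for classification),
    % \beta_0 is an intercept, and f_j is the shape function learned for feature x_j.
    g\bigl(\mathbb{E}[y]\bigr) \;=\; \beta_0 + \sum_{j=1}^{n} f_j(x_j)
    ```

    Because no term couples two or more features, each f_j can be plotted and read on its own.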
    We present the first large-scale empirical comparison of existing methods for learning GAMs. Our study includes existing spline and tree-based methods for shape functions and penalized least squares, gradient boosting, and backfitting for learning GAMs. We also present a new method based on tree ensembles with an adaptive number of leaves that consistently outperforms previous work. We complement our experimental results with a bias-variance analysis that explains how different shape models influence the additive model. Our experiments show that shallow bagged trees with gradient boosting distinguish themselves as the best method on low- to medium-dimensional datasets.
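
    As a rough sketch of the tree-based approach (illustrative only, not the authors' exact algorithm: bagging of the trees and the adaptive choice of leaf count are omitted, and all function and parameter names below are invented for the example), the shape functions of a regression GAM can be fit by cyclic gradient boosting of shallow single-feature trees:

    ```python
    # Minimal sketch: fit a regression GAM by cyclic gradient boosting, where each
    # shape function f_j is an additive ensemble of shallow trees built on feature j only.
    import numpy as np
    from sklearn.tree import DecisionTreeRegressor

    def fit_gam_boosted(X, y, n_rounds=100, max_leaves=4, learning_rate=0.1):
        n_samples, n_features = X.shape
        intercept = y.mean()
        shape_trees = [[] for _ in range(n_features)]   # trees making up each f_j
        prediction = np.full(n_samples, intercept)
        for _ in range(n_rounds):
            for j in range(n_features):                  # cycle over features
                residual = y - prediction                # negative gradient of squared loss
                tree = DecisionTreeRegressor(max_leaf_nodes=max_leaves)
                tree.fit(X[:, [j]], residual)            # single-feature shape update
                prediction += learning_rate * tree.predict(X[:, [j]])
                shape_trees[j].append(tree)
        return intercept, shape_trees

    def predict_gam(intercept, shape_trees, X, learning_rate=0.1):
        pred = np.full(X.shape[0], intercept)
        for j, trees in enumerate(shape_trees):
            for tree in trees:
                pred += learning_rate * tree.predict(X[:, [j]])
        return pred
    ```

    Because each shape function depends on a single feature, summing the predictions of its trees over a grid of feature values and plotting the result recovers f_j directly, which is what makes the fitted model easy to inspect.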

    Supplementary Material

    MP4 File (311b_m_talk_2.mp4)


    Published In

    KDD '12: Proceedings of the 18th ACM SIGKDD international conference on Knowledge discovery and data mining
    August 2012
    1616 pages
    ISBN:9781450314626
    DOI:10.1145/2339530

    Publisher

    Association for Computing Machinery

    New York, NY, United States


    Author Tags

    1. classification
    2. intelligible models
    3. regression

    Qualifiers

    • Research-article

    Conference

    KDD '12

    Acceptance Rates

    Overall Acceptance Rate 1,133 of 8,635 submissions, 13%
