DOI: 10.1145/3097983.3098091

SPOT: Sparse Optimal Transformations for High Dimensional Variable Selection and Exploratory Regression Analysis

Published: 04 August 2017

Abstract

We develop a novel method called SParse Optimal Transformations (SPOT) to simultaneously select important variables and explore relationships between the response and predictor variables in high dimensional nonparametric regression analysis. Not only are the optimal transformations identified by SPOT interpretable, but they can also be used for response prediction. We further show that SPOT achieves consistency in both variable selection and parameter estimation. Numerical experiments and real data applications demonstrate that SPOT outperforms existing methods and can serve as an effective tool in practice.
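For orientation, a minimal sketch of the transformation model behind the abstract (illustrative on our part; the spline parameterization, penalty, and normalization below are assumptions, not necessarily SPOT's exact objective). Optimal-transformation regression seeks a transformation of the response and one transformation per predictor so that the relationship becomes additive,

\[
  \theta(Y) \;\approx\; \sum_{j=1}^{p} f_j(X_j),
\]

and a sparse variant estimates these under a sparsity penalty so that most fitted $f_j$ vanish identically, e.g.

\[
  \min_{\theta,\, f_1,\dots,f_p}\;
  \frac{1}{2n}\sum_{i=1}^{n}\Bigl(\theta(y_i)-\sum_{j=1}^{p} f_j(x_{ij})\Bigr)^{2}
  \;+\;\sum_{j=1}^{p} p_\lambda\bigl(\|f_j\|_n\bigr)
  \quad\text{subject to}\quad \|\theta\|_n = 1,
\]

where $\|g\|_n^2 = \frac{1}{n}\sum_{i=1}^{n} g(z_i)^2$ is the empirical norm, $\theta$ and each $f_j$ are expanded in spline bases (cf. the "spline" author tag below), the constraint $\|\theta\|_n = 1$ rules out the trivial all-zero solution, and $p_\lambda$ is a sparsity-inducing penalty. Variable $j$ is selected exactly when $\hat f_j \not\equiv 0$, which is how variable selection and exploratory estimation of the transformations happen in one fit.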



Published In

KDD '17: Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining
August 2017
2240 pages
ISBN: 9781450348874
DOI: 10.1145/3097983


Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. monotone transformation
  2. optimal transformation
  3. regression analysis
  4. spline
  5. variable selection

Qualifiers

  • Research-article

Conference

KDD '17

Acceptance Rates

KDD '17 paper acceptance rate: 64 of 748 submissions (9%)
Overall acceptance rate: 1,133 of 8,635 submissions (13%)

