How to win data science 
competitions with Deep Learning 
@ArnoCandel 
Silicon Valley Big Data Science Meetup 
0xdata Campus, Mountain View, Oct 9 2014 
Join us at H2O World 2014 | November 18th and 19th | Computer History Museum, Mountain View 
Register now at http://h2oworld2014.eventbrite.com
H2O Deep Learning Booklet! 
http://bit.ly/deeplearningh2o
Who am I? 
@ArnoCandel 
PhD in Computational Physics, 2005 
from ETH Zurich, Switzerland 
6 years at SLAC - Accelerator Physics Modeling 
2 years at Skytree, Inc - Machine Learning 
10 months at 0xdata/H2O - Machine Learning 
15 years in HPC/Supercomputing/Modeling 
Named “2014 Big Data All-Star” by Fortune Magazine
Outline 
• Intro 
• H2O Architecture 
• Latest News 
• Live Modeling on Kaggle Datasets 
• African Soil Property Prediction Challenge 
• Display Advertising Challenge 
• Higgs Boson Machine Learning Challenge 
• Hands-On Training - See slides, booklet, H2O.ai 
• Implementation Concepts 
• Tips & Tricks 
• Outlook
About H2O (aka 0xdata) 
Java, Apache v2 Open Source 
Join the www.h2o.ai/community! 
#1 Java Machine Learning project on GitHub
Data Science Workflow
Powering Intelligent Applications
Sparkling Water: Spark + H2O
“We're thrilled to have H2O bring their machine 
learning know-how to Apache Spark in the form 
of Sparkling Water, and look forward to more 
future collaboration.” 
–Ion Stoica (CEO, Databricks)
Live Modeling #1 
Kaggle 
African Soil Properties 
Prediction Challenge
Still ongoing! 
Any thoughts? Questions? Challenges? Approaches?
Special thanks to 
Jo-fai Chow for pointing 
us to this challenge and 
for his awesome blog! 
Let’s follow this tutorial!
Jo-fai’s blog continued 
launch H2O from R (on your laptop) 
single node mode (here) 
multi-threading enabled 
import (zipped) data (R sketch below)
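A minimal R sketch of these setup steps, assuming the current h2o R package (the 2014 H2O-2 API passed a connection object to h2o.importFile); the file names are placeholders for the Kaggle download:

```r
library(h2o)

# Launch a single-node H2O instance from R, using all CPU cores.
h2o.init(nthreads = -1, max_mem_size = "4g")

# H2O ingests zipped CSVs directly (placeholder file names).
train <- h2o.importFile("training.csv.zip")
test  <- h2o.importFile("sorted_test.csv.zip")
```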
Jo-fai’s blog continued 
train/validation split 
5 different regression targets 
3594 features 
Train Deep Learning model 
check validation error 
predict on test set 
save to CSV (R sketch below)
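A hedged sketch of that per-target loop (h2o-3 style argument names; the 2014 API used data=/validation= instead of training_frame=/validation_frame=; PIDN and the five targets Ca, P, pH, SOC, Sand are the column names in the Kaggle data; the hidden/epochs/l2 values are only illustrative):

```r
splits   <- h2o.splitFrame(train, ratios = 0.8)            # train/validation split
targets  <- c("Ca", "P", "pH", "SOC", "Sand")               # 5 regression targets
features <- setdiff(colnames(train), c("PIDN", targets))    # ~3594 spectral features

submission <- data.frame(PIDN = as.vector(test$PIDN))
for (y in targets) {
  # Train one Deep Learning model per target (illustrative hyperparameters).
  model <- h2o.deeplearning(x = features, y = y,
                            training_frame   = splits[[1]],
                            validation_frame = splits[[2]],
                            hidden = c(200, 200), epochs = 20, l2 = 1e-5)
  print(h2o.mse(model, valid = TRUE))                       # check validation error
  submission[[y]] <- as.vector(h2o.predict(model, test)$predict)  # predict on test set
}
write.csv(submission, "submission.csv", row.names = FALSE)  # save to CSV
```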
Jo-fai made it to the very top with H2O Deep Learning! 
(He probably used ensembles: stacking/blending, etc.) 
My best ranking was 11th with a single H2O Deep Learning model (per target) 
BUT: Too early to tell: Leaderboard scores are computed on ~100 rows.
It’s time for Live Modeling! 
Challenges: 
• no domain knowledge -> need to find/read papers 
• small data -> need to apply regularization to avoid overfitting 
• need to explore data and do feature engineering, if applicable 
We will do the following steps (cross-validation and grid search are sketched in R after this list): 
• launch H2O from R 
• import Kaggle data 
• train a separate Deep Learning model for each of the 5 targets 
• compute cross-validation error (N-fold) 
• tune model parameters (grid search) 
• do simple feature engineering (dimensionality reduction) 
• build an ensemble model 
• create a Kaggle submission
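The cross-validation and grid-search steps might look like this (a sketch assuming the h2o.grid interface of the current h2o package; older builds accepted lists of values directly in h2o.deeplearning; train and features come from the sketch above, and "Ca" stands in for any of the five targets):

```r
# N-fold cross-validation on one target (illustrative settings).
cv_model <- h2o.deeplearning(x = features, y = "Ca",
                             training_frame = train, nfolds = 5,
                             hidden = c(100, 100), epochs = 10)
h2o.mse(cv_model, xval = TRUE)

# Small grid search over architecture and regularization.
grid <- h2o.grid("deeplearning",
                 x = features, y = "Ca", training_frame = train, nfolds = 5,
                 hyper_params = list(hidden = list(c(100, 100), c(200, 200, 200)),
                                     l1     = c(0, 1e-5),
                                     input_dropout_ratio = c(0, 0.1)))
h2o.getGrid(grid@grid_id, sort_by = "mse")
```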
https://github.com/0xdata/h2o/
Live Modeling #2 
Kaggle 
Display Advertising 
Challenge
Real-world dirty data: 
Missing values, categorical values 
with >10k factor levels, etc. 
45.8M rows 
41 columns 
binary classifier
H2O Deep Learning expands categorical factor levels (on the 
fly, but still): 
41 features turn into N=47,411 input values for the neural network. 
With full categorical expansion, training is slow (and requires more communication).
Use the hashing trick to reduce the dimensionality N: 

naive: input[key_to_idx(key)] = 1      // one-hot encoding 
hash:  input[hash_fun(key) % M] += 1   // where M << N 

E.g., M=1000 -> much faster training, rapid convergence! (toy R sketch below)
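A toy base-R illustration of the two encodings; the modulo "hash" below is only a stand-in for a real hash function (e.g., MurmurHash), and the keys are made up:

```r
keys <- c("site=example.com", "ad=12345", "device=mobile")   # made-up categorical keys

# Naive one-hot encoding: one input per distinct key -> N can reach ~47,000 here.
vocab   <- unique(keys)
one_hot <- as.numeric(vocab %in% keys)     # length-N indicator vector

# Hashing trick: fixed-size input of length M << N; collisions simply add up.
M <- 1000
toy_hash <- function(key) sum(utf8ToInt(key)) %% M + 1   # stand-in for a real hash
hashed <- numeric(M)
for (k in keys) {
  idx <- toy_hash(k)
  hashed[idx] <- hashed[idx] + 1           # input[hash(key) %% M] += 1
}
```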
Note: 
The hashing trick was also used by the winners of this Kaggle 
challenge, and the 4th-ranked submission used an ensemble of 
Deep Learning models with some feature engineering (dropping 
rare categories, etc.)
Live Modeling #3 
Kaggle 
Higgs Boson Machine 
Learning Challenge
Higgs Particle Discovery 
Large Hadron Collider: Largest experiment of mankind! 
$13+ billion, 16.8 miles long, 120 megawatts, -456 °F, 1 PB/day, etc. 
Higgs boson discovery (July ’12) led to 2013 Nobel prize! 
Higgs 
vs 
Background 
Images courtesy CERN / LHC 
I missed the deadline for Kaggle by 1 day! But still want to get good numbers!
Higgs: Binary Classification Problem 
Conventional methods of choice for physicists: 
- Boosted Decision Trees 
- Neural networks with 1 hidden layer 
BUT: Must first add derived high-level features (physics formulae) 
Metric: AUC = Area under the ROC curve (range: 0.5…1, higher is better) 
Algorithm                   | low-level H2O AUC | all-features H2O AUC
Generalized Linear Model    | 0.596             | 0.684
Random Forest               | 0.764             | 0.840
Gradient Boosted Trees      | 0.753             | 0.839
Neural Net (1 hidden layer) | 0.760             | 0.830
(right column = after adding the derived high-level features)
HIGGS UCI Dataset: 
21 low-level features AND 
7 high-level derived features 
Train: 10M rows, Test: 500k rows
Higgs: Can Deep Learning Do Better? 
Algorithm                   | low-level H2O AUC | all-features H2O AUC
Generalized Linear Model    | 0.596             | 0.684
Random Forest               | 0.764             | 0.840
Gradient Boosted Trees      | 0.753             | 0.839
Neural Net (1 hidden layer) | 0.760             | 0.830
Deep Learning               | ?                 | ?
<Your guess goes here>
reference paper results: baseline 0.733 
Let’s build a H2O Deep Learning model and find out! 
H2O Steam: Scoring Platform 
http://server:port/steam/index.html 
Higgs Dataset Demo on 10-node cluster 
Let’s score all our H2O models and compare them!
Scoring Higgs Models in H2O Steam 
Live Demo on 10-node cluster: 
<10 minutes runtime for all algos! 
Better than LHC baseline of AUC=0.73!
Lift chart demo for simple Higgs model 
Quantile | Response rate | Lift       | Cumulative lift
10.00%   | 93.55%        | 1.7683085  | 17.68%
20.00%   | 85.76%        | 1.6210535  | 33.89%
30.00%   | 78.39%        | 1.4817379  | 48.71%
40.00%   | 69.86%        | 1.3206078  | 61.92%
50.00%   | 60.80%        | 1.1492702  | 73.41%
60.00%   | 50.38%        | 0.9523756  | 82.93%
70.00%   | 39.52%        | 0.7470124  | 90.40%
80.00%   | 28.24%        | 0.5337477  | 95.74%
90.00%   | 16.88%        | 0.319122   | 98.93%
100.00%  | 5.65%         | 0.10676467 | 100.00%
Total    | 52.90%        | 1.0        |
[Lift chart: cumulative Higgs events (0 to 1) vs. quantile (0% to 100%)]
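For reference, a lift table like the one above can be computed from any model's predictions with a few lines of base R (a sketch; p and y are hypothetical vectors of predicted probabilities and 0/1 labels):

```r
lift_table <- function(p, y, quantiles = 10) {
  ord  <- order(p, decreasing = TRUE)                    # most to least likely first
  bins <- cut(seq_along(ord), quantiles, labels = FALSE) # roughly equal-sized bins
  base_rate <- mean(y)                                   # overall response rate ("Total")
  resp <- tapply(y[ord], bins, mean)                     # response rate per quantile
  data.frame(quantile        = seq_len(quantiles) / quantiles,
             response_rate   = resp,
             lift            = resp / base_rate,
             cumulative_lift = cumsum(tapply(y[ord], bins, sum)) / sum(y))
}
```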
Higgs Particle Detection with H2O 
HIGGS UCI Dataset: 
21 low-level features AND 
7 high-level derived features 
Train: 10M rows, Test: 500k rows 
*Nature paper: http://arxiv.org/pdf/1402.4735v2.pdf 
Algorithm                      | Paper’s low-level AUC | low-level H2O AUC | all-features H2O AUC | Parameters (not heavily tuned), H2O running on 10 nodes
Generalized Linear Model       | -                     | 0.596             | 0.684                | default, binomial
Random Forest                  | -                     | 0.764             | 0.840                | 50 trees, max depth 50
Gradient Boosted Trees         | 0.73                  | 0.753             | 0.839                | 50 trees, max depth 15
Neural Net, 1 layer            | 0.733                 | 0.760             | 0.830                | 1x300 Rectifier, 100 epochs
Deep Learning, 3 hidden layers | 0.836                 | 0.850             | -                    | 3x1000 Rectifier, L2=1e-5, 40 epochs
Deep Learning, 4 hidden layers | 0.868                 | 0.869             | -                    | 4x500 Rectifier, L1=L2=1e-5, 300 epochs
Deep Learning, 6 hidden layers | 0.880                 | running           | -                    | 6x500 Rectifier, L1=L2=1e-5
Deep Learning on low-level features alone beats everything else! 
H2O preliminary results compare well with the paper’s results* (TMVA & Theano)
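The 3-hidden-layer row above corresponds roughly to a call like the following (a sketch with h2o-3 style argument names; the frame and column names are placeholders):

```r
dl_higgs <- h2o.deeplearning(x = low_level_features, y = "response",
                             training_frame   = higgs_train,
                             validation_frame = higgs_valid,
                             activation = "Rectifier",
                             hidden = c(1000, 1000, 1000),   # 3x1000 Rectifier
                             l2 = 1e-5, epochs = 40)
h2o.auc(h2o.performance(dl_higgs, valid = TRUE))             # validation AUC
```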
What methods led the Higgs 
Boson Kaggle contest? 
Top 3 submissions: 
1. Bag of 70 dropout (50%) deep neural networks 
(600,600,600), channel-out activation function, L1=5e-6 and 
L2=5e-5 (for the first weight matrix only). Each neuron is 
connected to only 10 input values (sampled with replacement 
before training). 
2. A blend of boosted decision tree ensembles constructed using 
Regularized Greedy Forest 
3. Ensemble of 108 rather small deep neural networks (30,50), 
(30,50,25), (30,50,50,25) 
Ensembles of Deep Learning models are winning! 
Check out h2o/R/ensemble by Erin LeDell!
Deep Learning Tips & Tricks 
General: 
More layers for more complex functions (more non-linearity). 
More neurons per layer to detect finer structure in data (“memorizing”). 
Validate model performance on holdout data. 
Add some regularization for less overfitting (lower validation set error). 
Ensembles typically perform better than individual models. 
Specifically (see the R sketch after this list): 
Do a grid search to get a feel for convergence, then continue training. 
Try Tanh/Rectifier, try max_w2=10…50, L1=1e-5…1e-3 and/or L2=1e-5…1e-3 
Try Dropout (input: up to 20%, hidden: up to 50%) with test/validation set. Input dropout is 
recommended for noisy high-dimensional input. 
Distributed: More training samples per iteration: faster, but less accuracy? 
With ADADELTA: Try epsilon = 1e-4,1e-6,1e-8,1e-10, rho = 0.9,0.95,0.99 
Without ADADELTA: Try rate = 1e-4…1e-2, rate_annealing = 1e-5…1e-9, 
momentum_start = 0.5…0.9, momentum_stable = 0.99, 
momentum_ramp = 1/rate_annealing. 
Try balance_classes = true for datasets with large class imbalance. 
Enable force_load_balance for small datasets. 
Enable replicate_training_data if each node can hold all the data.
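Many of these knobs map directly onto h2o.deeplearning arguments; a hedged sketch with illustrative (untuned) values and placeholder frame/column names:

```r
tuned <- h2o.deeplearning(x = features, y = "response",
                          training_frame   = train,
                          validation_frame = valid,
                          activation = "RectifierWithDropout",
                          hidden = c(500, 500, 500),
                          input_dropout_ratio = 0.1,               # input dropout (<= ~20%)
                          hidden_dropout_ratios = c(0.5, 0.5, 0.5),
                          l1 = 1e-5, l2 = 1e-5, max_w2 = 10,
                          adaptive_rate = TRUE,                    # ADADELTA
                          rho = 0.99, epsilon = 1e-8,
                          balance_classes = TRUE,                  # for class imbalance
                          epochs = 100)
```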
You can participate! 
- Image Recognition: Convolutional & Pooling Layers PUB-644 
- Faster Training: GPGPU support PUB-1013 
- NLP: Recurrent Neural Networks PUB-1052 
- Unsupervised Pre-Training: Stacked Auto-Encoders PUB-1014 
- Benchmark vs other Deep Learning packages 
- Investigate other optimization algorithms 
- Win Kaggle competitions with H2O!
More info 
http://h2o.ai 
http://h2o.ai/blog/ 
http://bit.ly/deeplearningh2o 
http://www.slideshare.net/0xdata/ 
https://www.youtube.com/user/0xdata 
https://github.com/h2oai/h2o 
https://github.com/h2oai/h2o-dev 
@h2oai 
@ArnoCandel 
Join us at H2O World 2014!!! 
Join us at H2O World 2014 | November 18th and 19th | Computer History Museum, Mountain View 
Register now at http://h2oworld2014.eventbrite.com
