In the typical organizational model, teams are in constant flux, are created for a piece of work, are responsible only for the change, and are not empowered, or trusted, to run products. A high-performance organizational model lets teams take full responsibility for cost, compliance, and security, and lets them own their own incidents. This improves quality, lowers change failure rates, reduces costs, and leads to happier employees. DevOps is about creating with the end in mind, cross-functional autonomous teams, and end-to-end responsibility: you build it, you run it; you break it, you fix it. This means you want to automate everything in a CI/CD pipeline, and roll forward rather than roll back. DevOps principles play an important role in a data-driven maturity model: continuous prototyping, and a data mindset and data skills for everybody. In a data science workflow, combining input data and deriving the model features usually requires most of the work, and many iterations before it is done. Implement features one by one. Start with a baseline model and compare it against more complex models, to see whether the additional complexity is worth the performance gain. The result of a data scientist's work is a trained model. Such a model is defined by four components: input data, derived features, the chosen model type, and hyperparameters. A trained model is always the combination of data and code, so where do you run it? Typically the code is versioned but the data is not. A model management server stores hyperparameters, performance metrics, metadata, and trained models. In a data science pipeline there are two components to deploy: the application and the trained model. So we split the pipeline into parts: a build pipeline, a train pipeline, and a deploy pipeline. A complete pipeline mapped to Azure components would look largely like this: an Azure DevOps Build pipeline, an Azure ML Training pipeline, and an Azure DevOps Release pipeline.
2. Typical organization model?
• Teams are in constant flux
• Teams are created for work
• Teams are only responsible for the change
• Not empowered, lack of trust, lack of responsibility
• Run organization is responsible for:
• Security
• Compliance
• Cost
3. High Performance Organization Model?
Take full responsibility
• Cost, Compliance, Security
Team owns their incidents
Improves:
• Quality
• Change failure rates
• Lower costs
• Happier employees!
4. The Essence of DevOps
DevOps principles
• Create with the End in Mind
• Cross-functional autonomous teams
• End-to-end responsibility
• You build it, you run it, you break it, you fix it
Wall of Confusion: Development wants change; Operations wants stability.
5. • Deploy to production Efficiently & Reliably
• Allow everyone in the team to do so
• Smaller increments
• Roll forward, don't roll back
Automate everything
CI/CD pipeline
• Trigger: version control
• Build: artifact
• Test: code
• Deploy QA: integration tests
• Deploy Prod: user facing
• Measure: capture performance
6. (Go!) Data Driven Maturity Model
DevOps principles play an important role
• Start with data initiatives
• Continuous prototyping
• Successfully implemented data products
• Everybody has data mindset and skills
(Maturity stages: Data Lab, Data CoE, Data Init, Operational Implementation, Digital Capability, Data Driven Company)
7. DS workflow
Combining input data and deriving the model features
• Typically requires most of the work
• And lots of iterations before it's done
• Implementing one feature, testing it out to see if the
model performance improves, and repeat
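That loop can be sketched in a few lines: engineer one candidate feature, check whether it beats the baseline, keep it only if it does. The features, data, and scoring below are illustrative assumptions, not part of the original deck.

```python
# "One feature at a time": keep an engineered feature only if a model
# using it beats the baseline of just predicting the mean.

def mse(preds, y):
    return sum((p - t) ** 2 for p, t in zip(preds, y)) / len(y)

def one_feature_mse(x, y):
    # 1-D least-squares fit y ≈ slope * x + intercept, scored by MSE.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    var = sum((xi - mx) ** 2 for xi in x)
    slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / var if var else 0.0
    intercept = my - slope * mx
    return mse([slope * xi + intercept for xi in x], y)

# Toy target and two candidate engineered features (illustrative).
y = [1.0, 4.0, 9.0, 16.0]
features = {
    "raw":     [1.0, 2.0, 3.0, 4.0],
    "squared": [1.0, 4.0, 9.0, 16.0],
}

baseline = mse([sum(y) / len(y)] * len(y), y)  # "just predict the mean"
kept = [name for name, x in features.items() if one_feature_mse(x, y) < baseline]
print(kept)  # → ['raw', 'squared']
```

In practice the evaluation uses a held-out set and a real model, but the iteration structure is the same.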
(Diagram: Data + Code → Trained Model)
8. Choosing the right model
• Start with a baseline model
• What if I just predict the mean?
• Compare against more complex models, see if the
additional complexity is worth the performance gain
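The decision can be made explicit as a simple rule: accept the extra complexity only if the gain over the baseline clears a margin. The metric values and the margin below are made-up numbers for illustration.

```python
# Accept a more complex model only when its improvement over the
# baseline justifies the added complexity. All numbers are illustrative.
baseline_rmse = 4.2   # "just predict the mean"
complex_rmse = 4.0    # e.g. a gradient-boosted model
min_gain = 0.5        # minimum RMSE improvement worth the complexity

keep_complex = (baseline_rmse - complex_rmse) > min_gain
print("use complex model" if keep_complex else "keep the baseline")
# → keep the baseline
```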
9. DS artifacts
• The result of a DS is a trained Model
• 4 components define a trained model
• Input data
• Derived features
• Chosen model type
• Hyperparameters
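Those four components can be captured in a small data structure. This is a hypothetical sketch; the class and field names are assumptions, not an Azure ML API.

```python
from dataclasses import dataclass, field

# The four components that define a trained model, per the slide above.
@dataclass
class TrainedModelSpec:
    input_data: str                  # e.g. a dataset path or version hash
    derived_features: tuple          # names of the engineered features
    model_type: str                  # the chosen algorithm
    hyperparameters: dict = field(default_factory=dict)

spec = TrainedModelSpec(
    input_data="sales-2019-03.csv",
    derived_features=("day_of_week", "rolling_mean_7d"),
    model_type="RandomForestRegressor",
    hyperparameters={"n_estimators": 100, "max_depth": 5},
)
print(spec.model_type)  # → RandomForestRegressor
```

Recording all four together is what makes a trained model reproducible: change any one and you get a different model.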
(Diagram: Data + Code → Trained Model)
10. • We can improve/automate the code as much as we want
• But, a trained model is the combination of data + our code
• And where are we going to run this trained model?
• What is the App we are building?
But DS is different
11. • Your code is versioned, but your data is not
• The combination of both results in a Trained Model
• Can you recreate it?
• And which model was deployed 6 weeks ago?
• Why did your data scientist choose this hyperparameter?
• A Model Management server stores
• Hyperparameters
• Performance metrics
• Metadata
• Trained Models
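A minimal in-memory sketch of such a server is below. A real model management server (e.g. MLflow or the Azure ML model registry) persists these records; the class and field names here are illustrative assumptions.

```python
import datetime

# Stores the four items listed above, versioned per registration.
class ModelRegistry:
    def __init__(self):
        self._models = []

    def register(self, model_bytes, hyperparameters, metrics, metadata):
        version = len(self._models) + 1
        self._models.append({
            "version": version,
            "model": model_bytes,                # the trained model artifact
            "hyperparameters": hyperparameters,  # answers "why this value?"
            "metrics": metrics,                  # performance at training time
            "metadata": metadata,                # e.g. data version, git commit
            "registered_at": datetime.datetime.utcnow().isoformat(),
        })
        return version

    def get(self, version):
        # Answers "which model was deployed 6 weeks ago?"
        return self._models[version - 1]

registry = ModelRegistry()
v = registry.register(
    b"<serialized model>",
    hyperparameters={"max_depth": 5},
    metrics={"rmse": 3.9},
    metadata={"data_version": "2019-03", "commit": "a1b2c3"},
)
print(v)  # → 1
```

With data version and commit recorded as metadata, the registry is what makes a trained model recreatable even though the data lives outside version control.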
What is Model Management?
12. A DS pipeline
If we enable the DS to do deployment
We have two components
• The application
• The trained model
Split the pipeline into parts
• A Build pipeline
• A Train pipeline
• A Deploy pipeline
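The split can be sketched as three separate steps whose artifacts meet at deployment: build produces the application, train produces the model, and deploy combines the two. All names are illustrative.

```python
# Split pipeline sketch: the app and the trained model are built by
# separate pipelines and only combined at deploy time.

def build_pipeline(commit):
    # Owned by backend developers: package the scoring application.
    return {"app": f"scoring-app-{commit}"}

def train_pipeline(data_version):
    # Owned by data scientists: produce a trained model artifact.
    return {"model": f"model-{data_version}"}

def deploy_pipeline(app_artifact, model_artifact):
    # Deployment packages the application together with the trained model.
    return {**app_artifact, **model_artifact, "status": "deployed"}

release = deploy_pipeline(build_pipeline("a1b2c3"), train_pipeline("2019-03"))
print(release)
# → {'app': 'scoring-app-a1b2c3', 'model': 'model-2019-03', 'status': 'deployed'}
```

Because the two artifacts are independent, either pipeline can run on its own trigger: a code change rebuilds the app, a data change retrains the model.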
(Diagram: Build, Train, Deploy pipelines, owned by Data Scientists and Backend Developers)
13. (Go!) Data Driven Maturity Model
How does Model Management fit in?
• Initially a repository where DS push their locally
trained model
• Centralized repository which allows for easier
collaboration between DS working in the cloud
• A place where training pipelines push their models
(Maturity stages: Data Lab, Data CoE, Data Init, Operational Implementation, Digital Capability, Data Driven Company)
14. An improved DS pipeline
If we enable the DS to do deployment
• Automatically retrain the model if the Data changes
• Exploit remote compute to accelerate training/finding
different hyperparameters
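The "retrain when the data changes" trigger can be sketched by fingerprinting the training data and retraining only when the fingerprint differs from the one recorded at the last run. All names are illustrative assumptions.

```python
import hashlib

# Hash the training data; a changed hash means the train pipeline
# should run again.

def data_fingerprint(rows):
    h = hashlib.sha256()
    for row in rows:
        h.update(repr(row).encode())
    return h.hexdigest()

def should_retrain(rows, last_fingerprint):
    return data_fingerprint(rows) != last_fingerprint

rows = [(1, "a"), (2, "b")]
fp = data_fingerprint(rows)          # recorded at the last training run
print(should_retrain(rows, fp))            # → False (data unchanged)
print(should_retrain(rows + [(3, "c")], fp))  # → True (new rows arrived)
```

Storing the fingerprint alongside the trained model in the model management server ties each model back to the exact data it was trained on.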
(Diagram: Build → Train → Deploy)
16. Create with the end in mind
• Bridge the gap between a successful experiment and using it in your business
• Cost effective setup of your Azure environment for Data Science
• Secure and Compliant by default
Value Proposition
DevOps 4 AI