Confidential and Proprietary to Daugherty Business Solutions
May 1, 2019
Data Engineering and the Data Science Lifecycle
Data Science Divided
[Diagram: a data science solution = the data science model + data engineering.]
Data Scientists are not Data Engineers
https://www.oreilly.com/ideas/why-a-data-scientist-is-not-a-data-engineer
What is a data pipeline?
[Diagram: a simple pipeline reads CSV files directly; a more complicated pipeline adds a NoSQL store and Avro serialization.]
Creating Reliable Pipelines
It’s not enough to do it once.
• Reproducible
• Performant
• Robust
• Flexible
• Monitored
• Governed
Architecting Distributed Systems
• Containers simplify the process of deployment, making it reliable and repeatable
• Streaming – because yesterday’s data might be too old.
Shaping Data Sources
Architecting Data Storage
• Storage Mechanisms
• Serialization Frameworks
• Compression Mechanisms
Data Science Lifecycle: Collaborating with Data Scientists
Exercise: Initial problem statement
We are looking to create a system that generates a stream of events and processes those events. We will create a machine learning algorithm to make predictions based on these events. We will monitor the effectiveness of these predictions. Finally, we will detect model drift and retrain our machine learning algorithm to adjust for the new model.
Data Acquisition
Sources: internal static data, API/interactive exchange, streaming data, external data vendors.
Qualities: robust, reliable, governed, performant.
Data Preparation
“Every block of stone has a statue inside it, and it is the task of the sculptor to discover it.” – Michelangelo
Exercise Architecture
Collaborating with Data Scientists
Hypothesis and Modeling
• Data scientists use their understanding of the data to make a guess at what the underlying phenomenon is.
• They create a model that offers insight into the inner workings of the phenomenon.
Evaluation and Interpretation
• Data scientists train their models using training data. Some models can be verified using testing data.
• They interpret the results of the model against reality. Then they can determine whether it is appropriate for use.
Deployment
Exercise: Reality changes
Operations and Monitoring
Optimization: Retrain or Remodel
Retraining
Conclusions
Data scientists are not data engineers.
A data scientist should be supported by two to five data engineers.
Data engineers create reliable, repeatable, governed data pipelines.
Editor's Notes

  1. Data science solutions are more than just modeling. To successfully deliver a data science solution, you need to be able to get the data to the model in the right form in order to train it. After the model is trained, you need to integrate it into your data science pipeline using good data management and software management processes. In other words, you need data engineering to make it work.
  2. Most data scientists are not skilled in software development and data management practices. Their skill set skews toward advanced statistical and machine learning algorithms. These skills are necessary to create a data science solution, but on their own they aren’t sufficient.
  3. While there is some overlap (data scientists who can do data engineering, and data engineers who can do data science), the overlap isn’t particularly deep. A moderately complicated data pipeline may be beyond the skill set of even those crossover data scientists.
  4. An example of a simple pipeline would be processing text files stored in HDFS/S3 with Spark. An example of a moderately complicated data pipeline is to start optimizing your storage with a correctly used NoSQL database that uses a binary format like Avro. More complicated pipelines could include streaming data processing. The additional complexity can turn your data science project into data project science.
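To make the "simple" end of that spectrum concrete, here is a minimal sketch of such a pipeline in plain Python. The file contents, field names, and aggregation are hypothetical; the note describes doing this at scale with Spark over files in HDFS/S3, which this sketch only imitates in miniature.

```python
import csv
import io

# Hypothetical raw CSV extract; in the simple pipeline described above this
# would be text files in HDFS/S3 processed with Spark.
RAW = """sensor_id,reading
a,10.0
b,not_a_number
a,14.0
"""

def parse_rows(text):
    """Parse CSV text, dropping rows whose reading is not numeric."""
    rows = []
    for row in csv.DictReader(io.StringIO(text)):
        try:
            rows.append({"sensor_id": row["sensor_id"],
                         "reading": float(row["reading"])})
        except ValueError:
            continue  # a robust pipeline would log and skip malformed records
    return rows

def mean_by_sensor(rows):
    """Aggregate mean reading per sensor - the step Spark would distribute."""
    totals = {}
    for r in rows:
        s, n = totals.get(r["sensor_id"], (0.0, 0))
        totals[r["sensor_id"]] = (s + r["reading"], n + 1)
    return {k: s / n for k, (s, n) in totals.items()}

summary = mean_by_sensor(parse_rows(RAW))  # {'a': 12.0}; bad row dropped
```

The malformed row for sensor "b" is silently dropped here; handling it without erroring is exactly the "flexible" pipeline quality the deck lists.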
  5. Data engineers build data science pipelines that are: reproducible (across environments, using templated solutions to solve common problems); performant (getting the data into the right place at the right time); robust (handles peaks and valleys in data volume); flexible (can handle different formats without erroring); monitored (communicates error conditions effectively); and governed (uses good data governance practices, especially around the data lifecycle). It’s not enough to do it once.
  6. Data engineers need to understand how to build distributed systems. If they are using Hadoop or other big data technologies, they need to understand how the different ecosystem components can be merged to create a data science solution. If they are using cloud solutions, they need to understand how the different cloud components can be assembled into a solution. It is especially important that they understand the cost implications of different solution architectures.
  7. In some cases the solution for a distributed architecture may rely on technologies like Docker and Kubernetes in order to simplify deployment and make it reliable and repeatable. In other cases, the data engineer may have to handle streaming data from IoT devices using technologies like Kafka and NiFi.
  8. Data engineers need to shape the data in order to transform it from data into information. In some cases this will happen programmatically using languages like Java, Python, Scala, or R. The data may reside in SQL databases or in different forms of NoSQL databases. The kinds of data shaping activities a data engineer might engage in include: profiling, filtering, sorting, projection, type conversion, data imputation, feature abstraction, and segmentation.
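A few of those shaping activities can be sketched in plain Python (in practice a library like pandas would be typical). The records, the mean-fill rule, and the age-50 segmentation cutoff are all illustrative assumptions, not anything prescribed by the deck.

```python
from statistics import mean

# Hypothetical records with a missing value (None) in "age".
records = [
    {"name": "x", "age": 34, "segment": None},
    {"name": "y", "age": None, "segment": None},
    {"name": "z", "age": 58, "segment": None},
]

# Data imputation: replace missing ages with the mean of the observed ones.
observed = [r["age"] for r in records if r["age"] is not None]
fill = mean(observed)
for r in records:
    if r["age"] is None:
        r["age"] = fill

# Segmentation: derive a categorical feature from a numeric one
# (the age-50 cutoff is purely illustrative).
for r in records:
    r["segment"] = "senior" if r["age"] >= 50 else "adult"
```

Mean imputation is the simplest possible choice; a data engineer and data scientist would decide together whether a model-based imputation is warranted, which is precisely the collaboration the later notes describe.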
  9. Architecting data storage means understanding different storage mechanisms, serialization frameworks, and compression mechanisms.
  10. Data engineers collaborate with data scientists in acquiring data and preparing it for use in data science models. Once the model is complete, the data engineers can make sure that it is ready for production workloads and ready for deployment. After the model is in production, data engineers need to monitor its effectiveness. When the model’s performance starts to degrade, the data engineers collaborate with the data scientist to retrain or remodel it in order to restore its effectiveness. Understanding the kinds of inputs and outputs that come from that process enables the data engineer to assist in the development and deployment of the data science model.
  11. Acquire external data using a repeatable process, wrapping it with data governance processes. Acquire internal static data the same way, with its own governance wrapping. Acquire streaming data with a repeatable process. Store the data in such a way that data scientists can use it. Governance concerns include staleness, contractual details, approvals, and compliance.
  12. Preparation of data for the model is an area where data engineers need to collaborate with data scientists to make sure the data is fit for modeling. Activities that may happen include: scaling, feature abstraction, data cleaning, and data imputation.
  13. Core components: observed data (X, Y, result); a messaging platform (Kafka production and consumption); a database; and the machine learning model.
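The note names Kafka as the messaging platform; since a broker isn't available in a slide sketch, the toy below stands in an in-memory queue for the topic and a simple linear rule for the model, just to show how the components (event generation, messaging, prediction, result capture) fit together. All names and the y = 2x relationship are illustrative assumptions.

```python
import queue
import random

events = queue.Queue()  # stands in for the Kafka topic in the exercise

def produce(n, seed=0):
    """Generate n observed-data events (x, y) with a known relationship."""
    rng = random.Random(seed)
    for _ in range(n):
        x = rng.uniform(0, 10)
        y = x * 2 + rng.uniform(-1, 1)  # signal plus bounded noise
        events.put({"x": x, "y": y})

def consume(model):
    """Drain the queue, letting the model predict y and recording results."""
    results = []
    while not events.empty():
        e = events.get()
        pred = model(e["x"])
        results.append({"actual": e["y"], "predicted": pred,
                        "error": abs(e["y"] - pred)})
    return results

produce(100)
results = consume(lambda x: x * 2)  # hypothetical stand-in for the ML model
```

The recorded per-event errors are exactly what the later monitoring step would aggregate into effectiveness metrics.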
  14. The data scientist generally takes the lead when it comes to the creation and curation of the data science model.
  15. The output from the model creation step may not be ready for production. The model may not be ready for scaling or able to yield the desired performance. Data engineers need to work with the data scientists to convert the model into something that is production ready. Finally, the data engineer can integrate it into the data pipeline.
  16. In our exercise, we’re changing the inputs into the pipeline. In reality, this may be changing customer tastes or an environmental shift that makes our model less useful.
  17. In this example, you can see that the performance of the model has slipped. For accuracy and recall, it isn’t immediately apparent that the performance has changed significantly; precision really tells the story. As a data engineer, you need to understand the outputs of the model in order to make sure you are able to monitor its effectiveness.
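The metric definitions behind that observation can be written out directly. The confusion-matrix counts below are invented purely to illustrate the pattern the note describes: after drift, false positives grow, so precision falls sharply while accuracy and recall barely move.

```python
def metrics(tp, fp, fn, tn):
    """Accuracy, precision, and recall from confusion-matrix counts."""
    total = tp + fp + fn + tn
    return {
        "accuracy": (tp + tn) / total,
        "precision": tp / (tp + fp) if tp + fp else 0.0,
        "recall": tp / (tp + fn) if tp + fn else 0.0,
    }

# Illustrative counts (not from the slides): drift adds false positives.
before = metrics(tp=90, fp=10, fn=10, tn=890)
after = metrics(tp=90, fp=60, fn=10, tn=840)

# before: accuracy 0.98, precision 0.90, recall 0.90
# after:  accuracy 0.93, precision 0.60, recall 0.90
```

With a large true-negative population, accuracy absorbs the extra false positives almost invisibly; precision, whose denominator is only the positive predictions, is the metric that exposes the drift.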
  18. If the model’s general parameters just need a bit of adjustment, you may be able to get away with just retraining the model. If something has seriously changed in the underlying environment, you may have to go back to the beginning and identify the features that now govern the desired behavior.
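A minimal sketch of how a monitoring job might decide to trigger that retraining: watch a precision history and fire when it stays below a threshold for several consecutive evaluation periods. The threshold, window size, and history values are hypothetical; real systems would tune these against business cost.

```python
def should_retrain(history, threshold=0.75, window=3):
    """Trigger retraining when precision stays below `threshold` for
    `window` consecutive evaluation periods (both values illustrative)."""
    if len(history) < window:
        return False
    return all(p < threshold for p in history[-window:])

# Illustrative precision-per-period history showing drift setting in.
precision_history = [0.90, 0.88, 0.72, 0.70, 0.68]
```

Requiring several consecutive bad periods, rather than reacting to a single dip, is one common way to avoid retraining on transient noise; if retraining fails to recover the metric, that is the signal to remodel instead.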
  19. With some retraining, our model is back on track.
  20. In conclusion, data scientists are not data engineers. Their skill set may overlap with a data engineer’s, but their focus should be on preparing, creating, evaluating, and explaining models that produce business value. Data engineers complement data scientists. We recommend that a data scientist be supported by two to five data engineers, letting them spend their time optimally, focused on the things they do that bring value. Data engineers create the data pipelines that are needed to realize the business value.