Summary
Trust in machine-learning models is a critical factor in accelerating the spread of the fourth industrial revolution. Trust can be achieved by understanding how a model makes its decisions. For white-box models, it is easy to “see” the model and examine its predictions. For black-box models, explaining the decision process is not straightforward. In this work, we compare the performance of several white- and black-box models on two production data sets in an anomaly detection task. Anomalies in production data can, if not identified, significantly influence business decisions and misrepresent the results of the analysis. Identifying anomalies is therefore a necessary step to maintain safety and ensure that the wells perform at full capacity. To achieve this, we compare the performance of K-nearest neighbor (KNN), logistic regression (Logit), support vector machines (SVMs), decision tree (DT), random forest (RF), and the RuleFit classifier (RFC). F1 score and complexity are the two main metrics used to compare the prediction performance and interpretability of these models. In one data set, RFC outperformed the remaining models on both metrics, with F1 = 0.92 and complexity = 0.5. In the second data set, RF achieved the best prediction performance (F1 = 0.84), yet it scored lowest on the complexity (interpretability) metric (0.04). We further analyze the best performing models by explaining their predictions using local interpretable model-agnostic explanations (LIME), which justify the decision made for each instance. Additionally, we evaluate the global rules learned from the white-box models. Local and global analyses enable decision makers to understand how and why the models make certain decisions, which in turn builds trust in the models.
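The comparison setup described above can be illustrated with a minimal sketch, assuming scikit-learn and a labeled binary anomaly data set; the synthetic data, split, and hyperparameters below are placeholders, not the authors' configuration, and RFC is omitted because RuleFit is only available through third-party packages.

```python
# Sketch: compare the candidate classifiers by F1 score on held-out data.
# make_classification stands in for a real (imbalanced) production data set.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score

# Placeholder data: 10% anomalies, mirroring a typical imbalanced setting.
X, y = make_classification(n_samples=1000, n_features=10,
                           weights=[0.9, 0.1], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, random_state=0)

models = {
    "KNN": KNeighborsClassifier(),
    "Logit": LogisticRegression(max_iter=1000),
    "SVM": SVC(),
    "DT": DecisionTreeClassifier(random_state=0),
    "RF": RandomForestClassifier(random_state=0),
}

for name, model in models.items():
    model.fit(X_train, y_train)
    print(f"{name}: F1 = {f1_score(y_test, model.predict(X_test)):.2f}")
```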
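The LIME step can likewise be sketched, continuing from the block above and assuming the `lime` package is installed; the feature names, class names, and chosen instance are illustrative placeholders, not the authors' pipeline.

```python
# Sketch: local explanation of a single RF prediction with LIME,
# reusing X_train, X_test, and the fitted models dict from above.
from lime.lime_tabular import LimeTabularExplainer

explainer = LimeTabularExplainer(
    X_train,
    feature_names=[f"feature_{i}" for i in range(X_train.shape[1])],
    class_names=["normal", "anomaly"],
    mode="classification",
)

# Which features pushed the model toward labeling this instance an anomaly?
rf = models["RF"]
explanation = explainer.explain_instance(
    X_test[0], rf.predict_proba, num_features=5)
print(explanation.as_list())
```

Each (feature, weight) pair returned by `as_list()` gives the local contribution of that feature to the prediction, which is the per-instance justification referred to in the summary.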