Abstract
Black box machine learning models are currently being used for high-stakes decision making throughout society, causing problems in healthcare, criminal justice and other domains. Some people hope that creating methods for explaining these black box models will alleviate some of the problems, but trying to explain black box models, rather than creating models that are interpretable in the first place, is likely to perpetuate bad practice and can potentially cause great harm to society. The way forward is to design models that are inherently interpretable. This Perspective clarifies the chasm between explaining black boxes and using inherently interpretable models, outlines several key reasons why explainable black boxes should be avoided in high-stakes decisions, identifies challenges to interpretable machine learning, and provides several example applications where interpretable models could potentially replace black box models in criminal justice, healthcare and computer vision.
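To make the distinction concrete, the following minimal sketch (not taken from this Perspective; it assumes scikit-learn and its bundled breast cancer dataset purely for illustration) contrasts the two approaches: a post-hoc permutation-importance "explanation" of a boosted-tree black box, which summarizes the model but is not the model, versus a shallow decision tree whose printed rules are the model itself and therefore need no separate explanation.

# Illustrative sketch only: post-hoc explanation of a black box vs an inherently
# interpretable model, using scikit-learn and its built-in breast cancer data.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0)

# Black box plus post-hoc explanation: the permutation importances summarize
# the model's behaviour but are a separate object that may be unfaithful to it.
black_box = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
imp = permutation_importance(black_box, X_test, y_test,
                             n_repeats=10, random_state=0)
for i in imp.importances_mean.argsort()[::-1][:5]:
    print(f"{data.feature_names[i]}: {imp.importances_mean[i]:.3f}")

# Inherently interpretable model: a shallow decision tree whose printed rules
# *are* the model, so the "explanation" is exact by construction.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)
print(export_text(tree, feature_names=list(data.feature_names)))
print("black box accuracy:", black_box.score(X_test, y_test))
print("tree accuracy:", tree.score(X_test, y_test))

On small tabular problems such as this one, the shallow tree often performs comparably to the black box, illustrating the article's point that interpretability need not come at the cost of accuracy.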
Acknowledgements
The author thanks F. Wang, T. Wang, C. Chen, O. Li, A. Barnett, T. Dietterich, M. Seltzer, E. Angelino, N. Larus-Stone, E. Mannshart, M. Gupta and several others who helped my thought processes in various ways, and particularly B. Ustun, R. Parr, R. Holte and my father, S. Rudin, who went to considerable efforts to provide thoughtful comments and discussion. The author acknowledges funding from the Laura and John Arnold Foundation, NIH, NSF, DARPA, the Lord Foundation of North Carolina and MIT-Lincoln Laboratory.
Ethics declarations
Competing interests
The author declares no competing interests.
Additional information
Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
About this article
Cite this article
Rudin, C. Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat Mach Intell 1, 206–215 (2019). https://doi.org/10.1038/s42256-019-0048-x