Research article | Open access
DOI: 10.1145/3531146.3533090

It’s Just Not That Simple: An Empirical Study of the Accuracy-Explainability Trade-off in Machine Learning for Public Policy

Published: 20 June 2022

Abstract

To achieve high accuracy in machine learning (ML) systems, practitioners often use complex “black-box” models that are not easily understood by humans. The opacity of such models has resulted in public concerns about their use in high-stakes contexts and given rise to two conflicting arguments about the nature, and even the existence, of the accuracy-explainability trade-off. One side postulates that model accuracy and explainability are inversely related, leading practitioners to use black-box models when high accuracy is important. The other side holds that the accuracy-explainability trade-off is rarely observed in practice and that, consequently, simpler interpretable models should always be preferred. Both sides of the argument operate under the assumption that some types of models, such as low-depth decision trees and linear regression, are more explainable, while others, such as neural networks and random forests, are inherently opaque.
Our main contribution is an empirical quantification of the trade-off between model accuracy and explainability in two real-world policy contexts. We quantify explainability in terms of how well a model is understood by a human-in-the-loop (HITL) using a combination of objectively measurable criteria, such as a human’s ability to anticipate a model’s output or identify the most important feature of a model, and subjective measures, such as a human’s perceived understanding of the model. Our key finding is that explainability is not directly related to whether a model is a black-box or interpretable and is more nuanced than previously thought. We find that black-box models may be as explainable to a HITL as interpretable models and identify two possible reasons: (1) that there are weaknesses in the intrinsic explainability of interpretable models and (2) that more information about a model may confuse users, leading them to perform worse on objectively measurable explainability tasks. In summary, contrary to both positions in the literature, we neither observed a direct trade-off between accuracy and explainability nor found interpretable models to be superior in terms of explainability. It’s just not that simple!
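To illustrate the kind of comparison the abstract describes, the sketch below trains a low-depth decision tree (commonly treated as interpretable) and a random forest (commonly treated as a black box) on the same data, compares their held-out accuracy, and reports the single most important feature each model surfaces, mirroring one of the objective explainability tasks mentioned above. This is a minimal sketch using scikit-learn and a stand-in dataset, not the authors' models, policy datasets, or evaluation protocol.

```python
# Minimal sketch (assumed setup, not the paper's pipeline): compare an
# "interpretable" shallow decision tree with a "black-box" random forest
# on held-out accuracy and on the top-ranked feature each model reports.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Stand-in dataset; the paper uses two real-world public-policy contexts.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

models = {
    "interpretable (depth-3 tree)": DecisionTreeClassifier(max_depth=3, random_state=0),
    "black-box (random forest)": RandomForestClassifier(n_estimators=200, random_state=0),
}

for name, model in models.items():
    model.fit(X_train, y_train)
    acc = accuracy_score(y_test, model.predict(X_test))
    # Impurity-based importances; the HITL task of identifying a model's
    # most important feature can be scored against a ranking like this.
    top_feature = X.columns[model.feature_importances_.argmax()]
    print(f"{name}: accuracy={acc:.3f}, top feature={top_feature!r}")
```

On any particular dataset the forest may or may not outperform the shallow tree, and the two models may or may not agree on the top feature; as the abstract notes, neither a direct accuracy-explainability trade-off nor a clear advantage for interpretable models is guaranteed.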

Published In

FAccT '22: Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency
June 2022, 2351 pages
ISBN: 9781450393522
DOI: 10.1145/3531146
Publisher: Association for Computing Machinery, New York, NY, United States

Author Tags

1. explainability
2. machine learning
3. public policy
4. responsible AI
