Jan 24, 2024 · In this work, we introduce a method to perform such concept-based interventions on pretrained neural networks, which are not interpretable by ...
Nov 9, 2023 · This approach performs well on the simpler benchmarks, being slightly less intervenable than models fine-tuned for intervenability. However, interventions have ...
To showcase the practical utility of our techniques, we apply them to deep chest X-ray classifiers and show that fine-tuned black boxes are more intervenable ...
May 27, 2024 · Empirically, we explore the intervenability of black-box classifiers on synthetic tabular and natural image benchmarks. We focus on backbone ...
Recently, interpretable machine learning has re-explored concept bottleneck models (CBMs), which predict high-level concepts as an intermediate step toward the final output.
The notion of intervenability is formalised as a measure of the effectiveness of concept-based interventions and used to fine-tune black boxes, ...
May 27, 2024 · This paper introduces a new approach to making "black box" machine learning models more interpretable and intervenable. The authors propose a ...
Jan 25, 2024 · Beyond Concept Bottleneck Models: How to Make Black Boxes Intervenable ...
Interpretable machine learning models called concept bottleneck models allow for users to intervene on predictions by changing high-level concepts.
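The concept-based intervention described above can be sketched in a few lines. This is a minimal illustration, not the paper's actual method: it assumes a toy CBM with a random linear concept predictor `g` and linear label predictor `f` (all names, dimensions, and weights here are illustrative assumptions), and shows an intervention as replacing a predicted concept with a user-provided value before recomputing the prediction.

```python
import numpy as np

# Toy concept bottleneck model (CBM): input -> concepts -> prediction.
# Weights are random placeholders, purely for illustration.
rng = np.random.default_rng(0)
N_FEATURES, N_CONCEPTS = 8, 3

W_g = rng.normal(size=(N_FEATURES, N_CONCEPTS))  # concept predictor g
w_f = rng.normal(size=N_CONCEPTS)                # label predictor f

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x, concept_override=None):
    """Run the CBM; optionally intervene on predicted concepts."""
    concepts = sigmoid(x @ W_g)  # predicted concept probabilities
    if concept_override is not None:
        # Intervention: substitute human-provided concept values.
        for idx, value in concept_override.items():
            concepts[idx] = value
    return sigmoid(concepts @ w_f)  # final prediction from (edited) concepts

x = rng.normal(size=N_FEATURES)
y_plain = predict(x)
y_edit = predict(x, concept_override={0: 1.0})  # user asserts concept 0 holds
print(y_plain, y_edit)
```

In a real CBM the intervention is exactly this substitution at the bottleneck; the intervenability measure discussed in the snippets quantifies how much such edits improve the downstream prediction, which is what lets it be applied to (or fine-tuned into) black-box models that lack an explicit bottleneck.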