Affiliations: [a] Dipartimento di Scienze Pure e Applicate (DiSPeA), Università di Urbino, Urbino, Italy
[b] Dipartimento di Informatica – Scienza e Ingegneria (DISI), Università di Bologna, Bologna, Italy
Correspondence: [*] Federico Sabbatini, Dipartimento di Scienze Pure e Applicate (DiSPeA), Università di Urbino, Urbino, Italy. E-mail: [email protected].
Abstract: Machine learning black boxes, such as deep neural networks, are often hard to interpret because their predictions depend on complicated relationships among numerous internal parameters and input features. This lack of transparency from a human perspective makes their predictions difficult to trust, particularly in critical applications. In this paper we address this issue by presenting the design and implementation of CReEPy, an algorithm for symbolic knowledge extraction based on explainable clustering. Specifically, CReEPy leverages the underlying clustering performed by the ExACT or CREAM algorithms to generate human-interpretable Prolog rules that mimic the behaviour of opaque models. We also introduce CRASH, an algorithm for the automated tuning of the hyper-parameters required by CReEPy. Finally, we report experiments assessing both the human readability and the predictive performance of the proposed knowledge-extraction algorithm on real-world applications, using existing state-of-the-art techniques as benchmarks.
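As a purely illustrative sketch of the kind of output the abstract refers to (the predicate name, feature names, and thresholds below are hypothetical and not taken from the paper), a human-interpretable rule extracted in this style might read:

% Hypothetical Prolog rule: a class is predicted whenever the input
% features fall inside a region identified by the underlying clustering.
iris(PetalLength, PetalWidth, setosa) :-
    PetalLength =< 2.5,
    PetalWidth =< 0.9.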