1. Comparison of different types of explanations and their impact on the user
Ethical and legal considerations play a central role in the deployment of Artificial Intelligence systems. These include data privacy requirements (e.g., as set out in the GDPR) and accountability (e.g., the GDPR's "right to explanation"). It is therefore crucial to develop frameworks that allow for machine learning with minimal access to a user's data, while at the same time being able to provide a user-friendly justification for any decision or recommendation made by an ML/DL system.
Explainable AI aims at shedding light on the decisions taken by ML/DL black boxes. While a plethora of approaches have been proposed and developed to generate post-hoc explanations of ML/DL black boxes, there is not yet a clear understanding of, or agreement on, which properties make an explanation good.
This thesis will focus on the analysis of the different forms of explanation proposed in the social and cognitive science literature and in machine learning. The aim is, on the one hand, to compile a systematic review of the literature and, on the other, to design a user study that evaluates these different forms of explanation and their impact on users' comprehension. Finally, the thesis will propose the properties that make an explanation good.
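To make the contrast between forms of explanation concrete, the sketch below (a Python toy, with an assumed dataset, model, and search procedure that are not part of the proposed study) produces two different explanations for the same prediction: a feature-attribution explanation and a counterfactual one.

    # Illustrative sketch (assumed setup): two explanation forms for one prediction.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Toy data: two features, binary label driven mostly by feature 0.
    X = rng.normal(size=(200, 2))
    y = (X[:, 0] + 0.3 * rng.normal(size=200) > 0).astype(int)

    model = LogisticRegression().fit(X, y)
    x = X[0]                                   # instance to explain
    pred = model.predict(x.reshape(1, -1))[0]
    print("prediction:", pred)

    # Form 1: feature attribution -- per-feature contribution to the logit.
    contributions = model.coef_[0] * x
    print("feature contributions:", dict(enumerate(contributions.round(3))))

    # Form 2: counterfactual -- a small single-feature change that flips the prediction.
    best = None
    for i in range(x.shape[0]):
        for delta in sorted(np.linspace(-3, 3, 121), key=abs):
            x_cf = x.copy()
            x_cf[i] += delta
            if model.predict(x_cf.reshape(1, -1))[0] != pred:
                if best is None or abs(delta) < abs(best[1]):
                    best = (i, delta)
                break
    if best is not None:
        print(f"counterfactual: change feature {best[0]} by {best[1]:.2f} to flip the outcome")

A user study along the lines sketched in this proposal could then ask which of the two outputs participants find easier to understand and act upon.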
2. Generation and reasoning with semantic explanations
Ethical and legal considerations play a central role in the deployment of Artificial Intelligence systems. These include data privacy requirements (e.g., as set out in the GDPR) and accountability (e.g., the GDPR's "right to explanation"). It is therefore crucial to develop frameworks that allow for machine learning with minimal access to a user's data, while at the same time being able to provide a user-friendly justification for any decision or recommendation made by an ML/DL system.
Explainable AI aims at shedding light on the decisions taken by ML/DL black boxes. While a plethora of approaches have been proposed and developed to generate post-hoc explanations of ML/DL black boxes, only a few of them take into account knowledge encoded in the form of logical knowledge bases, ontologies, and/or knowledge graphs to attach meaningful semantics to these explanations.
This thesis will focus on how knowledge extraction techniques for black-box models can be enriched with semantic knowledge to produce explanations that are understandable to humans. It is crucial that this explainability be multi-level: the explanation that a lay user will find most understandable does not coincide with the one that would be required by a domain expert, or with the one that would be needed in a court case. The aim of the thesis is to provide knowledge refinement techniques that support customizing the levels of specificity and generality of explanations for specific user profiles.
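A minimal sketch of the kind of knowledge refinement envisaged here, assuming a hand-written concept hierarchy and feature names that are purely illustrative: the same feature-level explanation is kept verbatim for an expert profile and lifted to broader concepts for a lay-user profile.

    # Illustrative sketch (assumed names throughout): tailoring the specificity of an
    # explanation to a user profile by climbing a small concept hierarchy.

    # Toy is-a hierarchy mapping specific features to more general concepts.
    IS_A = {
        "serum_creatinine": "kidney_function_marker",
        "blood_urea_nitrogen": "kidney_function_marker",
        "kidney_function_marker": "lab_result",
        "systolic_bp": "blood_pressure",
        "blood_pressure": "vital_sign",
    }

    def generalize(term: str, levels: int) -> str:
        """Replace a term by an ancestor concept, climbing `levels` steps up the hierarchy."""
        for _ in range(levels):
            if term not in IS_A:
                break
            term = IS_A[term]
        return term

    # A feature-attribution explanation extracted from some black-box model (assumed values).
    explanation = {"serum_creatinine": 0.42, "systolic_bp": 0.31, "blood_urea_nitrogen": 0.15}

    # Expert profile keeps the specific terms; lay profile generalizes two levels up.
    PROFILES = {"expert": 0, "lay_user": 2}

    for profile, levels in PROFILES.items():
        merged = {}
        for feature, weight in explanation.items():
            concept = generalize(feature, levels)
            merged[concept] = merged.get(concept, 0.0) + weight  # aggregate weights per concept
        print(profile, "->", merged)

In a real system the hierarchy would come from an ontology or knowledge graph rather than a hand-written dictionary, but the refinement step (abstracting and aggregating explanation terms per profile) is the same.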
3. Explainable recommendations
In recent years, we have witnessed the advent of increasingly accurate and powerful recommendation algorithms and techniques, capable of effectively assessing user tastes and predicting the information they are likely to be interested in.
Most of these approaches are based on the collaborative paradigm (often exploiting machine learning techniques) and do not take into account the enormous amount of knowledge, both structured and unstructured, that describes the domain of interest of the recommendation scenario.
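For illustration, a bare-bones instance of the collaborative paradigm, assuming a toy rating matrix: recommendations are derived from rating co-occurrence alone, with no domain knowledge involved.

    # Illustrative sketch of the collaborative paradigm: item-item similarity computed
    # purely from the user-item rating matrix, ignoring any domain knowledge.
    import numpy as np

    # Toy rating matrix (rows: users, columns: items); 0 means "not rated" (assumed data).
    R = np.array([
        [5, 4, 0, 1],
        [4, 5, 1, 0],
        [1, 0, 5, 4],
        [0, 1, 4, 5],
    ], dtype=float)

    # Cosine similarity between item columns.
    norms = np.linalg.norm(R, axis=0)
    sim = (R.T @ R) / (np.outer(norms, norms) + 1e-12)

    user = 0
    scores = R[user] @ sim          # predicted affinity: ratings weighted by item similarity
    scores[R[user] > 0] = -np.inf   # mask items the user already rated
    print("recommend item", int(np.argmax(scores)), "to user", user)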
This thesis will focus on exploiting external, explicit knowledge sources to feed a recommendation engine.
The aim of the thesis is to go beyond the accuracy objective of traditional recommendation algorithms and to propose a recommendation algorithm that, by exploiting the knowledge encoded in ontological and logical knowledge bases, knowledge graphs, and/or the semantics extracted from semi-structured textual sources, is able to provide novel and diverse results as well as to generate explanations of the recommended items.
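As a hedged sketch of this direction, assuming a toy knowledge graph with invented triples: candidate items are scored by the paths that connect them to a user's liked items, and one such path is kept as a human-readable explanation of the recommendation.

    # Illustrative sketch (assumed toy knowledge graph): recommending items reachable
    # from a user's liked items via shared entities, and keeping the connecting path
    # as a human-readable explanation.
    from collections import defaultdict

    # Toy knowledge graph as (subject, predicate, object) triples -- assumed data.
    triples = [
        ("The Matrix", "directed_by", "Wachowskis"),
        ("Cloud Atlas", "directed_by", "Wachowskis"),
        ("The Matrix", "has_genre", "science_fiction"),
        ("Blade Runner", "has_genre", "science_fiction"),
    ]

    # Index: entity -> list of (predicate, neighbour) edges, in both directions.
    graph = defaultdict(list)
    for s, p, o in triples:
        graph[s].append((p, o))
        graph[o].append((p + "^-1", s))

    def recommend(liked, k=2):
        """Score candidates by the number of 2-hop paths from liked items,
        returning each candidate with one explanation path (toy graph: all
        2-hop neighbours of items happen to be items)."""
        scores, reasons = defaultdict(int), {}
        for item in liked:
            for p1, middle in graph[item]:
                for p2, cand in graph[middle]:
                    if cand in liked or cand == item:
                        continue
                    scores[cand] += 1
                    reasons.setdefault(cand, f"{item} --{p1}--> {middle} --{p2}--> {cand}")
        ranked = sorted(scores.items(), key=lambda kv: -kv[1])[:k]
        return [(cand, score, reasons[cand]) for cand, score in ranked]

    for cand, score, path in recommend({"The Matrix"}):
        print(cand, score, "because", path)

The explanation here is simply the graph path itself; the thesis would investigate richer knowledge sources, scoring schemes that also reward novelty and diversity, and ways of verbalizing such paths for end users.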