Abstract
Artificial Intelligence (AI) has become an integral part of our lives, and Explainable Artificial Intelligence (XAI) is increasingly essential to ensure trustworthiness and regulatory compliance. XAI methods make the automatic processing behind data analysis transparent and understandable. This paper provides an overview of the use of XAI in the educational domain. Specifically, it analyzes some of the most commonly used XAI tools, emphasizing their characteristics to help users choose the most suitable one. In addition, two case studies, based on a subset of the Open University Learning Analytics dataset, demonstrate how XAI tools can be applied to educational data.
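To make the case-study setting concrete, the following is a minimal, hypothetical sketch of applying one commonly used XAI tool, LIME, to a classifier trained on tabular student data. The file name oulad_subset.csv, the final_result target column, and the random-forest model are illustrative assumptions, not the paper's actual pipeline.

```python
import numpy as np
import pandas as pd
from lime.lime_tabular import LimeTabularExplainer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Illustrative placeholders: "oulad_subset.csv" and "final_result" stand in
# for a student-oriented OULAD export; they are not the dataset's real schema.
data = pd.read_csv("oulad_subset.csv")
feature_names = [c for c in data.columns if c != "final_result"]
X = data[feature_names].to_numpy()
y = data["final_result"].to_numpy()

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# Any classifier exposing predict_proba can serve as the black box.
model = RandomForestClassifier(random_state=42).fit(X_train, y_train)

explainer = LimeTabularExplainer(
    X_train,
    feature_names=feature_names,
    class_names=np.unique(y).astype(str).tolist(),
    mode="classification",
)

# Local explanation for one student: the top features that pushed the
# prediction towards each outcome, with their weights.
exp = explainer.explain_instance(X_test[0], model.predict_proba, num_features=5)
print(exp.as_list())
```

The same pattern (train a black-box model, then query a post hoc explainer per instance) carries over to other model-agnostic tools such as SHAP.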
Notes
- 2. General Data Protection Regulation (GDPR): https://gdpr-info.eu/.
- 11. Student-oriented subset of the Open University Learning Analytics dataset: https://zenodo.org/records/4264397.
Acknowledgment
Gabriella Casalino acknowledges funding from the European Union PON project Ricerca e Innovazione 2014–2020, DM 1062/2021. Gianluca Zaza and Giovanna Castellano acknowledge the support of the PNRR project FAIR - Future AI Research (PE00000013), Spoke 6 - Symbiotic AI (CUP H97G22000210007), under the NRRP MUR program funded by the NextGenerationEU. Gabriella Casalino, Giovanna Castellano, Riccardo Pecori, and Gianluca Zaza are members of the INdAM GNCS research group and acknowledge funds from the “INdAM-GNCS Project” (CUP E53C22001930001). P. Ducange thanks the PNRR - M4C2 - Investimento 1.3, Partenariato Esteso PE00000013 - “FAIR - Future Artificial Intelligence Research” - Spoke 1 “Human-centered AI”, and the Italian Ministry of University and Research (MUR) in the framework of the FoReLab and CrossLab projects (Departments of Excellence). M. Fazzolari acknowledges the SERICS project (PE14), funded by the NextGenerationEU program.
Copyright information
© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Casalino, G., Castellano, G., Ducange, P., Fazzolari, M., Pecori, R., Zaza, G. (2024). Leveraging Explainable AI Methods and Tools for Educational Data. In: Casalino, G., et al. Higher Education Learning Methodologies and Technologies Online. HELMeTO 2023. Communications in Computer and Information Science, vol 2076. Springer, Cham. https://doi.org/10.1007/978-3-031-67351-1_7
DOI: https://doi.org/10.1007/978-3-031-67351-1_7
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-67350-4
Online ISBN: 978-3-031-67351-1
eBook Packages: Computer Science, Computer Science (R0)