Leveraging Explainable AI Methods and Tools for Educational Data

  • Conference paper
  • In: Higher Education Learning Methodologies and Technologies Online (HELMeTO 2023)

Abstract

Artificial Intelligence (AI) has become an integral part of our lives, and Explainable Artificial Intelligence (XAI) is becoming more essential to ensure trustworthiness and comply with regulations. XAI methodologies help to explain the automatic processing behind data analysis. This paper provides an overview of the use of XAI in the educational domain. Specifically, it analyzes some of the most commonly used XAI tools, emphasizing their characteristics to help users choose the most suitable one. Additionally, two case studies have been analyzed to demonstrate how to use XAI tools in the educational domain by exploiting a subset of the Open University dataset.
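As a concrete illustration of the kind of pipeline the paper discusses, the following is a minimal sketch that trains a classifier on a tabular student dataset and explains it with SHAP, one of the widely used XAI tools. The file name, column names, and model choice here are hypothetical placeholders, not the paper's actual case-study setup.

    import pandas as pd
    import shap
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    # Hypothetical student-oriented subset of the OULAD dataset:
    # the path and column names are illustrative assumptions.
    df = pd.read_csv("oulad_student_subset.csv")
    X = pd.get_dummies(df.drop(columns=["final_result"]))  # one-hot encode categoricals
    y = df["final_result"]  # e.g. a pass/fail outcome label

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=42)

    # A tree ensemble pairs naturally with SHAP's fast TreeExplainer.
    model = RandomForestClassifier(n_estimators=100, random_state=42)
    model.fit(X_train, y_train)

    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X_test)

    # Global explanation: which features drive predictions overall.
    shap.summary_plot(shap_values, X_test)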

Notes

  1. EU AI Act: https://www.europarl.europa.eu/news/en/headlines/society/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence.
  2. General Data Protection Regulation (GDPR): https://gdpr-info.eu/.
  3. https://demos.citius.usc.es/ExpliClas/.
  4. https://www.cs.waikato.ac.nz/ml/weka/.
  5. https://orangedatamining.com/.
  6. https://github.com/oegedijk/explainerdashboard (a usage sketch follows this list).
  7. https://eli5.readthedocs.io/en/latest/.
  8. https://github.com/GionatanG/skmoefs.
  9. https://bitbucket.org/mbarsacchi/fuzzyml/src/master/.
  10. https://github.com/Fisdet/FISDeT.
  11. Student-oriented subset of the Open University Learning Analytics dataset: https://zenodo.org/records/4264397.
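Several of the tools listed above wrap a fitted model with very little code. As an example, a minimal, hypothetical use of explainerdashboard (note 6) on the classifier from the earlier SHAP sketch could look like this; model, X_test, and y_test are assumed from that sketch.

    from explainerdashboard import ClassifierExplainer, ExplainerDashboard

    # model, X_test, y_test are assumed from the earlier SHAP sketch.
    explainer = ClassifierExplainer(model, X_test, y_test)

    # Serves an interactive web dashboard with SHAP-based feature
    # importances, per-prediction explanations, and what-if analysis.
    ExplainerDashboard(explainer).run()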


Acknowledgment

Gabriella Casalino acknowledges funding from the European Union PON project Ricerca e Innovazione 2014–2020, DM 1062/2021. Gianluca Zaza and Giovanna Castellano acknowledge the support of the PNRR project FAIR - Future AI Research (PE00000013), Spoke 6 - Symbiotic AI (CUP H97G22000210007), under the NRRP MUR program funded by the NextGenerationEU. Gabriella Casalino, Giovanna Castellano, Riccardo Pecori and Gianluca Zaza are members of the INdAM GNCS research group. G. Casalino, G. Castellano, R. Pecori and G. Zaza acknowledge funds from the “INdAM-GNCS Project” (CUP E53C22001930001). P. Ducange thanks the PNRR - M4C2 - Investimento 1.3, Partenariato Esteso PE00000013 - “FAIR - Future Artificial Intelligence Research” - Spoke 1 “Human-centered AI”, and the Italian Ministry of University and Research (MUR) in the framework of the FoReLab and CrossLab projects (Departments of Excellence). M. Fazzolari acknowledges the SERICS project (PE14), funded by the NextGenerationEU program.

Author information

Corresponding author

Correspondence to Riccardo Pecori.


Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Casalino, G., Castellano, G., Ducange, P., Fazzolari, M., Pecori, R., Zaza, G. (2024). Leveraging Explainable AI Methods and Tools for Educational Data. In: Casalino, G., et al. Higher Education Learning Methodologies and Technologies Online. HELMeTO 2023. Communications in Computer and Information Science, vol 2076. Springer, Cham. https://doi.org/10.1007/978-3-031-67351-1_7


  • DOI: https://doi.org/10.1007/978-3-031-67351-1_7

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-67350-4

  • Online ISBN: 978-3-031-67351-1

  • eBook Packages: Computer Science, Computer Science (R0)
