Research article
DOI: 10.1145/3664476.3670905

Introducing a Multi-Perspective xAI Tool for Better Model Explainability

Published: 30 July 2024

Abstract

This paper introduces an innovative tool equipped with a multi-perspective, user-friendly dashboard designed to enhance the explainability of AI models, particularly in cybersecurity. By enabling users to select data samples and apply various xAI methods, the tool provides insightful views into the decision-making processes of AI systems. These methods offer diverse perspectives and deepen the understanding of how models derive their conclusions, thus demystifying the "black box" of AI. The tool’s architecture facilitates easy integration with existing ML models, making it accessible to users regardless of their technical expertise. This approach promotes transparency and fosters trust in AI applications by aligning decision-making with domain knowledge and mitigating potential biases.
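
To make the "multi-perspective" idea concrete, the following is a minimal, hypothetical sketch, not the paper's actual tool or API: two established xAI methods, SHAP and LIME, are applied to the same selected sample so that their feature attributions can be compared side by side. The dataset, model, and every identifier below are illustrative assumptions.

# Minimal sketch: explain one selected sample from two perspectives.
# Illustrative assumption only -- not the dashboard described in this paper.
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

data = load_breast_cancer()
X, y, feature_names = data.data, data.target, list(data.feature_names)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

sample = X[0]  # stands in for the data sample a user picks in the dashboard

# Perspective 1: SHAP attributions computed by a tree explainer
shap_values = shap.TreeExplainer(model).shap_values(sample.reshape(1, -1))

# Perspective 2: a local LIME surrogate model fit around the same sample
lime_explainer = LimeTabularExplainer(X, feature_names=feature_names,
                                      mode="classification")
lime_view = lime_explainer.explain_instance(sample, model.predict_proba,
                                            num_features=5)
print(lime_view.as_list())  # LIME's top local feature contributions

Where the two views agree on the driving features, confidence in the prediction grows; where they diverge, the sample can be flagged for closer inspection against domain knowledge, which is the kind of cross-checking the dashboard is meant to support.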


Published In

ARES '24: Proceedings of the 19th International Conference on Availability, Reliability and Security
July 2024
2032 pages
ISBN: 9798400717185
DOI: 10.1145/3664476

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. AI explainability
  2. machine learning
  3. network intrusion detection


Conference

ARES 2024

Acceptance Rates

Overall Acceptance Rate 228 of 451 submissions, 51%

