DOI: 10.1145/3548606.3560609

Research article

AI/ML for Network Security: The Emperor has no Clothes

Published: 07 November 2022

Abstract

Several recent research efforts have proposed Machine Learning (ML)-based solutions that can detect complex patterns in network traffic for a wide range of network security problems. However, without understanding how these black-box models are making their decisions, network operators are reluctant to trust and deploy them in their production settings. One key reason for this reluctance is that these models are prone to the problem of underspecification, defined here as the failure to specify a model in adequate detail. Not unique to the network security domain, this problem manifests itself in ML models that exhibit unexpectedly poor behavior when deployed in real-world settings and has prompted growing interest in developing interpretable ML solutions (e.g., decision trees) for "explaining" to humans how a given black-box model makes its decisions. However, synthesizing such explainable models that capture a given black-box model's decisions with high fidelity while also being practical (i.e., small enough in size for humans to comprehend) is challenging.
In this paper, we focus on synthesizing high-fidelity and low-complexity decision trees to help network operators determine if their ML models suffer from the problem of underspecification. To this end, we present Trustee, a framework that takes an existing ML model and training dataset as input and generates a high-fidelity, easy-to-interpret decision tree and associated trust report as output. Using published ML models that are fully reproducible, we show how practitioners can use Trustee to identify three common instances of model underspecification; i.e., evidence of shortcut learning, presence of spurious correlations, and vulnerability to out-of-distribution samples.
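The surrogate-tree idea at the heart of the abstract can be illustrated with a minimal, stdlib-only sketch. This is not the Trustee implementation or its API; it shows the general distillation recipe under toy assumptions: a hand-written rule stands in for the black-box "teacher," a brute-forced depth-1 decision stump stands in for the synthesized tree, and fidelity is measured as teacher/student agreement on held-out inputs.

```python
# Hedged sketch (NOT the Trustee API): fit a small surrogate "student" tree to
# mimic a black-box "teacher" model, then report fidelity -- how often the
# student agrees with the teacher on held-out inputs.
import random

def black_box(flow):
    # Stand-in for an opaque ML model: flags a flow as malicious (1) when
    # its packet rate is high AND its mean payload size is small.
    pkt_rate, payload = flow
    return 1 if pkt_rate > 100 and payload < 64 else 0

def fit_stump(X, y):
    # Brute-force the best single-feature threshold split (a depth-1 tree),
    # i.e., the smallest possible "easy-to-interpret" surrogate.
    best = None
    for feat in (0, 1):
        for thr in sorted({x[feat] for x in X}):
            for left_label in (0, 1):
                preds = [left_label if x[feat] <= thr else 1 - left_label
                         for x in X]
                acc = sum(p == t for p, t in zip(preds, y)) / len(y)
                if best is None or acc > best[0]:
                    best = (acc, feat, thr, left_label)
    return best[1:]  # (feature index, threshold, label on the left branch)

def stump_predict(stump, x):
    feat, thr, left_label = stump
    return left_label if x[feat] <= thr else 1 - left_label

random.seed(0)
X = [(random.uniform(0, 200), random.uniform(0, 128)) for _ in range(400)]
y = [black_box(x) for x in X]        # train the surrogate on teacher labels
stump = fit_stump(X[:300], y[:300])

# Fidelity: agreement between surrogate and black box on held-out samples.
holdout = X[300:]
fidelity = sum(stump_predict(stump, x) == black_box(x)
               for x in holdout) / len(holdout)
print(f"surrogate fidelity: {fidelity:.2f}")
```

Because the teacher's rule is a conjunction of two features, a depth-1 stump cannot reach perfect fidelity here; the gap between surrogate and teacher is exactly the kind of signal a fidelity report surfaces, while a deeper (but still small) tree would close it.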

Supplementary Material

MP4 File (CCS22-fpb305.mp4)
CCS'22 Paper Presentation Video. In this work, we present Trustee, a novel framework to extract global explanations from any type of black-box machine learning model. Trustee's explanations are given in the form of decision trees that have high accuracy, low complexity, and high stability. Trustee contributes to ongoing efforts that enable end users to gain trust in machine learning models that have been developed for high-stakes decision-making problems (e.g., arising in areas such as network security). In particular, Trustee can be used to detect inductive biases in ML models that have been published in the existing networking and network security literature and are reproducible.




        Published In

        CCS '22: Proceedings of the 2022 ACM SIGSAC Conference on Computer and Communications Security
        November 2022
        3598 pages
        ISBN:9781450394505
        DOI:10.1145/3548606
        Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

        Publisher

        Association for Computing Machinery

        New York, NY, United States


        Author Tags

        1. artificial intelligence
        2. explainability
        3. interpretability
        4. machine learning
        5. network security
        6. trust

        Qualifiers

        • Research-article

        Funding Sources

        • CNPq
        • FAPESP
        • NSF
        • CAPES

        Conference

        CCS '22

        Acceptance Rates

        Overall Acceptance Rate 1,261 of 6,999 submissions, 18%


Bibliometrics & Citations

Article Metrics

• Downloads (last 12 months): 1,298
• Downloads (last 6 weeks): 94
Reflects downloads up to 07 Nov 2024


        Cited By

• (2025) A graph representation framework for encrypted network traffic classification. Computers & Security 148, 104134. DOI: 10.1016/j.cose.2024.104134. Online publication date: Jan 2025.
• (2024) A Comparison of Neural-Network-Based Intrusion Detection against Signature-Based Detection in IoT Networks. Information 15, 3, 164. DOI: 10.3390/info15030164. Online publication date: 14 Mar 2024.
• (2024) No Pictures, Please: Using eXplainable Artificial Intelligence to Demystify CNNs for Encrypted Network Packet Classification. Applied Sciences 14, 13, 5466. DOI: 10.3390/app14135466. Online publication date: 24 Jun 2024.
• (2024) Toward Trustworthy Learning-Enabled Systems with Concept-Based Explanations. In Proceedings of the 23rd ACM Workshop on Hot Topics in Networks, 60-67. DOI: 10.1145/3696348.3696894. Online publication date: 18 Nov 2024.
• (2024) No Need for Details: Effective Anomaly Detection for Process Control Traffic in Absence of Protocol and Attack Knowledge. In Proceedings of the 27th International Symposium on Research in Attacks, Intrusions and Defenses, 278-297. DOI: 10.1145/3678890.3678932. Online publication date: 30 Sep 2024.
• (2024) Harnessing Public Code Repositories to Develop Production-Ready ML Artifacts for Networking. In Proceedings of the 2024 Applied Networking Research Workshop, 100-102. DOI: 10.1145/3673422.3674898. Online publication date: 23 Jul 2024.
• (2024) A Trust and Reputation System for Examining Compliance with Access Control. In Proceedings of the 19th International Conference on Availability, Reliability and Security, 1-10. DOI: 10.1145/3664476.3670883. Online publication date: 30 Jul 2024.
• (2024) A Novel Self-Supervised Framework Based on Masked Autoencoder for Traffic Classification. IEEE/ACM Transactions on Networking 32, 3, 2012-2025. DOI: 10.1109/TNET.2023.3335253. Online publication date: Jun 2024.
• (2024) Please Tell Me More: Privacy Impact of Explainability through the Lens of Membership Inference Attack. In 2024 IEEE Symposium on Security and Privacy (SP), 4791-4809. DOI: 10.1109/SP54263.2024.00120. Online publication date: 19 May 2024.
• (2024) Inferring Visibility of Internet Traffic Matrices Using eXplainable AI. In NOMS 2024 - 2024 IEEE Network Operations and Management Symposium, 1-6. DOI: 10.1109/NOMS59830.2024.10575173. Online publication date: 6 May 2024.
