DOI: 10.1145/3649158.3657036

Prompting LLM to Enforce and Validate CIS Critical Security Control

Published: 25 June 2024
Abstract

Proper security control enforcement reduces the attack surface and protects organizations against attacks. Organizations such as NIST and the Center for Internet Security (CIS) publish critical security controls (CSCs) as guidelines for enforcing cybersecurity. Automated enforcement and measurability mechanisms for these CSCs, however, still need to be developed. Analyzing the implementations of security products to validate security control enforcement is non-trivial. Moreover, manually analyzing controls, developing measures and metrics to monitor them, and implementing those monitoring mechanisms are resource-intensive tasks that depend heavily on a security analyst's expertise and knowledge. To tackle these problems, we use large language models (LLMs) as a knowledge base and reasoner to extract measures, metrics, and monitoring-mechanism implementation steps from security control descriptions, reducing the dependency on security analysts. Our approach uses few-shot learning with chain-of-thought (CoT) prompting to generate measures and metrics, and generated knowledge prompting to derive metric implementation steps. Our evaluation shows that prompt engineering to extract measures, metrics, and monitoring implementation mechanisms can reduce dependency on humans and semi-automate the extraction process. We also demonstrate metric implementation steps using generated knowledge prompting with LLMs.
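    As a rough illustration of the first stage described above (few-shot chain-of-thought prompting for measure and metric extraction), the sketch below shows how such a prompt might be assembled. It is a minimal sketch, not the authors' code: the exemplar, the control text, and the helper names are invented for illustration and are not taken from the paper or from official CIS control wording.

    # Minimal, illustrative sketch (not the authors' code): build a few-shot
    # chain-of-thought (CoT) prompt that asks an LLM to derive measures and a
    # metric from a security-control description.

    # One worked exemplar demonstrating the expected reasoning chain:
    # control -> what to observe -> measures -> metric. Invented for illustration.
    FEW_SHOT_EXEMPLAR = """\
    Control: Ensure that only authorized user accounts exist on enterprise assets.
    Reasoning: Validating this control requires the set of accounts present on
    assets and the set of authorized accounts, so the two can be compared.
    Measures: number of accounts found on assets; number of those accounts that
    appear in the authorization list.
    Metric: authorized accounts found on assets / total accounts found on assets.
    """

    PROMPT_TEMPLATE = """\
    You are a security analyst. For the control below, reason step by step, then
    output Measures and a Metric in the same format as the example.

    Example:
    {exemplar}
    Control: {control}
    Reasoning:"""


    def build_cot_prompt(control_description: str) -> str:
        """Return the few-shot CoT prompt for a single control description."""
        return PROMPT_TEMPLATE.format(
            exemplar=FEW_SHOT_EXEMPLAR,
            control=control_description,
        )


    if __name__ == "__main__":
        # Illustrative, paraphrased control text (not an official CIS control).
        control = ("Establish and maintain an inventory of service accounts and "
                   "review it on a recurring schedule.")
        prompt = build_cot_prompt(control)
        print(prompt)  # Inspect the assembled prompt before sending it to an LLM.

    Under the same assumptions, the second stage (generated knowledge prompting) would follow a similar pattern: a first prompt asks the model which data sources (logs, commands, APIs) expose each measure, and a second prompt combines that generated knowledge with the metric to request concrete implementation steps.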


        Published In

        SACMAT 2024: Proceedings of the 29th ACM Symposium on Access Control Models and Technologies
        June 2024
        205 pages
        ISBN:9798400704918
        DOI:10.1145/3649158
        Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.

        Publisher

        Association for Computing Machinery

        New York, NY, United States

        Author Tags

        1. account management
        2. critical security control
        3. LLM
        4. prompt engineering

        Qualifiers

        • Research-article

        Conference

        SACMAT 2024

        Acceptance Rates

        Overall Acceptance Rate 177 of 597 submissions, 30%
