
A Mathematical Framework for Evaluation of SOAR Tools with Limited Survey Data

  • Conference paper
  • In: Computer Security. ESORICS 2021 International Workshops (ESORICS 2021)

Abstract

Security operations centers (SOCs) all over the world are tasked with reacting to cybersecurity alerts of varying severity. Security Orchestration, Automation, and Response (SOAR) tools streamline SOC operators' responses to these alerts. SOAR tool adoption is expensive in both effort and cost, so it is crucial to limit adoption to the most worthwhile tools; yet no research evaluating or comparing SOAR tools exists. The goal of this work is to evaluate several SOAR tools using specific criteria pertaining to their usability. SOC operators were asked to first complete a survey about which SOAR tool aspects are most important. Operators were then assigned a set of SOAR tools, for which they viewed demonstration and overview videos, and completed a second survey in which they evaluated each assigned tool on the aspects identified in the first survey. In addition, operators gave each of their assigned tools an overall rating and ranked their tools in order of preference. Because time constraints prevent SOC operators from thoroughly testing every candidate, we provide a systematic method for downselecting a large pool of SOAR tools to the select few that merit next-step, hands-on evaluation by SOC operators. Furthermore, the analyses conducted in this study help inform future development of SOAR tools, ensuring that the appropriate functions are available for use in a SOC.
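As a toy illustration of the downselection idea described in the abstract (not the paper's actual pipeline; the tool names and ratings below are invented), per-tool overall ratings can be averaged across operators and the top-k tools shortlisted for hands-on evaluation:

```python
# Toy downselection: average each tool's 1-5 overall ratings across the
# operators who evaluated it, then keep the top-k for hands-on testing.
from statistics import mean

# Hypothetical overall ratings (1 = worst, 5 = best), one list per tool.
ratings = {
    "ToolA": [4, 5, 4],
    "ToolB": [3, 3, 4],
    "ToolC": [2, 4, 3],
    "ToolD": [5, 4, 5],
}

k = 2
shortlist = sorted(ratings, key=lambda t: mean(ratings[t]), reverse=True)[:k]
print(shortlist)  # ['ToolD', 'ToolA']
```

A real downselection would also fold in the aspect-level ratings and preference rankings the abstract describes; this sketch only shows the shape of the computation.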

This manuscript has been co-authored by UT-Battelle, LLC, under contract DE-AC05-00OR22725 with the US Department of Energy (DOE). The US government retains and the publisher, by accepting the article for publication, acknowledges that the US government retains a nonexclusive, paid-up, irrevocable, worldwide license to publish or reproduce the published form of this manuscript, or allow others to do so, for US government purposes. DOE will provide public access to these results of federally sponsored research in accordance with the DOE Public Access Plan (http://energy.gov/downloads/doe-public-access-plan).



Notes

  1. The IRB works to ensure the rights of participants in any human subject study are fully protected.

  2. https://www.nltk.org/_modules/nltk/sentiment/vader.html.

  3. https://huggingface.co/cardiffnlp/twitter-roberta-base-sentiment.
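Footnotes 2 and 3 point to the two off-the-shelf sentiment analyzers referenced by the paper. As a minimal sketch of how a free-text survey comment could be scored with each (the authors' exact preprocessing is not shown on this page, and the sample comment is invented), both tools can be driven in a few lines of Python:

```python
# Sentiment scoring with the two analyzers named in the footnotes.
# Requires: pip install nltk transformers torch
import nltk
from nltk.sentiment.vader import SentimentIntensityAnalyzer
from transformers import pipeline

nltk.download("vader_lexicon", quiet=True)  # one-time lexicon download

comment = "The playbook editor is intuitive, but alert triage felt clunky."

# VADER: rule-based; returns neg/neu/pos proportions plus a
# compound score in [-1, 1].
vader = SentimentIntensityAnalyzer()
print(vader.polarity_scores(comment))

# RoBERTa model fine-tuned on tweets; its labels are
# LABEL_0 = negative, LABEL_1 = neutral, LABEL_2 = positive.
roberta = pipeline(
    "sentiment-analysis",
    model="cardiffnlp/twitter-roberta-base-sentiment",
)
print(roberta(comment))
```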


Acknowledgements

Special thanks to Jeff Meredith for assisting with the website. The research is based upon work supported by the Department of Defense (DOD), Naval Information Warfare Systems Command (NAVWAR), via the Department of Energy (DOE) under contract DE-AC05-00OR22725. The views and conclusions contained herein are those of the authors and should not be interpreted as representing the official policies or endorsements, either expressed or implied, of the DOD, NAVWAR, or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation thereon.

Author information


Corresponding author

Correspondence to Savannah Norem.


A Appendix

Table 3. Four methods used to report or derive overall ratings of the tools. Raw score: average of user-defined ratings; PR: average of overall ratings derived from PageRank algorithm on the raw data; ML: average of overall ratings derived from machine learning predictions on populated data; ML + PR: average of overall ratings derived from machine learning and PageRank algorithm on populated data.
Table 4. Survey questionnaire given to the SOC operators. The survey was delivered electronically and included 4 pre-survey questions, 10 questions about the specific tools (including their aspects), and 1 question about the overall ranking. All rating questions were scored 1–5, with 1 being the worst and 5 being the best. On the two ranking questions, the operators ranked their most preferred aspect/tool as 1 and their least preferred as the highest value.
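Table 3's PR method derives overall ratings from the PageRank algorithm applied to the raw data. The precise graph construction is not reproduced on this page, so the sketch below rests on an assumption: each operator's ranking contributes directed, weighted edges from every less-preferred tool to every more-preferred tool, so that PageRank mass accumulates at tools that win pairwise comparisons. The tool names and rankings are hypothetical.

```python
# Illustrative PageRank-based rating derivation (assumed construction:
# an edge points from a less-preferred tool to a more-preferred tool,
# so "preference" flows toward tools that win pairwise comparisons).
import itertools
import networkx as nx

# Hypothetical per-operator rankings: position 0 = most preferred (rank 1).
rankings = [
    ["ToolA", "ToolB", "ToolC"],
    ["ToolB", "ToolA", "ToolC"],
    ["ToolA", "ToolC", "ToolB"],
]

G = nx.DiGraph()
for ranking in rankings:
    for better, worse in itertools.combinations(ranking, 2):
        # combinations() preserves list order, so `better` outranks `worse`.
        if G.has_edge(worse, better):
            G[worse][better]["weight"] += 1.0
        else:
            G.add_edge(worse, better, weight=1.0)

scores = nx.pagerank(G, weight="weight")
for tool, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{tool}: {score:.3f}")
```

Edge weights count how many operators preferred one tool over another, so repeated agreement strengthens the corresponding edge before PageRank is run.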


Copyright information

© 2022 Springer Nature Switzerland AG

About this paper


Cite this paper

Norem, S., Rice, A.E., Erwin, S., Bridges, R.A., Oesch, S., Weber, B. (2022). A Mathematical Framework for Evaluation of SOAR Tools with Limited Survey Data. In: Katsikas, S., et al. (eds.) Computer Security. ESORICS 2021 International Workshops. ESORICS 2021. Lecture Notes in Computer Science, vol. 13106. Springer, Cham. https://doi.org/10.1007/978-3-030-95484-0_32

  • DOI: https://doi.org/10.1007/978-3-030-95484-0_32

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-95483-3

  • Online ISBN: 978-3-030-95484-0

  • eBook Packages: Computer Science; Computer Science (R0)
