Abstract
Security operations centers (SOCs) worldwide are tasked with reacting to cybersecurity alerts of varying severity. Security Orchestration, Automation, and Response (SOAR) tools streamline how SOC operators respond to these alerts. Because adopting a SOAR tool is costly in both effort and money, it is crucial to limit adoption to the most worthwhile candidates; yet no research evaluating or comparing SOAR tools exists. The goal of this work is to evaluate several SOAR tools against specific usability criteria. SOC operators first completed a survey identifying which SOAR tool aspects are most important. Each operator was then assigned a set of SOAR tools, viewed demonstration and overview videos for each, and completed a second survey rating each assigned tool on the aspects from the first survey. In addition, operators gave each assigned tool an overall rating and ranked their tools in order of preference. Because SOC operators lack the time to thoroughly test every candidate, we provide a systematic method for downselecting a large pool of SOAR tools to the select few that merit next-step hands-on evaluation. Furthermore, the analyses conducted in this study help inform future SOAR tool development, ensuring the appropriate functions are available for use in a SOC.
This manuscript has been co-authored by UT-Battelle, LLC, under contract DE-AC05-00OR22725 with the US Department of Energy (DOE). The US government retains and the publisher, by accepting the article for publication, acknowledges that the US government retains a nonexclusive, paid-up, irrevocable, worldwide license to publish or reproduce the published form of this manuscript, or allow others to do so, for US government purposes. DOE will provide public access to these results of federally sponsored research in accordance with the DOE Public Access Plan (http://energy.gov/downloads/doe-public-access-plan).
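To make the downselection step concrete, below is a minimal Python sketch, not the authors' actual framework, of how aspect-importance weights from the first survey might be combined with per-aspect tool ratings from the second survey into one overall score per tool; all tool names, aspects, weights, and the weighted-average rule are illustrative assumptions.

# Hypothetical sketch: aggregate multi-aspect SOAR tool ratings into a ranking.
# Aspect names, weights, tools, and scores below are illustrative only.

# Importance weights for tool aspects, as might be elicited in the first survey.
aspect_weights = {"automation": 5, "integration": 4, "usability": 4, "reporting": 2}

# Per-aspect ratings (1-5) for each tool, as might come from the second survey.
ratings = {
    "Tool A": {"automation": 4, "integration": 5, "usability": 3, "reporting": 4},
    "Tool B": {"automation": 5, "integration": 3, "usability": 4, "reporting": 2},
    "Tool C": {"automation": 3, "integration": 4, "usability": 5, "reporting": 5},
}

def overall_score(tool_ratings, weights):
    # Weighted average of per-aspect ratings, normalized by total weight.
    total_weight = sum(weights.values())
    return sum(weights[a] * r for a, r in tool_ratings.items()) / total_weight

# Rank tools by weighted score; the top few would merit hands-on evaluation.
scores = {tool: overall_score(r, aspect_weights) for tool, r in ratings.items()}
for tool, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{tool}: {score:.2f}")

On these hypothetical numbers the sketch shortlists Tool C first; in practice such scores would only narrow the pool for subsequent hands-on testing.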
Notes
1. The IRB works to ensure the rights of participants in any human subject study are fully protected.
Acknowledgements
Special thanks to Jeff Meredith for assisting with the website. This research is based upon work supported by the Department of Defense (DOD), Naval Information Warfare Systems Command (NAVWAR), via the Department of Energy (DOE) under contract DE-AC05-00OR22725. The views and conclusions contained herein are those of the authors and should not be interpreted as representing the official policies or endorsements, either expressed or implied, of the DOD, NAVWAR, or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation thereon.
Copyright information
© 2022 Springer Nature Switzerland AG
Cite this paper
Norem, S., Rice, A.E., Erwin, S., Bridges, R.A., Oesch, S., Weber, B. (2022). A Mathematical Framework for Evaluation of SOAR Tools with Limited Survey Data. In: Katsikas, S., et al. (eds.) Computer Security. ESORICS 2021 International Workshops. Lecture Notes in Computer Science, vol. 13106. Springer, Cham. https://doi.org/10.1007/978-3-030-95484-0_32
DOI: https://doi.org/10.1007/978-3-030-95484-0_32
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-95483-3
Online ISBN: 978-3-030-95484-0