Abstract
With the large-scale application of machine learning across many fields, the security of models has attracted considerable attention. Recent studies have shown that tree-based models are vulnerable to adversarial examples, a weakness that can lead to serious security risks, so it is important to verify the safety of such models. In this paper, we study the robustness verification problem for Random Forests (RF), a fundamental machine learning technique. We reduce the verification of an RF model to a constraint-solving problem that can be handled by modern SMT solvers. We then present a novel method based on the minimal unsatisfiable core to explain the robustness of the model on a given sample. Furthermore, we propose an algorithm for measuring Local Robustness Feature Importance (LRFI). LRFI builds a link between features and robustness, identifying which features contribute most to the robustness of the model. We have implemented these methods in a tool named VARF and evaluated it on two public datasets, demonstrating its scalability and its ability to verify large models.
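To make the reduction concrete, below is a minimal sketch, not the paper's actual VARF encoding, of how such a robustness query can be posed to the Z3 SMT solver through its Python bindings. The two-stump ensemble, the sample (0.2, 0.1), and the perturbation budget eps are all illustrative assumptions. The model is robust at the sample exactly when the constraints "the perturbed input stays within the eps-ball" and "the ensemble's vote flips" are jointly unsatisfiable, and tracking each assertion with a Boolean label lets the solver report an unsatisfiable core, the kind of object an explanation method can build on (Z3's core is not guaranteed minimal without further reduction).

# A minimal sketch (illustrative assumptions throughout, not the VARF encoding)
# of reducing robustness verification of a tiny "random forest" to an SMT query.
from z3 import Real, Bool, Solver, If, And, unsat

# Two decision stumps voting on features x0, x1; class 1 wins if the vote sum > 0.
x0, x1 = Real("x0"), Real("x1")
vote0 = If(x0 <= 0.5, 1, -1)   # tree 0 splits on x0 <= 0.5
vote1 = If(x1 <= 0.3, 1, -1)   # tree 1 splits on x1 <= 0.3

# Sample (0.2, 0.1) is classified as class 1 by both stumps; eps is an
# L-infinity perturbation budget (both values are illustrative).
eps = 0.05

s = Solver()
# Track each constraint with a Boolean label so Z3 can report an unsat core.
in_ball_x0 = Bool("in_ball_x0")
in_ball_x1 = Bool("in_ball_x1")
misclass = Bool("misclassified")
s.assert_and_track(And(x0 >= 0.2 - eps, x0 <= 0.2 + eps), in_ball_x0)
s.assert_and_track(And(x1 >= 0.1 - eps, x1 <= 0.1 + eps), in_ball_x1)
# An adversarial example exists iff the vote flips to class 0 inside the ball.
s.assert_and_track(vote0 + vote1 <= 0, misclass)

if s.check() == unsat:
    # No adversarial example: the model is robust at this sample for this eps.
    # The core names the constraints that jointly rule out an attack.
    print("robust; unsat core:", s.unsat_core())
else:
    # The satisfying model is a concrete adversarial example in the eps-ball.
    print("adversarial example:", s.model())

Hypothetically, repeating such queries while relaxing one feature's ball constraint at a time suggests how a per-feature robustness importance score such as LRFI could be derived, though the paper's actual algorithm may differ.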
Acknowledgments
This work is partially supported by STCSM Projects (No. 18QB1402000 and No. 18ZR1411600), SHEITC Project (2018-GYHLW-02012) and the Fundamental Research Funds for the Central Universities.
Cite this paper
Nie, C., Shi, J., Huang, Y.: VARF: verifying and analyzing robustness of random forests. In: Lin, S.W., Hou, Z., Mahony, B. (eds.) Formal Methods and Software Engineering, ICFEM 2020. LNCS, vol. 12531. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-63406-3_10