DOI: 10.1145/3460120.3484776
Research Article | Open Access

Learning Security Classifiers with Verified Global Robustness Properties

Published: 13 November 2021

Abstract

Many recent works have proposed methods to train classifiers with local robustness properties, which can provably eliminate classes of evasion attacks for most inputs, but not all inputs. Since data distribution shift is very common in security applications, e.g., it is often observed in malware detection, local robustness cannot guarantee that the property holds for unseen inputs at the time the classifier is deployed. Therefore, it is more desirable to enforce global robustness properties, which hold for all inputs and are strictly stronger than local robustness. In this paper, we present a framework and tools for training classifiers that satisfy global robustness properties. We define new notions of global robustness that are more suitable for security classifiers. We design a novel booster-fixer training framework to enforce global robustness properties. We structure our classifier as an ensemble of logic rules and design a new verifier to verify the properties. In our training algorithm, the booster increases the classifier's capacity, and the fixer enforces verified global robustness properties following counterexample-guided inductive synthesis.
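To make the local/global distinction concrete, the two notions contrasted above can be written schematically as follows. These are illustrative formulations rather than the paper's exact definitions, and the monotonicity example uses a hypothetical attacker-increasable feature set $S$.

Local robustness (for one fixed input $x$ and a perturbation budget $\epsilon$):
    $\forall x' .\ \|x' - x\| \le \epsilon \implies f(x') = f(x)$

Global robustness property (quantified over all input pairs, with a precondition $\Phi$ and a postcondition $\Psi$):
    $\forall x, x' .\ \Phi(x, x') \implies \Psi(f(x), f(x'))$

Example instance (monotonicity over a feature set $S$ that an attacker can only increase):
    $\forall x, x' .\ \big(\textstyle\bigwedge_{i \in S} x'_i \ge x_i \,\wedge\, \bigwedge_{j \notin S} x'_j = x_j\big) \implies f(x') \ge f(x)$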
We show that we can train classifiers to satisfy different global robustness properties on three security datasets, and even multiple properties at the same time, with modest impact on the classifier's performance. For example, we train a Twitter spam account classifier to satisfy five global robustness properties, with a 5.4% decrease in true positive rate and a 0.1% increase in false positive rate compared to a baseline XGBoost model that does not satisfy any of the properties.
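As an illustration of the booster-fixer loop described in the abstract, the following is a minimal, self-contained sketch under simplifying assumptions. The Rule ensemble, verify_monotone verifier, and the booster, fixer, and train functions below are hypothetical stand-ins, not the paper's rule language, verifier, or CEGIS procedure, and the single toy property is monotonicity in one feature.

# Minimal sketch of a booster-fixer training loop with counterexample-guided
# repair. Everything here (Rule, verify_monotone, booster, fixer, train) is a
# simplified, hypothetical stand-in used only to illustrate the control flow.

from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class Rule:
    feature: int       # index of the feature the rule tests
    threshold: float   # rule fires when x[feature] >= threshold
    weight: float      # score contributed when the rule fires

def score(rules: List[Rule], x: List[float]) -> float:
    return sum(r.weight for r in rules if x[r.feature] >= r.threshold)

def predict(rules: List[Rule], x: List[float]) -> int:
    return 1 if score(rules, x) >= 0.5 else 0

# Toy global property: the score must be non-decreasing in feature 0 for ALL
# inputs. This simplified "verifier" is conservative: if no feature-0 rule has
# negative weight, monotonicity is guaranteed everywhere; otherwise it reports
# the offending threshold as a candidate counterexample pair.
def verify_monotone(rules: List[Rule]) -> Optional[Tuple[List[float], List[float]]]:
    for r in rules:
        if r.feature == 0 and r.weight < 0:
            below = [r.threshold - 1.0, 0.0]
            above = [r.threshold, 0.0]   # feature 0 grows, this rule's contribution drops
            return below, above
    return None

def booster(rules: List[Rule], X, y) -> List[Rule]:
    # Booster: increase capacity by greedily adding the single rule that most
    # reduces 0/1 training error (a crude stand-in for gradient boosting).
    best, best_err = None, float("inf")
    for f in range(len(X[0])):
        for t in {row[f] for row in X}:
            for w in (-1.0, 1.0):
                cand = rules + [Rule(f, t, w)]
                err = sum(predict(cand, row) != lab for row, lab in zip(X, y))
                if err < best_err:
                    best, best_err = cand, err
    return best if best is not None else rules

def fixer(rules: List[Rule], counterexample) -> List[Rule]:
    # Fixer: repair the violation. A real fixer would use the counterexample to
    # make a minimal, targeted change; for this toy property it suffices to
    # clamp negative weights on feature 0 to zero.
    return [Rule(r.feature, r.threshold, max(r.weight, 0.0)) if r.feature == 0 else r
            for r in rules]

def train(X, y, rounds: int = 5) -> List[Rule]:
    rules: List[Rule] = []
    for _ in range(rounds):
        rules = booster(rules, X, y)                       # grow the ensemble
        while (cex := verify_monotone(rules)) is not None:
            rules = fixer(rules, cex)                      # repair until verified
    return rules

if __name__ == "__main__":
    # Tiny synthetic dataset: feature 0 is an attacker-increasable "spamminess" score.
    X = [[0.0, 1.0], [1.0, 0.0], [2.0, 1.0], [3.0, 0.0]]
    y = [0, 0, 1, 1]
    model = train(X, y)
    assert verify_monotone(model) is None                  # property holds for all inputs
    print([predict(model, row) for row in X])              # expected: [0, 0, 1, 1]

The structural point the sketch preserves is that repair and re-verification alternate until the verifier accepts, so the loop only returns a classifier for which the (toy) global property holds for every input, not just the training set.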





Published In

CCS '21: Proceedings of the 2021 ACM SIGSAC Conference on Computer and Communications Security
November 2021
3558 pages
ISBN: 9781450384544
DOI: 10.1145/3460120
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].


Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 13 November 2021


Author Tags

  1. adversarial machine learning
  2. formal verification
  3. global robustness properties
  4. security classifier
  5. verifiable machine learning

Qualifiers

  • Research-article


Conference

CCS '21: 2021 ACM SIGSAC Conference on Computer and Communications Security
November 15 - 19, 2021
Virtual Event, Republic of Korea

Acceptance Rates

Overall Acceptance Rate 1,261 of 6,999 submissions, 18%



Article Metrics

  • Downloads (last 12 months): 547
  • Downloads (last 6 weeks): 38

Reflects downloads up to 09 Nov 2024


Cited By

  • (2024) Verification of Neural Networks’ Global Robustness. Proceedings of the ACM on Programming Languages, Vol. 8, OOPSLA1, 1010-1039. DOI: 10.1145/3649847. Online publication date: 29-Apr-2024.
  • (2024) Verifying Global Two-Safety Properties in Neural Networks with Confidence. Computer Aided Verification, 329-351. DOI: 10.1007/978-3-031-65630-9_17. Online publication date: 25-Jul-2024.
  • (2023) Efficient global robustness certification of neural networks via interleaving twin-network encoding (extended abstract). Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence, 6498-6503. DOI: 10.24963/ijcai.2023/727. Online publication date: 19-Aug-2023.
  • (2023) A Method for Summarizing and Classifying Evasive Malware. Proceedings of the 26th International Symposium on Research in Attacks, Intrusions and Defenses, 455-470. DOI: 10.1145/3607199.3607207. Online publication date: 16-Oct-2023.
  • (2023) "Get in Researchers; We're Measuring Reproducibility": A Reproducibility Study of Machine Learning Papers in Tier 1 Security Conferences. Proceedings of the 2023 ACM SIGSAC Conference on Computer and Communications Security, 3433-3459. DOI: 10.1145/3576915.3623130. Online publication date: 15-Nov-2023.
  • (2023) Trustworthy AI: From Principles to Practices. ACM Computing Surveys, Vol. 55, 9, 1-46. DOI: 10.1145/3555803. Online publication date: 16-Jan-2023.
  • (2023) Learning Approximate Execution Semantics From Traces for Binary Function Similarity. IEEE Transactions on Software Engineering, Vol. 49, 4, 2776-2790. DOI: 10.1109/TSE.2022.3231621. Online publication date: 1-Apr-2023.
  • (2023) Explainable Global Fairness Verification of Tree-Based Classifiers. 2023 IEEE Conference on Secure and Trustworthy Machine Learning (SaTML), 1-17. DOI: 10.1109/SaTML54575.2023.00011. Online publication date: Feb-2023.
  • (2023) SoK: Certified Robustness for Deep Neural Networks. 2023 IEEE Symposium on Security and Privacy (SP), 1289-1310. DOI: 10.1109/SP46215.2023.10179303. Online publication date: May-2023.
  • (2022) Efficient Global Robustness Certification of Neural Networks via Interleaving Twin-Network Encoding. 2022 Design, Automation & Test in Europe Conference & Exhibition (DATE), 1087-1092. DOI: 10.23919/DATE54114.2022.9774719. Online publication date: 14-Mar-2022.
