
Security Evaluation of Pattern Classifiers under Attack

Published: 01 April 2014

Abstract

Pattern classification systems are commonly used in adversarial applications, such as biometric authentication, network intrusion detection, and spam filtering, in which data can be purposely manipulated by humans to undermine their operation. As this adversarial scenario is not taken into account by classical design methods, pattern classification systems may exhibit vulnerabilities whose exploitation may severely degrade their performance and, consequently, limit their practical utility. Extending pattern classification theory and design methods to adversarial settings is thus a novel and highly relevant research direction, which has not yet been pursued in a systematic way. In this paper, we address one of the main open issues: evaluating, at the design phase, the security of pattern classifiers, namely, the performance degradation they may incur under potential attacks during operation. We propose a framework for the empirical evaluation of classifier security that formalizes and generalizes the main ideas proposed in the literature, and we give examples of its use in three real applications. The reported results show that security evaluation can provide a more complete understanding of a classifier's behavior in adversarial environments and lead to better design choices.
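The core idea of the abstract — measuring a classifier's performance degradation under simulated attacks at design time — can be illustrated with a minimal sketch. This is not the paper's framework; it is an assumed toy setup (synthetic 2D data, a hand-rolled logistic-regression classifier, and a bounded-budget evasion attack that shifts malicious samples against the decision gradient) showing how one might compare detection rates with and without an attack:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic two-class data: legitimate (y=0) vs. malicious (y=1) samples.
X_legit = rng.normal(loc=-1.0, scale=1.0, size=(200, 2))
X_malic = rng.normal(loc=+1.0, scale=1.0, size=(200, 2))
X = np.vstack([X_legit, X_malic])
y = np.concatenate([np.zeros(200), np.ones(200)])

# Train a linear classifier via plain gradient-descent logistic regression.
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * np.mean(p - y)

def detection_rate(samples):
    """Fraction of the given malicious samples the classifier flags (score > 0)."""
    return float(np.mean(samples @ w + b > 0))

# Attack simulation: each malicious sample moves a bounded distance
# (budget eps, an assumed parameter) against the weight vector to evade.
eps = 1.5
X_evade = X_malic - eps * w / np.linalg.norm(w)

print(f"detection rate, no attack:    {detection_rate(X_malic):.2f}")
print(f"detection rate, under attack: {detection_rate(X_evade):.2f}")
```

Comparing the two rates as the attacker's budget `eps` grows yields a security-evaluation curve: a classifier whose detection rate collapses under small perturbations is vulnerable even if its standard test accuracy is high, which is exactly the kind of insight the paper argues classical evaluation misses.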


Published In

IEEE Transactions on Knowledge and Data Engineering  Volume 26, Issue 4
April 2014
259 pages

Publisher

IEEE Educational Activities Department

United States


Qualifiers

  • Research-article


Cited By

  • (2024) An Efficient PDF Malware Detection Method Using Highly Compact Features. Proceedings of the ACM Symposium on Document Engineering 2024, pp. 1-4. doi:10.1145/3685650.3685668. Online: 20-Aug-2024
  • (2024) Novel poisoning attacks for clustering methods via robust feature generation. Neurocomputing, 598:C. doi:10.1016/j.neucom.2024.127925. Online: 14-Sep-2024
  • (2024) Intelligent architecture and platforms for private edge cloud systems. Future Generation Computer Systems, 160:C, pp. 457-471. doi:10.1016/j.future.2024.06.024. Online: 1-Nov-2024
  • (2024) A survey on robustness attacks for deep code models. Automated Software Engineering, 31:2. doi:10.1007/s10515-024-00464-7. Online: 1-Nov-2024
  • (2024) Bayesian Learned Models Can Detect Adversarial Malware for Free. Computer Security – ESORICS 2024, pp. 45-65. doi:10.1007/978-3-031-70879-4_3. Online: 16-Sep-2024
  • (2023) Enhancing the antidote. Proceedings of the Thirty-Seventh AAAI Conference on Artificial Intelligence, pp. 8861-8869. doi:10.1609/aaai.v37i7.26065. Online: 7-Feb-2023
  • (2023) Feature-space Bayesian adversarial learning improved malware detector robustness. Proceedings of the Thirty-Seventh AAAI Conference on Artificial Intelligence, pp. 14783-14791. doi:10.1609/aaai.v37i12.26727. Online: 7-Feb-2023
  • (2023) Breaking Boundaries: Balancing Performance and Robustness in Deep Wireless Traffic Forecasting. Proceedings of the 2023 Workshop on Recent Advances in Resilient and Trustworthy ML Systems in Autonomous Networks, pp. 17-28. doi:10.1145/3605772.3624002. Online: 30-Nov-2023
  • (2023) Evasion Attack and Defense on Machine Learning Models in Cyber-Physical Systems: A Survey. IEEE Communications Surveys & Tutorials, 26:2, pp. 930-966. doi:10.1109/COMST.2023.3344808. Online: 20-Dec-2023
  • (2023) Adversarially regularized graph attention networks for inductive learning on partially labeled graphs. Knowledge-Based Systems, 268:C. doi:10.1016/j.knosys.2023.110456. Online: 23-May-2023
