DOI: 10.1145/2046684.2046698
Short paper

Understanding the risk factors of learning in adversarial environments

Published: 21 October 2011

Abstract

Learning for security applications is an emerging field in which adaptive approaches are needed but are complicated by changing adversarial behavior. Traditional approaches to learning assume that errors in the data are benign, and so they may be vulnerable to adversarially chosen errors. In this paper, we incorporate the notion of adversarial corruption directly into the learning framework and derive a new criterion for classifier robustness to adversarial contamination.
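
The abstract does not spell the criterion out. As a rough, hedged illustration of what folding adversarial corruption directly into the learning framework can mean, the sketch below uses the classical Huber-style ε-contamination model from robust statistics: an attacker controls an ε fraction of the training sample, and robustness is judged by how far the learned quantity can be dragged as ε grows. The contamination rate, outlier value, and mean-versus-median comparison are all illustrative assumptions, not the paper's construction.

    # Illustrative sketch only (not the paper's method): Huber-style
    # epsilon-contamination. The attacker replaces an eps fraction of the
    # training sample with points of its choosing.
    import numpy as np

    rng = np.random.default_rng(0)
    n, eps = 1000, 0.1               # sample size and contamination rate (assumed)

    clean = rng.normal(0.0, 1.0, n)  # benign scores for one class
    poisoned = clean.copy()
    poisoned[: int(eps * n)] = 50.0  # adversary plants extreme outliers

    # The mean shifts by roughly eps * 50, while the median, a classically
    # robust estimate, barely moves under the same contamination.
    print(f"mean:   {clean.mean():+.3f} -> {poisoned.mean():+.3f}")
    print(f"median: {np.median(clean):+.3f} -> {np.median(poisoned):+.3f}")

Under this model a natural robustness criterion bounds the worst-case shift of the learned quantity as a function of ε: a mean-based rule degrades linearly in ε times the outlier magnitude, whereas a median-based rule is essentially unaffected until ε approaches 1/2.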



Published In

AISec '11: Proceedings of the 4th ACM workshop on Security and artificial intelligence
October 2011
124 pages
ISBN:9781450310031
DOI:10.1145/2046684
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. adversarial learning
  2. computer security
  3. machine learning
  4. robust classification
  5. statistical learning

Qualifiers

  • Short-paper

Conference

CCS'11

Acceptance Rates

Overall Acceptance Rate 94 of 231 submissions, 41%



Cited By

  • Framework Based on Simulation of Real-World Message Streams to Evaluate Classification Solutions. Algorithms 17(1):47, 21 Jan 2024. DOI: 10.3390/a17010047
  • Towards Neuro-Symbolic AI for Assured and Trustworthy Human-Autonomy Teaming. 2023 5th IEEE International Conference on Trust, Privacy and Security in Intelligent Systems and Applications (TPS-ISA), pages 177-179, 1 Nov 2023. DOI: 10.1109/TPS-ISA58951.2023.00030
  • Artificial Intelligence Meets Tactical Autonomy: Challenges and Perspectives. 2022 IEEE 4th International Conference on Cognitive Machine Intelligence (CogMI), pages 49-51, Dec 2022. DOI: 10.1109/CogMI56440.2022.00017
  • A review of spam email detection: analysis of spammer strategies and the dataset shift problem. Artificial Intelligence Review 56(2):1145-1173, 11 May 2022. DOI: 10.1007/s10462-022-10195-4
  • With Great Dispersion Comes Greater Resilience: Efficient Poisoning Attacks and Defenses for Linear Regression Models. IEEE Transactions on Information Forensics and Security 16:3709-3723, 2021. DOI: 10.1109/TIFS.2021.3087332
  • Countering Inconsistent Labelling by Google's Vision API for Rotated Images. Innovations in Computational Intelligence and Computer Vision, pages 202-213, 22 Sep 2020. DOI: 10.1007/978-981-15-6067-5_23
  • Analysis of Causative Attacks against SVMs Learning from Data Streams. Proceedings of the 3rd ACM on International Workshop on Security And Privacy Analytics, pages 31-36, 24 Mar 2017. DOI: 10.1145/3041008.3041012
  • Support vector machines under adversarial label contamination. Neurocomputing 160:53-62, Jul 2015. DOI: 10.1016/j.neucom.2014.08.081
  • Systematic Poisoning Attacks on and Defenses for Machine Learning in Healthcare. IEEE Journal of Biomedical and Health Informatics 19(6):1893-1905, Nov 2015. DOI: 10.1109/JBHI.2014.2344095

