
AdverSPAM: Adversarial SPam Account Manipulation in Online Social Networks

Published: 14 March 2024

Abstract

In recent years, the widespread adoption of Machine Learning (ML) at the core of complex IT systems has driven researchers to investigate the security and reliability of ML techniques. One specific kind of threat concerns the adversarial mechanisms through which an attacker could induce a classification algorithm to produce a desired output. Such strategies, known as Adversarial Machine Learning (AML), have a twofold goal: to compute a perturbation of the classifier’s input that subverts its outcome, while preserving the underlying intent of the original data. Although any manipulation that accomplishes these goals is theoretically acceptable, in real scenarios perturbations must correspond to a set of permissible manipulations of the input, a requirement that is rarely considered in the literature. In this article, we present AdverSPAM, an AML technique designed to fool the spam account detection system of an Online Social Network (OSN). The proposed black-box evasion attack is formulated as an optimization problem that computes the adversarial sample while maintaining two important properties of the feature space, namely statistical correlation and semantic dependency. Although demonstrated in an OSN security scenario, the approach could also be applied in other contexts where the aim is to perturb data described by mutually related features. Experiments conducted on a public dataset show the effectiveness of AdverSPAM compared to five state-of-the-art competitors, even in the presence of adversarial defense mechanisms.
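To make the formulation described above concrete, here is a minimal sketch of how such a constraint-aware, black-box evasion attack could be set up. This is not the authors’ actual implementation: the scoring oracle `predict_proba`, the correlated feature pairs, and the linear slopes used to approximate the statistical-correlation constraint are all illustrative assumptions. A derivative-free optimizer (COBYLA) is used because the black-box setting provides no gradients.

```python
# Illustrative sketch of a constraint-aware black-box evasion attack, in the
# spirit of AdverSPAM. All names (oracle, feature pairs, slopes) are assumed
# for illustration, not taken from the paper.
import numpy as np
from scipy.optimize import minimize

def evasion_attack(x0, predict_proba, corr_pairs, slopes, tol=0.05):
    """Search for a small perturbation of x0 that flips a binary classifier's
    decision while keeping correlated feature pairs mutually consistent.

    x0            : original (spam) sample, shape (n_features,)
    predict_proba : black-box oracle returning P(spam) in [0, 1]
    corr_pairs    : list of (i, j) indices of strongly correlated features
    slopes        : assumed linear relation x[j] ~= slopes[(i, j)] * x[i]
    tol           : allowed deviation from each linear relation
    """
    # Objective: stay close to the original sample, preserving its intent.
    objective = lambda x: np.linalg.norm(x - x0)

    constraints = [
        # Evasion constraint: oracle score must fall below the 0.5 boundary
        # (COBYLA treats fun(x) >= 0 as feasible).
        {"type": "ineq", "fun": lambda x: 0.5 - predict_proba(x)},
    ]
    for (i, j) in corr_pairs:
        # Statistical-correlation constraint: the perturbed features must
        # approximately keep the linear relation observed in real data.
        s = slopes[(i, j)]
        constraints.append(
            {"type": "ineq",
             "fun": lambda x, i=i, j=j, s=s: tol - abs(x[j] - s * x[i])}
        )

    # Derivative-free optimization matches the black-box threat model.
    result = minimize(objective, x0, method="COBYLA", constraints=constraints)
    return result.x
```

In practice an attacker would accept the returned sample only after querying the oracle once more to confirm the misclassification, since COBYLA treats constraints as soft and may terminate slightly outside the feasible region.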

Published In

ACM Transactions on Privacy and Security, Volume 27, Issue 2 (May 2024), 192 pages
EISSN: 2471-2574
DOI: 10.1145/3613601

Publisher

Association for Computing Machinery, New York, NY, United States

Publication History

Published: 14 March 2024
Online AM: 26 January 2024
Accepted: 17 January 2024
Revised: 10 July 2023
Received: 24 February 2023
Published in TOPS Volume 27, Issue 2

Author Tags

  1. Adversarial machine learning
  2. spammer detection
  3. online social networks
  4. evasion attacks

Qualifiers

  • Research-article
