
Cheating your apps: Black-box adversarial attacks on deep learning apps

Published: 01 January 2023

Abstract

Deep learning is a powerful technique for boosting application performance in various fields, including face recognition, image classification, natural language understanding, and recommendation systems. With the rapid increase in the computing power of mobile devices, developers can embed deep learning models into their apps to build more competitive products with more accurate and faster responses. Although several works have studied adversarial attacks against deep learning models in apps, they all require information about the models' internals (i.e., structures and weights) or need to modify the models. In this paper, we propose an effective black-box approach that trains substitute models to spoof the deep learning systems inside the apps. We evaluate our approach on 10 real-world deep learning apps from Google Play by performing black-box adversarial attacks. Through the study, we identify three factors that can affect the performance of attacks. Our approach reaches a relatively high attack success rate of 66.60% on average; compared with other adversarial attacks on mobile deep learning models, it outperforms its counterparts by 27.63% in terms of average attack success rate.
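The substitute-model idea the abstract describes can be illustrated with a minimal, self-contained sketch: query a black-box target for hard labels only, fit a substitute to mimic those labels, craft adversarial examples on the white-box substitute (here with FGSM), and transfer them back to the target. The toy linear models, dimensions, and hyperparameters below are illustrative assumptions, not the paper's actual pipeline, which attacks real on-device models extracted from apps.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Hypothetical stand-in for the on-device model: the attacker observes
# only predicted labels (hard-label black-box access), never the weights.
W_target = rng.normal(size=(16, 2))

def query_labels(X):
    return (X @ W_target).argmax(axis=1)

def train_substitute(X, y, epochs=200, lr=0.5):
    # Fit a small linear softmax model to mimic the target's labels.
    W = np.zeros((X.shape[1], 2))
    onehot = np.eye(2)[y]
    for _ in range(epochs):
        p = softmax(X @ W)
        W -= lr * X.T @ (p - onehot) / len(X)  # cross-entropy gradient
    return W

def fgsm(W, X, y, eps=1.0):
    # FGSM on the white-box substitute: perturb inputs along the sign of
    # the loss gradient, then transfer the examples to the target.
    p = softmax(X @ W)
    grad_x = (p - np.eye(2)[y]) @ W.T  # dL/dX for softmax cross-entropy
    return X + eps * np.sign(grad_x)

# Train the substitute purely from query/label pairs.
X_query = rng.normal(size=(256, 16))
W_sub = train_substitute(X_query, query_labels(X_query))

# Craft adversarial examples on the substitute, evaluate on the target.
X = rng.normal(size=(32, 16))
y = query_labels(X)
X_adv = fgsm(W_sub, X, y)
flip_rate = (query_labels(X_adv) != y).mean()
print(f"attack success rate on black-box target: {flip_rate:.2f}")
```

The attack succeeds because adversarial examples tend to transfer: a substitute that reproduces the target's input-label behaviour usually has a similar decision boundary, so perturbations that cross the substitute's boundary often cross the target's as well.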

Graphical Abstract

In this paper, we propose an effective black‐box approach by training substitute models to spoof the deep learning systems inside the apps. We evaluate our approach on 10 real‐world deep‐learning apps from Google Play to perform black‐box adversarial attacks. The evaluation shows that our approach can reach an attack success rate of 66.60% on average, which outperforms counterparts by 27.63%.



Published In

Journal of Software: Evolution and Process, Volume 36, Issue 4
April 2024
709 pages
EISSN: 2047-7481
DOI: 10.1002/smr.v36.4

Publisher

John Wiley & Sons, Inc.

United States

Author Tags

  1. Android
  2. black‐box attacks
  3. deep learning apps

Qualifiers

  • Research-article
