DOI: 10.1145/3510003.3510088
research-article
Open access

EREBA: black-box energy testing of adaptive neural networks

Published: 05 July 2022

Abstract

Recently, various Deep Neural Network (DNN) models have been proposed for environments such as embedded systems with stringent energy constraints. The fundamental problem of determining the robustness of a DNN with respect to its energy consumption (energy robustness) remains relatively unexplored compared to accuracy-based robustness. This work investigates the energy robustness of Adaptive Neural Networks (AdNNs), a class of energy-saving DNNs proposed for energy-sensitive domains that has recently gained traction. We propose EREBA, the first black-box testing method for determining the energy robustness of an AdNN. EREBA explores and infers the relationship between inputs and the energy consumption of AdNNs to generate energy-surging samples. Extensive implementation and evaluation on three state-of-the-art AdNNs demonstrate that test inputs generated by EREBA can substantially degrade system performance, increasing the energy consumption of AdNNs by 2,000% compared to the original inputs. Our results also show that the test inputs generated by EREBA are valuable for detecting energy-surging inputs.
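To make the black-box idea in the abstract concrete, below is a minimal sketch in Python of one plausible way to search for energy-surging inputs: treat the AdNN as an opaque function that returns a measurable energy signal (for example, on-device power readings or inference latency) and greedily perturb an input, within a small budget, so that the signal grows. This is an illustrative assumption of the general approach, not the EREBA algorithm itself; the energy_proxy function, the L-infinity budget, and the greedy random search are hypothetical stand-ins.

# Illustrative black-box search for "energy surging" inputs (a sketch, not the
# authors' EREBA implementation). The AdNN is treated as a black box exposing
# only an energy proxy; the search keeps perturbations that raise that proxy.

import numpy as np

def energy_proxy(x):
    # Hypothetical placeholder for a black-box energy measurement of the AdNN
    # on input x (e.g., on-device power or inference time). Simulated here so
    # the sketch runs standalone.
    return float(np.sum(np.abs(x)))

def search_energy_surging_input(x0, epsilon=0.05, step=0.01, iters=200, seed=0):
    """Greedy random search: accept a random perturbation only if it raises the
    energy proxy, while staying within an L-infinity budget `epsilon` of x0."""
    rng = np.random.default_rng(seed)
    x = x0.copy()
    best = energy_proxy(x)
    for _ in range(iters):
        candidate = x + step * rng.choice([-1.0, 1.0], size=x.shape)
        candidate = np.clip(candidate, x0 - epsilon, x0 + epsilon)  # perturbation budget
        candidate = np.clip(candidate, 0.0, 1.0)                    # valid pixel range
        score = energy_proxy(candidate)
        if score > best:
            x, best = candidate, score
    return x, best

if __name__ == "__main__":
    original = np.random.default_rng(1).random((32, 32, 3))  # toy "image"
    surged, proxy = search_energy_surging_input(original)
    print(f"energy proxy: {energy_proxy(original):.3f} -> {proxy:.3f}")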



    Published In

    ICSE '22: Proceedings of the 44th International Conference on Software Engineering
    May 2022
    2508 pages
    ISBN: 9781450392211
    DOI: 10.1145/3510003

    In-Cooperation

    • IEEE CS

    Publisher

    Association for Computing Machinery

    New York, NY, United States

    Author Tags

    1. AI energy testing
    2. adversarial machine learning
    3. green AI

    Funding Sources

    • NSF
    • Siemens Fellowship

    Conference

    ICSE '22

    Acceptance Rates

    Overall Acceptance Rate 276 of 1,856 submissions, 15%

    Cited By

    • (2024) Greening Large Language Models of Code. Proceedings of the 46th International Conference on Software Engineering: Software Engineering in Society, 142-153. DOI: 10.1145/3639475.3640097. Online publication date: 14-Apr-2024.
    • (2024) Understanding, Uncovering, and Mitigating the Causes of Inference Slowdown for Language Models. 2024 IEEE Conference on Secure and Trustworthy Machine Learning (SaTML), 723-740. DOI: 10.1109/SaTML59370.2024.00042. Online publication date: 9-Apr-2024.
    • (2023) AntiNODE: Evaluating Efficiency Robustness of Neural ODEs. 2023 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW), 1499-1509. DOI: 10.1109/ICCVW60793.2023.00164. Online publication date: 2-Oct-2023.
    • (2023) Dynamic Neural Network is All You Need: Understanding the Robustness of Dynamic Mechanisms in Neural Networks. 2023 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW), 1489-1498. DOI: 10.1109/ICCVW60793.2023.00163. Online publication date: 2-Oct-2023.
    • (2022) NICGSlowDown: Evaluating the Efficiency Robustness of Neural Image Caption Generation Models. 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 15344-15353. DOI: 10.1109/CVPR52688.2022.01493. Online publication date: Jun-2022.
