
AdaInNet: an adaptive inference engine for distributed deep neural networks offloading in IoT-FOG applications based on reinforcement learning

Published in The Journal of Supercomputing

Abstract

The rapid worldwide expansion of the Internet of Things (IoT) demands Big Data analytics infrastructures that can turn sensor data into valuable knowledge. Because IoT devices have limited resources, efficient platforms are needed to process the massive volumes of data they generate. Many IoT applications, such as audio and video recognition, now rely on state-of-the-art Deep Neural Networks (DNNs), which must therefore be executed on or near IoT devices. DNNs offer excellent recognition accuracy, but they demand substantial computational and memory resources. Because of these constraints, deep-learning-based IoT applications are currently offloaded mostly to cloudlets and clouds, which consumes extra network bandwidth and delays responses to IoT devices. In this paper, we propose AdaInNet, a method that, instead of running every layer of a DNN at inference time, selects only the subset of layers that provides sufficient accuracy for each input. AdaInNet builds on Distributed DNNs (DDNNs) modified with early exits: it reduces computational cost and network latency while maintaining prediction accuracy by choosing, per input, which sub-layers or exit branches of the DDNN to execute. We also propose a hybrid Classifier-Wise (CW)-Interactive learning method for training the DDNN and the agent's networks, and we design a custom agent model for the Advantage Actor-Critic deep reinforcement learning method that preserves recognition accuracy while using a minimum number of layers. Finally, we run extensive numerical simulations comparing AdaInNet against rival methods on the standard CIFAR-100 and CIFAR-10 datasets with the ResNet-110 and ResNet-32 DNNs used for IoT applications in previous work. The results provide strong quantitative evidence that AdaInNet not only accelerates inference but also reduces computational cost and latency.
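To make the mechanism concrete, here is a minimal illustrative sketch, not the authors' implementation: it assumes PyTorch, and the names EarlyExitNet and ExitPolicy are hypothetical. It shows the two pieces the abstract describes, a backbone DNN augmented with an early-exit branch, and a small actor-critic agent that decides, per input, whether the early exit suffices or the full network should run.

    # Illustrative sketch only, not the authors' code: a tiny backbone with one
    # early-exit branch plus an actor-critic "agent" that picks an exit per
    # input. Class names (EarlyExitNet, ExitPolicy) are hypothetical.
    import torch
    import torch.nn as nn

    class EarlyExitNet(nn.Module):
        def __init__(self, num_classes=10):
            super().__init__()
            self.block1 = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1),
                                        nn.ReLU(), nn.MaxPool2d(2))
            self.exit1 = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                       nn.Linear(16, num_classes))  # early exit
            self.block2 = nn.Sequential(nn.Conv2d(16, 32, 3, padding=1),
                                        nn.ReLU(), nn.MaxPool2d(2))
            self.exit2 = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                       nn.Linear(32, num_classes))  # final exit

        def forward(self, x, take_early):
            h = self.block1(x)
            if take_early:              # skip block2 entirely: less compute
                return self.exit1(h)
            return self.exit2(self.block2(h))

    class ExitPolicy(nn.Module):
        """Actor-critic head: the actor chooses an exit, the critic scores the state."""
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(nn.Conv2d(3, 8, 3, stride=2), nn.ReLU(),
                                          nn.AdaptiveAvgPool2d(1), nn.Flatten())
            self.actor = nn.Linear(8, 2)   # logits over {early exit, full net}
            self.critic = nn.Linear(8, 1)  # state-value estimate used by A2C

        def forward(self, x):
            z = self.features(x)
            return self.actor(z), self.critic(z)

    net, policy = EarlyExitNet(), ExitPolicy()
    x = torch.randn(1, 3, 32, 32)                      # one CIFAR-sized image
    logits, value = policy(x)
    action = torch.distributions.Categorical(logits=logits).sample()
    pred = net(x, take_early=(action.item() == 0)).argmax(1)
    print(f"exit={action.item()} prediction={pred.item()}")

In a full A2C setup, the reward would presumably combine prediction correctness with a penalty proportional to the computation spent at the chosen exit, so the agent learns to route easy inputs through the cheap branch; the hybrid CW-Interactive method described in the abstract trains the exit classifiers and the agent's networks together.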




Availability of data and materials

The datasets analyzed during the current study are available at https://www.cs.toronto.edu/~kriz/cifar.html.
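For convenience, these datasets can also be fetched programmatically; the snippet below uses torchvision, which is one common route (an assumption for illustration; the paper itself only points at the Toronto page above).

    # Fetch the CIFAR training sets used in the paper's evaluation.
    from torchvision import datasets, transforms

    to_tensor = transforms.ToTensor()
    cifar10 = datasets.CIFAR10(root="./data", train=True, download=True,
                               transform=to_tensor)
    cifar100 = datasets.CIFAR100(root="./data", train=True, download=True,
                                 transform=to_tensor)
    print(len(cifar10), len(cifar100))  # 50000 training images each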


Author information


Corresponding author

Correspondence to Saeed Sharifian.

Ethics declarations

Conflict of interest

No funds, grants, or other support was received.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Etefaghi, A., Sharifian, S. AdaInNet: an adaptive inference engine for distributed deep neural networks offloading in IoT-FOG applications based on reinforcement learning. J Supercomput 79, 1592–1621 (2023). https://doi.org/10.1007/s11227-022-04728-5



  • DOI: https://doi.org/10.1007/s11227-022-04728-5
