Abstract
The rapid expansion of the Internet of Things (IoT) demands big-data analytics infrastructures that can turn sensor data into valuable knowledge. IoT devices have limited resources, so efficient platforms are required to process the massive data they produce. Many IoT applications, such as audio and video recognition, now rely on state-of-the-art Deep Neural Networks (DNNs), which must therefore run on IoT devices. DNNs offer excellent recognition accuracy but demand substantial computational and memory resources. Because of these constraints, deep-learning-based IoT applications are currently offloaded mostly to cloudlets and clouds, which consumes extra network bandwidth and delays responses to IoT devices. In this paper, we propose AdaInNet, a method based on Distributed DNNs (DDNNs) with early exits that, instead of running all layers of a DNN at inference time, selects only the subset of layers (an exit branch) that provides sufficient accuracy for each input, thereby significantly reducing computational cost and network latency while maintaining prediction accuracy. We also propose a hybrid Classifier-Wise (CW)/interactive learning method for training the DDNN and the agent's networks, and we design a custom agent model for the Advantage Actor-Critic deep reinforcement learning method that preserves recognition accuracy while using a minimum number of layers. Finally, we conduct extensive numerical simulations to evaluate AdaInNet against rival methods on the standard CIFAR-100 and CIFAR-10 datasets with ResNet-110 and ResNet-32, DNNs used in IoT applications in previous work. The results provide strong quantitative evidence that AdaInNet accelerates inference while reducing computational cost and latency.
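To make the early-exit mechanism described above concrete, the following sketch shows one common realization of BranchyNet-style inference: a backbone split into sequential stages, each followed by a lightweight classifier head, where an input leaves the network at the first head whose prediction entropy is low enough. The `stages`/`exits` modules, the normalized-entropy confidence test, and the batch-size-1 assumption are all illustrative; this is a sketch of the general technique, not the paper's actual implementation.

```python
# Minimal early-exit inference sketch (BranchyNet-style), assuming PyTorch,
# hypothetical stage/exit modules, and batch size 1 for clarity.
import math

import torch
import torch.nn as nn
import torch.nn.functional as F


class EarlyExitNet(nn.Module):
    def __init__(self, stages: nn.ModuleList, exits: nn.ModuleList,
                 threshold: float = 0.5):
        super().__init__()
        assert len(stages) == len(exits)
        self.stages = stages        # backbone segments, run in order
        self.exits = exits          # one lightweight classifier head per segment
        self.threshold = threshold  # max normalized entropy allowed to exit

    @torch.no_grad()
    def forward(self, x: torch.Tensor):
        # Run stage by stage; stop at the first exit branch whose prediction
        # is confident enough, skipping all remaining (deeper) layers.
        for i, (stage, head) in enumerate(zip(self.stages, self.exits)):
            x = stage(x)
            logits = head(x)
            probs = F.softmax(logits, dim=1)
            entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=1)
            # Normalize by log(#classes) so the threshold lies in [0, 1].
            if entropy.item() / math.log(probs.size(1)) < self.threshold:
                return logits, i    # confident: exit early
        return logits, len(self.stages) - 1  # fell through to the final exit
```

In AdaInNet, per the abstract, the fixed entropy threshold is replaced by a learned policy: an Advantage Actor-Critic agent decides at each branch whether to exit or to continue, trading recognition accuracy against computation and network latency.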
Availability of data and materials
The datasets analyzed during the current study are available at https://www.cs.toronto.edu/~kriz/cifar.html.
Ethics declarations
Conflict of interest
No funds, grants, or other support was received.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Springer Nature or its licensor holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
About this article
Cite this article
Etefaghi, A., Sharifian, S. AdaInNet: an adaptive inference engine for distributed deep neural networks offloading in IoT-FOG applications based on reinforcement learning. J Supercomput 79, 1592–1621 (2023). https://doi.org/10.1007/s11227-022-04728-5