Deep Reinforcement Learning for Vehicular Edge Computing: An Intelligent Offloading System

Published: 18 October 2019

Abstract

The development of smart vehicles provides drivers and passengers with a comfortable and safe environment, and various emerging applications promise to enrich users’ traveling experiences and daily life. However, executing computation-intensive applications on resource-constrained vehicles remains a major challenge. In this article, we construct an intelligent offloading system for vehicular edge computing by leveraging deep reinforcement learning. First, both the communication and computation states are modelled by finite Markov chains. The task scheduling and resource allocation strategy is then formulated as a joint optimization problem to maximize users’ Quality of Experience (QoE). Due to its complexity, the original problem is divided into two sub-problems: a two-sided matching scheme schedules offloading requests, and a deep reinforcement learning approach allocates network resources. Performance evaluations illustrate the effectiveness and superiority of the constructed system.
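
To make the deep reinforcement learning component of the abstract more concrete, the following is a minimal, hypothetical sketch of a DQN-style agent for discrete resource-allocation actions. The state dimension, action set, network sizes, and training hyperparameters are placeholders and not the paper's exact formulation; PyTorch is assumed.

# Hypothetical DQN-style agent sketch for edge resource allocation.
# State, action, and reward definitions are placeholders, not the paper's exact model.
import random
from collections import deque

import torch
import torch.nn as nn
import torch.optim as optim

class QNetwork(nn.Module):
    def __init__(self, state_dim, num_actions):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, num_actions),
        )

    def forward(self, x):
        return self.net(x)

class DQNAgent:
    def __init__(self, state_dim, num_actions, gamma=0.95, lr=1e-3):
        self.q = QNetwork(state_dim, num_actions)
        self.target = QNetwork(state_dim, num_actions)
        self.target.load_state_dict(self.q.state_dict())
        self.opt = optim.Adam(self.q.parameters(), lr=lr)
        self.replay = deque(maxlen=10_000)
        self.gamma = gamma
        self.num_actions = num_actions

    def act(self, state, epsilon=0.1):
        # Epsilon-greedy choice over the discrete allocation actions.
        if random.random() < epsilon:
            return random.randrange(self.num_actions)
        with torch.no_grad():
            return int(self.q(torch.tensor(state, dtype=torch.float32)).argmax())

    def remember(self, s, a, r, s_next, done):
        self.replay.append((s, a, r, s_next, done))

    def sync_target(self):
        # Periodically copy online weights into the target network.
        self.target.load_state_dict(self.q.state_dict())

    def train_step(self, batch_size=32):
        if len(self.replay) < batch_size:
            return
        batch = random.sample(self.replay, batch_size)
        s, a, r, s2, d = map(lambda x: torch.tensor(x, dtype=torch.float32), zip(*batch))
        a = a.long()
        # Q(s, a) for the actions actually taken.
        q_sa = self.q(s).gather(1, a.unsqueeze(1)).squeeze(1)
        with torch.no_grad():
            # One-step bootstrapped target from the target network.
            target = r + self.gamma * self.target(s2).max(1).values * (1 - d)
        loss = nn.functional.mse_loss(q_sa, target)
        self.opt.zero_grad()
        loss.backward()
        self.opt.step()

In the paper's setting, a state would encode the finite-Markov-chain channel and computation conditions, an action would select a resource-allocation decision, and the reward would reflect the resulting QoE; the periodic call to sync_target is left to the training loop.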




    Published In

    ACM Transactions on Intelligent Systems and Technology, Volume 10, Issue 6
    Special Section on Intelligent Edge Computing for Cyber Physical and Cloud Systems and Regular Papers
    November 2019, 267 pages
    ISSN: 2157-6904
    EISSN: 2157-6912
    DOI: 10.1145/3368406

    Publisher

    Association for Computing Machinery

    New York, NY, United States

    Publication History

    Published: 18 October 2019
    Accepted: 01 March 2019
    Revised: 01 February 2019
    Received: 01 December 2018
    Published in TIST Volume 10, Issue 6


    Author Tags

    1. Vehicular system
    2. deep reinforcement learning
    3. edge computing
    4. intelligent offloading

    Qualifiers

    • Research-article
    • Research
    • Refereed

    Funding Sources

    • State Key Laboratory of Integrated Services Networks, Xidian University (ISN20-01)
    • National Natural Science Foundation of China
    • China Postdoctoral Science Foundation
    • State Key Laboratory for Novel Software Technology, Nanjing University
    • Dalian Science and Technology Innovation Fund
    • National Natural Science Foundation of Chongqing
    • Brazilian National Council for Research and Development (CNPq)
    • National Funding from the FCT—Fundação para a Ciência e a Tecnologia
    • RNP, with resources from MCTIC
    • Centro de Referência em Radiocomunicações—CRR project of the Instituto Nacional de Telecomunicações (Inatel), Brazil


    Cited By

    • SMT-as-a-Service for Fog-Supported Cyber-Physical Systems. Proceedings of the 25th International Conference on Distributed Computing and Networking (2024), 154-163. https://doi.org/10.1145/3631461.3631562
    • Joint Optimization of Preference-Aware Caching and Content Migration in Cost-Efficient Mobile Edge Networks. IEEE Transactions on Wireless Communications 23, 5 (May 2024), 4918-4931. https://doi.org/10.1109/TWC.2023.3323464
    • Deep Deterministic Policy Gradient-Based Intelligent Task Offloading for Vehicular Computing With Priority Experience Playback. IEEE Transactions on Vehicular Technology 73, 7 (July 2024), 10655-10667. https://doi.org/10.1109/TVT.2024.3378919
    • DRL-Based Resource Allocation Game With Influence of Review Information for Vehicular Edge Computing Systems. IEEE Transactions on Vehicular Technology 73, 7 (July 2024), 9591-9603. https://doi.org/10.1109/TVT.2024.3367657
    • Multi-Agent Reinforcement Learning-Based Trading Decision-Making in Platooning-Assisted Vehicular Networks. IEEE/ACM Transactions on Networking 32, 3 (June 2024), 2143-2158. https://doi.org/10.1109/TNET.2023.3342020
    • A Survey of Computation Offloading With Task Types. IEEE Transactions on Intelligent Transportation Systems 25, 8 (August 2024), 8313-8333. https://doi.org/10.1109/TITS.2024.3410896
    • A Lightweight Group Authentication Protocol for Blockchain-Based Vehicular Edge Computing Networks. IEEE Transactions on Intelligent Transportation Systems 25, 8 (August 2024), 8556-8567. https://doi.org/10.1109/TITS.2024.3402432
    • Edge-AI: IoT Request Service Provisioning in Federated Edge Computing Using Actor-Critic Reinforcement Learning. IEEE Transactions on Engineering Management 71 (2024), 12519-12528. https://doi.org/10.1109/TEM.2022.3166769
    • DRL-Based Contract Incentive for Wireless-Powered and UAV-Assisted Backscattering MEC System. IEEE Transactions on Cloud Computing 12, 1 (January 2024), 264-276. https://doi.org/10.1109/TCC.2024.3360443
    • Health Monitoring and Diagnostic Platform Based on AI and IoMT Sensors: An Overview of Methodologies and Challenges. In Proceedings of the 1st International Conference on Trends in Engineering Systems and Technologies (ICTEST'24), 1-5. https://doi.org/10.1109/ICTEST60614.2024.10576159
