
VEC Collaborative Task Offloading and Resource Allocation Based on Deep Reinforcement Learning Under Parking Assistance

Published: 17 June 2024

Abstract

With the emergence of autonomous vehicles, meeting their computing needs for computationally intensive and latency-sensitive tasks has become a challenge. Cellular Vehicle-to-Everything (C-V2X), an essential Internet of Vehicles technology, is expected to be enhanced in the 6G era to improve road traffic safety and realize intelligent transportation. However, when too many computationally intensive, latency-sensitive tasks are offloaded to a mobile edge computing (MEC) server, the server can become overloaded and unable to meet the offloading demands of numerous vehicles. Inspired by the idle vehicles parked along both sides of the road, this paper proposes using them to assist vehicular edge computing (VEC) servers with offloaded computing tasks, thereby increasing resource capacity and extending communication range. First, roadside parked vehicles with idle computing resources are used as a task offloading platform, and an MEC task offloading strategy based on parked-vehicle collaboration is developed. Second, a more flexible offloading scheme is proposed that jointly considers offloading decisions and resource allocation in a multi-user, multi-server, parked-vehicle-assisted MEC environment. Third, to guarantee service quality for end users, the weighted total delay and energy consumption are taken as the optimization objective. The problem is formulated as a Markov decision process, and a joint computation offloading and resource allocation algorithm based on deep reinforcement learning is proposed to minimize the total delay and energy consumption of vehicle users. Finally, experimental results validate the algorithm: compared with four benchmark schemes, the proposed scheme improves system performance by 78%, 77.3%, 72.7%, and 71.1%, respectively, reducing the total system cost of task offloading.
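
The abstract states only that the objective is a weighted sum of total delay and energy consumption, optimized by a deep reinforcement learning agent over offloading decisions and resource allocation. The minimal sketch below illustrates that cost structure under standard MEC delay/energy models; all parameter names, the energy constants, and the three execution choices (local CPU, VEC server, parked vehicle) are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch (not the authors' implementation): a weighted delay/energy
# cost for one task under three hypothetical execution choices -- the local
# CPU, the roadside VEC server, or a collaborating parked vehicle. The delay
# and energy models below are the standard MEC formulations and are assumed.

from dataclasses import dataclass

@dataclass
class Task:
    data_bits: float      # input data size to upload (bits)
    cpu_cycles: float     # required CPU cycles

def local_cost(task: Task, f_local: float, kappa: float = 1e-27,
               w_delay: float = 0.5, w_energy: float = 0.5) -> float:
    """Cost of executing the task on the vehicle's own CPU."""
    delay = task.cpu_cycles / f_local                   # execution delay (s)
    energy = kappa * (f_local ** 2) * task.cpu_cycles   # dynamic CPU energy (J)
    return w_delay * delay + w_energy * energy

def offload_cost(task: Task, rate_bps: float, f_alloc: float,
                 p_tx: float = 0.2, w_delay: float = 0.5,
                 w_energy: float = 0.5) -> float:
    """Cost of offloading to a server (VEC or parked vehicle) that
    allocates f_alloc cycles/s of its CPU to this task."""
    t_up = task.data_bits / rate_bps      # uplink transmission delay (s)
    t_exec = task.cpu_cycles / f_alloc    # remote execution delay (s)
    e_up = p_tx * t_up                    # vehicle-side transmit energy (J)
    return w_delay * (t_up + t_exec) + w_energy * e_up

if __name__ == "__main__":
    task = Task(data_bits=2e6, cpu_cycles=1e9)
    candidates = {
        "local":          local_cost(task, f_local=1e9),
        "vec_server":     offload_cost(task, rate_bps=10e6, f_alloc=5e9),
        "parked_vehicle": offload_cost(task, rate_bps=6e6, f_alloc=2e9),
    }
    # A DRL agent would learn this choice from the observed state (channel,
    # queue, available resources); here we simply take the minimum-cost action.
    best = min(candidates, key=candidates.get)
    print(candidates, "->", best)
```

A trained agent (for example a DQN-style policy) would select among such actions from the observed system state rather than exhaustively evaluating every candidate as done in this toy example.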

Published In

Wireless Personal Communications: An International Journal, Volume 136, Issue 1
May 2024
614 pages

Publisher

Kluwer Academic Publishers

United States

Publication History

Published: 17 June 2024
Accepted: 25 May 2024

Author Tags

  1. Parked vehicles
  2. Vehicle edge computing (VEC)
  3. Task offloading
  4. Resource allocation
  5. Deep reinforcement learning (DRL)

Qualifiers

  • Research-article
