
Multi-agent reinforcement learning for vehicular task offloading with multi-step trajectory prediction

  • Regular Paper
  • Published in: CCF Transactions on Pervasive Computing and Interaction


Abstract

With the development of multi-access edge computing (MEC), task offloading has become a promising paradigm for helping resource-constrained vehicles handle their computation-intensive and time-sensitive tasks. Recently, deep reinforcement learning based methods have been used to tackle task offloading problems in vehicular networks with multiple MEC servers. Because vehicles move dynamically, we highlight the importance of cooperative resource utilization among servers and vehicles in reducing the task completion latency of the system. We therefore propose a multi-agent reinforcement learning approach that enables collaborative decision making among vehicles in a multi-edge task offloading scenario. Moreover, we design and pre-train a Transformer-based vehicular trajectory prediction module that provides supplemental trajectory information for better collaboration. We compare the proposed model with several other offloading strategies in four simulation scenarios. The results demonstrate the effectiveness of our approach in terms of both task latency and resource utilization.
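This excerpt does not reproduce the paper's system model, but the latency trade-off that such offloading schemes optimize can be sketched with a standard MEC formulation: run a task locally, or pay an uplink transmission cost to run it on a faster edge server. Every function name, parameter, and value below is an illustrative assumption, not taken from the paper.

```python
# Minimal sketch of the local-vs-offload latency trade-off in MEC
# (illustrative only; not the paper's actual model).

def local_latency(cpu_cycles: float, local_freq_hz: float) -> float:
    """Time to execute the task on the vehicle's own CPU."""
    return cpu_cycles / local_freq_hz

def offload_latency(data_bits: float, uplink_bps: float,
                    cpu_cycles: float, server_freq_hz: float) -> float:
    """Time to upload the task input plus time to execute it on an
    MEC server; the small result-download time is neglected here."""
    return data_bits / uplink_bps + cpu_cycles / server_freq_hz

def best_choice(data_bits: float, uplink_bps: float, cpu_cycles: float,
                local_freq_hz: float, server_freq_hz: float):
    """Pick the action with the lower completion latency."""
    t_local = local_latency(cpu_cycles, local_freq_hz)
    t_off = offload_latency(data_bits, uplink_bps, cpu_cycles, server_freq_hz)
    return ("offload", t_off) if t_off < t_local else ("local", t_local)

# Example: a 10^9-cycle task with a 4 Mbit input, 20 Mbit/s uplink,
# 1 GHz vehicle CPU versus a 5 GHz MEC server.
action, latency = best_choice(4e6, 2e7, 1e9, 1e9, 5e9)  # → ("offload", 0.4)
```

In a multi-agent RL setting like the one the abstract describes, each vehicle would learn this choice from experience rather than compute it directly, since uplink rates and server load vary with other vehicles' decisions and movements.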


Data availability

The data that support the findings of this study will be made available upon reasonable request.


Acknowledgements

This work is supported in part by the Program of Technology Innovation of the Science and Technology Commission of Shanghai Municipality (no. 21511104700), National Science Foundation of China (no. 62072304), Shanghai Municipal Science and Technology Commission (no. 21511104700), the Shanghai East Talents Program, the Oceanic Interdisciplinary Program of Shanghai Jiao Tong University (no. SL2020MS032), and Zhejiang Aoxin Co. Ltd.

Author information


Corresponding author

Correspondence to Yanmin Zhu.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Zhang, X., Zhu, Y., Wang, C. et al. Multi-agent reinforcement learning for vehicular task offloading with multi-step trajectory prediction. CCF Trans. Pervasive Comp. Interact. 6, 101–114 (2024). https://doi.org/10.1007/s42486-023-00146-5
