Abstract
To adapt to dynamic changes in the network environment, each session in a heterogeneous network should be served by the most suitable network while load balancing is maintained across the networks. For a heterogeneous network composed of PDT and B-TrunC, we formulate network selection as a Markov decision process (the network selection MDP, NSMDP) and propose a deep Q-learning algorithm for radio access network selection in this heterogeneous environment. The algorithm considers not only the load of each network but also the service attributes of the initiating session, the mobility of the terminal, and the terminal's location in the network. Simulation results show that the algorithm reduces the system blocking rate and makes network selection autonomous.
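The paper gives no code, but the idea of casting network selection as an MDP solved with deep Q-learning can be illustrated with a small sketch. Everything below is an assumption for illustration only: the five-element state vector (the two networks' loads, session type, terminal mobility, terminal position), the ±1 reward that penalises choosing the more loaded network, the toy environment `toy_env_step`, and the one-hidden-layer Q-network are hypothetical stand-ins, not the authors' NSMDP formulation.

```python
import numpy as np

# Hypothetical state: [load_PDT, load_BTrunC, session_type, mobility, position]
# Action: 0 = select PDT, 1 = select B-TrunC.
STATE_DIM, N_ACTIONS, HIDDEN = 5, 2, 32
rng = np.random.default_rng(0)

def init_net():
    """One-hidden-layer Q-network, used for both online and target copies."""
    return {"W1": rng.normal(0, 0.1, (STATE_DIM, HIDDEN)), "b1": np.zeros(HIDDEN),
            "W2": rng.normal(0, 0.1, (HIDDEN, N_ACTIONS)), "b2": np.zeros(N_ACTIONS)}

def q_values(net, s):
    h = np.maximum(0.0, s @ net["W1"] + net["b1"])      # ReLU hidden layer
    return h @ net["W2"] + net["b2"], h

def toy_env_step(state, action):
    """Hypothetical environment: reward penalises picking the more loaded network."""
    loads = state[:2]
    reward = 1.0 if loads[action] <= loads[1 - action] else -1.0
    next_state = rng.uniform(0, 1, STATE_DIM)           # a fresh session arrives
    return reward, next_state

def train(steps=2000, gamma=0.9, lr=0.01, eps=0.1, sync_every=100):
    online, target = init_net(), init_net()
    state = rng.uniform(0, 1, STATE_DIM)
    for t in range(steps):
        q, h = q_values(online, state)
        action = rng.integers(N_ACTIONS) if rng.random() < eps else int(np.argmax(q))
        reward, next_state = toy_env_step(state, action)
        # TD target from the periodically synced target network (standard DQN trick).
        q_next, _ = q_values(target, next_state)
        td_error = reward + gamma * np.max(q_next) - q[action]
        # Gradient step on the squared TD error, for the chosen action only.
        grad_out = np.zeros(N_ACTIONS); grad_out[action] = -td_error
        grad_h = (grad_out @ online["W2"].T) * (h > 0)
        online["W2"] -= lr * np.outer(h, grad_out); online["b2"] -= lr * grad_out
        online["W1"] -= lr * np.outer(state, grad_h); online["b1"] -= lr * grad_h
        if t % sync_every == 0:                           # refresh target weights
            target = {k: v.copy() for k, v in online.items()}
        state = next_state
    return online

if __name__ == "__main__":
    net = train()
    s = np.array([0.9, 0.2, 0.5, 0.3, 0.7])   # PDT heavily loaded, B-TrunC light
    print("Q-values:", q_values(net, s)[0])   # should typically favour action 1 (B-TrunC)
```

The periodic copy of the online weights into a separate target network follows the standard deep Q-learning recipe for stabilising the TD targets; the real algorithm would of course use the paper's state, reward, and blocking model rather than this toy environment.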
Acknowledgements
This work was supported by the National Key R&D Program of China (No. 2018YFC0807101).
Copyright information
© 2020 Springer Nature Singapore Pte Ltd.
About this paper
Cite this paper
Yu, S., He, CG., Meng, WX., Wei, S., Wei, SM. (2020). Heterogeneous Network Selection Algorithm Based on Deep Q Learning. In: Liang, Q., Wang, W., Liu, X., Na, Z., Jia, M., Zhang, B. (eds) Communications, Signal Processing, and Systems. CSPS 2019. Lecture Notes in Electrical Engineering, vol 571. Springer, Singapore. https://doi.org/10.1007/978-981-13-9409-6_243
DOI: https://doi.org/10.1007/978-981-13-9409-6_243
Publisher Name: Springer, Singapore
Print ISBN: 978-981-13-9408-9
Online ISBN: 978-981-13-9409-6
eBook Packages: Engineering, Engineering (R0)