
Application Research of Time Delay System Control in Mobile Sensor Networks Based on Deep Reinforcement Learning

Published: 01 January 2022

Abstract

One of the main problems of networked control systems is that signal transmission delay is inevitable over long-distance links. Delay degrades system performance, such as the stability margin, settling time, and rise time, and in severe cases the system cannot remain stable. To compensate for this, a deterministic compensation method is adopted for the networked control system. To improve the controllability of the mobile sensor network time-delay system, a control model based on reinforcement learning is proposed. The control objective function of the time-delay system is constructed from a higher-order approximate differential equation, and the maximum likelihood method is used to estimate the delay parameters of the mobile sensor network. By combining the convergence of the reinforcement learning method for network control with adaptive scheduling, and registering multidimensional measurement information in a reinforced tracking-learning optimization mode, adaptive control of the mobile sensor network time-delay system is realized. Simulation results show that the proposed method adapts well, estimates the delay parameters accurately, and keeps the control process robust.
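To illustrate the delay-parameter estimation step mentioned above, the following minimal Python sketch (not taken from the paper; the signal lengths, noise level, and function name are assumptions) estimates an integer transmission delay by maximum likelihood. Under additive Gaussian noise, the ML estimate of the delay reduces to the lag that maximizes the cross-correlation between the transmitted and received signals.

```python
import numpy as np

def estimate_delay(u, y, max_delay):
    """Return the integer lag d in [0, max_delay] that maximizes the
    cross-correlation between input u and delayed observation y.
    Under additive Gaussian noise this is the maximum likelihood
    estimate of the transmission delay."""
    best_d, best_score = 0, -np.inf
    for d in range(max_delay + 1):
        if d == 0:
            score = np.dot(u, y[:len(u)])
        else:
            score = np.dot(u[:-d], y[d:len(u)])
        if score > best_score:
            best_d, best_score = d, score
    return best_d

rng = np.random.default_rng(0)
u = rng.standard_normal(500)                    # transmitted control signal
true_delay = 7
y = np.concatenate([np.zeros(true_delay), u])   # received, delayed copy
y = y + 0.1 * rng.standard_normal(len(y))       # additive channel noise
print(estimate_delay(u, y, max_delay=20))
```

At the true lag the correlation accumulates coherently over all overlapping samples, while at every other lag it averages out to noise, so the estimate is sharp even at moderate noise levels.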



Published In

Wireless Communications & Mobile Computing, Volume 2022, 2022, 25330 pages
This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Publisher

John Wiley and Sons Ltd.

United Kingdom

Qualifiers

  • Research-article
