Abstract
Traffic jams are common because of the heavy traffic created by the vast number of vehicles on the road. Since congestion is so prevalent, improving the effectiveness of traffic signal control is an important goal of traffic management. A cooperative intelligent traffic management scheme aims to increase transportation movement and decrease the average waiting time of every vehicle, with each signal striving for more efficient traffic motion. During operation, signals build a cooperative strategy and impose constraints on adjacent signals so that each maximizes its own benefit. Most current traffic management schemes rely on simple heuristics; a more effective traffic controller can be obtained with multi-agent reinforcement learning, in which each agent is in charge of a single traffic light. The traffic controller model is influenced by many variables, so learning the best feasible policy is difficult. Agents in earlier methods chose only locally favorable actions without cooperating in their activities, and traffic light controllers were not trained to analyze previous data, so they could not account for unpredictable shifts in traffic flow. In this work, a reinforcement learning traffic controller is used to obtain fine timing rules by appropriately describing real-time features of the real-world traffic scenario, and the technique is extended to include explicit cooperation between adjacent traffic lights. The proposed real-time traffic controller prototype can successfully follow traffic signal scheduling guidelines. The model learns the ideal actions by maximizing a traffic value that combines delay time, the number of vehicles halted at a signal, and newly arriving vehicles. The experimental results show a significant improvement in traffic management, demonstrating that the proposed prototype is capable of providing real-time dynamic traffic management.
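To make the learning setup described above concrete, the following is a minimal sketch, not the authors' ICMALA implementation: a tabular Q-learning agent per traffic light whose reward penalizes delay time, halted vehicles, and newly arriving vehicles, and whose state includes the phases chosen by adjacent signals as a simple form of cooperation. The class name, state discretization, and reward weights are illustrative assumptions.

```python
# Minimal sketch (assumed names and weights, not the paper's exact algorithm):
# one tabular Q-learning agent per intersection, with neighbour phases in the
# state so adjacent signals can coordinate their decisions.
import random
from collections import defaultdict


class IntersectionAgent:
    """One agent per traffic light; actions are the available signal phases."""

    def __init__(self, phases, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.phases = phases                      # e.g. ["NS_green", "EW_green"]
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        self.q = defaultdict(float)               # Q[(state, action)] -> value

    def state(self, queue_len, delay, arrivals, neighbour_phases):
        # Discretize local observations and append the neighbours' current
        # phases (a simple, assumed form of cooperation between signals).
        return (min(queue_len // 5, 4),
                min(int(delay) // 10, 4),
                min(arrivals // 5, 4),
                tuple(neighbour_phases))

    def act(self, state):
        # Epsilon-greedy selection over signal phases.
        if random.random() < self.epsilon:
            return random.choice(self.phases)
        return max(self.phases, key=lambda a: self.q[(state, a)])

    def reward(self, delay, halted, arrivals):
        # Penalize delay, queued vehicles, and incoming vehicles; the weights
        # are illustrative placeholders, not values from the paper.
        return -(1.0 * delay + 0.5 * halted + 0.2 * arrivals)

    def update(self, s, a, r, s_next):
        # Standard Q-learning update toward the best next-state value.
        best_next = max(self.q[(s_next, b)] for b in self.phases)
        self.q[(s, a)] += self.alpha * (r + self.gamma * best_next - self.q[(s, a)])


# Example interaction with a (hypothetical) traffic simulator step:
agent = IntersectionAgent(phases=["NS_green", "EW_green"])
s = agent.state(queue_len=12, delay=35, arrivals=4, neighbour_phases=["EW_green"])
a = agent.act(s)
# ... apply phase `a` in the simulator, then observe the new traffic measures ...
s2 = agent.state(queue_len=7, delay=20, arrivals=6, neighbour_phases=["NS_green"])
agent.update(s, a, agent.reward(delay=20, halted=7, arrivals=6), s2)
```

In a cooperative variant such as the one the paper describes, each agent would additionally exchange information (e.g., chosen phases or value estimates) with its neighbours before acting; here that exchange is reduced to including neighbour phases in the state.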
Copyright information
© 2022 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
About this paper
Cite this paper
Vidhate, D.A., Kulkarni, P. (2022). A Framework for Smart Traffic Controller by Improved Cooperative Multi-agent Learning Algorithms (ICMALA). In: Pandit, M., Gaur, M.K., Rana, P.S., Tiwari, A. (eds) Artificial Intelligence and Sustainable Computing. Algorithms for Intelligent Systems. Springer, Singapore. https://doi.org/10.1007/978-981-19-1653-3_14
DOI: https://doi.org/10.1007/978-981-19-1653-3_14
Publisher Name: Springer, Singapore
Print ISBN: 978-981-19-1652-6
Online ISBN: 978-981-19-1653-3
eBook Packages: Intelligent Technologies and Robotics, Intelligent Technologies and Robotics (R0)