Design & Implementation of Real Time Autonomous Car by Using Image Processing & IoT
Abstract— Because Vehicle-to-Infrastructure communication is not available in current production systems, Traffic Light Detection (TLD), Traffic Sign Detection, and lane identification are still considered significant tasks in autonomous vehicles and Driver Assistance Systems (DAS), or self-driving cars. For more accurate results, industry is moving to deep neural network models such as the Convolutional Neural Network (CNN) rather than traditional models such as HOG. A deep neural network can extract and learn purer features from the raw RGB image obtained from the environment; however, deep neural networks such as CNNs are computationally complex. This paper proposes an autonomous vehicle (robot) that can detect the different objects in its environment, classify them using a CNN model, and use this information to take real-time decisions that can be applied in a self-driving car, autonomous car, or Driving Assistant System (DAS).

…deep learning techniques [13]-[18] or machine learning [8]-[12] to train the model in a data-driven way. Deep convolutional neural network (CNN) models achieve better performance [7],[19],[20] than machine learning algorithms based on Histogram of Oriented Gradients (HOG) features (for example, SVM and the Hidden Markov Model), because a CNN can extract and learn purer features from the raw RGB channels than older algorithms such as HOG. However, the computational complexity of CNN models is much greater than that of most machine learning algorithms. Therefore, this paper proposes a deep neural network to detect the different components of the environment and take significant decisions that will be useful in the field of self-driving vehicles, autonomous vehicles, and Driving Assistant Systems (DAS).
Authorized licensed use limited to: Auckland University of Technology. Downloaded on October 24,2020 at 06:28:22 UTC from IEEE Xplore. Restrictions apply.
Proceedings of the Third International Conference on Smart Systems and Inventive Technology (ICSSIT 2020)
IEEE Xplore Part Number: CFP20P17-ART; ISBN: 978-1-7281-5821-1
Research areas such as line scan photography [26] are also analysed using computer vision.

III. PROPOSED WORK

The proposed architecture for the autonomous car using a CNN is shown in figure 1, and the experimental setup is shown in figure 3. The complete system is fixed on a motor vehicle so that the camera faces the direction of the road.

Fig 1: Block diagram of the proposed system: the Pi-Cam and sensors feed the Raspberry Pi, which drives the motors and uploads data to the IoT cloud.

The camera input is fed to the Raspberry Pi, where Python is used to communicate between the CNN and the different sensors in the system. Through the Raspberry Pi, all the data is uploaded to the cloud platform, so where the vehicle is driving can be seen from the website. This complete setup is shown in figure 3.
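The capture-classify-decide loop running on the Raspberry Pi can be sketched as below. This is a minimal, hardware-free sketch: the CNN inference, motor control, and cloud upload are stubbed out, and all function and label names (classify_frame, red_light, and so on) are illustrative assumptions rather than the paper's actual code.

```python
# Sketch of the Raspberry Pi control loop: capture -> classify -> decide -> act.
# The camera, CNN, motor, and cloud-upload steps are stand-ins (assumptions).

def classify_frame(frame):
    """Stand-in for the CNN: returns the list of detected object labels.
    In the real system this would run the trained model on a Pi-Cam frame."""
    return frame

def decide_action(detections):
    """Map detected objects to a driving decision (simplified rule set)."""
    if "red_light" in detections or "stop_sign" in detections:
        return "stop"
    if "obstacle" in detections:
        return "avoid"
    return "follow_lane"

def control_step(frame):
    """One iteration of the loop: classify, decide, then act and report."""
    detections = classify_frame(frame)
    action = decide_action(detections)
    # drive_motors(action); upload_status(action)  # hardware/IoT calls omitted
    return action
```

For example, `control_step(["red_light", "car"])` returns `"stop"`, which the real system would translate into a motor command and a cloud status update.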
A deep convolutional neural network and various image processing techniques are used to detect real-time objects such as vehicles, traffic lights, and traffic signs, to detect the lane using a neural network, and to classify each detected object using a deep neural network. The main features of the proposed system are:

- Giving insights about each detected object, such as its distance and location with respect to the vehicle concerned.
- Giving a live feed of the vehicle on our web server using IoT; the live feed carries data about the environment seen by the robot through the Pi-Cam.
- Remote control of the vehicle from our web server using IoT.
- A small robot fitted with various sensors (e.g. an ultrasonic sensor) to illustrate the above-mentioned features.
- A semi-automated robot able to take some real-time decisions (lane detection, stop & run, traffic light condition, etc.).

Procedure for lane detection:
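The edge-detection stage of the lane pipeline can be illustrated with a minimal gradient-magnitude edge detector in pure Python. This is a simplified stand-in for the Canny operator used on the lane images (a real implementation would use OpenCV's Canny); the threshold and the tiny test image are illustrative, not taken from the paper.

```python
# Minimal gradient-threshold edge detector: a simplified stand-in for the
# Canny step applied to the lane images (no non-maximum suppression or
# hysteresis, just Sobel gradient magnitude against a threshold).

def sobel_edges(img, threshold=2.0):
    """Return a binary edge map for a 2D grayscale image (list of lists).
    Border pixels are left as 0 because the 3x3 kernel does not fit there."""
    h, w = len(img), len(img[0])
    edges = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # Sobel horizontal (gx) and vertical (gy) gradients
            gx = (img[y-1][x+1] + 2*img[y][x+1] + img[y+1][x+1]
                  - img[y-1][x-1] - 2*img[y][x-1] - img[y+1][x-1])
            gy = (img[y+1][x-1] + 2*img[y+1][x] + img[y+1][x+1]
                  - img[y-1][x-1] - 2*img[y-1][x] - img[y-1][x+1])
            if (gx * gx + gy * gy) ** 0.5 >= threshold:
                edges[y][x] = 1
    return edges
```

On a small image with a vertical brightness step, the detector marks the interior pixels along the step as edges, which is the kind of edge map the lane image in Fig 7 shows.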
Fig 7: Canny edge detection output, from the input lane image to the edge-detected lane.
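Once an edge-detected lane image like the one in Fig 7 is available, a simple way to turn it into a driving decision is to compare the lane midpoint in the bottom row of the edge map with the image centre. This heuristic is an illustrative assumption, not the paper's exact method, and the function name and tolerance value are hypothetical.

```python
# Illustrative heuristic (assumption, not the paper's method): derive a
# steering hint from the bottom row of a binary edge map by comparing the
# midpoint of the detected lane edges with the image centre.

def steering_hint(edge_row, tolerance=1.0):
    """edge_row: bottom row of the edge map (1 = edge pixel).
    Returns 'left', 'right', or 'straight'."""
    xs = [x for x, v in enumerate(edge_row) if v]
    if not xs:
        return "straight"  # no lane edges found; hold course
    lane_mid = (min(xs) + max(xs)) / 2.0
    img_mid = (len(edge_row) - 1) / 2.0
    offset = lane_mid - img_mid
    if offset > tolerance:
        return "right"     # lane centre lies right of the camera centre
    if offset < -tolerance:
        return "left"
    return "straight"
```

A real system would smooth this over several frames and rows before commanding the motors; here a single row suffices to show the idea.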
REFERENCES
[2] R. Atallah, M. Khabbaz, and C. Assi, "Multihop V2I communications: A feasibility study, modeling, and performance analysis," IEEE Transactions on Vehicular Technology, vol. 18(2), pp. 416–430, 2017.
[3] G. Trehard et al., "Tracking both pose and status of a traffic light via an interacting multiple model filter," in 17th International Conference on Information Fusion (FUSION). IEEE, 2014, pp. 1–7.
[4] A. Almagambetov, S. Velipasalar, and A. Baitassova, "Mobile standards-based traffic light detection in assistive devices for individuals with color-vision deficiency," IEEE Transactions on Intelligent Transportation Systems, vol. 16(3), pp. 1305–1320, 2015.
[5] J. Al-Nabulsi, A. Mesleh, and A. Yunis, "Traffic light detection for colorblind individuals," in 2017 IEEE Jordan Conference on Applied Electrical Engineering and Computing Technologies (AEECT). IEEE, 2017, pp. 1–6.
[6] X. Li et al., "Traffic light recognition for complex scene with fusion detections," IEEE Transactions on Intelligent Transportation Systems, vol. 19(1), pp. 199–208, 2018.
[7] M. B. Jensen et al., "Vision for looking at traffic lights: Issues, survey, and perspectives," IEEE Transactions on Intelligent Transportation Systems, vol. 18(2), pp. 1800–1815, 2016.
[8] Y. Ji et al., "Integrating visual selective attention model with HOG features for traffic light detection and recognition," in 2015 IEEE Intelligent Vehicles Symposium (IV). IEEE, 2015, pp. 280–285.
[9] Z. Chen, Q. Shi, and X. Huang, "Automatic detection of traffic lights using support vector machine," in 2015 IEEE Intelligent Vehicles Symposium (IV). IEEE, 2015, pp. 37–40.
[10] Z. Chen and X. Huang, "Accurate and reliable detection of traffic lights using multiclass learning and multiobject tracking," IEEE Intelligent Transportation Systems Magazine, vol. 8(4), pp. 28–42, 2016.
[11] Z. Shi, Z. Zou, and C. Zhang, "Real-time traffic light detection with adaptive background suppression filter," IEEE Transactions on Intelligent Transportation Systems, pp. 690–700, 2016.
[12] X. Du et al., "Vision-based traffic light detection for intelligent vehicles," in 2017 4th International Conference on Information Science and Control Engineering (ICISCE). IEEE, 2017, pp. 1323–1326.
[13] V. John et al., "Saliency map generation by the convolutional neural network for real-time traffic light detection using template matching," IEEE Transactions on Computational Imaging, vol. 1(3), pp. 159–173, 2015.
[14] S. Saini et al., "An efficient vision-based traffic light detection and state recognition for autonomous vehicles," in 2017 IEEE Intelligent Vehicles Symposium (IV). IEEE, 2017, pp. 606–611.
[15] J. Campbell et al., "Traffic light status detection using movement patterns of vehicles," in 19th International Conference on Intelligent Transportation Systems (ITSC). IEEE, 2016, pp. 283–288.
[16] Z. Ouyang et al., "A CGANs-based scene reconstruction model using lidar point cloud," in IEEE International Symposium on Parallel and Distributed Processing with Applications. IEEE, 2017.
[17] K. Behrendt, L. Novak, and R. Botros, "A deep learning approach to traffic lights: Detection, tracking, and classification," in 2017 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2017, pp. 1370–1377.
[18] M. Weber, P. Wolf, and J. M. Zollner, "DeepTLR: A single deep convolutional network for detection and classification of traffic lights," in 2016 IEEE Intelligent Vehicles Symposium (IV). IEEE, 2016, pp. 342–348.
[19] A. Mogelmose, M. M. Trivedi, and T. B. Moeslund, "Vision-based traffic sign detection and analysis for intelligent driver assistance systems: Perspectives and survey," IEEE Transactions on Intelligent Transportation Systems, vol. 13(4), pp. 1484–1497, 2012.
[20] M. Chen, U. Challita, W. Saad et al., "Machine learning for wireless networks with artificial intelligence: A tutorial on neural networks," arXiv:1710.02913.
[21] V. D. Soni, "Challenges and solution for artificial intelligence in cybersecurity of the USA," June 10, 2020. Available at SSRN: https://ssrn.com/abstract=3624487.
[22] A. Shaout, D. Colella, and S. Awad, "Advanced driver assistance systems - past, present and future," 2011 Seventh International Computer Engineering Conference (ICENCO'2011), Giza, 2011, pp. 72–82, doi: 10.1109/ICENCO.2011.6153935.
[23] L. Li and X. Zhu, "Design concept and method of advanced driver assistance systems," 2013 Fifth International Conference on Measuring Technology and Mechatronics Automation, Hong Kong, 2013, pp. 434–437, doi: 10.1109/ICMTMA.2013.109.
[24] V. D. Soni, "Information technologies: Shaping the world under the pandemic COVID-19," Journal of Engineering Sciences, vol. 11, issue 6, June 2020, ISSN: 0377-9254, doi: 10.15433.JES.2020.V11I06.43P.112. Available at SSRN: https://ssrn.com/abstract=3634361.
[25] Q. Wu, Y. Liu, Q. Li, S. Jin, and F. Li, "The application of deep learning in computer vision," 2017 Chinese Automation Congress (CAC), Jinan, 2017, pp. 6522–6527, doi: 10.1109/CAC.2017.8243952.
[26] M. J. Cree et al., "Computer vision and image processing at the University of Waikato," 2010 25th International Conference of Image and Vision Computing New Zealand, Queenstown, 2010, pp. 1–15, doi: 10.1109/IVCNZ.2010.6148863.
[27] "A comparative study of real time operating systems for embedded systems," International Journal of Innovative Research in Computer and Communication Engineering, vol. 5, issue 1, June 2016, ISSN: 2320-9801.