
Self-annotated Labelling and Training Data for Traffic Video Object Detection Using Machine Learning Techniques

  • Conference paper
  • First Online:
Computational Intelligence in Data Science (ICCIDS 2024)

Part of the book series: IFIP Advances in Information and Communication Technology (IFIPAICT, volume 717)


Abstract

Classical machine learning algorithms are sensitive to external factors such as video quality and weather, which degrades detection accuracy and leads to erroneous identification of objects in Unmanned Aerial Vehicle (UAV) imagery. Deep learning is therefore proposed for identifying vehicles in UAV video. Thanks to the technology offered by road management systems, real-time visual information is now available from thousands of locations across road networks, and locating the vehicles on the roadway is the first step in detecting or preventing accidents. Convolutional neural networks have brought substantial advances to object detection, improving on conventional computer vision techniques. However, the pre-trained models currently available achieve only low detection rates, especially for small objects. A further drawback is that retraining a vehicle detector requires manually annotating the vehicles that appear in the images captured by each IP camera on the road infrastructure, a task that becomes infeasible once hundreds of cameras are deployed across an extensive road network. The technique used in this study draws on CCTV footage with a comparable range of images, with the aim of detecting cars in each frame. The brightness of the original samples is first transformed in the HSV (Hue, Saturation, Value) space to maximize sample diversity and adaptability to different lighting conditions. The performance of the basic SSD detector is then enhanced by incorporating focal loss on the extracted features. Finally, a trained neural network model is applied to the UAV footage and the detection performance is evaluated. The results show that the proposed method achieves a car detection rate of 96.49%.
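As a rough illustration of the two steps named in the abstract (HSV-based brightness adjustment for data augmentation, and focal loss on top of an SSD-style detector), the sketch below shows one way they could look in Python with OpenCV, NumPy and PyTorch. This is not the authors' implementation: the function names, the brightness scale range and the focal-loss hyperparameters (alpha, gamma) are illustrative assumptions.

# Minimal sketch (not the paper's code): HSV brightness augmentation and
# a focal-loss term for a vehicle detector; all parameter values are assumed.
import cv2
import numpy as np
import torch
import torch.nn.functional as F

def hsv_brightness_augment(image_bgr, scale_range=(0.6, 1.4)):
    """Randomly rescale the V (brightness) channel in HSV space so the
    training set covers a wider range of lighting conditions."""
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV).astype(np.float32)
    scale = np.random.uniform(*scale_range)
    hsv[..., 2] = np.clip(hsv[..., 2] * scale, 0, 255)
    return cv2.cvtColor(hsv.astype(np.uint8), cv2.COLOR_HSV2BGR)

def focal_loss(logits, targets, alpha=0.25, gamma=2.0):
    """Binary focal loss over per-anchor class logits, down-weighting easy
    negatives so small or rare vehicles contribute more to the gradient."""
    p = torch.sigmoid(logits)
    ce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p_t = p * targets + (1 - p) * (1 - targets)
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets)
    return (alpha_t * (1 - p_t) ** gamma * ce).mean()

In such a pipeline, hsv_brightness_augment would be applied to training frames before they are fed to the detector, and focal_loss would replace the standard cross-entropy term in the SSD classification head.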



Author information


Corresponding author

Correspondence to V. Rahul Chiranjeevi.



Copyright information

© 2024 IFIP International Federation for Information Processing

About this paper


Cite this paper

Rahul Chiranjeevi, V., Swamy, M.M., Krishna Prasath, M.K., Kumar, P. (2024). Self-annotated Labelling and Training Data for Traffic Video Object Detection Using Machine Learning Techniques. In: Owoc, M.L., Varghese Sicily, F.E., Rajaram, K., Balasundaram, P. (eds) Computational Intelligence in Data Science. ICCIDS 2024. IFIP Advances in Information and Communication Technology, vol 717. Springer, Cham. https://doi.org/10.1007/978-3-031-69982-5_25

Download citation

  • DOI: https://doi.org/10.1007/978-3-031-69982-5_25

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-69981-8

  • Online ISBN: 978-3-031-69982-5

  • eBook Packages: Computer Science, Computer Science (R0)
