
LLD-YOLO: a multi-module network for robust vehicle detection in low-light conditions

  • Original Paper
  • Published:
Signal, Image and Video Processing

Abstract

Vehicle detection under low-light conditions remains a significant challenge in computer vision, with a notable impact on critical applications such as autonomous driving and surveillance systems. Although existing deep learning-based detectors achieve remarkable success under normal lighting, their performance degrades significantly in low-light environments owing to insufficient brightness, low contrast, and loss of fine detail. This paper presents LLD-YOLO, an enhanced YOLOv11-based network for low-light vehicle detection. It incorporates three improvements: a DarkNet module adapted from Self-Calibrating Illumination learning, which enhances low-light images through adaptive illumination adjustment; a C3k2-RA feature extraction module, which combines convolutional operations with self-attention to overcome local receptive-field limitations and capture global contextual information; and a Con-AM feature fusion module, which optimizes multi-scale feature integration through an attention mechanism for adaptive feature selection and enhancement. Extensive experiments on the ExDark dataset demonstrate that LLD-YOLO achieves superior detection performance compared to existing methods, with significant gains in accuracy and robustness under various low-light conditions: its mean average precision (mAP) reaches 83.3%, a 4.5% improvement over the baseline model, while maintaining efficient computational performance.
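The full paper is behind the paywall, so only the general idea of the Con-AM module is visible here: multi-scale features are fused with adaptive, attention-derived weights rather than by plain addition or concatenation. A minimal NumPy sketch of that general pattern follows; the function names and the global-average-pooling scoring rule are illustrative assumptions, not the paper's actual design.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_fuse(features):
    """Fuse same-shaped feature maps with adaptive per-map weights.

    features: list of (C, H, W) arrays already resized to a common scale.
    A scalar score per map comes from global average pooling; a softmax
    over the scores yields fusion weights that sum to 1, so the network
    can emphasize whichever scale is most informative.
    """
    scores = np.array([f.mean() for f in features])   # one score per map
    weights = softmax(scores)                          # adaptive weights
    fused = sum(w * f for w, f in zip(weights, features))
    return fused, weights

# Toy usage: fuse two random "feature maps" from different scales.
rng = np.random.default_rng(0)
f1 = rng.standard_normal((8, 4, 4))
f2 = rng.standard_normal((8, 4, 4))
fused, w = attention_fuse([f1, f2])
```

In a real detector the scores would be produced by learned layers (e.g. a small MLP after pooling) rather than a raw mean, but the softmax-weighted sum is the core of attention-based fusion.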


[Figures 1–5: thumbnails only; captions not included in this preview]


Data Availability

No datasets were generated or analysed during the current study.


Acknowledgements

This work was supported in part by the National Key R&D Program of China under Grant Number 2022YFB2602200, in part by the National Natural Science Foundation of China under Grant 62273263, Grant 72171172, Grant 71771176 and Grant 92367101, in part by the Aeronautical Science Foundation of China under Grant 2023Z066038001, in part by the National Natural Science Foundation of China Basic Science Research Center Program under Grant 62088101, in part by Municipal Science and Technology Major Project under Grant 2022-5-YB-09, in part by the Natural Science Foundation of Shanghai under Grant 23ZR1465400, and in part by the Fujian Province’s Education and Scientific Research Projects for Young and Middle-aged Teachers in 2022 under Grant JAT220652.

Author information


Contributions

Qin Zhang wrote the manuscript. The second author reviewed the manuscript and provided valuable suggestions for revision; the third author also reviewed the manuscript. All authors participated in the review process to ensure the manuscript's quality and integrity.

Corresponding author

Correspondence to Qin Zhang.

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Zhang, Q., Guo, W. & Lin, M. LLD-YOLO: a multi-module network for robust vehicle detection in low-light conditions. SIViP 19, 271 (2025). https://doi.org/10.1007/s11760-025-03858-6


