Research Article

A Network That Balances Accuracy and Efficiency for Lane Detection

Published: 01 January 2021

Abstract

In an automated lane-keeping system (ALKS), the vehicle must detect the boundaries of its current lane stably and accurately to position itself precisely. Lane detection algorithms based on deep learning now substantially outperform traditional methods in accuracy and handle curves and occlusions well, but mainstream algorithms struggle to balance accuracy against efficiency. To address this, we propose a single-stage method that directly outputs the parameters of a lane shape model. The method combines MobileNet v2 with a spatial CNN (SCNN) to extract lane features quickly and to learn global context information, and then applies deep polynomial regression to output a polynomial representing each lane marking in the image. We validate the proposed method on the TuSimple dataset. Experiments show that, under the same conditions, its recognition accuracy and detection speed reach the level of mainstream algorithms while achieving an effective balance between the two.
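The polynomial lane representation described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, the cubic degree, the fixed row sampling, and the 20-pixel threshold are assumptions drawn from the common TuSimple evaluation convention.

```python
import numpy as np

def fit_lane_polynomial(xs, ys, degree=3):
    """Fit x = p(y). Lanes are near-vertical in image space, so the
    polynomial maps image row (y) to column (x)."""
    return np.polyfit(ys, xs, degree)

def eval_lane(coeffs, y_samples):
    """Evaluate the lane polynomial at fixed row positions."""
    return np.polyval(coeffs, y_samples)

def pointwise_accuracy(pred_xs, gt_xs, thresh=20.0):
    """Fraction of sampled points whose horizontal error is within
    `thresh` pixels (TuSimple-style point accuracy)."""
    return float(np.mean(np.abs(np.asarray(pred_xs) - np.asarray(gt_xs)) < thresh))

# Usage: a synthetic straight lane, x = 0.5 * y + 100, sampled at
# fixed rows y = 160, 170, ..., 710 as in the TuSimple protocol.
y_samples = np.arange(160, 711, 10, dtype=float)
gt_xs = 0.5 * y_samples + 100.0
coeffs = fit_lane_polynomial(gt_xs, y_samples)   # 4 coefficients for degree 3
pred_xs = eval_lane(coeffs, y_samples)
acc = pointwise_accuracy(pred_xs, gt_xs)
```

Representing each lane by a handful of coefficients, rather than a per-pixel segmentation mask, is what lets a single-stage network of this kind trade a heavy decoding step for a small regression head.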



Information

Published In

Mobile Information Systems, Volume 2021, 6406 pages
ISSN: 1574-017X, EISSN: 1875-905X
This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Publisher

IOS Press, Netherlands
