
Research on Visual Navigation Technology of Citrus Orchard Based on Improved DeepLabv3+ Model

Published: 01 June 2024

Abstract

Citrus orchards currently lack a high-precision visual navigation technology. This study therefore proposes a navigation line extraction method for citrus orchards based on the DeepLabv3+ model. First, the Xception backbone of DeepLabv3+ is replaced with the more lightweight MobileNetV3, and the ordinary convolutions in the decoder are replaced with depthwise separable convolutions to reduce the number of model parameters and increase computational speed. Second, a 2×2 convolutional kernel is added to the ASPP module of DeepLabv3+ to improve segmentation accuracy at the boundaries between the citrus trees and the background. The improved DeepLabv3+ model outputs semantic segmentation images from which road edges and path keypoints are extracted. Multiple segments of cubic B-spline curves are then fitted to the extracted road keypoints to generate the final road navigation lines. Experimental results demonstrate that the improved model offers significant advantages over other mainstream models, achieving an mPA of 94.71% and a processing speed of 96.62 frames per second. The average yaw deviation of the generated paths is 3.56% (in pixels), meeting navigation requirements in a variety of environments. This method can therefore support autonomous navigation technology for agricultural drones in citrus orchards.
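The parameter savings from swapping ordinary decoder convolutions for depthwise separable ones can be illustrated with a simple count. This is a generic sketch of the well-known arithmetic, not the authors' code; the 3×3 kernel and 256→256 channel sizes are illustrative assumptions, not figures from the paper.

```python
# Parameter counts: standard convolution vs. depthwise separable convolution.
# Kernel size and channel counts below are illustrative, not the paper's.

def conv_params(k: int, c_in: int, c_out: int) -> int:
    # Standard convolution: every output channel mixes all input channels
    # with its own k x k filter.
    return k * k * c_in * c_out

def dsc_params(k: int, c_in: int, c_out: int) -> int:
    # Depthwise stage: one k x k filter per input channel,
    # plus a pointwise 1x1 convolution to mix channels.
    return k * k * c_in + c_in * c_out

standard = conv_params(3, 256, 256)   # 589,824 weights
separable = dsc_params(3, 256, 256)   # 67,840 weights
```

For this assumed layer the separable form uses roughly 8.7× fewer parameters (the ratio is 1/c_out + 1/k² of the standard count), which is the mechanism behind the reported speedup.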
Keywords: DeepLabv3+; depthwise separable convolution; MobileNetV3; semantic segmentation; route fitting
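The cubic B-spline fitting described in the abstract can be sketched as follows. This is a minimal, hypothetical illustration of uniform cubic B-spline evaluation (not the authors' implementation); keypoint values are made up for demonstration.

```python
# Hypothetical sketch: joining road keypoints into a smooth navigation line
# with uniform cubic B-spline segments, as the abstract describes.

def cubic_bspline_point(p0, p1, p2, p3, t):
    """Evaluate one uniform cubic B-spline segment at t in [0, 1].

    p0..p3 are (x, y) control points (e.g. extracted road keypoints).
    The curve does not interpolate the keypoints but stays near them,
    smoothing jitter in the segmented road edge.
    """
    b0 = (1 - t) ** 3 / 6.0
    b1 = (3 * t**3 - 6 * t**2 + 4) / 6.0
    b2 = (-3 * t**3 + 3 * t**2 + 3 * t + 1) / 6.0
    b3 = t**3 / 6.0
    x = b0 * p0[0] + b1 * p1[0] + b2 * p2[0] + b3 * p3[0]
    y = b0 * p0[1] + b1 * p1[1] + b2 * p2[1] + b3 * p3[1]
    return (x, y)

def fit_navigation_line(keypoints, samples_per_segment=10):
    """Chain segments over a sliding window of four keypoints."""
    path = []
    for i in range(len(keypoints) - 3):
        p0, p1, p2, p3 = keypoints[i:i + 4]
        for s in range(samples_per_segment):
            t = s / samples_per_segment
            path.append(cubic_bspline_point(p0, p1, p2, p3, t))
    return path
```

Because each sampled point depends on only four neighbouring keypoints, a single noisy keypoint perturbs the path only locally, which is one reason piecewise B-splines suit navigation-line generation.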


Cited By

  • (2025) GPS-free autonomous navigation in cluttered tree rows with deep semantic segmentation. Robotics and Autonomous Systems, 183: 104854. DOI: 10.1016/j.robot.2024.104854. Online publication date: Jan 2025.


Published In

CVDL '24: Proceedings of the International Conference on Computer Vision and Deep Learning
January 2024
506 pages
ISBN:9798400718199
DOI:10.1145/3653804

Publisher

Association for Computing Machinery

New York, NY, United States



Qualifiers

  • Research-article
  • Research
  • Refereed limited

