
Exploring the solutions via Retinex enhancements for fruit recognition impacts of outdoor sunlight: a case study of navel oranges

  • Research Paper
  • Published:
Evolutionary Intelligence

Abstract

Machine vision-based techniques are one of the critical means of realizing intelligent orchard management. Moreover, the third wave of artificial intelligence, driven by deep learning, has promoted the application of machine vision technology in fruit recognition. Although multiple detection models can extract fruits from collected images, their accuracy often deviates because different observation times bring corresponding changes in sunlight over the course of a day. Exploring a method to solve this problem is therefore of great significance for facilitating smart orchards. On this basis, this article takes the navel orange as the study object, collecting image data over four observation periods (10:00–11:00, 12:00–13:00, 14:00–15:00, 16:00–17:00) and at two viewing distances of one meter and two meters. Corresponding algorithms are designed for each Retinex processing mode to assist YOLOv5 detection, including single-scale Retinex (SSR), multi-scale Retinex (MSR), multi-scale Retinex with color restoration (MSRCR), multi-scale Retinex with chromaticity preservation (MSRCP), and MSRCR with automatic color gradation adjustment (AutoMSRCR). The experimental results show that the 10:00–11:00 and 14:00–15:00 observation periods are more conducive to data collection: at the one-meter viewing distance, the MSR-based and MSRCR-based models from these two periods improved mean average precision (mAP) by 9.28% and 6.32%, respectively, over the original YOLOv5, while at the two-meter viewing distance the MSRCR-based and AutoMSRCR-based models achieved mAP 4.92% and 16.91% higher than the original. This article also provides technical selection schemes and analyzes the sensitivity of each model in typical impact scenarios.
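
For orientation, the sketch below shows how the SSR and MSR enhancements named above are commonly implemented as an illumination-normalization step applied to images before they are passed to a detector such as YOLOv5. The scale values, the per-channel min-max stretch, and the file names are illustrative assumptions, not the authors' published configuration; the color-restoration variants (MSRCR, MSRCP, AutoMSRCR) add further chromaticity terms that are not shown here.

```python
# A minimal sketch of Retinex preprocessing (SSR and MSR), assuming common
# default scales; not the paper's exact pipeline or parameter settings.
import cv2
import numpy as np


def single_scale_retinex(img: np.ndarray, sigma: float) -> np.ndarray:
    """SSR: log(I) - log(Gaussian-blurred I), computed per channel."""
    img = img.astype(np.float64) + 1.0           # offset to avoid log(0)
    blur = cv2.GaussianBlur(img, (0, 0), sigma)  # surround (illumination) estimate
    return np.log(img) - np.log(blur)


def multi_scale_retinex(img: np.ndarray, sigmas=(15, 80, 250)) -> np.ndarray:
    """MSR: equally weighted average of SSR outputs at small/medium/large scales."""
    return np.mean([single_scale_retinex(img, s) for s in sigmas], axis=0)


def to_uint8(retinex: np.ndarray) -> np.ndarray:
    """Stretch each channel to [0, 255] so the result can be fed to a detector."""
    out = np.zeros_like(retinex)
    for c in range(retinex.shape[2]):
        ch = retinex[:, :, c]
        out[:, :, c] = (ch - ch.min()) / (ch.max() - ch.min() + 1e-8) * 255.0
    return out.astype(np.uint8)


if __name__ == "__main__":
    bgr = cv2.imread("navel_orange.jpg")            # hypothetical input image
    enhanced = to_uint8(multi_scale_retinex(bgr))
    cv2.imwrite("navel_orange_msr.jpg", enhanced)   # enhanced image then goes to YOLOv5
```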

Acknowledgements

We would like to express our great appreciation to Shixingling Ecological Orchard Farm in Ganzhou for providing the experimental images.

Funding

This research was funded by the Natural Science Foundation of Jiangxi Province, China (Grants 20161BAB203091 and 20202BAB202025) and the National Natural Science Foundation of China (Grants 41361077 and 41561085).

Author information

Authors and Affiliations

Authors

Contributions

WJ and YM designed and implemented the proposed detection method and drafted the manuscript. WJ and QL acquired the data. WJ, YM, and QL edited the manuscript. DL supervised the work. All authors reviewed and approved the final manuscript.

Corresponding author

Correspondence to Deer Liu.

Ethics declarations

Conflict of interest

The authors declare that they have no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Cite this article

Ji, W., Liu, D., Meng, Y. et al. Exploring the solutions via Retinex enhancements for fruit recognition impacts of outdoor sunlight: a case study of navel oranges. Evol. Intel. 15, 1875–1911 (2022). https://doi.org/10.1007/s12065-021-00595-w

  • Received:

  • Revised:

  • Accepted:

  • Published:

  • Issue Date:

  • DOI: https://doi.org/10.1007/s12065-021-00595-w

Keywords