Research on Classification Model of Panax notoginseng Taproots Based on Machine Vision Feature Fusion
Abstract
1. Introduction
2. Materials and Methods
2.1. Test Materials
2.2. The Grading Standard of Panax notoginseng Taproots
2.3. Image Acquisition System Introduction
2.4. Image Preprocessing
2.5. Feature Extraction
2.5.1. Shape and Size Feature Extraction
2.5.2. Color Feature Extraction
2.5.3. Texture Feature Extraction
2.6. Data Processing and Model Evaluation
2.6.1. Pretreatment and Dimensionality Reduction of the Panax notoginseng Taproot Features
2.6.2. Modeling Method and Model Evaluation
3. Test Results and Analysis
3.1. Comparison of Fusion Feature Classification Models in Different Dimensions
- (1)
- When shape and size were taken as the features, SVM achieved the highest accuracy of the three classification models: 76.746% on the training set and 75.185% on the test set.
- (2)
- When shape, size, and texture were taken as the features, on the test set, the accuracy of the BP neural network was 14.782% higher than that without texture features; the accuracy of ELM was 17.222% higher than that without texture features; the accuracy of SVM was 15.222% higher than that without texture features. Thus, texture features were important in the taproot classification of Panax notoginseng. Among the three classification models, SVM achieved the highest accuracy on both the training set and the test set.
- (3)
- When shape, size, and color were taken as the features, on the test set, the accuracy of the BP neural network was 23.309% higher than that without color features; the accuracy of ELM was 36.297% higher than that without color features; the accuracy of SVM was 16.111% higher than that without color features. Thus, color features were important in the taproot classification of Panax notoginseng. Among the three classification models, SVM achieved the highest accuracy on both the training set and the test set.
- (4)
- When shape, size, texture, and color were taken as the features, on the test set, the accuracy of the BP neural network was 1.47% higher than with color but no texture features and 10.005% higher than with texture but no color features; the accuracy of ELM was 5.135% higher than with color but no texture features and 24.21% higher than with texture but no color features; the accuracy of SVM was 0.741% higher than with color but no texture features and 1.63% higher than with texture but no color features.
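Of the three classifiers compared above, ELM is the simplest to illustrate: it projects inputs through a fixed random hidden layer and solves the output weights in closed form with a pseudo-inverse. The sketch below uses synthetic stand-in data (three separable 2-D clusters), not the paper's fused feature vectors or its network sizes:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for fused feature vectors: three well-separated
# 2-D clusters, one per "grade" (illustrative, not the paper's data).
centers = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 4.0]])
X = np.vstack([rng.normal(c, 0.3, size=(50, 2)) for c in centers])
y = np.repeat([0, 1, 2], 50)
Y = np.eye(3)[y]                      # one-hot targets

# ELM: a random, untrained hidden layer ...
W = rng.normal(size=(2, 40))
b = rng.normal(size=40)
H = np.tanh(X @ W + b)                # hidden activations

# ... and a closed-form least-squares solution for the output weights.
beta = np.linalg.pinv(H) @ Y

pred = (H @ beta).argmax(axis=1)
train_acc = (pred == y).mean()
print(f"ELM training accuracy: {train_acc:.3f}")
```

Because the hidden layer is never trained, fitting costs one matrix pseudo-inverse, which is why ELM trains far faster than BP on the same features.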
3.2. Comparison and Analysis of Different Feature Selection Methods
3.2.1. Selection of Characteristic Variables Based on IRIV
3.2.2. Selection of Characteristic Variables Based on VISSA
3.2.3. Selection of Characteristic Variables Based on SRA
3.3. Establishment of the Classification Model
3.3.1. Establishment of SVM Classification Model Based on Feature Selection
3.3.2. The Establishment of Deep Learning Network Model Based on Semantic Segmentation
3.4. Model Comparison
3.5. Optimization of IRIV-SVM Hierarchical Model
3.6. Model Validation
4. Discussion
5. Conclusions
- (1)
- A model based on image feature fusion was established for classifying Panax notoginseng taproots. Taproot images collected with a CCD camera were preprocessed, and 40 shape, size, color, and texture features were extracted. The classification accuracies of the BP, ELM, and SVM models demonstrated the importance of color, texture, and fused features for taproot classification. Three feature selection algorithms, i.e., IRIV, VISSA, and SRA, were used to reduce the dimensionality of the full feature set, yielding optimal combinations of 10, 21, and 22 feature variables, respectively, and eliminating redundant feature data.
- (2)
- Deep learning can automatically extract image features layer by layer through convolutional neural networks and then perform classification and recognition through classifiers. This research established a deep learning classification model for Panax notoginseng taproots, selecting PSPNet, U-net, and DeepLabv3+ for semantic segmentation, with ResNet50, VGG16, and MobileNet as feature extraction networks. Mean pixel accuracy (MPA) and mean intersection over union (MIoU) over the categories were used as evaluation indicators. The results show that the PSPNet model achieved the best overall performance, with an MPA of 77.98% and an MIoU of 88.97% on the test set.
- (3)
- A traditional machine learning SVM classification model based on feature selection and a deep learning model based on semantic segmentation were established. Among the feature-selection SVM models, the IRIV-SVM model achieved the highest accuracies: 94.048% on the training set and 95.370% on the test set. Among the deep learning models, PSPNet achieved the highest MPA (77.98%) and MIoU (88.97%). Compared with IRIV-SVM, PSPNet automatically extracts image features layer by layer and avoids manual feature selection, but its model size was 0.65 GB and its training time was 9 h, placing high demands on hardware. Although IRIV-SVM requires data preparation and manual feature selection, its model size was only 125 KB and its training time was 3.4 s; reducing the number of features effectively increased both the efficiency and the accuracy of the model. Therefore, IRIV-SVM was chosen as the classification model for Panax notoginseng taproots.
- (4)
- The GWO, GA, and PSO algorithms were introduced to optimize the IRIV-SVM model. The results show that the IRIV-GWO-SVM classification model achieved the best optimization effect, with a classification accuracy of 98.704% on the test set, an improvement of 3.334 percentage points over the unoptimized IRIV-SVM.
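The texture features in conclusion (1) are not itemized in this section; a common hand-crafted choice for this kind of task is the gray-level co-occurrence matrix (GLCM), from which statistics such as contrast, energy, and homogeneity are read off. The sketch below is an illustration of that general technique under assumed settings (2 gray levels, a single horizontal offset), not the paper's documented feature set:

```python
import numpy as np

def glcm(img, levels, dx=1, dy=0):
    """Normalized gray-level co-occurrence matrix for one pixel offset."""
    g = np.zeros((levels, levels))
    h, w = img.shape
    for yy in range(h - dy):
        for xx in range(w - dx):
            g[img[yy, xx], img[yy + dy, xx + dx]] += 1
    return g / g.sum()

def texture_features(p):
    """Contrast, energy, and homogeneity from a normalized GLCM p."""
    i, j = np.indices(p.shape)
    contrast = ((i - j) ** 2 * p).sum()
    energy = (p ** 2).sum()
    homogeneity = (p / (1.0 + np.abs(i - j))).sum()
    return contrast, energy, homogeneity

# Tiny quantized image (2 gray levels) for a hand-checkable example:
# horizontal neighbor pairs are (0,0) and (1,1), so the GLCM is diagonal.
img = np.array([[0, 0],
                [1, 1]])
p = glcm(img, levels=2)
contrast, energy, homogeneity = texture_features(p)
print(contrast, energy, homogeneity)   # 0.0 0.5 1.0
```

In practice several offsets and angles are accumulated and their statistics concatenated into the texture portion of the fused feature vector.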
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Grades | Root Weight/g | Number of Heads/pcs |
---|---|---|
Grade 1 | ≥25.0 | ≤20 |
Grade 2 | ≥17.0 | ≤30 |
Grade 3 | ≥12.5 | ≤40 |
Grade 4 | ≥8.5 | ≤60 |
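The grading standard above can be expressed as a simple cascade: a taproot receives the highest grade whose weight and head-count thresholds it both satisfies. A minimal sketch (the function name and the 0-for-ungraded convention are illustrative choices, not part of the standard):

```python
# Thresholds from the grading standard: (grade, min root weight in g, max heads).
GRADES = [(1, 25.0, 20), (2, 17.0, 30), (3, 12.5, 40), (4, 8.5, 60)]

def grade_taproot(weight_g, heads):
    """Return the best grade whose thresholds are both met, or 0 if none."""
    for grade, min_weight, max_heads in GRADES:
        if weight_g >= min_weight and heads <= max_heads:
            return grade
    return 0  # below the Grade 4 thresholds

print(grade_taproot(26.0, 18))  # 1
print(grade_taproot(30.0, 50))  # heavy but too many heads for 1-3 -> 4
print(grade_taproot(8.0, 10))   # too light for any grade -> 0
```

Note that both conditions must hold at a given grade: a heavy root with many heads falls through to a lower grade, which is why the check proceeds from Grade 1 downward.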
Model | Fusion Feature | Feature Number/pcs | Training Set Accuracy/% | Test Set Accuracy/% |
---|---|---|---|---|
BP | Shape, size | 9 | 67.963 | 65.234 |
ELM | Shape, size | 9 | 41.825 | 35.185 |
SVM | Shape, size | 9 | 76.746 | 75.185 |
BP | Shape, size, texture | 16 | 83.889 | 80.016 |
ELM | Shape, size, texture | 16 | 53.095 | 52.407 |
SVM | Shape, size, texture | 16 | 89.508 | 90.407 |
BP | Shape, size, color | 33 | 89.630 | 88.542 |
ELM | Shape, size, color | 33 | 73.254 | 71.482 |
SVM | Shape, size, color | 33 | 90.793 | 91.296 |
BP | Shape, size, texture, color | 40 | 90.741 | 90.021 |
ELM | Shape, size, texture, color | 40 | 79.683 | 76.617 |
SVM | Shape, size, texture, color | 40 | 91.191 | 92.037 |
Model | Model Size/KB | Training Time/s | Feature Number/pcs | Training Set Accuracy/% | Test Set Accuracy/% |
---|---|---|---|---|---|
IRIV-SVM | 125 | 3.4 | 10 | 94.048 | 95.370 |
VISSA-SVM | 264 | 0.371 | 21 | 92.222 | 92.778 |
SR-SVM | 270 | 0.318 | 22 | 91.984 | 92.963 |
Model | Model Size/G | Training Time/h | MPA/% | MIoU/% |
---|---|---|---|---|
U-net | 0.21 | 5.5 | 69.21 | 80.89 |
PSPNet | 0.65 | 9 | 77.98 | 88.97 |
DeepLabv3+ | 1.05 | 11.5 | 75.89 | 86.24 |
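The MPA and MIoU columns above follow the standard per-class definitions over a pixel-level confusion matrix: per-class pixel accuracy is the diagonal over the row sum, per-class IoU is the diagonal over (row sum + column sum − diagonal), each averaged over classes. A minimal numpy sketch with a made-up 2-class confusion matrix (not the paper's data):

```python
import numpy as np

def mpa_miou(cm):
    """Mean pixel accuracy and mean IoU from a confusion matrix.

    cm[i, j] = number of pixels of true class i predicted as class j.
    """
    cm = np.asarray(cm, dtype=float)
    diag = np.diag(cm)
    per_class_acc = diag / cm.sum(axis=1)                     # recall per class
    per_class_iou = diag / (cm.sum(axis=1) + cm.sum(axis=0) - diag)
    return per_class_acc.mean(), per_class_iou.mean()

# Made-up 2-class (background / taproot) pixel confusion matrix.
cm = [[5, 1],
      [2, 4]]
mpa, miou = mpa_miou(cm)
print(f"MPA={mpa:.4f}, MIoU={miou:.4f}")
```

Because IoU also penalizes false positives in the denominator, per-class IoU is never larger than per-class accuracy for the same class, which is why the two metrics are reported together.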
Model | Training Set Accuracy/% | Test Set Accuracy/% | Best c | Best g |
---|---|---|---|---|
IRIV-GWO-SVM | 97.460 | 98.704 | 67.889 | 0.201 |
IRIV-GA-SVM | 97.540 | 98.519 | 60.138 | 0.224 |
IRIV-PSO-SVM | 97.937 | 97.778 | 100.000 | 0.836 |
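GWO, used above to tune the SVM penalty c and kernel parameter g, steers a pack of candidate solutions toward its three best members (alpha, beta, delta) with a step size that decays over iterations. The sketch below applies the standard GWO update equations to a stand-in objective (a simple quadratic) rather than SVM cross-validation accuracy, since the latter requires the trained model; the bounds, pack size, and iteration count are illustrative assumptions:

```python
import numpy as np

def gwo(objective, bounds, n_wolves=20, n_iter=100, seed=0):
    """Minimize `objective` over a box via Grey Wolf Optimization."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    dim = lo.size
    wolves = rng.uniform(lo, hi, size=(n_wolves, dim))
    for t in range(n_iter):
        a = 2.0 * (1.0 - t / n_iter)               # step size decays 2 -> 0
        fitness = np.array([objective(w) for w in wolves])
        leaders = wolves[np.argsort(fitness)[:3]]  # alpha, beta, delta
        for i in range(n_wolves):
            candidates = []
            for leader in leaders:
                r1 = rng.random(dim)
                r2 = rng.random(dim)
                A = 2.0 * a * r1 - a               # exploration/exploitation
                C = 2.0 * r2
                D = np.abs(C * leader - wolves[i])
                candidates.append(leader - A * D)
            wolves[i] = np.clip(np.mean(candidates, axis=0), lo, hi)
    fitness = np.array([objective(w) for w in wolves])
    best = wolves[fitness.argmin()]
    return best, fitness.min()

# Stand-in for the SVM (c, g) search: a quadratic with optimum at (3, 0.5).
best, best_f = gwo(lambda p: (p[0] - 3.0) ** 2 + (p[1] - 0.5) ** 2,
                   bounds=[(0.0, 100.0), (0.0, 1.0)])
print(best, best_f)
```

In the paper's setting, the objective would instead train an SVM with the candidate (c, g) and return the negative cross-validation accuracy, so the table's Best c and Best g columns are simply the alpha wolf's final position.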
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Zhu, Y.; Zhang, F.; Li, L.; Lin, Y.; Zhang, Z.; Shi, L.; Tao, H.; Qin, T. Research on Classification Model of Panax notoginseng Taproots Based on Machine Vision Feature Fusion. Sensors 2021, 21, 7945. https://doi.org/10.3390/s21237945