 
 
Sensor and AI Technologies in Intelligent Agriculture: 2nd Edition

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Smart Agriculture".

Deadline for manuscript submissions: 31 October 2024 | Viewed by 4850

Special Issue Editors


Guest Editor
College of Biological and Agricultural Engineering, Jilin University, 5988 Renmin Street, Changchun 130025, China
Interests: bionic intelligent agricultural machinery; autonomous navigation; target recognition based on visual bionics; agricultural drones; agricultural artificial intelligence; soil and plant sensing; agricultural machinery information collection and control
College of Biological and Agricultural Engineering, Jilin University, 5988 Renmin Street, Changchun 130025, China
Interests: agricultural machinery; conservation tillage; sensors; automation; intelligence; plant protection

Special Issue Information

Dear Colleagues,

In recent years, sensor and artificial intelligence (AI) technologies have attracted increasing interest from both academia and industry and have been extensively applied in intelligent agriculture. Accelerating their adoption is urgently needed for the development of modern, smart agriculture. This Special Issue aims to showcase exemplary implementations of agricultural sensors and AI technologies in intelligent agricultural applications and to provide an opportunity for researchers to publish work related to this topic. Articles addressing agricultural sensors and AI technologies applied to crop and animal production are welcome. This Special Issue seeks original research articles and reviews. Its scope includes, but is not limited to, the following topics:

  • Crop sensing and sensors;
  • Animal perception and sensors;
  • Environmental information perception and sensors;
  • Agricultural equipment information collection and processing;
  • Key technologies of smart agriculture;
  • Artificial intelligence in agriculture;
  • Intelligent farm equipment;
  • Intelligent orchard equipment;
  • Intelligent garden equipment;
  • Intelligent pasture equipment;
  • Intelligent fishing-ground equipment.

Prof. Dr. Jiangtao Qi
Dr. Gang Wang
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and written in good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • sensors
  • artificial intelligence
  • intelligent agriculture

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.


Published Papers (4 papers)


Research

15 pages, 5563 KiB  
Article
Calibration and Validation of Simulation Parameters for Maize Straw Based on Discrete Element Method and Genetic Algorithm–Backpropagation
by Fandi Zeng, Hongwei Diao, Yinzeng Liu, Dong Ji, Meiling Dou, Ji Cui and Zhihuan Zhao
Sensors 2024, 24(16), 5217; https://doi.org/10.3390/s24165217 - 12 Aug 2024
Viewed by 367
Abstract
There is a significant difference between the simulation effect and the actual effect in the design process of maize straw-breaking equipment due to the lack of accurate simulation model parameters in the breaking and processing of maize straw. This article used a combination of physical experiments, virtual simulation, and machine learning to calibrate the simulation parameters of maize straw. A bimodal-distribution discrete element model of maize straw was established based on the intrinsic and contact parameters measured via physical experiments. The significance analysis of the simulation parameters was conducted via the Plackett–Burman experiment. The Poisson ratio, shear modulus, and normal stiffness of the maize straw significantly impacted the peak compression force of the maize straw and steel plate. The steepest-climb test was carried out for the significance parameter, and the relative error between the peak compression force in the simulation test and the peak compression force in the physical test was used as the evaluation index. It was found that the optimal range intervals for the Poisson ratio, shear modulus, and normal stiffness of the maize straw were 0.32–0.36, 1.24 × 10⁸–1.72 × 10⁸ Pa, and 5.9 × 10⁶–6.7 × 10⁶ N/m³, respectively. Using the experimental data of the central composite design as the dataset, a GA–BP neural network prediction model for the peak compression force of maize straw was established, analyzed, and evaluated. The GA–BP prediction model’s accuracy was verified via experiments. It was found that the ideal combination of parameters was a Poisson ratio of 0.357, a shear modulus of 1.511 × 10⁸ Pa, and a normal stiffness of 6.285 × 10⁶ N/m³ for the maize straw. The results provide a basis for analyzing the damage mechanism of maize straw during the grinding process. Full article
(This article belongs to the Special Issue Sensor and AI Technologies in Intelligent Agriculture: 2nd Edition)
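The calibration workflow in this abstract pairs a genetic algorithm with a neural-network surrogate of the simulated peak compression force. The sketch below illustrates only the GA half of that idea: it searches the reported parameter ranges for a combination whose predicted force matches a target measurement. The surrogate function, target value, and GA settings are hypothetical stand-ins, not the authors' actual model.

```python
import random

random.seed(0)

# Hypothetical surrogate for the simulated peak compression force (N) as a
# function of (Poisson ratio, shear modulus in Pa, normal stiffness in N/m^3).
# In the paper this role is played by a GA-optimised BP network trained on
# central-composite-design simulation data.
def predicted_peak_force(nu, g, k):
    return 900 * nu + 1.2e-6 * g + 4.0e-5 * k

TARGET = 800.0  # made-up "measured" peak force from a physical test

# Search bounds taken from the steepest-climb ranges reported in the abstract.
BOUNDS = [(0.32, 0.36), (1.24e8, 1.72e8), (5.9e6, 6.7e6)]

def fitness(ind):
    # Higher fitness = smaller absolute error against the measurement.
    return -abs(predicted_peak_force(*ind) - TARGET)

def random_individual():
    return [random.uniform(lo, hi) for lo, hi in BOUNDS]

def crossover(a, b):
    # Uniform crossover: each gene comes from either parent.
    return [random.choice(pair) for pair in zip(a, b)]

def mutate(ind, rate=0.2):
    return [random.uniform(lo, hi) if random.random() < rate else x
            for x, (lo, hi) in zip(ind, BOUNDS)]

def run_ga(pop_size=40, generations=60):
    pop = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 4]  # keep the best quarter
        children = [mutate(crossover(random.choice(elite), random.choice(elite)))
                    for _ in range(pop_size - len(elite))]
        pop = elite + children
    return max(pop, key=fitness)

best = run_ga()
print(best, -fitness(best))  # calibrated parameters and remaining error
```

In the paper the GA instead tunes the BP network's initial weights; the search-over-bounds pattern shown here is the same.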

17 pages, 20371 KiB  
Article
YOLOv8 Model for Weed Detection in Wheat Fields Based on a Visual Converter and Multi-Scale Feature Fusion
by Yinzeng Liu, Fandi Zeng, Hongwei Diao, Junke Zhu, Dong Ji, Xijie Liao and Zhihuan Zhao
Sensors 2024, 24(13), 4379; https://doi.org/10.3390/s24134379 - 5 Jul 2024
Viewed by 570
Abstract
Accurate weed detection is essential for the precise control of weeds in wheat fields, but weeds and wheat shelter one another and vary widely in size, making it difficult to accurately detect weeds in wheat. To achieve precise identification of weeds, wheat weed datasets were constructed, and a wheat field weed detection model, YOLOv8-MBM, based on improved YOLOv8s, was proposed. In this study, a lightweight visual converter (MobileViTv3) was introduced into the C2f module to enhance the detection accuracy of the model by integrating input, local (CNN), and global (ViT) features. Secondly, a bidirectional feature pyramid network (BiFPN) was introduced to enhance the performance of multi-scale feature fusion. Furthermore, to address the weak generalization and slow convergence speed of the CIoU loss function for detection tasks, the bounding box regression loss function MPDIoU was used instead of the CIoU loss function to improve the convergence speed of the model and further enhance the detection performance. Finally, the model performance was tested on the wheat weed datasets. The experiments show that the YOLOv8-MBM proposed in this paper is superior to Fast R-CNN, YOLOv3, YOLOv4-tiny, YOLOv5s, YOLOv7, YOLOv9, and other mainstream models in terms of detection performance. The accuracy of the improved model reaches 92.7%. Compared with the original YOLOv8s model, the precision, recall, mAP1, and mAP2 are increased by 10.6%, 8.9%, 9.7%, and 9.3%, respectively. In summary, the YOLOv8-MBM model successfully meets the requirements for accurate weed detection in wheat fields. Full article
(This article belongs to the Special Issue Sensor and AI Technologies in Intelligent Agriculture: 2nd Edition)
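The MPDIoU loss mentioned in this abstract augments IoU with penalties on the distances between corresponding box corners, normalised by the image diagonal. A minimal sketch of the metric, following the published MPDIoU formulation (variable names are ours):

```python
def mpdiou(box_a, box_b, img_w, img_h):
    """Minimum Point Distance IoU between two (x1, y1, x2, y2) boxes.

    MPDIoU = IoU - d1^2/D^2 - d2^2/D^2, where d1 and d2 are the distances
    between the top-left and bottom-right corners of the two boxes, and
    D^2 = img_w^2 + img_h^2 normalises by the image size.
    """
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b

    # Intersection area
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih

    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    iou = inter / union if union > 0 else 0.0

    d1_sq = (ax1 - bx1) ** 2 + (ay1 - by1) ** 2  # top-left corner distance
    d2_sq = (ax2 - bx2) ** 2 + (ay2 - by2) ** 2  # bottom-right corner distance
    norm = img_w ** 2 + img_h ** 2
    return iou - d1_sq / norm - d2_sq / norm

# Identical boxes score 1.0; disjoint, distant boxes score below 0, which
# gives the training loss (1 - MPDIoU) a useful gradient even without overlap.
print(mpdiou((10, 10, 50, 50), (10, 10, 50, 50), 640, 640))  # 1.0
```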

12 pages, 2248 KiB  
Communication
Automatic Shrimp Fry Counting Method Using Multi-Scale Attention Fusion
by Xiaohong Peng, Tianyu Zhou, Ying Zhang and Xiaopeng Zhao
Sensors 2024, 24(9), 2916; https://doi.org/10.3390/s24092916 - 2 May 2024
Viewed by 1445
Abstract
Shrimp fry counting is an important task for biomass estimation in aquaculture. Accurate counting of the number of shrimp fry in tanks can not only assess the production of mature shrimp but also assess the density of shrimp fry in the tanks, which is very helpful for subsequent growth monitoring, transportation management, and yield assessment. However, traditional manual counting methods are often inefficient and prone to counting errors; a more efficient and accurate method for shrimp fry counting is urgently needed. In this paper, we first collected and labeled images of shrimp fry in breeding tanks according to the constructed experimental environment and generated corresponding density maps using the Gaussian kernel function. Then, we proposed a multi-scale attention fusion-based shrimp fry counting network called the SFCNet. Experiments showed that our proposed SFCNet model reached the optimal performance in terms of shrimp fry counting compared to CNN-based baseline counting models, with an MAE of 3.96 and an RMSE of 4.682. The SFCNet thus provides an effective and accurate solution for counting shrimp fry. Full article
(This article belongs to the Special Issue Sensor and AI Technologies in Intelligent Agriculture: 2nd Edition)
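The Gaussian-kernel density maps mentioned above are a standard counting-by-regression target: each annotated head point contributes one normalised Gaussian, so the map's integral equals the count. A small pure-Python sketch (grid size, sigma, and point coordinates are illustrative, not the paper's data):

```python
import math

def gaussian_density_map(points, width, height, sigma=2.0):
    """Build a density map by placing one discretely normalised Gaussian
    per annotated point, so the map sums to the number of points."""
    density = [[0.0] * width for _ in range(height)]
    for px, py in points:
        kernel, total = [], 0.0
        for y in range(height):
            for x in range(width):
                w = math.exp(-((x - px) ** 2 + (y - py) ** 2) / (2 * sigma ** 2))
                kernel.append((y, x, w))
                total += w
        for y, x, w in kernel:
            density[y][x] += w / total  # each point contributes exactly 1
    return density

points = [(5, 5), (12, 8), (20, 15)]  # hypothetical annotated head positions
dmap = gaussian_density_map(points, width=32, height=24)
count = sum(sum(row) for row in dmap)
print(round(count))  # 3
```

A counting network such as the SFCNet is then trained to regress these maps from images; summing its output map yields the predicted count.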

19 pages, 10732 KiB  
Article
Intrarow Uncut Weed Detection Using You-Only-Look-Once Instance Segmentation for Orchard Plantations
by Rizky Mulya Sampurno, Zifu Liu, R. M. Rasika D. Abeyrathna and Tofael Ahamed
Sensors 2024, 24(3), 893; https://doi.org/10.3390/s24030893 - 30 Jan 2024
Cited by 7 | Viewed by 2032
Abstract
Mechanical weed management is a drudging task that requires manpower and carries risks when conducted within the rows of orchards. Intrarow weeding must still be conducted by manual labor because the confined row structures, with their nets and poles, restrict the movement of riding mowers. Meanwhile, autonomous robotic weeders still face challenges in identifying uncut weeds because poles and tree canopies obstruct Global Navigation Satellite System (GNSS) signals. A properly designed intelligent vision system would have the potential to achieve the desired outcome by enabling an autonomous weeder to perform operations in uncut sections. Therefore, the objective of this study is to develop a vision module, using a custom-trained dataset on YOLO instance segmentation algorithms, to support autonomous robotic weeders in recognizing uncut weeds and obstacles (i.e., fruit tree trunks, fixed poles) within rows. The training dataset was acquired from a pear orchard located at the Tsukuba Plant Innovation Research Center (T-PIRC) at the University of Tsukuba, Japan. In total, 5000 images were preprocessed and labeled for training and testing with YOLO models. Four versions of edge-device-dedicated YOLO instance segmentation were utilized in this research (YOLOv5n-seg, YOLOv5s-seg, YOLOv8n-seg, and YOLOv8s-seg) for real-time application with an autonomous weeder. A comparison study was conducted to evaluate all YOLO models in terms of detection accuracy, model complexity, and inference speed. The smaller YOLOv5-based and YOLOv8-based models were found to be more efficient than the larger models, and YOLOv8n-seg was selected as the vision module for the autonomous weeder. In the evaluation, YOLOv8n-seg had better segmentation accuracy than YOLOv5n-seg, while the latter had the fastest inference time. The performance of YOLOv8n-seg was also acceptable when deployed on a resource-constrained device appropriate for robotic weeders. The results indicate that the detection accuracy and inference speed of the proposed deep learning approach are suitable for object recognition on edge devices during intrarow weeding operations in orchards. Full article
(This article belongs to the Special Issue Sensor and AI Technologies in Intelligent Agriculture: 2nd Edition)
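The model-selection step in this study trades segmentation accuracy against inference speed on an edge device. The sketch below illustrates that kind of selection logic with made-up benchmark numbers (the mAP and latency values are illustrative, not the paper's measurements):

```python
# Hypothetical benchmark table for the four edge-oriented models compared in
# the study; the figures are invented for illustration only.
candidates = {
    "YOLOv5n-seg": {"map": 0.78, "latency_ms": 18.0},
    "YOLOv5s-seg": {"map": 0.81, "latency_ms": 31.0},
    "YOLOv8n-seg": {"map": 0.80, "latency_ms": 21.0},
    "YOLOv8s-seg": {"map": 0.83, "latency_ms": 36.0},
}

def pick_model(models, latency_budget_ms):
    """Among models meeting the real-time latency budget, pick the most
    accurate one; raise if nothing fits the budget."""
    feasible = {name: stats for name, stats in models.items()
                if stats["latency_ms"] <= latency_budget_ms}
    if not feasible:
        raise ValueError("no model meets the latency budget")
    return max(feasible, key=lambda name: feasible[name]["map"])

# With a 25 ms budget only the nano models qualify, and YOLOv8n-seg wins on
# accuracy, mirroring the trade-off that led the authors to choose it.
print(pick_model(candidates, latency_budget_ms=25.0))  # YOLOv8n-seg
```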
