Search Results (26)

Search Parameters:
Keywords = modified faster R-CNN

21 pages, 10870 KiB  
Article
An Improved Instance Segmentation Method for Fast Assessment of Damaged Buildings Based on Post-Earthquake UAV Images
by Ran Zou, Jun Liu, Haiyan Pan, Delong Tang and Ruyan Zhou
Sensors 2024, 24(13), 4371; https://doi.org/10.3390/s24134371 - 5 Jul 2024
Cited by 1 | Viewed by 1131
Abstract
Quickly and accurately assessing the damage level of buildings is a challenging task for post-disaster emergency response. Most of the existing research mainly adopts semantic segmentation and object detection methods, which have yielded good results. However, for high-resolution Unmanned Aerial Vehicle (UAV) imagery, these methods may result in the problem of various damage categories within a building and fail to accurately extract building edges, thus hindering post-disaster rescue and fine-grained assessment. To address this issue, we proposed an improved instance segmentation model that enhances classification accuracy by incorporating a Mixed Local Channel Attention (MLCA) mechanism in the backbone and improving small object segmentation accuracy by refining the Neck part. The method was tested on the Yangbi earthquake UAV images. The experimental results indicated that the modified model outperformed the original model by 1.07% and 1.11% in the two mean Average Precision (mAP) evaluation metrics, mAPbbox50 and mAPseg50, respectively. Importantly, the classification accuracy of the intact category was improved by 2.73% and 2.73%, respectively, while the collapse category saw an improvement of 2.58% and 2.14%. In addition, the proposed method was also compared with state-of-the-art instance segmentation models, e.g., Mask R-CNN and YOLO V9-Seg. The results demonstrated that the proposed model exhibits advantages in both accuracy and efficiency. Specifically, the proposed model is three times faster than other models with similar accuracy. The proposed method can provide a valuable solution for fine-grained building damage evaluation. Full article
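The core idea behind a channel-attention mechanism such as MLCA is to reweight each feature channel by a gate computed from the channel's own statistics. The following is a minimal pure-Python sketch of that squeeze-gate-reweight pattern; it is an illustration only, not the paper's MLCA module (which mixes local and global pooling), and the function name and shapes are our own.

```python
import math

def channel_attention(feature_maps):
    """Toy channel-attention gate: rescale each channel's 2D map by a
    sigmoid of its global average (squeeze -> gate -> reweight).
    `feature_maps` is a list of channels, each a 2D list of floats.
    Illustrative sketch only, not the MLCA module from the paper."""
    gated = []
    for fmap in feature_maps:
        flat = [v for row in fmap for v in row]
        avg = sum(flat) / len(flat)            # global average pooling
        weight = 1.0 / (1.0 + math.exp(-avg))  # sigmoid gate in (0, 1)
        gated.append([[v * weight for v in row] for row in fmap])
    return gated
```

Channels whose average activation is strongly negative get a gate near 0 and are suppressed; strongly positive channels pass through nearly unchanged.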

15 pages, 3252 KiB  
Article
Precision Agriculture: Computer Vision-Enabled Sugarcane Plant Counting in the Tillering Phase
by Muhammad Talha Ubaid and Sameena Javaid
J. Imaging 2024, 10(5), 102; https://doi.org/10.3390/jimaging10050102 - 26 Apr 2024
Cited by 1 | Viewed by 2295
Abstract
Sugarcane is the world’s most significant crop by production quantity. It is the primary source for sugar, ethanol, chipboards, paper, barrages, and confectionery. Many people around the globe are involved in sugarcane production and its products. Sugarcane industries make agreements with farmers before the tillering phase of the plants, and they are keen to know a sugarcane field’s pre-harvest estimate for planning their production and purchases. The contribution of this research is twofold: we publish our newly developed dataset and present a methodology to estimate the number of sugarcane plants in the tillering phase. The dataset was obtained from sugarcane fields in the fall season. In this work, a modified architecture of Faster R-CNN with feature extraction using VGG-16 with Inception-v3 modules and a sigmoid threshold function has been proposed for the detection and classification of sugarcane plants. Significantly promising results with 82.10% accuracy have been obtained with the proposed architecture, showing the viability of the developed methodology. Full article
(This article belongs to the Special Issue Imaging Applications in Agriculture)

17 pages, 19458 KiB  
Technical Note
CamoNet: A Target Camouflage Network for Remote Sensing Images Based on Adversarial Attack
by Yue Zhou, Wanghan Jiang, Xue Jiang, Lin Chen and Xingzhao Liu
Remote Sens. 2023, 15(21), 5131; https://doi.org/10.3390/rs15215131 - 27 Oct 2023
Cited by 2 | Viewed by 1849
Abstract
Object detection algorithms based on convolutional neural networks (CNNs) have achieved remarkable success in remote sensing images (RSIs), such as aircraft and ship detection, which play a vital role in military and civilian fields. However, CNNs are fragile and can be easily fooled. There have been a series of studies on adversarial attacks for image classification in RSIs. However, the existing gradient attack algorithms designed for classification cannot achieve excellent performance when directly applied to object detection, which is an essential task in RSI understanding. Although we can find some works on adversarial attacks for object detection, they are weak in concealment and easily detected by the naked eye. To handle these problems, we propose a target camouflage network for object detection in RSIs, called CamoNet, to deceive CNN-based detectors by adding imperceptible perturbation to the image. In addition, we propose a detection space initialization strategy to maximize the diversity in the detector’s outputs among the generated samples. It can enhance the performance of the gradient attack algorithms in the object detection task. Moreover, a key pixel distillation module is employed, which can further reduce the modified pixels without weakening the concealment effect. Compared with several of the most advanced adversarial attacks, the proposed attack has advantages in terms of both peak signal-to-noise ratio (PSNR) and attack success rate. The transferability of the proposed target camouflage network is evaluated on three dominant detection algorithms (RetinaNet, Faster R-CNN, and RTMDet) with two commonly used remote sensing datasets (i.e., DOTA and DIOR). Full article
(This article belongs to the Special Issue Deep Learning in Optical Satellite Images)

13 pages, 16053 KiB  
Article
Analysis of the Possibility of Using Selected Tools and Algorithms in the Classification and Recognition of Type of Microstructure
by Michał Szatkowski, Dorota Wilk-Kołodziejczyk, Krzysztof Jaśkowiec, Marcin Małysza, Adam Bitka and Mirosław Głowacki
Materials 2023, 16(21), 6837; https://doi.org/10.3390/ma16216837 - 24 Oct 2023
Cited by 1 | Viewed by 955
Abstract
The aim of this research was to develop a solution based on existing methods and tools that would allow the automatic classification of selected images of cast iron microstructures. As part of the work, solutions based on artificial intelligence were tested and modified; their task is to assign a specific class to the analyzed microstructure images. In the analyzed set, the examined samples appear at various zoom levels, photo sizes, and colors. The components of a microstructure vary: a single photo need not contain only one type of precipitate indicating the correct microstructure of a given alloy, and different shapes may appear in different amounts. This article also addresses the issue of data preparation. In order to isolate one type of structure element, the possibilities of using methods such as HOG (histogram of oriented gradients) and thresholding (the image was transformed into black objects on a white background) were checked. To avoid slow preparation of training data, we proposed a solution to facilitate the labeling of data for training. The HOG algorithm combined with SVM and random forest was used for the classification process. In order to compare the effectiveness of the approaches, the Faster R-CNN and Mask R-CNN algorithms were also used. The results obtained from the classifiers were compared to the microstructure assessment performed by experts. Full article

23 pages, 5810 KiB  
Article
A Lightweight Crop Pest Detection Algorithm Based on Improved Yolov5s
by Jing Zhang, Jun Wang and Maocheng Zhao
Agronomy 2023, 13(7), 1779; https://doi.org/10.3390/agronomy13071779 - 30 Jun 2023
Cited by 8 | Viewed by 2301
Abstract
The real-time target detection of crop pests can help detect and control pests in time. In this study, we built a lightweight agricultural pest identification method based on modified Yolov5s and reconstructed the original backbone network in tandem with MobileNetV3 to considerably reduce the number of parameters in the network model. At the same time, the ECA attention mechanism was introduced into the MobileNetV3 shallow network to effectively enhance the network’s performance while introducing only a limited number of parameters. A weighted bidirectional feature pyramid network (BiFPN) was utilized to replace the path aggregation network (PANet) in the neck network to boost the feature extraction of tiny targets. The SIoU loss function was utilized to replace the CIoU loss function to increase the convergence speed and accuracy of the model prediction frame. The updated model was designated ECMB-Yolov5. In this study, we conducted experiments on a photo dataset of eight common pest types, and comparative experiments were conducted using common target identification methods. The final model was implemented on an embedded device, the Jetson Nano, for real-time detection, providing a reference for further application in UAV or unmanned-cart real-time detection systems. The experimental results indicated that ECMB-Yolov5 decreased the number of parameters by 80.3% and mAP by 0.8% compared to the Yolov5s model. The real-time detection speed deployed on embedded devices reached 15.2 FPS, which was 5.7 FPS higher than the original model. mAP was improved by 7.1%, 7.3%, 9.9%, and 8.4% for ECMB-Yolov5 compared to Faster R-CNN, Yolov3, Yolov4, and Yolov4-tiny models, respectively. It was verified through experiments that the improved lightweight method in this study had a high detection accuracy while significantly reducing the number of parameters and accomplishing real-time detection. Full article
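One reason ECA attention adds so few parameters is that it replaces fully connected gating layers with a single 1D convolution whose kernel size is derived from the channel count. The published ECA heuristic for that kernel size can be sketched in a few lines (parameter defaults follow the ECA-Net paper; this is only the size rule, not the full module):

```python
import math

def eca_kernel_size(channels, gamma=2, b=1):
    """Adaptive 1D-conv kernel size from ECA attention:
    k = |log2(C)/gamma + b/gamma|, rounded up to the nearest odd number.
    Sketch of the published heuristic; the paper's full ECA module adds
    global pooling, the 1D conv itself, and a sigmoid gate."""
    t = int(abs(math.log2(channels) / gamma + b / gamma))
    return t if t % 2 == 1 else t + 1
```

For example, a 512-channel feature map gets a kernel of size 5, while a 64-channel map gets size 3, so the attention cost stays tiny across the shallow network.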

16 pages, 1558 KiB  
Article
On-Board Small-Scale Object Detection for Unmanned Aerial Vehicles (UAVs)
by Zubair Saeed, Muhammad Haroon Yousaf, Rehan Ahmed, Sergio A. Velastin and Serestina Viriri
Drones 2023, 7(5), 310; https://doi.org/10.3390/drones7050310 - 6 May 2023
Cited by 12 | Viewed by 3779
Abstract
Object detection is a critical task that becomes difficult when dealing with onboard detection using aerial images and computer vision techniques. The main challenges with aerial images are small target sizes, low resolution, occlusion, attitude, and scale variations, which affect the performance of many object detectors. The accuracy of the detection and the efficiency of the inference are always trade-offs. We modified the architecture of CenterNet and used different CNN-based backbones of ResNet18, ResNet34, ResNet50, ResNet101, ResNet152, Res2Net50, Res2Net101, DLA-34, and hourglass14. A comparison of the modified CenterNet with nine CNN-based backbones is conducted and validated using three challenging datasets, i.e., VisDrone, Stanford Drone dataset (SSD), and AU-AIR. We also implemented well-known off-the-shelf object detectors, i.e., YoloV1 to YoloV7, SSD-MobileNet-V2, and Faster RCNN. The proposed approach and state-of-the-art object detectors are optimized and then implemented on cross-edge platforms, i.e., NVIDIA Jetson Xavier, NVIDIA Jetson Nano, and Neural Compute Stick 2 (NCS2). A detailed comparison of performance between edge platforms is provided. Our modified CenterNet combination with hourglass as a backbone achieved 91.62%, 75.61%, and 34.82% mAP using the validation sets of AU-AIR, SSD, and VisDrone datasets, respectively. An FPS of 40.02 was achieved using the ResNet18 backbone. We also compared our approach with the latest cutting-edge research and found promising results for both discrete GPU and edge platforms. Full article
(This article belongs to the Special Issue Advances in UAV Detection, Classification and Tracking-II)

15 pages, 6581 KiB  
Article
Deep Learning-Based Modified YOLACT Algorithm on Magnetic Resonance Imaging Images for Screening Common and Difficult Samples of Breast Cancer
by Wei Wang and Yisong Wang
Diagnostics 2023, 13(9), 1582; https://doi.org/10.3390/diagnostics13091582 - 28 Apr 2023
Cited by 1 | Viewed by 1746
Abstract
Computer-aided methods have been extensively applied for diagnosing breast lesions with magnetic resonance imaging (MRI), but fully-automatic diagnosis using deep learning is rarely documented. Deep-learning-technology-based artificial intelligence (AI) was used in this work to classify and diagnose breast cancer based on MRI images. Breast cancer MRI images from the Rider Breast MRI public dataset were converted into processable Joint Photographic Experts Group (JPG) format images. The location and shape of the lesion area were labeled using the Labelme software. A difficult-sample mining mechanism was introduced to improve the performance of the YOLACT algorithm model as a modified YOLACT algorithm model. Diagnostic efficacy was compared with the Mask R-CNN algorithm model. The deep learning framework was based on PyTorch version 1.0. Four thousand four hundred labeled images with corresponding lesions were used as normal samples, and 1600 images with blurred lesion areas as difficult samples. The modified YOLACT algorithm model achieved higher accuracy and better classification performance than the YOLACT model. The detection accuracy of the modified YOLACT algorithm model with the difficult-sample-mining mechanism is improved by nearly 3% for common and difficult sample images. Compared with Mask R-CNN, it is still faster in running speed, and the difference in recognition accuracy is not obvious. The modified YOLACT algorithm had a classification accuracy of 98.5% for the common sample test set and 93.6% for difficult samples. We constructed a modified YOLACT algorithm model, which is superior to the YOLACT algorithm model in diagnosis and classification accuracy. Full article
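The difficult-sample mining idea described above, in its most common form (hard-example mining), amounts to ranking training samples by their loss and concentrating the next round of training on the hardest ones. A minimal sketch, assuming a simple sample-id-to-loss mapping (the paper's exact YOLACT integration may differ):

```python
def mine_difficult_samples(sample_losses, k):
    """Hard-example mining sketch: return the ids of the k samples with
    the highest loss, so training can focus on them next.
    `sample_losses` maps sample id -> most recent loss value.
    Illustrative only; not the paper's exact mechanism."""
    ranked = sorted(sample_losses.items(), key=lambda kv: kv[1], reverse=True)
    return [sample_id for sample_id, _ in ranked[:k]]
```

Blurred-lesion images, which yield higher losses, would naturally dominate the mined set under this scheme.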
(This article belongs to the Section Machine Learning and Artificial Intelligence in Diagnostics)

11 pages, 1259 KiB  
Article
Improved Feature Extraction and Similarity Algorithm for Video Object Detection
by Haotian You, Yufang Lu and Haihua Tang
Information 2023, 14(2), 115; https://doi.org/10.3390/info14020115 - 12 Feb 2023
Viewed by 2444
Abstract
Video object detection is an important research direction of computer vision. The task of video object detection is to detect and classify moving objects in a sequence of images. Based on the static image object detector, most of the existing video object detection methods use the unique temporal correlation of video to solve the problem of missed detection and false detection caused by moving object occlusion and blur. Another widely used video object detection model is guided by an optical flow network: feature aggregation of adjacent frames is performed by estimating the optical flow field. However, there are many redundant computations in feature aggregation of adjacent frames. To begin with, this paper improved Faster R-CNN with a Feature Pyramid Network and Dynamic Region-Aware Convolution. Then the S-SELSA module is proposed from the perspective of semantic and feature similarity. Feature similarity is obtained by a modified SSIM algorithm. The module can aggregate the features of frames globally to avoid redundancy. Finally, the experimental results on the ImageNet VID and DET datasets show that the mAP of the method proposed in this paper is 83.55%, which is higher than the existing methods. Full article
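SSIM, which the S-SELSA module adapts for feature similarity, combines luminance, contrast, and structure terms from the means, variances, and covariance of two signals. A single-window, pure-Python sketch of the standard SSIM formula on flat grayscale intensities (the paper's modified SSIM operates on feature maps, and real implementations use local sliding windows):

```python
def ssim(img_a, img_b, c1=0.01 ** 2, c2=0.03 ** 2):
    """Global (single-window) SSIM between two same-size grayscale
    images given as flat lists of intensities in [0, 1]. Returns 1.0
    for identical inputs. Sketch of the standard formula, not the
    paper's modified feature-map version."""
    n = len(img_a)
    mu_a = sum(img_a) / n
    mu_b = sum(img_b) / n
    var_a = sum((x - mu_a) ** 2 for x in img_a) / n
    var_b = sum((y - mu_b) ** 2 for y in img_b) / n
    cov = sum((x - mu_a) * (y - mu_b) for x, y in zip(img_a, img_b)) / n
    return ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / \
           ((mu_a ** 2 + mu_b ** 2 + c1) * (var_a + var_b + c2))
```

Frames (or feature maps) whose SSIM is high are near-duplicates, so skipping their aggregation avoids the redundant computation the abstract mentions.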
(This article belongs to the Special Issue Computer Vision for Security Applications)

22 pages, 6920 KiB  
Article
Individual Tree Detection in Coal Mine Afforestation Area Based on Improved Faster RCNN in UAV RGB Images
by Meng Luo, Yanan Tian, Shengwei Zhang, Lei Huang, Huiqiang Wang, Zhiqiang Liu and Lin Yang
Remote Sens. 2022, 14(21), 5545; https://doi.org/10.3390/rs14215545 - 3 Nov 2022
Cited by 10 | Viewed by 3185
Abstract
Forests are the most important part of terrestrial ecosystems. In the context of China’s industrialization and urbanization, mining activities have caused huge damage to the forest ecology. In the Ulan Mulun River Basin (Ordos, China), afforestation is the standard method for the reclamation of coal mine degraded land. In order to understand, manage and utilize forests, it is necessary to collect tree information for the local mining area. This paper proposed an improved Faster R-CNN model to identify individual trees. There were three major improved parts in this model. First, the model applied supervised multi-policy data augmentation (DA) to address the unmanned aerial vehicle (UAV) sample label size imbalance phenomenon. Second, we proposed the Dense Enhance Feature Pyramid Network (DE-FPN) to improve the detection accuracy of small samples. Third, we modified the state-of-the-art Alpha Intersection over Union (Alpha-IoU) loss function, which effectively improved the bounding box accuracy in the regression stage. Compared with the original model, the improved model was faster and more accurate. The results show that the data augmentation strategy increased AP by 1.26%, DE-FPN increased AP by 2.82%, and the improved Alpha-IoU increased AP by 2.60%. Compared with popular target detection algorithms, our improved Faster R-CNN algorithm had the highest accuracy for tree detection in mining areas, with an AP of 89.89%. It also generalized well and can accurately identify trees in a complex background. Correctly detected trees accounted for 91.61% of our algorithm’s detections. In the surrounding area of coal mines, the higher the stand density, the smaller the remote sensing index values. The remote sensing indices included the Green Leaf Index (GLI), Red Green Blue Vegetation Index (RGBVI), Visible Atmospheric Resistance Index (VARI), and Normalized Green Red Difference Index (NGRDI).
In the drone zone, the western area of Bulianta Coal Mine (Area A) had the highest stand density, 203.95 trees ha−1; its GLI mean value was 0.09, RGBVI mean value 0.17, VARI mean value 0.04, and NGRDI mean value 0.04. The southern area of Bulianta Coal Mine (Area D) had a stand density of 105.09 trees ha−1, and all four remote sensing indices were the highest there: the GLI mean value was 0.15, RGBVI mean value 0.43, VARI mean value 0.12, and NGRDI mean value 0.09. This study provides theoretical guidance for the sustainable development of the Ulan Mulun River Basin and crucial information for the local ecological environment and economic development. Full article
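The Alpha-IoU loss mentioned above generalizes the plain IoU loss by raising IoU to a power alpha, L = 1 − IoU^α, which up-weights high-overlap boxes during regression. A minimal sketch of the basic form (the paper modifies Alpha-IoU further, which is not reproduced here; alpha = 3 is the value suggested in the original Alpha-IoU paper, not necessarily this one's):

```python
def iou(box_a, box_b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda b: (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area(box_a) + area(box_b) - inter)

def alpha_iou_loss(box_a, box_b, alpha=3.0):
    """Basic Alpha-IoU regression loss: L = 1 - IoU**alpha.
    Sketch of the published loss family; the paper's own modification
    of Alpha-IoU is not reproduced here."""
    return 1.0 - iou(box_a, box_b) ** alpha
```

With alpha > 1 the gradient is relatively larger near IoU = 1, pushing already-good boxes to fit tree crowns more tightly.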
(This article belongs to the Special Issue Applications of Individual Tree Detection (ITD))

20 pages, 3458 KiB  
Article
A Robust Framework for Object Detection in a Traffic Surveillance System
by Malik Javed Akhtar, Rabbia Mahum, Faisal Shafique Butt, Rashid Amin, Ahmed M. El-Sherbeeny, Seongkwan Mark Lee and Sarang Shaikh
Electronics 2022, 11(21), 3425; https://doi.org/10.3390/electronics11213425 - 22 Oct 2022
Cited by 33 | Viewed by 3907
Abstract
Object recognition is the technique of specifying the location of various objects in images or videos. There exist numerous algorithms for the recognition of objects such as R-CNN, Fast R-CNN, Faster R-CNN, HOG, R-FCN, SSD, SSP-net, SVM, CNN, YOLO, etc., based on the techniques of machine learning and deep learning. Although these models have been employed for various types of object detection applications, tiny object detection still faces the challenge of low precision. It is essential to develop a lightweight and robust model for object detection that can detect tiny objects with high precision. In this study, we suggest an enhanced YOLOv2 (You Only Look Once version 2) algorithm for object detection, i.e., vehicle detection and recognition in surveillance videos. We modified the base network of the YOLOv2 by reducing the number of parameters and replacing it with DenseNet. We employed the DenseNet-201 technique for feature extraction in our improved model that extracts the most representative features from the images. Moreover, our proposed model is more compact due to the dense architecture of the base network. We utilized DenseNet-201 as a base network due to the direct connection among all layers, which helps to extract valuable information from the very first layer and pass it to the final layer. The dataset gathered from Kaggle and KITTI was used for the training of the proposed model, and we cross-validated the performance using MS COCO and Pascal VOC datasets. To assess the efficacy of the proposed model, we performed extensive experimentation, which demonstrates that our algorithm beats existing vehicle detection approaches, with an average precision of 97.51%. Full article
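The "direct connection among all layers" that motivates the DenseNet-201 backbone can be seen in the channel bookkeeping of a dense block: every layer receives the concatenation of all earlier feature maps, so the input width grows by a fixed growth rate per layer. A small sketch under illustrative numbers (64 input channels and growth rate 32 are DenseNet defaults, not values stated in this abstract):

```python
def dense_block_channels(in_channels, num_layers, growth_rate):
    """Channel widths through a DenseNet dense block: each layer's
    input is the concatenation of all previous feature maps, so the
    width grows by `growth_rate` per layer. Illustrates why early-layer
    features reach the final layer directly."""
    widths = [in_channels]
    for _ in range(num_layers):
        widths.append(widths[-1] + growth_rate)
    return widths
```

Because earlier maps are concatenated rather than summed, the first layer's features are still present, unmodified, in the final layer's input.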
(This article belongs to the Section Computer Science & Engineering)

19 pages, 3184 KiB  
Article
One-Stage Disease Detection Method for Maize Leaf Based on Multi-Scale Feature Fusion
by Ying Li, Shiyu Sun, Changshe Zhang, Guangsong Yang and Qiubo Ye
Appl. Sci. 2022, 12(16), 7960; https://doi.org/10.3390/app12167960 - 9 Aug 2022
Cited by 26 | Viewed by 3204
Abstract
Plant diseases such as drought stress and pest diseases significantly impact crops’ growth and yield levels. By detecting the surface characteristics of plant leaves, we can judge the growth state of plants and whether diseases occur. Traditional manual detection methods are limited by the professional knowledge and practical experience of operators. In recent years, a detection method based on deep learning has been applied to improve detection accuracy and reduce detection time. In this paper, we propose a disease detection method using a convolutional neural network (CNN) with multi-scale feature fusion for maize leaf disease detection. Based on the one-stage plant disease network YoLov5s, the coordinate attention (CA) module is added, along with a key feature weight to enhance the effective information of the feature map, and the spatial pyramid pooling (SPP) module is modified by data augmentation to reduce the loss of feature information. Three experiments are conducted under complex conditions such as overlapping occlusion, sparse distribution of detection targets, and similar textures and backgrounds of disease areas. The experimental results show that the average accuracy of the MFF-CNN is higher than that of currently used methods such as YoLov5s, Faster RCNN, CenterNet, and DETR, and the detection time is also reduced. The proposed method provides a feasible solution not only for the diagnosis of maize leaf diseases, but also for the detection of other plant diseases. Full article
(This article belongs to the Special Issue Computational Intelligence in Image and Video Analysis)

15 pages, 8732 KiB  
Article
Smart and Rapid Design of Nanophotonic Structures by an Adaptive and Regularized Deep Neural Network
by Renjie Li, Xiaozhe Gu, Yuanwen Shen, Ke Li, Zhen Li and Zhaoyu Zhang
Nanomaterials 2022, 12(8), 1372; https://doi.org/10.3390/nano12081372 - 16 Apr 2022
Cited by 7 | Viewed by 4784
Abstract
The design of nanophotonic structures based on deep learning is emerging rapidly in the research community. Design methods using Deep Neural Networks (DNN) are outperforming conventional physics-based simulations performed iteratively by human experts. Here, a self-adaptive and regularized DNN based on Convolutional Neural Networks (CNNs) for the smart and fast characterization of nanophotonic structures in high-dimensional design parameter space is presented. This proposed CNN model, named LRS-RCNN, utilizes dynamic learning rate scheduling and L2 regularization techniques to overcome overfitting and speed up training convergence and is shown to surpass the performance of all previous algorithms, with the exception of two metrics where it achieves a comparable level relative to prior works. We applied the model to two challenging types of photonic structures: 2D photonic crystals (e.g., L3 nanocavity) and 1D photonic crystals (e.g., nanobeam) and results show that LRS-RCNN achieves record-high prediction accuracies, strong generalizability, and substantially faster convergence speed compared to prior works. Although still a proof-of-concept model, the proposed smart LRS-RCNN has been proven to greatly accelerate the design of photonic crystal structures as a state-of-the-art predictor for both Q-factor and V. It can also be modified and generalized to predict any type of optical properties for designing a wide range of different nanophotonic structures. The complete dataset and code will be released to aid the development of related research endeavors. Full article
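The two regularization and convergence techniques LRS-RCNN combines can be illustrated in a few lines: L2 regularization folds a weight-decay term into each gradient step, and a step-decay schedule is one common form of dynamic learning-rate scheduling. All hyperparameter values below are illustrative assumptions; the abstract does not specify the paper's exact schedule or decay coefficient.

```python
def sgd_step(weight, grad, lr, weight_decay=1e-4):
    """One SGD update with L2 regularization folded into the gradient:
    w <- w - lr * (grad + weight_decay * w). The decay term pulls
    weights toward zero, discouraging overfitting."""
    return weight - lr * (grad + weight_decay * weight)

def step_decay_lr(base_lr, epoch, drop=0.5, every=10):
    """Step-wise learning-rate schedule: multiply the rate by `drop`
    every `every` epochs. One simple form of dynamic scheduling; the
    paper's exact schedule is not stated in the abstract."""
    return base_lr * (drop ** (epoch // every))
```

Large early steps speed up convergence, while the decayed late-stage rate and the L2 term together stabilize training.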

19 pages, 3019 KiB  
Article
Two-Stage Classification Model for the Prediction of Heart Disease Using IoMT and Artificial Intelligence
by S. Manimurugan, Saad Almutairi, Majed Mohammed Aborokbah, C. Narmatha, Subramaniam Ganesan, Naveen Chilamkurti, Riyadh A. Alzaheb and Hani Almoamari
Sensors 2022, 22(2), 476; https://doi.org/10.3390/s22020476 - 9 Jan 2022
Cited by 48 | Viewed by 4424
Abstract
Internet of Things (IoT) technology has recently been applied in healthcare systems as an Internet of Medical Things (IoMT) to collect sensor information for the diagnosis and prognosis of heart disease. The main objective of the proposed research is to classify data and predict heart disease using medical data and medical images. The proposed model is a medical data classification and prediction model that operates in two stages. If the result from the first stage is efficient in predicting heart disease, there is no need for stage two. In the first stage, data gathered from medical sensors affixed to the patient’s body were classified; then, in stage two, echocardiogram image classification was performed for heart disease prediction. A hybrid linear discriminant analysis with the modified ant lion optimization (HLDA-MALO) technique was used for sensor data classification, while a hybrid Faster R-CNN with SE-ResNet-101 model was used for echocardiogram image classification. Both classification methods were carried out, and the classification findings were consolidated and validated to predict heart disease. The HLDA-MALO method obtained 96.85% accuracy in detecting normal sensor data, and 98.31% accuracy in detecting abnormal sensor data. The proposed hybrid Faster R-CNN with SE-ResNeXt-101 transfer learning model performed better in classifying echocardiogram images, with 98.06% precision, 98.95% recall, 96.32% specificity, a 99.02% F-score, and maximum accuracy of 99.15%. Full article
(This article belongs to the Special Issue Biomedical Image and Signals for Treatment Monitoring)

21 pages, 8435 KiB  
Article
Smart Pothole Detection Using Deep Learning Based on Dilated Convolution
by Khaled R. Ahmed
Sensors 2021, 21(24), 8406; https://doi.org/10.3390/s21248406 - 16 Dec 2021
Cited by 79 | Viewed by 13797
Abstract
Roads make a huge contribution to the economy and act as a platform for transportation. Potholes in roads are one of the major concerns in transportation infrastructure. A lot of research has proposed using computer vision techniques to automate pothole detection that include a wide range of image processing and object detection algorithms. There is a need to automate the pothole detection process with adequate accuracy and speed and implement the process easily and with low setup cost. In this paper, we have developed efficient deep learning convolution neural networks (CNNs) to detect potholes in real-time with adequate accuracy. To reduce the computational cost and improve the training results, this paper proposes a modified VGG16 (MVGG16) network by removing some convolution layers and using different dilation rates. Moreover, this paper uses the MVGG16 as a backbone network for the Faster R-CNN. In addition, this work compares the performance of YOLOv5 (Large (Yl), Medium (Ym), and Small (Ys)) models with ResNet101 backbone and Faster R-CNN with ResNet50 (FPN), VGG16, MobileNetV2, InceptionV3, and MVGG16 backbones. The experimental results show that the Ys model is more applicable for real-time pothole detection because of its speed. In addition, using the MVGG16 network as the backbone of the Faster R-CNN provides better mean precision and shorter inference time than using VGG16, InceptionV3, or MobileNetV2 backbones. The proposed MVGG16 succeeds in balancing the pothole detection accuracy and speed. Full article
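The reason removing convolution layers while raising dilation rates can preserve accuracy is that dilation enlarges a kernel's receptive field without adding parameters: the effective kernel size is k + (k − 1)(d − 1). A one-line sketch of that relationship (the specific layer/dilation choices of MVGG16 are not given in the abstract):

```python
def effective_kernel(kernel_size, dilation):
    """Effective receptive field of a dilated convolution:
    k_eff = k + (k - 1) * (d - 1). A 3x3 kernel with dilation 2 covers
    a 5x5 area with the same 9 weights, which is why MVGG16 can drop
    layers yet keep a large receptive field."""
    return kernel_size + (kernel_size - 1) * (dilation - 1)
```

So two stacked 3×3 convolutions with dilation 2 can see roughly as far as four ordinary 3×3 layers, at half the depth and cost.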
(This article belongs to the Section Sensing and Imaging)

17 pages, 26552 KiB  
Article
A Fluorescent Biosensor for Sensitive Detection of Salmonella Typhimurium Using Low-Gradient Magnetic Field and Deep Learning via Faster Region-Based Convolutional Neural Network
by Qiwei Hu, Siyuan Wang, Hong Duan and Yuanjie Liu
Biosensors 2021, 11(11), 447; https://doi.org/10.3390/bios11110447 - 11 Nov 2021
Cited by 17 | Viewed by 2737
Abstract
In this study, a fluorescent biosensor was developed for the sensitive detection of Salmonella typhimurium using a low-gradient magnetic field and deep learning via faster region-based convolutional neural networks (R-CNN) to recognize the fluorescent spots on the bacterial cells. First, magnetic nanobeads (MNBs) coated with capture antibodies were used to separate target bacteria from the sample background, resulting in the formation of magnetic bacteria. Then, fluorescein isothiocyanate fluorescent microspheres (FITC-FMs) modified with detection antibodies were used to label the magnetic bacteria, resulting in the formation of fluorescent bacteria. After the fluorescent bacteria were attracted to the bottom of an ELISA well using a low-gradient magnetic field, converting their three-dimensional (spatial) distribution into a two-dimensional (planar) distribution, the images of the fluorescent bacteria were finally collected using a high-resolution fluorescence microscope and processed using the faster R-CNN algorithm to calculate the number of the fluorescent spots for the determination of target bacteria. Under the optimal conditions, this biosensor was able to quantitatively detect Salmonella typhimurium from 6.9 × 101 to 1.1 × 103 CFU/mL within 2.5 h with the lower detection limit of 55 CFU/mL. The fluorescent biosensor has the potential to simultaneously detect multiple types of foodborne bacteria using MNBs coated with their capture antibodies and different fluorescent microspheres modified with their detection antibodies. Full article
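Once the detector has run, turning its output into a bacterial count reduces to keeping detections whose confidence clears a threshold. A minimal sketch of that final counting step; the 0.5 threshold and the (confidence, box) output shape are assumptions, as the abstract does not specify the detector's post-processing:

```python
def count_spots(detections, conf_threshold=0.5):
    """Count fluorescent spots from detector output by keeping only
    detections whose confidence clears the threshold. `detections` is
    a list of (confidence, box) pairs; the threshold value is an
    assumption, not stated in the abstract."""
    return sum(1 for conf, _box in detections if conf >= conf_threshold)
```

The resulting spot count maps to a CFU/mL estimate via the calibration curve established under the optimal conditions above.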
(This article belongs to the Special Issue Biosensors for Agriculture, Environment and Food)
