Search Results (547)

Search Parameters:
Keywords = contour segmentation

21 pages, 6078 KiB  
Article
Multi-Feature-Filtering-Based Road Curb Extraction from Unordered Point Clouds
by Hong Lang, Yuan Peng, Zheng Zou, Shengxue Zhu, Yichuan Peng and Hao Du
Sensors 2024, 24(20), 6544; https://doi.org/10.3390/s24206544 - 10 Oct 2024
Abstract
Road curb extraction is a critical component of road environment perception, essential for calculating road geometry parameters and ensuring the safe navigation of autonomous vehicles. Existing research primarily focuses on extracting curbs from ordered point clouds; such methods are constrained by the organizational structure of the point cloud, which makes them difficult to apply to unordered point cloud data and susceptible to interference from obstacles. To overcome these limitations, a multi-feature-filtering-based method for curb extraction from unordered point clouds is proposed. This method integrates several techniques, including the grid height difference, normal vectors, clustering, an alpha-shape algorithm based on point cloud density, and the MSAC (M-Estimate Sample Consensus) algorithm for multi-frame fitting. The multi-frame fitting approach addresses the limitations of traditional single-frame methods by fitting the curb contour every five frames, ensuring more accurate contour extraction while preserving local curb features. Based on our self-developed dataset and the Toronto dataset, these methods are integrated to create a robust filter capable of accurately identifying curbs in various complex scenarios. Optimal threshold values were determined through sensitivity analysis and applied to enhance curb extraction performance under diverse conditions. Experimental results demonstrate that the proposed method accurately and comprehensively extracts curb points in different road environments, proving its effectiveness and robustness. Specifically, the average curb segmentation precision, recall, and F1 score values across scenarios A, B (intersections), C (straight road), and scenarios D and E (curved roads and ghosting) are 0.9365, 0.782, and 0.8523, respectively. Full article
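Of the filters listed in this abstract, the grid height-difference step is the simplest to illustrate. The sketch below is a minimal stand-in, not the authors' implementation: the function name, cell size, and height thresholds are all invented for illustration.

```python
from collections import defaultdict

def curb_candidate_cells(points, cell=0.2, min_dh=0.05, max_dh=0.30):
    """Bin unordered (x, y, z) points into an XY grid and keep cells whose
    internal height range matches a typical curb step (illustrative thresholds)."""
    grid = defaultdict(list)
    for x, y, z in points:
        grid[(int(x // cell), int(y // cell))].append(z)
    candidates = []
    for key, zs in grid.items():
        dh = max(zs) - min(zs)
        if min_dh <= dh <= max_dh:  # height jump consistent with a curb face
            candidates.append(key)
    return candidates

# Flat road points plus one cell containing a 0.15 m step
pts = [(0.05, 0.05, 0.00), (0.10, 0.05, 0.01),   # flat cell (0, 0)
       (0.45, 0.05, 0.00), (0.50, 0.05, 0.15)]   # step cell (2, 0)
print(curb_candidate_cells(pts))  # -> [(2, 0)]
```

In the paper, such candidates would then be refined by the normal-vector, clustering, alpha-shape, and MSAC stages, which are not reproduced here.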

34 pages, 2105 KiB  
Article
Analytical Solution for the Problem of Point Location in Arbitrary Planar Domains
by Vitor Santos
Algorithms 2024, 17(10), 444; https://doi.org/10.3390/a17100444 - 5 Oct 2024
Abstract
This paper presents a general analytical solution for the problem of locating points in planar regions with an arbitrary geometry at the boundary. The proposed methodology overcomes the traditional solutions used for polygonal regions. The method originated from the explicit evaluation of the contour integral using the Residue and Cauchy theorems, which then evolved toward a technique very similar to the winding number and, finally, simplified into a variant of the ray-crossing approach that is slightly more informed and more universal than the classic approach used for decades. The very close relation between the two techniques also emerges during the derivation of the solution. The resulting algorithm becomes simpler and potentially faster than the current state of the art for point location in arbitrary polygons because it uses fewer operations. For polygonal regions, it is also applicable without further processing to degenerate special cases, and it can be used in fully integer arithmetic; it can also be vectorized for parallel computation. The major novelty, however, is the extension of the technique to virtually any shape or segment delimiting a planar domain, be it linear, a circular arc, or a higher-order curve. Full article
(This article belongs to the Special Issue Numerical Optimization and Algorithms: 2nd Edition)
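For contrast with the classic baseline this paper generalizes, here is the decades-old even-odd ray-crossing test for a plain polygon. This is the standard textbook algorithm, not the paper's analytical method:

```python
def point_in_polygon(px, py, poly):
    """Classic even-odd ray-crossing test: cast a ray in the +x direction
    and count edge crossings; an odd count means the point is inside."""
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        # Does edge (x1,y1)-(x2,y2) straddle the horizontal line y = py?
        if (y1 > py) != (y2 > py):
            # x-coordinate where the edge crosses that line
            x_cross = x1 + (py - y1) * (x2 - x1) / (y2 - y1)
            if px < x_cross:
                inside = not inside
    return inside

square = [(0, 0), (4, 0), (4, 4), (0, 4)]
print(point_in_polygon(2, 2, square))  # -> True
print(point_in_polygon(5, 2, square))  # -> False
```

The paper's contribution is extending this kind of test beyond straight edges to circular arcs and higher-order boundary curves, which this sketch does not attempt.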
11 pages, 2478 KiB  
Article
Automated Quantification of Simple and Complex Aortic Flow Using 2D Phase Contrast MRI
by Rui Li, Hosamadin S. Assadi, Xiaodan Zhao, Gareth Matthews, Zia Mehmood, Ciaran Grafton-Clarke, Vaishali Limbachia, Rimma Hall, Bahman Kasmai, Marina Hughes, Kurian Thampi, David Hewson, Marianna Stamatelatou, Peter P. Swoboda, Andrew J. Swift, Samer Alabed, Sunil Nair, Hilmar Spohr, John Curtin, Yashoda Gurung-Koney, Rob J. van der Geest, Vassilios S. Vassiliou, Liang Zhong and Pankaj Garg
Medicina 2024, 60(10), 1618; https://doi.org/10.3390/medicina60101618 - 3 Oct 2024
Abstract
(1) Background and Objectives: Flow assessment using cardiovascular magnetic resonance (CMR) provides important implications in determining physiologic parameters and clinically important markers. However, post-processing of CMR images remains labor- and time-intensive. This study aims to assess the validity and repeatability of fully automated segmentation of the phase contrast velocity-encoded aortic root plane. (2) Materials and Methods: Aortic root images from 125 patients were segmented by artificial intelligence (AI), developed using convolutional neural networks and trained with a multicentre cohort of 160 subjects. Derived simple flow indices (forward and backward flow, systolic flow and velocity) and complex indices (aortic maximum area, systolic flow reversal ratio, flow displacement, and its angle change) were compared with those derived from manual contours. (3) Results: AI-derived simple flow indices yielded excellent repeatability compared to human segmentation (p < 0.001), with an insignificant level of bias. Complex flow indices showed good to excellent repeatability (p < 0.001), with insignificant levels of bias except for flow displacement angle change and systolic retrograde flow, which yielded significant bias (p < 0.001 and p < 0.05, respectively). (4) Conclusions: Automated flow quantification using aortic root images is comparable to human segmentation and has good to excellent repeatability. However, flow helicity and systolic retrograde flow are associated with a significant level of bias. Overall, all parameters show clinical repeatability. Full article
(This article belongs to the Section Cardiology)

21 pages, 13186 KiB  
Article
Ship Contour Extraction from Polarimetric SAR Images Based on Polarization Modulation
by Guoqing Wu, Shengbin Luo Wang, Yibin Liu, Ping Wang and Yongzhen Li
Remote Sens. 2024, 16(19), 3669; https://doi.org/10.3390/rs16193669 - 1 Oct 2024
Abstract
Ship contour extraction is vital for extracting the geometric features of ships, providing comprehensive information essential for ship recognition. The main factors affecting the contour extraction performance are speckle noise and amplitude inhomogeneity, which can lead to over-segmentation and missed detection of ship edges. Polarimetric synthetic aperture radar (PolSAR) images contain rich target scattering information. Under different transmitting and receiving polarization, the amplitude and phase of pixels can be different, which provides the potential to meet the uniform requirement. This paper proposes a novel ship contour extraction framework from PolSAR images based on polarization modulation. Firstly, the image is partitioned into the foreground and background using a super-pixel unsupervised clustering approach. Subsequently, an optimization criterion for target amplitude modulation to achieve uniformity is designed. Finally, the ship’s contour is extracted from the optimized image using an edge-detection operator and an adaptive edge extraction algorithm. Based on the contour, the geometric features of ships are extracted. Moreover, a PolSAR ship contour extraction dataset is established using Gaofen-3 PolSAR images, combined with expert knowledge and automatic identification system (AIS) data. With this dataset, we compare the accuracy of contour extraction and geometric features with state-of-the-art methods. The average errors of extracted length and width are reduced to 20.09 m and 8.96 m. The results demonstrate that the proposed method performs well in both accuracy and precision. Full article
(This article belongs to the Special Issue SAR Images Processing and Analysis (2nd Edition))

18 pages, 3141 KiB  
Article
Genetic Algorithm Empowering Unsupervised Learning for Optimizing Building Segmentation from Light Detection and Ranging Point Clouds
by Muhammad Sulaiman, Mina Farmanbar, Ahmed Nabil Belbachir and Chunming Rong
Remote Sens. 2024, 16(19), 3603; https://doi.org/10.3390/rs16193603 - 27 Sep 2024
Abstract
This study investigates the application of LiDAR point cloud datasets for building segmentation through a combined approach that integrates unsupervised segmentation with evolutionary optimization. The research evaluates the extent of improvement achievable through genetic algorithm (GA) optimization for LiDAR point cloud segmentation. The unsupervised methodology encompasses preprocessing, adaptive thresholding, morphological operations, contour filtering, and terrain ruggedness analysis. A genetic algorithm was employed to fine-tune the parameters for these techniques. Critical tunable parameters, such as the interpolation method for DSM and DTM generation, scale factor for contrast enhancement, adaptive constant and block size for adaptive thresholding, kernel size for morphological operations, squareness threshold to maintain the shape of predicted objects, and terrain ruggedness index (TRI) were systematically optimized. The study presents the top ten chromosomes with optimal parameter values, demonstrating substantial improvements of 29% in the average intersection over union (IoU) score (0.775) on test datasets. These findings offer valuable insights into LiDAR-based building segmentation, highlighting the potential for increased precision and effectiveness in future applications. Full article
(This article belongs to the Section AI Remote Sensing)
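As a rough illustration of the GA-based parameter tuning described in this abstract, here is a minimal real-valued genetic algorithm. Everything here is invented for the sketch: the toy fitness function merely stands in for a segmentation IoU score, and the population, generation, and mutation settings are arbitrary defaults rather than the study's configuration.

```python
import random

def genetic_search(fitness, bounds, pop=20, gens=40, mut=0.2, seed=1):
    """Minimal real-valued GA: keep the best half as elite, fill the rest
    with uniform crossover plus Gaussian mutation clipped to `bounds`."""
    rng = random.Random(seed)
    P = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop)]
    for _ in range(gens):
        scored = sorted(P, key=fitness, reverse=True)
        elite = scored[: pop // 2]                      # survivors
        children = []
        while len(children) < pop - len(elite):
            a, b = rng.sample(elite, 2)
            child = [x if rng.random() < 0.5 else y for x, y in zip(a, b)]
            for j, (lo, hi) in enumerate(bounds):       # mutate within bounds
                if rng.random() < mut:
                    child[j] = min(hi, max(lo, child[j] + rng.gauss(0, 0.1 * (hi - lo))))
            children.append(child)
        P = elite + children
    return max(P, key=fitness)

# Toy stand-in for a segmentation score: peaks at threshold=0.6, block_size=11
best = genetic_search(lambda p: -((p[0] - 0.6) ** 2 + ((p[1] - 11) / 20) ** 2),
                      bounds=[(0.0, 1.0), (3.0, 31.0)])
print(round(best[0], 2), round(best[1], 1))  # converges near 0.6 and 11
```

In the study, the fitness would instead run the full unsupervised segmentation pipeline and score its IoU against reference buildings, which is far too heavy to reproduce here.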

23 pages, 3241 KiB  
Article
Age-Friendly Urban Design for Older Pedestrian Road Safety: A Street Segment Level Analysis in Madrid
by Daniel Gálvez-Pérez, Begoña Guirao and Armando Ortuño
Sustainability 2024, 16(19), 8298; https://doi.org/10.3390/su16198298 - 24 Sep 2024
Abstract
Walking benefits older pedestrians but exposes them to traffic crashes. With an aging population, designing age-friendly cities is crucial, yet research on older pedestrian safety at a micro-level is limited. This study aims to reduce older pedestrian–vehicle collisions and create more livable environments through infrastructure policies derived from statistical data analysis. Special attention is focused on collecting a holistic set of infrastructure variables to reflect most of the street built environment elements, which helps policymakers implement short-term safety measures. Using Bayesian Poisson regression, this study analyzes factors contributing to the occurrence of crashes involving older and non-older pedestrians on road segments in Madrid, Spain. The results indicate that different factors affect the occurrence of crashes for all pedestrians versus older pedestrians specifically. Traffic crashes involving all pedestrians are affected by leisure points of interest, bus stops, and crosswalk density. Older pedestrian traffic crashes are influenced by population density, the presence of trees and trash containers, and contour complexity. Proposed measures include relocating trees and trash containers, modifying bus stops, and adding crosswalks and traffic lights. This paper also shows that these countermeasures, aimed at creating age-friendly streets for older pedestrians, are not expected to worsen the road safety of other pedestrians. Full article

23 pages, 81877 KiB  
Article
A Multi-Layer Multi-Pass Weld Bead Cross-Section Morphology Extraction Method Based on Row–Column Grayscale Segmentation
by Ting Lei, Shixiang Gong and Chaoqun Wu
Materials 2024, 17(19), 4683; https://doi.org/10.3390/ma17194683 - 24 Sep 2024
Abstract
In the field of welding detection, weld bead cross-section morphology serves as a crucial indicator for analyzing welding quality. However, the extraction of weld bead cross-section morphology often relies on manual extraction based on human expertise, which can be limited in consistency and operational efficiency. To address this issue, this paper proposes a multi-layer multi-pass weld bead cross-section morphology extraction method based on row–column grayscale segmentation. The weld bead cross-section morphology image is pre-processed and then segmented into rows and columns based on the average gray value of the image. To extract the features of multi-layer multi-pass weld images, a binarization threshold is selected for each segmented image (ESI). Then, the weld contour of each ESI is extracted before image fusion and morphological processing. Finally, the weld feature parameters (circumference, area, etc.) are extracted from the obtained weld feature image. The results indicate that the relative errors in circumference and area are within 10%, while the measured maximum weld seam width and maximum weld seam height are close to the true values. The quality assessment falls within a reasonable range: the average SSIM is above 0.9 and the average PSNR is above 60. The results demonstrate that this method is feasible for extracting the general contour features of multi-layer multi-pass weld bead cross-section morphology images, providing a basis for further detailed analysis and improvement in welding quality assessment. Full article
(This article belongs to the Special Issue Welding and Joining Processes of Metallic Materials)
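The PSNR quality metric cited in this abstract has a compact definition (10·log10(peak²/MSE)); a small self-contained sketch, with tiny illustrative "images" rather than real weld data:

```python
import math

def psnr(img_a, img_b, peak=255):
    """Peak signal-to-noise ratio between two equal-sized grayscale images
    (given as lists of rows); higher means closer agreement."""
    n = 0
    sse = 0.0
    for row_a, row_b in zip(img_a, img_b):
        for a, b in zip(row_a, row_b):
            sse += (a - b) ** 2
            n += 1
    if sse == 0:
        return float("inf")          # identical images
    mse = sse / n
    return 10 * math.log10(peak ** 2 / mse)

a = [[100, 100], [100, 100]]
b = [[100, 101], [100, 100]]         # one pixel differs by 1
print(round(psnr(a, b), 1))          # -> 54.2
```

An average PSNR above 60 dB, as reported here, corresponds to a very small mean squared error between the extracted and reference morphology images.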

21 pages, 7299 KiB  
Article
RDAG U-Net: An Advanced AI Model for Efficient and Accurate CT Scan Analysis of SARS-CoV-2 Pneumonia Lesions
by Chih-Hui Lee, Cheng-Tang Pan, Ming-Chan Lee, Chih-Hsuan Wang, Chun-Yung Chang and Yow-Ling Shiue
Diagnostics 2024, 14(18), 2099; https://doi.org/10.3390/diagnostics14182099 - 23 Sep 2024
Abstract
Background/Objective: This study aims to utilize advanced artificial intelligence (AI) image recognition technologies to establish a robust system for identifying features in lung computed tomography (CT) scans, thereby detecting respiratory infections such as SARS-CoV-2 pneumonia. Specifically, the research focuses on developing a new model called Residual-Dense-Attention Gates U-Net (RDAG U-Net) to improve accuracy and efficiency in identification. Methods: This study employed Attention U-Net, Attention Res U-Net, and the newly developed RDAG U-Net model. RDAG U-Net extends the U-Net architecture by incorporating ResBlock and DenseBlock modules in the encoder to retain training parameters and reduce computation time. The training dataset includes 3,520 CT scans from an open database, augmented to 10,560 samples through data enhancement techniques. The research also focused on optimizing convolutional architectures, image preprocessing, interpolation methods, data management, and extensive fine-tuning of training parameters and neural network modules. Result: The RDAG U-Net model achieved an outstanding accuracy of 93.29% in identifying pulmonary lesions, with a 45% reduction in computation time compared to other models. The study demonstrated that RDAG U-Net performed stably during training and exhibited good generalization capability by evaluating loss values, model-predicted lesion annotations, and validation-epoch curves. Furthermore, using ITK-Snap to convert 2D predictions into 3D lung and lesion segmentation models, the results delineated lesion contours, enhancing interpretability. Conclusion: The RDAG U-Net model showed significant improvements in accuracy and efficiency in the analysis of CT images for SARS-CoV-2 pneumonia, achieving a 93.29% recognition accuracy and reducing computation time by 45% compared to other models. These results indicate the potential of the RDAG U-Net model in clinical applications, as it can accelerate the detection of pulmonary lesions and effectively enhance diagnostic accuracy. Additionally, the 2D and 3D visualization results allow physicians to understand lesions' morphology and distribution better, strengthening decision support capabilities and providing valuable medical diagnosis and treatment planning tools. Full article

25 pages, 9183 KiB  
Article
A High-Accuracy Contour Segmentation and Reconstruction of a Dense Cluster of Mushrooms Based on Improved SOLOv2
by Shuzhen Yang, Jingmin Zhang and Jin Yuan
Agriculture 2024, 14(9), 1646; https://doi.org/10.3390/agriculture14091646 - 20 Sep 2024
Abstract
This study addresses challenges related to imprecise edge segmentation and low center point accuracy, particularly when mushrooms are heavily occluded or deformed within dense clusters. A high-precision mushroom contour segmentation algorithm is proposed that builds upon the improved SOLOv2, along with a contour reconstruction method using instance segmentation masks. The enhanced segmentation algorithm, PR-SOLOv2, incorporates the PointRend module during the up-sampling stage, introducing fine features and enhancing segmentation details. This addresses the difficulty of accurately segmenting densely overlapping mushrooms. Furthermore, a contour reconstruction method based on the PR-SOLOv2 instance segmentation mask is presented. This approach accurately segments mushrooms, extracts individual mushroom masks and their contour data, and classifies reconstruction contours based on average curvature and length. Regular contours are fitted using least-squares ellipses, while irregular ones are reconstructed by extracting the longest sub-contour from the original irregular contour based on its corners. Experimental results demonstrate strong generalization and superior performance in contour segmentation and reconstruction, particularly for densely clustered mushrooms in complex environments. The proposed approach achieves a 93.04% segmentation accuracy and a 98.13% successful segmentation rate, surpassing Mask RCNN and YOLACT by approximately 10%. The center point positioning accuracy of mushrooms is 0.3%. This method better meets the high positioning requirements for efficient and non-destructive picking of densely clustered mushrooms. Full article

10 pages, 486 KiB  
Article
A Circle Center Location Algorithm Based on Sample Density and Adaptive Thresholding
by Yujin Min, Hao Chen, Zhuohang Chen and Faquan Zhang
Appl. Sci. 2024, 14(18), 8453; https://doi.org/10.3390/app14188453 - 19 Sep 2024
Abstract
Acquiring the exact center of a circular sample is an essential task in object recognition. Existing algorithms suffer from high time consumption and low precision. To tackle these issues, we propose a novel circle center location algorithm based on sample density and adaptive thresholding. After obtaining circular contours through image pre-processing, these contours were segmented using a grid method to obtain the required coordinates. Based on the principle of three points forming a circle, a data set containing a large number of samples with circle center coordinates was constructed. These circle center samples fall, with high probability, within a close neighborhood of the actual circle center coordinates. Subsequently, an adaptive-bandwidth fast Gaussian kernel was introduced to address the issue of sample point weighting. The mean shift clustering algorithm was employed to compute the optimal solution for the density of the candidate circle center sample data. The final optimal center location was obtained by an iterative algorithm. Experimental results demonstrate that in the presence of interference, the average positioning error of this circle center localization algorithm is 0.051 pixels. Its localization accuracy is 64.1% higher than the Hough transform and 86.4% higher than the circle fitting algorithm. Full article
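The "three points forming a circle" step in this abstract is the classical circumcenter construction, obtained by intersecting two perpendicular bisectors. A minimal sketch of that one step (the sampling, weighting, and mean shift stages are not reproduced here):

```python
def circumcenter(p1, p2, p3):
    """Center of the unique circle through three non-collinear points,
    from the standard perpendicular-bisector equations."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    d = 2 * (x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2))
    if d == 0:
        raise ValueError("points are collinear")
    ux = ((x1**2 + y1**2) * (y2 - y3) + (x2**2 + y2**2) * (y3 - y1)
          + (x3**2 + y3**2) * (y1 - y2)) / d
    uy = ((x1**2 + y1**2) * (x3 - x2) + (x2**2 + y2**2) * (x1 - x3)
          + (x3**2 + y3**2) * (x2 - x1)) / d
    return ux, uy

# Three points on the unit circle centred at (2, 3)
print(circumcenter((3, 3), (2, 4), (1, 3)))  # -> (2.0, 3.0)
```

Repeating this over many contour-point triples yields the cloud of candidate centers whose density peak the paper then locates with mean shift clustering.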

15 pages, 3249 KiB  
Article
The InterVision Framework: An Enhanced Fine-Tuning Deep Learning Strategy for Auto-Segmentation in Head and Neck
by Byongsu Choi, Chris J. Beltran, Sang Kyun Yoo, Na Hye Kwon, Jin Sung Kim and Justin Chunjoo Park
J. Pers. Med. 2024, 14(9), 979; https://doi.org/10.3390/jpm14090979 - 15 Sep 2024
Abstract
Adaptive radiotherapy (ART) workflows are increasingly adopted to achieve dose escalation and tissue sparing under dynamic anatomical conditions. However, recontouring and time constraints hinder the implementation of real-time ART workflows. Various auto-segmentation methods, including deformable image registration, atlas-based segmentation, and deep learning-based segmentation (DLS), have been developed to address these challenges. Despite the potential of DLS methods, clinical implementation remains difficult due to the need for large, high-quality datasets to ensure model generalizability. This study introduces an InterVision framework for segmentation. The InterVision framework can interpolate or create intermediate visuals between existing images to generate specific patient characteristics. The InterVision model is trained in two steps: (1) generating a general model using the dataset, and (2) tuning the general model using the dataset generated from the InterVision framework. The InterVision framework generates intermediate images between existing patient image slices using deformable vectors, effectively capturing unique patient characteristics. By creating a more comprehensive dataset that reflects these individual characteristics, the InterVision model demonstrates the ability to produce more accurate contours compared to general models. Models are evaluated using the volumetric dice similarity coefficient (VDSC) and the Hausdorff distance 95% (HD95%) for 18 structures in 20 test patients. As a result, the Dice score was 0.81 ± 0.05 for the general model, 0.82 ± 0.04 for the general fine-tuning model, and 0.85 ± 0.03 for the InterVision model. The Hausdorff distance was 3.06 ± 1.13 for the general model, 2.81 ± 0.77 for the general fine-tuning model, and 2.52 ± 0.50 for the InterVision model. The InterVision model showed the best performance compared to the general model. The InterVision framework presents a versatile approach adaptable to various tasks where prior information is accessible, such as in ART settings. This capability is particularly valuable for accurately predicting complex organs and targets that pose challenges for traditional deep learning algorithms. Full article
(This article belongs to the Section Methodology, Drug and Device Discovery)
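The volumetric Dice similarity coefficient used for evaluation in this abstract has a compact definition, 2|A∩B| / (|A| + |B|); a minimal sketch over flat binary masks (illustrative inputs, not the study's data):

```python
def dice(mask_a, mask_b):
    """Volumetric Dice similarity between two binary masks given as
    flat 0/1 sequences: 2 * |intersection| / (|A| + |B|)."""
    inter = sum(a & b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    return 2 * inter / total if total else 1.0  # two empty masks agree fully

pred  = [1, 1, 1, 0, 0, 0]
truth = [1, 1, 0, 1, 0, 0]
print(round(dice(pred, truth), 3))  # -> 0.667
```

A Dice of 1.0 means perfect overlap; the 0.85 ± 0.03 reported for the InterVision model indicates substantially better contour agreement than the 0.81 ± 0.05 general model.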

19 pages, 18432 KiB  
Article
Low-Cost Lettuce Height Measurement Based on Depth Vision and Lightweight Instance Segmentation Model
by Yiqiu Zhao, Xiaodong Zhang, Jingjing Sun, Tingting Yu, Zongyao Cai, Zhi Zhang and Hanping Mao
Agriculture 2024, 14(9), 1596; https://doi.org/10.3390/agriculture14091596 - 13 Sep 2024
Abstract
Plant height is a crucial indicator of crop growth. Rapid measurement of crop height facilitates the implementation and management of planting strategies, ensuring optimal crop production quality and yield. This paper presents a low-cost method for the rapid measurement of multiple lettuce heights, developed using an improved YOLOv8n-seg model and the stacking characteristics of planes in depth images. First, we designed a lightweight instance segmentation model based on YOLOv8n-seg by enhancing the model architecture and reconstructing the channel dimension distribution. This model was trained on a small-sample dataset augmented through random transformations. Secondly, we proposed a method to detect and segment the horizontal plane. This method leverages the stacking characteristics of the plane, as identified in the depth image histogram from an overhead perspective, allowing for the identification of planes parallel to the camera’s imaging plane. Subsequently, we evaluated the distance between each plane and the centers of the lettuce contours to select the cultivation substrate plane as the reference for lettuce bottom height. Finally, the height of multiple lettuce plants was determined by calculating the height difference between the top and bottom of each plant. The experimental results demonstrated that the improved model achieved a 25.56% increase in processing speed, along with a 2.4% enhancement in mean average precision compared to the original YOLOv8n-seg model. The average accuracy of the plant height measurement algorithm reached 94.339% in hydroponics and 91.22% in pot cultivation scenarios, with absolute errors of 7.39 mm and 9.23 mm, similar to the sensor’s depth direction error. With images downsampled by a factor of 1/8, the highest processing speed recorded was 6.99 frames per second (fps), enabling the system to process an average of 174 lettuce targets per second. The experimental results confirmed that the proposed method exhibits promising accuracy, efficiency, and robustness. Full article
(This article belongs to the Special Issue Smart Agriculture Sensors and Monitoring Systems for Field Detection)

21 pages, 20841 KiB  
Article
Snow Detection in Gaofen-1 Multi-Spectral Images Based on Swin-Transformer and U-Shaped Dual-Branch Encoder Structure Network with Geographic Information
by Yue Wu, Chunxiang Shi, Runping Shen, Xiang Gu, Ruian Tie, Lingling Ge and Shuai Sun
Remote Sens. 2024, 16(17), 3327; https://doi.org/10.3390/rs16173327 - 8 Sep 2024
Abstract
Snow detection is imperative in remote sensing for various applications, including climate change monitoring, water resources management, and disaster warning. Recognizing the limitations of current deep learning algorithms in cloud and snow boundary segmentation, as well as issues such as the loss of fine snow detail and the omission of snow in mountainous areas, this paper presents a novel snow detection network based on a Swin-Transformer and U-shaped dual-branch encoder structure with geographic information (SD-GeoSTUNet), aiming to address the above issues. Initially, the SD-GeoSTUNet incorporates a CNN branch and a Swin-Transformer branch to extract features in parallel, and the Feature Aggregation Module (FAM) is designed to facilitate detail feature aggregation via the two branches. Simultaneously, an Edge-enhanced Convolution (EeConv) is introduced to promote snow boundary contour extraction in the CNN branch. In particular, auxiliary geographic information, including altitude, longitude, latitude, slope, and aspect, is encoded in the Swin-Transformer branch to enhance snow detection in mountainous regions. Experiments conducted on Levir_CS, a large-scale cloud and snow dataset originating from Gaofen-1, demonstrate that SD-GeoSTUNet achieves optimal performance with values of 78.08%, 85.07%, and 92.89% for IoU_s, F1_s, and MPA, respectively, leading to superior cloud and snow boundary segmentation and thin cloud and snow detection. Further, ablation experiments reveal that integrating slope and aspect information effectively alleviates the omission of snow detection in mountainous areas and yields the best visual results under complex terrain. The proposed model can be used for remote sensing data with geographic information to achieve more accurate snow extraction, which is conducive to promoting research in hydrology and agriculture with different geospatial characteristics. Full article
(This article belongs to the Section Environmental Remote Sensing)
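The abstract does not specify the exact form of EeConv; one common way to bias a convolution toward boundaries is to add a fixed Laplacian edge response to a learned filter's output. The sketch below illustrates that idea in plain NumPy (the function names and the blend weight `alpha` are illustrative assumptions, not taken from the paper):

```python
import numpy as np

# Fixed 3x3 Laplacian kernel: responds only where intensity changes.
LAPLACIAN = np.array([[0,  1, 0],
                      [1, -4, 1],
                      [0,  1, 0]], dtype=float)

def conv2d(img, kernel):
    """'Same' 2-D convolution (cross-correlation form) with zero padding."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)))
    out = np.zeros(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

def edge_enhanced_response(img, learned_kernel, alpha=0.5):
    """Learned filter response plus a fixed Laplacian edge term,
    mimicking the idea of an edge-enhanced convolution."""
    return conv2d(img, learned_kernel) + alpha * conv2d(img, LAPLACIAN)
```

On a binary snow mask, the Laplacian term is zero inside homogeneous regions and non-zero exactly along the boundary, which is why adding it sharpens contour extraction.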
12 pages, 6254 KiB  
Article
A Method for Detecting the Yarn Roll’s Margin Based on VGG-UNet
by Junru Wang, Xiong Zhao, Laihu Peng and Honggeng Wang
Appl. Sci. 2024, 14(17), 7928; https://doi.org/10.3390/app14177928 - 5 Sep 2024
Abstract
The identification of the yarn roll’s margin represents a critical phase in the automated production of textiles. At present, conventional visual detection techniques are inadequate for accurately measuring the yarn roll’s margin, filtering out background noise, and generalizing across rolls. To address this, this study constructed a semantic segmentation dataset for the yarn roll and proposed a new deep-learning-based method for detecting its margin. By replacing the encoder component of the U-Net with the first 13 convolutional layers of VGG16 and incorporating pre-trained weights, we constructed a VGG-UNet model well suited to yarn roll segmentation. On the test set, the model achieved an average Intersection over Union (IoU) of 98.70%. Subsequently, the contour edge point set was obtained through traditional image processing techniques, and contour fitting was performed. Finally, the actual yarn roll margin was calculated from the relationship between pixel dimensions and actual dimensions. Experiments demonstrate that the yarn roll margin can be measured with an error of less than 3 mm, and detection accuracy remains high even when the margin is narrow. This study provides significant technical support and a theoretical foundation for the automation of the textile industry. Full article
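Two steps in this pipeline are easy to make concrete: scoring the segmentation with IoU, and converting a pixel measurement to an actual length via a known reference. A minimal NumPy sketch (the function names and the reference-length calibration scheme are assumptions for illustration, not the paper's implementation):

```python
import numpy as np

def mask_iou(pred, gt):
    """Intersection over Union between two binary segmentation masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union if union else 1.0

def margin_mm(margin_px, ref_px, ref_mm):
    """Convert a margin measured in pixels to millimetres, given a
    reference feature whose pixel and real lengths are both known."""
    return margin_px * (ref_mm / ref_px)  # mm-per-pixel scale
```

For example, if a 100 mm reference feature spans 400 pixels, a 120-pixel margin corresponds to 30 mm.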
17 pages, 6059 KiB  
Article
ECF-Net: Enhanced, Channel-Based, Multi-Scale Feature Fusion Network for COVID-19 Image Segmentation
by Zhengjie Ji, Junhao Zhou, Linjing Wei, Shudi Bao, Meng Chen, Hongxing Yuan and Jianjun Zheng
Electronics 2024, 13(17), 3501; https://doi.org/10.3390/electronics13173501 - 3 Sep 2024
Abstract
Accurate segmentation of COVID-19 lesion regions in lung CT images aids physicians in analyzing and diagnosing patients’ conditions. However, the varying morphology and blurred contours of these regions make this task complex and challenging. Existing methods utilizing Transformer architecture lack attention to local features, leading to the loss of detailed information in tiny lesion regions. To address these issues, we propose a multi-scale feature fusion network, ECF-Net, based on channel enhancement. Specifically, we leverage the learning capabilities of both CNN and Transformer architectures to design parallel channel extraction blocks in three different ways, effectively capturing diverse lesion features. Additionally, to minimize irrelevant information in the high-dimensional feature space and focus the network on useful and critical information, we develop adaptive feature generation blocks. Lastly, a bidirectional pyramid-structured feature fusion approach is introduced to integrate features at different levels, enhancing the diversity of feature representations and improving segmentation accuracy for lesions of various scales. The proposed method is tested on four COVID-19 datasets, demonstrating mIoU values of 84.36%, 87.15%, 83.73%, and 75.58%, respectively, outperforming several current state-of-the-art methods and exhibiting excellent segmentation performance. These findings provide robust technical support for medical image segmentation in clinical practice. Full article
(This article belongs to the Special Issue Biomedical Image Processing and Classification, 2nd Edition)
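The abstract does not detail the structure of ECF-Net's bidirectional pyramid fusion; a common pattern is a top-down pass that upsamples coarse features into finer levels, followed by a bottom-up pass that pools fine detail back into coarser levels. A minimal single-channel NumPy sketch of that pattern (the function names and the add-based fusion are illustrative assumptions, not the paper's design):

```python
import numpy as np

def up2(x):
    """Nearest-neighbour 2x upsampling of a (H, W) feature map."""
    return np.repeat(np.repeat(x, 2, axis=0), 2, axis=1)

def down2(x):
    """2x average-pool downsampling of a (H, W) feature map."""
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def bidirectional_fuse(feats):
    """feats: feature maps ordered fine (large) to coarse (small), each half
    the resolution of the previous. The top-down pass injects upsampled
    coarse context into finer levels; the bottom-up pass injects pooled
    fine detail into coarser levels."""
    td = list(feats)
    for i in range(len(td) - 2, -1, -1):   # top-down pass
        td[i] = td[i] + up2(td[i + 1])
    out = list(td)
    for i in range(1, len(out)):           # bottom-up pass
        out[i] = out[i] + down2(out[i - 1])
    return out
```

Fusing in both directions lets every pyramid level see both global context and local detail, which is what helps with lesions of widely varying scales.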