

Search Results (33)

Search Parameters:
Keywords = ultrasonic image segmentation

17 pages, 815 KiB  
Article
DAFT-Net: Dual Attention and Fast Tongue Contour Extraction Using Enhanced U-Net Architecture
by Xinqiang Wang, Wenhuan Lu, Hengxin Liu, Wei Zhang and Qiang Li
Entropy 2024, 26(6), 482; https://doi.org/10.3390/e26060482 - 31 May 2024
Viewed by 480
Abstract
In most silent speech research, continuously observing tongue movements is crucial, which requires the use of ultrasound to extract tongue contours. Extracting ultrasonic tongue contours precisely and in real time presents a major challenge. To tackle this challenge, the novel end-to-end lightweight network DAFT-Net is introduced for ultrasonic tongue contour extraction. Integrating the Convolutional Block Attention Module (CBAM) and Attention Gate (AG) module with entropy-based optimization strategies, DAFT-Net establishes a comprehensive attention mechanism with dual functionality. This approach enhances feature representation by replacing the traditional skip connection architecture, leveraging entropy and information-theoretic measures to ensure efficient and precise feature selection. Additionally, the U-Net’s encoder and decoder layers have been streamlined to reduce computational demands. This process is further supported by information theory, guiding the reduction without compromising the network’s ability to capture and utilize critical information. Ablation studies confirm the efficacy of the integrated attention module and its components. The comparative analysis of the NS, TGU, and TIMIT datasets shows that DAFT-Net efficiently extracts relevant features and significantly reduces extraction time. These findings demonstrate the practical advantages of applying entropy and information theory principles. This approach improves the performance of medical image segmentation networks, paving the way for real-world applications. Full article
(This article belongs to the Section Multidisciplinary Applications)
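The abstract above combines channel-wise and spatial attention (CBAM) for feature gating. As a rough illustration of the idea, not the authors' DAFT-Net code, the following NumPy sketch gates a feature map first per channel (pooled statistics through a shared two-layer MLP, standing in for CBAM's learned MLP) and then per location (standing in for CBAM's 7×7 convolution with a simple pointwise gate); all weights here are hypothetical:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(fmap, w1, w2):
    # fmap: (C, H, W). Squeeze spatial dims by average and max pooling,
    # pass both through a shared 2-layer MLP, then gate each channel.
    avg = fmap.mean(axis=(1, 2))                      # (C,)
    mx = fmap.max(axis=(1, 2))                        # (C,)
    mlp = lambda v: w2 @ np.maximum(w1 @ v, 0.0)      # shared MLP, ReLU hidden
    gate = sigmoid(mlp(avg) + mlp(mx))                # (C,), values in (0, 1)
    return fmap * gate[:, None, None]

def spatial_attention(fmap):
    # Pool across channels and gate each spatial location
    # (a pointwise stand-in for CBAM's 7x7 convolution).
    avg = fmap.mean(axis=0)
    mx = fmap.max(axis=0)
    gate = sigmoid(avg + mx)
    return fmap * gate[None, :, :]
```

Because the gates lie in (0, 1), attended features are a soft re-weighting of the input, which is what lets such modules replace plain skip connections without changing tensor shapes.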

14 pages, 5412 KiB  
Article
Swin-Net: A Swin-Transformer-Based Network Combing with Multi-Scale Features for Segmentation of Breast Tumor Ultrasound Images
by Chengzhang Zhu, Xian Chai, Yalong Xiao, Xu Liu, Renmao Zhang, Zhangzheng Yang and Zhiyuan Wang
Diagnostics 2024, 14(3), 269; https://doi.org/10.3390/diagnostics14030269 - 26 Jan 2024
Cited by 1 | Viewed by 1791
Abstract
Breast cancer is one of the most common cancers in the world, especially among women. Breast tumor segmentation is a key step in the identification and localization of the breast tumor region, which has important clinical significance. Inspired by the swin-transformer model with powerful global modeling ability, we propose a semantic segmentation framework named Swin-Net for breast ultrasound images, which combines Transformer and Convolutional Neural Networks (CNNs) to effectively improve the accuracy of breast ultrasound segmentation. Firstly, our model utilizes a swin-transformer encoder with stronger learning ability, which can extract features of images more precisely. In addition, two new modules are introduced in our method, including the feature refinement and enhancement module (RLM) and the hierarchical multi-scale feature fusion module (HFM), given that the influence of ultrasonic image acquisition methods and the characteristics of tumor lesions is difficult to capture. Among them, the RLM module is used to further refine and enhance the feature map learned by the transformer encoder. The HFM module is used to process multi-scale high-level semantic features and low-level details, so as to achieve effective cross-layer feature fusion, suppress noise, and improve model segmentation performance. Experimental results show that Swin-Net performs significantly better than the most advanced methods on the two public benchmark datasets. In particular, it achieves an absolute improvement of 1.4–1.8% on Dice. Additionally, we provide a new dataset of breast ultrasound images on which we test the effect of our model, further demonstrating the validity of our method. In summary, the proposed Swin-Net framework makes significant advancements in breast ultrasound image segmentation, providing valuable exploration for research and applications in this domain. Full article
(This article belongs to the Section Medical Imaging and Theranostics)
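The Swin-Net entry reports an absolute improvement of 1.4–1.8% on Dice. For readers unfamiliar with the metric, a minimal NumPy sketch of the Dice coefficient over binary masks (illustrative only, not the paper's evaluation code):

```python
import numpy as np

def dice_score(pred, target, eps=1e-7):
    # pred, target: binary masks of the same shape.
    # Dice = 2|A ∩ B| / (|A| + |B|); eps guards the empty-mask case.
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)
```

A Dice of 1.0 means the predicted tumor mask exactly matches the ground truth, so a 1.4–1.8% absolute gain is a direct increase in overlap.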

25 pages, 20599 KiB  
Article
Gene-Specific Discriminative Echocardiogram Findings in Hypertrophic Cardiomyopathy Determined Using Artificial Intelligence: A Pilot Study
by Mila Glavaški, Aleksandra Ilić and Lazar Velicki
Cardiogenetics 2024, 14(1), 1-25; https://doi.org/10.3390/cardiogenetics14010001 - 25 Dec 2023
Viewed by 1290
Abstract
Hypertrophic cardiomyopathy (HCM) is among the most common forms of cardiomyopathies, with a prevalence of 1:200 to 1:500 people. HCM is caused by variants in genes encoding cardiac sarcomeric proteins, of which a majority reside in MYH7, MYBPC3, and TNNT2. Up to 40% of the HCM cases do not have any known HCM variant. Genotype–phenotype associations in HCM remain incompletely understood. This study involved two visits of 46 adult patients with a confirmed diagnosis of HCM. In total, 174 genes were analyzed on the Next-Generation Sequencing platform, and transthoracic echocardiography was performed. Gene-specific discriminative echocardiogram findings were identified using the computer vision library Fast AI. This was accomplished with the generation of deep learning models for the classification of ultrasonic images based on the underlying genotype and a later analysis of the most decisive image regions. Gene-specific echocardiogram findings were identified: for variants in the MYH7 gene (vs. variant not detected), the most discriminative structures were the septum, left ventricular outflow tract (LVOT) segment, anterior wall, apex, right ventricle, and mitral apparatus; for variants in MYBPC3 gene (vs. variant not detected) these were the septum, left ventricle, and left ventricle/chamber; while for variants in the TNNT2 gene (vs. variant not detected), the most discriminative structures were the septum and right ventricle. Full article
(This article belongs to the Section Cardiovascular Genetics in Clinical Practice)
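The HCM study identifies the "most decisive image regions" behind a classifier's genotype call. One common way to do this (the paper uses the fastai library; this is only a generic sketch with a hypothetical scoring function) is occlusion sensitivity — mask each region and measure how much the score drops:

```python
import numpy as np

def occlusion_map(image, score_fn, patch=4, fill=0.0):
    # Slide an occluding patch over the image; where masking the region
    # drops the classifier score most, that region was most decisive.
    base = score_fn(image)
    h, w = image.shape
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = fill
            heat[i // patch, j // patch] = base - score_fn(occluded)
    return heat
```

On echocardiograms, the hot cells of such a map can then be read off against anatomy (septum, LVOT, mitral apparatus, and so on), which is the kind of region-level interpretation the abstract describes.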

13 pages, 3557 KiB  
Article
Rapid Segmentation and Diagnosis of Breast Tumor Ultrasound Images at the Sonographer Level Using Deep Learning
by Lei Yang, Baichuan Zhang, Fei Ren, Jianwen Gu, Jiao Gao, Jihua Wu, Dan Li, Huaping Jia, Guangling Li, Jing Zong, Jing Zhang, Xiaoman Yang, Xueyuan Zhang, Baolin Du, Xiaowen Wang and Na Li
Bioengineering 2023, 10(10), 1220; https://doi.org/10.3390/bioengineering10101220 - 19 Oct 2023
Cited by 2 | Viewed by 2491
Abstract
Background: Breast cancer is one of the most common malignant tumors in women. A noninvasive ultrasound examination can identify mammary-gland-related diseases and is well tolerated by dense breast, making it a preferred method for breast cancer screening and of significant clinical value. However, the diagnosis of breast nodules or masses via ultrasound is performed by a doctor in real time, which is time-consuming and subjective. Junior doctors are prone to missed diagnoses, especially in remote areas or grass-roots hospitals, due to limited medical resources and other factors, which bring great risks to a patient’s health. Therefore, there is an urgent need to develop fast and accurate ultrasound image analysis algorithms to assist diagnoses. Methods: We propose a breast ultrasound image-based assisted-diagnosis method based on convolutional neural networks, which can effectively improve the diagnostic speed and the early screening rate of breast cancer. Our method consists of two stages: tumor recognition and tumor classification. (1) Attention-based semantic segmentation is used to identify the location and size of the tumor; (2) the identified nodules are cropped to construct a training dataset. Then, a convolutional neural network for the diagnosis of benign and malignant breast nodules is trained on this dataset. We collected 2057 images from 1131 patients as the training and validation dataset, and 100 images of the patients with accurate pathological criteria were used as the test dataset. Results: The experimental results based on this dataset show that the MIoU of tumor location recognition is 0.89 and the average accuracy of benign and malignant diagnoses is 97%. The diagnosis performance of the developed diagnostic system is basically consistent with that of senior doctors and is superior to that of junior doctors. In addition, we can provide the doctor with a preliminary diagnosis so that it can be diagnosed quickly. 
Conclusion: Our proposed method can effectively improve diagnostic speed and the early screening rate of breast cancer. The system provides a valuable aid for the ultrasonic diagnosis of breast cancer. Full article
(This article belongs to the Special Issue Artificial Intelligence in Biomedical Imaging)
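This entry reports an MIoU of 0.89 for tumor location recognition. A minimal NumPy sketch of mean intersection-over-union across classes (illustrative, not the authors' evaluation code):

```python
import numpy as np

def mean_iou(pred, target, num_classes):
    # Per-class intersection-over-union, averaged over classes
    # that appear in either mask (empty classes are skipped).
    ious = []
    for c in range(num_classes):
        p, t = pred == c, target == c
        union = np.logical_or(p, t).sum()
        if union == 0:
            continue
        ious.append(np.logical_and(p, t).sum() / union)
    return float(np.mean(ious))
```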

11 pages, 2553 KiB  
Article
Improved Monitoring of Wildlife Invasion through Data Augmentation by Extract–Append of a Segmented Entity
by Jaekwang Lee, Kangmin Lim and Jeongho Cho
Sensors 2022, 22(19), 7383; https://doi.org/10.3390/s22197383 - 28 Sep 2022
Cited by 4 | Viewed by 1849
Abstract
Owing to the continuous increase in the damage to farms due to wild animals’ destruction of crops in South Korea, various methods have been proposed to resolve these issues, such as installing electric fences and using warning lamps or ultrasonic waves. Recently, new methods have been attempted by applying deep learning-based object-detection techniques to a robot. However, for effective training of a deep learning-based object-detection model, overfitting or biased training should be avoided; furthermore, a huge number of datasets are required. In particular, establishing a training dataset for specific wild animals requires considerable time and labor. Therefore, this study proposes an Extract–Append data augmentation method where specific objects are extracted from a limited number of images via semantic segmentation and corresponding objects are appended to numerous arbitrary background images. Thus, the study aimed to improve the model’s detection performance by generating a rich dataset on wild animals with various background images, particularly images of water deer and wild boar, which are currently causing the most problematic social issues. The comparison between the object detector trained using the proposed Extract–Append technique and that trained using the existing data augmentation techniques showed that the mean Average Precision (mAP) improved by ≥2.2%. Moreover, further improvement in detection performance of the deep learning-based object-detection model can be expected as the proposed technique can solve the issue of the lack of specific data that are difficult to obtain. Full article
(This article belongs to the Special Issue Recognition Robotics)
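The Extract–Append idea — cut a segmented animal out of one image and paste it onto arbitrary backgrounds — reduces to masked copying once the segmentation mask is available. A minimal NumPy sketch of the paste step (the function name and interface are illustrative, not from the paper):

```python
import numpy as np

def extract_append(obj_img, obj_mask, background, top, left):
    # Copy only the masked (foreground) pixels of the segmented object
    # onto a copy of the background, placed at (top, left).
    out = background.copy()
    h, w = obj_mask.shape
    region = out[top:top + h, left:left + w]   # a view into out
    region[obj_mask] = obj_img[obj_mask]       # writes through the view
    return out
```

Repeating this over many backgrounds and placements yields the enlarged, varied training set the abstract credits for the ≥2.2% mAP gain.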

16 pages, 6266 KiB  
Article
Zero-Defect Manufacturing and Automated Defect Detection Using Time of Flight Diffraction (TOFD) Images
by Sulochana Subramaniam, Jamil Kanfoud and Tat-Hean Gan
Machines 2022, 10(10), 839; https://doi.org/10.3390/machines10100839 - 21 Sep 2022
Cited by 10 | Viewed by 2373
Abstract
Ultrasonic time-of-flight diffraction (TOFD) is a non-destructive testing (NDT) technique for weld inspection that has gained popularity in the industry, due to its ability to detect, position, and size defects based on the time difference of the echo signal. Although the TOFD technique provides high-speed data, ultrasonic data interpretation is typically a manual and time-consuming process, thereby necessitating a trained expert. The main aim of this work is to develop a fully automated defect detection and data interpretation approach that enables predictive maintenance using signal and image processing. Through this research, the characterization of weld defects was achieved by identifying the region of interest from A-scan signals, followed by segmentation. The experimental results were compared with samples of known defect size for validation; it was found that this novel method is capable of automatically measuring the defect size with considerable accuracy. It is anticipated that using such a system will significantly improve inspection speed, cost-efficiency, and safety. Full article
(This article belongs to the Section Advanced Manufacturing)
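TOFD sizes and positions defects from echo arrival times. In the standard textbook geometry, for a diffractor midway between the two probes, the travel path satisfies c·t = 2·√(S² + d²), with S half the probe centre separation. A small sketch of that inversion (illustrative; the paper's full pipeline also includes signal and image processing):

```python
import math

def tofd_depth(t, c, pcs):
    # t: time of flight (s); c: longitudinal wave speed (m/s);
    # pcs: probe centre separation (m).
    # Assumes the diffractor lies midway between the probes,
    # so c*t = 2*sqrt(S^2 + d^2) with S = pcs/2.
    s = pcs / 2.0
    half_path = c * t / 2.0
    return math.sqrt(half_path**2 - s**2)
```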

14 pages, 4659 KiB  
Article
Semantic Segmentation of the Malignant Breast Imaging Reporting and Data System Lexicon on Breast Ultrasound Images by Using DeepLab v3+
by Wei-Chung Shia, Fang-Rong Hsu, Seng-Tong Dai, Shih-Lin Guo and Dar-Ren Chen
Sensors 2022, 22(14), 5352; https://doi.org/10.3390/s22145352 - 18 Jul 2022
Cited by 7 | Viewed by 2072
Abstract
In this study, an advanced semantic segmentation method and deep convolutional neural network was applied to identify the Breast Imaging Reporting and Data System (BI-RADS) lexicon for breast ultrasound images, thereby facilitating image interpretation and diagnosis by providing radiologists an objective second opinion. A total of 684 images (380 benign and 308 malignant tumours) from 343 patients (190 benign and 153 malignant breast tumour patients) were analysed in this study. Six malignancy-related standardised BI-RADS features were selected after analysis. The DeepLab v3+ architecture and four decode networks were used, and their semantic segmentation performance was evaluated and compared. Subsequently, DeepLab v3+ with the ResNet-50 decoder showed the best performance in semantic segmentation, with a mean accuracy and mean intersection over union (IU) of 44.04% and 34.92%, respectively. The weighted IU was 84.36%. For the diagnostic performance, the area under the curve was 83.32%. This study aimed to automate identification of the malignant BI-RADS lexicon on breast ultrasound images to facilitate diagnosis and improve its quality. The evaluation showed that DeepLab v3+ with the ResNet-50 decoder was suitable for solving this problem, offering a better balance of performance and computational resource usage than a fully connected network and other decoders. Full article
(This article belongs to the Special Issue Artificial Intelligence in Medical Imaging and Visual Sensing)
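This entry reports a weighted IU of 84.36% alongside a much lower unweighted mean IU, a gap that arises when frequent classes (such as background) are segmented well and rare lexicon classes poorly. A sketch of frequency-weighted IoU (illustrative; the exact weighting in the paper's toolchain may differ):

```python
import numpy as np

def weighted_iou(pred, target, num_classes):
    # Per-class IoU weighted by how often each class occurs in the
    # ground truth, so dominant classes contribute more to the score.
    total = target.size
    score = 0.0
    for c in range(num_classes):
        p, t = pred == c, target == c
        union = np.logical_or(p, t).sum()
        if union == 0:
            continue
        iou = np.logical_and(p, t).sum() / union
        score += (t.sum() / total) * iou
    return score
```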

17 pages, 2806 KiB  
Article
De-Speckling Breast Cancer Ultrasound Images Using a Rotationally Invariant Block Matching Based Non-Local Means (RIBM-NLM) Method
by Gelan Ayana, Kokeb Dese, Hakkins Raj, Janarthanan Krishnamoorthy and Timothy Kwa
Diagnostics 2022, 12(4), 862; https://doi.org/10.3390/diagnostics12040862 - 30 Mar 2022
Cited by 11 | Viewed by 2564
Abstract
The ultrasonic technique is an indispensable imaging modality for the diagnosis of breast cancer in young women due to its ability to efficiently capture tissue properties and decrease the negative recognition rate, thereby avoiding non-essential biopsies. Despite these advantages, ultrasound images are affected by speckle noise, which generates fine false structures that decrease the contrast of the images and diminish the actual boundaries of tissues on the ultrasound image. Moreover, speckle noise negatively impacts the subsequent stages in the image processing pipeline, such as edge detection, segmentation, feature extraction, and classification. Previous studies have formulated various speckle reduction methods for ultrasound images; however, these methods are unable to retain finer edge details and require more processing time. In this study, we propose a breast ultrasound de-speckling method based on rotationally invariant block matching non-local means (RIBM-NLM) filtering. The effectiveness of our method has been demonstrated by comparing our results with three established de-speckling techniques, the switching bilateral filter (SBF), the non-local means filter (NLMF), and the optimized non-local means filter (ONLMF), on 250 images from a public dataset and 6 images from a private dataset. Evaluation metrics, including the Structural Similarity Index Measure (SSIM), Peak Signal-to-Noise Ratio (PSNR), and Mean Square Error (MSE), were used to measure performance. With the proposed method, we recorded an average SSIM of 0.8915, PSNR of 65.97, MSE of 0.014, RMSE of 0.119, and computational time of 82 seconds at a noise variance of 20 dB on the public dataset, all with p-values of less than 0.001 compared against NLMF, ONLMF, and SBF. Similarly, the proposed method achieved an average SSIM of 0.83, PSNR of 66.26, MSE of 0.015, RMSE of 0.124, and computational time of 83 seconds at a noise variance of 20 dB on the private dataset, all with p-values of less than 0.001 compared against NLMF, ONLMF, and SBF. Full article
(This article belongs to the Section Point-of-Care Diagnostics and Devices)
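The de-speckling comparison above is scored with MSE and PSNR. Both are one-liners; a NumPy sketch (illustrative, not the authors' evaluation code):

```python
import numpy as np

def mse(a, b):
    # Mean squared error between a reference and a test image.
    return float(np.mean((a.astype(float) - b.astype(float)) ** 2))

def psnr(ref, test, peak=255.0):
    # Peak signal-to-noise ratio in dB; higher means the de-speckled
    # image is closer to the reference. Infinite for identical images.
    m = mse(ref, test)
    if m == 0:
        return float("inf")
    return 10.0 * np.log10(peak**2 / m)
```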

11 pages, 3028 KiB  
Article
Comparative Analysis of Ease of Removal of Fractured NiTi Endodontic Rotary Files from the Root Canal System—An In Vitro Study
by Vicente Faus-Matoses, Eva Burgos Ibáñez, Vicente Faus-Llácer, Celia Ruiz-Sánchez, Álvaro Zubizarreta-Macho and Ignacio Faus-Matoses
Int. J. Environ. Res. Public Health 2022, 19(2), 718; https://doi.org/10.3390/ijerph19020718 - 10 Jan 2022
Cited by 2 | Viewed by 2451
Abstract
This study aimed at analyzing and comparing the ease of removal of fractured nickel–titanium (NiTi) endodontic rotary files from the root canal system between the ultrasonic tips and the Endo Rescue appliance removal systems, as well as comparing the volume of dentin removed between ultrasonic tips and the Endo Rescue appliance using a micro-computed tomography (micro-CT) scan. Material and Methods: Forty NiTi endodontic rotary files were intentionally fractured in 40 root canal systems of 20 lower first molar teeth and distributed into the following study groups: A: Ultrasonic tips (n = 20) (US) and B: Endo Rescue device (n = 20) (ER). Preoperative and postoperative micro-CT scans were uploaded into image processing software to analyze the volumetric variations of dentin using an algorithm that enables progressive differentiation between neighboring pixels after defining and segmenting the fractured NiTi endodontic rotary files and the root canal systems in both micro-CT scans. A non-parametric Mann–Whitney–Wilcoxon test or t-test for independent samples was used to analyze the results. Results: In the US and ER study groups, 8 (1 mesiobuccal and 7 distal root canal systems) and 3 (distal root canal systems) fractured NiTi endodontic rotary files were removed, respectively. No statistically significant differences were found in the amount of dentin removed between the US and ER study groups at the mesiobuccal (p = 0.9109) and distal root canal systems (p = 0.8669). Conclusions: Ultrasonic tips enable greater ease of removal of NiTi endodontic rotary files from the root canal system, with similar amounts of dentin removal between the two methods. Full article
(This article belongs to the Special Issue New Advances in Dentistry)

15 pages, 2009 KiB  
Article
Automatic Classification of Fatty Liver Disease Based on Supervised Learning and Genetic Algorithm
by Ahmed Gaber, Hassan A. Youness, Alaa Hamdy, Hammam M. Abdelaal and Ammar M. Hassan
Appl. Sci. 2022, 12(1), 521; https://doi.org/10.3390/app12010521 - 5 Jan 2022
Cited by 21 | Viewed by 3877
Abstract
Fatty liver disease is considered a critical illness that should be diagnosed and detected at an early stage. In advanced stages, liver cancer or cirrhosis arise, and to identify this disease, radiologists commonly use ultrasound images. However, because of their low quality, radiologists found it challenging to recognize this disease using ultrasonic images. To avoid this problem, a Computer-Aided Diagnosis technique is developed in the current study, using Machine Learning Algorithms and a voting-based classifier to categorize liver tissues as being fatty or normal, based on extracting ultrasound image features and a voting-based classifier. Four main contributions are provided by our developed method: firstly, the classification of liver images is achieved as normal or fatty without a segmentation phase. Secondly, compared to our proposed work, the dataset in previous works was insufficient. A combination of 26 features is the third contribution. Based on the proposed methods, the extracted features are Gray-Level Co-Occurrence Matrix (GLCM) and First-Order Statistics (FOS). The fourth contribution is the voting classifier used to determine the liver tissue type. Several trials have been performed by examining the voting-based classifier and J48 algorithm on a dataset. The obtained TP, TN, FP, and FN were 94.28%, 97.14%, 5.71%, and 2.85%, respectively. The achieved precision, sensitivity, specificity, and F1-score were 94.28%, 97.05%, 94.44%, and 95.64%, respectively. The achieved classification accuracy using a voting-based classifier was 95.71% and in the case of using the J48 algorithm was 93.12%. The proposed work achieved a high performance compared with the research works. Full article
(This article belongs to the Section Biomedical Engineering)
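The fatty-liver classifier above draws its features from Gray-Level Co-Occurrence Matrices (GLCM) and first-order statistics. A compact NumPy sketch of a GLCM for one offset (illustrative; the paper's 26-feature set combines several offsets and derived statistics):

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=4, symmetric=True):
    # Gray-level co-occurrence matrix: counts how often gray level j
    # occurs at offset (dy, dx) from gray level i, then normalises
    # the counts to joint probabilities. Assumes dx, dy >= 0 and
    # img already quantised to `levels` gray levels.
    g = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            i, j = img[y, x], img[y + dy, x + dx]
            g[i, j] += 1
            if symmetric:
                g[j, i] += 1
    return g / g.sum()
```

Texture features such as contrast, energy, and homogeneity are then simple weighted sums over this matrix.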

12 pages, 3125 KiB  
Article
Incorporating the Breast Imaging Reporting and Data System Lexicon with a Fully Convolutional Network for Malignancy Detection on Breast Ultrasound
by Yung-Hsien Hsieh, Fang-Rong Hsu, Seng-Tong Dai, Hsin-Ya Huang, Dar-Ren Chen and Wei-Chung Shia
Diagnostics 2022, 12(1), 66; https://doi.org/10.3390/diagnostics12010066 - 28 Dec 2021
Cited by 4 | Viewed by 1899
Abstract
In this study, we applied semantic segmentation using a fully convolutional deep learning network to identify characteristics of the Breast Imaging Reporting and Data System (BI-RADS) lexicon from breast ultrasound images to facilitate clinical malignancy tumor classification. Among 378 images (204 benign and 174 malignant images) from 189 patients (102 benign breast tumor patients and 87 malignant patients), we identified seven malignant characteristics related to the BI-RADS lexicon in breast ultrasound. The mean accuracy and mean IU of the semantic segmentation were 32.82% and 28.88, respectively. The weighted intersection over union was 85.35%, and the area under the curve was 89.47%, showing better performance than similar semantic segmentation networks, SegNet and U-Net, in the same dataset. Our results suggest that the utilization of a deep learning network in combination with the BI-RADS lexicon can be an important supplemental tool when using ultrasound to diagnose breast malignancy. Full article
(This article belongs to the Special Issue Novel Approaches in Oncologic Imaging)

20 pages, 12046 KiB  
Article
Design of Ultrasonic Synthetic Aperture Imaging Systems Based on a Non-Grid 2D Sparse Array
by Júlio Cesar Eduardo de Souza, Montserrat Parrilla Romero, Ricardo Tokio Higuti and Óscar Martínez-Graullera
Sensors 2021, 21(23), 8001; https://doi.org/10.3390/s21238001 - 30 Nov 2021
Cited by 4 | Viewed by 2461
Abstract
This work provides a guide to design ultrasonic synthetic aperture systems for non-grid two-dimensional sparse arrays such as spirals or annular segmented arrays. It presents an algorithm that identifies which elements have a more significant impact on the beampattern characteristics and uses this information to reduce the number of signals, the number of emitters and the number of parallel receiver channels involved in the beamforming process. Consequently, we can optimise the 3D synthetic aperture ultrasonic imaging system for a specific sparse array, reducing the computational cost, the hardware requirements and the system complexity. Simulations using a Fermat spiral array and experimental data based on an annular segmented array with 64 elements are used to assess this algorithm. Full article
(This article belongs to the Special Issue Ultrasonic Imaging and Sensors)
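The element-selection algorithm above ranks elements by their impact on the beampattern. For orientation, a minimal NumPy sketch of the far-field beampattern of element positions projected on one axis (illustrative; the paper works with full 2D sparse apertures and synthetic aperture beamforming):

```python
import numpy as np

def beampattern(xpos, freq, c, angles):
    # Far-field array factor |sum_n exp(j*k*x_n*sin(theta))|,
    # normalised to its peak. xpos: element positions (m),
    # freq: frequency (Hz), c: sound speed (m/s), angles: radians.
    k = 2 * np.pi * freq / c
    theta = np.asarray(angles)
    phases = np.exp(1j * k * np.outer(np.sin(theta), xpos))
    bp = np.abs(phases.sum(axis=1))
    return bp / bp.max()
```

Dropping an element and re-evaluating this pattern (main-lobe width, side-lobe level) is the kind of per-element impact measure such selection schemes rank by.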

16 pages, 6620 KiB  
Article
A Novel Defect Estimation Approach in Wind Turbine Blades Based on Phase Velocity Variation of Ultrasonic Guided Waves
by Renaldas Raišutis, Kumar Anubhav Tiwari, Egidijus Žukauskas, Olgirdas Tumšys and Lina Draudvilienė
Sensors 2021, 21(14), 4879; https://doi.org/10.3390/s21144879 - 17 Jul 2021
Cited by 10 | Viewed by 2696
Abstract
The reliability of wind turbine blade (WTB) evaluation using a new criterion is presented in this work. Variation of the ultrasonic guided wave (UGW) phase velocity is proposed as a new criterion for defect detection. A phase velocity threshold, calculated from an intermediate value between the maximum and minimum values, is used for defect detection, location, and sizing. The operation of the proposed technique is verified using simulation and experimental studies. An artificially milled defect with a diameter of 81 mm on a segment of a WTB is used for verification of the proposed technique. After applying the proposed evaluation technique to the simulated B-scan image, the coordinates of the defect edges were estimated with relative errors of 3.7% and 3%, respectively. The size of the defect was estimated with a relative error of 2.7%. In the case of the experimentally measured B-scan image, the coordinates of the defect edges were estimated with relative errors of 12.5% and 3.9%, respectively. The size of the defect was estimated with a relative error of 10%. The comparative results obtained by modelling and experiment show the suitability of the proposed new criterion for defect detection tasks. Full article
(This article belongs to the Special Issue Ultrasonic Imaging and Sensors)
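The defect criterion above tracks phase velocity along the scan. One standard way to estimate guided-wave phase velocity between two receiver positions is from the phase shift of the dominant frequency bin: v_p = 2πf·Δx/Δφ. A NumPy sketch under that assumption (illustrative; the paper's B-scan processing is more involved):

```python
import numpy as np

def phase_velocity(sig_a, sig_b, dx, fs, freq):
    # sig_a, sig_b: waveforms at two points a distance dx apart
    # (sig_b further from the source); fs: sample rate; freq: the
    # frequency of interest. Phase lag at that bin gives
    # v_p = 2*pi*freq*dx / delta_phi. Assumes |delta_phi| < pi.
    n = len(sig_a)
    k = int(round(freq * n / fs))                    # FFT bin of interest
    pa = np.angle(np.fft.rfft(sig_a)[k])
    pb = np.angle(np.fft.rfft(sig_b)[k])
    dphi = (pa - pb + np.pi) % (2 * np.pi) - np.pi   # wrap to (-pi, pi]
    return 2 * np.pi * freq * dx / dphi
```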

14 pages, 3634 KiB  
Article
Distance Measurement of Unmanned Aerial Vehicles Using Vision-Based Systems in Unknown Environments
by Wahyu Rahmaniar, Wen-June Wang, Wahyu Caesarendra, Adam Glowacz, Krzysztof Oprzędkiewicz, Maciej Sułowicz and Muhammad Irfan
Electronics 2021, 10(14), 1647; https://doi.org/10.3390/electronics10141647 - 10 Jul 2021
Cited by 7 | Viewed by 2913
Abstract
Localization of indoor aerial robots remains a challenging issue because global positioning system (GPS) signals often cannot penetrate buildings. In previous studies, navigation of mobile robots without GPS required the registration of building maps beforehand. This paper proposes a novel framework for addressing indoor positioning for unmanned aerial vehicles (UAV) in unknown environments using a camera. First, the UAV attitude is estimated to determine whether the robot is moving forward. Then, the camera position is estimated based on optical flow and the Kalman filter. Semantic segmentation using deep learning is carried out to obtain the position of the wall in front of the robot. The UAV distance is measured using the image size ratio, based on the corresponding feature points between the current wall image and a reference wall image. The UAV is equipped with ultrasonic sensors to measure the distance of the UAV from the surrounding wall. The ground station receives information from the UAV to show the obstacles around the UAV and its current location. The algorithm is verified by capturing images with distance information and comparing them with the current image and UAV position. The experimental results show that the proposed method achieves an accuracy of 91.7% and a computation time of 8 frames per second (fps). Full article
(This article belongs to the Section Microwave and Wireless Communications)
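The distance-from-size-ratio step above follows from the pinhole camera model: apparent size scales inversely with distance. A one-line sketch of that relation (illustrative; the paper derives the ratio from matched feature points, not a single size measurement):

```python
def distance_from_size_ratio(ref_distance, ref_pixels, cur_pixels):
    # Pinhole model: an object of fixed physical size appears
    # proportionally smaller with distance, so
    # d_cur = d_ref * (s_ref / s_cur).
    return ref_distance * (ref_pixels / cur_pixels)
```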

20 pages, 23305 KiB  
Article
Dilated Semantic Segmentation for Breast Ultrasonic Lesion Detection Using Parallel Feature Fusion
by Rizwana Irfan, Abdulwahab Ali Almazroi, Hafiz Tayyab Rauf, Robertas Damaševičius, Emad Abouel Nasr and Abdelatty E. Abdelgawad
Diagnostics 2021, 11(7), 1212; https://doi.org/10.3390/diagnostics11071212 - 5 Jul 2021
Cited by 64 | Viewed by 4453
Abstract
Breast cancer is becoming more dangerous by the day. The death rate in developing countries is rapidly increasing. As a result, early detection of breast cancer is critical, leading to a lower death rate. Several researchers have worked on breast cancer segmentation and classification using various imaging modalities. The ultrasonic imaging modality is one of the most cost-effective imaging techniques, with a higher sensitivity for diagnosis. The proposed study segments ultrasonic breast lesion images using a Dilated Semantic Segmentation Network (Di-CNN) combined with a morphological erosion operation. For feature extraction, we used the deep neural network DenseNet201 with transfer learning. We propose a 24-layer CNN that uses transfer learning-based feature extraction to further validate and ensure the enriched features with target intensity. To classify the nodules, the feature vectors obtained from DenseNet201 and the 24-layer CNN were fused using parallel fusion. The proposed methods were evaluated using a 10-fold cross-validation on various vector combinations. The accuracy of CNN-activated feature vectors and DenseNet201-activated feature vectors combined with the Support Vector Machine (SVM) classifier was 90.11 percent and 98.45 percent, respectively. With 98.9 percent accuracy, the fused version of the feature vector with SVM outperformed other algorithms. When compared to recent algorithms, the proposed algorithm achieves a better breast cancer diagnosis rate. Full article
(This article belongs to the Special Issue Deep Learning for Computer-Aided Diagnosis in Biomedical Imaging)
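The Di-CNN pipeline above follows its segmentation network with a morphological erosion. Erosion shrinks a binary mask by removing boundary pixels, which suppresses thin false-positive fringes around a lesion. A NumPy sketch with a square structuring element (illustrative only):

```python
import numpy as np

def binary_erosion(mask, k=3):
    # Erode a binary mask with a k x k square structuring element:
    # a pixel survives only if its entire k x k neighbourhood is
    # foreground. Padding with False erodes the image border.
    pad = k // 2
    padded = np.pad(mask.astype(bool), pad, constant_values=False)
    h, w = mask.shape
    out = np.ones((h, w), dtype=bool)
    for dy in range(k):
        for dx in range(k):
            out &= padded[dy:dy + h, dx:dx + w]
    return out
```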
