
Search Results (26,805)

Search Parameters:
Keywords = image processing

19 pages, 2826 KiB  
Article
Automated Left Ventricle Segmentation in Echocardiography Using YOLO: A Deep Learning Approach for Enhanced Cardiac Function Assessment
by Madankumar Balasubramani, Chih-Wei Sung, Mu-Yang Hsieh, Edward Pei-Chuan Huang, Jiann-Shing Shieh and Maysam F. Abbod
Electronics 2024, 13(13), 2587; https://doi.org/10.3390/electronics13132587 - 1 Jul 2024
Abstract
Accurate segmentation of the left ventricle (LV) in echocardiogram (Echo) images is essential for cardiovascular analysis. Conventional techniques are labor-intensive and exhibit inter-observer variability. Deep learning has emerged as a powerful tool for automated medical image segmentation, offering advantages in speed and potentially superior accuracy. This study explores the efficacy of a YOLO (You Only Look Once) segmentation model for automated LV segmentation in Echo images. YOLO, a cutting-edge object detection model, achieves an exceptional speed–accuracy balance through its well-designed architecture. It utilizes efficient dilated convolutional layers and bottleneck blocks for feature extraction while incorporating innovations such as path aggregation and spatial attention mechanisms. These attributes make YOLO a compelling candidate for adaptation to LV segmentation in Echo images. We posit that by fine-tuning a pre-trained YOLO-based model on a rigorously annotated Echo image dataset, we can leverage the model's strengths in real-time processing and precise object localization to achieve robust LV segmentation. Model performance was evaluated using established metrics, achieving a mean Average Precision of 98.31% at an Intersection over Union (IoU) threshold of 50% (mAP50) and 75.27% across IoU thresholds from 50% to 95% (mAP50:95). Successful implementation of YOLO for LV segmentation has the potential to significantly expedite and standardize Echo image analysis. This advancement could translate to improved clinical decision-making and enhanced patient care.
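The mAP50 and mAP50:95 figures quoted above rest on per-mask Intersection over Union comparisons. As a hedged illustration (not the authors' code; masks and names are invented), the core overlap test behind these metrics can be sketched as:

```python
# Illustrative sketch of the IoU comparison underlying mAP-style metrics.
# Masks are modeled as sets of (row, col) pixel coordinates.

def mask_iou(pred, truth):
    """IoU of two binary masks given as sets of (row, col) pixels."""
    inter = len(pred & truth)
    union = len(pred | truth)
    return inter / union if union else 0.0

def hits_at_iou(pairs, threshold=0.50):
    """Fraction of (predicted, ground-truth) mask pairs whose IoU clears
    the threshold -- the per-image ingredient behind mAP50."""
    return sum(mask_iou(p, t) >= threshold for p, t in pairs) / len(pairs)

pred  = {(0, 0), (0, 1), (1, 0), (1, 1)}
truth = {(0, 1), (1, 0), (1, 1), (2, 1)}
print(mask_iou(pred, truth))          # 3 overlapping / 5 total = 0.6
print(hits_at_iou([(pred, truth)]))   # 0.6 >= 0.5 -> 1.0
```

mAP50:95 repeats the same test at IoU thresholds from 0.50 to 0.95 and averages the resulting precision values.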

19 pages, 746 KiB  
Article
Fast Depth Map Coding Algorithm for 3D-HEVC Based on Gradient Boosting Machine
by Xiaoke Su, Yaqiong Liu and Qiuwen Zhang
Electronics 2024, 13(13), 2586; https://doi.org/10.3390/electronics13132586 - 1 Jul 2024
Abstract
Three-Dimensional High-Efficiency Video Coding (3D-HEVC) has been extensively researched due to its efficient compression and depth-image representation, but encoding complexity continues to pose a difficulty. This is mainly attributed to redundancy in the coding unit (CU) recursive partitioning process and in the rate–distortion (RD) cost calculation, resulting in a complex encoding process. Therefore, enhancing encoding efficiency and reducing redundant computations are key objectives for optimizing 3D-HEVC. This paper introduces a fast encoding method for 3D-HEVC, comprising an adaptive CU partitioning algorithm and a rapid rate–distortion optimization (RDO) algorithm. Based on the ALV features extracted from each coding unit, a Gradient Boosting Machine (GBM) model is constructed to obtain the corresponding CU thresholds. These thresholds are compared with the ALV to decide whether to continue dividing the coding unit. The RDO algorithm optimizes the RD cost calculation process, selecting the optimal prediction mode where possible. The simulation results show that this method reduces encoding complexity by 52.49% while maintaining good video quality.
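Once a trained model maps a CU's ALV feature to a threshold, early termination of the recursive partitioning reduces to one comparison per depth level. A minimal sketch of that decision step (the ALV values, thresholds, and depth walk are illustrative stand-ins, not the paper's implementation):

```python
# Hypothetical sketch: threshold-based early termination of recursive CU
# partitioning. A GBM model would supply `thresholds`; here they are fixed.

def should_split(alv, threshold):
    """Continue partitioning only when the CU's ALV feature exceeds the
    model-predicted threshold for its depth."""
    return alv > threshold

def partition_depth(alv_by_depth, thresholds, max_depth=3):
    """Walk depths 0..max_depth, stopping as soon as the split test fails,
    so deeper (redundant) RD-cost evaluations are skipped."""
    depth = 0
    while depth < max_depth and should_split(alv_by_depth[depth], thresholds[depth]):
        depth += 1
    return depth

# ALV stays above the threshold at depths 0 and 1, then falls below it.
print(partition_depth([9.0, 7.5, 2.0, 1.0], [5.0, 6.0, 4.0, 3.0]))  # 2
```

The saving comes from every depth the walk never reaches: those CUs skip the full RD-cost search entirely.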

16 pages, 4323 KiB  
Article
Processing and Mechanics of Aromatic Vitrimeric Composites at Elevated Temperatures and Healing Performance
by Tanaya Mandal, Unal Ozten, Louis Vaught, Jacob L. Meyer, Ahmad Amiri, Andreas Polycarpou and Mohammad Naraghi
J. Compos. Sci. 2024, 8(7), 252; https://doi.org/10.3390/jcs8070252 - 1 Jul 2024
Abstract
Carbon fiber reinforced polymer (CFRP) composites are renowned for their exceptional mechanical properties, with applications in industries such as automotive, aerospace, medical, and civil engineering. Despite these merits, a significant challenge for CFRPs lies in their repairability and maintenance. This study, for the first time, investigates the processing and self-healing capability of aromatic thermosetting co-polyester vitrimer-based carbon fiber composites through mechanical testing. Vitrimers are an emerging class of thermosetting polymers whose exchangeable covalent bonds enable the re-formation of bonds across cracks. The specific vitrimer chosen for this study is an aromatic thermosetting co-polyester (ATSP). The mechanical properties of samples were analyzed through three-point bending (3PB) testing at room temperature before and after healing (curing samples for 2 h at 280 °C). Samples were also 3PB tested at 100 °C to compare their mechanical properties at an elevated temperature with those at room temperature. To investigate the fracture properties, optical microscopy images taken after the 3PB tests were analyzed to observe crack initiation and growth behavior. From load–displacement curves obtained in double cantilever beam (DCB) testing, the Mode I crack-initiation fracture toughness of self-healed and control composites was calculated to evaluate the healing efficiency of ATSP CFRP composites cured at 280 °C for 2 h. Scanning electron microscopy (SEM) showed similar crack surface morphology before and after self-healing. Micro-computed tomography (micro-CT) X-ray imaging confirmed that the healed samples closely resembled the as-fabricated ones, except for some manufacturing voids caused by outgassing in the initial healing cycle. This research demonstrates in situ repair of ATSP CFRPs, restoring the fracture toughness to values comparable to that of the pristine composite (~289 J/m²).
(This article belongs to the Special Issue Carbon Fiber Composites, Volume III)

26 pages, 12239 KiB  
Article
Deep Learning-Based Intelligent Diagnosis of Lumbar Diseases with Multi-Angle View of Intervertebral Disc
by Kaisi (Kathy) Chen, Lei Zheng, Honghao Zhao and Zihang Wang
Mathematics 2024, 12(13), 2062; https://doi.org/10.3390/math12132062 - 1 Jul 2024
Abstract
The diagnosis of degenerative lumbar spine disease mainly relies on clinical manifestations and imaging examinations. However, the clinical manifestations are sometimes not obvious, and diagnosis from medical imaging is usually time-consuming and relies heavily on the doctor's personal experience. Therefore, a smart diagnostic technology that can assist doctors in manual diagnosis has become particularly urgent. Taking advantage of developments in artificial intelligence, a series of solutions have been proposed for the diagnosis of spinal diseases using deep learning methods. The proposed methods produce appealing results, but the majority of these approaches treat sagittal and axial images separately, which limits their capability due to insufficient use of the data. In this article, we propose a two-stage classification process that fully utilizes the image data. In the first stage, we use the Mask R-CNN model to identify the lumbar spine in the spine image, locate the positions of the vertebrae and discs, and complete a rough classification. In the fine classification stage, a multi-angle view of the intervertebral disc is generated by vertically splicing the sagittal and axial slices of the disc based on the key positions identified in the first stage, providing more information to the deep learning methods for classification. The experimental results reveal substantial performance enhancements with the synthesized multi-angle view, achieving an F1 score of 96.67%. This represents a performance increase of approximately 15% over the sagittal images at 84.48% and nearly 14% over the axial images at 83.15%. This indicates that the proposed paradigm is feasible and more effective in identifying spinal degenerative diseases from medical images.
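The F1 figures quoted above (96.67% vs. 84.48% and 83.15%) follow the standard definition: the harmonic mean of precision and recall. A minimal sketch, with counts invented purely for illustration:

```python
# Standard F1 computation from confusion-matrix counts.

def f1_score(tp, fp, fn):
    """F1 = harmonic mean of precision and recall."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# e.g. 29 true positives, 1 false positive, 1 false negative:
# precision = recall = 29/30, so F1 is also 29/30 ~= 96.67%.
print(round(f1_score(29, 1, 1) * 100, 2))  # 96.67
```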
(This article belongs to the Special Issue Algorithms and Models for Bioinformatics and Biomedical Applications)

16 pages, 3392 KiB  
Article
A Novel Part Refinement Tandem Transformer for Human–Object Interaction Detection
by Zhan Su and Hongzhe Yang
Sensors 2024, 24(13), 4278; https://doi.org/10.3390/s24134278 - 1 Jul 2024
Abstract
Human–object interaction (HOI) detection identifies a set of interactions in an image, involving the recognition of interacting instances and the classification of interaction categories. The complexity and variety of image content make this task challenging. Recently, the Transformer has been applied in computer vision and has received attention in the HOI detection task. Therefore, this paper proposes a novel Part Refinement Tandem Transformer (PRTT) for HOI detection. Unlike previous Transformer-based HOI methods, PRTT utilizes multiple decoders to split and process the rich elements of HOI prediction and introduces a new part state feature extraction (PSFE) module to improve the final interaction category classification. We adopt a novel prior feature integrated cross-attention (PFIC) mechanism that uses the fine-grained part-state semantic and appearance features output by the PSFE module to guide queries. We validate our method on two public datasets, V-COCO and HICO-DET. Compared to state-of-the-art models, PRTT significantly improves human–object interaction detection performance.
(This article belongs to the Special Issue AI-Driven Sensing for Image Processing and Recognition)

20 pages, 10820 KiB  
Article
Mapping Crop Evapotranspiration by Combining the Unmixing and Weight Image Fusion Methods
by Xiaochun Zhang, Hongsi Gao, Liangsheng Shi, Xiaolong Hu, Liao Zhong and Jiang Bian
Remote Sens. 2024, 16(13), 2414; https://doi.org/10.3390/rs16132414 - 1 Jul 2024
Abstract
The demand for freshwater is increasing with population growth and rapid socio-economic development. Crop evapotranspiration (ET) data with a high spatiotemporal resolution are increasingly important for refined irrigation water management in agricultural regions. We propose the unmixing–weight ET image fusion model (UWET), which integrates the advantages of the unmixing method in spatial downscaling and the weight-based method in temporal prediction to produce daily ET maps with a high spatial resolution. The Landsat-ET and MODIS-ET datasets for the UWET fusion are retrieved from Landsat and MODIS images based on the surface energy balance model. The UWET model considers the effects of crop phenology, precipitation, and land cover in the ET image fusion process. The UWET results are evaluated against ET measurements from eddy covariance at the Luancheng station, with an average MAE of 0.57 mm/day. The UWET images show fine spatial detail and capture dynamic ET changes. The seasonal ET of winter wheat from the ET maps mainly ranges from 350 to 660 mm in 2019–2020 and from 300 to 620 mm in 2020–2021, with seasonal averages of 499.89 mm and 459.44 mm, respectively. The performance of UWET is compared with two other fusion models, the Spatial and Temporal Adaptive Reflectance Fusion Model (STARFM) and the Spatial and Temporal Reflectance Unmixing Model (STRUM): UWET performs better than STARFM in spatial detail and better than STRUM in temporal characteristics. The results indicate that UWET is suitable for generating ET products with a high spatial–temporal resolution in agricultural regions.

18 pages, 2032 KiB  
Article
Receptive Field Space for Point Cloud Analysis
by Zhongbin Jiang, Hai Tao and Ye Liu
Sensors 2024, 24(13), 4274; https://doi.org/10.3390/s24134274 - 1 Jul 2024
Abstract
Similar to convolutional neural networks for image processing, existing analysis methods for 3D point clouds often require the designation of a local neighborhood to describe the local features of the point cloud. This local neighborhood is typically manually specified, which makes it impossible for the network to dynamically adjust the receptive field’s range. If the range is too large, it tends to overlook local details, and if it is too small, it cannot establish global dependencies. To address this issue, we introduce in this paper a new concept: receptive field space (RFS). With a minor computational cost, we extract features from multiple consecutive receptive field ranges to form this new receptive field space. On this basis, we further propose a receptive field space attention mechanism, enabling the network to adaptively select the most effective receptive field range from RFS, thus equipping the network with the ability to adjust granularity adaptively. Our approach achieved state-of-the-art performance in both point cloud classification, with an overall accuracy (OA) of 94.2%, and part segmentation, achieving an mIoU of 86.0%, demonstrating the effectiveness of our method.
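The selection step the abstract describes — features extracted at several receptive-field ranges, with the network adaptively weighting the most effective range — has the shape of a softmax attention over a stacked "range" axis. A hedged NumPy sketch (shapes and the scoring layer are illustrative, not the paper's architecture):

```python
import numpy as np

# Illustrative sketch of attention over a receptive field space (RFS):
# features from S candidate receptive-field ranges are blended per point
# using softmax weights that a learned layer would produce.

def rfs_attention(feats, scores):
    """feats:  (S, N, C) features for S receptive-field ranges over N points.
    scores: (S, N) unnormalized selection logits (from a learned layer).
    Returns (N, C): per-point features blended across the S ranges."""
    w = np.exp(scores - scores.max(axis=0, keepdims=True))
    w = w / w.sum(axis=0, keepdims=True)          # softmax over the S ranges
    return (w[..., None] * feats).sum(axis=0)     # weighted sum -> (N, C)

rng = np.random.default_rng(0)
feats = rng.normal(size=(4, 100, 32))   # 4 candidate receptive-field ranges
scores = rng.normal(size=(4, 100))
out = rfs_attention(feats, scores)
print(out.shape)  # (100, 32)
```

With uniform logits the blend degenerates to a plain average over ranges; the learned scores are what let the network sharpen toward one granularity per point.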

15 pages, 3094 KiB  
Technical Note
Interactions between MSTIDs and Ionospheric Irregularities in the Equatorial Region Observed on 13–14 May 2013
by Kun Wu and Liying Qian
Remote Sens. 2024, 16(13), 2413; https://doi.org/10.3390/rs16132413 - 1 Jul 2024
Abstract
We investigate the interactions between medium-scale traveling ionospheric disturbances (MSTIDs) and the equatorial ionization anomaly (EIA), as well as between MSTIDs and equatorial plasma bubbles (EPBs), on the night of 13–14 May 2013, based on observations from multiple instruments (an all-sky imager, a digisonde, and the global positioning system (GPS)). Two dark bands (regions of low plasma density) associated with the MSTIDs were observed moving toward each other, encountering and interacting with the EIA, and subsequently interacting with it again before eventually dissipating. Then, a new dark band of MSTIDs moving in the southwest direction drifted into the all-sky imager's field of view (FOV) and interacted with the EIA. Following this interaction, a new dark band split off from the original one, moved slowly in the northeast direction, and faded away within a short time. Subsequently, the original southwestward-propagating dark band encountered eastward-moving EPBs, leading to an interaction between the MSTIDs and the EPBs. The dark band of the MSTIDs then faded away, while the EPBs grew larger with a pronounced westward tilt. Results from the various observational instruments indicate the pivotal role played by the high-density region of the EIA in these interaction processes. In addition, this study reveals that MSTIDs propagating into the equatorial region can significantly impact the morphology and evolution of EPBs.

20 pages, 12301 KiB  
Article
High-Precision Drilling by Anchor-Drilling Robot Based on Hybrid Visual Servo Control in Coal Mine
by Mengyu Lei, Xuhui Zhang, Wenjuan Yang, Jicheng Wan, Zheng Dong, Chao Zhang and Guangming Zhang
Mathematics 2024, 12(13), 2059; https://doi.org/10.3390/math12132059 - 1 Jul 2024
Abstract
Rock bolting is a commonly used method for stabilizing the surrounding rock in coal-mine roadways. It involves installing rock bolts after drilling; the bolts penetrate unstable rock layers, bind loose rocks together, enhance the stability of the surrounding rock, and control its deformation. Although recent progress in drilling and anchoring equipment has significantly enhanced the efficiency of roof support in coal mines and improved safety, misalignment of the drilling rig with the hole center remains a major issue, which can compromise drilling quality and consequently reduce the effectiveness of bolt support or even cause failure. To address this challenge, this article presents a robotic teleoperation system together with a hybrid visual servo control strategy. To meet the demand for high precision and efficiency in aligning the drilling rig with the center of the drilling hole, the hybrid strategy combines position-based and image-based visual servo control: the former enables an effective approach to the target area, while the latter ensures high-precision alignment with the hole center. The teleoperation system employs a binocular vision measurement system to accurately determine the position and orientation of the drilling-hole center, which serves as the target position for the drilling rig. Using the displacement and angle sensors installed on each joint of the manipulator, the system applies the manipulator's kinematic model to compute the spatial position of the end-effector and dynamically adjusts the end-effector's pose in real time toward the target. Additionally, it uses monocular vision information to fine-tune the movement speed and direction of the end-effector, ensuring rapid and precise alignment with the target drilling-hole center. Experimental results demonstrate that this method keeps the maximum alignment error within 7 mm, significantly improving alignment accuracy over manual control: the average error is reduced by 41.2%, and the average alignment time is reduced by 4.3 s. This study paves a new path for high-precision drilling and anchoring of tunnel roofs, improving the quality and efficiency of roof support while mitigating the large errors and safety risks of manual control.

23 pages, 50566 KiB  
Article
Integrated Remote Sensing Investigation of Suspected Landslides: A Case Study of the Genie Slope on the Tibetan Plateau, China
by Wenlong Yu, Weile Li, Zhanglei Wu, Huiyan Lu, Zhengxuan Xu, Dong Wang, Xiujun Dong and Pengfei Li
Remote Sens. 2024, 16(13), 2412; https://doi.org/10.3390/rs16132412 - 1 Jul 2024
Abstract
The current deformation and stable state of slopes with historical shatter signs is a concern for engineering construction. Suspected landslide scarps were discovered at the rear edge of the Genie slope on the Tibetan Plateau during a field investigation. To qualitatively determine the current status of the surface deformation of this slope, this study used high-resolution optical remote sensing, airborne light detection and ranging (LiDAR), and interferometric synthetic aperture radar (InSAR) technologies for comprehensive analysis. The interpretation of high-resolution optical and airborne LiDAR data revealed that the rear edge of the slope exhibits three levels of scarps. However, no deformation was detected with differential InSAR (D-InSAR) analysis of ALOS-1 radar images from 2007 to 2008 or with Stacking-InSAR and small baseline subset InSAR (SBAS-InSAR) processing of Sentinel-1A radar images from 2017 to 2020. This study verified the credibility of the InSAR results using the standard deviation of the phase residuals, as well as in-borehole displacement monitoring data. A conceptual model of the slope was developed by combining field investigation, borehole coring, and horizontal exploratory tunnel data, and the results indicated that the slope is composed of steep anti-dip layered dolomite limestone and that the scarps at the trailing edges of the slope were caused by historical shallow toppling. Unlike previous remote sensing studies of deformed landslides, this paper argues that remote sensing results with reliable accuracy are also applicable to the study of undeformed slopes and can help make preliminary judgments about the stability of unexplored slopes. The study demonstrates that the long-term consistency of InSAR results in integrated remote sensing can serve as an indicator for assessing slope stability.
(This article belongs to the Topic Landslides and Natural Resources)

13 pages, 5922 KiB  
Article
Evaluating Multimodal Techniques for Predicting Visibility in the Atmosphere Using Satellite Images and Environmental Data
by Hui-Yu Tsai and Ming-Hseng Tseng
Electronics 2024, 13(13), 2585; https://doi.org/10.3390/electronics13132585 - 1 Jul 2024
Abstract
Visibility is a measure of the atmospheric transparency at an observation point, expressed as the maximum horizontal distance over which a person can see and identify objects. Low atmospheric visibility often occurs in conjunction with air pollution, posing hazards to both traffic safety and human health. In this study, we combined satellite remote sensing images with environmental data to explore the classification performance of two distinct multimodal data processing techniques. The first approach involves developing four multimodal data classification models using deep learning. The second approach integrates deep learning and machine learning to create twelve multimodal data classifiers. Based on the results of a five-fold cross-validation experiment, the inclusion of various environmental data significantly enhances the classification performance of satellite imagery. Specifically, the test accuracy increased from 0.880 to 0.903 when using the deep learning multimodal fusion technique. Furthermore, when combining deep learning and machine learning for multimodal data processing, the test accuracy improved even further, reaching 0.978. Notably, weather conditions, as part of the environmental data, play a crucial role in enhancing visibility prediction performance.
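The "deep learning + machine learning" route described above typically reduces to a late-fusion step: a deep network produces an image embedding, which is concatenated with the environmental measurements before a conventional classifier. A hedged sketch of that fusion step (dimensions and feature names are invented for illustration, not taken from the paper):

```python
import numpy as np

# Illustrative late-fusion step: per-sample satellite-image embeddings are
# concatenated with tabular environmental features, yielding one feature
# matrix a downstream classifier (e.g. gradient boosting) could consume.

def fuse(image_embedding, env_features):
    """Concatenate image and environmental feature vectors sample-wise."""
    return np.concatenate([image_embedding, env_features], axis=1)

img = np.random.rand(8, 128)   # 8 samples of a hypothetical 128-d embedding
env = np.random.rand(8, 6)     # e.g. temperature, humidity, wind, PM2.5, ...
X = fuse(img, env)
print(X.shape)  # (8, 134)
```

The five-fold cross-validation reported above would then be run on `X` with the corresponding visibility labels.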
(This article belongs to the Special Issue Data-Centric Artificial Intelligence: New Methods for Data Processing)

15 pages, 9503 KiB  
Article
A Real-Time Study on the Cracking Characteristics of Polyvinyl Alcohol Fiber-Reinforced Geopolymer Composites under Splitting Tensile Load Based on High-Speed Digital Image Correlations
by Yunhan Zhang, Yuhan Sun, Weiliang Zhong and Lifeng Fan
Buildings 2024, 14(7), 1986; https://doi.org/10.3390/buildings14071986 - 1 Jul 2024
Abstract
Cracking of geopolymers, caused by their brittle nature, can reduce the stability and durability of building structures. Studying the cracking behavior of fiber-reinforced geopolymer composites (FRGCs) is important for evaluating the toughening of geopolymers. This paper presents a real-time study of the cracking characteristics of FRGCs under splitting tensile load based on high-speed digital image correlation (HDIC) technology. Splitting tensile tests were conducted on FRGCs with different fiber contents, and the real-time variation of the strain and displacement fields during the splitting process was analyzed. The influence of fiber content on the mechanical properties and cracking behavior of FRGCs is discussed, and, considering both splitting strength and crack width, an optimal fiber content satisfying the crack-resistance requirement is proposed. The results show that incorporating fiber delays cracking and reduces strain change during the splitting process. The splitting tensile strength and deformation increase with fiber content, while the crack width decreases. An FRGC with 2.0% fiber content maintains a crack width below 0.1 mm, satisfying the crack-resistance requirements of practical engineering while remaining economical.
(This article belongs to the Section Building Materials, and Repair & Renovation)

24 pages, 42566 KiB  
Article
Deblurring of Beamformed Images in the Ocean Acoustic Waveguide Using Deep Learning-Based Deconvolution
by Zijie Zha, Xi Yan, Xiaobin Ping, Shilong Wang and Delin Wang
Remote Sens. 2024, 16(13), 2411; https://doi.org/10.3390/rs16132411 - 1 Jul 2024
Abstract
A horizontal towed linear coherent hydrophone array is often employed to estimate the spatial intensity distribution of incident plane waves scattered from geological and biological features in an ocean acoustic waveguide using conventional beamforming. However, due to the physical limitations of the array aperture, the spatial resolution after conventional beamforming is often limited by the wide main lobe and high sidelobes. Here, we propose a deep learning method, originating from computer-vision deblurring, to enhance the spatial resolution of beamformed images. Image blurring after conventional beamforming can be modeled as the convolution of the beam pattern, acting as a point spread function (PSF), with the original spatial intensity distribution of the incident plane waves. A modified U-Net-like network is trained on a simulated dataset in which the instantaneous acoustic complex amplitude is assumed to follow circular complex Gaussian random (CCGR) statistics. Both synthetic data and experimental data collected from the South China Sea Experiment in 2021 are used to illustrate the effectiveness of this approach, which yields up to a 700% reduction in 3 dB width over conventional beamforming. It also provides a lower normalized mean square error (NMSE) than other deconvolution-based algorithms, such as the Richardson–Lucy algorithm and the approximate-likelihood model-based deconvolution algorithm. The method is applicable to various acoustic imaging applications that employ linear coherent hydrophone arrays with one-dimensional conventional beamforming, such as ocean acoustic waveguide remote sensing (OAWRS).
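The classical Richardson–Lucy algorithm that the paper uses as a baseline inverts exactly the forward model described above: the observed beamformed image is the true angular intensity convolved with the beam pattern (PSF). A minimal 1D NumPy sketch of that baseline, with an invented 3-tap PSF standing in for a real beam pattern:

```python
import numpy as np

# Sketch of Richardson--Lucy deconvolution (the baseline the paper compares
# against), applied to a 1D "beamformed" intensity profile. The PSF and
# scene here are toy stand-ins, not the paper's beam pattern or data.

def richardson_lucy(observed, psf, iters=50):
    """Iteratively refine an intensity estimate so that est * psf -> observed.
    All quantities are nonnegative, as intensities must be."""
    est = np.full_like(observed, observed.mean())
    psf_flip = psf[::-1]
    for _ in range(iters):
        blurred = np.convolve(est, psf, mode="same")
        ratio = observed / np.maximum(blurred, 1e-12)  # guard against /0
        est = est * np.convolve(ratio, psf_flip, mode="same")
    return est

psf = np.array([0.25, 0.5, 0.25])                # toy beam pattern (sums to 1)
truth = np.zeros(32)
truth[10], truth[20] = 1.0, 0.5                  # two point "targets"
observed = np.convolve(truth, psf, mode="same")  # blurred beamformed image
recovered = richardson_lucy(observed, psf)
print(int(np.argmax(recovered)))  # 10 -- the dominant target's true location
```

The deep learning approach replaces this fixed iterative inversion with a U-Net-like network trained to perform the deblurring in one forward pass.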
(This article belongs to the Topic Advances in Underwater Acoustics and Aeroacoustics)
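The convolution picture described in the abstract (the beamformed image as the true angular intensity convolved with the beam-pattern PSF) can be sketched numerically. The array size, element spacing, and source placements below are illustrative assumptions, not parameters from the paper:

```python
import numpy as np

def beam_pattern(n_sensors, d_over_lambda, u_grid):
    """Power beam pattern of a uniform line array over u = sin(theta).
    This acts as the point spread function (PSF) of the beamformed image."""
    psi = np.pi * d_over_lambda * u_grid      # inter-element phase / 2
    num = np.sin(n_sensors * psi)
    den = n_sensors * np.sin(psi)
    den_safe = np.where(np.abs(den) < 1e-12, 1.0, den)
    b = np.where(np.abs(den) < 1e-12, 1.0, num / den_safe)  # limit = 1 at psi = 0
    return b ** 2

u = np.linspace(-1, 1, 1001)                  # u = sin(theta) grid
psf = beam_pattern(64, 0.5, u)                # 64 elements, half-wavelength spacing

# True spatial intensity distribution: two plane-wave arrivals (impulses in u)
truth = np.zeros_like(u)
truth[480] = 1.0                              # source near broadside
truth[520] = 0.5                              # nearby weaker source
blurred = np.convolve(truth, psf, mode="same")  # conventional beamformer output
```

The deblurring network in the paper is trained to invert exactly this kind of `truth -> blurred` mapping; the two impulses in `truth` merge into overlapping main lobes in `blurred`, which is the resolution loss the method addresses.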
20 pages, 7579 KiB  
Article
AIRS and MODIS Satellite-Based Assessment of Air Pollution in Southwestern China: Impact of Stratospheric Intrusions and Cross-Border Transport of Biomass Burning
by Puyu Lian, Kaihui Zhao and Zibing Yuan
Remote Sens. 2024, 16(13), 2409; https://doi.org/10.3390/rs16132409 - 1 Jul 2024
Abstract
The exacerbation of air pollution during spring in Yunnan province, China, has attracted widespread attention. However, many studies have focused solely on the impacts of anthropogenic emissions while ignoring the role of natural processes. This study used satellite data spanning 21 years from the Moderate Resolution Imaging Spectroradiometer (MODIS) and the Atmospheric Infrared Sounder (AIRS) to reveal two natural processes closely related to springtime ozone (O3) and PM2.5 pollution: stratospheric intrusions (SIs) and cross-border transport of biomass burning (BB). We aimed to assess the mechanisms through which SIs and cross-border BB transport influence O3 and PM2.5 pollution in Southwestern China during the spring. The unique geographical conditions and prevalent southwest winds are considered the key driving factors for SIs and cross-border BB transport. Frequent tropopause folding provides favorable dynamic conditions for SIs in the upper troposphere. In the lower troposphere, the distribution patterns of O3 and stratospheric O3 tracer (O3S) are similar to the terrain, indicating that O3 is more likely to reach the surface with increasing altitude. Using stratospheric tracer tagging methods, we quantified the contributions of SIs to surface O3, ranging from 6 to 31 ppbv and accounting for 10–38% of surface O3 levels. Additionally, as Yunnan is located downwind of Myanmar and has complex terrain, it provides favorable conditions for PM2.5 and O3 generation from cross-border BB transport. The decreasing terrain distribution from north to south in Yunnan facilitates PM2.5 transport to lower-elevation border cities, whereas higher-elevation cities hinder PM2.5 transport, leading to spatial heterogeneity in PM2.5. This study provides scientific support for elucidating the two key processes governing springtime PM2.5 and O3 pollution in Yunnan, SIs and cross-border BB transport, and can assist policymakers in formulating optimal emission reduction strategies. 
Full article
(This article belongs to the Special Issue Application of Satellite Aerosol Remote Sensing in Air Quality)
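The stratospheric tracer-tagging quantification mentioned above reduces to simple arithmetic: the SI contribution is the tagged stratospheric tracer (O3S) expressed as a fraction of total surface O3. A hypothetical sketch with made-up values chosen to fall inside the reported 6–31 ppbv range:

```python
# Illustrative values only, not data from the study
surface_o3 = [60.0, 80.0, 100.0]   # total surface O3, ppbv
o3s        = [6.0, 20.0, 31.0]     # tagged stratospheric O3 tracer, ppbv

# SI contribution as a percentage of surface O3 at each site
contrib_pct = [100.0 * s / t for s, t in zip(o3s, surface_o3)]
```

With these numbers the contributions are 10%, 25%, and 31%, consistent with the 10–38% range the study reports.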
16 pages, 1198 KiB  
Article
CCNN-SVM: Automated Model for Emotion Recognition Based on Custom Convolutional Neural Networks with SVM
by Metwally Rashad, Doaa M. Alebiary, Mohammed Aldawsari, Ahmed A. El-Sawy and Ahmed H. AbuEl-Atta
Information 2024, 15(7), 384; https://doi.org/10.3390/info15070384 - 1 Jul 2024
Abstract
The expressions on human faces reveal the emotions we are experiencing internally. Emotion recognition based on facial expression is a subfield of social signal processing, with applications in several areas, particularly human–computer interaction. This study presents a simple automated CCNN-SVM model as a viable approach for facial expression recognition (FER). The model combines a convolutional neural network for feature extraction, several image preprocessing techniques, and a support vector machine (SVM) for classification. First, the input image is preprocessed using face detection, histogram equalization, gamma correction, and resizing. Second, the image passes through a single custom deep convolutional neural network (CCNN) to extract deep features. Finally, the SVM uses the extracted features to perform classification. The proposed model was trained and tested on four datasets: CK+, JAFFE, KDEF, and FER. These datasets cover seven primary emotional categories (anger, disgust, fear, happiness, sadness, surprise, and neutrality) for CK+, with contempt added for JAFFE. The proposed model performs commendably compared with existing facial expression recognition techniques, achieving an accuracy of 99.3% on the CK+ dataset, 98.4% on JAFFE, 87.18% on KDEF, and 88.7% on FER. Full article
(This article belongs to the Section Artificial Intelligence)
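The preprocessing stage described in the abstract (histogram equalization, gamma correction, and resizing; face detection is omitted here) can be sketched in plain NumPy. The gamma value and output size are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def preprocess(img, gamma=0.8, out_size=48):
    """Grayscale preprocessing sketch for a FER pipeline: histogram
    equalization via the empirical CDF, gamma correction, and
    nearest-neighbour resizing to out_size x out_size."""
    img = img.astype(np.float64)
    # Histogram equalization: map each intensity through the empirical CDF
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    cdf = hist.cumsum() / img.size
    eq = cdf[img.astype(np.uint8)] * 255.0
    # Gamma correction on the normalized [0, 1] range
    corrected = 255.0 * (eq / 255.0) ** gamma
    # Nearest-neighbour resize by index subsampling
    h, w = corrected.shape
    rows = np.arange(out_size) * h // out_size
    cols = np.arange(out_size) * w // out_size
    return corrected[np.ix_(rows, cols)]

img = np.tile(np.arange(96, dtype=np.uint8), (96, 1))  # synthetic gradient image
out = preprocess(img)
```

The resulting fixed-size, contrast-normalized image is what would be fed to the CNN feature extractor, whose deep features then go to the SVM classifier.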