Search Results (6,700)

Search Parameters:
Keywords = data fusion

20 pages, 2062 KiB  
Article
Camera-Radar Fusion with Radar Channel Extension and Dual-CBAM-FPN for Object Detection
by Xiyan Sun, Yaoyu Jiang, Hongmei Qin, Jingjing Li and Yuanfa Ji
Sensors 2024, 24(16), 5317; https://doi.org/10.3390/s24165317 - 16 Aug 2024
Abstract
When it comes to road environment perception, millimeter-wave radar with a camera facilitates more reliable detection than a single sensor. However, the limited utilization of radar features and insufficient extraction of important features remain pertinent issues, especially with regard to the detection of small and occluded objects. To address these concerns, we propose a camera-radar fusion with radar channel extension and a dual-CBAM-FPN (CRFRD), which incorporates a radar channel extension (RCE) module and a dual-CBAM-FPN (DCF) module into the camera-radar fusion net (CRF-Net). In the RCE module, we design an azimuth-weighted RCS parameter and extend three radar channels, which leverage the secondary redundant information to achieve richer feature representation. In the DCF module, we present the dual-CBAM-FPN, which enables the model to focus on important features by inserting CBAM at the input and the fusion process of FPN simultaneously. Comparative experiments conducted on the NuScenes dataset and real data demonstrate the superior performance of the CRFRD compared to CRF-Net, as its weighted mean average precision (wmAP) increases from 43.89% to 45.03%. Furthermore, ablation studies verify the indispensability of the RCE and DCF modules and the effectiveness of azimuth-weighted RCS.
(This article belongs to the Section Radar Sensors)
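The listing does not spell out the azimuth weighting or channel layout of the RCE module, but the general idea of rasterizing radar detections into extra image-plane channels with azimuth-dependent RCS weights can be sketched as follows. The function name, the Gaussian weighting, and the three-channel layout are illustrative assumptions, not the CRFRD design.

```python
import numpy as np

def azimuth_weighted_rcs_channels(detections, height, width, sigma_az=0.5):
    """Rasterize radar detections into image-plane channels, weighting RCS
    by azimuth so that head-on returns contribute more strongly.

    detections: array of shape (N, 4) with columns (row, col, rcs, azimuth_rad).
    Returns a (3, height, width) array: raw RCS, azimuth-weighted RCS, and a
    hit-count channel (a hypothetical layout, not the paper's exact one).
    """
    channels = np.zeros((3, height, width), dtype=np.float64)
    for row, col, rcs, az in detections:
        r, c = int(row), int(col)
        if not (0 <= r < height and 0 <= c < width):
            continue  # skip detections projected outside the image
        weight = np.exp(-0.5 * (az / sigma_az) ** 2)  # Gaussian azimuth weight
        channels[0, r, c] += rcs
        channels[1, r, c] += weight * rcs
        channels[2, r, c] += 1.0
    return channels
```

A detection at zero azimuth keeps its full RCS in the weighted channel; off-boresight detections are attenuated smoothly.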
18 pages, 2640 KiB  
Article
An Unsupervised CNN-Based Pansharpening Framework with Spectral-Spatial Fidelity Balance
by Matteo Ciotola, Giuseppe Guarino and Giuseppe Scarpa
Remote Sens. 2024, 16(16), 3014; https://doi.org/10.3390/rs16163014 - 16 Aug 2024
Abstract
In recent years, deep learning techniques for pansharpening multiresolution images have gained increasing interest. Due to the lack of ground truth data, most deep learning solutions rely on synthetic reduced-resolution data for supervised training. This approach has limitations due to the statistical mismatch between real full-resolution and synthetic reduced-resolution data, which affects the models’ generalization capacity. Consequently, there has been a shift towards unsupervised learning frameworks for pansharpening deep learning-based techniques. Unsupervised schemes require defining sophisticated loss functions with at least two components: one for spectral quality, ensuring consistency between the pansharpened image and the input multispectral component, and another for spatial quality, ensuring consistency between the output and the panchromatic input. Despite promising results, there has been limited investigation into the interaction and balance of these loss terms to ensure stability and accuracy. This work explores how unsupervised spatial and spectral consistency losses can be reliably combined while preserving the outcome quality. By examining these interactions, we propose a general rule for balancing the two loss components to enhance the stability and performance of unsupervised pansharpening models. Experiments on three state-of-the-art algorithms using WorldView-3 images demonstrate that methods trained with the proposed framework achieve good performance in terms of visual quality and numerical indexes.
(This article belongs to the Special Issue Weakly Supervised Deep Learning in Exploiting Remote Sensing Big Data)
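The abstract describes a two-term unsupervised loss but not its exact form. The sketch below shows the generic shape of such a loss in NumPy, using average pooling as a crude stand-in for the sensor's MTF filter in the spectral term and Pearson correlation with the panchromatic band for the spatial term; the balance weight `lam` is a placeholder for whatever the paper's balancing rule would prescribe.

```python
import numpy as np

def avg_pool(img, factor):
    """Spatial average pooling over (bands, H, W), a simple stand-in for the
    sensor's MTF low-pass filter followed by decimation."""
    b, h, w = img.shape
    return img.reshape(b, h // factor, factor, w // factor, factor).mean(axis=(2, 4))

def pansharpening_loss(fused, ms, pan, factor=4, lam=0.1):
    """Two-term unsupervised loss: spectral consistency (downsampled fused
    image vs. the input MS) plus lam times spatial consistency (correlation
    of the fused intensity with the PAN input). lam is illustrative."""
    spectral = np.mean((avg_pool(fused, factor) - ms) ** 2)
    intensity = fused.mean(axis=0)  # crude intensity component of the fused bands
    corr = np.corrcoef(intensity.ravel(), pan.ravel())[0, 1]
    spatial = 1.0 - corr  # zero when fused intensity tracks the PAN perfectly
    return spectral + lam * spatial, spectral, spatial
```

When the fused product is spectrally consistent with the MS input and its intensity is perfectly correlated with the PAN band, both terms vanish.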
20 pages, 19393 KiB  
Article
Integrating Multimodal Generative AI and Blockchain for Enhancing Generative Design in the Early Phase of Architectural Design Process
by Adam Fitriawijaya and Taysheng Jeng
Buildings 2024, 14(8), 2533; https://doi.org/10.3390/buildings14082533 - 16 Aug 2024
Abstract
Multimodal generative AI and generative design empower architects to create better-performing, sustainable, and efficient design solutions and explore diverse design possibilities. Blockchain technology ensures secure data management and traceability. This study aims to design and evaluate a framework that integrates blockchain into generative AI-driven design drawing processes in architectural design to enhance authenticity and traceability. We employed a scenario as an example to integrate generative AI and blockchain into architectural designs by using a generative AI tool and leveraging multimodal generative AI to enhance design creativity by combining textual and visual inputs. These images were stored on blockchain systems, where metadata were attached to each image before being converted into NFT format, which ensured secure data ownership and management. This research exemplifies the pragmatic fusion of generative AI and blockchain technology applied in architectural design for more transparent, secure, and effective results in the early stages of the architectural design process.
(This article belongs to the Section Architectural Design, Urban Science, and Real Estate)
17 pages, 3956 KiB  
Article
EEG–fNIRS-Based Emotion Recognition Using Graph Convolution and Capsule Attention Network
by Guijun Chen, Yue Liu and Xueying Zhang
Brain Sci. 2024, 14(8), 820; https://doi.org/10.3390/brainsci14080820 - 16 Aug 2024
Abstract
Electroencephalogram (EEG) and functional near-infrared spectroscopy (fNIRS) can objectively reflect a person’s emotional state and have been widely studied in emotion recognition. However, effective feature fusion and discriminative feature learning from EEG–fNIRS data are challenging. In order to improve the accuracy of emotion recognition, a graph convolution and capsule attention network model (GCN-CA-CapsNet) is proposed. Firstly, EEG–fNIRS signals are collected from 50 subjects induced by emotional video clips. Then, the features of the EEG and fNIRS are extracted, and the EEG–fNIRS features are fused to generate higher-quality primary capsules by graph convolution with the Pearson correlation adjacency matrix. Finally, the capsule attention module is introduced to assign different weights to the primary capsules, and higher-quality primary capsules are selected to generate better classification capsules in the dynamic routing mechanism. We validate the efficacy of the proposed method on our emotional EEG–fNIRS dataset with an ablation study. Extensive experiments demonstrate that the proposed GCN-CA-CapsNet method achieves a more satisfactory performance against the state-of-the-art methods, and the average accuracy can increase by 3–11%.
(This article belongs to the Section Cognitive Social and Affective Neuroscience)
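The Pearson-correlation adjacency construction mentioned above is a standard recipe that can be sketched independently of the paper's architecture: compute pairwise correlations between per-channel feature vectors, take absolute values as edge weights, and apply the symmetric normalization familiar from graph convolutional networks. The thresholding and self-loop choices here are generic assumptions, not the GCN-CA-CapsNet settings.

```python
import numpy as np

def pearson_adjacency(features, threshold=0.0):
    """Build a graph adjacency matrix from per-channel feature vectors using
    absolute Pearson correlation as the edge weight.

    features: (n_channels, n_features). Returns a symmetric
    (n_channels, n_channels) adjacency with self-loops of 1 on the diagonal.
    """
    corr = np.abs(np.corrcoef(features))
    adj = np.where(corr >= threshold, corr, 0.0)  # drop weak edges
    np.fill_diagonal(adj, 1.0)                    # add self-loops
    return adj

def normalized_graph_conv(adj, x):
    """One symmetric-normalized graph convolution step: D^{-1/2} A D^{-1/2} X."""
    d_inv_sqrt = 1.0 / np.sqrt(adj.sum(axis=1))
    return (adj * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]) @ x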
26 pages, 10106 KiB  
Article
DFLM-YOLO: A Lightweight YOLO Model with Multiscale Feature Fusion Capabilities for Open Water Aerial Imagery
by Chen Sun, Yihong Zhang and Shuai Ma
Drones 2024, 8(8), 400; https://doi.org/10.3390/drones8080400 - 16 Aug 2024
Abstract
Object detection algorithms for open water aerial images present challenges such as small object size, unsatisfactory detection accuracy, numerous network parameters, and enormous computational demands. Current detection algorithms struggle to meet the accuracy and speed requirements while being deployable on small mobile devices. This paper proposes DFLM-YOLO, a lightweight small-object detection network based on the YOLOv8 algorithm with multiscale feature fusion. Firstly, to solve the class imbalance problem of the SeaDroneSee dataset, we propose a data augmentation algorithm called Small Object Multiplication (SOM). SOM enhances dataset balance by increasing the number of objects in specific categories, thereby improving model accuracy and generalization capabilities. Secondly, we optimize the backbone network structure by implementing Depthwise Separable Convolution (DSConv) and the newly designed FasterBlock-CGLU-C2f (FC-C2f), which reduces the model’s parameters and inference time. Finally, we design the Lightweight Multiscale Feature Fusion Network (LMFN) to address the challenges of multiscale variations by gradually fusing the four feature layers extracted from the backbone network in three stages. In addition, LMFN incorporates the Dilated Re-param Block structure to increase the effective receptive field and improve the model’s classification ability and detection accuracy. The experimental results on the SeaDroneSee dataset indicate that DFLM-YOLO improves the mean average precision (mAP) by 12.4% compared to the original YOLOv8s, while reducing parameters by 67.2%. This achievement provides a new solution for Unmanned Aerial Vehicles (UAVs) to conduct object detection missions in open water efficiently.
27 pages, 7130 KiB  
Article
Enhancing Tennis Practice: Sensor Fusion and Pose Estimation with a Smart Tennis Ball
by Yu Kit Fu, Xi Li and Rami Ghannam
Sensors 2024, 24(16), 5306; https://doi.org/10.3390/s24165306 - 16 Aug 2024
Abstract
This article demonstrates the integration of sensor fusion for pose estimation and data collection in tennis balls, aiming to create a smaller, less intrusive form factor for use in progressive learning during tennis practice. The study outlines the design and implementation of the Bosch BNO055 smart sensor, which features built-in managed sensor fusion capabilities. The article also discusses deriving additional data using various mathematical and simulation methods to present relevant orientation information from the sensor in Unity. Embedded within a Vermont practice foam tennis ball, the final prototype product communicates with Unity on a laptop via Bluetooth. The Unity interface effectively visualizes the ball’s rotation, the resultant acceleration direction, rotations per minute (RPM), and the orientation relative to gravity. The system successfully demonstrates accurate RPM measurement, provides real-time visualization of ball spin and offers a pathway for innovative applications in tennis training technology.
(This article belongs to the Section Intelligent Sensors)
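As a small illustration of the kind of derived quantity the article mentions, a spin rate in RPM can be computed from a gyroscope's angular-velocity vector. The sketch below assumes rates reported in degrees per second (the BNO055's default gyroscope unit) and is not the authors' implementation.

```python
import numpy as np

def spin_rpm(gyro_dps):
    """Convert a 3-axis gyroscope angular-velocity vector (deg/s per axis)
    into a scalar spin rate in revolutions per minute."""
    rate_dps = np.linalg.norm(np.asarray(gyro_dps, dtype=np.float64))
    return rate_dps / 360.0 * 60.0  # deg/s -> rev/s -> rev/min
```

For example, a ball spinning at 360 deg/s about one axis reads 60 RPM.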
17 pages, 2260 KiB  
Article
From Phantoms to Patients: Improved Fusion and Voxel-Wise Analysis of Diffusion-Weighted Imaging and FDG-Positron Emission Tomography in Positron Emission Tomography/Magnetic Resonance Imaging for Combined Metabolic–Diffusivity Index (cDMI)
by Katharina Deininger, Patrick Korf, Leonard Lauber, Robert Grimm, Ralph Strecker, Jochen Steinacker, Catharina S. Lisson, Bernd M. Mühling, Gerlinde Schmidtke-Schrezenmeier, Volker Rasche, Tobias Speidel, Gerhard Glatting, Meinrad Beer, Ambros J. Beer and Wolfgang Thaiss
Diagnostics 2024, 14(16), 1787; https://doi.org/10.3390/diagnostics14161787 - 16 Aug 2024
Abstract
Hybrid positron emission tomography/magnetic resonance imaging (PET/MR) opens new possibilities in multimodal multiparametric (m2p) image analyses. But even the simultaneous acquisition of positron emission tomography (PET) and magnetic resonance imaging (MRI) does not guarantee perfect voxel-by-voxel co-registration due to organ motion and distortions, especially in diffusion-weighted imaging (DWI), which would be, however, crucial to derive biologically meaningful information. Thus, our aim was to optimize fusion and voxel-wise analyses of DWI and standardized uptake values (SUVs) using a novel software for m2p analyses. Using research software, we evaluated the precision of image co-registration and voxel-wise analyses including the rigid and elastic 3D registration of DWI and [18F]-Fluorodeoxyglucose (FDG)-PET from an integrated PET/MR system. We analyzed DWI distortions with a volume-preserving constraint in three different 3D-printed phantom models. A total of 12 PET/MR-DWI clinical datasets (bronchial carcinoma patients) were referenced to the T1-weighted DIXON sequence. Back mapping of scatterplots and voxel-wise registration was performed and compared to the non-optimized datasets. Fusion was rated using a 5-point Likert scale. Using the 3D-elastic co-registration algorithm, geometric shapes were restored in phantom measurements; the measured ADC values did not change significantly (F = 1.12, p = 0.34). Reader assessment showed a significant improvement in fusion precision for DWI and morphological landmarks in the 3D-registered datasets (4.3 ± 0.2 vs. 4.6 ± 0.2, p = 0.009). The most pronounced differences were noted for the chest wall (p = 0.006), tumor (p = 0.007), and skin contour (p = 0.014). Co-registration increased the number of plausible ADC and SUV combinations by 25%. The volume-preserving elastic 3D registration of DWI significantly improved the precision of fusion with anatomical sequences in phantom and clinical datasets. The research software allowed for a voxel-wise analysis and visualization of [18F]FDG-PET/MR data as a “combined diffusivity–metabolic index” (cDMI). The clinical value of the optimized PET/MR biomarker can thus be tested in future PET/MR studies.
(This article belongs to the Special Issue New Trends and Advances of MRI and PET Hybrid Imaging in Diagnostics)
20 pages, 7094 KiB  
Article
DualNet-PoiD: A Hybrid Neural Network for Highly Accurate Recognition of POIs on Road Networks in Complex Areas with Urban Terrain
by Yongchuan Zhang, Caixia Long, Jiping Liu, Yong Wang and Wei Yang
Remote Sens. 2024, 16(16), 3003; https://doi.org/10.3390/rs16163003 - 16 Aug 2024
Abstract
For high-precision navigation, obtaining and maintaining high-precision point-of-interest (POI) data on the road network is crucial. In urban areas with complex terrains, the accuracy of traditional road network POI acquisition methods often falls short. To address this issue, we introduce DualNet-PoiD, a hybrid neural network designed for the efficient recognition of road network POIs in intricate urban environments. This method leverages multimodal sensory data, incorporating both vehicle trajectories and remote sensing imagery. Through an enhanced dual-attention dilated link network (DAD-LinkNet) based on ResNet18, the system extracts static geometric features of roads from remote sensing images. Concurrently, an improved gated recurrent unit (GRU) captures dynamic traffic characteristics implied by vehicle trajectories. The integration of a fully connected layer (FC) enables the high-precision identification of various POIs, including traffic light intersections, gas stations, parking lots, and tunnels. To validate the efficacy of DualNet-PoiD, we collected 500 remote sensing images and 50,000 taxi trajectory data samples covering road POIs in the central urban area of the mountainous city of Chongqing. Through comprehensive area comparison experiments, DualNet-PoiD demonstrated a high recognition accuracy of 91.30%, performing robustly even under conditions of complex occlusion. This confirms the network’s capability to significantly improve POI detection in challenging urban settings.
23 pages, 9213 KiB  
Article
DCFF-Net: Deep Context Feature Fusion Network for High-Precision Classification of Hyperspectral Image
by Zhijie Chen, Yu Chen, Yuan Wang, Xiaoyan Wang, Xinsheng Wang and Zhouru Xiang
Remote Sens. 2024, 16(16), 3002; https://doi.org/10.3390/rs16163002 - 15 Aug 2024
Abstract
Hyperspectral images (HSI) contain abundant spectral information. Efficient extraction and utilization of this information for image classification remain prominent research topics. Previously, hyperspectral classification techniques primarily relied on statistical attributes and mathematical models of spectral data. Deep learning classification techniques have recently been extensively utilized for hyperspectral data classification, yielding promising outcomes. This study proposes a deep learning approach that uses polarization feature maps for classification. Initially, the polar co-ordinate transformation method was employed to convert the spectral information of all pixels in the image into spectral feature maps. Subsequently, the proposed Deep Context Feature Fusion Network (DCFF-Net) was utilized to classify these feature maps. The model was validated using three open-source hyperspectral datasets: Indian Pines, Pavia University, and Salinas. The experimental results indicated that DCFF-Net achieved excellent classification performance. Experimental results on three public HSI datasets demonstrated that the proposed method accurately recognized different objects with an overall accuracy (OA) of 86.68%, 94.73%, and 95.14% based on the pixel method, and 98.15%, 99.86%, and 99.98% based on the pixel-patch method.
(This article belongs to the Special Issue GeoAI and EO Big Data Driven Advances in Earth Environmental Science)
16 pages, 4191 KiB  
Communication
Optical-to-SAR Translation Based on CDA-GAN for High-Quality Training Sample Generation for Ship Detection in SAR Amplitude Images
by Baolong Wu, Haonan Wang, Cunle Zhang and Jianlai Chen
Remote Sens. 2024, 16(16), 3001; https://doi.org/10.3390/rs16163001 - 15 Aug 2024
Abstract
Abundant datasets are critical to train models based on deep learning technologies for ship detection applications. Compared with optical images, ship detection based on synthetic aperture radar (SAR) (especially the high-Earth-orbit spaceborne SAR launched recently) lacks enough training samples. A novel cross-domain attention GAN (CDA-GAN) model is proposed for optical-to-SAR translation, which can generate high-quality SAR amplitude training samples of a target by optical image conversion. This high quality includes high geometry structure similarity of the target compared with the corresponding optical image and low background noise around the target. In the proposed model, the cross-domain attention mechanism and cross-domain multi-scale feature fusion are designed to improve the quality of samples for detection based on the generative adversarial network (GAN). Specifically, a cross-domain attention mechanism is designed to simultaneously emphasize discriminative features from optical images and SAR images. Moreover, a designed cross-domain multi-scale feature fusion module further emphasizes the geometric information and semantic information of the target in a feature graph from the perspective of global features. Finally, a reference loss is introduced in CDA-GAN to completely retain the extra features generated by the cross-domain attention mechanism and cross-domain multi-scale feature fusion module. Experimental results demonstrate that the training samples generated by the proposed CDA-GAN can obtain higher ship detection accuracy using real SAR data than the other state-of-the-art methods. The proposed method is generally available for different orbit SARs and can be extended to the high-Earth-orbit spaceborne SAR case.
26 pages, 11215 KiB  
Article
Unsupervised Learning-Based Optical–Acoustic Fusion Interest Point Detector for AUV Near-Field Exploration of Hydrothermal Areas
by Yihui Liu, Yufei Xu, Ziyang Zhang, Lei Wan, Jiyong Li and Yinghao Zhang
J. Mar. Sci. Eng. 2024, 12(8), 1406; https://doi.org/10.3390/jmse12081406 - 15 Aug 2024
Abstract
The simultaneous localization and mapping (SLAM) technique provides long-term near-seafloor navigation for autonomous underwater vehicles (AUVs). However, the stability of the interest point detector (IPD) remains challenging in the seafloor environment. This paper proposes an optical–acoustic fusion interest point detector (OAF-IPD) using a monocular camera and forward-looking sonar. Unlike the artificial feature detectors most underwater IPDs adopt, a deep neural network model based on the unsupervised interest point detector (UnsuperPoint) was built to achieve stronger environmental adaptation. First, a feature fusion module based on feature pyramid networks (FPNs) and a depth module were integrated into the system to ensure a uniform distribution of interest points in depth for improved localization accuracy. Second, a self-supervised training procedure was developed to adapt the OAF-IPD for unsupervised training. This procedure included an auto-encoder framework for the sonar data encoder, a ground truth depth generation framework for the depth module, and optical–acoustic mutual supervision for the fusion module training. Third, a non-rigid feature filter was implemented in the camera data encoder to mitigate the interference from non-rigid structural objects, such as smoke emitted from active vents in hydrothermal areas. Evaluations were conducted using open-source datasets as well as a dataset captured by the authors in pool experiments to prove the robustness and accuracy of the newly proposed method.
(This article belongs to the Section Ocean Engineering)
17 pages, 16956 KiB  
Article
Motor Fault Diagnosis Using Attention-Based Multisensor Feature Fusion
by Zhuoyao Miao, Wenshan Feng, Zhuo Long, Gongping Wu, Le Deng, Xuan Zhou and Liwei Xie
Energies 2024, 17(16), 4053; https://doi.org/10.3390/en17164053 - 15 Aug 2024
Abstract
In order to reduce the influence of environmental noise and different operating conditions on the accuracy of motor fault diagnosis, this paper proposes a capsule network method that combines multi-channel signals with the efficient channel attention (ECA) mechanism. Data sampled from multiple sensors are visualized as two-dimensional symmetric dot pattern (SDP) images of the one-dimensional time-frequency domain; the multi-channel image data are then fused, and image features are extracted by a capsule network combined with the ECA attention mechanism to classify eight different fault types. In order to guarantee the universality of the suggested model, data from Case Western Reserve University (CWRU) is used for validation. According to the experimental findings, the suggested multi-channel signal fusion ECA attention capsule network (MSF-ECA-CapsNet) model reaches a fault identification accuracy of 99.21%, which is higher than that of traditional methods. Meanwhile, the method of multi-sensor data fusion and the use of the ECA attention mechanism make the diagnosis accuracy much higher.
(This article belongs to the Section F: Electrical Engineering)
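The symmetric dot pattern (SDP) transform mentioned above is a standard signal-to-image mapping and can be sketched on its own: each sample sets a point's radius, a lagged sample sets its angular deflection, and the pattern is mirrored around several arms. The lag, gain angle, and number of mirror arms below are conventional defaults, not the values used in the paper.

```python
import numpy as np

def symmetric_dot_pattern(x, n_mirrors=6, lag=1, gain_deg=36.0):
    """Map a 1-D signal to SDP polar points (radius, angle in degrees).

    Radius comes from sample i, angular deflection from sample i + lag,
    replicated over n_mirrors mirror arms (upper and lower arm per mirror).
    Returns an (N, 2) array of (radius, angle_deg) points.
    """
    x = np.asarray(x, dtype=np.float64)
    xmin, xmax = x.min(), x.max()
    norm = (x - xmin) / (xmax - xmin)        # normalize samples to [0, 1]
    r = norm[:-lag]                          # radius channel
    dtheta = gain_deg * norm[lag:]           # angular deflection channel
    points = []
    for m in range(n_mirrors):
        base = m * 360.0 / n_mirrors
        points.append(np.stack([r, base + dtheta], axis=1))  # upper arm
        points.append(np.stack([r, base - dtheta], axis=1))  # mirrored arm
    return np.concatenate(points, axis=0)
```

Rendering these polar points as a scatter image produces the snowflake-like SDP picture that the capsule network then classifies.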
12 pages, 843 KiB  
Article
Left Atrium Reverse Remodeling in Fusion CRT Pacing: Implications in Cardiac Resynchronization Response and Atrial Fibrillation Incidence
by Cristina Văcărescu, Dragoș Cozma, Simina Crișan, Dan Gaiță, Debora-Delia Anutoni, Mădălin-Marius Margan, Adelina-Andreea Faur-Grigori, Romina Roteliuc, Silvia-Ana Luca, Mihai-Andrei Lazăr, Oana Pătru, Liviu Cirin, Petru Baneu and Constantin-Tudor Luca
J. Clin. Med. 2024, 13(16), 4814; https://doi.org/10.3390/jcm13164814 - 15 Aug 2024
Abstract
Background: When compared to biventricular pacing, fusion CRT pacing was linked to a decreased incidence of atrial fibrillation (AF). There is a gap in the knowledge regarding exclusive fusion CRT without interference with RV pacing, and all the current data are based on populations of patients with intermittent fusion pacing. Purpose: To assess left atrium remodeling and AF incidence in a real-life population of permanent fusion CRT-P. Methods: Retrospective data were analyzed from a cohort of patients with exclusive fusion CRT-P. Device interrogation, exercise testing, transthoracic echocardiography (TE), and customized medication optimization were all part of the six-monthly individual follow-up. Results: The study population comprised 73 patients (38 males) with non-ischemic dilated cardiomyopathy, aged 63.7 ± 9.3 y.o. Baseline characteristics: QRS 159.8 ± 18.2 ms; EF 27.9 ± 5.1%; mitral regurgitation was severe in 38% of patients, moderate in 47% of patients, and mild in 15% of patients; 43% had type III diastolic dysfunction (DD), 49% had type II DD, 8% had type I DD. Average follow-up was 6.4 years ± 27 months: 93% of patients were responders (including 31% super-responders); EF increased to 40.4 ± 8.5%; mitral regurgitation decreased in 69% of patients; diastolic profile improved in 64% of patients. Paroxysmal and persistent AF incidence was 11%, with only 2% of patients developing permanent AF. Regarding LA volume, statistically significant LA reverse remodeling was observed. Conclusions: Exclusive fusion CRT-P was associated with important LA reverse remodeling and a low incidence of AF.
(This article belongs to the Special Issue Clinical Perspectives on Cardiac Electrophysiology and Arrhythmias)
28 pages, 16921 KiB  
Review
The Quest for the Holy Grail of 3D Printing: A Critical Review of Recycling in Polymer Powder Bed Fusion Additive Manufacturing
by Bruno Alexandre de Sousa Alves, Dimitrios Kontziampasis and Abdel-Hamid Soliman
Polymers 2024, 16(16), 2306; https://doi.org/10.3390/polym16162306 - 15 Aug 2024
Abstract
The benefits of additive manufacturing (AM) are widely recognised, boosting the AM method’s use in industry, while it is predicted AM will dominate the global manufacturing industry. Alas, 3D printing’s growth is hindered by its sustainability. AM methods generate vast amounts of residuals considered as waste, which are disposed of. Additionally, the energy consumed, the materials used, and numerous other factors render AM unsustainable. This paper aims to bring forward all documented solutions in the literature. The spotlight is on potential solutions for the Powder Bed Fusion (PBF) AM, focusing on Selective Laser Sintering (SLS), as these are candidates for mass manufacturing by industry. Solutions are evaluated critically, to identify research gaps regarding the recyclability of residual material. Only then can AM dominate the manufacturing industry, which is extremely important since this is a milestone for our transition into sustainable manufacturing. This transition itself is a complex bottleneck on our quest for becoming a sustainable civilisation. Unlike previous reviews that primarily concentrate on specific AM recycling materials, this paper explores the state of the art in AM recycling processes, incorporating the latest market data and projections. By offering a holistic and forward-looking perspective on the evolution and potential of AM, this review serves as a valuable resource for researchers and industry professionals alike.
(This article belongs to the Section Polymer Processing and Engineering)
23 pages, 7638 KiB  
Article
Parameter Calibration and Verification of Elastoplastic Wet Sand Based on Attention-Retention Fusion Deep Learning Mechanism
by Zhicheng Hu, Xianning Zhao, Junjie Zhang, Sibo Ba, Zifeng Zhao and Xuelin Wang
Appl. Sci. 2024, 14(16), 7148; https://doi.org/10.3390/app14167148 - 14 Aug 2024
Abstract
The discrete element method (DEM) is a vital numerical approach for analyzing the mechanical behavior of elastoplastic wet sand. However, parameter uncertainty persists within the mapping between constitutive relationships and inherent model parameters. We propose a Parameter calibration neural network based on Attention, Retention, and improved Transformer for Sequential data (PartsNet), which effectively captures the nonlinear mechanical behavior of wet sand and obtains the optimal parameter combination for the Edinburgh elasto-plastic adhesion constitutive model. Variational autoencoder-based principal component ordering is employed by PartsNet to reduce the high-dimensional dynamic response and extract critical parameters along with their weights. Gated recurrent units are combined with a novel sparse multi-head attention mechanism to process sequential data. The fusion information is delivered by residual multilayer perceptron, achieving the association between sequential response and model parameters. The errors in response data generated by calibrated parameters are quantified by PartsNet based on adaptive differentiation and Taylor expansion. Remarkable calibration capabilities are exhibited by PartsNet across six evaluation indicators, surpassing seven other deep learning approaches in the ablation test. The calibration accuracy of PartsNet reaches 91.29%, and MSE loss converges to 0.000934. The validation experiments and regression analysis confirmed the generalization capability of PartsNet in the calibration of wet sand. The improved sparse attention mechanism optimizes multi-head attention, resulting in a convergence speed of 21.25%. PartsNet contributes to modeling and simulating the precise mechanical properties of complex elastoplastic systems and offers valuable insights for diverse engineering applications.