Search Results (8,071)

Search Parameters:
Keywords = model fusion

19 pages, 801 KiB  
Article
Fusing Essential Text for Question Answering over Incomplete Knowledge Base
by Huiying Li, Yuxi Feng and Liheng Liu
Electronics 2025, 14(1), 161; https://doi.org/10.3390/electronics14010161 - 2 Jan 2025
Abstract
Knowledge base question answering (KBQA) aims to answer a question using a knowledge base (KB). However, a knowledge base is naturally incomplete, and it cannot cover all the knowledge needed to answer the question. Therefore, obtaining accurate and comprehensive answers to complex questions is difficult when KBs are missing relations and entities. To mitigate this challenge, we propose an incomplete KBQA approach based on Relation-Aware Interactive Network and Text Fusion (RAIN-TF). Specifically, we provide essential textual knowledge by finely filtering the question-related text to compensate for the missing relations and entities in the KB. We propose a question-related subgraph construction method that fuses the knowledge from the text and KB and enhances the interactions among questions, entities, and relations. On this basis, we propose a relation-aware interactive network, which is a relation-aware multi-head attention graph neural network (GNN) model, to promote the deep semantic integration of unstructured texts and structured KBs, thus effectively compensating for the lack of knowledge. Comprehensive experiments on three mainstream incomplete KBQA datasets verify the effectiveness of the proposed approach. Full article
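
A minimal sketch of the relation-aware attention idea described above: each neighbour's attention score depends on both its entity embedding and the embedding of the connecting relation, so relation semantics steer the aggregation. The class, tensor names, and shapes are illustrative assumptions, not the authors' released code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RelationAwareAttention(nn.Module):
    """Aggregate neighbour messages with multi-head attention whose keys and
    values are built from [neighbour entity ; relation] pairs (hypothetical layer)."""
    def __init__(self, dim: int, n_heads: int = 4):
        super().__init__()
        assert dim % n_heads == 0
        self.n_heads, self.d_head = n_heads, dim // n_heads
        self.q = nn.Linear(dim, dim)       # query from the (question-aware) target entity
        self.k = nn.Linear(2 * dim, dim)   # key from concatenated neighbour + relation
        self.v = nn.Linear(2 * dim, dim)   # value from concatenated neighbour + relation

    def forward(self, node, neigh, rel):
        # node: (B, dim); neigh, rel: (B, N, dim) for N neighbouring edges
        B, N, _ = neigh.shape
        q = self.q(node).view(B, self.n_heads, 1, self.d_head)
        kv_in = torch.cat([neigh, rel], dim=-1)
        k = self.k(kv_in).view(B, N, self.n_heads, self.d_head).transpose(1, 2)
        v = self.v(kv_in).view(B, N, self.n_heads, self.d_head).transpose(1, 2)
        att = F.softmax(q @ k.transpose(-1, -2) / self.d_head ** 0.5, dim=-1)  # (B, H, 1, N)
        return (att @ v).transpose(1, 2).reshape(B, -1)                        # (B, dim)

if __name__ == "__main__":
    layer = RelationAwareAttention(dim=64)
    node, neigh, rel = torch.randn(2, 64), torch.randn(2, 5, 64), torch.randn(2, 5, 64)
    print(layer(node, neigh, rel).shape)  # torch.Size([2, 64])
```
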
20 pages, 6798 KiB  
Article
SS-YOLO: A Lightweight Deep Learning Model Focused on Side-Scan Sonar Target Detection
by Na Yang, Guoyu Li, Shengli Wang, Zhengrong Wei, Hu Ren, Xiaobo Zhang and Yanliang Pei
J. Mar. Sci. Eng. 2025, 13(1), 66; https://doi.org/10.3390/jmse13010066 - 2 Jan 2025
Abstract
As seabed exploration activities increase, side-scan sonar (SSS) is being used more widely. However, distortion and noise during the acoustic pulse’s travel through water can blur target details and cause feature loss in images, making target recognition more challenging. In this paper, we improve the YOLO model in two aspects: lightweight design and accuracy enhancement. The lightweight design is essential for reducing computational complexity and resource consumption, allowing the model to run more efficiently on edge devices with limited processing power and storage, and thereby meeting our need to deploy SSS target detection algorithms on unmanned surface vessels (USVs) for real-time detection. Firstly, we replace the original complex convolutional method in the C2f module with a combination of partial convolution (PConv) and pointwise convolution (PWConv), reducing redundant computation and memory access while maintaining high accuracy. In addition, we add an adaptive scale spatial fusion (ASSF) module using 3D convolution to combine feature maps of different sizes, maximizing the extraction of invariant features across various scales. Finally, we use an improved multi-head self-attention (MHSA) mechanism in the detection head, replacing the original complex convolution structure, to enhance the model’s ability to focus on important features at low computational cost. To validate the detection performance of the model, we conducted experiments on the combined side-scan sonar dataset (SSSD). The results show that the proposed SS-YOLO model achieves average accuracies of 92.4% (mAP@0.5) and 64.7% (mAP@0.5:0.95), outperforming the original YOLOv8 model by 4.4% and 3%, respectively. In terms of model complexity, the improved SS-YOLO model has 2.55 M parameters and 6.4 G FLOPs, significantly fewer than the original YOLOv8 model and similar detection models. Full article
(This article belongs to the Special Issue Application of Deep Learning in Underwater Image Processing)
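
The lightweight PConv + PWConv replacement described above can be sketched as follows: a dense 3x3 convolution is applied to only a fraction of the channels, and a cheap 1x1 pointwise convolution then mixes all channels. The FasterNet-style 1/4 channel split and the layer names are assumptions for illustration, not the paper's implementation.

```python
import torch
import torch.nn as nn

class PConvPWConv(nn.Module):
    """Partial convolution (spatial mixing on a channel slice) followed by a
    pointwise convolution (channel mixing); a hypothetical stand-in block."""
    def __init__(self, channels: int, part: int = 4):
        super().__init__()
        self.c_conv = channels // part                       # channels that receive the 3x3 conv
        self.pconv = nn.Conv2d(self.c_conv, self.c_conv, 3, padding=1, bias=False)
        self.pwconv = nn.Conv2d(channels, channels, 1, bias=False)
        self.bn, self.act = nn.BatchNorm2d(channels), nn.SiLU()

    def forward(self, x):
        x1, x2 = x[:, :self.c_conv], x[:, self.c_conv:]      # convolved slice / untouched slice
        x = torch.cat([self.pconv(x1), x2], dim=1)           # PConv: cheap spatial mixing
        return self.act(self.bn(self.pwconv(x)))             # PWConv: full channel mixing

if __name__ == "__main__":
    print(PConvPWConv(64)(torch.randn(1, 64, 80, 80)).shape)  # torch.Size([1, 64, 80, 80])
```
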
19 pages, 3737 KiB  
Article
End-to-End Multi-Scale Adaptive Remote Sensing Image Dehazing Network
by Xinhua Wang, Botao Yuan, Haoran Dong, Qiankun Hao and Zhuang Li
Sensors 2025, 25(1), 218; https://doi.org/10.3390/s25010218 - 2 Jan 2025
Abstract
Satellites frequently encounter atmospheric haze during imaging, leading to the loss of detailed information in remote sensing images and significantly compromising image quality. This detailed information is crucial for applications such as Earth observation and environmental monitoring. In response to these issues, this paper proposes an end-to-end multi-scale adaptive feature extraction method for remote sensing image dehazing (MSD-Net). In our network model, we introduce a dilated convolution adaptive module to extract global and local detail features of remote sensing images. The module is designed to extract important image features at different scales: dilated convolution enlarges the receptive field to capture broader contextual information and thus a more global feature representation, while a self-adaptive attention mechanism allows the module to automatically adjust its receptive field according to image content. In this way, important features at different scales can be extracted flexibly, better adapting to variations in detail across remote sensing images. To make full use of the features at different scales, we also adopt feature fusion: by fusing information from different scales, more accurate and richer feature representations are obtained, which helps recover lost detail and enhances overall image quality. Extensive experiments were conducted on the HRRSD and RICE datasets, and the results show that the proposed method restores the original details and texture information of remote sensing images better than current state-of-the-art dehazing methods. Full article
(This article belongs to the Section Sensing and Imaging)
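
One way to picture the dilated, content-adaptive multi-scale extraction described above is a block of parallel dilated convolutions whose outputs are re-weighted by a small gating branch before fusion. The dilation rates, gating design, and names below are illustrative assumptions rather than the paper's implementation.

```python
import torch
import torch.nn as nn

class MultiScaleDilatedBlock(nn.Module):
    """Parallel dilated convolutions capture context at several scales; a small
    gating branch re-weights the scales from pooled features before fusion."""
    def __init__(self, channels: int, dilations=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(channels, channels, 3, padding=d, dilation=d, bias=False) for d in dilations
        ])
        self.gate = nn.Sequential(                     # predicts one weight per scale
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, len(dilations), 1),
            nn.Softmax(dim=1),
        )
        self.fuse = nn.Conv2d(channels, channels, 1)

    def forward(self, x):
        w = self.gate(x)                               # (B, S, 1, 1) content-dependent scale weights
        feats = [b(x) * w[:, i:i + 1] for i, b in enumerate(self.branches)]
        return self.fuse(sum(feats)) + x               # fused multi-scale features plus a residual

if __name__ == "__main__":
    print(MultiScaleDilatedBlock(32)(torch.randn(1, 32, 64, 64)).shape)  # torch.Size([1, 32, 64, 64])
```
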
28 pages, 1702 KiB  
Review
Deep Learning Applications in Ionospheric Modeling: Progress, Challenges, and Opportunities
by Renzhong Zhang, Haorui Li, Yunxiao Shen, Jiayi Yang, Wang Li, Dongsheng Zhao and Andong Hu
Remote Sens. 2025, 17(1), 124; https://doi.org/10.3390/rs17010124 - 2 Jan 2025
Abstract
With the continuous advancement of deep learning algorithms and the rapid growth of computational resources, deep learning technology has undergone numerous milestone developments, evolving from simple BP neural networks into more complex and powerful network models such as CNNs, LSTMs, RNNs, and GANs. In recent years, the application of deep learning technology in ionospheric modeling has achieved breakthrough advancements, significantly impacting navigation, communication, and space weather forecasting. Nevertheless, due to limitations in observational networks and the dynamic complexity of the ionosphere, deep learning-based ionospheric models still face challenges in terms of accuracy, resolution, and interpretability. This paper systematically reviews the development of deep learning applications in ionospheric modeling, summarizing findings that demonstrate how integrating multi-source data and employing multi-model ensemble strategies has substantially improved the stability of spatiotemporal predictions, especially in handling complex space weather events. Additionally, this study explores the potential of deep learning in ionospheric modeling for the early warning of geological hazards such as earthquakes, volcanic eruptions, and tsunamis, offering new insights for constructing ionospheric-geological activity warning models. Looking ahead, research will focus on developing hybrid models that integrate physical modeling with deep learning, exploring adaptive learning algorithms and multi-modal data fusion techniques to enhance long-term predictive capabilities, particularly in addressing the impact of climate change on the ionosphere. Overall, deep learning provides a powerful tool for ionospheric modeling and indicates promising prospects for its application in early warning systems and future research. Full article
(This article belongs to the Special Issue Advances in GNSS Remote Sensing for Ionosphere Observation)
22 pages, 9655 KiB  
Article
Potato Plant Variety Identification Study Based on Improved Swin Transformer
by Xue Xing, Chengzhong Liu, Junying Han, Quan Feng, Enfang Qi, Yaying Qu and Baixiong Ma
Agriculture 2025, 15(1), 87; https://doi.org/10.3390/agriculture15010087 - 2 Jan 2025
Abstract
Potato is one of the most important food crops in the world and occupies a crucial position in China’s agricultural development. The large number of potato varieties and the phenomenon of variety mixing seriously hamper the development of the potato industry, so accurate identification of potato varieties is a key link in promoting it. Deep learning can identify potato varieties with good accuracy, but related studies remain relatively scarce. Thus, this paper introduces an enhanced Swin Transformer classification model named MSR-SwinT (Multi-scale residual Swin Transformer). The model employs a multi-scale feature fusion module in place of patch partitioning and linear embedding, which effectively extracts features at various scales and enhances the model’s feature extraction capability. Additionally, a residual learning strategy is integrated into the Swin Transformer block, effectively addressing the problem of vanishing gradients and enabling the model to capture complex features more effectively. The enhanced MSR-SwinT model is validated on the potato plant dataset, demonstrating strong performance in potato plant image recognition with an accuracy of 94.64%, an improvement of 3.02 percentage points over the original Swin Transformer model. Experimental evidence shows that the improved model performs and generalizes better, providing a more effective solution for potato variety identification. Full article
(This article belongs to the Section Digital Agriculture)
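
A minimal sketch of a multi-scale embedding stem of the kind described above, replacing plain patch partitioning and linear embedding with parallel convolutions of different kernel sizes. The kernel sizes, channel split, and names are assumptions rather than the published model.

```python
import torch
import torch.nn as nn

class MultiScaleEmbed(nn.Module):
    """Embed an image with parallel convolutions of different kernel sizes so the
    token sequence carries features from several receptive fields (hypothetical stem)."""
    def __init__(self, in_ch: int = 3, embed_dim: int = 96, patch: int = 4):
        super().__init__()
        self.paths = nn.ModuleList([
            nn.Conv2d(in_ch, embed_dim // 3, k, stride=patch, padding=(k - patch) // 2)
            for k in (4, 8, 12)                      # three scales, all downsampling by `patch`
        ])
        self.norm = nn.LayerNorm(embed_dim)

    def forward(self, x):
        feats = torch.cat([p(x) for p in self.paths], dim=1)    # (B, embed_dim, H/4, W/4)
        tokens = feats.flatten(2).transpose(1, 2)                # (B, N, embed_dim)
        return self.norm(tokens)

if __name__ == "__main__":
    print(MultiScaleEmbed()(torch.randn(1, 3, 224, 224)).shape)  # torch.Size([1, 3136, 96])
```
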
21 pages, 6626 KiB  
Article
A Text-Based Dual-Branch Person Re-Identification Algorithm Based on the Deep Attribute Information Mining Network
by Ke Han, Xiyan Zhang, Wenlong Xu and Long Jin
Symmetry 2025, 17(1), 64; https://doi.org/10.3390/sym17010064 - 2 Jan 2025
Abstract
Text-based person re-identification enables the retrieval of specific pedestrians from a large image library using textual descriptions, effectively addressing the issue of missing pedestrian images. The main challenges in this task are to learn discriminative image–text features and achieve accurate cross-modal matching. Despite the potential of leveraging semantic information from pedestrian attributes, current methods have not yet fully harnessed this resource. To this end, we introduce a novel Text-based Dual-branch Person Re-identification Algorithm based on the Deep Attribute Information Mining (DAIM) network. Our approach employs a Masked Language Modeling (MLM) module to learn cross-modal attribute alignments through mask language modeling, and an Implicit Relational Prompt (IRP) module to extract relational cues between pedestrian attributes using tailored prompt templates. Furthermore, drawing inspiration from feature fusion techniques, we developed a Symmetry Semantic Feature Fusion (SSF) module that utilizes symmetric relationships between attributes to enhance the integration of information from different modes, aiming to capture comprehensive features and facilitate efficient cross-modal interactions. We evaluated our method using three benchmark datasets, CUHK-PEDES, ICFG-PEDES, and RSTPReid, and the results demonstrated Rank-1 accuracy rates of 78.17%, 69.47%, and 68.30%, respectively. These results indicate a significant enhancement in pedestrian retrieval accuracy, thereby validating the efficacy of our proposed approach. Full article
(This article belongs to the Section Computer)
29 pages, 5358 KiB  
Article
An Approach for Spatial Statistical Modelling Remote Sensing Data of Land Cover by Fusing Data of Different Types
by Antonella Belmonte, Carmela Riefolo, Gabriele Buttafuoco and Annamaria Castrignanò
Remote Sens. 2025, 17(1), 123; https://doi.org/10.3390/rs17010123 - 2 Jan 2025
Viewed by 111
Abstract
Remote sensing technologies continue to expand their role in environmental monitoring, providing invaluable advances in soil assessing and mapping. This study aimed to prove the need to apply spatial statistical models for processing data in remote sensing (RS), which appears to be an important source of spatial data at multiple scales. A crucial problem facing us is the fusion of multi-source spatial data of different natures and characteristics, among which there is the support size of measurement that unfortunately is little considered in RS. A data fusion approach of both sample (point) and grid (areal) data is proposed that explicitly takes into account spatial correlation and change of support in both increasing support (upscaling) and decreasing support (downscaling). The techniques of block cokriging and kriging downscaling were employed for the implementation of such an approach, respectively. The method is applied to soil sample data, jointly analysed with hyperspectral data measured in the laboratory, UAV, and satellite data (Planet and Sentinel 2) of an olive grove after filtering soil pixels. Each data type had its own support that was transformed to the same support as the soil sample data so that the data fusion approach could be applied. To demonstrate the statistical, as well as practical, effectiveness of such a method, it was compared by a cross-validation test with a univariate approach for predicting each soil property. The positive results obtained should stimulate advanced statistical techniques to be applied more and more widely to RS data. Full article
(This article belongs to the Special Issue Remote Sensing in Geomatics (Second Edition))
21 pages, 2838 KiB  
Article
A Nanoparticle Comprising the Receptor-Binding Domains of Norovirus and Plasmodium as a Combination Vaccine Candidate
by Ming Xia, Pengwei Huang, Frank S. Vago, Wen Jiang, Xi Jiang and Ming Tan
Vaccines 2025, 13(1), 34; https://doi.org/10.3390/vaccines13010034 - 1 Jan 2025
Viewed by 420
Abstract
Background: Noroviruses, which cause epidemic acute gastroenteritis, and Plasmodium parasites, which lead to malaria, are two infectious pathogens that pose threats to public health. The protruding (P) domain of norovirus VP1 and the αTSR domain of the circumsporozoite protein (CSP) of Plasmodium sporozoite are the glycan receptor-binding domains of the two pathogens for host cell attachment, making them excellent targets for vaccine development. Modified norovirus P domains self-assemble into a 24-meric octahedral P nanoparticle (P24 NP). Methods: We generated a unique P24-αTSR NP by inserting the αTSR domain into a surface loop of the P domain. The P-αTSR fusion proteins were produced in the Escherichia coli expression system and the fusion protein self-assembled into the P24-αTSR NP. Results: The formation of the P24-αTSR NP was demonstrated through gel filtration, electron microscopy, and dynamic light scattering. A 3D structural model of the P24-αTSR NP was constructed, using the known cryo-EM structure of the previously developed P24 NP and P24-VP8* NP as templates. Each P24-αTSR NP consists of a P24 NP core, with 24 surface-exposed αTSR domains that have retained their general conformations and binding function to heparan sulfate proteoglycans. The P24-αTSR NP is immunogenic, eliciting strong antibody responses in mice toward both the norovirus P domain and the αTSR domain of Plasmodium CSP. Notably, sera from mice immunized with the P24-αTSR NP bound strongly to Plasmodium sporozoites and blocked norovirus VLP attachment to their glycan receptors. Conclusion: These data suggest that the P24-αTSR NP may serve as a combination vaccine against both norovirus and Plasmodium parasites. Full article
(This article belongs to the Special Issue Advance in Nanoparticles as Vaccine Adjuvants)
12 pages, 10766 KiB  
Article
Molecular Dynamics-Based Two-Dimensional Simulation of Powder Bed Additive Manufacturing Process for Unimodal and Bimodal Systems
by Yeasir Mohammad Akib, Ehsan Marzbanrad and Farid Ahmed
J. Manuf. Mater. Process. 2025, 9(1), 9; https://doi.org/10.3390/jmmp9010009 - 1 Jan 2025
Viewed by 197
Abstract
The trend of adapting powder bed fusion (PBF) for product manufacturing continues to grow as this process is highly capable of producing functional 3D components with micro-scale precision. The powder bed’s properties (e.g., powder packing, material properties, flowability, etc.) and thermal energy deposition heavily influence the build quality in the PBF process. The packing density in the powder bed dictates the bulk powder behavior and in-process performance and, therefore, significantly impacts the mechanical and physical properties of the printed components. Numerical modeling of the powder bed process helps to understand the powder spreading process and predict experimental outcomes. A two-dimensional powder bed was developed in this work using the LAMMPS (Large-scale Atomic/Molecular Massively Parallel Simulator) package to better understand the effect of bimodal and unimodal particle size distribution on powder bed packing. A cloud-based pouring of powders with varying volume fractions and different initialization velocities was adopted, where a blade-type recoater was used to spread the powders. The packing fraction was investigated for both bimodal and unimodal systems. The simulation results showed that the average packing fraction for bimodal and unimodal systems was 76.53% and 71.56%, respectively. A particle-size distribution-based spatially varying powder agglomeration was observed in the simulated powder bed. Powder segregation was also studied in this work, and it appeared less likely in the unimodal system compared to the bimodal system with a higher percentage of bigger particles. Full article
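
The packing-fraction comparison above reduces, in 2D, to a simple area ratio: total disc area over bed area. A small sketch under assumed (made-up) particle counts and sizes, purely to illustrate the calculation rather than reproduce the LAMMPS results:

```python
import numpy as np

def packing_fraction_2d(radii: np.ndarray, bed_width: float, bed_height: float) -> float:
    """Ratio of total disc area to bed area (wall effects and overlaps ignored)."""
    return float(np.sum(np.pi * radii ** 2) / (bed_width * bed_height))

rng = np.random.default_rng(0)
unimodal = rng.normal(15e-6, 1e-6, size=2000)                      # ~15 um radius particles
bimodal = np.concatenate([rng.normal(20e-6, 1e-6, size=1050),      # coarse fraction
                          rng.normal(8e-6, 0.5e-6, size=900)])     # fine fraction
for name, r in (("unimodal", unimodal), ("bimodal", bimodal)):
    # particle counts are chosen only to illustrate the formula, not to match the paper
    print(name, round(packing_fraction_2d(r, bed_width=2e-3, bed_height=1e-3), 3))
```
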
31 pages, 9112 KiB  
Article
Intelligent Target Detection in Synthetic Aperture Radar Images Based on Multi-Level Fusion
by Qiaoyu Liu, Ziqi Ye, Chenxiang Zhu, Dongxu Ouyang, Dandan Gu and Haipeng Wang
Remote Sens. 2025, 17(1), 112; https://doi.org/10.3390/rs17010112 - 1 Jan 2025
Viewed by 220
Abstract
Due to the unique imaging mechanism of SAR, targets in SAR images present complex scattering characteristics. As a result, intelligent target detection in SAR images has been facing many challenges, which mainly lie in the insufficient exploitation of target characteristics, inefficient characterization of scattering features, and inadequate reliability of decision models. In this respect, we propose an intelligent target detection method based on multi-level fusion, where pixel-level, feature-level, and decision-level fusions are designed for enhancing scattering feature mining and improving the reliability of decision making. The pixel-level fusion method through the channel fusion of original images and their features after scattering feature enhancement represents an initial exploration of image fusion. Two feature-level fusion methods are conducted using respective migratable fusion blocks, namely DBAM and FDRM, presenting higher-level fusion. Decision-level fusion based on DST can not only consolidate complementary strengths in different models but also incorporate human or expert involvement in proposition for guiding effective decision making. This represents the highest-level fusion integrating results by proposition setting and statistical analysis. Experiments of different fusion methods integrating different features were conducted on typical target detection datasets. As shown in the results, the proposed method increases the mAP by 16.52%, 7.1%, and 3.19% in ship, aircraft, and vehicle target detection, demonstrating high effectiveness and robustness. Full article
(This article belongs to the Special Issue SAR-Based Signal Processing and Target Recognition (Second Edition))
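
The decision-level fusion based on DST mentioned above follows Dempster's rule of combination: masses of intersecting focal sets are multiplied and the result is renormalized by the non-conflicting mass. A small self-contained example over a two-hypothesis frame, with made-up detector masses:

```python
from itertools import product

def combine(m1: dict, m2: dict) -> dict:
    """Dempster's rule: multiply masses of intersecting focal sets and renormalize
    by (1 - K), where K is the total conflicting mass."""
    fused, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = frozenset(a) & frozenset(b)
        if inter:
            fused[inter] = fused.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb
    return {k: v / (1.0 - conflict) for k, v in fused.items()}

# Each detector assigns belief mass to 'target', 'clutter', or the ignorance set {both}.
det_a = {("target",): 0.7, ("clutter",): 0.1, ("target", "clutter"): 0.2}
det_b = {("target",): 0.6, ("clutter",): 0.2, ("target", "clutter"): 0.2}
print(combine(det_a, det_b))   # the agreeing 'target' hypothesis ends up reinforced
```
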
23 pages, 4009 KiB  
Article
Remaining Life Prediction Modeling Method for Rotating Components of Complex Intelligent Equipment
by Yaohua Deng, Zilin Zhang, Hao Huang and Xiali Liu
Electronics 2025, 14(1), 136; https://doi.org/10.3390/electronics14010136 - 31 Dec 2024
Viewed by 301
Abstract
This paper aims to address the challenges of significant data distribution differences and extreme data imbalances in the remaining useful life prediction modeling of rotating components of complex intelligent equipment under various working conditions. Grounded in deep learning modeling, it considers the multi-dimensional extraction method for degraded data features in the data feature extraction stage, proposes a network structure with multiple attention data extraction channels, and explores the extraction method for valuable data segments in the channel and time series dimensions. This paper also proposes a domain feature fusion network based on feature migration and examines methods that leverage abundant labeled data from the source domain to assist in target domain learning. Finally, in combination with a long short-term memory neural network (LSTM), this paper constructs an intelligent model to estimate the remaining lifespan of rotating components. Experiments demonstrate that, when integrating the foundational deep convolution network with the domain feature fusion network, the comprehensive loss error for life prediction on the target domain test set can be reduced by up to 6.63%. Furthermore, when adding the dual attention feature extraction network, the maximum reduction in the comprehensive loss error is 3.22%. This model can effectively enhance the precision of life prediction in various operating conditions; thus, it provides a certain theoretical basis and technical support for the operation and maintenance management of complex intelligent equipment. It has certain practical value and application prospects in the remaining life prediction of rotating components under multiple working conditions. Full article
(This article belongs to the Section Artificial Intelligence)
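
A minimal sketch of an LSTM-based remaining-useful-life regressor of the kind the abstract combines with its feature-extraction and domain-fusion networks; the feature count, layer sizes, and names are assumptions for illustration.

```python
import torch
import torch.nn as nn

class RULPredictor(nn.Module):
    """Encode a window of degradation features with an LSTM and regress the
    remaining useful life from the final hidden state (illustrative model)."""
    def __init__(self, n_features: int = 14, hidden: int = 64, layers: int = 2):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, num_layers=layers, batch_first=True)
        self.head = nn.Sequential(nn.Linear(hidden, 32), nn.ReLU(), nn.Linear(32, 1))

    def forward(self, x):                   # x: (batch, time, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])        # RUL estimate from the last time step

if __name__ == "__main__":
    model = RULPredictor()
    window = torch.randn(8, 30, 14)         # 8 sequences, 30 time steps, 14 condition features
    print(model(window).shape)              # torch.Size([8, 1])
```
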
20 pages, 4757 KiB  
Article
Infrared Image Detection and Recognition of Substation Electrical Equipment Based on Improved YOLOv8
by Haotian Tao, Agyemang Paul and Zhefu Wu
Appl. Sci. 2025, 15(1), 328; https://doi.org/10.3390/app15010328 - 31 Dec 2024
Viewed by 333
Abstract
To address the challenges associated with lightweight design and small object detection in infrared imaging for substation electrical equipment, this paper introduces an enhanced YOLOv8_Adv network model. This model builds on YOLOv8 through several strategic improvements. The backbone network incorporates PConv and FasterNet modules to substantially reduce the computational load and memory usage, thereby achieving model lightweighting. In the neck layer, GSConv and VoVGSCSP modules are utilized for multi-stage, multi-feature map fusion, complemented by the integration of the EMA attention mechanism to improve feature extraction. Additionally, a specialized detection layer for small objects is added to the head of the network, enhancing the model’s performance in detecting small infrared targets. Experimental results demonstrate that YOLOv8_Adv achieves a 4.1% increase in mAP@0.5 compared to the baseline YOLOv8n. It also outperforms five existing baseline models, with the highest accuracy of 98.7%, and it reduces the computational complexity by 18.5%, thereby validating the effectiveness of the YOLOv8_Adv model. Furthermore, the effectiveness of the model in detecting small targets in infrared images makes it suitable for use in areas such as infrared surveillance, military target detection, and wildlife monitoring. Full article
(This article belongs to the Special Issue Signal and Image Processing: From Theory to Applications)
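
The GSConv modules used in the neck are commonly described as a dense convolution on half of the output channels, a depthwise convolution on that result, concatenation, and a channel shuffle; the sketch below follows that common formulation and is not necessarily the exact variant used in YOLOv8_Adv.

```python
import torch
import torch.nn as nn

def channel_shuffle(x, groups: int = 2):
    b, c, h, w = x.shape
    return x.view(b, groups, c // groups, h, w).transpose(1, 2).reshape(b, c, h, w)

class GSConv(nn.Module):
    """Half dense convolution, half depthwise convolution, then shuffle (common formulation)."""
    def __init__(self, c_in: int, c_out: int, k: int = 3, s: int = 1):
        super().__init__()
        c_ = c_out // 2
        self.conv = nn.Sequential(nn.Conv2d(c_in, c_, k, s, k // 2, bias=False),
                                  nn.BatchNorm2d(c_), nn.SiLU())
        self.dwconv = nn.Sequential(nn.Conv2d(c_, c_, 5, 1, 2, groups=c_, bias=False),
                                    nn.BatchNorm2d(c_), nn.SiLU())

    def forward(self, x):
        x1 = self.conv(x)                                       # dense conv on half the channels
        x2 = self.dwconv(x1)                                    # cheap depthwise conv for the rest
        return channel_shuffle(torch.cat([x1, x2], dim=1))      # mix the two halves

if __name__ == "__main__":
    print(GSConv(64, 128)(torch.randn(1, 64, 40, 40)).shape)    # torch.Size([1, 128, 40, 40])
```
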
26 pages, 6569 KiB  
Article
Design of a Wearable Exoskeleton Piano Practice Aid Based on Multi-Domain Mapping and Top-Down Process Model
by Qiujian Xu, Meihui Li, Guoqiang Chen, Xiubo Ren, Dan Yang, Junrui Li, Xinran Yuan, Siqi Liu, Miaomiao Yang, Mufan Chen, Bo Wang, Peng Zhang and Huiguo Ma
Biomimetics 2025, 10(1), 15; https://doi.org/10.3390/biomimetics10010015 - 31 Dec 2024
Viewed by 343
Abstract
This study designs and develops a wearable exoskeleton piano assistance system for individuals recovering from neurological injuries, aiming to help users regain the ability to perform complex tasks such as playing the piano. While soft robotic exoskeletons have proven effective in rehabilitation therapy and daily activity assistance, challenges remain in performing highly dexterous tasks due to structural complexity and insufficient motion accuracy. To address these issues, we developed a modular division method based on multi-domain mapping and a top-down process model. This method integrates the functional domain, structural domain, and user needs domain, and explores the principles and methods for creating functional construction modules, overcoming the limitations of traditional top-down approaches in design flexibility. By closely combining layout constraints with the design model, this method significantly improves the accuracy and efficiency of module configuration, offering a new path for the development of piano practice assistance devices. The results demonstrate that this device innovatively combines piano practice with rehabilitation training and through the introduction of ontological modeling methods, resolves the challenges of multidimensional needs mapping. Based on five user requirements (P), we calculated the corresponding demand weight (K), making the design more aligned with user needs. The device excels in enhancing motion accuracy, interactivity, and comfort, filling the gap in traditional piano assistance devices in terms of multi-functionality and high adaptability, and offering new ideas for the design and promotion of intelligent assistive devices. Simulation analysis, combined with the motion trajectory of the finger’s proximal joint, calculates that 60° is the maximum bending angle for the aforementioned joint. Physical validation confirms the device’s superior performance in terms of reliability and high-precision motion reproduction, meeting the requirements for piano-assisted training. Through multi-domain mapping, the top-down process model, and modular design, this research effectively breaks through the design flexibility and functional adaptability bottleneck of traditional piano assistance devices while integrating neurological rehabilitation with music education, opening up a new application path for intelligent assistive devices in the fields of rehabilitation medicine and arts education, and providing a solution for cross-disciplinary technology fusion and innovative development. Full article
(This article belongs to the Special Issue Biomimicry for Optimization, Control, and Automation: 2nd Edition)
23 pages, 104931 KiB  
Article
Applications of the FusionScratchNet Algorithm Based on Convolutional Neural Networks and Transformer Models in the Detection of Cell Phone Screen Scratches
by Zhihong Cao, Kun Liang, Sheng Tang and Cheng Zhang
Electronics 2025, 14(1), 134; https://doi.org/10.3390/electronics14010134 - 31 Dec 2024
Viewed by 231
Abstract
Screen defect detection has become a crucial research domain, propelled by the growing necessity of precise and effective quality control in mobile device production. This study presents the FusionScratchNet (FS-Net), a novel algorithm developed to overcome the challenges of noise interference and to characterize indistinct defects and subtle scratches on mobile phone screens. By integrating the transformer and convolutional neural network (CNN) architectures, FS-Net effectively captures both global and local features, thereby enhancing feature representation. The global–local feature integrator (GLFI) module effectively fuses global and local information through unique channel splitting, feature dependency characterization, and attention mechanisms, thereby enhancing target features and suppressing noise. The bridge attention (BA) module calculates an attention feature map based on the multi-layer fused features, precisely focusing on scratch characteristics and recovering details lost during downsampling. Evaluations using the PKU-Market-Phone dataset demonstrated an overall accuracy of 98.04%, an extended intersection over union (EIoU) of 88.03%, and an F1-score of 65.13%. In comparison to established methods like you only look once (YOLO) and retina network (RetinaNet), FS-Net demonstrated enhanced detection accuracy, computational efficiency, and resilience against noise. The experimental results demonstrated that the proposed method effectively enhances the accuracy of scratch segmentation. Full article
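
The channel-splitting, global-local fusion idea of the GLFI module can be pictured as follows: one channel half is re-weighted by pooled global context, the other keeps local detail through a 3x3 convolution, and a 1x1 convolution fuses them. The specific operations and names are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

class GlobalLocalFusion(nn.Module):
    """Split channels, model global context on one half (pooled channel attention)
    and local detail on the other (3x3 conv), then fuse with a 1x1 convolution."""
    def __init__(self, channels: int):
        super().__init__()
        c = channels // 2
        self.global_att = nn.Sequential(nn.AdaptiveAvgPool2d(1),
                                        nn.Conv2d(c, c, 1), nn.Sigmoid())
        self.local_conv = nn.Sequential(nn.Conv2d(c, c, 3, padding=1, bias=False),
                                        nn.BatchNorm2d(c), nn.ReLU())
        self.fuse = nn.Conv2d(channels, channels, 1)

    def forward(self, x):
        xg, xl = x.chunk(2, dim=1)
        xg = xg * self.global_att(xg)       # global branch: re-weight channels by pooled context
        xl = self.local_conv(xl)            # local branch: keep fine, scratch-like detail
        return self.fuse(torch.cat([xg, xl], dim=1)) + x    # fused features plus a residual path

if __name__ == "__main__":
    print(GlobalLocalFusion(64)(torch.randn(1, 64, 56, 56)).shape)  # torch.Size([1, 64, 56, 56])
```
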
25 pages, 13263 KiB  
Article
Development of a Digital Twin of the Harbour Waters and Surrounding Infrastructure Based on Spatial Data Acquired with Multimodal and Multi-Sensor Mapping Systems
by Arkadiusz Tomczak, Grzegorz Stępień, Tomasz Kogut, Łukasz Jedynak, Grzegorz Zaniewicz, Małgorzata Łącka and Izabela Bodus-Olkowska
Appl. Sci. 2025, 15(1), 315; https://doi.org/10.3390/app15010315 - 31 Dec 2024
Viewed by 336
Abstract
Digital twin is an attractive technology for the representation of objects due to its ability to produce precise measurements and their geovisualisation. Of special interest is the application and fusion of various remote sensing techniques for shallow river and inland water areas, commonly measured using conventional surveying or multimodal photogrammetry. The construction of spatial digital twins of river areas requires the use of multi-platform and multi-sensor measurements to obtain reliable data of the river environment. Due to the high dynamics of river changes, the cost of measurements and the difficult-to-access measurement area, the mapping should be large-scale and simultaneous. To address these challenges, the authors performed an experiment using three measurement platforms (boat, plane, UAV) and multiple sensors to acquire both cloud and image spatial data, which were integrated temporally and spatially. The integration methods improved the accuracy of the resulting digital model by approximately 20 percent. Full article