
Search Results (575)

Search Parameters:
Keywords = human skeleton

19 pages, 26373 KiB  
Article
2D to 3D Human Skeleton Estimation Based on the Brown Camera Distortion Model and Constrained Optimization
by Lan Ma and Hua Huo
Electronics 2025, 14(5), 960; https://doi.org/10.3390/electronics14050960 - 27 Feb 2025
Abstract
In the rapidly evolving field of computer vision and machine learning, 3D skeleton estimation is critical for applications such as motion analysis and human–computer interaction. While stereo cameras are commonly used to acquire 3D skeletal data, monocular RGB systems attract attention due to benefits including cost-effectiveness and simple deployment. However, persistent challenges remain in accurately inferring depth from 2D images and reconstructing 3D structures using monocular approaches. Current 2D-to-3D skeleton estimation methods rely heavily on deep training over datasets while neglecting the intrinsic structure of the human body and the principles of camera imaging. To address this, this paper introduces an innovative 2D-to-3D gait skeleton estimation method that leverages the Brown camera distortion model and constrained optimization. Using the Azure Kinect depth camera to capture gait video, the Azure Kinect Body Tracking SDK was employed to extract 2D and 3D joint positions. The camera's distortion properties were analyzed using the Brown camera distortion model, which suits this scenario, and iterative methods were applied to compensate for the distortion of the 2D skeleton joints. By integrating the geometric constraints of the human skeleton, an optimization algorithm was applied to achieve precise 3D joint estimates. Finally, the framework was validated by comparing the estimated 3D joint coordinates with corresponding measurements captured by depth sensors. Experimental evaluations confirmed that this training-free approach achieved superior precision and stability compared to conventional methods.
(This article belongs to the Special Issue 3D Computer Vision and 3D Reconstruction)
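The distortion-compensation step described in the abstract above can be sketched in a few lines. This is a generic illustration of the Brown–Conrady model with a fixed-point inversion, not the authors' implementation; any coefficient values used with it are hypothetical.

```python
def distort(p, k1, k2, p1, p2):
    # Brown-Conrady model: map an undistorted normalized point to its
    # distorted location (k1, k2: radial; p1, p2: tangential coefficients).
    x, y = p
    r2 = x * x + y * y
    radial = 1 + k1 * r2 + k2 * r2 * r2
    dx = 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    dy = p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    return (x * radial + dx, y * radial + dy)

def undistort(p_d, k1, k2, p1, p2, iters=20):
    # Fixed-point iteration: starting from the distorted point, repeatedly
    # subtract the tangential shift and divide out the radial factor.
    xd, yd = p_d
    x, y = xd, yd
    for _ in range(iters):
        r2 = x * x + y * y
        radial = 1 + k1 * r2 + k2 * r2 * r2
        dx = 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
        dy = p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
        x = (xd - dx) / radial
        y = (yd - dy) / radial
    return (x, y)
```

For typical small distortion coefficients the iteration is a strong contraction, so a round trip through `distort` and `undistort` recovers the original point to high precision.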
26 pages, 2455 KiB  
Review
A Review of CAC-717, a Disinfectant Containing Calcium Hydrogen Carbonate Mesoscopic Crystals
by Akikazu Sakudo, Koichi Furusaki, Rumiko Onishi, Takashi Onodera and Yasuhiro Yoshikawa
Microorganisms 2025, 13(3), 507; https://doi.org/10.3390/microorganisms13030507 - 25 Feb 2025
Viewed by 107
Abstract
Recent studies have reported on utilizing the biological functions of natural substances that mimic the mesoscopic structures (nanoparticles of about 50 to 500 nm) found in plant growth points and coral skeletons. After the calcium hydrogen carbonate contained in materials derived from plants and coral is separated, crystals of the mesoscopic structure can be reformed by applying a high voltage under a specific set of conditions. A suspension of these mesoscopic crystals in water (CAC-717) can be used as an effective disinfectant. CAC-717 exhibits universal virucidal activity against both enveloped and non-enveloped viruses, as well as bactericidal and anti-prion activity. Moreover, compared with sodium hypochlorite, the potency of CAC-717 as a disinfectant is less susceptible to organic substances such as albumin. The disinfection activity of CAC-717 is maintained for at least 6 years and 4 months of storage at room temperature. CAC-717 is non-irritating and harmless to humans and animals, making it a promising biosafe disinfectant. This review explores the disinfection activity of CAC-717 as well as the potential and future uses of this material.

16 pages, 3488 KiB  
Article
Toxic Effects of Bisphenol A on L. variegatus and A. punctulata Sea Urchin Embryos
by Jacob D. Kunsman, Maya C. Schlesinger and Elizabeth R. McCain
Hydrobiology 2025, 4(1), 5; https://doi.org/10.3390/hydrobiology4010005 - 19 Feb 2025
Viewed by 241
Abstract
Bisphenol A (BPA) is a small molecule frequently used in large-scale plastic production. The chemical has garnered a reputation for its association with harmful human health effects, and numerous animal studies have contributed to its classification as an endocrine disruptor. Prior research has investigated the impact of the chemical on echinoderms, including seven species of sea urchin. Our project investigated the toxic effects of this chemical on two uninvestigated species: Lytechinus variegatus and Arbacia punctulata. We exposed embryos to a range of environmentally relevant BPA concentrations (1 µg/L, 10 µg/L, 100 µg/L, and 1000 µg/L) for 48 h, until the pluteus stage. Larvae were classified according to the type of abnormality they exhibited, using a light microscope, and the EC50 was determined through probit analysis and dose–response curves. We also examined isolated plutei skeletons under a scanning electron microscope to assess changes to the skeletal structure under increasing concentrations of BPA. Our results suggest BPA induces embryotoxicity and soft tissue abnormalities more severely in L. variegatus, whereas A. punctulata exhibits more resistance to these effects. The EC50 values, over 1000 µg/L for A. punctulata and approximately 260 µg/L for L. variegatus, support this. These relative values also agree with our hypothesis that sea urchin embryos in a single genus have a similar level of BPA embryotoxicity. Interestingly, under SEM examination, the A. punctulata skeletal microstructure appears to be altered as a result of BPA exposure. While the EC50s are below what has been documented in many, but not all, marine environments, longer and consistent exposure may have a more deleterious impact. These findings suggest BPA's effects on echinoderms should be further explored with multiple forms of analysis and over the long term.
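The EC50 determination above relies on probit analysis of dose–response curves. As a deliberately simplified sketch of the underlying idea (omitting the probit transform), the 50% effect level can be located by log-linear interpolation between the two tested concentrations that bracket it:

```python
import math

def ec50_interpolate(concs, effects):
    # Estimate the concentration producing a 50% effect by log-linear
    # interpolation between the two doses that bracket 50%.
    # `concs`: ascending concentrations; `effects`: fraction affected (0..1).
    for (c0, e0), (c1, e1) in zip(zip(concs, effects),
                                  zip(concs[1:], effects[1:])):
        if e0 <= 0.5 <= e1:
            t = (0.5 - e0) / (e1 - e0)
            return 10 ** (math.log10(c0)
                          + t * (math.log10(c1) - math.log10(c0)))
    raise ValueError("50% effect not bracketed by the tested concentrations")
```

A full probit or four-parameter logistic fit would use all dose points and yield confidence intervals; the interpolation above only illustrates why a response of 40% at 100 µg/L and 80% at 1000 µg/L implies an EC50 of a few hundred µg/L.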

20 pages, 4820 KiB  
Article
Skeletal Data Matching and Merging from Multiple RGB-D Sensors for Room-Scale Distant Interaction with Multiple Surfaces
by Adrien Coppens and Valerie Maquil
Electronics 2025, 14(4), 790; https://doi.org/10.3390/electronics14040790 - 18 Feb 2025
Viewed by 196
Abstract
Using a commodity RGB-D sensor is a popular and cost-effective way to enable interaction at room scale, as such a device supports body tracking functionality at a reasonable price point. Even though the capabilities of such devices might be enough for applications like entertainment systems where a person plays in front of a television, this type of sensor is unfortunately sensitive to occlusions from objects or other people, who might be in the way in more sophisticated room-scale set-ups. One may use multiple RGB-D sensors and aggregate the collected data to address the occlusion problem, increase the tracking range, and improve accuracy. However, doing so requires the gathering of calibration information with regard to the sensors themselves and also regarding their relative placement on interactable surfaces. Another challenging consequence of relying on multiple sensors is the need to perform skeleton matching and merging based on their respective body tracking data (e.g., so that skeletons from different sensors but belonging to the same person are recognised as such). The present contribution focuses on approaches to tackling these issues. Ultimately, it contributes a working human interaction tracking system, leveraging multiple RGB-D sensors to provide unobtrusive and occlusion-resilient understanding capabilities. This constitutes a suitable basis for room-scale experiences such as those based on wall-sized displays.
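The skeleton matching step described above (deciding which skeletons from different sensors belong to the same person) can be sketched generically. The abstract does not give the paper's actual criterion; this greedy nearest-pair matcher assumes skeletons are already expressed in a common calibrated world frame, and the 0.3 m threshold is an illustrative assumption:

```python
def mean_joint_distance(sa, sb):
    # Average Euclidean distance over corresponding joints of two skeletons,
    # each a list of (x, y, z) tuples in a shared world frame.
    d = 0.0
    for (ax, ay, az), (bx, by, bz) in zip(sa, sb):
        d += ((ax - bx) ** 2 + (ay - by) ** 2 + (az - bz) ** 2) ** 0.5
    return d / len(sa)

def match_skeletons(sens_a, sens_b, threshold=0.3):
    # Greedily pair skeletons from two sensors, closest pairs first,
    # accepting a pair only if the mean joint distance is below
    # `threshold` (metres). Returns index pairs (i in A, j in B).
    pairs = sorted(
        (mean_joint_distance(a, b), i, j)
        for i, a in enumerate(sens_a)
        for j, b in enumerate(sens_b)
    )
    used_a, used_b, matches = set(), set(), []
    for d, i, j in pairs:
        if d < threshold and i not in used_a and j not in used_b:
            matches.append((i, j))
            used_a.add(i)
            used_b.add(j)
    return matches
```

Matched skeletons can then be merged, for example by averaging joint positions or preferring the sensor with the higher per-joint confidence.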

19 pages, 5398 KiB  
Article
EHC-GCN: Efficient Hierarchical Co-Occurrence Graph Convolution Network for Skeleton-Based Action Recognition
by Ying Bai, Dongsheng Yang, Jing Xu, Lei Xu and Hongliang Wang
Appl. Sci. 2025, 15(4), 2109; https://doi.org/10.3390/app15042109 - 17 Feb 2025
Viewed by 262
Abstract
In tasks such as intelligent surveillance and human–computer interaction, developing rapid and effective models for human action recognition is crucial. Currently, Graph Convolution Networks (GCNs) are widely used for skeleton-based action recognition. Still, they primarily face two issues: (1) The insufficient capture of global joint responses, making it difficult to utilize the correlations between all joints. (2) Existing models often tend to be over-parameterized. In this paper, we therefore propose an Efficient Hierarchical Co-occurrence Graph Convolution Network (EHC-GCN). By employing a simple and practical hierarchical co-occurrence framework to adjust the degree of feature aggregation on demand, we first use spatial graph convolution to learn the local features of joints and then aggregate the global features of all joints. Secondly, we introduce depth-wise separable convolution layers to reduce the model parameters. Additionally, we apply a two-stream branch and attention mechanism to further extract discriminative features. On two large-scale datasets, the proposed EHC-GCN achieves better or comparable performance on both 2D and 3D skeleton data to the state-of-the-art methods, with fewer parameters and lower computational complexity, which will be more beneficial for application on computing resource-limited robot platforms.
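The parameter savings from depth-wise separable convolution layers mentioned above can be illustrated with a simple weight count (standard convolution vs. a depth-wise plus point-wise pair); the layer sizes in the usage note are arbitrary examples, not the EHC-GCN configuration:

```python
def conv_params(c_in, c_out, k):
    # Standard convolution: one k*k*c_in kernel per output channel.
    return c_out * c_in * k * k

def depthwise_separable_params(c_in, c_out, k):
    # Depth-wise step: one k*k kernel per input channel;
    # point-wise step: a 1x1 convolution mixing channels.
    return c_in * k * k + c_out * c_in
```

For example, a 3×3 layer with 64 input and 128 output channels needs 73,728 weights as a standard convolution but only 8,768 as a depth-wise separable one, roughly 8.4× fewer (bias terms omitted in both counts).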

14 pages, 772 KiB  
Article
Leveraging Artificial Occluded Samples for Data Augmentation in Human Activity Recognition
by Eirini Mathe, Ioannis Vernikos, Evaggelos Spyrou and Phivos Mylonas
Sensors 2025, 25(4), 1163; https://doi.org/10.3390/s25041163 - 14 Feb 2025
Viewed by 273
Abstract
A significant challenge in human activity recognition (HAR) lies in the limited size and diversity of training datasets, which can lead to overfitting and poor generalization of deep learning models. Common solutions include data augmentation and transfer learning. This paper introduces a novel data augmentation method that simulates occlusion by artificially removing body parts from skeleton representations in training datasets. This contrasts with previous approaches that focused on augmenting data with rotated skeletons. The proposed method increases dataset size and diversity, enabling models to handle a broader range of scenarios. Occlusion, a common challenge in real-world HAR, occurs when body parts or external objects block visibility, disrupting activity recognition. By leveraging artificially occluded samples, the proposed methodology enhances model robustness, leading to improved recognition performance, even on non-occluded activities.
(This article belongs to the Special Issue Computer Vision-Based Human Activity Recognition)
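The occlusion-simulating augmentation described above can be sketched as zeroing out the joints of one body part across a skeleton sequence. The joint grouping below is a hypothetical 25-joint layout for illustration, not the paper's exact scheme:

```python
# Joint indices for a hypothetical 25-joint skeleton layout; the grouping
# into body parts is illustrative, not the paper's actual partitioning.
BODY_PARTS = {
    "left_arm": [5, 6, 7, 8],
    "right_arm": [9, 10, 11, 12],
    "left_leg": [13, 14, 15, 16],
    "right_leg": [17, 18, 19, 20],
}

def occlude(frames, part):
    # Simulate occlusion by zeroing the joints of one body part in every
    # frame; a frame is a list of (x, y, z) joint coordinates.
    hidden = set(BODY_PARTS[part])
    return [
        [(0.0, 0.0, 0.0) if i in hidden else joint
         for i, joint in enumerate(frame)]
        for frame in frames
    ]
```

Applying `occlude` once per body part to each training sequence multiplies the dataset size while exposing the model to the partial skeletons it will see at test time.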

18 pages, 13636 KiB  
Article
A Multiscale Mixed-Graph Neural Network Based on Kinematic and Dynamic Joint Features for Human Motion Prediction
by Rongyong Zhao, Bingyu Wei, Lingchen Han, Yuxin Cai, Yunlong Ma and Cuiling Li
Appl. Sci. 2025, 15(4), 1897; https://doi.org/10.3390/app15041897 - 12 Feb 2025
Viewed by 357
Abstract
Predicting future human motion holds significant importance in the domains of autonomous driving and public safety. Kinematic features, including joint coordinates and velocity, are commonly employed in skeleton-based human motion prediction. Nevertheless, most existing approaches neglect the critical role of dynamic information and tend to degrade as the prediction length increases. To address the constraints of single-scale, fixed-joint topological relationships, this study proposes a novel method that incorporates joint torques estimated via Lagrangian equations as dynamic features of the human body. Specifically, the human skeleton is modeled as a multi-rigid-body system, with generalized joint torques calculated from the Lagrangian formula. Furthermore, to extract both kinematic and dynamic joint information effectively for predicting long-term human motion, we propose a Multiscale Mixed-Graph Neural Network (MS-MGNN). MS-MGNN extracts kinematic and dynamic joint features across three distinct scales: joints, limbs, and body parts. The extraction of joint features at each scale is facilitated by a single-scale mixed-graph convolution module. To integrate the extracted kinematic and dynamic features effectively, a KD-fused Graph-GRU (Kinematic and Dynamics Fused Graph Gate Recurrent Unit) predictor is designed to fuse them. Finally, the proposed method exhibits superior motion prediction capabilities across multiple motions. In motion prediction experiments on the Human3.6M dataset, it outperforms existing approaches, decreasing the average prediction error by 9.1%, 12.2%, and 10.9% at 160 ms, 320 ms, and 400 ms for short-term prediction and by 7.1% at 560 ms for long-term prediction.
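The Lagrangian torque estimation described above can be illustrated on the simplest possible case, a single rigid link rotating about a fixed joint. The paper models the body as a multi-rigid-body system, so this one-link formula with finite-difference accelerations is only a sketch of the idea:

```python
import math

def angular_accel(theta_prev, theta_now, theta_next, dt):
    # Central finite difference over three sampled joint angles.
    return (theta_next - 2 * theta_now + theta_prev) / (dt * dt)

def joint_torque(theta, theta_ddot, mass, l_c, g=9.81):
    # One-link Lagrangian inverse dynamics: with kinetic energy
    # T = 1/2 m l_c^2 (dtheta/dt)^2 and potential U = -m g l_c cos(theta)
    # (theta measured from the downward vertical, point mass at l_c),
    # the Euler-Lagrange equation gives
    #   tau = m l_c^2 theta_ddot + m g l_c sin(theta).
    return mass * l_c ** 2 * theta_ddot + mass * g * l_c * math.sin(theta)
```

In a skeleton pipeline, joint angles come from the tracked coordinates frame by frame; accelerations are estimated numerically and fed through the per-joint dynamics to obtain torque features alongside the kinematic ones.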

14 pages, 3607 KiB  
Article
Self-Enhanced Near-Infrared Copper Nanoscale Electrochemiluminescence Probe for the Sensitive Detection of Ciprofloxacin in Foods
by Jie Wu, Yuanjie Qin, Xiaoxin Mei, Lin Cai, Wen Hao and Guozhen Fang
Foods 2025, 14(3), 538; https://doi.org/10.3390/foods14030538 - 6 Feb 2025
Viewed by 533
Abstract
Ciprofloxacin (CIP), a widely used broad-spectrum antibiotic, poses a serious threat to human health and environmental safety due to its residues. A complementary-monomer molecularly imprinted electrochemiluminescence sensor (MIECLS) based on a polyvinylpyrrolidone-functionalized copper nanowire (CuNWs@PVP) luminescent probe was constructed for the ultra-sensitive detection of CIP. CuNWs, with low cost and high conductivity, exhibit near-infrared electrochemiluminescence (NIR ECL) properties, yet their self-aggregation and oxidation lead to weakened emission. PVP, with its solvent affinity and large skeleton, was attached in situ to the CuNW surface to prevent sedimentation and aggregation, and self-enhanced ECL signals were achieved. The bifunctional-monomer molecularly imprinted polymer (MIP) possessed complementary active centers that increased its affinity for CIP, enabling accurate and sensitive detection of the target substance. The linear range of CIP using MIECLS was 5.00 × 10⁻⁹–5.00 × 10⁻⁵ mol L⁻¹ with a low limit of detection (LOD) of 2.59 × 10⁻⁹ mol L⁻¹, while the recovery rates of CIP in the spiking recovery experiment were 84.39% to 92.48%. The combination of the bifunctional-monomer MIP and the NIR copper-based nano-luminescent probe provides a new method for the detection of CIP in food.
(This article belongs to the Special Issue Food Contaminants: Detection, Toxicity and Safety Risk Assessment)

18 pages, 3598 KiB  
Article
Vegetation, Architecture, and Human Activities: Reconstructing Land Use History from the Late Yangshao Period in Zhengzhou Region, Central China
by Xia Wang, Junjie Xu, Duowen Mo, Hui Wang and Peng Lu
Land 2025, 14(2), 321; https://doi.org/10.3390/land14020321 - 5 Feb 2025
Viewed by 432
Abstract
In recent decades, a large number of houses from the Late Yangshao period have been excavated in Zhengzhou. They are basically single-level buildings with wood skeletons and mud walls, and their construction consumed large amounts of timber. Nevertheless, many questions remain about the relationship between plants, architecture, and human activities. In this study, we reconstruct the Holocene vegetation community around the Dahecun site via pollen analysis of the Z2 core. We take house F1 in Dahecun as an example to estimate the wood consumption of a single house, and we collect the published data of all houses from the Late Yangshao period in the study area to estimate the wood consumption of houses built in Zhengzhou during this period. Combining these two approaches, this study explores the relationship between plants, architecture, and human activities in Zhengzhou in the Late Yangshao period, as well as the history of land use. The results are as follows: (1) After 4.9 ka BP, the number of trees and shrubs such as Pinus (falling from 58.8% to 46.9%) decreased rapidly, and the number of herbaceous plants increased. (2) Excluding the influence of Holocene climate change, the large-scale decline in trees and shrubs in the region is likely to have been human-driven. The number of excavated houses in 11 of the 236 Late Yangshao sites in the Zhengzhou area reached 362, while the minimum wood consumption reached 1270.62 m³. In addition, the rapid expansion of the population and the large-scale development of new arable land and forest clearance in the Late Yangshao period show that humans had a strong influence on the surrounding vegetation and land cover/use. The trend of regional deforestation was so obvious and irreversible that the inhabitants had to adopt techniques using less wood or no wood to build houses during the subsequent Longshan culture period.

22 pages, 3579 KiB  
Article
Gait-to-Gait Emotional Human–Robot Interaction Utilizing Trajectories-Aware and Skeleton-Graph-Aware Spatial–Temporal Transformer
by Chenghao Li, Kah Phooi Seng and Li-Minn Ang
Sensors 2025, 25(3), 734; https://doi.org/10.3390/s25030734 - 25 Jan 2025
Viewed by 594
Abstract
The emotional responsiveness of robots is crucial for promoting socially intelligent human–robot interaction (HRI). The development of machine learning has extensively stimulated research on emotional recognition for robots. Our research focuses on emotional gaits, a simple modality that stores a series of joint coordinates and is easy for humanoid robots to execute. However, little research has investigated emotional HRI systems based on gaits, indicating an existing gap between human emotion gait recognition and robotic emotional gait response. To address this challenge, we propose a Gait-to-Gait Emotional HRI system, emphasizing the development of an innovative emotion classification model. In our system, the humanoid robot NAO can recognize emotions from human gaits through our Trajectories-Aware and Skeleton-Graph-Aware Spatial–Temporal Transformer (TS-ST) and respond with pre-set emotional gaits that reflect the same emotion as the human presented. Our TS-ST outperforms the current state-of-the-art human-gait emotion recognition model applied to robots on the Emotion-Gait dataset.

23 pages, 8209 KiB  
Article
Spatio-Temporal Transformer with Kolmogorov–Arnold Network for Skeleton-Based Hand Gesture Recognition
by Pengcheng Han, Xin He, Takafumi Matsumaru and Vibekananda Dutta
Sensors 2025, 25(3), 702; https://doi.org/10.3390/s25030702 - 24 Jan 2025
Viewed by 731
Abstract
Manually crafted features often suffer from being subjective, having inadequate accuracy, or lacking robustness in recognition. Meanwhile, existing deep learning methods often overlook the structural and dynamic characteristics of the human hand, failing to fully explore the contextual information of joints in both the spatial and temporal domains. To effectively capture dependencies between hand joints that are not adjacent but may have potential connections, it is essential to learn long-term relationships. This study proposes a skeleton-based hand gesture recognition framework, ST-KT, which combines a spatio-temporal graph convolution network with a transformer based on the Kolmogorov–Arnold Network (KAN). It incorporates spatio-temporal graph convolution network (ST-GCN) modules and a spatio-temporal transformer module with KAN (KAN–Transformer). The ST-GCN modules, which include a spatial graph convolution network (SGCN) and a temporal convolution network (TCN), extract primary features from skeleton sequences by leveraging the strength of graph convolutional networks in the spatio-temporal domain. A spatio-temporal position embedding method integrates node features, enriching representations by including node identities and temporal information. The transformer layer includes a spatial KAN–Transformer (S-KT) and a temporal KAN–Transformer (T-KT), which further extract joint features by learning edge weights and node embeddings, providing richer feature representations and the capability for nonlinear modeling. We evaluated our method on two challenging skeleton-based dynamic gesture datasets: it achieved an accuracy of 97.5% on the SHREC'17 track dataset and 94.3% on the DHG-14/28 dataset. These results demonstrate that ST-KT effectively captures dynamic skeleton changes and complex joint relationships.

23 pages, 10402 KiB  
Article
Enhanced Human Skeleton Tracking for Improved Joint Position and Depth Accuracy in Rehabilitation Exercises
by Vytautas Abromavičius, Ervinas Gisleris, Kristina Daunoravičienė, Jurgita Žižienė, Artūras Serackis and Rytis Maskeliūnas
Appl. Sci. 2025, 15(2), 906; https://doi.org/10.3390/app15020906 - 17 Jan 2025
Viewed by 526
Abstract
The objective of this work is to develop a method for tracking human skeletal movements by integrating data from two synchronized video streams. To achieve this, two datasets were created, each consisting of four different rehabilitation exercise videos featuring various individuals in diverse environments and wearing different clothing. The prediction model is employed to create a dual-image stream system that enables the tracking of joint positions even when a joint is obscured in one of the streams. This system also mitigates depth coordinate errors by using data from both video streams. The final implementation successfully corrects the positions of the right elbow and wrist joints, though some depth error persists in the left hand. The results demonstrate that adding a second video camera, rotated 90° and aimed at the subject, can compensate for depth prediction inaccuracies, reducing errors by up to 0.4 m. By using a dual-camera setup and fusing the predicted human skeletal models, it is possible to construct a complete human model even when one camera does not capture all body parts and to refine depth coordinates through error correction using a linear regression model.
(This article belongs to the Special Issue Computer Vision Methods for Motion Control and Analysis)
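The linear-regression depth correction mentioned above can be sketched with a closed-form one-variable least-squares fit; the variable roles (predicted depth vs. reference depth from the second camera) are illustrative assumptions, not the paper's exact formulation:

```python
def fit_line(x, y):
    # Ordinary least squares for y ~ a*x + b (closed form, one predictor).
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    a = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return a, my - a * mx

def correct_depth(z_predicted, a, b):
    # Apply the fitted correction to a predicted depth coordinate.
    return a * z_predicted + b
```

Fitting `a` and `b` on frames where both cameras see the joint yields a correction that can then be applied to frames where only the depth-uncertain view is available.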

24 pages, 3877 KiB  
Article
A Hybrid Approach for Sports Activity Recognition Using Key Body Descriptors and Hybrid Deep Learning Classifier
by Muhammad Tayyab, Sulaiman Abdullah Alateyah, Mohammed Alnusayri, Mohammed Alatiyyah, Dina Abdulaziz AlHammadi, Ahmad Jalal and Hui Liu
Sensors 2025, 25(2), 441; https://doi.org/10.3390/s25020441 - 13 Jan 2025
Viewed by 614
Abstract
This paper presents an approach for event recognition in sequential images using human body part features and their surrounding context. Key body points were approximated to track and monitor their presence in complex scenarios. Various feature descriptors, including MSER (Maximally Stable Extremal Regions), SURF (Speeded-Up Robust Features), distance transform, and DOF (Degrees of Freedom), were applied to skeleton points, while BRIEF (Binary Robust Independent Elementary Features), HOG (Histogram of Oriented Gradients), FAST (Features from Accelerated Segment Test), and Optical Flow were used on silhouettes or full-body points to capture both geometric and motion-based features. Feature fusion was employed to enhance the discriminative power of the extracted data and the physical parameters calculated by different feature extraction techniques. The system utilized a hybrid CNN (Convolutional Neural Network) + RNN (Recurrent Neural Network) classifier for event recognition, with Grey Wolf Optimization (GWO) for feature selection. Experimental results showed significant accuracy, achieving 98.5% on the UCF-101 dataset and 99.2% on the YouTube dataset. Compared to state-of-the-art methods, our approach achieved better performance in event recognition.
(This article belongs to the Section Intelligent Sensors)

13 pages, 3243 KiB  
Article
Genetically Engineered Bacterial Ghosts as Vaccine Candidates Against Klebsiella pneumoniae Infection
by Svetlana V. Dentovskaya, Anastasia S. Vagaiskaya, Alexandra S. Trunyakova, Alena S. Kartseva, Tatiana A. Ivashchenko, Vladimir N. Gerasimov, Mikhail E. Platonov, Victoria V. Firstova and Andrey P. Anisimov
Vaccines 2025, 13(1), 59; https://doi.org/10.3390/vaccines13010059 - 10 Jan 2025
Viewed by 769
Abstract
Background/Objectives: Bacterial ghosts (BGs), non-living empty envelopes of bacteria, are produced either through genetic engineering or chemical treatment of bacteria, retaining the shape of their parent cells. BGs are considered vaccine candidates, promising delivery systems, and vaccine adjuvants. The practical use of BGs in vaccine development for humans is limited because of concerns about the preservation of viable bacteria in BGs. Methods: To increase the efficiency of Klebsiella pneumoniae BG formation and, accordingly, to ensure maximum killing of bacteria, we exploited previously designed plasmids with the lysis gene E from bacteriophage φX174 or with holin–endolysin systems of λ or L-413C phages. Previously, this kit made it possible to generate bacterial cells of Yersinia pestis with varying degrees of hydrolysis and variable protective activity. Results: In the current study, we showed that co-expression of the holin and endolysin genes from the L-413C phage elicited more rapid and efficient K. pneumoniae lysis than lysis mediated by only the single gene E or the weakly functioning holin–endolysin system of the λ phage. The introduction of alternative lysing factors into K. pneumoniae cells instead of the E protein leads to the loss of the murein skeleton. The resulting frameless cell envelopes are more reminiscent of bacterial sacs or bacterial skins than BGs. Although such structures are less naive than classical bacterial ghosts, they provide effective protection against infection by a hypervirulent strain of K. pneumoniae and can be recommended as candidate vaccines. For our vaccine candidate, generated using the O1:K2 hypervirulent K. pneumoniae strain, both safety and immunogenicity were evaluated. Humoral and cellular immune responses were significantly higher in intraperitoneally immunized mice than in subcutaneously vaccinated animals (p < 0.05).
Conclusions: Therefore, this study presents novel perspectives for future research on K. pneumoniae ghost vaccines.
(This article belongs to the Section Vaccines against Infectious Diseases)

15 pages, 3603 KiB  
Article
Auxiliary Task Graph Convolution Network: A Skeleton-Based Action Recognition for Practical Use
by Junsu Cho, Seungwon Kim, Chi-Min Oh and Jeong-Min Park
Appl. Sci. 2025, 15(1), 198; https://doi.org/10.3390/app15010198 - 29 Dec 2024
Viewed by 614
Abstract
Graph convolution networks (GCNs) have been extensively researched for action recognition by estimating human skeletons from video clips. However, their image sampling methods are not practical because they require video-length information for sampling images. In this study, we propose an Auxiliary Task Graph Convolution Network (AT-GCN) with low- and high-frame pathways while supporting a new sampling method. AT-GCN learns actions at a defined frame rate in the defined range with three losses: fuse, slow, and fast losses. AT-GCN handles the slow and fast losses in two auxiliary tasks, while the mainstream handles the fuse loss. AT-GCN outperforms the original state-of-the-art model on the NTU RGB+D, NTU RGB+D 120, and NW-UCLA datasets while maintaining the same inference time. AT-GCN achieves the best top-1 accuracy: 90.3% on the cross-subject and 95.2% on the cross-view benchmarks of NTU RGB+D, 86.5% on the cross-subject and 87.6% on the cross-set benchmarks of NTU RGB+D 120, and 93.5% on NW-UCLA.
