Search Results (2,946)

Search Parameters:
Keywords = pose estimation

21 pages, 5995 KiB  
Article
VE-LIOM: A Versatile and Efficient LiDAR-Inertial Odometry and Mapping System
by Yuhang Gao and Long Zhao
Remote Sens. 2024, 16(15), 2772; https://doi.org/10.3390/rs16152772 - 29 Jul 2024
Abstract
LiDAR has emerged as one of the most pivotal sensors in the field of navigation, owing to its expansive measurement range, high resolution, and adeptness in capturing intricate scene details. This significance is particularly pronounced in challenging navigation scenarios where GNSS signals encounter interference, such as within urban canyons and indoor environments. However, the copious volume of point cloud data poses a challenge, rendering traditional iterative closest point (ICP) methods inadequate in meeting real-time odometry requirements. Consequently, many algorithms have turned to feature extraction approaches. Nonetheless, with the advent of diverse scanning mode LiDARs, there arises a necessity to devise unique methods tailored to these sensors to facilitate algorithm migration. To address this challenge, we propose a weighted point-to-plane matching strategy that focuses on local details without relying on feature extraction. This improved approach mitigates the impact of imperfect plane fitting on localization accuracy. Moreover, we present a classification optimization method based on the normal vectors of planes to further refine algorithmic efficiency. Finally, we devise a tightly coupled LiDAR-inertial odometry system founded upon optimization schemes. Notably, we pioneer the derivation of an online gravity estimation method from the perspective of S² manifold optimization, effectively minimizing the influence of gravity estimation errors introduced during the initialization phase on localization accuracy. The efficacy of the proposed method was validated through experimentation employing various LiDAR sensors. The outcomes of indoor and outdoor experiments substantiate its capability to furnish real-time and precise localization and mapping results. Full article
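The matching stage described above rests on a point-to-plane residual that is down-weighted when the local plane fits poorly. A minimal NumPy sketch of that idea follows; the 1/(1 + k·RMS) weighting rule is an illustrative assumption, not VE-LIOM's actual formula.

```python
import numpy as np

def plane_fit(neighbors):
    """Fit a plane to k neighboring points via SVD; return the unit
    normal, the centroid, and the RMS fitting residual (a proxy for
    how plane-like the local surface actually is)."""
    centroid = neighbors.mean(axis=0)
    _, s, vt = np.linalg.svd(neighbors - centroid, full_matrices=False)
    return vt[-1], centroid, s[-1] / np.sqrt(len(neighbors))

def weighted_residuals(points, planes, k=10.0):
    """Point-to-plane distances, down-weighted where the local plane
    fit is poor (the 1/(1 + k*rms) rule is illustrative only)."""
    out = []
    for p, (normal, centroid, rms) in zip(points, planes):
        w = 1.0 / (1.0 + k * rms)                  # poor fits count less
        out.append(w * float(np.dot(normal, p - centroid)))
    return np.array(out)
```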
29 pages, 2401 KiB  
Review
A Review of Power System False Data Attack Detection Technology Based on Big Data
by Zhengwei Chang, Jie Wu, Huihui Liang, Yong Wang, Yanfeng Wang and Xingzhong Xiong
Information 2024, 15(8), 439; https://doi.org/10.3390/info15080439 - 28 Jul 2024
Viewed by 559
Abstract
As power big data plays an increasingly important role in the operation, maintenance, and management of power systems, complex and covert false data attacks pose a serious threat to the safe and stable operation of the power system. This article first explores the characteristics of new power systems and the challenges posed by false data attacks. The application of big data technology in power production optimization, energy consumption analysis, and user service improvement is then investigated. The article classifies typical attacks against the four stages of power big data systems in detail and analyzes the characteristics of each attack type. It comprehensively summarizes the attack detection technologies used in the four key stages of power big data: state estimation, machine learning, and data-driven attack detection methods in the data collection stage; clock synchronization monitoring and defense strategies in the data transmission stage; blockchain-based data integrity verification and protection measures in the data processing and analysis stage; and traffic supervision, statistics, and elastic computing measures in the control and response stage. Finally, the limitations of attack detection mechanisms are identified and discussed from three dimensions: research problems, existing solutions, and future research directions. The review aims to provide useful references and inspiration for researchers in power big data security to promote technological progress in the safe and stable operation of power systems. Full article
(This article belongs to the Section Review)
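For context, the state-estimation detectors surveyed for the data collection stage typically build on the classic weighted-least-squares chi-square residual test. A textbook sketch, not any specific method from the review:

```python
import numpy as np
from scipy.stats import chi2

def residual_test(H, z, R, alpha=0.01):
    """Weighted-least-squares state estimation with a chi-square
    bad-data test. H: m x n measurement matrix, z: m measurements,
    R: m x m diagonal noise covariance."""
    W = np.diag(1.0 / np.diag(R))
    x_hat = np.linalg.solve(H.T @ W @ H, H.T @ W @ z)   # WLS estimate
    r = z - H @ x_hat                                   # measurement residual
    J = float(r @ W @ r)                                # weighted residual sum
    threshold = chi2.ppf(1 - alpha, df=len(z) - H.shape[1])
    return J > threshold, J, threshold    # True: false data suspected
```

Stealthy injections of the form a = Hc leave this residual unchanged, which is exactly why the machine learning and data-driven detectors covered by the review are needed for this stage.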

12 pages, 5648 KiB  
Article
Development and Integration of Carbon–Polydimethylsiloxane Sensors for Motion Sensing in Soft Pneumatic Actuators
by Ke Ma, Sihuan Wu, Yuquan Zheng, Maosen Shao, Jie Zhang, Jianing Wu and Jinxiu Zhang
Actuators 2024, 13(8), 285; https://doi.org/10.3390/act13080285 - 28 Jul 2024
Viewed by 164
Abstract
Drawing inspiration from the intricate soft structures found in nature, soft actuators possess the ability to incrementally execute complex tasks and adapt to dynamic and interactive environments. In particular, the integration of sensor data feedback allows actuators to respond to environmental stimuli with heightened intelligence. However, conventional rigid sensors are constrained by their inherent lack of flexibility. The current manufacturing processes for flexible sensors are complex and fail to align with the inherent simplicity of soft actuators. In this study, to facilitate the straightforward and consistent sensing of soft pneumatic actuators, carbon–polydimethylsiloxane (CPDMS) materials were employed, utilizing 3D printing and laser-cutting techniques to fabricate a flexible sensor with ease. The preparation of standard tensile specimens verified that the sensor exhibits a fatigue life extending to several hundred cycles and determined its gauge factor to be −3.2. Experimental results indicate that the sensor is suitable for application in soft pneumatic actuators. Additionally, a printed circuit board (PCB) was fabricated and the piecewise constant curvature (PCC) kinematic method was utilized to enable real-time pose estimation of the soft pneumatic actuator. Compared with computer vision methods, the pose estimation error obtained by the sensing method is as low as 4.26%. This work demonstrates that this easily fabricated sensor can deliver real-time pose feedback for flexible pneumatic actuators, thereby expanding the potential application scenarios for soft pneumatic actuators (SPAs). Full article
(This article belongs to the Special Issue Advanced Technologies in Soft Pneumatic Actuators)
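The piecewise constant curvature (PCC) kinematics used here maps a sensed curvature to a tip pose in closed form. A standard single-segment sketch, assuming the CPDMS sensor supplies the curvature:

```python
import numpy as np

def pcc_tip_pose(kappa, phi, L):
    """Tip position and bending angle of one constant-curvature segment.
    kappa: curvature (1/m), phi: bending-plane angle (rad), L: length (m).
    Standard PCC kinematics; how the paper converts strain readings to
    kappa is not reproduced here."""
    if abs(kappa) < 1e-9:                       # straight-segment limit
        return np.array([0.0, 0.0, L]), 0.0
    theta = kappa * L                           # total bending angle
    r = 1.0 / kappa                             # bending radius
    x = r * (1.0 - np.cos(theta))               # in-plane displacement
    p = np.array([x * np.cos(phi), x * np.sin(phi), r * np.sin(theta)])
    return p, theta
```

Multi-segment actuators chain such transforms segment by segment, which is how a real-time pose estimate can be driven directly by sensor feedback.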

16 pages, 3449 KiB  
Article
Prediction of Cell Survival Rate Based on Physical Characteristics of Heavy Ion Radiation
by Attila Debreceni, Zsolt Buri, István Csige and Sándor Bodzás
Toxics 2024, 12(8), 545; https://doi.org/10.3390/toxics12080545 - 27 Jul 2024
Viewed by 201
Abstract
The effect of ionizing radiation on cells is a complex process dependent on several parameters. Cancer treatment commonly involves the use of radiotherapy. In addition to the effective killing of cancer cells, another key aspect of radiotherapy is the protection of healthy cells. Heavy ion radiation occupies an interesting position in the field of radiotherapy due to its high relative biological effectiveness, making it an effective method of treatment. The high biological efficiency of heavy ion radiation can also pose a danger to healthy cells. The extent of cell death induced by heavy ion radiation in cells was investigated using statistical learning methods in this study. The objective was to predict the healthy cell survival rate based on the physical parameters of the available ionizing radiation. This paper is based on secondary research utilizing the PIDE database. Throughout this study, a local regression and a random forest model were generated. Their predictions were compared to the results of a linear-quadratic model commonly utilized in the field of ionizing radiation using various metrics. The relationship between dose and cell survival rate was examined using the linear-quadratic model (LQM) and local regression (LocReg). An R2 value of 88.43% was achieved for LQM and 89.86% for LocReg. Upon incorporating linear energy transfer, the random forest model attained an R2 value of 96.85%. In terms of RMSE, the linear-quadratic model yielded 9.59 × 10−2, the local regression 9.21 × 10−2, and the random forest 1.96 × 10−2 (lower values indicate better performance). All of these methods were also applied to a log-transformed dataset to decrease the right-skewness of the distribution of the datapoints. This significantly reduced the estimates made with LQM and LocReg (28% decrease in the case of R2), while the random forest retained nearly the same level of estimation as the untransformed data. In conclusion, it can be inferred that dose alone provides a somewhat satisfactory explanatory power for cell survival rate, but the inclusion of linear energy transfer can significantly enhance prediction accuracy in terms of variance and explanatory power. Full article
(This article belongs to the Section Metals and Radioactive Substances)
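The linear-quadratic model referenced here is S(D) = exp(−(αD + βD²)). A small sketch of one common way to fit it, via least squares on ln S (the paper's exact fitting procedure may differ):

```python
import numpy as np

def fit_lq(dose, survival):
    """Fit S = exp(-(alpha*D + beta*D^2)) by least squares on ln S.
    dose: array of doses (Gy), survival: surviving fractions in (0, 1]."""
    A = np.column_stack([dose, dose ** 2])
    (alpha, beta), *_ = np.linalg.lstsq(A, -np.log(survival), rcond=None)
    return alpha, beta

# Predicted survival at 2 Gy for hypothetical alpha, beta values
alpha, beta = 0.2, 0.05
print(np.exp(-(alpha * 2 + beta * 2 ** 2)))   # ~0.55 surviving fraction
```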

26 pages, 2610 KiB  
Article
Fixed-Wing UAV Pose Estimation Using a Self-Organizing Map and Deep Learning
by Nuno Pessanha Santos
Robotics 2024, 13(8), 114; https://doi.org/10.3390/robotics13080114 - 27 Jul 2024
Viewed by 189
Abstract
In many Unmanned Aerial Vehicle (UAV) operations, accurately estimating the UAV’s position and orientation over time is crucial for controlling its trajectory. This is especially important during the landing maneuver, where a ground-based camera system can estimate the UAV’s 3D position and orientation. A Red, Green, and Blue (RGB) ground-based monocular approach can be used for this purpose, allowing for more complex algorithms and higher processing power. The proposed method uses a hybrid Artificial Neural Network (ANN) model, incorporating a Kohonen Neural Network (KNN), or Self-Organizing Map (SOM), to identify feature points representing a cluster obtained from a binary image containing the UAV. A Deep Neural Network (DNN) architecture is then used to estimate the actual UAV pose, including translation and orientation, from a single frame. Using the UAV Computer-Aided Design (CAD) model, the network can be trained on a synthetic dataset and then fine-tuned via transfer learning to deal with real data. The experimental results demonstrate that the system achieves high accuracy, characterized by low errors in UAV pose estimation. This implementation paves the way for automating operational tasks like autonomous landing, which is especially hazardous and prone to failure. Full article
(This article belongs to the Special Issue UAV Systems and Swarm Robotics)
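A self-organizing map reduces the UAV's binary silhouette to a small, ordered set of feature points for the pose network. A compact generic SOM sketch; the 1-D topology and hyperparameters are illustrative, not the paper's configuration:

```python
import numpy as np

def som_feature_points(binary_img, n_nodes=8, iters=500, lr=0.5, seed=0):
    """Fit a tiny 1-D self-organizing map to the foreground pixels of a
    binary image, yielding an ordered set of feature points (a simplified
    stand-in for the paper's KNN/SOM stage)."""
    rng = np.random.default_rng(seed)
    pts = np.argwhere(binary_img > 0).astype(float)     # (row, col) coords
    nodes = pts[rng.choice(len(pts), n_nodes, replace=False)]
    for t in range(iters):
        p = pts[rng.integers(len(pts))]                 # random sample
        bmu = np.argmin(np.linalg.norm(nodes - p, axis=1))  # best match
        sigma = max(1.0, (n_nodes / 2) * (1 - t / iters))   # shrinking radius
        decay = lr * (1 - t / iters)
        for j in range(n_nodes):
            h = np.exp(-((j - bmu) ** 2) / (2 * sigma ** 2))
            nodes[j] += decay * h * (p - nodes[j])
    return nodes        # n_nodes x 2 feature points for the pose DNN
```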
25 pages, 8213 KiB  
Article
Automatic Perception of Typical Abnormal Situations in Cage-Reared Ducks Using Computer Vision
by Shida Zhao, Zongchun Bai, Lianfei Huo, Guofeng Han, Enze Duan, Dongjun Gong and Liaoyuan Gao
Animals 2024, 14(15), 2192; https://doi.org/10.3390/ani14152192 - 27 Jul 2024
Viewed by 151
Abstract
Overturning and death are common abnormalities in cage-reared ducks. To achieve timely and accurate detection, this study focused on 10-day-old cage-reared ducks, which are prone to these conditions, and established prior data on such situations. Using the original YOLOv8 as the base network, multiple GAM attention mechanisms were embedded into the feature fusion part (neck) to enhance the network’s focus on the abnormal regions in images of cage-reared ducks. Additionally, the Wise-IoU loss function replaced the CIoU loss function, employing a dynamic non-monotonic focusing mechanism to balance the data samples and mitigate excessive penalties from geometric parameters in the model. The image brightness was adjusted by factors of 0.85 and 1.25, and mainstream object-detection algorithms were adopted to test and compare the generalization and performance of the proposed method. Based on six key points around the head, beak, chest, tail, left foot, and right foot of cage-reared ducks, the body structure of the abnormal ducks was refined. Accurate estimation of the overturning and dead postures was achieved using HRNet-48. The results demonstrated that the proposed method accurately recognized these states, achieving a mean Average Precision (mAP) value of 0.924, which was 1.65% higher than that of the original YOLOv8. The method effectively addressed the recognition interference caused by lighting differences and exhibited excellent generalization ability and comprehensive detection performance. Furthermore, the proposed abnormal cage-reared duck pose-estimation model achieved an Object Keypoint Similarity (OKS) value of 0.921, with a single-frame processing time of 0.528 s, accurately detecting multiple key points on the abnormal cage-reared ducks’ bodies and generating correct posture expressions. Full article
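The Object Keypoint Similarity (OKS) score reported above is a standard pose-estimation metric; a compact sketch of its usual COCO-style form (the per-keypoint falloff constants k are dataset-specific assumptions):

```python
import numpy as np

def oks(pred, gt, visible, area, k):
    """COCO-style Object Keypoint Similarity. pred, gt: (N, 2) arrays
    of keypoints; visible: (N,) boolean mask; area: object scale
    (pixels^2); k: (N,) per-keypoint falloff constants."""
    d2 = np.sum((pred - gt) ** 2, axis=1)       # squared pixel distances
    e = d2 / (2.0 * area * k ** 2 + 1e-12)
    return float(np.exp(-e)[visible].sum() / max(visible.sum(), 1))
```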

43 pages, 431 KiB  
Article
Setting Ranges in Potential Biomarkers for Type 2 Diabetes Mellitus Patients Early Detection By Sex—An Approach with Machine Learning Algorithms
by Jorge A. Morgan-Benita, José M. Celaya-Padilla, Huizilopoztli Luna-García, Carlos E. Galván-Tejada, Miguel Cruz, Jorge I. Galván-Tejada, Hamurabi Gamboa-Rosales, Ana G. Sánchez-Reyna, David Rondon and Klinge O. Villalba-Condori
Diagnostics 2024, 14(15), 1623; https://doi.org/10.3390/diagnostics14151623 - 27 Jul 2024
Viewed by 380
Abstract
Type 2 diabetes mellitus (T2DM) is one of the most common metabolic diseases in the world and poses a significant public health challenge. Early detection and management of this metabolic disorder is crucial to prevent complications and improve outcomes. This paper aims to find core differences in male and female markers to detect T2DM by their clinic and anthropometric features, seeking out ranges in potential biomarkers identified to provide useful information as a pre-diagnostic tool while excluding glucose-related biomarkers using machine learning (ML) models. We used a dataset containing clinical and anthropometric variables from patients diagnosed with T2DM and patients without T2DM as controls. We applied feature selection with three different techniques to identify relevant biomarker models: an improved recursive feature elimination (RFE) evaluating each set from all the features down to one feature with the Akaike information criterion (AIC) to find optimal outputs; Least Absolute Shrinkage and Selection Operator (LASSO) with glmnet; and Genetic Algorithms (GA) with GALGO and forward selection (FS) applied to the GALGO output. We then used these for comparison with the AIC to measure the performance of each technique and collect the optimal set of global features. Then, an implementation and comparison of five different ML models was carried out to identify the most accurate and interpretable one, considering the following models: logistic regression (LR), artificial neural network (ANN), support vector machine (SVM), k-nearest neighbors (KNN), and nearest centroid (Nearcent). The models were then combined in an ensemble to provide a more robust approximation. The results showed that potential biomarkers such as systolic blood pressure (SBP) and triglycerides are together significantly associated with T2DM. This approach also identified triglycerides, cholesterol, and diastolic blood pressure as biomarkers with differences between male and female subjects that have not been previously reported in the literature. The most accurate ML model used RFE with random forest (RF) as the estimator, improved with the AIC, and achieved an accuracy of 0.8820. In conclusion, this study demonstrates the potential of ML models in identifying potential biomarkers for early detection of T2DM, excluding glucose-related biomarkers, as well as differences between male and female anthropometric and clinic profiles. These findings may help to improve early detection and management of T2DM by accounting for differences between male and female subjects in terms of anthropometric and clinic profiles, potentially reducing healthcare costs and improving personalized patient attention. Further research is needed to validate these potential biomarker ranges in other populations and clinical settings. Full article
(This article belongs to the Section Machine Learning and Artificial Intelligence in Diagnostics)
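Recursive feature elimination with a random forest estimator, the best-performing selector here, is readily sketched with scikit-learn; the paper's AIC-based improvement is not reproduced:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE

# Synthetic stand-in for the clinical/anthropometric dataset
X, y = make_classification(n_samples=500, n_features=20,
                           n_informative=6, random_state=0)

# Plain RFE with a random forest estimator; the paper's variant
# additionally scores each candidate subset with the AIC.
selector = RFE(RandomForestClassifier(n_estimators=200, random_state=0),
               n_features_to_select=6).fit(X, y)
print("selected feature indices:", selector.get_support(indices=True))
```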

31 pages, 13800 KiB  
Article
Analysis of Debris Flow Protective Barriers Using the Coupled Eulerian Lagrangian Method
by Shiyin Sha, Ashley P. Dyson, Gholamreza Kefayati and Ali Tolooiyan
Geosciences 2024, 14(8), 198; https://doi.org/10.3390/geosciences14080198 - 26 Jul 2024
Viewed by 186
Abstract
Protective structures play a vital role in mitigating the risks associated with debris flows, yet assessing their performance poses crucial challenges for their real-world effectiveness. This study proposes a comprehensive procedure for evaluating the performance of protective structures exposed to impacts from media transported by large debris flow events. The method combines numerical modelling with site conditions for existing structures along the Hobart Rivulet in Tasmania, Australia. The Coupled Eulerian Lagrangian (CEL) model was validated by comparing simulation results with experimental data, demonstrating high agreement. Utilising three-dimensional modelling of debris flow–boulder interactions over the Hobart Rivulet terrain, boulder velocities were estimated for subsequent finite element analyses. Importantly, a model of interaction between boulders and I-beam posts was established, facilitating a comparative assessment of five distinct I-beam barrier systems, defined as Type A to E, which are currently in use at the site. Simulation results reveal that larger boulders display a slower increase in their velocities over the 3D terrain. A key metric, the failure ratio, is introduced to enable comparative assessment of these barrier systems. Notably, the Type E barriers demonstrate superior performance due to fewer weak points within the structure. The combined CEL and FE assessments allow multiple aspects of the interactions between debris flows, boulders, and structures to be considered, including structural failure and deformability, to enhance the understanding of debris flow risk mitigation in Tasmania. Full article
(This article belongs to the Section Natural Hazards)

31 pages, 6736 KiB  
Article
Multi-Step Procedure for Predicting Early-Age Thermal Cracking Risk in Mass Concrete Structures
by Barbara Klemczak and Aneta Smolana
Materials 2024, 17(15), 3700; https://doi.org/10.3390/ma17153700 - 26 Jul 2024
Viewed by 312
Abstract
Early-age cracking in mass concrete structures resulting from thermal stress is a well-documented phenomenon that impacts their functionality, durability, and integrity. The primary cause of these cracks is the uneven temperature rise within the structure due to the exothermic nature of cement hydration. Assessing the likelihood of cracking involves comparing the tensile strength or strain capacity of the concrete with the stresses or strains experienced by the structure. Challenges in evaluating the risk of thermal cracking in mass concrete structures stem from various material and technological factors that influence the magnitude and progression of hydration heat-induced temperature and thermal stress. These complexities can be addressed through numerical analysis, particularly finite element analysis (FEA), which offers comprehensive modeling of early-age effects by considering all pertinent material and technological variables. However, employing FEA poses challenges such as the requirement for numerous input parameters, which may be challenging to define, and the need for specialized software not commonly available to structural engineers. Consequently, the necessity for such advanced modeling, which demands significant time investment, may not always be warranted and should be initially assessed through simpler methods. This is primarily because the definition of massive structures—those susceptible to adverse effects such as cracking due to temperature rise from hydration heat—is not precise. To address these challenges, the authors propose a three-step method for evaluating structures in this regard. The first step involves a simplified method for the classification of massive structures. The second step entails estimating hardening temperatures and levels of thermal stress using straightforward analytical techniques. The third step, reserved for structures identified as having a potential risk of early thermal cracks, involves numerical modeling. The outlined procedure is illustrated with an example application, demonstrating its practicality in analyzing a massive concrete wall constructed on the foundation. Full article
(This article belongs to the Special Issue Masonry Structures and Reinforced Concrete Structures (2nd Edition))
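The "straightforward analytical techniques" of step two are hand calculations; one common example is the adiabatic temperature rise estimate ΔT = C·Q/(ρ·c). A sketch with illustrative material values, not the authors' exact method:

```python
def adiabatic_temp_rise(cement_kg_m3, heat_kj_kg,
                        density_kg_m3=2400.0, specific_heat_kj_kgK=1.0):
    """Upper-bound adiabatic temperature rise of hardening concrete:
    Delta_T = C * Q / (rho * c). All material values are illustrative."""
    return cement_kg_m3 * heat_kj_kg / (density_kg_m3 * specific_heat_kj_kgK)

# e.g. 350 kg/m3 of binder releasing roughly 400 kJ/kg of hydration heat
print(f"{adiabatic_temp_rise(350, 400):.0f} K")   # about 58 K
```

An estimate of this size, combined with the restraint conditions, is what flags a member for the full step-three FEA.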

18 pages, 4518 KiB  
Article
Phase Calibration in Holographic Synthetic Aperture Radar: An Innovative Method for Vertical Shift Correction
by Fengzhuo Huang, Dong Feng, Yangsheng Hua, Shaodi Ge, Junhao He and Xiaotao Huang
Remote Sens. 2024, 16(15), 2728; https://doi.org/10.3390/rs16152728 - 25 Jul 2024
Viewed by 271
Abstract
Holographic synthetic aperture radar (HoloSAR) introduces a cutting-edge three-dimensional (3-D) imaging mode to the field of synthetic aperture radar (SAR), enriching the scattering information of targets by observing them across multiple spatial dimensions. However, independent phase errors among baselines, such as those caused by platform jitter and measurement inaccuracies, pose significant challenges to imaging quality. The phase gradient autofocus (PGA) method effectively estimates phase errors but struggles to accurately estimate their linear component, causing a vertical shift in the HoloSAR subaperture imaging results. Therefore, this paper proposes a PGA-based phase error compensation method for HoloSAR to address the vertical shift issue caused by linear phase errors. The method achieves phase error correction in both the echo domain and the image domain with enhanced efficiency. Experimental results on simulated targets and real data from the GOTCHA system demonstrate the effectiveness and practicality of the proposed method. Full article
(This article belongs to the Special Issue Spaceborne SAR Calibration Technology)
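For orientation, a generic single-pass phase gradient autofocus looks as follows. The final detrending step is precisely where the linear component, and hence the vertical-shift information the paper targets, is discarded; this is textbook PGA, not the proposed HoloSAR variant:

```python
import numpy as np

def pga_iteration(img):
    """One pass of textbook phase gradient autofocus over a complex
    image (rows: range bins, cols: azimuth). Windowing omitted for
    brevity."""
    rows, cols = img.shape
    g = np.empty_like(img)
    for i in range(rows):       # center the brightest scatterer per bin
        g[i] = np.roll(img[i], cols // 2 - int(np.argmax(np.abs(img[i]))))
    G = np.fft.fft(g, axis=1)   # back to the azimuth phase-history domain
    # ML estimate of the pulse-to-pulse phase gradient, pooled over bins
    grad = np.angle(np.sum(np.conj(G[:, :-1]) * G[:, 1:], axis=0))
    phi = np.concatenate([[0.0], np.cumsum(grad)])      # integrate
    n = np.arange(cols)
    phi -= np.polyval(np.polyfit(n, phi, 1), n)         # drop linear term
    return phi   # apply exp(-1j * phi) per pulse, re-image, iterate
```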

17 pages, 825 KiB  
Article
Phase-Based and Lifetime Health System Costs of Care for Patients Diagnosed with Leukemia and Lymphoma: A Population-Based Descriptive Study
by Anubhav Agarwal, Natasha Kekre, Harold Atkins, Haris Imsirovic, Brian Hutton, Doug Coyle and Kednapa Thavorn
Curr. Oncol. 2024, 31(8), 4192-4208; https://doi.org/10.3390/curroncol31080313 - 25 Jul 2024
Viewed by 254
Abstract
Hematologic cancers, notably leukemias and lymphomas, pose significant challenges to healthcare systems globally, due to rising incidence rates and increasing costs. This study aimed to estimate the phase and lifetime health system total costs (not net costs) of care for patients diagnosed with leukemia and lymphoma in Ontario, Canada. We conducted a population-based study of patients diagnosed between 2005 and 2019, using data from the Ontario Cancer Registry linked with health administrative databases. Costs were estimated using a phase-based approach and stratified by care phase and cancer subtype. Acute lymphocytic leukemia (ALL) patients had the highest mean monthly initial (CAD 19,519) and terminal (CAD 41,901) costs among all cancer subtypes, while acute myeloid leukemia (AML) patients had the highest mean monthly cost (CAD 7185) during the continuing phase. Overall lifetime costs were highest for ALL patients (CAD 778,795), followed by AML patients (CAD 478,516). Comparatively, patients diagnosed with Hodgkin lymphoma (CAD 268,184) and non-Hodgkin lymphoma (CAD 321,834) had lower lifetime costs. Major cost drivers included inpatient care, emergency department visits, same-day surgeries, ambulatory services, and specialized cancer drugs. Since 2005, the cost structure has evolved with rising proportions of interventional drug costs. Additionally, costs were higher among males and younger age groups. Understanding these costs can help guide initiatives to control healthcare spending and improve cancer care quality. Full article
(This article belongs to the Section Health Economics)
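Phase-based costing assigns each observed cost month to an initial, continuing, or terminal phase before averaging. A toy sketch with hypothetical records and illustrative 12-month windows (the study's actual phase definitions may differ):

```python
import pandas as pd

def phase_of(month_from_dx, months_to_death, initial=12, terminal=12):
    """Assign a cost month to a care phase, following the usual
    phase-based costing convention (window lengths illustrative)."""
    if months_to_death is not None and months_to_death < terminal:
        return "terminal"
    if month_from_dx < initial:
        return "initial"
    return "continuing"

# toy records: (months since diagnosis, months until death or None, cost CAD)
records = [(0, None, 21000), (3, None, 18000), (30, None, 6800),
           (58, 2, 40000), (60, 0, 44000)]
df = pd.DataFrame(records, columns=["m_dx", "m_death", "cost"])
df["phase"] = [phase_of(a, b) for a, b in zip(df.m_dx, df.m_death)]
print(df.groupby("phase")["cost"].mean())   # mean monthly cost per phase
```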

20 pages, 2787 KiB  
Article
Performance Investigations of VSLAM and Google Street View Integration in Outdoor Location-Based Augmented Reality under Various Lighting Conditions
by Komang Candra Brata, Nobuo Funabiki, Prismahardi Aji Riyantoko, Yohanes Yohanie Fridelin Panduman and Mustika Mentari
Electronics 2024, 13(15), 2930; https://doi.org/10.3390/electronics13152930 - 24 Jul 2024
Viewed by 502
Abstract
The growing demand for Location-based Augmented Reality (LAR) experiences has driven the integration of Visual Simultaneous Localization And Mapping (VSLAM) with Google Street View (GSV) to enhance accuracy. However, the impact of ambient light intensity on accuracy and reliability is underexplored, posing significant challenges in outdoor LAR implementations. This paper investigates the impact of light conditions on the accuracy and reliability of the VSLAM/GSV integration approach in outdoor LAR implementations. This study fills a gap in the current literature and offers valuable insights into vision-based approach implementation under different light conditions. Extensive experiments were conducted at five Point of Interest (POI) locations under various light conditions with a total of 100 datasets. Descriptive statistical methods were employed to analyze the data and assess the performance variation. Additionally, Analysis of Variance (ANOVA) was utilized to assess the impact of different light conditions on the accuracy metric and horizontal tracking time, determining whether there are significant differences in performance across varying levels of light intensity. The experimental results revealed a significant correlation (p < 0.05) between ambient light intensity and the accuracy of the VSLAM/GSV integration approach. Through confidence interval estimation, a minimum illuminance of 434 lx was found to be needed for feasible and consistent accuracy. Variations in visual references, such as wet surfaces in the rainy season, also impact the horizontal tracking time and accuracy. Full article
(This article belongs to the Special Issue Perception and Interaction in Mixed, Augmented, and Virtual Reality)
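The one-way ANOVA used to test the effect of light intensity is a one-liner with SciPy; the samples below are hypothetical stand-ins for the study's 100 datasets:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical positional-error samples (m) under three light levels
low    = rng.normal(1.8, 0.5, 20)    # below ~434 lx
medium = rng.normal(1.1, 0.3, 20)
high   = rng.normal(0.9, 0.3, 20)

f_stat, p_value = stats.f_oneway(low, medium, high)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")  # p < 0.05: light matters
```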

19 pages, 1918 KiB  
Article
3D Human Pose Estimation Based on Wearable IMUs and Multiple Camera Views
by Mingliang Chen and Guangxing Tan
Electronics 2024, 13(15), 2926; https://doi.org/10.3390/electronics13152926 - 24 Jul 2024
Viewed by 327
Abstract
The problem of 3D human pose estimation (HPE) has been the focus of research in recent years, yet precise estimation remains an under-explored challenge. In this paper, the merits of both multiview images and wearable IMUs are combined to enhance the process of 3D HPE. We build upon a state-of-the-art baseline while introducing three novelties. First, we enhance the precision of keypoint localization by substituting Gaussian kernels with Laplacian kernels in the generation of target heatmaps. Second, we incorporate an orientation regularized network (ORN), which enhances cross-modal heatmap fusion by taking a weighted average of the top-scored values instead of relying solely on the maximum value. This not only improves robustness to outliers but also leads to higher accuracy in pose estimation. Lastly, we modify the limb length constraint in the conventional orientation regularized pictorial structure model (ORPSM) to improve the estimation of joint positions. Specifically, we devise a soft-coded binary term for the limb length constraint, imposing a flexible and smoothed penalization and reducing sensitivity to hyperparameters. The experimental results using the TotalCapture dataset reveal a significant improvement, with a 10.3% increase in PCKh accuracy at the one-twelfth threshold and a 3.9 mm reduction in MPJPE error compared to the baseline. Full article
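The first novelty, swapping Gaussian for Laplacian kernels in the target heatmaps, amounts to changing the falloff around each keypoint. A sketch of both kernels; the L1 form of the Laplacian kernel is an assumption, as the paper may use the L2 form:

```python
import numpy as np

def keypoint_heatmap(shape, center, scale=3.0, kind="laplacian"):
    """Target heatmap for one keypoint at center = (x, y).
    'gaussian' is the conventional choice; 'laplacian' decays as
    exp(-d/scale), giving a sharper, more localized peak."""
    ys, xs = np.mgrid[0:shape[0], 0:shape[1]]
    if kind == "gaussian":
        d2 = (xs - center[0]) ** 2 + (ys - center[1]) ** 2
        return np.exp(-d2 / (2 * scale ** 2))
    d = np.abs(xs - center[0]) + np.abs(ys - center[1])   # L1 distance
    return np.exp(-d / scale)
```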

20 pages, 19501 KiB  
Article
Unintended Consequences of Urban Expansion and Gold Mining: Elevated Indoor Radon Levels in Gauteng Communities’ Neighboring Gold Mine Tailings
by Khathutshelo Vincent Mphaga, Wells Utembe, Busisiwe Shezi, Thokozani P. Mbonane and Phoka C. Rathebe
Atmosphere 2024, 15(8), 881; https://doi.org/10.3390/atmos15080881 - 24 Jul 2024
Viewed by 357
Abstract
The province of Gauteng in South Africa has a rich history of gold mining, which has driven economic growth and urbanization. Gold mining has also created over 270 gold mine tailings (GMT), now surrounded by human settlements due to a housing shortage. These GMT pose a health risk as they harbor elevated uranium, which over time undergoes radioactive decay to produce radon, a known lung carcinogen. This study aimed to investigate the potential correlation between proximity to gold mine tailings and indoor radon concentrations in Gauteng’s residential dwellings. The volume activity of radon (VAR) was measured in 330 residential dwellings located proximally (<2 km) and distally (>2 km) to gold mine tailings using AlphaE radon monitors during winter. An interviewer-administered questionnaire was utilized to obtain data on factors that may influence indoor radon activities. Descriptive statistics and bivariate logistic regression analyzed the influence of proximity to gold mine tailings and dwelling characteristics on VAR. Furthermore, VAR was compared to the World Health Organization (WHO) radon reference level of 100 Bq/m3. Residential dwellings near gold mine tailings had significantly higher average indoor radon concentrations (103.30 Bq/m3) compared to the control group (65.19 Bq/m3). Residential dwellings proximal to gold mine tailings were three times more likely to have VAR beyond the WHO reference level of 100 Bq/m3. Furthermore, they had estimated annual effective doses of 2.60 mSv/y compared to 1.64 mSv/y for the control group. This study highlights a concerning association between proximity to gold mine tailings and elevated indoor radon levels. Public health interventions prioritizing residential dwellings near gold mine tailings are crucial. Educational campaigns and financial assistance for radon mitigation systems in high-risk dwellings are recommended. Residents near gold mine tailings are encouraged to ensure continuous natural ventilation through frequent opening of windows and doors. Full article
(This article belongs to the Section Air Quality)
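The reported annual effective doses follow from the standard UNSCEAR-style conversion E = C·F·T·DCF; with the usual default parameters (assumed here, not stated in the abstract), the paper's figures are reproduced:

```python
def annual_effective_dose(radon_bq_m3, eq_factor=0.4, hours=7000, dcf=9e-6):
    """Indoor radon dose E = C * F * T * DCF in mSv/y.
    eq_factor and hours are the usual UNSCEAR defaults (assumed);
    dcf is 9e-6 mSv per (Bq h m^-3)."""
    return radon_bq_m3 * eq_factor * hours * dcf

print(round(annual_effective_dose(103.30), 2))  # 2.60 mSv/y, proximal group
print(round(annual_effective_dose(65.19), 2))   # 1.64 mSv/y, control group
```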

28 pages, 9021 KiB  
Article
Entropy-Based Strategies for Multi-Bracket Pools
by Ryan S. Brill, Abraham J. Wyner and Ian J. Barnett
Entropy 2024, 26(8), 615; https://doi.org/10.3390/e26080615 - 23 Jul 2024
Viewed by 227
Abstract
Much work in the parimutuel betting literature has discussed estimating event outcome probabilities or developing optimal wagering strategies, particularly for horse race betting. Some betting pools, however, involve betting not just on a single event, but on a tuple of events. For example, pick six betting in horse racing, March Madness bracket challenges, and predicting a randomly drawn bitstring each involve making a series of individual forecasts. Although traditional optimal wagering strategies work well when the size of the tuple is very small (e.g., betting on the winner of a horse race), they are intractable for more general betting pools in higher dimensions (e.g., March Madness bracket challenges). Hence we pose the multi-brackets problem: supposing we wish to predict a tuple of events and that we know the true probabilities of each potential outcome of each event, what is the best way to tractably generate a set of n predicted tuples? The most general version of this problem is extremely difficult, so we begin with a simpler setting. In particular, we generate n independent predicted tuples according to a distribution having optimal entropy. This entropy-based approach is tractable, scalable, and performs well. Full article
(This article belongs to the Section Multidisciplinary Applications)
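The strategy generates n independent predicted tuples by sampling each event's outcome. A minimal sketch that samples from the known outcome probabilities; the paper instead tunes the entropy of the sampling distribution:

```python
import numpy as np

def sample_tuples(event_probs, n, seed=0):
    """Draw n independent predicted tuples, sampling each event's
    outcome from its own distribution. Using the true probabilities
    directly is a simple baseline, not the paper's optimized choice."""
    rng = np.random.default_rng(seed)
    return [tuple(int(rng.choice(len(p), p=p)) for p in event_probs)
            for _ in range(n)]

# three events with known outcome probabilities
probs = [np.array([0.7, 0.3]), np.array([0.5, 0.5]), np.array([0.9, 0.1])]
print(sample_tuples(probs, n=5))
```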
