
Search Results (157)

Search Parameters:
Keywords = analytical redundancy

29 pages, 9259 KiB  
Article
Enhancing Laser-Induced Breakdown Spectroscopy Spectral Quantification Through Minimum Redundancy and Maximum Relevance-Based Feature Selection
by Manping Wang, Yang Lu, Man Liu, Fuhui Cui, Rongke Gao, Feifei Wang, Xiaozhe Chen and Liandong Yu
Remote Sens. 2025, 17(3), 416; https://doi.org/10.3390/rs17030416 - 25 Jan 2025
Viewed by 352
Abstract
Laser-induced breakdown spectroscopy (LIBS) is a rapid, non-contact analytical technique that is widely applied in various fields. However, the high dimensionality and information redundancy of LIBS spectral data present challenges for effective model development. This study aims to assess the effectiveness of the minimum redundancy and maximum relevance (mRMR) method for feature selection in LIBS spectral data and to explore its adaptability across different predictive modeling approaches. Using the ChemCam LIBS dataset, we constructed predictive models with four quantitative methods: random forest (RF), support vector regression (SVR), back propagation neural network (BPNN), and partial least squares regression (PLSR). We compared the performance of mRMR-based feature selection with that of full-spectrum data and three other feature selection methods: competitive adaptive re-weighted sampling (CARS), Regressional ReliefF (RReliefF), and neighborhood component analysis (NCA). Our results demonstrate that the mRMR method significantly reduces the number of selected features while improving model performance. This study validates the effectiveness of the mRMR algorithm for LIBS feature extraction and highlights the potential of feature selection techniques to enhance predictive accuracy. The findings provide a valuable strategy for feature selection in LIBS data analysis and offer significant implications for the practical application of LIBS in predicting elemental content in geological samples.
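For readers who want a concrete starting point, a greedy mRMR pass with the mutual-information-difference criterion can be sketched as below. This is an illustration only, not the authors' implementation; the names X (spectra matrix) and y (elemental content) are assumptions.

```python
# Minimal greedy mRMR sketch (MID criterion): pick the feature with the best
# relevance-minus-redundancy trade-off at each step. Illustrative only.
import numpy as np
from sklearn.feature_selection import mutual_info_regression

def mrmr_select(X, y, k=20):
    relevance = mutual_info_regression(X, y)           # I(feature; target)
    selected = [int(np.argmax(relevance))]             # seed with most relevant
    candidates = set(range(X.shape[1])) - set(selected)
    while len(selected) < k and candidates:
        best_j, best_score = None, -np.inf
        for j in candidates:
            # redundancy: mean MI between candidate j and already-chosen features
            redundancy = mutual_info_regression(X[:, selected], X[:, j]).mean()
            score = relevance[j] - redundancy
            if score > best_score:
                best_j, best_score = j, score
        selected.append(best_j)
        candidates.remove(best_j)
    return selected
```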
34 pages, 3199 KiB  
Article
A Hyper-Parameter Optimizer Algorithm Based on Conditional Opposition Local-Based Learning Forbidden Redundant Indexes Adaptive Artificial Bee Colony Applied to Regularized Extreme Learning Machine
by Philip Vasquez-Iglesias, Amelia E. Pizarro, David Zabala-Blanco, Juan Fuentes-Concha, Roberto Ahumada-Garcia, David Laroze and Paulo Gonzalez
Electronics 2024, 13(23), 4652; https://doi.org/10.3390/electronics13234652 - 25 Nov 2024
Viewed by 566
Abstract
Finding the best configuration of a neural network’s hyper-parameters may take too long to be feasible using an exhaustive search, especially when the search space contains a combinatorially large number of possible solutions across the various hyper-parameters. This problem is aggravated when we also need to optimize the parameters of the neural network, such as the weights of the hidden neurons and the biases. Extreme learning machines (ELMs) are part of the random weights neural network family, in which parameters are randomly initialized and the solution, unlike in gradient-descent-based algorithms, can be found analytically. This ability is especially useful for metaheuristic analysis due to its reduced training times, allowing a faster optimization process, but the problem of finding the best hyper-parameter configuration remains. In this paper, we propose a modification of the artificial bee colony (ABC) metaheuristic to act as parameterizers for a regularized ELM, incorporating three methods: an adaptive mechanism for ABC to balance exploration (global search) and exploitation (local search), an adaptation of the opposition-based learning technique called opposition local-based learning (OLBL) to strengthen exploitation, and a record of accesses to the search space called forbidden redundant indexes (FRI) that allows us to avoid redundant calculations and track the explored percentage of the search space. We set ten parameterizations applying different combinations of the proposed methods, limiting them to explore up to approximately 10% of the search space, achieving over 98% of the maximum performance obtained by exhaustive search on binary and multiclass datasets. The results demonstrate a promising use of these parameterizations to optimize the hyper-parameters of the R-ELM in datasets with different characteristics in cases where computational efficiency is required, with the possibility of extending their use to other problems with similar characteristics with minor modifications, such as the parameterization of support vector machines, digital image filters, and other neural networks.
(This article belongs to the Section Computer Science & Engineering)
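As context for why ELMs suit metaheuristic search, a regularized ELM is trained with a single analytic solve for the output weights; everything else is random. A minimal sketch, assuming a tanh activation and ridge penalty 1/C; n_hidden and C are exactly the kind of hyper-parameters the proposed ABC variant would tune.

```python
# Minimal regularized ELM (R-ELM) sketch: random hidden layer, analytic
# ridge solve for the output weights. Illustrative only, not the paper's code.
import numpy as np

def train_relm(X, y, n_hidden=100, C=1.0, seed=0):
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))  # random input weights
    b = rng.standard_normal(n_hidden)                # random biases
    H = np.tanh(X @ W + b)                           # hidden-layer outputs
    # analytic output weights: beta = (H^T H + I/C)^(-1) H^T y
    beta = np.linalg.solve(H.T @ H + np.eye(n_hidden) / C, H.T @ y)
    return W, b, beta

def predict_relm(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta
```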

34 pages, 7573 KiB  
Article
FTDZOA: An Efficient and Robust FS Method with Multi-Strategy Assistance
by Fuqiang Chen, Shitong Ye, Lijuan Xu and Rongxiang Xie
Biomimetics 2024, 9(10), 632; https://doi.org/10.3390/biomimetics9100632 - 17 Oct 2024
Viewed by 774
Abstract
Feature selection (FS) is a pivotal technique in big data analytics, aimed at mitigating redundant information within datasets and optimizing computational resource utilization. This study introduces an enhanced zebra optimization algorithm (ZOA), termed FTDZOA, for superior feature dimensionality reduction. To address the challenges of ZOA, such as susceptibility to local optimal feature subsets, limited global search capabilities, and sluggish convergence when tackling FS problems, three strategies are integrated into the original ZOA to bolster its FS performance. Firstly, a fractional order search strategy is incorporated to preserve information from the preceding generations, thereby enhancing ZOA’s exploitation capabilities. Secondly, a triple mean point guidance strategy is introduced, amalgamating information from the global optimal point, a random point, and the current point to effectively augment ZOA’s exploration prowess. Lastly, the exploration capacity of ZOA is further elevated through the introduction of a differential strategy, which integrates information disparities among different individuals. Subsequently, the FTDZOA-based FS method was applied to solve 23 FS problems spanning low, medium, and high dimensions. A comparative analysis with nine advanced FS methods revealed that FTDZOA achieved higher classification accuracy on over 90% of the datasets and secured a winning rate exceeding 83% in terms of execution time. These findings confirm that FTDZOA is a reliable, high-performance, practical, and robust FS method.
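Wrapper FS methods of this kind typically score a candidate binary feature mask with a fitness that balances classification accuracy against subset size. The sketch below shows only that generic pattern; the alpha weight and the KNN classifier are assumptions, not the paper's configuration.

```python
# Generic wrapper-FS fitness of the kind a metaheuristic like FTDZOA
# maximizes: CV accuracy on the masked features plus a small reward for
# compact subsets. Illustrative sketch only.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def fs_fitness(mask, X, y, alpha=0.99):
    if not mask.any():                       # reject empty feature subsets
        return 0.0
    acc = cross_val_score(KNeighborsClassifier(n_neighbors=5),
                          X[:, mask], y, cv=5).mean()
    return alpha * acc + (1 - alpha) * (1 - mask.sum() / mask.size)
```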

19 pages, 6483 KiB  
Article
Rapid Lactic Acid Content Detection in Secondary Fermentation of Maize Silage Using Colorimetric Sensor Array Combined with Hyperspectral Imaging
by Xiaoyu Xue, Haiqing Tian, Kai Zhao, Yang Yu, Ziqing Xiao, Chunxiang Zhuo and Jianying Sun
Agriculture 2024, 14(9), 1653; https://doi.org/10.3390/agriculture14091653 - 22 Sep 2024
Cited by 1 | Viewed by 987
Abstract
Lactic acid content is a crucial indicator for evaluating maize silage quality, and its accurate detection is essential for ensuring product quality. In this study, a quantitative prediction model for the change of lactic acid content during the secondary fermentation of maize silage was constructed based on a colorimetric sensor array (CSA) combined with hyperspectral imaging. Volatile odor information from maize silage samples with different days of aerobic exposure was obtained using CSA and recorded by a hyperspectral imaging (HSI) system. Subsequently, the acquired spectral data were preprocessed using five distinct methods before being modeled with partial least squares regression (PLSR). The coronavirus herd immunity optimizer (CHIO) algorithm was introduced to screen the three color-sensitive dyes most sensitive to changes in the lactic acid content of maize silage. To minimize model redundancy, three algorithms, including competitive adaptive reweighted sampling (CARS), were used to extract the characteristic wavelengths of the three dyes, and the combination of the characteristic wavelengths obtained by each algorithm was used as the input variable set for building an analytical model for quantitative prediction of the lactic acid content by support vector regression (SVR). Moreover, two optimization algorithms, grid search (GS) and the crested porcupine optimizer (CPO), were compared to determine their effectiveness in optimizing the parameters of the SVR model. The results showed that the prediction accuracy of the model can be significantly improved by choosing appropriate pretreatment methods for different color-sensitive dyes. The CARS-CPO-SVR model gave the best predictions, with a prediction set determination coefficient (R_P^2), root mean square error of prediction (RMSEP), and ratio of performance to deviation (RPD) of 0.9617, 2.0057, and 5.1997, respectively. These comprehensive findings confirm the viability of integrating CSA with hyperspectral imaging to accurately quantify the lactic acid content in silage, providing a scientific and novel method for maize silage quality testing.
(This article belongs to the Section Digital Agriculture)
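For reference, the three reported figures of merit have standard definitions; a minimal sketch follows (the array names are assumptions).

```python
# Minimal sketch of the three reported metrics: y_true holds reference
# lactic acid values and y_pred the model predictions (names assumed).
import numpy as np

def prediction_metrics(y_true, y_pred):
    residuals = y_true - y_pred
    r2 = 1 - np.sum(residuals**2) / np.sum((y_true - y_true.mean())**2)
    rmsep = np.sqrt(np.mean(residuals**2))   # root mean square error of prediction
    rpd = y_true.std(ddof=1) / rmsep         # ratio of performance to deviation
    return r2, rmsep, rpd
```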

17 pages, 11512 KiB  
Article
A Multi-Model Architecture Based on Deep Learning for Longitudinal Available Overload Prediction of Commercial Subsonic Aircraft with Actuator Faults
by Shengqiang Shan, Yuehua Cheng, Bin Jiang, Cheng Xu, Kun Guo and Xingyu Lin
Electronics 2024, 13(18), 3723; https://doi.org/10.3390/electronics13183723 - 19 Sep 2024
Viewed by 681
Abstract
Assessing the real-time longitudinal available overload onboard under fault conditions offers vital insights for the fault-tolerant reconfiguration and trajectory planning of commercial subsonic aircraft. After actuator failures in a commercial subsonic aircraft, its aerodynamic model undergoes changes. Traditional methods based on analytical models rely on precise aerodynamic models. However, due to the complexities of the flight environment and uncertainties in disturbances, establishing an accurate aerodynamic model after actuator failures is often challenging. Consequently, traditional methods can yield significant errors when evaluating the available overload under actuator faults. To address this, we introduce a multi-model architecture based on deep learning for the longitudinal available overload prediction of a commercial subsonic aircraft with actuator faults. For flight state data under different working conditions and different faults, Spearman correlation coefficient analysis and the gradient boosting decision tree (GBDT) algorithm are used to remove redundant feature parameters, thereby enhancing the training and prediction speed of the model while reducing the risk of overfitting. To meet prediction accuracy and speed demands, we employ the multi-layer perceptron (MLP) deep learning network to fully explore the environmental features, including uncertainties and disturbances, within the flight state, and the mapping relationships between the flight state and the available overload variations. We incorporate the light gradient boosting machine (LightGBM) and the categorical boosting (CatBoost) algorithms to enhance the model’s prediction speed and fuse it with a longitudinal available overload analytical model to elevate the model’s prediction accuracy, thereby achieving the real-time estimation of the commercial subsonic aircraft’s longitudinal available overload with actuator faults. The results demonstrate that the proposed method achieves a higher accuracy than traditional methods, with a relative error of less than 5%.
(This article belongs to the Special Issue Control and Applications of Intelligent Unmanned Aerial Vehicle)
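As a hedged illustration of the redundancy-pruning step, the sketch below drops any feature whose absolute Spearman correlation with an already-kept feature exceeds a threshold; the 0.95 cutoff is an assumption, and the paper's complementary GBDT-importance step is omitted.

```python
# Spearman-based redundancy filter; X is an (n_samples, n_features) array
# of flight-state features with n_features > 2 (names and cutoff assumed).
import numpy as np
from scipy.stats import spearmanr

def drop_redundant(X, threshold=0.95):
    rho, _ = spearmanr(X)                    # rank correlations between columns
    corr = np.abs(rho)
    keep = []
    for j in range(X.shape[1]):
        if all(corr[j, i] < threshold for i in keep):
            keep.append(j)
    return keep                              # indices of retained features
```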

31 pages, 5400 KiB  
Article
A Closed-Form Inverse Kinematic Analytical Method for Seven-DOF Space Manipulator with Aspheric Wrist Structure
by Guojun Zhao, Bo Tao, Du Jiang, Juntong Yun and Hanwen Fan
Machines 2024, 12(9), 632; https://doi.org/10.3390/machines12090632 - 9 Sep 2024
Cited by 1 | Viewed by 818 | Correction
Abstract
The seven-degree-of-freedom space manipulator, characterized by its redundant and aspheric wrist structure, is extensively used in space missions due to its exceptional dexterity and multi-joint capabilities. However, the non-spherical wrist structure presents challenges in solving inverse kinematics, as it cannot decouple joints using the Pieper criterion, unlike spherical wrist structures. To address this issue, this paper presents a closed-form analytical method for solving the inverse kinematics of seven-degree-of-freedom aspheric wrist space manipulators. The method begins by identifying the redundant joint through comparing the volumes of the workspace with different joints fixed. The redundant joint angle is then treated as a parametric joint angle, enabling the derivation of closed-form expressions for the non-parametric joint angles using screw theory. The optimal solution branch is identified through a comparative analysis of various self-motion manifold branches. Additionally, a hybrid approach, combining analytical and numerical methods, is proposed to optimize the parametric joint angle for a trajectory tracking task. Simulation results confirm the effectiveness of the proposed method.
(This article belongs to the Section Machine Design and Theory)
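The paper's contribution is the closed-form screw-theory solution itself; purely to illustrate the parameterization idea (fix the redundant joint angle, then solve for the rest), the sketch below substitutes a generic damped-least-squares iteration for the closed-form step. fk and jacobian are assumed user-supplied callables, and treating joint 3 as the parametric joint is also an assumption.

```python
# Hedged illustration of redundant-joint parameterization, NOT the paper's
# closed-form method: hold the parametric joint fixed and resolve the six
# remaining joints numerically. fk(q) -> simplified 6-vector pose,
# jacobian(q) -> 6x7 Jacobian (both assumed supplied by the user).
import numpy as np

def ik_with_parametric_joint(fk, jacobian, q3_fixed, target_pose, q0,
                             iters=200, damping=1e-2):
    q = np.array(q0, dtype=float)
    q[2] = q3_fixed                          # parametric (redundant) joint held fixed
    free = [i for i in range(q.size) if i != 2]
    for _ in range(iters):
        err = target_pose - fk(q)            # simplified pose error
        if np.linalg.norm(err) < 1e-8:
            break
        J = jacobian(q)[:, free]             # 6x6 Jacobian of the free joints
        dq = J.T @ np.linalg.solve(J @ J.T + damping * np.eye(6), err)
        q[free] += dq
    return q
```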

25 pages, 13590 KiB  
Article
Fast and Nondestructive Proximate Analysis of Coal from Hyperspectral Images with Machine Learning and Combined Spectra-Texture Features
by Jihua Mao, Hengqian Zhao, Yu Xie, Mengmeng Wang, Pan Wang, Yaning Shi and Yusen Zhao
Appl. Sci. 2024, 14(17), 7920; https://doi.org/10.3390/app14177920 - 5 Sep 2024
Cited by 1 | Viewed by 1182
Abstract
Proximate analysis, including ash, volatile matter, moisture, fixed carbon, and calorific value, is a fundamental aspect of fuel testing and serves as the primary method for evaluating coal quality, which is critical for the processing and utilization of coal. The traditional analytical methods involve time-consuming and costly combustion processes, particularly when applied to large volumes of coal that need to be sampled in massive batches. Hyperspectral imaging is promising for the rapid and nondestructive determination of coal quality indices. In this study, a fast and nondestructive coal proximate analysis method with combined spectral-spatial features was developed using a hyperspectral imaging system in the 450–2500 nm range. The preprocessed spectra were evaluated using PLSR, and the most effective pretreatment, MSC, was selected. To reduce spectral redundancy and improve accuracy, the SPA, Boruta, iVISSA, and CARS algorithms were adopted to extract the characteristic wavelengths, and 16 prediction models were constructed and optimized based on the PLSR, RF, BPNN, and LSSVR algorithms within the Optuna framework for each quality indicator. For spatial information, histogram statistics, the gray-level co-occurrence matrix, and Gabor filters were employed to extract texture features within the characteristic wavelengths. Texture-feature-based and combined spectral-texture-feature-based prediction models were then constructed following the same modeling strategy used for the spectra. Compared with the models based on spectral or texture features only, the LSSVR models with combined spectral-texture features achieved the highest prediction accuracy on all quality metrics, with R_P^2 values of 0.993, 0.989, 0.979, 0.948, and 0.994 for Ash, VM, MC, FC, and CV, respectively. This study provides a technical reference for hyperspectral imaging technology as a new method for the rapid, nondestructive proximate analysis and quality assessment of coal.
(This article belongs to the Section Optics and Lasers)
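For concreteness, multiplicative scatter correction (the pretreatment the study found most effective) regresses each spectrum against a reference spectrum and removes the fitted offset and gain. A minimal sketch, assuming X is an (n_samples, n_wavelengths) array; this is the standard MSC algorithm, not the authors' code.

```python
# Minimal multiplicative scatter correction (MSC) sketch: each spectrum x is
# regressed against the mean spectrum and corrected as (x - intercept)/slope.
import numpy as np

def msc(X, reference=None):
    ref = X.mean(axis=0) if reference is None else reference
    corrected = np.empty_like(X, dtype=float)
    for i, x in enumerate(X):
        slope, intercept = np.polyfit(ref, x, 1)   # fit x ~ slope*ref + intercept
        corrected[i] = (x - intercept) / slope
    return corrected
```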

9 pages, 778 KiB  
Communication
Applications of Mittag–Leffler Functions on a Subclass of Meromorphic Functions Influenced by the Definition of a Non-Newtonian Derivative
by Daniel Breaz, Kadhavoor R. Karthikeyan and Gangadharan Murugusundaramoorthy
Fractal Fract. 2024, 8(9), 509; https://doi.org/10.3390/fractalfract8090509 - 29 Aug 2024
Cited by 1 | Viewed by 860
Abstract
In this paper, we defined a new family of meromorphic functions whose analytic characterization was motivated by the definition of the multiplicative derivative. Replacing the ordinary derivative with a multiplicative derivative in the subclass of starlike meromorphic functions made the class redundant; thus, major deviation or adaptation was required in defining a class of meromorphic functions influenced by the multiplicative derivative. In addition, we redefined the subclass of meromorphic functions analogous to the class of the functions with respect to symmetric points. Initial coefficient estimates and Fekete–Szegö inequalities were obtained for the defined function classes. Some examples along with graphs have been used to establish the inclusion and closure properties.
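For reference, the two objects named in the title have standard definitions, stated here in common notation (which may differ from the paper's conventions):

```latex
% Two-parameter Mittag–Leffler function:
E_{\alpha,\beta}(z) = \sum_{k=0}^{\infty} \frac{z^{k}}{\Gamma(\alpha k + \beta)},
\qquad \operatorname{Re}(\alpha) > 0.

% Multiplicative (non-Newtonian) derivative of a positive function f:
f^{*}(x) = \lim_{h \to 0} \left( \frac{f(x+h)}{f(x)} \right)^{1/h}
         = \exp\!\left( \frac{\mathrm{d}}{\mathrm{d}x} \ln f(x) \right).
```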

22 pages, 744 KiB  
Review
Homoeologs in Allopolyploids: Navigating Redundancy as Both an Evolutionary Opportunity and a Technical Challenge—A Transcriptomics Perspective
by Gaetano Aufiero, Carmine Fruggiero, Davide D’Angelo and Nunzio D’Agostino
Genes 2024, 15(8), 977; https://doi.org/10.3390/genes15080977 - 24 Jul 2024
Cited by 1 | Viewed by 1192
Abstract
Allopolyploidy in plants involves the merging of two or more distinct parental genomes into a single nucleus, a significant evolutionary process in the plant kingdom. Transcriptomic analysis provides invaluable insights into allopolyploid plants by elucidating the fate of duplicated genes, revealing evolutionary novelties, and uncovering their environmental adaptations. By examining gene expression profiles, scientists can discern how duplicated genes have evolved to acquire new functions or regulatory roles. This process often leads to the development of novel traits and adaptive strategies that allopolyploid plants leverage to thrive in diverse ecological niches. Understanding these molecular mechanisms not only enhances our appreciation of the genetic complexity underlying allopolyploidy but also underscores their importance in agriculture and ecosystem resilience. However, transcriptome profiling is challenging due to genomic redundancy, which is further complicated by the presence of multiple chromosome sets and the variations among homoeologs and allelic genes. Prior to transcriptome analysis, sub-genome phasing and homoeology inference are essential for obtaining a comprehensive view of gene expression. This review aims to clarify the terminology in this field, identify the most challenging aspects of transcriptome analysis, explain their inherent difficulties, and suggest reliable analytic strategies. Furthermore, bulk RNA-seq is highlighted as a primary method for studying allopolyploid gene expression, focusing on critical steps like read mapping and normalization in differential gene expression analysis. This approach effectively captures gene expression from both parental genomes, facilitating a comprehensive analysis of their combined profiles. Its sensitivity in detecting low-abundance transcripts allows subtle differences between parental genomes to be identified, which is crucial for understanding regulatory dynamics and gene expression balance in allopolyploids.
(This article belongs to the Special Issue Genetics and Genomics of Polyploid Plants)
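Purely as a generic illustration of the normalization step mentioned above (the review itself covers the real choices and their homoeolog-specific pitfalls in depth), a within-sample TPM calculation can be sketched as:

```python
# Generic TPM normalization sketch; counts and gene_lengths_kb are
# per-gene arrays for one sample (names assumed, not from the review).
import numpy as np

def tpm(counts, gene_lengths_kb):
    rpk = counts / gene_lengths_kb           # reads per kilobase
    return rpk / rpk.sum() * 1e6             # scale to transcripts per million
```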

23 pages, 667 KiB  
Article
Enhancing Supplier Selection for Sustainable Raw Materials: A Comprehensive Analysis Using Analytical Network Process (ANP) and TOPSIS Methods
by Ilyas Masudin, Isna Zahrotul Habibah, Rahmad Wisnu Wardana, Dian Palupi Restuputri and S. Sarifah Radiah Shariff
Logistics 2024, 8(3), 74; https://doi.org/10.3390/logistics8030074 - 18 Jul 2024
Cited by 2 | Viewed by 2278
Abstract
Background: This research endeavors to enhance supplier selection processes by combining the Analytic Network Process (ANP) and Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) methodologies, with a specific focus on sustainability criteria. Method: Initially comprising 21 sub-criteria derived from prior research, the selection criteria are refined to 17, eliminating redundant elements. The core principle guiding this refinement is comprehensive coverage of the economic, social, and environmental dimensions essential for sustainable supplier evaluation. Results: The study’s outcomes underscore the paramount importance of the economic criteria (0.0652) in supplier selection, followed by the social (0.0503) and environmental (0.0343) dimensions. Key sub-criteria contributing significantly to this evaluation encompassed consistent product quality, competitive raw material pricing, proficient labor capabilities, recycling potential, punctual delivery performance, and effective waste management practices. Conclusions: These sub-criteria are thoughtfully integrated into the sustainable assessment framework, aligning seamlessly with the economic, environmental, and social criteria.
(This article belongs to the Section Supplier, Government and Procurement Logistics)
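The TOPSIS half of the pipeline is a standard, easily coded procedure. A minimal sketch follows, where the criterion weights would come from ANP and `benefit` flags larger-is-better criteria; all names are assumptions.

```python
# Minimal TOPSIS sketch: vector-normalize, weight, measure distances to the
# ideal and anti-ideal alternatives, and return closeness coefficients
# (rank suppliers by descending closeness). Weights assumed to sum to 1.
import numpy as np

def topsis(decision_matrix, weights, benefit):
    M = decision_matrix / np.linalg.norm(decision_matrix, axis=0)
    V = M * weights                                   # weighted normalized matrix
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.linalg.norm(V - ideal, axis=1)
    d_neg = np.linalg.norm(V - anti, axis=1)
    return d_neg / (d_pos + d_neg)                    # closeness coefficient
```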

17 pages, 6093 KiB  
Article
A Prototype Decision Support System for Tree Selection and Plantation with a Focus on Agroforestry and Ecosystem Services
by Neelesh Yadav, Shrey Rakholia and Reuven Yosef
Forests 2024, 15(7), 1219; https://doi.org/10.3390/f15071219 - 14 Jul 2024
Viewed by 1089
Abstract
This study presents the development and application of a prototype decision support system (DSS) for tree selection, specifically for Punjab, India, a region facing challenges of low forest cover and an increasing demand for sustainable land use practices. The DSS, developed using the R Shiny framework, integrates ecological, social, and agro-commercial criteria to facilitate science-based decision making in tree plantation. The modules in this DSS include a tree selection tool based on comprehensive species attributes, a GIS-based tree suitability map module utilizing the Analytical Hierarchical Process (AHP), and a silviculture practice information module sourced from authoritative databases. Combining sophisticated statistical and spatial analyses, such as NMDS and AHP-GIS, this DSS mitigates data redundancy in SDM while incorporating extensive bibliographic research in dataset processing. The study highlights the necessity of fundamental niche-based suitability in comparison to realized niche suitability. It emphasizes the importance of addressing ecosystem services and agro-commercial aspects and of enhancing silvicultural knowledge. Additionally, the study underscores the significance of local stakeholder engagement in tree selection, particularly involving farmers and other growers, to ensure community involvement and support. The DSS supports agroforestry initiatives and finds applications in urban tree management and governmental programs, emphasizing the use of scientific literature at each step, in contrast to relying solely on local knowledge.
(This article belongs to the Section Forest Ecology and Management)
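The AHP weighting behind the suitability map module is also standard: criterion weights are the principal eigenvector of a pairwise-comparison matrix, checked with Saaty's consistency ratio. A minimal sketch (not the DSS's code):

```python
# Minimal AHP sketch: priority vector from the principal eigenvector of a
# reciprocal pairwise-comparison matrix, plus Saaty's consistency ratio.
import numpy as np

def ahp_weights(pairwise):
    vals, vecs = np.linalg.eig(pairwise)
    k = np.argmax(vals.real)                       # principal (Perron) eigenvalue
    w = np.abs(vecs[:, k].real)
    w /= w.sum()                                   # priority vector
    n = pairwise.shape[0]
    ci = (vals.real[k] - n) / (n - 1)              # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24,      # Saaty random indices
          7: 1.32, 8: 1.41, 9: 1.45}.get(n, 1.49)
    return w, ci / ri                              # weights, consistency ratio
```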

25 pages, 1455 KiB  
Article
Efficient Solution Resilient to Noise and Anchor Position Error for Joint Localization and Synchronization Using One-Way Sequential TOAs
by Shuyi Zhang, Yihuai Xu, Beichuan Tang, Yanbing Yang and Yimao Sun
Appl. Sci. 2024, 14(14), 6069; https://doi.org/10.3390/app14146069 - 11 Jul 2024
Cited by 1 | Viewed by 1045
Abstract
Joint localization and synchronization (JLAS) is a technology that simultaneously determines the spatial locations of user nodes (UNs) and synchronizes the clocks between UNs and anchor nodes (ANs). This technology is crucial for various applications in wireless sensor networks. Existing solutions for JLAS are either computationally demanding or not resilient to noise. This paper addresses the challenge of localizing and synchronizing a mobile user node in broadcast-based JLAS systems using sequential one-way time-of-arrival (TOA) measurements. AN position uncertainty is considered along with clock offset and skew. Two redundant variables that couple the unknowns are introduced to pseudo-linearize the measurement equation. By projecting the equation onto the nullspace spanned by the coefficients of the redundant variables, their effect can be eliminated. While the closed-form projection solution provides an initial point for iteration, it is suboptimal and may not achieve the Cramér-Rao lower bound (CRLB) when noise or AN position error is relatively large. To improve performance, we propose a novel robust iterative solution (RIS) formulated through factor graphs and developed via message passing. The RIS outperforms the common Gauss–Newton iteration, especially in high-noise scenarios. It exhibits a lower root mean-square error (RMSE) and a higher probability of converging to the optimal solution, while maintaining manageable computational complexity. Both analytical results and numerical simulations validate the superiority of the proposed solution in terms of performance, resilience, and computational load.
(This article belongs to the Section Electrical, Electronics and Communications Engineering)
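The nullspace-projection step described above can be illustrated generically: for a pseudo-linearized system A x + B v = b with redundant variables v, projecting with a basis U of the left nullspace of B (so that U^T B = 0) leaves a system in x alone. A minimal linear-algebra sketch, with matrix names as assumptions:

```python
# Eliminate nuisance variables v from A x + B v = b by projecting onto the
# left nullspace of B, then solve the reduced system in least squares.
import numpy as np
from scipy.linalg import null_space

def eliminate_nuisance(A, B, b):
    U = null_space(B.T)                 # columns satisfy B^T U = 0, i.e. U^T B = 0
    A_proj, b_proj = U.T @ A, U.T @ b   # v no longer appears
    x, *_ = np.linalg.lstsq(A_proj, b_proj, rcond=None)
    return x                            # closed-form initial estimate for iteration
```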

18 pages, 11579 KiB  
Article
Exploring the Most Effective Information for Satellite-Derived Bathymetry Models in Different Water Qualities
by Zhen Liu, Hao Liu, Yue Ma, Xin Ma, Jian Yang, Yang Jiang and Shaohui Li
Remote Sens. 2024, 16(13), 2371; https://doi.org/10.3390/rs16132371 - 28 Jun 2024
Viewed by 1176
Abstract
Satellite-derived bathymetry (SDB) is an effective means of obtaining global shallow water depths. However, the effect of inherent optical properties (IOPs) on the accuracy of SDB under different water quality conditions has not been clearly established. To enhance the accuracy of machine learning SDB models, this study assesses the performance improvement from integrating quasi-analytical algorithm (QAA)-derived IOPs, using the Sentinel-2 and ICESat-2 datasets. In experiments across different water qualities, the results indicate that four SDB models (Gaussian process regression, neural networks, random forests, and support vector regression) incorporating QAA-IOP parameters equal or outperform those based solely on remote sensing reflectance (Rrs) datasets, especially in turbid waters. By analyzing information gains in SDB, the most effective inputs are identified and prioritized under different water qualities. The SDB method incorporating QAA-IOPs achieves accuracies of 0.85 m, 0.48 m, and 0.74 m in three areas (Wenchang, Laizhou Bay, and the Qilian Islands) with different water qualities. We also find that incorporating an excessive number of redundant bands into machine learning models not only increases the demand for computing resources but also worsens SDB accuracy. In conclusion, the integration of QAA-IOPs offers promising improvements in obtaining bathymetry, and optimal feature selection should be carefully considered in diverse aquatic environments.
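The paper's central comparison, the same regressor trained with and without QAA-derived IOP features, can be sketched generically as below; the random forest choice and the array names are assumptions, not the paper's exact setup.

```python
# Compare test RMSE of one regressor on Rrs bands alone vs. Rrs + QAA-IOPs.
# rrs: (n, n_bands) reflectances; iops: (n, n_iops); depth: reference depths.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

def compare_inputs(rrs, iops, depth):
    results = {}
    for name, X in {"Rrs": rrs, "Rrs+QAA-IOP": np.hstack([rrs, iops])}.items():
        X_tr, X_te, y_tr, y_te = train_test_split(X, depth, random_state=0)
        model = RandomForestRegressor(n_estimators=200, random_state=0)
        model.fit(X_tr, y_tr)
        results[name] = mean_squared_error(y_te, model.predict(X_te)) ** 0.5
    return results                          # RMSE (m) per feature set
```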

16 pages, 1353 KiB  
Article
Sensor Selection for an Electronic Tongue for the Rapid Detection of Paralytic Shellfish Toxins: A Case Study
by Mariana Raposo, Maria Teresa S. R. Gomes, Sara T. Costa, Maria João Botelho and Alisa Rudnitskaya
Chemosensors 2024, 12(6), 115; https://doi.org/10.3390/chemosensors12060115 - 19 Jun 2024
Cited by 2 | Viewed by 1038
Abstract
The performance of an electronic tongue can be optimized by varying the number and types of sensors in the array and by employing data-processing methods. Sensor selection is typically performed empirically, with sensors picked either by analyzing their characteristics or through trial and error, which does not guarantee an optimized sensor array composition. This study focuses on developing a method for sensor selection for an electronic tongue using simulated sensor data and Lasso regularization. Simulated sensor responses were calculated using sensor parameters such as sensitivity and selectivity, which were determined in the individual analyte solutions. Sensor selection was carried out using Lasso regularization, which removes redundant or highly correlated variables without much loss of information. The optimization of the sensor array had a twofold objective: minimizing both the quantification errors and the number of sensors in the array. The quantification of toxins belonging to one of the groups of marine toxins—paralytic shellfish toxins (PSTs)—using arrays of potentiometric chemical sensors was used as a case study. Eight PSTs corresponding to the toxin profiles in bivalves due to two common toxin-producing phytoplankton species, G. catenatum (dcSTX, GTX5, GTX6, and C1+2) and A. minutum (STX and GTX2+3), as well as total sample toxicity, were included in the study. Experimental validation with mixed solutions of the two groups of toxins confirmed the suitability of the proposed method of sensor array optimization, with better performance obtained for the a priori optimized sensor arrays. The results indicate that the use of simulated sensor responses and Lasso regularization is a rapid and efficient method for the selection of an optimized sensor array.
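A minimal sketch of the Lasso selection step, fitting one toxin (or total toxicity) at a time and keeping the sensors with nonzero coefficients; the array names and the cross-validation setting are assumptions:

```python
# Lasso-based sensor selection: responses is an (n_samples, n_sensors)
# matrix of (simulated) potentiometric readings; target is a 1-D vector for
# a single toxin or the total toxicity.
import numpy as np
from sklearn.linear_model import LassoCV

def select_sensors(responses, target):
    model = LassoCV(cv=5).fit(responses, target)   # CV picks the L1 strength
    return np.flatnonzero(model.coef_), model      # indices of retained sensors
```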

26 pages, 836 KiB  
Article
Defence against Side-Channel Attacks for Encrypted Network Communication Using Multiple Paths
by Gregor Tamati Haywood and Saleem Noel Bhatti
Cryptography 2024, 8(2), 22; https://doi.org/10.3390/cryptography8020022 - 28 May 2024
Cited by 1 | Viewed by 1436
Abstract
As more network communication is encrypted to provide data privacy for users, attackers are focusing their attention on traffic analysis methods for side-channel attacks on user privacy. These attacks exploit patterns in particular features of communication flows such as interpacket timings and packet sizes. Unsupervised machine learning approaches, such as Hidden Markov Models (HMMs), can be trained on unlabelled data to estimate these flow attributes from an exposed packet flow, even one that is encrypted, so it is highly feasible for an eavesdropper to perform this attack. Traditional defences try to protect specific side channels by modifying the packet transmission for the flow, e.g., by adding redundant information (padding of packets or use of junk packets) and perturbing packet timings (e.g., artificially delaying packet transmission at the sender). Such defences incur significant overhead and impact application-level performance metrics, such as latency, throughput, end-to-end delay, and jitter. Furthermore, these mechanisms can be complex, often ineffective, and are not general solutions—a new profile must be created for every application, which is an infeasible expectation to place on software developers. We show that an approach exploiting multipath communication can be effective against HMM-based traffic analysis. After presenting the core analytical background, we demonstrate the efficacy of this approach with a number of diverse, simulated traffic flows. Based on the results, we define some simple design rules for software developers to adopt in order to exploit the mechanism we describe, including a critical examination of existing communication protocol behavior.
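The core idea, splitting one flow across several paths so that no single on-path observer sees the complete sequence of packet sizes and timings, can be sketched in a few lines; this toy splitter is an assumption-laden illustration, not the paper's mechanism or design rules.

```python
# Toy multipath splitter: spray a flow's packets randomly over n_paths so
# any single-path eavesdropper observes only a subsample of the side channel.
import random

def split_flow(packets, n_paths=3, seed=1):
    rng = random.Random(seed)
    paths = [[] for _ in range(n_paths)]
    for pkt in packets:                      # pkt could be a (timestamp, size) pair
        paths[rng.randrange(n_paths)].append(pkt)
    return paths                             # per-path partial views of the flow
```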