Search Results (8)

Search Parameters:
Keywords = low oversampling factor

20 pages, 1627 KiB  
Article
Thiran Filters for Wideband DSP-Based Multi-Beam True Time Delay RF Sensing Applications
by Sirani M. Perera, Gayani Rathnasekara and Arjuna Madanayake
Sensors 2024, 24(2), 576; https://doi.org/10.3390/s24020576 - 17 Jan 2024
Viewed by 961
Abstract
The ability to sense propagating electromagnetic plane waves based on their directions of arrival (DOAs) is fundamental to a range of radio frequency (RF) sensing, communications, and imaging applications. This paper introduces an algorithm for the wideband true time delay digital delay Vandermonde matrix (DVM), utilizing Thiran fractional delays that are useful for realizing RF sensors with multiple-look DOA support. The digital DVM algorithm leverages sparse matrix factorization to yield multiple simultaneous RF beams for low-complexity sensing applications. Consequently, the proposed algorithm reduces circuit complexity for multi-beam digital wideband beamforming systems employing Thiran fractional delays. Unlike finite impulse response filter-based approaches, which are wideband but require a high filter order, Thiran filters trade usable bandwidth for low-complexity circuits. The phase and group delay responses of Thiran filters with delays of a fractional sampling period are demonstrated. Thiran filters show favorable results for sample delay blocks with a temporal oversampling factor of three. Thiran fractional delays of orders three and four are mapped to Xilinx FPGA RF-SoC technologies for evaluation. The preliminary results of the allpass filter (APF)-based Thiran fractional delays on FPGA can potentially be used to realize DVM factorizations in application-specific integrated circuit (ASIC) technology.
(This article belongs to the Section Electronic Sensors)
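The abstract above builds on Thiran allpass filters for fractional sample delay. A minimal sketch of computing the allpass denominator coefficients from the standard Thiran formula (the function name and interface are illustrative, not the paper's code; for an allpass realization the numerator is the reversed denominator):

```python
from math import comb

def thiran_coeffs(N, D):
    """Denominator coefficients a_0..a_N of an order-N Thiran allpass filter
    approximating a total delay of D samples (D is usually chosen near N).

    a_k = (-1)^k * C(N, k) * prod_{n=0}^{N} (D - N + n) / (D - N + k + n)
    """
    a = [1.0]  # a_0 is always 1
    for k in range(1, N + 1):
        prod = 1.0
        for n in range(N + 1):
            prod *= (D - N + n) / (D - N + k + n)
        a.append((-1) ** k * comb(N, k) * prod)
    return a
```

For a first-order filter with D = 1.5 this gives a_1 = -0.2, matching the known closed form a_1 = -(D - 1)/(D + 1) for order one.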
18 pages, 10562 KiB  
Article
A Novel Chaotic-NLFM Signal under Low Oversampling Factors for Deception Jamming Suppression
by Jianyuan Li, Pei Wang, Hongxi Zhang, Chao Luo, Zhenning Li and Yihai Wei
Remote Sens. 2024, 16(1), 35; https://doi.org/10.3390/rs16010035 - 21 Dec 2023
Cited by 2 | Viewed by 780
Abstract
Synthetic aperture radar (SAR) is a high-resolution imaging radar. With the deteriorating electromagnetic environment, SAR systems are susceptible to various forms of electromagnetic interference, including deception jamming. This jamming notably impacts SAR signal processing and subsequently degrades the quality of acquired images. Chaotic frequency modulation (CFM) signals can effectively counteract deception jamming. Nevertheless, due to the inadequate band-limited performance of CFM signals, higher oversampling factors are needed to achieve optimal low sidelobe levels, leading to increased system costs and excessively high data rates. Additionally, not all chaotic sequences meet the CFM signal requirements. In response, this paper proposes a novel signal modulation method called chaotic-nonlinear frequency modulation (C-NLFM) that enhances band-limited performance. The optimum parameters for C-NLFM signals are selected using the particle swarm optimization (PSO) algorithm. In this way, C-NLFM signals attain ideal low sidelobe levels even with reduced oversampling factors. Meanwhile, this chaotic coding mode retains its capability to effectively suppress deception jamming. Moreover, C-NLFM signals demonstrate versatility in adapting to various chaotic sequences, enhancing the flexibility to modify the signals as required. Comprehensive simulations, data analysis, and a semi-physical experiment confirm the effectiveness and superiority of the proposed method.

11 pages, 1147 KiB  
Article
Race- and Gender-Specific Associations between Neighborhood-Level Socioeconomic Status and Body Mass Index: Evidence from the Southern Community Cohort Study
by Lauren Giurini, Loren Lipworth, Harvey J. Murff, Wei Zheng and Shaneda Warren Andersen
Int. J. Environ. Res. Public Health 2023, 20(23), 7122; https://doi.org/10.3390/ijerph20237122 - 30 Nov 2023
Viewed by 1640
Abstract
Obesity and a low socioeconomic status (SES), measured at the neighborhood level, are more common among Americans of Black race and with a low individual-level SES. We examined the association between neighborhood SES and body mass index (BMI) using data from 80,970 participants in the Southern Community Cohort Study, a cohort that oversamples Black and low-SES participants. BMI (kg/m2) was examined both continuously and categorically using cut points defined by the CDC. Neighborhood SES was measured using a neighborhood deprivation index composed of census-tract variables in the domains of education, employment, occupation, housing, and poverty. Generally, participants in lower-SES neighborhoods were more likely to have a higher BMI and to be considered obese. We found effect modification by race and sex: the neighborhood-BMI association was most apparent in White female participants across all quintiles of neighborhood SES (ORQ2 = 1.55, 95%CI = 1.34, 1.78; ORQ3 = 1.71, 95%CI = 1.48, 1.98; ORQ4 = 1.76, 95%CI = 1.52, 2.03; ORQ5 = 1.64, 95%CI = 1.39, 1.93). Conversely, the neighborhood-BMI association was mostly null in Black male participants (ORQ2 = 0.91, 95%CI = 0.72, 1.15; ORQ3 = 1.05, 95%CI = 0.84, 1.31; ORQ4 = 1.00, 95%CI = 0.81, 1.23; ORQ5 = 0.76, 95%CI = 0.63, 0.93). Within all subgroups, the associations were attenuated or null for participants residing in the lowest-SES neighborhoods. These findings suggest that the associations between neighborhood SES and BMI vary, and that factors other than neighborhood SES may better predict BMI in Black and low-SES groups.

17 pages, 25111 KiB  
Article
Evaluating Landslide Susceptibility Using Sampling Methodology and Multiple Machine Learning Models
by Yingze Song, Degang Yang, Weicheng Wu, Xin Zhang, Jie Zhou, Zhaoxu Tian, Chencan Wang and Yingxu Song
ISPRS Int. J. Geo-Inf. 2023, 12(5), 197; https://doi.org/10.3390/ijgi12050197 - 13 May 2023
Cited by 9 | Viewed by 1906
Abstract
Landslide susceptibility assessment (LSA) based on machine learning methods has been widely used in landslide geological hazard management and research. However, the problem of sample imbalance, where landslide samples tend to be far fewer than non-landslide samples, is often overlooked, even though it is one of the important factors affecting the performance of landslide susceptibility models. In this paper, we take the Wanzhou district of Chongqing city as an example, where the dataset contains more than 580,000 samples with a positive-to-negative ratio of 1:19. We oversample or undersample the imbalanced landslide samples to balance them and then compare the performance of machine learning models under the different sampling strategies. Three classic machine learning algorithms, logistic regression, random forest, and LightGBM, are used for LSA modeling. The results show that the model trained directly on the imbalanced dataset performs the worst, with an extremely low recall rate, indicating that its predictive ability for landslide samples is too low for practical application. Compared with the original dataset, the sample sets balanced through these methods demonstrated improved predictive performance across the classifiers, manifested in higher AUC values and recall rates. The best model was the random forest using oversampling (O_RF) (AUC = 0.932).
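The balancing step this abstract describes, duplicating minority-class samples until the classes match, can be sketched generically as follows (a minimal illustration of random oversampling, not the authors' pipeline; names and interface are hypothetical):

```python
import random

def random_oversample(X, y, seed=0):
    """Duplicate minority-class rows at random until every class has as
    many samples as the largest class. X is a list of feature rows, y the
    matching list of class labels."""
    rng = random.Random(seed)
    by_class = {}
    for xi, yi in zip(X, y):
        by_class.setdefault(yi, []).append(xi)
    target = max(len(rows) for rows in by_class.values())
    X_bal, y_bal = [], []
    for label, rows in by_class.items():
        # top up with randomly drawn duplicates of existing rows
        rows = rows + [rng.choice(rows) for _ in range(target - len(rows))]
        X_bal.extend(rows)
        y_bal.extend([label] * len(rows))
    return X_bal, y_bal
```

Undersampling is the mirror image: randomly discarding majority-class rows down to the minority count instead of duplicating upward.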

12 pages, 1557 KiB  
Article
Using Boosted Machine Learning to Predict Suicidal Ideation by Socioeconomic Status among Adolescents
by Hwanjin Park and Kounseok Lee
J. Pers. Med. 2022, 12(9), 1357; https://doi.org/10.3390/jpm12091357 - 24 Aug 2022
Cited by 2 | Viewed by 1450
Abstract
(1) Background: This study aimed to use machine learning techniques to identify risk factors for suicidal ideation among adolescents and understand the association between these risk factors and socioeconomic status (SES); (2) Methods: Data from 54,948 participants were analyzed. Risk factors were identified by dividing groups by suicidal ideation and three SES levels. The influence of risk factors was confirmed using the synthetic minority over-sampling technique (SMOTE) and XGBoost; (3) Results: Adolescents with suicidal thoughts experienced more sadness, higher stress levels, less happiness, and higher anxiety than those without. In the high-SES group, academic achievement was a major risk factor for suicidal ideation; in the low-SES group, only emotional factors such as stress and anxiety significantly contributed to suicidal ideation; (4) Conclusions: SES plays an important role in the mental health of adolescents. Improvements in SES during adolescence may resolve negative emotions and reduce the risk of suicide.
(This article belongs to the Special Issue Personalized Treatment and Management of Psychiatric Disorders)
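Unlike the random duplication used elsewhere in these results, SMOTE synthesizes new minority samples by interpolating between a minority point and one of its nearest minority neighbours. A sketch of that core idea (illustrative only, not this study's implementation; function name and parameters are assumptions):

```python
import numpy as np

def smote_sample(X_min, n_new, k=5, seed=0):
    """Generate n_new synthetic minority samples. Each one lies on the
    line segment between a randomly chosen minority point and one of its
    k nearest minority neighbours."""
    rng = np.random.default_rng(seed)
    X_min = np.asarray(X_min, dtype=float)
    synthetic = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        dists = np.linalg.norm(X_min - X_min[i], axis=1)
        nbrs = np.argsort(dists)[1:k + 1]  # skip the point itself
        j = rng.choice(nbrs)
        gap = rng.random()                 # interpolation fraction in [0, 1)
        synthetic.append(X_min[i] + gap * (X_min[j] - X_min[i]))
    return np.array(synthetic)
```

Because every synthetic point is a convex combination of two minority points, the new samples stay inside the minority class's convex hull rather than merely repeating existing rows.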

19 pages, 33477 KiB  
Article
A Classification-Based Machine Learning Approach to the Prediction of Cyanobacterial Blooms in Chilgok Weir, South Korea
by Jongchan Kim, Andreja Jonoski and Dimitri P. Solomatine
Water 2022, 14(4), 542; https://doi.org/10.3390/w14040542 - 11 Feb 2022
Cited by 8 | Viewed by 2504
Abstract
Cyanobacterial blooms arise from complex causes such as water quality, climate, and hydrological factors. This study aims to present machine learning models that predict occurrences of these complicated cyanobacterial blooms efficiently and effectively. The dataset was classified into groups of two, three, or four classes based on cyanobacterial cell density after a week, which was used as the target variable. We developed 96 machine learning models for Chilgok weir using four classification algorithms: k-Nearest Neighbor, Decision Tree, Logistic Regression, and Support Vector Machine. In the modeling methodology, we first selected input features by applying ANOVA (analysis of variance) and resolving multi-collinearity as part of feature selection, a process of removing features irrelevant to the target variable. Next, we adopted an oversampling method to resolve the problem of an imbalanced dataset. Consequently, the best performance was achieved by models using datasets divided into two classes, with an accuracy of 80% or more, whereas models using datasets divided into three classes reached only approximately 60% accuracy. Moreover, while models using logCyano (the logarithm of cyanobacterial cell density) as a feature achieved overall high accuracy, several two-class models combining air temperature and NO3-N (nitrate nitrogen) also exceeded 80% accuracy. It can be concluded that accurate classification-based machine learning models can be developed with just two features related to cyanobacterial blooms, showing that efficient and effective models are possible with a small number of inputs.
(This article belongs to the Section Water Quality and Contamination)

26 pages, 11179 KiB  
Article
Selecting the Suitable Resampling Strategy for Imbalanced Data Classification Regarding Dataset Properties. An Approach Based on Association Models
by Mohamed S. Kraiem, Fernando Sánchez-Hernández and María N. Moreno-García
Appl. Sci. 2021, 11(18), 8546; https://doi.org/10.3390/app11188546 - 14 Sep 2021
Cited by 22 | Viewed by 4956
Abstract
In many application domains such as medicine, information retrieval, cybersecurity, social media, etc., datasets used for inducing classification models often have an unequal distribution of the instances of each class. This situation, known as imbalanced data classification, causes low predictive performance for the minority class examples. Thus, the prediction model is unreliable although the overall model accuracy can be acceptable. Oversampling and undersampling techniques are well-known strategies to deal with this problem by balancing the number of examples of each class. However, their effectiveness depends on several factors mainly related to data intrinsic characteristics, such as imbalance ratio, dataset size and dimensionality, overlapping between classes or borderline examples. In this work, the impact of these factors is analyzed through a comprehensive comparative study involving 40 datasets from different application areas. The objective is to obtain models for automatic selection of the best resampling strategy for any dataset based on its characteristics. These models allow us to check several factors simultaneously considering a wide range of values since they are induced from very varied datasets that cover a broad spectrum of conditions. This differs from most studies that focus on the individual analysis of the characteristics or cover a small range of values. In addition, the study encompasses both basic and advanced resampling strategies that are evaluated by means of eight different performance metrics, including new measures specifically designed for imbalanced data classification. The general nature of the proposal allows the choice of the most appropriate method regardless of the domain, avoiding the search for special purpose techniques that could be valid for the target data.
(This article belongs to the Section Computing and Artificial Intelligence)

17 pages, 1011 KiB  
Article
Predicting the Insolvency of SMEs Using Technological Feasibility Assessment Information and Data Mining Techniques
by Sanghoon Lee, Keunho Choi and Donghee Yoo
Sustainability 2020, 12(23), 9790; https://doi.org/10.3390/su12239790 - 24 Nov 2020
Cited by 9 | Viewed by 3220
Abstract
The government makes great efforts to maintain the soundness of policy funds raised by the national budget and lent to companies. In general, previous research on predicting company insolvency has dealt with large, listed companies, using financial information and conventional statistical techniques. However, small- and medium-sized enterprises (SMEs) do not have to undergo mandatory external audits, and the quality of their accounting information is low due to weak internal controls. To overcome this problem, we developed an insolvency prediction model for SMEs using data mining techniques and technological feasibility assessment information as non-financial information. We divided the dataset into two types of data based on a corporate age of three years. The synthetic minority over-sampling technique (SMOTE) was used to address the resulting data imbalance. Six insolvency prediction models were created using logistic regression, a decision tree, an artificial neural network, and an ensemble (i.e., boosting) of each algorithm. A boosted decision tree achieved the best accuracies of 69.1% and 82.7%, and a decision tree identified nine and seven influential factors affecting the insolvency of SMEs established for fewer than three years and for more than three years, respectively. In addition, we derived several insolvency rules for the two types of SMEs from the decision tree-based prediction model and proposed ways to enhance the health of loans given to potentially insolvent companies using these rules. The results of this study show that it is possible to predict SMEs' insolvency using data mining techniques with technological feasibility assessment information and to find meaningful rules related to insolvency.
(This article belongs to the Section Economic and Business Aspects of Sustainability)
