jaism bilijipub

    Millions of people worldwide suffer from Chronic Obstructive Pulmonary Disease (COPD), a
    common chronic respiratory illness. Readmissions can frequently be avoided when patients at high
    risk are treated promptly and with increased monitoring. Therefore, it becomes vital to identify
    hospital readmission risk early on. This research undertook a comparative analysis between two
    distinct models, Extra Trees Classification (ETC) and Adaptive-Boost Learning Classifier (ADAC),
    each augmented with School Based Optimization (SBO) and Flying Foxes Optimization (FFO)
    techniques for hyperparameter optimization. Their efficacy was evaluated in forecasting COPD
    using various performance metrics, including accuracy, precision, recall, and F1-score.
    Independent evaluators were engaged to scrutinize the model outcomes, ensuring an objective
    and impartial assessment. These original models were further refined through hybridization with the
    aforementioned optimizers, resulting in ETSB (ETC + SBO), ETFF (ETC + FFO), ADSB (ADAC +
    SBO), and ADFF (ADAC + FFO). In the testing phase, the ETFF model demonstrated superior
    performance with an accuracy of 0.8917, while the ADAC model exhibited comparatively weaker
    performance with an accuracy of 0.7500. Similarly, in terms of precision, the ETFF model
    outperformed others with a value of 0.8940, whereas the ADAC model showed the weakest
    precision performance with a value of 0.7610.
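    The classifier-plus-optimizer pattern the abstract describes can be sketched as follows. SBO and FFO have no standard library implementations, so a simple random search over a small grid stands in for the metaheuristic tuning stage; the synthetic dataset, search space, and budget are illustrative assumptions, not the paper's setup.

    ```python
    # Sketch: an Extra Trees base model whose hyperparameters are tuned by a
    # population-style search. Random search stands in for SBO/FFO (an assumption).
    import random

    from sklearn.datasets import make_classification
    from sklearn.ensemble import ExtraTreesClassifier
    from sklearn.metrics import accuracy_score, precision_score
    from sklearn.model_selection import cross_val_score, train_test_split

    rng = random.Random(0)
    X, y = make_classification(n_samples=400, n_features=20, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

    # Illustrative search space; the paper does not publish its grid.
    space = {"n_estimators": [50, 100, 200], "max_depth": [3, 5, None]}

    best_score, best_params = -1.0, None
    for _ in range(6):  # each draw plays the role of one optimizer agent
        params = {k: rng.choice(v) for k, v in space.items()}
        score = cross_val_score(
            ExtraTreesClassifier(random_state=0, **params), X_tr, y_tr, cv=3
        ).mean()
        if score > best_score:
            best_score, best_params = score, params

    # Refit the best configuration and report the test-phase metrics
    final = ExtraTreesClassifier(random_state=0, **best_params).fit(X_tr, y_tr)
    pred = final.predict(X_te)
    print("accuracy:", accuracy_score(y_te, pred))
    print("precision:", precision_score(y_te, pred))
    ```

    Swapping the base estimator for AdaBoostClassifier yields the ADAC branch of the same pipeline.
    
    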
    Selecting features with strong discriminatory capabilities is crucial for data classification
    challenges. Recurrence Quantification Analysis (RQA) is a promising technique for detecting
    seizures without assuming stationary conditions, accommodating various signal and noise sizes. In
    this study, RQA was used to distinguish between ictal and normal EEGs, utilizing a combination of
    Bayesian classifier and genetic algorithm to select optimal RQA features. Recurrence plots were
    generated using five different distance norms (among them Mahalanobis, maximum, minimum, and
    Manhattan) and 10 threshold levels (εmin = 0.1, εmax = 1, ∆ε = 0.1) for each signal category, totaling
    one hundred samples. Examining the participation rate of each feature in all experiments showed
    that each feature appeared on average in 52% of repetitions, among which transitivity and
    determinism features had the highest and lowest participation in the feature selection stage with
    64% and 33%, respectively. Among the 12 RQA features calculated from the EEGs, the longest
    diagonal line, transitivity, and recurrence rate achieved 100% accuracy in separating normal and
    epileptic EEGs in 6, 4, and 3 experiments respectively, outperforming the other recurrence features.
    On the other hand, the features of divergence, trapping time, and longest vertical line, which never
    reached 100% accuracy, yielded the poorest results. Experimental results showed that using the minimum
    norm and ε = 0.4 achieved a 100% discrimination rate for seizure detection. The transitivity
    recurrence feature proved highly effective in classifying normal and epileptic EEGs, making it an
    excellent biomarker for seizure detection with high diagnostic value.
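    The recurrence-plot construction underlying these RQA features can be sketched with the maximum (Chebyshev) norm and a fixed threshold; the embedding parameters and the sine test signal are illustrative assumptions, and only the recurrence-rate feature is computed here.

    ```python
    # Sketch: recurrence plot with the maximum (Chebyshev) norm and a fixed
    # threshold eps, plus the recurrence-rate (RR) feature. Embedding dimension,
    # delay, and the test signal are illustrative assumptions.
    import numpy as np

    def recurrence_plot(signal, dim=3, tau=1, eps=0.4):
        # Time-delay embedding of the 1-D signal into dim-dimensional vectors
        n = len(signal) - (dim - 1) * tau
        emb = np.column_stack([signal[i * tau : i * tau + n] for i in range(dim)])
        # Pairwise Chebyshev ("maximum norm") distances between embedded points
        dist = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=-1)
        return (dist <= eps).astype(int)

    def recurrence_rate(rp):
        # RR: fraction of recurrent points in the plot
        return rp.mean()

    t = np.linspace(0, 4 * np.pi, 200)
    rp = recurrence_plot(np.sin(t), eps=0.4)
    print("plot size:", rp.shape, "RR:", recurrence_rate(rp))
    ```

    Features such as determinism, transitivity, and the longest diagonal line are then read off the structure of this binary matrix.
    
    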
    Detecting the presence of alcohol in individuals poses a significant challenge due to the limitations
    of conventional devices that rely on odor, which is not always reliable. Electroencephalography
    (EEG), a widely-used technique for measuring brain activity, has emerged as a promising tool for
    evaluating subjects with alcoholism. The present study applies various types of linear and
    nonlinear analysis of the EEG signal to classify alcoholics and non-alcoholics and provides a direct
    comparison of the efficiency of each of the analysis methods. After EEG preprocessing, spectral
    analysis was done to calculate linear features. Then, some nonlinear features were calculated
    through fractal dimension, entropy analysis, Hurst exponent, Lempel-Ziv complexity and
    detrended fluctuation analysis. Feature classification was done through KNN, Naïve Bayes and
    AdaBoost classifiers. The suggested methods were assessed on the publicly available UCI alcoholic EEG
    database. Experimental results showed that linear and nonlinear features achieved an accuracy of
    74.96% and 93.62%, respectively, for EEG classification of alcoholics and non-alcoholics.
    Furthermore, Katz fractal dimension had a high accuracy of 95.74%, sensitivity of 98.82% and
    specificity of 92.20% in distinguishing EEG signals of alcoholics and non-alcoholics. The findings
    showed that nonlinear features perform better than linear features for alcoholism detection.
    Therefore, it is recommended to use and investigate nonlinear signal processing methods in future
    studies for the detection of alcoholic EEG.
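    The Katz fractal dimension, reported here as the most discriminative feature, can be sketched as follows for a unit-spaced signal; this is one common formulation among several variants, and the study's exact preprocessing is not shown.

    ```python
    # Sketch: Katz fractal dimension (KFD) of a 1-D signal, treating samples as
    # unit-spaced points (one common variant of the formula).
    import numpy as np

    def katz_fd(x):
        x = np.asarray(x, dtype=float)
        # L: total curve length (sum of consecutive point-to-point distances)
        L = np.sqrt(1.0 + np.diff(x) ** 2).sum()
        # d: diameter, the farthest distance from the first point
        idx = np.arange(1, len(x))
        d = np.sqrt(idx**2 + (x[1:] - x[0]) ** 2).max()
        n = len(x) - 1  # number of steps along the curve
        return np.log10(n) / (np.log10(n) + np.log10(d / L))

    rng = np.random.default_rng(0)
    smooth = np.sin(np.linspace(0, 2 * np.pi, 500))
    noisy = smooth + 0.5 * rng.standard_normal(500)
    print(katz_fd(smooth), katz_fd(noisy))  # the noisier signal scores higher
    ```

    A more irregular waveform traces a longer curve L relative to its diameter d, which drives the KFD up; this is what makes the measure useful for separating signal classes.
    
    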
    Automatically detecting epileptic seizures with intelligent methods has been a major challenge in recent
    years. This is because neurologists are burdened with analyzing electroencephalogram (EEG) data
    via visual inspection, and automating the process can reduce their workload. However, one of the
    challenges of automatic seizure detection using EEG analysis is extracting optimal features that can
    distinguish between different states of epilepsy. To address this issue, this research proposes a new
    approach for automatically identifying epileptic seizures using a deep convolutional network. The
    network has 9 convolutional layers and 1 fully-connected layer, which learn the features
    hierarchically and identify epileptic seizures through the EEG analysis. The designed deep network
    was applied to the epileptic EEG dataset from the University of Bonn. The results showed that 100%
    accuracy, 100% sensitivity, and 100% specificity were achieved using the proposed method and 10-
    fold cross-validation for classifying the three investigated EEG conditions (i.e., normal, preictal and
    ictal states). The proposed architecture was very efficient in classifying epileptic EEG data. Due to
    the high accuracy of the algorithm, it can be used for automatic detection of the different stages of
    epilepsy in large-scale EEG data.
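    How nine stacked 1-D convolutions whittle a Bonn EEG segment (4097 samples each) down to a compact feature map before the fully-connected layer can be sketched with a plain shape calculation; kernel size 3 and stride 2 per layer are illustrative assumptions, not the paper's published architecture.

    ```python
    # Sketch: output lengths of nine stacked 1-D convolutions applied to one
    # Bonn EEG segment. Kernel size 3 and stride 2 per layer are illustrative
    # assumptions, not the paper's exact settings.
    def conv1d_out(length, kernel, stride=1, padding=0):
        # Standard convolution output-length formula
        return (length + 2 * padding - kernel) // stride + 1

    length = 4097  # samples in one Bonn EEG segment
    for layer in range(1, 10):  # nine convolutional layers
        length = conv1d_out(length, kernel=3, stride=2)
        print(f"conv{layer}: {length} samples")
    # The final feature map feeds the single fully-connected layer with three
    # outputs (normal / preictal / ictal).
    ```
    
    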
    Examining how the timing of students' academic involvement might support evidence-based improvements in curriculum quality is critical, especially in light of current changes in medical education and increased responsibility in higher education. Time monitoring is an effective way to gauge how well students are using their academic time, which helps with data-driven curriculum design and development decisions. This study employs Extreme Gradient Boosting Classification (XGBC) and Histogram Gradient Boosting Classification (HGBC) techniques to forecast student time management. Additionally, Tasmanian Devil Optimization (TDO) and Equilibrium Slime Mould Algorithm (ESMA) are integrated to enhance the accuracy of both XGBC and HGBC models. To ensure impartiality, unbiased performance assessors are engaged to objectively evaluate the model outcomes. The study's findings showcase the effectiveness of the prediction model for student time management. Through hybridization with the two optimizers, the two base models yield the following outputs: XGBC + TDO (XGTD), XGBC + ESMA (XGES), HGBC + TDO (HGTD), and HGBC + ESMA (HGES). In the testing phase, the XGTD model demonstrates outstanding performance with an accuracy value of 0.9211, while the weakest-performing model, with an accuracy value of 0.8158, is the HGBC model.
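    The HGBC baseline with a tuning stage can be sketched in the same spirit; TDO and ESMA have no standard library implementations, so a small grid over the learning rate, scored on a held-out validation split, stands in for them, and the dataset and grid are illustrative assumptions.

    ```python
    # Sketch: Histogram Gradient Boosting with a simple validation-split tuning
    # loop standing in for the TDO/ESMA optimizers (an assumption).
    from sklearn.datasets import make_classification
    from sklearn.ensemble import HistGradientBoostingClassifier
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=300, n_features=10, random_state=1)
    X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.25, random_state=1)

    best_acc, best_lr = 0.0, None
    for lr in (0.05, 0.1, 0.3):  # illustrative candidate hyperparameters
        clf = HistGradientBoostingClassifier(learning_rate=lr, random_state=1)
        acc = accuracy_score(y_val, clf.fit(X_tr, y_tr).predict(X_val))
        if acc > best_acc:
            best_acc, best_lr = acc, lr
    print("best learning_rate:", best_lr, "validation accuracy:", best_acc)
    ```

    The XGBC branch follows the same loop with a gradient-boosted tree implementation such as XGBoost in place of the scikit-learn estimator.
    
    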
    Banks and financial institutions can avoid bankruptcy by following strict risk management
    techniques, diversifying their investment portfolios, keeping appropriate capital reserves, and
    conducting thorough credit evaluations. Machine learning (ML) may help in bankruptcy prediction by
    analyzing massive quantities of historical financial data, identifying trends and anomalies that
    indicate trouble, and developing predictive models to estimate the possibility of default. These
    models can use factors including liquidity ratios, debt levels, market circumstances, and economic
    indicators to offer early warnings of possible financial instabilities, allowing organizations to take
    proactive steps to reduce risk and avert bankruptcy. This paper endeavors to forecast bank
    bankruptcies by leveraging ML models. Specifically, the selected model, Histogram Gradient
    Boosting Classification (HGBC), is enriched through the integration of Snake Optimization
    Algorithm (SOA), Gradient-Based Optimization (GBO), and Bonobo Optimization Algorithm
    (BOA). This amalgamation results in the creation of innovative hybrid models, meticulously
    engineered to enhance the accuracy of bankruptcy predictions. The findings reveal that in scenarios
    of financial distress, the HGBC model exhibits the least efficacy, achieving a precision value of
    0.940. Conversely, the HGBO and HGGB models demonstrate precision values of 0.950 and 0.960,
    respectively, showcasing marginally weaker performances compared to the HGSO model, which
    attains a remarkable precision value of 0.980. The proactive measures undertaken by banks and
    financial institutions, such as stringent risk management protocols and diversified investment
    strategies, play an important role in averting the specter of bankruptcy.
    In today's fast-paced society, many choose speed dating since it is efficient. Speed dating events are organized to allow busy singles to meet a variety of potential partners in a short timeframe, thereby maximizing their chances of making connections. It creates an organized setting that encourages brief but significant contacts, allowing people to quickly assess chemistry and compatibility. Furthermore, in the digital age, when online dating can be impersonal, speed dating provides face-to-face connection, which increases authenticity and reduces the ambiguity of online profiles. In general, speed dating appeals to modern daters who want quick and tangible results in their search for romance. This research project aims to gain insights into forecasting the course of relationships created during initial meetings utilizing cutting-edge Machine Learning (ML) approaches. Light Gradient Boosting Classification (LGBC) serves as a foundational framework, and an innovative approach is introduced by combining it with the Henry Gas Solubility Optimization Algorithm (HGSOA), Flying Fox Optimization (FFO), and Mayflies Optimization (MO), resulting in a hybrid model. Investigation reveals that throughout the training phase, the LGBC model achieved a lower accuracy of 0.938, falling short of the LGHS and LGMO models, which achieved accuracies of 0.945 and 0.956, respectively. Nonetheless, the hybrid HGFF model emerged as the most accurate, outperforming all other competitors with an accuracy of 0.965. As a result, it is often regarded as the best model for anticipating relationship dynamics during early meetings, providing vital insights into the complexities of relationships on first dates.