Typically, the parametric proportional hazard (PH) model is used to examine data on mortality occurrences. Competing risks are prevalent in health information, making it difficult to manage time-to-event data in clinical investigations. A Bayesian framework is developed for managing competing risk occurrences in clinical data. The objective of this study is to identify the variables that affect patients' odds of surviving peripheral blood stem-cell transplantation, a therapy option for life-threatening blood disorders. In addition, we want to implement a Bayesian model capable of analysing time-to-event data in the context of competing risks. In this research, we analyse failure causes in the setting of competing risk models using the generalised log-logistic distribution with a right-censoring scheme. We present competing risks models for censored survival data in the presence of explanatory variables, where each system contains more than one component in series. We assume that each component's survival time follows a generalized log-logistic distribution. We obtain Bayesian estimates of each component's lifetime distribution parameters and regression coefficients. We present a comprehensive Markov chain Monte Carlo (McMC) method and evaluate the estimators' convergence diagnostics. A real survival data set dealing with stem-cell transplants demonstrates the model's flexibility and advantages.
In this paper, we present a review of the log-logistic distribution and some of its recent generalizations. We cite more than twenty distributions obtained by applying different generating families of univariate continuous distributions or compounding methods to the log-logistic distribution. We review the main mathematical properties of the log-logistic distribution, including the eight different functions used to define lifetime distributions. These results are used to obtain the properties of some log-logistic generalizations from linear representations. A real-life data application is presented to compare some of the surveyed distributions.
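For reference, a standard parameterization of the log-logistic distribution with scale $s>0$ and shape $\beta>0$ (the notation here is an assumption; the review surveys several parameterizations) has survival and hazard functions

$$S(t) = \frac{1}{1+(t/s)^{\beta}}, \qquad h(t) = \frac{(\beta/s)\,(t/s)^{\beta-1}}{1+(t/s)^{\beta}}, \qquad t>0,$$

so the hazard is decreasing when $\beta \le 1$ and unimodal when $\beta > 1$, which is the baseline behaviour that the reviewed generalizations extend.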
In farming and related fields, numerous relationships exist that should be quantified. Several factors affect the various crop yields in different dimensions; these factors may be related to farmers' practices or to soil quality. In this study, our main focus is to explore the effect of soil and other factors on wheat yield. Regression modelling plays an important role in identifying the factors that most strongly affect crop yield. For reliable and valid results, the data must be checked for outliers and other anomalies. In this study, we fitted regression models with and without satisfying some regression assumptions to determine the factors affecting wheat yield. For analysis purposes, the required data were collected from the district of Multan. It was observed that when the regression assumptions were satisfied, the coefficient of determination (R2) improved from 45% to 48%, and adjusted R2 improved from 40% to 46%,...
The Gamma ridge regression estimator (GRRE) is commonly used to solve the problem of multicollinearity when the response variable follows the gamma distribution. Estimation of the ridge parameter is an important issue in the GRRE, as it is in other models. Numerous ridge parameter estimators have been proposed for the linear and other regression models, so in this study we generalize these estimators to the Gamma ridge regression model. A Monte Carlo simulation study and two real-life applications are carried out to evaluate the performance of the proposed ridge regression estimators, which are then compared with the maximum likelihood method and some existing ridge regression estimators. Based on the results of the simulation study and the real-life applications, we suggest some better choices of ridge regression estimators for practitioners applying the Gamma regression model with correlated explanatory variables.
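As a rough illustration of the estimator being generalized, a minimal R sketch of a gamma ridge fit of the common form $\hat{\beta}(k) = (X^{\top}\hat{W}X + kI)^{-1} X^{\top}\hat{W}X\,\hat{\beta}_{\mathrm{ML}}$, on hypothetical collinear data with an arbitrary ridge parameter $k$ (not one of the paper's proposed estimators):

```r
# Gamma ridge regression sketch: simulated collinear predictors, log link
set.seed(1)
n  <- 100
x1 <- rnorm(n)
x2 <- x1 + rnorm(n, sd = 0.1)                  # nearly collinear with x1
mu <- exp(0.5 + 0.3 * x1 + 0.3 * x2)
y  <- rgamma(n, shape = 2, rate = 2 / mu)      # gamma response with mean mu
fit <- glm(y ~ x1 + x2, family = Gamma(link = "log"))
X <- model.matrix(fit)
W <- diag(fit$weights)                         # IRLS working weights at convergence
S <- t(X) %*% W %*% X
k <- 0.5                                       # ridge parameter; the paper compares many choices
beta_ridge <- solve(S + k * diag(ncol(X))) %*% S %*% coef(fit)
```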
In data analysis, the choice of an appropriate regression model and outlier detection are both very important for obtaining reliable results. Gamma regression (GR) is employed when the distribution of the dependent variable is gamma. In this work, we derive new methods for outlier detection in GR. The proposed methods are based upon the adjusted and standardized Pearson residuals. Furthermore, a comparison of existing and proposed methods is made using a simulation study and a real-life data set. The results of the simulation and the real-life application evidence the better performance of the adjusted Pearson residual-based outlier detection approach.
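A minimal R sketch of the baseline screen that such methods adjust, using base R's standardized Pearson residuals for a gamma GLM (the data objects `y`, `x1`, `x2` are hypothetical, and the cutoff of 3 is a common convention, not the paper's):

```r
# Standardized Pearson residuals as an outlier screen in gamma regression
fit  <- glm(y ~ x1 + x2, family = Gamma(link = "log"))
r_p  <- residuals(fit, type = "pearson")    # (y - mu)/mu for the gamma family
phi  <- summary(fit)$dispersion             # Pearson-based dispersion estimate
h    <- hatvalues(fit)                      # leverages
r_sp <- r_p / sqrt(phi * (1 - h))           # standardized Pearson residuals
which(abs(r_sp) > 3)                        # flag candidate outliers
```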
Over the course of the last several decades, many algebraic generalised families and classes of statistical distributions have been developed. The purpose of this research is to construct a brand new cotangent exponentiated generalised generator of distributions that have support on the real line. Two novel families of distributions incorporating the cotangent function are then proposed: the cotangent exponentiated generalised (CE-G) family and the logistic cotangent exponentiated generalised (LCE-G) family. A comprehensive analysis of the mathematical and structural properties of the newly suggested G-family, as well as of a Burr-based novel model (LCEB), is presented. The maximum likelihood method is used in Monte Carlo simulation studies to estimate model parameters and evaluate performance. Statistical analyses of survival and waiting-times data sets are carried out, and the outcomes confirm the competence, superiority, and utility of the suggested generator, G-family, and novel distribution in comparison to similar and competing Burr-based models that are already well-known.
The purpose of this research is to interpolate bipolarity into the definition of the vague soft set. This gives a new, more applicable, flexible, and generalized extension of the soft set, the fuzzy soft set, or even the vague soft set, which is the bipolar vague soft set. In addition, types of bipolar vague soft sets, as well as some new related concepts and operations, are established with examples. Moreover, properties of bipolar vague soft sets including absorption, commutative, associative, distributive, and De Morgan's laws are discussed in detail. Furthermore, a bipolar vague soft set-designed decision-making algorithm is provided, generalizing the Roy and Maji method. This allows making more effective decisions to choose the optimal alternative. Finally, an applied problem is introduced with a comparative analysis to illustrate how the proposed algorithm works more successfully than the previous models for problems that contain uncertain, ambiguous data.
This paper provides a novel model that is more relevant than the well-known conventional distributions: the two-parameter modified Kies Topp-Leone (MKTL) lifetime model. Compared to current distributions, the new one yields an unusually varied collection of probability functions. The density and hazard rate functions exhibit a variety of shapes, demonstrating that the model is flexible enough for several kinds of data. Multiple statistical characteristics have been obtained. To estimate the parameters of the MKTL model, we employed various estimation techniques, including maximum likelihood estimators (MLEs) and the Bayesian estimation approach. We compared the traditional reliability function model to the fuzzy reliability function model within the reliability analysis framework. A complete Monte Carlo simulation analysis is conducted to determine the precision of these estimators. The suggested model outperforms competing models in real-world applications and may be chosen as an enhanced model for building a statistical model for the COVID-19 data and other data sets with similar features.
The decision-making technique launched by Roy and Maji is considered an effective method for overcoming uncertainty and fuzziness in decision-making problems, though adapting it to reflect the vagueness of the problem parameters, as well as multibipolarity, is very difficult. So, in this article, bipolarity is interpolated into the multivague soft set of order n. This gives a new, more generalized, flexible, and applicable extension than the fuzzy soft model or any previous hybrid model: the bipolar-valued multivague soft model of dimension n. Moreover, types of bipolar-valued multivague soft sets of dimension n, as well as some new associated concepts and operations, are investigated with examples. Furthermore, properties of bipolar-valued multivague soft sets of dimension n, including absorption, commutative, associative, and distributive properties, as well as De Morgan's laws, are provided in detail. Finally, a bipolar-valued multivague soft set-designed decision-making algorithm, as well as a real-life example, are discussed, generalizing the Roy and Maji method.
In several experiments of survival analysis, the cause of death or failure of any subject may be characterized by more than one cause. Since the causes of failure may be dependent or independent, in this work we discuss the competing risk lifetime model under progressive type-II censoring, where the removals follow a binomial distribution. We consider the Akshaya lifetime failure model under independent causes, with the number of subjects removed at every failure time following a binomial distribution with known parameters. The classical and Bayesian approaches are used for the point and interval estimation of parameters and parametric functions. The Bayes estimate is obtained by using the Markov chain Monte Carlo (MCMC) method under symmetric and asymmetric loss functions. We apply the Metropolis-Hastings algorithm to generate MCMC samples from the posterior density function. A simulated data set is applied to diagnose the performance of the two techniques applied here. As a practical illustration of the model, the data represent the survival times of mice kept in a conventional germ-free environment, all of which were exposed to a fixed dose of radiation at the age of 5 to 6 weeks. There are three causes of death. In group 1, we considered thymic lymphoma to be the first cause and other causes to be the second. On the basis of the mice data, the mean survival time (cumulative incidence function) of mice for the second cause is higher than for the first cause.
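For context, the cause-specific quantities behind that comparison: with cause-specific hazard $h_j(t)$ for cause $j$ and overall survival $S(t)$, the cumulative incidence function is

$$F_j(t) = P(T \le t,\ \text{cause}=j) = \int_0^t h_j(u)\,S(u)\,du, \qquad S(t) = \exp\!\Big(-\sum_{j} \int_0^t h_j(u)\,du\Big),$$

so comparing cumulative incidence curves compares the absolute risk of each cause in the presence of the other.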
A new three-parameter extension of the generalized-exponential distribution, which has various hazard rates that can be increasing, decreasing, bathtub, or inverted tub, known as the Marshall-Olkin generalized-exponential (MOGE) distribution, has been considered. This article addresses the problem of estimating the unknown parameters and survival characteristics of the three-parameter MOGE lifetime distribution when the sample is obtained via progressive type-II censoring, using maximum likelihood and Bayesian approaches. Making use of the asymptotic normality of the classical estimators, two types of approximate confidence intervals are constructed via the observed Fisher information matrix. Using gamma conjugate priors, the Bayes estimators against the squared-error and linear-exponential loss functions are derived. As expected, the Bayes estimates are not available in closed form, so Markov chain Monte Carlo techniques are implemented to approximate the Bayes point estimates and to construct the associated highest posterior density credible intervals. The performance of the proposed estimators is evaluated via some numerical comparisons, and some specific recommendations are also made. We further discuss the issue of determining the optimum progressive censoring plan among different competing censoring plans using three optimality criteria. Finally, two real-life datasets are analyzed to demonstrate how the proposed methods can be used in real-life scenarios.
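For orientation, under progressive type-II censoring with ordered failure times $t_{(1)} < \cdots < t_{(m)}$ and prefixed removal counts $R_1,\dots,R_m$, the likelihood has the standard form

$$L(\vartheta) \propto \prod_{i=1}^{m} f\big(t_{(i)};\vartheta\big)\,\big[S\big(t_{(i)};\vartheta\big)\big]^{R_i},$$

into which the MOGE density $f$ and survival function $S$ are substituted; both the maximum likelihood and the MCMC machinery work from this same likelihood.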
This study aims to propose a flexible, fully parametric hazard-based regression model for censored time-to-event data with crossing survival curves. We call it the accelerated hazard (AH) model. The AH model can be written with or without a baseline distribution for lifetimes. The former assumption results in parametric regression models, whereas the latter results in semi-parametric regression models, which are by far the most commonly used in time-to-event analysis. However, under certain conditions, a parametric hazard-based regression model may produce more efficient estimates than a semi-parametric model. The parametric AH model, on the other hand, is inappropriate when the baseline distribution is exponential because it is constant over time; similarly, when the baseline distribution is the Weibull distribution, the AH model coincides with the accelerated failure time (AFT) and proportional hazard (PH) models. The use of a versatile parametric baseline distribution (generalized log-logistic distribution) for modeling the baseline hazard rate function is investigated. For the parameters of the proposed AH model, the classical (via maximum likelihood estimation) and Bayesian approaches using noninformative priors are discussed. A comprehensive simulation study was conducted to assess the performance of the proposed model’s estimators. A real-life right-censored gastric cancer dataset with crossover survival curves is used to demonstrate the tractability and utility of the proposed fully parametric AH model. The study concluded that the parametric AH model is effective and could be useful for assessing a variety of survival data types with crossover survival curves.
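In hazard terms, the AH structure rescales only the time axis of the baseline hazard,

$$h(t \mid \mathbf{x}) = h_0\!\big(t\,e^{\mathbf{x}^{\top}\boldsymbol{\beta}}\big),$$

which makes the degeneracies above immediate: a constant (exponential) baseline absorbs the time-scale change entirely, so covariates have no effect, while for a Weibull baseline $h_0(t) \propto t^{\alpha-1}$ the time-scale change factors out as a proportional multiplier, collapsing the AH model onto the PH and AFT forms.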
In this study, we consider a general, flexible, parametric hazard-based regression model for censored lifetime data with covariates and term it the “general hazard (GH)” regression model. Some well-known models, such as the accelerated failure time (AFT), and the proportional hazard (PH) models, as well as the accelerated hazard (AH) model accounting for crossed survival curves, are sub-classes of this general hazard model. In the proposed class of hazard-based regression models, a covariate’s effect is identified as having two distinct components, namely a relative hazard ratio and a time-scale change on hazard progression. The new approach is more adaptive to modelling lifetime data and could give more accurate survival forecasts. The nested structure that includes the AFT, AH, and PH models in the general hazard model may offer a numerical tool for identifying which of them is most appropriate for a certain dataset. In this study, we propose a method for applying these various parametric hazard-based regression models that is based on a tractable parametric distribution for the baseline hazard, known as the generalized log-logistic (GLL) distribution. This distribution is closed under all the PH, AH, and AFT frameworks and can incorporate all of the basic hazard rate shapes of interest in practice, such as decreasing, constant, increasing, V-shaped, unimodal, and J-shaped hazard rates. The Bayesian and frequentist approaches were used to estimate the model parameters. Comprehensive simulation studies were used to evaluate the performance of the proposed model’s estimators and its nested structure. A right-censored cancer dataset is used to illustrate the application of the proposed approach. The proposed model performs well on both real and simulation datasets, demonstrating the importance of developing a flexible parametric general class of hazard-based regression models with both time-independent and time-dependent covariates for evaluating the hazard function and hazard ratio over time.
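Concretely, the GH model combines the two covariate components as

$$h(t \mid \mathbf{x}) = h_0\!\big(t\,e^{\mathbf{x}^{\top}\boldsymbol{\alpha}}\big)\,e^{\mathbf{x}^{\top}\boldsymbol{\beta}},$$

so that $\boldsymbol{\alpha} = \mathbf{0}$ recovers the PH model, $\boldsymbol{\beta} = \mathbf{0}$ the AH model, and $\boldsymbol{\beta} = \boldsymbol{\alpha}$ the AFT model, which is the nested structure used for model discrimination.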
The purpose of this study is to propose a novel, general, tractable, fully parametric class of hazard-based and odds-based models of survival regression for the analysis of censored lifetime data, named the “Amoud class (AM)” of models. This generality is attained using a structure resembling the general class of hazard-based regression models, with the addition that the baseline odds function is multiplied by a link function. The class is broad enough to cover a number of widely used models, including the proportional hazard model, the general hazard model, the proportional odds model, the general odds model, the accelerated hazards model, the accelerated odds model, and the accelerated failure time model, as well as combinations of these. The proposed class accommodates the analysis of crossing survival curves. Based on a versatile parametric distribution (the generalized log-logistic) for the baseline hazard, we introduce a technique for applying these various hazard-based and odds-based regression models. This distribution allows us to cover the most common hazard rate shapes in practice (decreasing, constant, increasing, unimodal, and reversed unimodal), and various common survival distributions (Weibull, Burr-XII, log-logistic, exponential) are its special cases. The proposed model has good inferential features, and it performs well when different information criteria and likelihood ratio tests are used to select among hazard-based and odds-based regression models. The proposed model’s utility is demonstrated by an application to a right-censored lifetime dataset with crossing survival curves.
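On the odds side the class is built analogously. Writing $R_0(t) = (1 - S_0(t))/S_0(t)$ for the baseline odds of failure, the general odds member has the structure (a sketch of the form, mirroring the hazard-based case; the paper's exact link functions may differ)

$$R(t \mid \mathbf{x}) = R_0\!\big(t\,e^{\mathbf{x}^{\top}\boldsymbol{\alpha}}\big)\,e^{\mathbf{x}^{\top}\boldsymbol{\beta}},$$

where $\boldsymbol{\alpha} = \mathbf{0}$ gives the proportional odds model and $\boldsymbol{\beta} = \mathbf{0}$ gives the AFT model, since a pure time-scale change of the odds is equivalent to a time-scale change of the survival function.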
This article discusses the dynamics of the fractal double-chain deoxyribonucleic acid model. This structure contains two long elastic homogeneous strands that serve as two polynucleotide chains of deoxyribonucleic acid molecules, bound by an elastic membrane representing the hydrogen bonds between the base pairs of the two chains. The semi-inverse variational principle and the auxiliary equation method are employed to extract soliton solutions. The collection of retrieved exact solutions includes bright, dark, periodic, and other solitons. The constraint conditions that ensure the presence of these solutions emerge naturally. Additionally, 2D and 3D graphs showing the impact of fractals on the solutions are included; these plots use appropriate parameter values. Furthermore, a sensitivity analysis of the considered model is also provided. The outcomes reveal that these techniques are reliable, effective, and applicable to various biological systems.
Fits tractable, fully parametric odds-based regression models for survival data, including proportional odds (PO), accelerated failure time (AFT), accelerated odds (AO), and general odds (GO) models in overall survival frameworks. Given at least an R function specifying the survivor, hazard rate, and cumulative distribution functions, any user-defined parametric distribution can be fitted. We applied and evaluated at least seventeen (17) baseline distributions that can handle different failure rate shapes for each of the four proposed odds-based regression models. For more information see Bennett et al. (1983) <doi:10.1002/sim.4780020223> and Muse et al. (2022) <doi:10.1016/j.aej.2022.01.033>.
Numerous studies have attempted to explore the reliability of systems using masked data, though most of them have focused on basic series or parallel systems, where component failures are assumed to follow an exponential or Weibull distribution. However, most electronic products and systems are made up of numerous components integrated in parallel-series, series-parallel, and other bridge hybrid structures, and the number of studies in the area of accelerated life testing (ALT) employing masked data for hybrid systems is limited. In this paper, constant-stress ALT (CSALT) is explored based on a type-II progressive censoring scheme (TIIPCS) for a four-component hybrid system using the geometric process (GmP). The failure times of the components of the system are assumed to follow the generalized Pareto (GP) distribution. The maximum likelihood estimation (MLE) technique is used to establish statistical inference for the model's unknown parameters under t...
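For reference, the geometric process here is in the sense of Lam: a sequence of nonnegative random variables $\{X_k,\ k=1,2,\dots\}$ is a GmP with ratio $a > 0$ if

$$\{a^{k-1}X_k,\ k \ge 1\} \ \text{forms a renewal process},$$

so successive lifetimes shrink geometrically under increasing stress when $a > 1$ and grow when $a < 1$.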
A common practice for obtaining reliable regression results in the generalized linear model is the detection of influential cases. For the identification of influential cases, the present study empirically compares the performance of various existing residuals for the Poisson regression model. Furthermore, we compute Cook's distance for the stated residuals. To show the effectiveness of the proposed methodology, data were generated by simulation, and the applicability of the methodology is further shown with the help of real data that follow a Poisson regression. The comparative analysis of the residuals is carried out for the detection of influential cases.
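A minimal R sketch of this kind of comparison on hypothetical simulated data, using base R's Pearson residuals and Cook's distance (the paper's residual variants may differ, and the $4/n$ cutoff is only a common rule of thumb):

```r
# Influence screening in a Poisson GLM via residuals and Cook's distance
set.seed(42)
n <- 200
x <- rnorm(n)
y <- rpois(n, lambda = exp(0.4 + 0.6 * x))
y[c(5, 50)] <- y[c(5, 50)] + 15              # contaminate two cases
fit  <- glm(y ~ x, family = poisson)
r_p  <- residuals(fit, type = "pearson")     # raw Pearson residuals
r_sp <- rstandard(fit, type = "pearson")     # standardized Pearson residuals
cd   <- cooks.distance(fit)                  # Cook's distance per case
which(cd > 4 / n)                            # flag potentially influential cases
```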
Survival analysis is a collection of statistical techniques that examine the time it takes for an event to occur, and it is one of the most important fields in the biomedical sciences and a variety of other scientific disciplines. Furthermore, the rapid computational advancements of recent decades have encouraged the application of Bayesian techniques in this field, giving a powerful and flexible alternative to classical inference. The aim of this study is to consider Bayesian inference for the generalized log-logistic proportional hazard model, with applications to right-censored healthcare data sets. We assume independent gamma priors for the baseline hazard parameters, and a normal prior is placed on the regression coefficients. We then obtain the exact form of the joint posterior distribution of the regression coefficients and distributional parameters. The Bayesian estimates of the parameters of the proposed model are obtained using the Markov chain Monte Carlo (McMC) simulation technique. All computations are performed in Bayesian analysis using Gibbs sampling (BUGS) syntax that can be run with Just Another Gibbs Sampler (JAGS) from the R software. A detailed simulation study was used to assess the performance of the proposed parametric proportional hazard model. Two real survival data problems in healthcare are analyzed to illustrate the proposed model and for model comparison. Furthermore, convergence diagnostic tests are presented and analyzed. Finally, our research found that the proposed parametric proportional hazard model performs well and could be beneficial in analyzing various types of survival data.
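A minimal BUGS/JAGS sketch of such a model, assuming one common three-parameter GLL parameterization with hazard $h_0(t) = \alpha\kappa(\kappa t)^{\alpha-1}/\big(1+(\eta t)^{\alpha}\big)$ and cumulative hazard $H_0(t) = (\kappa/\eta)^{\alpha}\log\big(1+(\eta t)^{\alpha}\big)$ (this parameterization, the prior hyperparameters, and the data names `t`, `d`, `x` are assumptions, not necessarily the paper's), encoding the right-censored PH likelihood with the standard zeros trick:

```r
library(rjags)

model_string <- "
model {
  C <- 10000                                   # large constant for the zeros trick
  for (i in 1:n) {
    lp[i]   <- inprod(x[i, ], beta[])          # linear predictor x_i' beta
    lh0[i]  <- log(alpha * kappa * pow(kappa * t[i], alpha - 1) /
                   (1 + pow(eta * t[i], alpha)))               # log h0(t_i)
    cumh[i] <- pow(kappa / eta, alpha) * log(1 + pow(eta * t[i], alpha))
    # log-likelihood of a right-censored PH observation (d = 1 event, 0 censored)
    ll[i] <- d[i] * (lh0[i] + lp[i]) - cumh[i] * exp(lp[i])
    zeros[i] ~ dpois(C - ll[i])                # zeros trick
  }
  alpha ~ dgamma(0.01, 0.01)                   # independent gamma priors on the
  kappa ~ dgamma(0.01, 0.01)                   # baseline hazard parameters
  eta   ~ dgamma(0.01, 0.01)
  for (j in 1:p) { beta[j] ~ dnorm(0, 0.001) } # normal priors on coefficients
}"
# jags.model() would be called with a data list that includes zeros = rep(0, n)
```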
Flexible parametric accelerated hazards (AH) regression models in overall and relative survival frameworks with 13 distinct baseline distributions. The AH model can also be applied to lifetime data with crossed survival curves. Any user-defined parametric distribution can be fitted, given at least an R function defining the cumulative hazard and hazard rate functions.
The goal of this paper is to develop an optimal statistical model to analyze COVID-19 data, in order to model and analyze the COVID-19 mortality rates in Somalia. Combining the log-logistic distribution and the tangent function yields the flexible extension log-logistic tangent (LLT) distribution, a new two-parameter distribution. This new distribution has a number of excellent statistical and mathematical properties, including a simple failure rate function, reliability function, and cumulative distribution function. Maximum likelihood estimation (MLE) is used to estimate the unknown parameters of the proposed distribution. Numerical and visual results of a Monte Carlo simulation are obtained to evaluate the use of the MLE method. In addition, the LLT model is compared to well-known two-parameter, three-parameter, and four-parameter competitors: Gompertz, log-logistic, kappa, exponentiated log-logistic, Marshall–Olkin log-logistic, Kumaraswamy log-logistic, and beta log-logist...
The generalized log-logistic distribution is especially useful for modelling survival data with variable hazard rate shapes because it extends the log-logistic distribution by adding an extra parameter to the classical distribution, resulting in greater flexibility in analyzing and modelling various data types. We derive the fundamental mathematical and statistical properties of the proposed distribution in this paper. Many well-known lifetime special submodels are included in the proposed distribution, including the Weibull, log-logistic, exponential, and Burr XII distributions. The maximum likelihood method was used to estimate the unknown parameters of the proposed distribution, and a Monte Carlo simulation study was run to assess the estimators’ performance. This distribution is significant because it can model both monotone and nonmonotone hazard rate functions, which are quite common in survival and reliability data analysis. Furthermore, the proposed distribution’s flexibilit...
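Under one parameterization used in this line of work (stated here as an assumption), the GLL survival and hazard functions are

$$S(t) = \big[1+(\eta t)^{\alpha}\big]^{-(\kappa/\eta)^{\alpha}}, \qquad h(t) = \frac{\alpha\kappa(\kappa t)^{\alpha-1}}{1+(\eta t)^{\alpha}}, \qquad t>0,$$

which reduces to the log-logistic at $\eta = \kappa$, tends to the Weibull as $\eta \to 0$, gives the exponential when additionally $\alpha = 1$, and yields the Burr XII under a rescaling; this nesting is what drives the submodel structure described above.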
There is a long history of interest in Poisson regression modelling across different fields of study. The focus of this work is on handling the issues that occur after modelling count data. For the prediction and analysis of count data, it is valuable to study the factors that influence the performance of the model and the decisions based on its analysis. In regression analysis, multicollinearity and influential observations, separately and jointly, affect model estimation and inference. In this article, we focus on multicollinearity and influential observations simultaneously. To evaluate the reliability and quality of regression estimates and to overcome problems in model fitting, we propose new diagnostic methods based on the Sherman–Morrison–Woodbury (SMW) theorem to detect influential observations, using approximate deletion formulas for the Poisson regression model with the Liu estimator. A Monte Carlo method is used for the assessment of the proposed diagn...
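For orientation, with $S = X^{\top}\hat{W}X$ from the final IRLS step, the Poisson-Liu estimator is typically written as

$$\hat{\beta}_d = (S + I)^{-1}(S + dI)\,\hat{\beta}_{\mathrm{ML}}, \qquad 0 < d < 1,$$

and the Sherman–Morrison–Woodbury identity matters because deleting observation $i$ perturbs $S$ by the rank-one term $w_i x_i x_i^{\top}$, so $(S + I)^{-1}$ can be updated in closed form without refitting the model for each case.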
For the first time, and using a complete sample, we discuss the estimation of the unknown parameters θ₁, θ₂, and β and the stress-strength system reliability R = P(Y < X) for exponentiated inverted Weibull (EIW) distributions with a common scale parameter, using eight methods. We use the maximum likelihood method, maximum product of spacing estimation (MPSE), minimum spacing absolute-log distance estimation (MSALDE), least squares estimation (LSE), weighted least squares estimation (WLSE), the Cramér-von Mises estimation method (CME), and Anderson-Darling estimation (ADE), where X and Y are two independent scaled exponentiated inverted Weibull (EIW) random variables. Percentile bootstrap and bias-corrected percentile bootstrap confidence intervals are introduced. To pick the better method of estimation, we use a Monte Carlo simulation study to compare the efficiency of the various suggested estimators using mean square error and interval length criteria. From ca...
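The common scale parameter is what makes $R$ tractable here: if $X$ and $Y$ are independent with exponentiated CDFs $F_X = G^{\theta_1}$ and $F_Y = G^{\theta_2}$ sharing the same baseline $G$ (the inverted Weibull with common $\beta$), then

$$R = P(Y < X) = \int_0^{\infty} F_Y(x)\,dF_X(x) = \theta_1 \int_0^1 u^{\theta_1+\theta_2-1}\,du = \frac{\theta_1}{\theta_1+\theta_2},$$

so every estimation method reduces to estimating the two shape parameters.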
Non-linear time series and linear models were not designed to detect probabilistic processes that are characterized by the velocity and drift associated with returns, in the way the Ornstein-Uhlenbeck stochastic process describes the diffusion and velocity associated with series or waves influenced by Brownian motion or a Lévy process. In this research, Brownian motion and a Lévy process were conflated as the driving force for an Ornstein-Uhlenbeck process, with its solution applied to Naira-Dollar exchange rates from 2009-2019. The drift and diffusion estimates for the Ornstein-Uhlenbeck process driven by Brownian motion and the Lévy process are realizations of an AR(1) process, with values 2.991 and 0.1672, respectively. The AR(1) realization for the Ornstein-Uhlenbeck process was stationary, with an estimated coefficient of 0.7204 whose characteristic root lies outside the unit circle. The AIC, BIC, RMSE, and MSE for the Ornstein-Uhlenbeck process were estimated to be 483.7572, 483.4782, 0.00101, and 8.395, respectively, compared to estimates of the same indexes for the AR(1) of 767.5, 634.09, 0.3819, and 23.48. The criteria based on the residuals from the Ornstein-Uhlenbeck process were smaller, which indicates that the errors incurred in using the drift, Brownian motion, and x_{t-1} to estimate x_t are relatively small under the Ornstein-Uhlenbeck process.
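The AR(1) reading of the model comes from the exact discretization of the Ornstein-Uhlenbeck dynamics: for $dX_t = \theta(\mu - X_t)\,dt + \sigma\,dB_t$ observed at spacing $\Delta$,

$$X_{t+\Delta} = \mu\big(1-e^{-\theta\Delta}\big) + e^{-\theta\Delta}X_t + \varepsilon_t,$$

so the fitted AR(1) coefficient (here 0.7204) estimates $e^{-\theta\Delta}$, and in the Brownian case the innovations are Gaussian with variance $\sigma^2\big(1-e^{-2\theta\Delta}\big)/(2\theta)$; the Lévy-driven version keeps the same autoregressive structure with non-Gaussian innovations.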