
    Hossein Moosaei

    COVID-19 has caused many deaths worldwide. The automation of the diagnosis of this virus is highly desired. Convolutional neural networks (CNNs) have shown outstanding classification performance on image datasets. To date, it appears that COVID computer-aided diagnosis systems based on CNNs and clinical information have not yet been analysed or explored. We propose a novel method, named the CNN-AE, to predict the survival chance of COVID-19 patients using a CNN trained with clinical information. Notably, the required resources to prepare CT images are expensive and limited compared to those required to collect clinical data, such as blood pressure, liver disease, etc. We evaluated our method using a publicly available clinical dataset that we collected. The dataset properties were carefully analysed to extract important features and compute the correlations of features. A data augmentation procedure based on autoencoders (AEs) was proposed to balance the dataset. The experimental re...
    Data classification by support vector hyperspheres, a method competitive with other classifiers, usually leads to non-convex problems. In this paper, we present a new method for classifying data based on the use of separating hyperspheres. Our main idea is to linearize the constraints of the problem to get rid of the non-convexity.
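    A hypersphere classifier, in its crudest form, encloses one class in a ball and labels points by membership. The sketch below is only a toy stand-in (centroid center, farthest-point radius) to fix the geometric idea; the separating-hypersphere program and the linearized constraints described in the abstract are not reproduced.

```python
import math

def fit_sphere(points):
    """Crudest one-class hypersphere: centroid center, radius reaching the
    farthest training point.  A toy stand-in only; the paper's linearized
    separating-hypersphere formulation is not implemented here."""
    cx = sum(x for x, _ in points) / len(points)
    cy = sum(y for _, y in points) / len(points)
    r = max(math.hypot(x - cx, y - cy) for x, y in points)
    return (cx, cy), r

def inside(sphere, p):
    """Classify a point by membership in the fitted sphere."""
    (cx, cy), r = sphere
    return math.hypot(p[0] - cx, p[1] - cy) <= r

pos = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
sphere = fit_sphere(pos)
```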
    The harmonic projection method can be used to find interior eigenpairs of large matrices. Given a target point or shift t to which the needed interior eigenvalues are close, the desired interior eigenpairs are the eigenvalues nearest t and the associated eigenvectors. In this paper, we present a new algorithm, called the weighted harmonic projection algorithm, for computing the eigenvalues of a nonsymmetric matrix. The implementation of the algorithm has been tested on numerical examples; the results show that the algorithm converges fast and works with high accuracy.
    We discuss some basic concepts and present a numerical procedure for finding the minimum-norm solution of convex quadratic programs (QPs) subject to linear equality and inequality constraints. Our approach is based on a theorem of alternatives and on a convenient characterization of the solution set of convex QPs. We show that this problem can be reduced to a simple constrained minimization problem with a once-differentiable convex objective function. We use finite termination of an appropriate Newton’s method to solve this problem. Numerical results show that the proposed method is efficient.
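    The simplest instance of the minimum-norm problem the abstract studies is a convex QP whose solution set is an affine subspace: among all solutions of a consistent underdetermined system Ax = b, pick the one of least Euclidean norm. The sketch below shows only that flavor via the pseudoinverse; the paper's reduction to a once-differentiable convex problem and its Newton solver are not reproduced.

```python
import numpy as np

# Least-norm solution of a consistent underdetermined system A x = b,
# the simplest case of minimum-norm selection over a QP solution set.
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])
b = np.array([2.0, 2.0])

x_min = np.linalg.pinv(A) @ b                   # minimum-norm solution
x_other = x_min + np.array([1.0, -1.0, 1.0])    # null-space shift: also solves Ax=b
```

Both vectors satisfy the constraints, but the pseudoinverse solution has strictly smaller norm.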
    This work focuses on the optimal correction of infeasible systems of linear equalities. In this paper, to correct such a system, we make changes only in the coefficient matrix, using the l2 norm, and show that solving this problem is equivalent to solving a fractional quadratic problem. To solve this problem, we use a genetic algorithm. Some examples are provided to illustrate the efficiency and validity of the proposed method.
    Abstract (translated from Persian). One of the topics of both theoretical and applied interest to researchers is finding the minimum-norm solution of a problem. A system of linear equations may, in general, have more than one solution; in that case, computing the best choice among them, namely the solution of minimum norm, is of interest. In this paper, the minimum-norm solution of such a system is studied and computed. By applying the augmented Lagrangian method, the problem is reduced to an unconstrained optimization problem whose objective function is once differentiable, and a Newton-type method combined with smoothing techniques is used to solve it. Numerical results on problems of various sizes indicate the efficiency of the proposed approach.
    In this paper, we propose a method for solving the twin bounded support vector machine (TBSVM) for binary classification. To do so, we use the augmented Lagrangian (AL) optimization method and a smoothing technique to obtain new unconstrained smooth minimization problems for TBSVM classifiers. At first, the augmented Lagrangian method is employed to convert TBSVM into unconstrained minimization programming problems, called AL-TBSVM. We solve the primal programming problems of AL-TBSVM by converting them into smooth unconstrained minimization problems. Then, the smooth reformulations of AL-TBSVM, which we call AL-STBSVM, are solved by the well-known Newton's algorithm. Finally, experimental results on artificial and several University of California Irvine (UCI) benchmark data sets are provided, along with a statistical analysis, to show the superior performance of our method in terms of classification accuracy and learning speed.
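    Smooth reformulations of SVM-type problems typically replace the nonsmooth plus function (x)+ = max(x, 0) by a differentiable surrogate. The function below is one standard such smoothing (the log-exponential approximation used in smooth SVM literature); whether this exact surrogate is the one used in AL-STBSVM is an assumption, not a claim from the abstract.

```python
import math

def plus(x):
    """The plus function (x)_+ = max(x, 0); nonsmooth at 0."""
    return max(x, 0.0)

def smooth_plus(x, alpha=10.0):
    """Smooth approximation  x + (1/alpha) * log(1 + exp(-alpha * x)).

    Infinitely differentiable and converges to (x)_+ as alpha -> inf.
    A standard smoothing choice (assumed here, not taken from the paper).
    """
    t = -alpha * x
    # numerically stable log(1 + e^t) = max(t, 0) + log1p(e^{-|t|})
    return x + (max(t, 0.0) + math.log1p(math.exp(-abs(t)))) / alpha
```

Away from the kink the approximation error decays exponentially in alpha; at x = 0 it equals log(2)/alpha.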
    We study the optimum correction of infeasible systems of linear inequalities through making minimal changes in the coefficient matrix and the right-hand side vector by using the Frobenius norm. It leads to a special structured unconstrained nonlinear and nonconvex problem, which can be reformulated as a one-dimensional parametric minimization problem such that each objective function corresponds to a trust region subproblem. We show that, under some assumptions, the parametric function is differentiable and strictly unimodal. We present optimality conditions, propose lower and upper bounds on the optimal value, and discuss attainability of the optimal value. To solve the original problem, we propose a binary search method accompanied by a type of Newton–Lagrange method for solving the subproblem. The numerical results illustrate the effectiveness of the suggested method.
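    A strictly unimodal differentiable one-dimensional function can be minimized by binary search on the sign of its derivative. The sketch below illustrates only that binary-search step, with a central-difference derivative; the paper's trust-region subproblems and the Newton–Lagrange inner solver are not reproduced.

```python
def bisect_unimodal(f, lo, hi, tol=1e-8, h=1e-6):
    """Minimize a differentiable, strictly unimodal f on [lo, hi] by
    bisection on the sign of a central-difference derivative estimate.
    Illustrative only; the paper's parametric objective is not used."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        slope = (f(mid + h) - f(mid - h)) / (2.0 * h)
        if slope > 0.0:
            hi = mid      # minimizer lies to the left
        else:
            lo = mid      # minimizer lies to the right
    return 0.5 * (lo + hi)

x_star = bisect_unimodal(lambda t: (t - 2.0) ** 2 + 1.0, 0.0, 5.0)
```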
    The support vector classification-regression machine for K-class classification (K-SVCR) is a novel multi-class classification method based on the “1-versus-1-versus-rest” structure. In this paper, we propose a least squares version of K-SVCR named LSK-SVCR. As in the K-SVCR algorithm, this method evaluates all the training data within a “1-versus-1-versus-rest” structure, so that the algorithm generates ternary outputs {−1, 0, +1}. In LSK-SVCR, the solution of the primal problem is computed by solving only one system of linear equations instead of solving the dual problem, which is a convex quadratic programming problem in K-SVCR. Experimental results on several benchmark, MC-NDC, and handwritten digit recognition data sets show that LSK-SVCR not only achieves better classification accuracy than the K-SVCR and Twin-KSVC algorithms but also has remarkably higher learning speed.
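    The "one linear system instead of a dual QP" idea is the same one that distinguishes least-squares SVMs from standard SVMs. The sketch below shows it for a plain binary Suykens-style LS-SVM with a linear kernel; the ternary LSK-SVCR formulation itself is not reproduced, so this is an analogy, not the paper's method.

```python
import numpy as np

def lssvm_fit(X, y, gamma=10.0):
    """Binary least-squares SVM (Suykens-style), linear kernel: the dual QP
    of a standard SVM is replaced by ONE linear KKT system.  y in {-1, +1}.
    Illustrates the 'single linear solve' idea only, not LSK-SVCR itself."""
    n = X.shape[0]
    K = X @ X.T                                  # linear-kernel Gram matrix
    Omega = (y[:, None] * y[None, :]) * K        # Omega_ij = y_i y_j K_ij
    M = np.zeros((n + 1, n + 1))
    M[0, 1:] = y                                 # KKT system:
    M[1:, 0] = y                                 # [[0, y^T], [y, Omega + I/gamma]]
    M[1:, 1:] = Omega + np.eye(n) / gamma        # times [b; alpha] = [0; 1]
    sol = np.linalg.solve(M, np.concatenate(([0.0], np.ones(n))))
    b, alpha = sol[0], sol[1:]
    w = X.T @ (alpha * y)                        # primal weights (linear kernel)
    return w, b

X = np.array([[0.0, 0.0], [0.0, 1.0], [3.0, 3.0], [4.0, 3.0]])
y = np.array([-1.0, -1.0, 1.0, 1.0])
w, b = lssvm_fit(X, y)
pred = np.sign(X @ w + b)
```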
    The support vector classification-regression machine for K-class classification (K-SVCR) is a novel multi-class classification method based on the “1-versus-1-versus-rest” structure. In this paper, we propose a least squares version of K-SVCR named LSK-SVCR. As in the K-SVCR algorithm, this method evaluates all the training data within a “1-versus-1-versus-rest” structure, so that the algorithm generates ternary outputs {−1, 0, +1}. In LSK-SVCR, the solution of the primal problem is computed by solving only one system of linear equations instead of solving the dual problem, which is a convex quadratic programming problem in K-SVCR. Experimental results on several benchmark data sets show that the LSK-SVCR has better performance in terms of predictive accuracy and learning speed.
    The harmonic projection method can be used to find interior eigenpairs of large matrices. Given a target point or shift σ to which the needed interior eigenvalues are close, the desired interior eigenpairs are the eigenvalues nearest σ and the associated eigenvectors. In this article we use the harmonic projection algorithm for computing the interior eigenpairs of a large unsymmetric generalized eigenvalue problem. Keywords: eigenvalue, unsymmetric matrix, harmonic, Arnoldi, shift-and-invert. 1 Introduction. The eigenvalue problem is one of the most important subjects in the applied sciences and engineering. One of the most important and practical topics in computational mathematics is computing some of the interior generalized eigenvalues close to a target point or shift σ and the associated eigenvectors. Consider the large unsymmetric generalized eigenproblem CX_i = θ_i BX_i (1), where C and B are N×N large matrices; we are interested in computing some eigenvalues close to a given shift
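    The core trick behind interior eigensolvers, including the harmonic projection family named in the keywords, is shift-and-invert: eigenvalues of A nearest a shift σ become the dominant eigenvalues of (A − σI)^{-1}. The minimal dense sketch below shows only that mechanism via shift-invert power iteration on the standard eigenproblem; the weighted harmonic projection algorithm itself is not reproduced.

```python
import numpy as np

def shift_invert_eig(A, sigma, iters=200, seed=0):
    """Shift-and-invert power iteration: converges to the eigenpair of A
    whose eigenvalue is nearest the shift sigma.  Minimal dense sketch of
    the shift-invert idea only; not the paper's projection algorithm."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    M = A - sigma * np.eye(n)
    v = rng.standard_normal(n)
    for _ in range(iters):
        v = np.linalg.solve(M, v)        # one shift-invert step
        v /= np.linalg.norm(v)
    theta = v @ A @ v                    # Rayleigh-quotient estimate (v is unit)
    return theta, v

A = np.diag([1.0, 2.0, 5.0, 9.0])
theta, v = shift_invert_eig(A, sigma=4.6)   # nearest eigenvalue is 5.0
```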
    Today, the use of artificial intelligence methods to diagnose and predict infectious and non-infectious diseases has attracted much attention. COVID-19 is a new virus which has caused many deaths worldwide. Due to the pandemic nature of COVID-19, automated tools for its clinical diagnosis are highly desired. Convolutional Neural Networks (CNNs) have shown outstanding classification performance on image datasets. To the best of our knowledge, COVID computer-aided diagnosis systems based on CNNs and clinical information have never been analyzed or explored to date. Moreover, most of the existing literature on COVID-19 focuses on distinguishing infected individuals from non-infected ones. In this paper, we propose a novel method named CNN-AE to predict the survival chance of COVID-19 patients using a CNN trained on clinical information. To further increase the prediction accuracy, we use the CNN in combination with an autoencoder. Our method is one of th...
    The first known case of Coronavirus disease 2019 (COVID-19) was identified in December 2019. It has spread worldwide, leading to an ongoing pandemic that has imposed restrictions and costs on many countries. Predicting the number of new cases and deaths during this period can be a useful step in predicting the costs and facilities required in the future. The purpose of this study is to predict new case and death rates one, three, and seven days ahead during the next 100 days. The motivation for predicting every n days (instead of just every day) is to investigate the possibility of reducing computational cost while still achieving reasonable performance. Such a scenario may be encountered in real-time forecasting of time series. Six different deep learning methods are examined on data adopted from the WHO website. Three methods are LSTM, Convolutional LSTM, and GRU. The bidirectional extension is then considered for each method to forecast the rate of new cases and new deaths in Aus...
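    An n-day-ahead setup like the 1/3/7-day horizons above is usually implemented by sliding a lookback window over the series and pairing each window with the value `horizon` days past its end. The helper below is hypothetical preprocessing in that spirit (the names and parameters are ours, not the paper's); the deep models themselves are omitted.

```python
import numpy as np

def make_windows(series, lookback, horizon):
    """Build (input, target) pairs for horizon-day-ahead forecasting:
    each input is `lookback` consecutive values, each target the value
    `horizon` steps after the window ends.  Hypothetical helper, not
    taken from the paper."""
    X, y = [], []
    for i in range(len(series) - lookback - horizon + 1):
        X.append(series[i:i + lookback])
        y.append(series[i + lookback + horizon - 1])
    return np.array(X), np.array(y)

daily_cases = np.arange(10.0, 30.0)      # stand-in for a WHO daily-count series
X, y = make_windows(daily_cases, lookback=5, horizon=3)
```

For this toy series, the first window [10..14] is paired with the value three days after it ends, i.e. day index 7.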
    In this paper, we study the optimum correction of the absolute value equations through making minimal changes in the coefficient matrix and the right-hand side vector, using the spectral norm. This problem can be formulated as a non-differentiable, non-convex, and unconstrained fractional quadratic programming problem. Regularized least squares is applied to stabilize the solution of the fractional problem. The regularized problem is reduced to a unimodal single-variable minimization problem, and a bisection algorithm is proposed to solve it. The main difficulty of the algorithm is a complicated constrained optimization problem, for which two novel methods are suggested. We also present optimality conditions and bounds for the norm of the optimal solutions. Numerical experiments are given to demonstrate the effectiveness of the suggested methods.
    As a development of the ν-support vector machine (ν-SVM), the parametric-margin ν-support vector machine (Par-ν-SVM) can be useful in many cases, especially heteroscedastic noise classification problems. The present article proposes a novel and fast method to solve the primal problem of Par-ν-SVM (named DC-Par-ν-SVM), whereas Par-ν-SVM maximizes the parametric margin by solving a dual quadratic programming problem. In fact, the primal non-convex problem is converted into an unconstrained problem whose objective function is expressed as a difference of convex functions (DC). A DC algorithm (DCA) based on the generalized Newton's method is proposed to solve this unconstrained problem. Numerical experiments performed on several artificial, real-life, UCI, and NDC data sets showed the superiority of DC-Par-ν-SVM in terms of both accuracy and learning speed.
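    The DCA iteration referenced above has a simple generic form: write the objective as f = g − h with g, h convex, then repeatedly linearize h at the current point and minimize the convex surrogate g(x) − h'(x_k)·x. The one-dimensional sketch below shows only that generic scheme on a toy split f(x) = x⁴ − x²; the paper's DC decomposition of the Par-ν-SVM objective and its generalized-Newton inner solver are not reproduced.

```python
def dca(g_argmin_linear, grad_h, x0, iters=100):
    """Generic 1-D difference-of-convex algorithm: at each step linearize
    the concave part -h at x_k and minimize g(x) - grad_h(x_k) * x."""
    x = x0
    for _ in range(iters):
        x = g_argmin_linear(grad_h(x))
    return x

# Toy example: f(x) = x**4 - x**2, split as g(x) = x**4 minus h(x) = x**2.
# argmin_x { x**4 - s*x } solves 4x^3 = s, i.e. x = sign(s)*(|s|/4)**(1/3).
g_argmin = lambda s: (1.0 if s >= 0 else -1.0) * (abs(s) / 4.0) ** (1.0 / 3.0)
grad_h = lambda x: 2.0 * x
x_star = dca(g_argmin, grad_h, x0=1.0)   # converges to a stationary point
```

Stationary points of f satisfy 4x³ = 2x; starting from x0 = 1, the iterates settle at x = 1/√2, a local minimizer of f.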
    This paper studies overdetermined systems of absolute value equations (AVE) where the data are contaminated by noise. The primary goal of this paper is to present a class of smoothing methods for optimal error correction of AVE. To restore the feasibility of this nonlinear system, we adopt the minimal correction under the l2 norm by applying changes to the right-hand side vector. We demonstrate that this problem can be formulated as an unconstrained optimization problem with a quadratic objective function. We then outline the algorithm and convergence analysis and propose a Newton-type method for solving the unconstrained optimization problem. Numerical results are provided to reveal the effectiveness of our method.
    The first integral method is an efficient method for obtaining exact solutions of some nonlinear partial differential equations. In this paper, the first integral method is used to construct exact solutions of the perturbed nonlinear Schrödinger's equation (NLSE) with Kerr law nonlinearity. It is shown that the proposed method is effective and general.
    In the present work, the simplest equation method is used to construct exact solutions of the DS-I and DS-II equations. The simplest equation method is a powerful solution method for obtaining exact solutions of nonlinear evolution equations. This method can be applied to nonintegrable equations as well as to integrable ones.
    One effective technique that has recently been considered for solving classification problems is parametric ν-support vector regression. This method obtains a concurrent learning framework for both margin determination and function approximation and leads to a convex quadratic programming problem. In this paper we introduce a new idea that converts this problem into an unconstrained convex problem. Moreover, we propose an extension of Newton’s method for solving the unconstrained convex problem. We compare the accuracy and efficiency of our method with support vector machines and parametric ν-support vector regression methods. Experimental results on several UCI benchmark data sets indicate the high efficiency and accuracy of this method.
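    The building block of such Newton-based solvers is the plain Newton step x ← x − H(x)⁻¹∇f(x) on an unconstrained smooth convex objective. The sketch below shows only that step on a toy quadratic; the paper's extension (generalized Hessians for the piecewise-quadratic SVR objective, finite termination) is not reproduced.

```python
import numpy as np

def newton(grad, hess, x0, tol=1e-10, max_iter=50):
    """Plain Newton iteration for an unconstrained smooth convex problem.
    Minimal sketch only; not the paper's extended Newton method."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        x = x - np.linalg.solve(hess(x), g)   # Newton step
    return x

# Toy objective: f(x) = (x0 - 1)^2 + 10 * (x1 + 2)^2 (strictly convex).
grad = lambda x: np.array([2.0 * (x[0] - 1.0), 20.0 * (x[1] + 2.0)])
hess = lambda x: np.array([[2.0, 0.0], [0.0, 20.0]])
x_star = newton(grad, hess, [0.0, 0.0])
```

On a quadratic, the iteration terminates at the exact minimizer after a single step, which is the finite-termination behavior Newton-type SVM solvers exploit.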
    Source localization and target tracking are among the most challenging problems in wireless sensor networks (WSN). Most of the state-of-the-art solutions are complicated and do not meet the processing and memory limitations of the existing low-cost sensor nodes. In this paper, we propose computationally-cheap solutions based on the support vector machine (SVM) and twin SVM (TWSVM) learning algorithms in which network nodes firstly detect the desired signal. Then, the network is trained to specify the nodes in the vicinity of the source (or target); hence, the region of event is detected. Finally, the centroid of the event region is considered as an estimation of the source location. The efficiency of the proposed methods is shown by simulations.
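    The final step described above, taking the centroid of the detected event region as the location estimate, is easy to sketch. In the code below, the vicinity flags are simply given; in the paper they would come from the trained SVM/TWSVM classifier, which is not reproduced here.

```python
def centroid_estimate(nodes, in_vicinity):
    """Estimate the source location as the centroid of the sensor nodes
    flagged as lying in the vicinity of the source.  The flags stand in
    for the SVM/TWSVM classifier output, which is not implemented here."""
    hits = [p for p, flag in zip(nodes, in_vicinity) if flag]
    n = len(hits)
    return (sum(x for x, _ in hits) / n, sum(y for _, y in hits) / n)

nodes = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0), (5.0, 5.0)]
in_vicinity = [True, True, True, True, False]   # assumed classifier output
est = centroid_estimate(nodes, in_vicinity)
```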
    Absolute value equations (AVE) provide a useful tool for optimization as they subsume many mathematical programming problems. However, in some applications, it is difficult to determine the exact values of the problem data, and the data may contain errors. Finding a solution for AVE based on erroneous data using existing approaches might yield a meaningless solution. In this paper, robust optimization, which represents errors in the problem data, is used. We prove that a robust solution can be obtained by solving a robust counterpart problem, which is equivalent to a second-order cone program. The results also show that robust solutions can significantly improve the performance of solutions, especially when the size of the errors in the problem is large.
    The aim of this paper is to propose a method for solving the fully fuzzy Sylvester equation (FFSE), in which the coefficient and right-hand side matrices are fuzzy. By using α-cuts, the FFSE is transformed into a generalized Sylvester matrix equation, and then into a crisp system of linear equations. Existence and uniqueness of a solution to this system are investigated. Two numerical examples are given to illustrate the proposed method.
    The main goal of this paper is to compute the solution to the NP-hard absolute value equations (AVEs) Ax − |x| = b when the singular values of A exceed 1. First, we show that the AVE is equivalent to a bilinear programming problem, and then we present a system equivalent to this problem. We use the simulated annealing (SA) algorithm to solve this system. Finally, several examples are given to illustrate the implementation and efficiency of the proposed method.
    Cardiovascular disease is one of the leading causes of death around the world and a major illness in middle and old age. Coronary artery disease, in particular, is a widespread cardiovascular malady entailing high mortality rates. Angiography is generally regarded as the best method for the diagnosis of coronary artery disease; on the other hand, it is associated with high costs and major side effects. Much research has therefore been conducted using machine learning and data mining to seek alternative modalities. Accordingly, we herein propose a highly accurate hybrid method for the diagnosis of coronary artery disease. The proposed method is able to increase the performance of a neural network by approximately 10% by enhancing its initial weights using a genetic algorithm, which suggests better initial weights for the neural network. Using this methodology, we achieved accuracy, sensitivity, and specificity rates of 93.85%, 97%, and 92%, respectively, on the Z-Alizadeh Sani dataset.
    We investigate the optimum correction of an absolute value equation by minimally changing the coefficient matrix and right-hand side vector using Tikhonov regularization. Solving this problem is equivalent to minimizing the sum of fractional quadratic and quadratic functions. The primary difficulty with this problem is its nonconvexity. Nonetheless, we show that a global optimal solution to this problem can be found by solving an equation on a closed interval using the subgradient method. Some examples are provided to illustrate the efficiency and validity of the proposed method.
    In this study, the computations required to solve large-scale linear programming problems are compared on two operating systems, Linux and Windows 7 (Win), using two different methods. The first method, relying on interior-point methods, used the linear-programming interior-point solver (LIPSOL) software; the second, relying on an algorithm based on the augmented Lagrangian method, used the generalized derivative. Computations performed on various randomly generated problems indicate that the Linux OS is the more efficient of the two in terms of accuracy and memory usage.
    In this paper, we introduce and analyze two new methods for solving the NP-hard absolute value equations (AVE) Ax − |x| = b, where A is an arbitrary n × n real matrix and b ∈ R^n, in the case where the singular values of A exceed 1. A comparison with other known methods is carried out to show the effectiveness of the proposed methods on a variety of randomly generated problems. The ideas and techniques of this paper may stimulate further research.
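    In the regime where the singular values of A exceed 1, the classical baseline for this AVE is Mangasarian's generalized Newton iteration, x_{k+1} = (A − D(x_k))⁻¹ b with D(x) = diag(sign(x)). The sketch below shows that standard baseline; it is not claimed to be either of the two new methods the abstract introduces.

```python
import numpy as np

def gn_ave(A, b, iters=50):
    """Generalized Newton iteration for the AVE  Ax - |x| = b:
        x_{k+1} = (A - D(x_k))^{-1} b,   D(x) = diag(sign(x)).
    Mangasarian's classical method for singular values of A exceeding 1;
    shown as a standard baseline, not as the paper's new methods."""
    x = np.zeros(A.shape[0])
    for _ in range(iters):
        x = np.linalg.solve(A - np.diag(np.sign(x)), b)
    return x

A = np.array([[4.0, 1.0],
              [0.0, 3.0]])                 # singular values are > 1
x_true = np.array([1.0, -2.0])
b = A @ x_true - np.abs(x_true)            # build a b with known solution
x = gn_ave(A, b)
```

Once the sign pattern of the iterate matches that of the solution, the iteration reduces to a single linear solve and terminates exactly.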
    In this paper we study the optimum correction of the absolute value equation through making minimal changes in the coefficient matrix and the right-hand side using the l2 norm. Solving this problem is equivalent to solving a nonconvex and fractional quadratic problem. To solve this problem, we use a genetic algorithm. Our computational results show that this method is efficient, with high accuracy.
    The main objective of this study is to discuss the optimum correction of linear inequality systems and absolute value equations (AVE). In this work, a simple and efficient feasible direction method is provided for solving the two fractional nonconvex minimization problems that result from the optimal correction of a linear system. We show that, in some special, but frequently encountered, cases, we can solve convex optimization problems instead of not-necessarily-convex fractional problems. By using the method of feasible directions, we then solve the optimal correction problem. Some examples are provided to illustrate the efficiency and validity of the proposed method.
    In this paper, we give an algorithm to compute the minimum norm solution to the absolute value equation (AVE) in a special case. We show that this solution can be obtained from theorems of the alternative and a useful characterization of the solution sets of convex quadratic programs. By using an exterior penalty method, this problem can be reduced to an unconstrained minimization problem with a once-differentiable convex objective function. We also propose a quasi-Newton method for solving the unconstrained optimization problem. Computational results show that convergence to high accuracy often occurs in just a few iterations.
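    The exterior-penalty reduction mentioned above replaces a constrained problem by an unconstrained one whose penalty term vanishes on the feasible set. The toy below shows only that mechanism in closed form for min x² s.t. x ≥ 1; the AVE-specific objective and the quasi-Newton solver are not reproduced.

```python
def penalty_min(mu):
    """Exterior penalty for  min x^2  s.t.  x >= 1:
    minimize  x^2 + mu * max(0, 1 - x)^2,  solvable in closed form.
    Toy illustration of the penalty reduction only."""
    # For the minimizer x < 1 the objective is x^2 + mu*(1 - x)^2;
    # setting the derivative 2x - 2*mu*(1 - x) to zero gives:
    return mu / (1.0 + mu)
```

As the penalty parameter mu grows, the unconstrained minimizer mu/(1+mu) approaches the true constrained solution x = 1, which is the behavior exterior-penalty schemes rely on.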
