Kernel functions play an important role in the complexity analysis of interior point methods for linear optimization. In this paper, we present a primal-dual interior point method for linear optimization based on a new kernel function with a trigonometric function in its barrier term. By simple analysis, we show that feasible primal-dual interior point methods based on the newly proposed kernel function enjoy an O(√n (log n)² log(n/ε)) worst-case complexity result, which improves the results obtained by El Ghami et al. (J Comput Appl Math 236:3613–3623, 2012) for kernel functions with trigonometric barrier terms.
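The kernel-function machinery behind such complexity results can be sketched briefly. The kernel below is an illustrative trigonometric kernel of the general shape studied by El Ghami et al., not the paper's new kernel; the barrier (proximity) function Ψ(v) = Σᵢ ψ(vᵢ) measures how far the scaled iterate v is from the μ-center v = e.

```python
import math

def psi(t):
    """Illustrative trigonometric kernel (a sketch, not the paper's exact
    kernel): a quadratic growth term plus a trigonometric barrier term
    that tends to +infinity as t -> 0+ and vanishes at t = 1."""
    assert t > 0.0
    growth = (t * t - 1.0) / 2.0                 # dominates as t -> infinity
    h = math.pi * (1.0 - t) / (2.0 + 4.0 * t)    # h(1) = 0, h(t) -> pi/2 as t -> 0+
    barrier = (6.0 / math.pi) * math.tan(h)      # blows up as t -> 0+
    return growth + barrier

def proximity(v):
    """Proximity measure Psi(v) = sum_i psi(v_i): zero exactly at the
    mu-center v = e and positive elsewhere."""
    return sum(psi(vi) for vi in v)
```

In a kernel-based interior point method, the search direction is obtained from the gradient of Ψ, and the worst-case iteration bound is derived from growth properties of ψ and its derivatives.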
The International Journal of Advanced Manufacturing Technology, 2014
ABSTRACT In this paper, we introduce a novel optimal location problem on a network, called the continuous defensive location problem (CDLP). In this problem, a decision maker locates different kinds of defensive facilities (with different capacities) on the vertices of the network in order to prevent her/his aggressors from reaching a strategic site, called the core, which is a vertex of the network. This problem is a generalization of the defensive location problem introduced by Uno and Katagiri (European J Oper Res 188:76-84, 2008). The CDLP is formulated as a bi-level programming problem in which the defender and the aggressor are the upper- and lower-level decision makers, respectively. The single-level problem derived from the equilibrium conditions is NP-hard, so finding a solution in large-scale settings is very challenging. In order to solve this problem, a hybrid tabu search algorithm is proposed based on a continuous tabu search (TS) method, the so-called directed tabu search (DTS), and the Levenberg-Marquardt (LM) method. This combination helps us escape from local minima by using a global algorithm, the DTS algorithm, and achieves a fast convergence rate by employing a local minimizer, the LM algorithm. We finally apply our hybrid scheme to several randomly generated CDLPs. Our numerical experiments show that the proposed algorithm performs well in terms of both CPU time and solution accuracy and is an effective and robust approach for solving these problems.
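The global-exploration/local-refinement pattern of such a hybrid can be sketched as follows. This is a minimal illustration, not the paper's DTS/LM algorithm: a plain tabu search over random neighbors stands in for DTS, and a crude coordinate-wise line search stands in for the Levenberg-Marquardt refinement.

```python
import random

def hybrid_tabu_search(f, x0, step=0.5, n_neighbors=8, tabu_len=10, iters=200, seed=0):
    """Minimal hybrid tabu search sketch: tabu-driven global exploration
    plus a simple local refinement of each accepted point."""
    rng = random.Random(seed)
    x, best = list(x0), list(x0)
    tabu = []                                    # recently visited (rounded) points
    for _ in range(iters):
        # global phase: random neighbors of the current point
        cands = [[xi + rng.uniform(-step, step) for xi in x] for _ in range(n_neighbors)]
        # forbid moves back to recently visited regions
        cands = [c for c in cands if tuple(round(ci, 1) for ci in c) not in tabu]
        if not cands:
            continue
        x = min(cands, key=f)                    # best admissible neighbor
        tabu.append(tuple(round(ci, 1) for ci in x))
        if len(tabu) > tabu_len:
            tabu.pop(0)                          # forget the oldest tabu entry
        # local phase: a few coordinate-wise improvement steps
        for _ in range(3):
            for i in range(len(x)):
                for d in (step / 4, -step / 4):
                    trial = x[:]
                    trial[i] += d
                    if f(trial) < f(x):
                        x = trial
        if f(x) < f(best):
            best = x[:]
    return best
```

The tabu list is what lets the global phase climb out of the basin of a local minimum, while the local phase supplies the fast final convergence that a pure tabu search lacks.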
Interior point methods are not only among the most effective methods in practice but also have polynomial-time complexity. Large-update interior point methods perform much better in practice than small-update methods, which have the best known theoretical complexity. In this paper, motivated by complexity results for linear optimization based on kernel functions, we extend a generic primal ...
ABSTRACT In the framework of large-scale optimization problems, the standard BFGS method is not affordable due to memory constraints. The so-called limited-memory BFGS (L-BFGS) method is an adaptation of the BFGS method to large-scale settings. However, the standard BFGS method, and therefore the standard L-BFGS method, uses only the gradient information of the objective function and neglects function values. In this paper, we propose a new regularized L-BFGS method for solving large-scale unconstrained optimization problems in which more of the available information from function and gradient values is employed to approximate the curvature of the objective function. The proposed method utilizes a class of modified quasi-Newton equations in order to achieve higher-order accuracy in approximating the second-order curvature of the objective function. Under some standard assumptions, we establish the global convergence of the new method. In order to provide an efficient method for finding global minima of a continuously differentiable function, a hybrid algorithm combining a genetic algorithm (GA) with the new regularized L-BFGS method is also proposed. This combination leads the iterates to a stationary point of the objective function with a higher chance of being a global minimum. Numerical results show the efficiency and robustness of the proposed regularized L-BFGS method and its hybridization with GA in practice.
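For context, the memory-saving core that any L-BFGS variant builds on is the standard two-loop recursion, sketched below. This is the classical update, not the paper's regularized variant (which additionally modifies the curvature pairs using function values).

```python
def lbfgs_direction(grad, s_list, y_list):
    """Standard L-BFGS two-loop recursion: returns H_k * grad, where H_k
    is the implicit inverse-Hessian approximation built from the stored
    curvature pairs s_i = x_{i+1} - x_i and y_i = g_{i+1} - g_i."""
    q = list(grad)
    alphas = []
    # first loop: newest pair to oldest
    for s, y in reversed(list(zip(s_list, y_list))):
        rho = 1.0 / sum(yi * si for yi, si in zip(y, s))
        alpha = rho * sum(si * qi for si, qi in zip(s, q))
        alphas.append((alpha, rho, s, y))
        q = [qi - alpha * yi for qi, yi in zip(q, y)]
    # initial scaling gamma = s'y / y'y from the newest pair
    s, y = s_list[-1], y_list[-1]
    gamma = sum(si * yi for si, yi in zip(s, y)) / sum(yi * yi for yi in y)
    r = [gamma * qi for qi in q]
    # second loop: oldest pair to newest
    for alpha, rho, s, y in reversed(alphas):
        beta = rho * sum(yi * ri for yi, ri in zip(y, r))
        r = [ri + (alpha - beta) * si for ri, si in zip(r, s)]
    return r  # the search direction is -r
```

Only the m most recent (s, y) pairs are stored, so the cost per iteration is O(mn) memory and time instead of the O(n²) of full BFGS; a regularized variant changes how the pairs are formed, not this recursion.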
ABSTRACT In this article, we deal with the inverse linear optimization problem under the ℓ2-norm, defined as follows: given a linear optimization problem LP(b), with right-hand side vector b, and a current operating plan x, we would like to perturb the right-hand side vector b to a vector b̄ so that x is an optimal solution of LP(b̄) and ‖b − b̄‖₂ is minimum. We prove that this inverse problem is equivalent to an optimization problem containing equilibrium constraints, which make it hard to solve. In order to solve this optimization problem, we employ the interior point approach behind LOQO and show that, at each iteration, the search direction is a descent direction for the ℓ2 merit function. Therefore, an ε-solution of the inverse problem can be computed using an extension of polynomial-time interior-point methods for linear and quadratic optimization problems. Finally, computational results of applying the proposed approach to some generated inverse problems are given.
B. Zahedi†, M. Ahmadian†, K. Mohamed-pour†, M. Peyghami‡, M. Norouzi†, and S. Salari†. †Faculty of Electrical and Computer Engineering, K. N. Toosi University of Technology, Tehran, Iran. Email: bzahedi@ee.kntu.ac.ir, {m ahmadian, kmpour}@kntu.ac.ir, majid.norouzi@ee ...
ABSTRACT In this paper, we present a new relaxed nonmonotone trust region method with adaptive radius for solving unconstrained optimization problems. The proposed method combines a relaxed nonmonotone technique with a modified version of the adaptive trust region strategy proposed by Shi and Guo (J Comput Appl Math 213:509-520, 2008). Under some suitable and standard assumptions, we establish the global convergence property as well as the superlinear convergence rate of the new method. Numerical results on some test problems show the efficiency and effectiveness of the proposed method in practice.
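The interplay of a nonmonotone acceptance test with an adaptive radius can be sketched as follows. This is an illustrative toy version, not the Shi-Guo rule or the paper's relaxed variant: the step is the Cauchy point of a unit-Hessian model, and a trial point is accepted if it improves on the maximum of the last few function values rather than on f(x_k) alone.

```python
def trust_region_nonmonotone(f, grad, x0, delta=1.0, iters=100, memory=5, tol=1e-8):
    """Sketch of a nonmonotone trust region method with adaptive radius."""
    x = list(x0)
    history = [f(x)]                          # recent f-values for the nonmonotone test
    for _ in range(iters):
        g = grad(x)
        gnorm = sum(gi * gi for gi in g) ** 0.5
        if gnorm < tol:
            break
        step = min(delta, gnorm)              # Cauchy step length for model Hessian B = I
        trial = [xi - step * gi / gnorm for xi, gi in zip(x, g)]
        f_ref = max(history)                  # nonmonotone reference value
        pred = step * gnorm - 0.5 * step * step   # predicted model decrease (B = I)
        rho = (f_ref - f(trial)) / pred       # relaxed actual/predicted ratio
        if rho > 0.1:                         # accept; enlarge radius on very good steps
            x = trial
            history.append(f(x))
            history = history[-memory:]
            if rho > 0.75:
                delta = min(2.0 * delta, 100.0)
        else:                                 # reject the step and shrink the radius
            delta *= 0.5
    return x
```

Measuring decrease against max(history) lets the method accept occasional uphill steps and escape the stagnation that a strictly monotone test can cause in narrow curved valleys, while the radius update adapts the step length to how well the model predicted the actual decrease.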
Papers by M. Reza Peyghami