Article

An Effective Global Optimization Algorithm for Quadratic Programs with Quadratic Constraints

School of Mathematical Sciences, Henan Institute of Science and Technology, Xinxiang 453003, China
* Author to whom correspondence should be addressed.
Symmetry 2019, 11(3), 424; https://doi.org/10.3390/sym11030424
Submission received: 13 January 2019 / Revised: 6 March 2019 / Accepted: 7 March 2019 / Published: 22 March 2019
(This article belongs to the Special Issue New Trends in Dynamics)

Abstract

This paper presents an effective algorithm for globally solving quadratic programs with quadratic constraints. In this algorithm, we propose a new linearization method for establishing the linear programming relaxation problem of quadratic programs with quadratic constraints. The proposed algorithm converges to the global optimal solution of the initial problem, and numerical experiments demonstrate the computational efficiency of the proposed algorithm.

1. Introduction

Quadratic programs with quadratic constraints (QPWQC) have attracted the attention of many researchers for several decades. On the one hand, this is because these classes of problems have broad applications in multistage shipping, path planning, finance, and portfolio optimization, among others [1,2,3,4,5,6,7,8,9,10,11]. On the other hand, it is because these classes of problems present important theoretical and computational difficulties; that is to say, they are known to generally possess multiple local optimal solutions that are not globally optimal.
In the last several decades, many algorithms have been developed for globally solving the QPWQC and its special cases, such as the branch-and-bound method [12,13], the approximation approach [14], the robust approach [15], the branch-reduce-bound algorithm [16,17,18,19], the geometric programming approach [20,21,22,23], and others. Besides the above approaches, some global optimization algorithms [24,25,26,27,28,29,30,31,32,33,34,35,36,37,38] for linear multiplicative programming problems and generalized linear fractional programming problems can also be used to solve the quadratic programs with quadratic constraints (QPWQC) considered in this paper. Although these algorithms can be employed to solve the QPWQC and its special cases, less work has been devoted to globally solving the QPWQC considered in this paper.
In this paper, first of all, by making use of the characteristics of single-variable quadratic functions, we construct a new linearization method for establishing the linear programming relaxation problem of the QPWQC. Next, we present a global optimization algorithm based on the branch-and-bound scheme for solving the QPWQC. Finally, the global convergence of the proposed algorithm is proved, and numerical experimental results demonstrate the high computational efficiency of the proposed algorithm.
The main features of the proposed algorithm are as follows. (1) A new linearization method is proposed for systematically converting the QPWQC into a sequence of linear programming relaxation problems; by subdividing the linear relaxation of the feasible region of the QPWQC and solving a series of these relaxation problems, their solutions can approximate the global optimal solution of the original QPWQC arbitrarily closely. (2) The constructed linear programming relaxation problems are embedded within a branch-and-bound framework and can be solved effectively by any efficient linear programming method. (3) Combining the proposed linear programming relaxation problem with the branch-and-bound framework, an effective algorithm is obtained for solving the QPWQC. (4) Compared with the existing algorithms [37,39,40,41,42,43,44,45,46,47], numerical results show that the proposed algorithm can globally solve the QPWQC with higher computational efficiency.
The remaining sections of this paper are organized as follows. Firstly, the aim of Section 2 is to propose a new linearization method for establishing the linear programming relaxation problem of the initial QPWQC. Secondly, based on the branch-and-bound scheme, Section 3 proposes a global optimization algorithm, and its global convergence is proved. Thirdly, compared with the existing methods, Section 4 describes some numerical examples to show the computational efficiency of the proposed algorithm. Finally, some conclusions are given.

2. New Linearization Method for Deriving Linear Programming Relaxation Problem

In this paper, the mathematical model of quadratic programs with quadratic constraints is given as follows:
$(\mathrm{QPWQC})\quad\begin{cases}\min\ \psi_0(x)=\sum\limits_{k=1}^{n}c_k^{0}x_k+\sum\limits_{j=1}^{n}\sum\limits_{k=1}^{n}d_{jk}^{0}x_jx_k\\[4pt] \ \text{s.t.}\ \ \psi_i(x)=\sum\limits_{k=1}^{n}c_k^{i}x_k+\sum\limits_{j=1}^{n}\sum\limits_{k=1}^{n}d_{jk}^{i}x_jx_k\le b_i,\quad i=1,2,\ldots,m,\\[4pt] \ \ \ \ \ \ \ x\in X^{0}=\{x\in\mathbb{R}^{n}:\ l^{0}\le x\le u^{0}\},\end{cases}$
where $d_{jk}^{i}$, $c_k^{i}$, and $b_i$ are all arbitrary real numbers, $l^{0}=(l_1^{0},\ldots,l_n^{0})^{T}>-\infty$, and $u^{0}=(u_1^{0},\ldots,u_n^{0})^{T}<+\infty$.
In this section, we construct a new linearization method for deriving the linear programming relaxation problem of the QPWQC, and the detailed construction process of the linearization method is described as follows.
For convenience, we assume without loss of generality that $X=\{(x_1,x_2,\ldots,x_n)^{T}\in\mathbb{R}^{n}:\ l_j\le x_j\le u_j,\ j=1,2,\ldots,n\}\subseteq X^{0}$.
Theorem 1.
For any $x\in X$ and $k\in\{1,2,\ldots,n\}$, consider the functions $x_k^{2}$, $u_k^{2}+2u_k(x_k-u_k)$, and $u_k^{2}+2l_k(x_k-u_k)$. Then the following conclusions hold:
(i) $u_k^{2}+2u_k(x_k-u_k)\le x_k^{2}\le u_k^{2}+2l_k(x_k-u_k)$;
(ii) $\lim_{u-l\to 0}\{x_k^{2}-[u_k^{2}+2u_k(x_k-u_k)]\}=0$;
(iii) $\lim_{u-l\to 0}\{u_k^{2}+2l_k(x_k-u_k)-x_k^{2}\}=0$.
Proof. 
(i) By the mean value theorem, there exists a point $\xi_k=\alpha l_k+(1-\alpha)u_k\in[l_k,u_k]$ with $\alpha\in[0,1]$ such that
$x_k^{2}=u_k^{2}+2\xi_k(x_k-u_k)$.
Since $l_k\le\xi_k\le u_k$ and $x_k-u_k\le 0$, it follows that
$u_k^{2}+2l_k(x_k-u_k)\ge u_k^{2}+2\xi_k(x_k-u_k)=x_k^{2}\ge u_k^{2}+2u_k(x_k-u_k)$.
(ii) From
$x_k^{2}-[u_k^{2}+2u_k(x_k-u_k)]=(x_k-u_k)^{2}\le(u_k-l_k)^{2}$,
it follows that
$\lim_{u-l\to 0}\{x_k^{2}-[u_k^{2}+2u_k(x_k-u_k)]\}=0$.
Similarly, from
$u_k^{2}+2l_k(x_k-u_k)-x_k^{2}=(x_k-u_k)(2l_k-u_k-x_k)\le 2(u_k-l_k)^{2}$,
we have
$\lim_{u-l\to 0}\{u_k^{2}+2l_k(x_k-u_k)-x_k^{2}\}=0$.
The proof is completed. □
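To make these conclusions concrete, the following short check (an illustrative Python sketch of ours, not the authors' C++ code) evaluates the two bounding lines of Theorem 1 on boxes of shrinking width and confirms that they bracket $x_k^{2}$ and that both gaps vanish as $u_k-l_k\to 0$.

```python
# Minimal numerical check of Theorem 1 (illustrative sketch, not the authors' code):
# on [l_k, u_k], the line u_k^2 + 2*u_k*(x_k - u_k) underestimates x_k^2, the line
# u_k^2 + 2*l_k*(x_k - u_k) overestimates it, and both gaps shrink as u_k - l_k -> 0.
import numpy as np

def square_bounds(x, l, u):
    lower = u**2 + 2.0 * u * (x - u)   # tangent line at u: underestimator of x^2
    upper = u**2 + 2.0 * l * (x - u)   # line through (u, u^2) with slope 2l: overestimator
    return lower, upper

for width in [1.0, 0.1, 0.01]:
    l, u = 2.0, 2.0 + width
    xs = np.linspace(l, u, 1001)
    lo, hi = square_bounds(xs, l, u)
    assert np.all(lo <= xs**2 + 1e-12) and np.all(xs**2 <= hi + 1e-12)
    print(width, float(np.max(xs**2 - lo)), float(np.max(hi - xs**2)))
```

The printed maximum gaps shrink quadratically with the box width, in line with the bounds $(u_k-l_k)^{2}$ and $2(u_k-l_k)^{2}$ used in the proof.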
From conclusion (i) of Theorem 1, applied to $x_j$ and to $x_j-x_k\in[l_j-u_k,\,u_j-l_k]$, it follows that
$u_j^{2}+2u_j(x_j-u_j)\le x_j^{2}\le u_j^{2}+2l_j(x_j-u_j)$,
$(x_j-x_k)^{2}\ge(u_j-l_k)^{2}+2(u_j-l_k)\big[x_j-x_k-(u_j-l_k)\big]$,
$(x_j-x_k)^{2}\le(u_j-l_k)^{2}+2(l_j-u_k)\big[x_j-x_k-(u_j-l_k)\big]$.
Similarly, from conclusions (ii) and (iii) of Theorem 1, it follows that
$\lim_{u-l\to 0}\big\{x_j^{2}-[u_j^{2}+2u_j(x_j-u_j)]\big\}=0$,
$\lim_{u-l\to 0}\big\{u_j^{2}+2l_j(x_j-u_j)-x_j^{2}\big\}=0$,
$\lim_{u-l\to 0}\big\{(u_j-l_k)^{2}+2(l_j-u_k)[x_j-x_k-(u_j-l_k)]-(x_j-x_k)^{2}\big\}=0$,
and
$\lim_{u-l\to 0}\big\{(x_j-x_k)^{2}-\big[(u_j-l_k)^{2}+2(u_j-l_k)[x_j-x_k-(u_j-l_k)]\big]\big\}=0$.
For any $x\in X$ and $j,k\in\{1,2,\ldots,n\}$ with $j\ne k$, we define
$\underline{\psi}_{jk}(x)=\frac{1}{2}\Big\{u_j^{2}+2u_j(x_j-u_j)+u_k^{2}+2u_k(x_k-u_k)-\big[(u_j-l_k)^{2}+2(l_j-u_k)\big(x_j-x_k-(u_j-l_k)\big)\big]\Big\}$
and
$\overline{\psi}_{jk}(x)=\frac{1}{2}\Big\{u_j^{2}+2l_j(x_j-u_j)+u_k^{2}+2l_k(x_k-u_k)-\big[(u_j-l_k)^{2}+2(u_j-l_k)\big(x_j-x_k-(u_j-l_k)\big)\big]\Big\}$.
Theorem 2.
For any $x\in X$ and $j,k\in\{1,2,\ldots,n\}$ with $j\ne k$, consider the functions $\underline{\psi}_{jk}(x)$, $x_jx_k$, and $\overline{\psi}_{jk}(x)$. Then the following conclusions hold:
(i) $\underline{\psi}_{jk}(x)\le x_jx_k=\frac{1}{2}\big[x_j^{2}+x_k^{2}-(x_j-x_k)^{2}\big]\le\overline{\psi}_{jk}(x)$;
(ii) $\lim_{u-l\to 0}\big[x_jx_k-\underline{\psi}_{jk}(x)\big]=0$;
(iii) $\lim_{u-l\to 0}\big[\overline{\psi}_{jk}(x)-x_jx_k\big]=0$.
Proof. 
(i) By the conclusions of Theorem 1 and the bounds derived above, it follows that
$\overline{\psi}_{jk}(x)\ge\frac{1}{2}\big[x_j^{2}+x_k^{2}-(x_j-x_k)^{2}\big]=x_jx_k\ge\underline{\psi}_{jk}(x)$,
since $\overline{\psi}_{jk}(x)$ combines the overestimators of $x_j^{2}$ and $x_k^{2}$ with the underestimator of $(x_j-x_k)^{2}$, while $\underline{\psi}_{jk}(x)$ combines the underestimators of $x_j^{2}$ and $x_k^{2}$ with the overestimator of $(x_j-x_k)^{2}$.
(ii) Combining the bound $x_j^{2}-[u_j^{2}+2u_j(x_j-u_j)]\le(u_j-l_j)^{2}$ (and its analogue for $x_k$) with the corresponding bound for $(x_j-x_k)^{2}$ on $[l_j-u_k,\,u_j-l_k]$, we have
$x_jx_k-\underline{\psi}_{jk}(x)=\frac{1}{2}\big[x_j^{2}+x_k^{2}-(x_j-x_k)^{2}\big]-\underline{\psi}_{jk}(x)\le\frac{1}{2}(u_j-l_j)^{2}+\frac{1}{2}(u_k-l_k)^{2}+(u_j+u_k-l_j-l_k)^{2}$.
Thus, we can get that $\lim_{u-l\to 0}\big[x_jx_k-\underline{\psi}_{jk}(x)\big]=0$. □
In the same way, we get that
$\overline{\psi}_{jk}(x)-x_jx_k=\overline{\psi}_{jk}(x)-\frac{1}{2}\big[x_j^{2}+x_k^{2}-(x_j-x_k)^{2}\big]\le(u_j-l_j)^{2}+(u_k-l_k)^{2}+\frac{1}{2}(u_j+u_k-l_j-l_k)^{2}$.
Thus, we can get that $\lim_{u-l\to 0}\big[\overline{\psi}_{jk}(x)-x_jx_k\big]=0$.
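The next snippet (again an illustrative sketch with hypothetical helper names, not the authors' code) checks numerically that $\underline{\psi}_{jk}(x)\le x_jx_k\le\overline{\psi}_{jk}(x)$ on a sample box, using the identity $x_jx_k=\frac{1}{2}[x_j^{2}+x_k^{2}-(x_j-x_k)^{2}]$ together with the single-variable bounds of Theorem 1 applied to $x_j$, $x_k$, and $x_j-x_k$.

```python
# Sanity check of the bilinear-term bounds psi_lower_jk <= x_j*x_k <= psi_upper_jk
# (illustrative sketch, not the authors' code).
import numpy as np

def bilinear_bounds(xj, xk, lj, uj, lk, uk):
    U = uj - lk                      # upper endpoint of x_j - x_k
    L = lj - uk                      # lower endpoint of x_j - x_k
    t = xj - xk
    lower = 0.5 * ((uj**2 + 2*uj*(xj - uj)) + (uk**2 + 2*uk*(xk - uk))
                   - (U**2 + 2*L*(t - U)))        # psi_lower_jk(x)
    upper = 0.5 * ((uj**2 + 2*lj*(xj - uj)) + (uk**2 + 2*lk*(xk - uk))
                   - (U**2 + 2*U*(t - U)))        # psi_upper_jk(x)
    return lower, upper

rng = np.random.default_rng(0)
lj, uj, lk, uk = 1.0, 3.0, -2.0, 2.0
xj = rng.uniform(lj, uj, 10000)
xk = rng.uniform(lk, uk, 10000)
lo, hi = bilinear_bounds(xj, xk, lj, uj, lk, uk)
assert np.all(lo <= xj * xk + 1e-9) and np.all(xj * xk <= hi + 1e-9)
print(float(np.max(xj * xk - lo)), float(np.max(hi - xj * xk)))
```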
For any $X=[l,u]\subseteq X^{0}$, any $x\in X$, and $i\in\{0,1,2,\ldots,m\}$, we let
$\underline{f}_{kk}^{\,i}(x)=\begin{cases}d_{kk}^{i}\,[u_k^{2}+2u_k(x_k-u_k)], & \text{if } d_{kk}^{i}>0,\\ d_{kk}^{i}\,[u_k^{2}+2l_k(x_k-u_k)], & \text{if } d_{kk}^{i}<0,\end{cases}$
$\underline{f}_{jk}^{\,i}(x)=\begin{cases}d_{jk}^{i}\,\underline{\psi}_{jk}(x), & \text{if } d_{jk}^{i}>0,\ j\ne k,\\ d_{jk}^{i}\,\overline{\psi}_{jk}(x), & \text{if } d_{jk}^{i}<0,\ j\ne k,\end{cases}$
$\psi_i^{L}(x)=\sum\limits_{k=1}^{n}\big(c_k^{i}x_k+\underline{f}_{kk}^{\,i}(x)\big)+\sum\limits_{j=1}^{n}\sum\limits_{k=1,k\ne j}^{n}\underline{f}_{jk}^{\,i}(x)$.
Theorem 3.
For any $x\in X=[l,u]\subseteq X^{0}$ and each $i=0,1,2,\ldots,m$, we have $\psi_i^{L}(x)\le\psi_i(x)$ and $\lim_{u-l\to 0}\big[\psi_i(x)-\psi_i^{L}(x)\big]=0$.
Proof. 
(i) From Theorems 1 and 2 and the definitions of $\underline{f}_{kk}^{\,i}(x)$ and $\underline{f}_{jk}^{\,i}(x)$, we have
$\underline{f}_{kk}^{\,i}(x)\le d_{kk}^{i}x_k^{2}\quad\text{and}\quad\underline{f}_{jk}^{\,i}(x)\le d_{jk}^{i}x_jx_k$.
Summing these inequalities over all terms of $\psi_i(x)$, it follows that $\psi_i^{L}(x)\le\psi_i(x)$.
(ii) We have
$\psi_i(x)-\psi_i^{L}(x)=\sum\limits_{k=1}^{n}\big(d_{kk}^{i}x_k^{2}-\underline{f}_{kk}^{\,i}(x)\big)+\sum\limits_{j=1}^{n}\sum\limits_{k=1,k\ne j}^{n}\big(d_{jk}^{i}x_jx_k-\underline{f}_{jk}^{\,i}(x)\big)=\sum\limits_{k=1,\,d_{kk}^{i}>0}^{n}d_{kk}^{i}\big\{x_k^{2}-[u_k^{2}+2u_k(x_k-u_k)]\big\}+\sum\limits_{k=1,\,d_{kk}^{i}<0}^{n}d_{kk}^{i}\big\{x_k^{2}-[u_k^{2}+2l_k(x_k-u_k)]\big\}+\sum\limits_{j=1}^{n}\sum\limits_{k=1,k\ne j,\,d_{jk}^{i}>0}^{n}d_{jk}^{i}\big[x_jx_k-\underline{\psi}_{jk}(x)\big]+\sum\limits_{j=1}^{n}\sum\limits_{k=1,k\ne j,\,d_{jk}^{i}<0}^{n}d_{jk}^{i}\big[x_jx_k-\overline{\psi}_{jk}(x)\big]$.
From the conclusions of Theorems 1 and 2, we get
$\lim_{u-l\to 0}\big\{x_k^{2}-[u_k^{2}+2u_k(x_k-u_k)]\big\}=0$,
$\lim_{u-l\to 0}\big\{[u_k^{2}+2l_k(x_k-u_k)]-x_k^{2}\big\}=0$,
$\lim_{u-l\to 0}\big[x_jx_k-\underline{\psi}_{jk}(x)\big]=0$,
and
$\lim_{u-l\to 0}\big[\overline{\psi}_{jk}(x)-x_jx_k\big]=0$.
Therefore, we have
$\lim_{u-l\to 0}\big[\psi_i(x)-\psi_i^{L}(x)\big]=0$.
The proof is completed. □
By Theorem 3, we can establish the linear programming relaxation problem (LPRP) of the QPWQC over X as follows:
$(\mathrm{LPRP})\quad\begin{cases}\min\ \psi_0^{L}(x)=\sum\limits_{k=1}^{n}\big(c_k^{0}x_k+\underline{f}_{kk}^{\,0}(x)\big)+\sum\limits_{j=1}^{n}\sum\limits_{k=1,k\ne j}^{n}\underline{f}_{jk}^{\,0}(x)\\[4pt] \ \text{s.t.}\ \ \psi_i^{L}(x)=\sum\limits_{k=1}^{n}\big(c_k^{i}x_k+\underline{f}_{kk}^{\,i}(x)\big)+\sum\limits_{j=1}^{n}\sum\limits_{k=1,k\ne j}^{n}\underline{f}_{jk}^{\,i}(x)\le b_i,\quad i=1,2,\ldots,m,\\[4pt] \ \ \ \ \ \ \ x\in X=\{x:\ l\le x\le u\}.\end{cases}$
From the construction of the above linearization method, it is clear that, for any given box $X$, each feasible point of the QPWQC over $X$ is also feasible to the LPRP, and the optimal value of the LPRP is less than or equal to that of the QPWQC over $X$. Therefore, the LPRP provides a valid lower bound for the optimal value of the QPWQC over $X$. Moreover, Theorem 3 ensures the global convergence of the proposed algorithm.
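To illustrate how the LPRP can be assembled and solved in practice, the sketch below builds the linear underestimators term by term and hands the resulting LP to an off-the-shelf solver (scipy's linprog). The helper names, the use of Python/scipy instead of the authors' C++/simplex implementation, and the choice of test instance (our reading of Example 5 in Section 4) are our own assumptions.

```python
# Sketch: build and solve the LPRP over a box [l, u] (not the authors' implementation).
import numpy as np
from scipy.optimize import linprog

def sq_lower(x, l, u):  return u*u + 2.0*u*(x - u)      # underestimator of x^2 on [l, u]
def sq_upper(x, l, u):  return u*u + 2.0*l*(x - u)      # overestimator of x^2 on [l, u]

def bil_lower(xj, xk, lj, uj, lk, uk):                  # psi_lower_jk(x) <= x_j*x_k
    U, L, t = uj - lk, lj - uk, xj - xk
    return 0.5*(sq_lower(xj, lj, uj) + sq_lower(xk, lk, uk) - (U*U + 2.0*L*(t - U)))

def bil_upper(xj, xk, lj, uj, lk, uk):                  # x_j*x_k <= psi_upper_jk(x)
    U, t = uj - lk, xj - xk
    return 0.5*(sq_upper(xj, lj, uj) + sq_upper(xk, lk, uk) - (U*U + 2.0*U*(t - U)))

def psi_L(x, c, D, l, u):
    """Linear underestimator of c.x + sum_{j,k} D[j,k]*x_j*x_k on the box [l, u]."""
    val = float(c @ x)
    n = len(x)
    for j in range(n):
        for k in range(n):
            d = D[j, k]
            if d == 0.0:
                continue
            if j == k:
                val += d * (sq_lower(x[k], l[k], u[k]) if d > 0 else sq_upper(x[k], l[k], u[k]))
            else:
                val += d * (bil_lower(x[j], x[k], l[j], u[j], l[k], u[k]) if d > 0
                            else bil_upper(x[j], x[k], l[j], u[j], l[k], u[k]))
    return val

def affine_coeffs(c, D, l, u):
    # psi_L is affine in x, so its coefficients can be read off by evaluation.
    n = len(c)
    b0 = psi_L(np.zeros(n), c, D, l, u)
    a = np.array([psi_L(np.eye(n)[i], c, D, l, u) - b0 for i in range(n)])
    return a, b0

# Instance (our reading of Example 5): min 6x1^2 + 4x2^2 + 5x1x2  s.t.  6x1x2 >= 48, 0 <= x <= 10.
c0, D0 = np.zeros(2), np.array([[6.0, 2.5], [2.5, 4.0]])      # objective (5x1x2 split symmetrically)
c1, D1 = np.zeros(2), np.array([[0.0, -3.0], [-3.0, 0.0]])    # constraint -6x1x2 <= -48

def lprp_bound(l, u):
    a0, b0 = affine_coeffs(c0, D0, l, u)
    a1, b1 = affine_coeffs(c1, D1, l, u)
    res = linprog(a0, A_ub=[a1], b_ub=[-48.0 - b1], bounds=list(zip(l, u)))
    return res.fun + b0 if res.success else float("inf")

print(lprp_bound(np.array([0.0, 0.0]), np.array([10.0, 10.0])))   # loose bound over the whole box
print(lprp_bound(np.array([2.0, 3.0]), np.array([3.0, 4.0])))     # much tighter over a small sub-box
```

Over the whole initial box the bound is valid but loose, while over a small sub-box containing the optimizer it is already close to the optimal value of about 118.38 reported in Table 1; this tightening as the box shrinks is exactly what Theorem 3 guarantees.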

3. New Global Optimization Algorithm

In this section, based on the former LPRP, we present a new global optimization algorithm for solving the QPWQC. In this algorithm, there are several key operations: branching, bounding, and space reduction.
Firstly, we choose a simple branching operation, known as interval bisection. For any selected box $X=[l,u]\subseteq X^{0}$, let $\delta\in\arg\max\{u_i-l_i:\ i=1,2,\ldots,n\}$ and subdivide $[l_\delta,u_\delta]$ into $[l_\delta,(l_\delta+u_\delta)/2]$ and $[(l_\delta+u_\delta)/2,u_\delta]$, so that $X$ is subdivided into two sub-boxes $X^{1}$ and $X^{2}$, as sketched below. This branching operation is sufficient to ensure the global convergence of the algorithm.
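A minimal sketch of this bisection rule (our own helper, not the paper's code) is:

```python
# Bisect a box along its longest edge (interval bisection).
import numpy as np

def bisect_box(l, u):
    l, u = np.asarray(l, dtype=float), np.asarray(u, dtype=float)
    d = int(np.argmax(u - l))              # index delta of the widest edge
    mid = 0.5 * (l[d] + u[d])
    u1, l2 = u.copy(), l.copy()
    u1[d], l2[d] = mid, mid                # X1 = [l, u1], X2 = [l2, u]
    return (l, u1), (l2, u)

print(bisect_box([0.0, 0.0], [10.0, 4.0]))   # splits along x1: [0,5]x[0,4] and [5,10]x[0,4]
```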
Secondly, for each investigated sub-box $X\subseteq X^{0}$, we must solve the LPRP over $X$ and set $LB_s=\min\{LB(X)\mid X\in\Omega_s\}$, where $\Omega_s$ denotes the set of sub-boxes that have not yet been fathomed. In order to update the upper bound, we collect feasible points: let $\Theta$ be the set of known feasible points and let $UB_s=\min\{\psi_0(x)\mid x\in\Theta\}$ be the current best upper bound.
In addition, we can introduce an interval reduction operation (Theorem 3 of Reference [6]) to improve the convergence speed of the proposed algorithm.

3.1. Steps for Global Optimization Algorithm

For any investigated box $X^{s}\subseteq X^{0}$, let $LB(X^{s})$ and $x^{s}=x(X^{s})$ be the optimal value and an optimal solution of the LPRP over $X^{s}$, respectively. Based on the branch-and-bound scheme and the former LPRP, the new global optimization algorithm is described as follows.
Algorithm Steps:
Step 1. Set $\varepsilon=10^{-6}$ and solve the LPRP over $X^{0}$ to obtain its optimal solution $x^{0}$ and optimal value $LB(X^{0})$.
Let the lower bound be $LB_0=LB(X^{0})$. If $x^{0}$ is feasible to the QPWQC, let the upper bound be $UB_0=\psi_0(x^{0})$; otherwise, let the initial upper bound be $UB_0=+\infty$.
If $UB_0-LB_0\le\varepsilon$, stop: $x^{0}$ is a global $\varepsilon$-optimal solution of the QPWQC; otherwise, let $\Omega_0=\{X^{0}\}$, $\Lambda=\emptyset$, and $s=1$.
Step 2. Let the upper bound be $UB_s=UB_{s-1}$. Partition $X^{s-1}$ into $X^{s,1}$ and $X^{s,2}$, and let $\Lambda=\Lambda\cup\{X^{s-1}\}$ be the set of deleted sub-boxes.
For each $X^{s,t}$, $t=1,2$, use the interval reduction method to compress the investigated box, and still denote the remaining box by $X^{s,t}$.
For each remaining box $X^{s,t}$, $t=1,2$, solve the LPRP over $X^{s,t}$ to obtain its optimal solution $x^{s,t}$ and optimal value $LB(X^{s,t})$.
Set $\Omega_s=\{X\mid X\in\Omega_{s-1}\cup\{X^{s,1},X^{s,2}\},\ X\notin\Lambda\}$ and $LB_s=\min\{LB(X)\mid X\in\Omega_s\}$.
Step 3. For each $X^{s,t}$, $t=1,2$, if the midpoint $x^{\mathrm{mid}}$ of $X^{s,t}$ is a feasible point of the QPWQC, let $\Theta:=\Theta\cup\{x^{\mathrm{mid}}\}$ and update the upper bound $UB_s=\min_{x\in\Theta}\psi_0(x)$; if $x^{s,t}$ is feasible to the QPWQC, let the new upper bound be $UB_s=\min\{UB_s,\psi_0(x^{s,t})\}$. Let the best known feasible point be $x^{s}$, which satisfies $UB_s=\psi_0(x^{s})$.
Step 4. If $UB_s-LB_s\le\varepsilon$, stop: $x^{s}$ is a global $\varepsilon$-optimal solution of the QPWQC; otherwise, set $s=s+1$ and return to Step 2.
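The following compact sketch puts these steps together on a toy one-dimensional instance, $\min\,-x^{2}+2x-1$ over $[-1,2]$. The toy problem, the helper names, and the simplified feasibility handling (only box constraints, so every point is feasible) are our own assumptions; it illustrates the branch-and-bound loop rather than reproducing the authors' implementation.

```python
# Toy branch-and-bound loop in the spirit of Section 3.1 (illustrative sketch only).
def psi(x):                      # true objective: -x^2 + 2x - 1, global minimum -4 at x = -1
    return -x*x + 2.0*x - 1.0

def lp_bound(l, u):
    # Since the quadratic coefficient is negative, -x^2 is underestimated by
    # -(u^2 + 2l(x - u)) (Theorem 1); the resulting affine relaxation is minimized
    # at an endpoint of [l, u].  Returns (lower bound, minimizer).
    vals = {x: -(u*u + 2.0*l*(x - u)) + 2.0*x - 1.0 for x in (l, u)}
    xstar = min(vals, key=vals.get)
    return vals[xstar], xstar

eps = 1e-6
boxes = [(-1.0, 2.0)]                                      # Omega_0: the initial box
UB, best_x = psi(0.5), 0.5                                 # initial feasible point (midpoint)
for it in range(1000):
    if not boxes:
        break
    LB, (l, u) = min((lp_bound(bl, bu)[0], (bl, bu)) for (bl, bu) in boxes)
    if UB - LB <= eps:                                     # Step 4: termination test
        break
    boxes.remove((l, u))                                   # Step 2: partition the best box
    mid = 0.5 * (l + u)
    for sub in ((l, mid), (mid, u)):
        bound, xstar = lp_bound(*sub)
        for cand in (xstar, 0.5 * (sub[0] + sub[1])):      # Step 3: update the upper bound
            if psi(cand) < UB:
                UB, best_x = psi(cand), cand
        if bound <= UB - eps:                              # keep only boxes that can improve
            boxes.append(sub)
print(best_x, UB)    # converges to x = -1 with optimal value -4
```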

3.2. Global Convergence of the Proposed Algorithm

If the proposed algorithm terminates after finitely many iterations, then, upon termination, we obtain a global optimal solution of the QPWQC. Otherwise, the proposed algorithm generates an infinite sequence of iterates whose limit point is a global optimal solution of the QPWQC; the detailed proof is given as follows.
Theorem 4.
If the proposed algorithm does not terminate after finitely many iterations, then it generates an infinite sequence of boxes $\{X^{s}\}$, any accumulation point of which is a global optimal solution of the QPWQC.
Proof. 
First of all, in the proposed algorithm, the selected branching method is rectangle bisection, which is exhaustive; this guarantees that the widths of all variable intervals converge to 0.
Secondly, as $u-l\to 0$, it follows from Theorem 3 that the LPRP approximates the QPWQC arbitrarily closely, that is, $\lim_{s\to\infty}(UB_s-LB_s)=0$; in other words, the bounding operation is consistent. Thirdly, in the proposed algorithm, the sub-box attaining the current lower bound is immediately selected for further partition, so the selection operation is bound improving. By Theorem 4.3 of Reference [39], the proposed branch-and-bound algorithm satisfies the sufficient condition for global convergence. Hence, the proposed algorithm converges to the global optimal solution of the QPWQC. □

4. Numerical Experiments

Let $\varepsilon=10^{-6}$ be the convergence tolerance. Some numerical examples from the recent literature are solved by a C++ program on a microcomputer, and the simplex method is employed to solve the LPRP. These numerical examples are given as follows, and their computational results, compared with those of existing algorithms, are listed in Table 1 and Table 2.
Example 1 (Reference [40])
$\begin{cases}\min\ \psi_0(x)=x_1\\ \ \text{s.t.}\ \ \psi_1(x)=\frac{1}{4}x_1+\frac{1}{2}x_2-\frac{1}{16}x_1^{2}-\frac{1}{16}x_2^{2}\le 1,\\ \ \ \ \ \ \ \ \psi_2(x)=-\frac{3}{7}x_1-\frac{3}{7}x_2+\frac{1}{14}x_1^{2}+\frac{1}{14}x_2^{2}\le -1,\\ \ \ \ \ \ \ \ 1\le x_1\le 5.5,\quad 1\le x_2\le 5.5.\end{cases}$
Example 2 (Reference [40])
$\begin{cases}\min\ \psi_0(x)=x_1x_2-2x_1+x_2+1\\ \ \text{s.t.}\ \ \psi_1(x)=6x_1-16x_2+8x_2^{2}\le 11,\\ \ \ \ \ \ \ \ \psi_2(x)=3x_1+2x_2-x_2^{2}\le 7,\\ \ \ \ \ \ \ \ 1\le x_1\le 2.5,\quad 1\le x_2\le 2.225.\end{cases}$
Example 3 (References [37,41,42])
$\begin{cases}\min\ \psi_0(x)=x_1^{2}+x_2^{2}\\ \ \text{s.t.}\ \ \psi_1(x)=0.3x_1x_2\ge 1,\\ \ \ \ \ \ \ \ 2\le x_1\le 5,\quad 1\le x_2\le 3.\end{cases}$
Example 4 (References [41,42,43,44])
$\begin{cases}\min\ \psi_0(x)=x_1\\ \ \text{s.t.}\ \ \psi_1(x)=4x_2-4x_1^{2}\le 1,\\ \ \ \ \ \ \ \ \psi_2(x)=-x_1-x_2\le -1,\\ \ \ \ \ \ \ \ 0.01\le x_1,x_2\le 15.\end{cases}$
Example 5 (Reference [45])
$\begin{cases}\min\ \psi_0(x)=6x_1^{2}+4x_2^{2}+5x_1x_2\\ \ \text{s.t.}\ \ \psi_1(x)=6x_1x_2\ge 48,\\ \ \ \ \ \ \ \ 0\le x_1,x_2\le 10.\end{cases}$
Example 6 (Reference [46])
$\begin{cases}\min\ \psi_0(x)=-x_1+x_1x_2^{0.5}-x_2\\ \ \text{s.t.}\ \ \psi_1(x)=-6x_1+8x_2\le 3,\\ \ \ \ \ \ \ \ \psi_2(x)=3x_1-x_2\le 3,\\ \ \ \ \ \ \ \ 1\le x_1,x_2\le 1.5.\end{cases}$
Example 7 (References [26,43])
$\begin{cases}\min\ \psi_0(x)=-4x_2+(x_1-1)^{2}+x_2^{2}-10x_3^{2}\\ \ \text{s.t.}\ \ \psi_1(x)=x_1^{2}+x_2^{2}+x_3^{2}\le 2,\\ \ \ \ \ \ \ \ \psi_2(x)=(x_1-2)^{2}+x_2^{2}+x_3^{2}\le 2,\\ \ \ \ \ \ \ \ -2\le x_1\le 2,\quad 0\le x_2,x_3\le 2.\end{cases}$
Compared with the existing algorithms, the numerical results show that the proposed algorithm has higher computational efficiency.
To demonstrate the robustness of the proposed algorithm, we also solve a set of large-scale randomly generated instances, described as follows.
Example 8. (Reference [47])
$\begin{cases}\min\ \psi_0(x)=\sum\limits_{k=1}^{n}c_k^{0}x_k+\sum\limits_{j=1}^{n}\sum\limits_{k=1}^{n}d_{jk}^{0}x_jx_k\\[4pt] \ \text{s.t.}\ \ \psi_i(x)=\sum\limits_{k=1}^{n}c_k^{i}x_k+\sum\limits_{j=1}^{n}\sum\limits_{k=1}^{n}d_{jk}^{i}x_jx_k\le b_i,\quad i=1,2,\ldots,m,\\[4pt] \ \ \ \ \ \ \ x\in X^{0}=\{x\in\mathbb{R}^{n}:\ l^{0}\le x\le u^{0}\},\end{cases}$
where $c_k^{0}$, $k=1,2,\ldots,n$, are randomly generated in $[0,1]$; $d_{jk}^{0}$, $j,k=1,2,\ldots,n$, are randomly generated in $[0,1]$; $c_k^{i}$, $i=1,\ldots,m$, $k=1,2,\ldots,n$, are randomly generated in $[-1,0]$; $d_{jk}^{i}$, $j,k=1,2,\ldots,n$, are randomly generated in $[-1,0]$; and $b_i$, $i=1,2,\ldots,m$, are randomly generated in $[-300,-90]$. In Example 8, $n$ denotes the number of variables and $m$ denotes the number of constraints. Numerical results for Example 8 are given in Table 2.
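A sketch of how such random instances can be generated under the stated distributions is given below; the seed, the helper name, and the omission of the box bounds $l^{0},u^{0}$ (which are not restated above) are our own choices, so this is an illustration rather than the authors' generator.

```python
# Generate a random QPWQC instance with the coefficient ranges stated for Example 8.
import numpy as np

def random_qpwqc(n, m, seed=0):
    rng = np.random.default_rng(seed)
    c0 = rng.uniform(0.0, 1.0, size=n)              # objective linear coefficients in [0, 1]
    D0 = rng.uniform(0.0, 1.0, size=(n, n))         # objective quadratic coefficients in [0, 1]
    c = rng.uniform(-1.0, 0.0, size=(m, n))         # constraint linear coefficients in [-1, 0]
    D = rng.uniform(-1.0, 0.0, size=(m, n, n))      # constraint quadratic coefficients in [-1, 0]
    b = rng.uniform(-300.0, -90.0, size=m)          # right-hand sides in [-300, -90]
    return c0, D0, c, D, b

c0, D0, c, D, b = random_qpwqc(n=4, m=6)            # the smallest (n, m) pair from Table 2
print(c0.shape, D0.shape, c.shape, D.shape, b.shape)
```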

5. Concluding Remarks

This paper presents an effective algorithm for globally solving quadratic programs with quadratic constraints. In this algorithm, a new linearization method is constructed for deriving the linear programming relaxation problem of the QPWQC. The proposed algorithm converges to the global optimal solution of the initial QPWQC, and the numerical experimental results demonstrate its high computational efficiency.

Author Contributions

D.S., J.Y. and C.B. conceived and worked together to achieve this work.

Funding

This research was funded by the Science and Technology Project of Henan Province (192102210114, 182102310941), the Key Scientific Research Project of Universities of Henan Province (18A110019, 17A110021, 16A110013, 16A110014).

Acknowledgments

The authors would like to express their sincere thanks to the responsible editor and the anonymous referees for their valuable comments and suggestions, which have greatly improved the earlier version of our paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Zhu, D.; Huang, H.; Yang, S.X. Dynamic task assignment and path planning of multi-auv system based on an improved self-organizing map and velocity synthesis method in three-dimensional underwater workspace. IEEE Trans. Cybern. 2012, 43, 504–514. [Google Scholar]
  2. Jiao, H.; Liu, S. Range division and compression algorithm for quadratically constrained sum of quadratic ratios. Comput. Appl. Math. 2017, 36, 225–247. [Google Scholar] [CrossRef]
  3. Cao, X.; Yu, A.L. Multi-auv cooperative target search algorithm in 3-D underwater workspace. J. Navig. 2017, 53, 1–19. [Google Scholar] [CrossRef]
  4. Cao, X.; Zhu, D.Q. Multi-AUV task assignment and path planning with ocean current based on biological inspired self-organizing map and velocity synthesis algorithm. Intell. Autom. Soft Comput. 2017, 23, 31–39. [Google Scholar] [CrossRef]
  5. Wieczorek, Ł.; Ignaciuk, P. Continuous Genetic Algorithms as Intelligent Assistance for Resource Distribution in Logistic Systems. Data 2018, 3, 68. [Google Scholar] [CrossRef]
  6. Salami, M.; Movahedi Sobhani, F.; Ghazizadeh, M. Short-Term Forecasting of Electricity Supply and Demand by Using the Wavelet-PSO-NNs-SO Technique for Searching in Big Data of Iran’s Electricity Market. Data 2018, 3, 43. [Google Scholar] [CrossRef]
  7. Faris, H. A Hybrid Swarm Intelligent Neural Network Model for Customer Churn Prediction and Identifying the Influencing Factors. Information 2018, 9, 288. [Google Scholar] [CrossRef]
  8. Stojčić, M.; Pamučar, D.; Mahmutagić, E.; Stević, Ž. Development of an ANFIS Model for the Optimization of a Queuing System in Warehouses. Information 2018, 9, 240. [Google Scholar] [CrossRef]
  9. Lee, P.; Kang, S. An Interactive Multiobjective Optimization Approach to Supplier Selection and Order Allocation Problems Using the Concept of Desirability. Information 2018, 9, 130. [Google Scholar] [CrossRef]
  10. Jain, S.; Bisht, D.C.S.; Mathpal, P.C. Particle swarm optimised fuzzy method for prediction of water table elevation fluctuation. Int. J. Data Anal. Tech. Strateg. 2018, 10, 99–110. [Google Scholar] [CrossRef]
  11. Sun, H.; Tian, Y. Using improved genetic algorithm under uncertain circumstance of site selection of O2O customer returns. Int. J. Data Anal. Tech. Strateg. 2018, 10, 241–256. [Google Scholar] [CrossRef]
  12. Shen, P.; Zhang, T.; Wang, C. Solving a class of generalized fractional programming problems using the feasibility of linear programs. J. Inequal. Appl. 2017, 147. [Google Scholar] [CrossRef]
  13. Jiao, H.; Liu, S. An efficient algorithm for quadratic sum-of-ratios fractional programs problem. Numer. Funct. Anal. Optim. 2017, 38, 1426–1445. [Google Scholar] [CrossRef]
  14. Fu, M.; Luo, Z.Q.; Ye, Y. Approximation algorithms for quadratic programming. J. Comb. Optim. 1998, 2, 29–50. [Google Scholar] [CrossRef]
  15. Shen, P.; Wang, C. Linear decomposition approach for a class of nonconvex programming problems. J. Inequal. Appl. 2017, 74. [Google Scholar] [CrossRef]
  16. Hou, Z.; Jiao, H.; Cai, L.; Bai, C. Branch-delete-bound algorithm for globally solving quadratically constrained quadratic programs. Open Math. 2017, 15, 1212–1224. [Google Scholar] [CrossRef]
  17. Jiao, H.; Chen, Y.Q.; Cheng, W.X. A Novel Optimization Method for Nonconvex Quadratically Constrained Quadratic Programs. Abstr. Appl. Anal. 2014, 2014, 698489. [Google Scholar] [CrossRef]
  18. Zhao, Y.; Liu, S. Global optimization algorithm for mixed integer quadratically constrained quadratic program. J. Comput. Appl. Math. 2017, 319, 159–169. [Google Scholar] [CrossRef]
  19. Jiao, H.; Chen, R. A parametric linearizing approach for quadratically inequality constrained quadratic programs. Open Math. 2018. [Google Scholar] [CrossRef]
  20. Shen, P.; Huang, B.; Wang, L. Range division and linearization algorithm for a class of linear ratios optimization problems. J. Comput. Appl. Math. 2019, 350, 324–342. [Google Scholar] [CrossRef]
  21. Shen, P.P.; Li, X.A.; Jiao, H.W. Accelerating method of global optimization for signomial geometric programming. J. Comput. Appl. Math. 2008, 214, 66–77. [Google Scholar] [CrossRef]
  22. Shen, P.; Huang, B. Global algorithm for solving linear multiplicative programming problems. Optim. Lett. 2019. [Google Scholar] [CrossRef]
  23. Jiao, H.; Wang, Z.; Chen, Y. Global optimization algorithm for sum of generalized polynomial ratios problem. Appl. Math. Model. 2013, 37, 187–197. [Google Scholar] [CrossRef]
  24. Chen, Y.; Jiao, H. A nonisolated optimal solution of general linear multiplicative programming problems. Comput. Oper. Res. 2009, 36, 2573–2579. [Google Scholar] [CrossRef]
  25. Shen, P.-P.; Lu, T. Regional division and reduction algorithm for minimizing the sum of linear fractional functions. J. Inequal. Appl. 2018, 63. [Google Scholar] [CrossRef]
  26. Shen, P.; Li, X. Branch reduction bound algorithm for generalized geometric programming. J. Glob. Optim. 2013, 56, 1123–1142. [Google Scholar] [CrossRef]
  27. Jiao, H.; Chen, Y.; Yin, J. Optimality condition and iterative thresholding algorithm for l(p)-regularization problems. SpringerPlus 2016, 5, 1873. [Google Scholar] [CrossRef]
  28. Shen, P.; Yang, L.; Liang, Y. Range division and contraction algorithm for a class of global optimization problems. J. Glob. Optim. 2014, 242, 116–126. [Google Scholar] [CrossRef]
  29. Zhao, Y.; Liu, S. An efficient method for generalized linear multiplicative programming problem with multiplicative constraints. SpringerPlus 2016, 5, 1302. [Google Scholar] [CrossRef]
  30. Jiao, H.W.; Liu, S.Y.; Zhao, Y.F. Effective algorithm for solving the generalized linear multiplicative problem with generalized polynomial constraints. Appl. Math. Model. 2015, 39, 7568–7582. [Google Scholar] [CrossRef]
  31. Shen, P.P.; Bai, X.D. Global optimization for generalized geometric programming problems with discrete variables. Optimization 2013, 62, 895–917. [Google Scholar] [CrossRef]
  32. Jiao, H. A branch and bound algorithm for globally solving a class of nonconvex programming problems. Nonlinear Anal. Theory Methods Appl. 2009, 70, 1113–1123. [Google Scholar] [CrossRef]
  33. Jiao, H.; Liu, S. A new linearization technique for minimax linear fractional programming. Int. J. Comput. Math. 2014, 91, 1730–1743. [Google Scholar] [CrossRef]
  34. Jiao, H.; Guo, Y.; Shen, P. Global optimization of generalized linear fractional programming with nonlinear constraints. Appl. Math. Comput. 2006, 183, 717–728. [Google Scholar] [CrossRef]
  35. Jiao, H.; Liu, S.; Yin, J.; Zhao, Y. Outcome space range reduction method for global optimization of sum of affine ratios problem. Open Math. 2016, 14, 736–746. [Google Scholar] [CrossRef]
  36. Jiao, H.W.; Liu, S.Y. A practicable branch and bound algorithm for sum of linear ratios problem. Eur. J. Oper. Res. 2015, 243, 723–730. [Google Scholar] [CrossRef]
  37. Jiao, H.; Liu, S.; Lu, N. A parametric linear relaxation algorithm for globally solving nonconvex quadratic programming. Appl. Math. Comput. 2015, 250, 973–985. [Google Scholar] [CrossRef]
  38. Shen, P.; Jiao, H. Linearization method for a class of multiplicative programming with exponent. Appl. Math. Comput. 2006, 183, 328–336. [Google Scholar] [CrossRef]
  39. Horst, R.; Tuy, H. Global Optimization: Deterministic Approaches, 2nd ed.; Springer: Berlin, Germany, 1993. [Google Scholar]
  40. Shen, P.; Jiao, H. A new rectangle branch-and-pruning approach for generalized geometric programming. Appl. Math. Comput. 2006, 183, 1027–1038. [Google Scholar]
  41. Shen, P.; Liu, L. A global optimization approach for quadratic programs with nonconvex quadratic constraints. Chin. J. Eng. Math. 2008, 25, 923–926. [Google Scholar]
  42. Wang, Y.; Liang, Z. A deterministic global optimization algorithm for generalized geometric programming. Appl. Math. Comput. 2005, 168, 722–737. [Google Scholar] [CrossRef]
  43. Jiao, H.; Chen, Y. A global optimization algorithm for generalized quadratic programming. J. Appl. Math. 2013, 2013, 215312. [Google Scholar] [CrossRef]
  44. Wang, Y.J.; Zhang, K.C.; Gao, Y.L. Global optimization of generalized geometric programming. Comput. Math. Appl. 2004, 48, 1505–1516. [Google Scholar] [CrossRef]
  45. Gao, Y.; Shang, Y.; Zhang, L. A branch and reduce approach for solving nonconvex quadratic programming problems with quadratic constraints. OR Trans. 2005, 9, 9–20. [Google Scholar]
  46. Shen, P. Linearization method of global optimization for generalized geometric programming. Appl. Math. Comput. 2005, 162, 353–370. [Google Scholar] [CrossRef]
  47. Qu, S.-J.; Ji, Y.; Zhang, K.-C. A deterministic global optimization algorithm based on a linearizing method for nonconvex quadratically constrained programs. Math. Comput. Model. 2008, 48, 1737–1743. [Google Scholar] [CrossRef]
Table 1. Numerical comparisons for Examples 1–7.

| Example | Refs. | Optimal Value | Optimal Solution | Iteration | Time (s) |
|---|---|---|---|---|---|
| 1 | ours | 1.177124990 | (1.177124344, 2.177124344) | 22 | 0.0091 |
| 1 | [40] | 1.177124327 | (1.177124327, 2.177124353) | 434 | 1.0000 |
| 2 | ours | −0.999999202 | (2.000000, 1.000000) | 22 | 0.0085 |
| 2 | [40] | −1.0 | (2.000000, 1.000000) | 24 | 0.0129 |
| 3 | ours | 6.777809491 | (2.000000000, 1.666676181) | 13 | 0.0038 |
| 3 | [37] | 6.777778340 | (2.000000000, 1.666666667) | 30 | 0.0068 |
| 3 | [41] | 6.777782016 | (2.000000000, 1.666666667) | 40 | 0.0320 |
| 3 | [42] | 6.7780 | (2.00003, 1.66665) | 44 | 0.1800 |
| 4 | ours | 0.500000600 | (0.500000000, 0.500000000) | 26 | 0.0061 |
| 4 | [41] | 0.500004627 | (0.5, 0.5) | 34 | 0.0560 |
| 4 | [42] | 0.5 | (0.5, 0.5) | 91 | 0.8500 |
| 4 | [43] | 0.500000442 | (0.500000000, 0.500000000) | 37 | 0.0193 |
| 4 | [44] | 0.5 | (0.5, 0.5) | 96 | 1.0000 |
| 5 | ours | 118.381493268 | (2.564162744, 3.119857633) | 70 | 0.0435 |
| 5 | [45] | 118.383756475 | (2.5557793695, 3.1301646393) | 210 | 0.7800 |
| 6 | ours | −1.162882315 | (1.499977112, 1.5) | 37 | 0.0412 |
| 6 | [46] | −1.16288 | (1.5, 1.5) | 84 | 0.1257 |
| 7 | ours | −11.363635682 | (1.0, 0.181818133, 0.983332175) | 229 | 0.3919 |
| 7 | [43] | −11.363636364 | (1.0, 0.181818470, 0.983332113) | 420 | 0.2845 |
| 7 | [26] | −10.35 | (0.998712, 0.196213, 0.979216) | 1648 | 0.3438 |
Table 2. Computational results for Example 8.

| (n, m) | Algorithm of [47]: Time (s) | This Paper: Time (s) |
|---|---|---|
| (4, 6) | 2.37678 | 1.9894 |
| (5, 11) | 6.39897 | 4.9867 |
| (14, 6) | 9.22732 | 6.4567 |
| (18, 7) | 15.8410 | 11.6856 |
| (20, 5) | 11.9538 | 8.9802 |
| (35, 10) | 74.8853 | 56.7866 |
| (37, 9) | 77.1476 | 45.6324 |
| (45, 8) | 86.7174 | 65.6845 |
| (46, 5) | 44.2502 | 32.2150 |
| (60, 11) | 315.659 | 216.534 |
