Smoothing Methods for Nonlinear Complementarity Problems


Abstract

In this paper, we present some new smoothing techniques for solving general nonlinear complementarity problems. We prove convergence of our methods under a condition on the original problems that is weaker than monotonicity. We also present an error estimate under a general monotonicity condition. Some numerical tests confirm the efficiency of the proposed methods.


References

  1. Ferris, M.C., Mangasarian, O.L., Pang, J.-S. (eds.): Complementarity: Applications, Algorithms and Extensions. Papers from the International Conference on Complementarity (ICCP99). Applied Optimization, vol. 50. Kluwer Academic, Dordrecht (2001)

  2. Ferris, M.C., Pang, J.-S.: Engineering and economic applications of complementarity problems. SIAM Rev. 39(4), 669–713 (1997)

  3. Cottle, R.W., Pang, J.-S., Stone, R.E.: The Linear Complementarity Problem. Classics in Applied Mathematics, vol. 60. SIAM, Philadelphia (2009)

  4. Seetharama Gowda, M., Tawhid, M.A.: Existence and limiting behavior of trajectories associated with P_0-equations. Computational optimization—a tribute to Olvi Mangasarian, Part I. Comput. Optim. Appl. 12(1–3), 229–251 (1999)

  5. Crouzeix, J.-P.: Pseudomonotone variational inequality problems: existence of solutions. Math. Program. 78, 305–314 (1997)

  6. Auslender, A., Cominetti, R., Haddou, M.: Asymptotic analysis for penalty and barrier methods in convex and linear programming. Math. Oper. Res. 22(1), 43–62 (1997)

  7. Haddou, M.: A new class of smoothing methods for mathematical programs with equilibrium constraints. Pac. J. Optim. 5(1), 86–96 (2009)

  8. Ben-Tal, A., Teboulle, M.: A smoothing technique for nondifferentiable optimization problems. In: Dolecki (ed.) Optimization. Lecture Notes in Mathematics, vol. 1405, pp. 1–11. Springer, New York (1989)

  9. Huang, C., Wang, S.: A power penalty approach to a nonlinear complementarity problem. Oper. Res. Lett. 38(1), 72–76 (2010)

  10. Li, D.-H., Zeng, J.-P.: A penalty technique for nonlinear problems. Appl. Comput. Math. 16(1), 40–50 (1998)

  11. Ding, J., Yin, H.: A new homotopy method for nonlinear complementarity problems. Numer. Math. J. Chin. Univ. (Engl. Ser.) 16(2), 155–163 (2007)

  12. Kojima, M., Shindo, S.: Extensions of Newton and quasi-Newton methods to systems of PC^1 equations. J. Oper. Res. Soc. Jpn. 29, 352–374 (1986)

  13. http://dm.unife.it/pn2o/software/Extragradient/test_problems.html

  14. Harker, P.T.: Accelerating the convergence of the diagonalization and projection algorithms for finite-dimensional variational inequalities. Math. Program. 48, 29–59 (1990)

  15. Facchinei, F., Pang, J.-S.: Finite-Dimensional Variational Inequalities and Complementarity Problems, vols. I, II. Springer Series in Operations Research. Springer, New York (2003)


Acknowledgements

The authors would like to thank the anonymous referees and the editors for their kind and helpful remarks and comments. The first author is partially supported by the ANR (Agence Nationale de la Recherche) through the HJnet project ANR-12-BS01-0008-01.

Author information

Correspondence to Mounir Haddou.

Additional information

Communicated by Jean-Pierre Crouzeix.

Appendix

We give in this appendix a brief description of each test example and report some numerical results obtained with the following projection method (see [15, Sect. 12.1]):

$$x^{k+1}= \max \bigl(0, x^k -D^{-1}F \bigl(x^k \bigr) \bigr), \quad k=0,1,\ldots. $$

We choose D=λI, where λ>0 is a constant and I is the n×n identity matrix. Table 2 presents the best results obtained when varying the value of λ (λ=0.1, 1, 10, 20, 50, 100).
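The iteration is straightforward to implement. Below is a minimal NumPy sketch of this projection method with D=λI; the function name, the stopping test on successive iterates, and the iteration cap are our own illustrative choices, not taken from the paper.

```python
import numpy as np

def projection_method(F, x0, lam=1.0, tol=1e-8, max_iter=100000):
    """Projection iteration x^{k+1} = max(0, x^k - D^{-1} F(x^k)) with D = lam * I."""
    x = np.asarray(x0, dtype=float)
    for k in range(max_iter):
        x_new = np.maximum(0.0, x - F(x) / lam)
        # Stop when successive iterates stagnate (illustrative stopping rule).
        if np.linalg.norm(x_new - x) <= tol:
            return x_new, k + 1
        x = x_new
    return x, max_iter
```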

  • The first two examples P1 and P2 [9] correspond to the strongly monotone function \(F(x)=(F_1(x),\ldots,F_n(x))^T\) with \(F_i(x)=-x_{i+1}+2x_{i}-x_{i-1}+\frac{1}{3}x_{i}^{3}-b_{i}\), \(i=1,\ldots,n\) (with \(x_0=x_{n+1}=0\)), where \(b_i=(-1)^i\) for P1 and \(b_i=\frac{(-1)^i}{\sqrt{i}}\) for P2 (a small code sketch of this construction is given after this list).

  • P3 is another strongly monotone test problem from [10], where \(F(x)=(F_1(x),\ldots,F_n(x))^T\) with \(F_i(x)=-x_{i+1}+2x_{i}-x_{i-1}+\arctan(x_{i})+(i-\frac{n}{2})\), \(i=1,\ldots,n\) (with \(x_0=x_{n+1}=0\)).

  • P4 and P5 are known as the degenerate and the non-degenerate examples of Kojima-Shindo [12]. P4 and P5 are, respectively, defined by

    $$\begin{aligned} &F_4(x)=\left ( \begin{array}{c} 3x_1^2+2x_1x_2+2x_2^2+x_3+3x_4-6\\ 2x_1^2+x_1+x_2^2+10x_3+2x_4-2\\ 3x_1^2+x_1x_2+2x_2^2+2x_3+9x_4-9\\ x_1^2+3x_2^2+2x_3+3x_4-3 \end{array} \right ), \\ &F_5(x)=\left ( \begin{array}{c} 3x_1^2+2x_1x_2+2x_2^2+x_3+3x_4-6\\ 2x_1^2+x_1+x_2^2+10x_3+2x_4-2\\ 3x_1^2+x_1x_2+2x_2^2+2x_3+3x_4-1\\ x_1^2+3x_2^2+2x_3+3x_4-3 \end{array} \right ). \end{aligned}$$
  • P5 has a unique solution \(x^{*}=(\frac{\sqrt{6}}{2},0,0,\frac{1}{2})\) with \(F(x^{*})=(0,2+\frac{\sqrt{6}}{2},5,0)\), while P4 has two solutions: \(x^{*}=(\frac{\sqrt{6}}{2},0,0,\frac{1}{2})\) with \(F(x^{*})=(0,2+\frac{\sqrt{6}}{2},0,0)\) and \(x^{**}=(1,0,3,0)\) with \(F(x^{**})=(0,31,0,4)\). The first solution of P4 is degenerate since \(x^{*}_{3}=F_{3}(x^{*})=0\).

  • A complete description of P6 and P7 can be found in [13, 14]. These two examples correspond to the Nash–Cournot test problem with N=5 and N=10.

    Let \(x \in \mathbb{R}^{N}\), \(Q=\sum_{i} x_{i}\), and define the functions \(C_{i}(x_{i})\) and \(p(Q)\) as follows:

    $$p(Q) = 5000^{\frac{1}{\gamma}}Q^{\frac{-1}{\gamma}}, \qquad C_i(x_i) = c_ix_i + \frac{b_i}{1 + b_i}L_i^{\frac{1}{b_i}}x_i^{\frac{b_i+1}{b_i}}. $$

    The NCP function is given by \(F_{i}(x)=C_{i}'(x_{i})-p(Q)-x_{i}\,p'(Q)\), \(i=1,\ldots,N\), or in vectorial form \(F(x) = c + L^{\frac{1}{b}}x^{\frac{1}{b}}-p(Q) \bigl(e -\frac{x}{\gamma Q}\bigr)\), with \(c_{i}, L_{i}, b_{i}>0\) and \(\gamma\geq 1\).

    For our numerics, we used:

  • N=5, c=[10,8,6,4,2]^T, b=[1.2,1.1,1,0.9,0.8]^T, L=[5,5,5,5,5]^T, e=[1,1,1,1,1]^T and γ=1.1;

  • N=10, c=[5,3,8,5,1,3,7,4,6,3]^T, b=[1.2,1,0.9,0.6,1.5,1,0.7,1.1,0.95,0.75]^T, L=[10,10,10,10,10,10,10,10,10,10]^T, e=[1,1,1,1,1,1,1,1,1,1]^T and γ=1.2.

  • P8, P9 and P10 are also described in [13, 14]. They correspond, respectively, to the HpHard test problem with n=20, n=30 and n=100.

The corresponding function F(x) is of the form F(x)=(AA^T+B+D)x+q; here the matrices A, B, and D are randomly generated as follows: any entry of the square n×n matrix A and of the n×n skew-symmetric matrix B is uniformly generated from ]−5,5[, and any entry of the diagonal matrix D is uniformly generated from ]0,3[. The vector q is uniformly generated from ]−500,0[.
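As referenced in the P1/P2 item above, here is a minimal NumPy sketch of how such a test function can be assembled; the function name and the vectorized padding are our own illustrative choices, and the analogous changes give P3.

```python
import numpy as np

def make_F_P1_P2(n, problem="P1"):
    """F_i(x) = -x_{i+1} + 2x_i - x_{i-1} + x_i^3/3 - b_i, with x_0 = x_{n+1} = 0."""
    i = np.arange(1, n + 1)
    b = (-1.0) ** i if problem == "P1" else (-1.0) ** i / np.sqrt(i)

    def F(x):
        xp = np.concatenate(([0.0], x, [0.0]))  # pad with x_0 = x_{n+1} = 0
        return -xp[2:] + 2 * xp[1:-1] - xp[:-2] + xp[1:-1] ** 3 / 3 - b

    return F
```

Such a function can be passed directly to the projection sketch given earlier, e.g. projection_method(make_F_P1_P2(64, "P2"), np.ones(64), lam=10.0).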

Table 2 Results for the projection method

The matrix AA^T+B+D is positive definite, so the function F is strongly monotone. We used the M-files proposed in [13] to generate A, B, D, and q (a code sketch of this generation is given below). We ran the projection method on the previous test problems under the same conditions and on the same machine as our methods. Table 2 gives the best results obtained when varying the value of λ (λ=0.1, 1, 10, 20, 50, 100). In each computation we used a vector of ones as the starting point. The column Iter reports the number of iterations of the projection method and cannot be compared to Initer or Outiter in Table 1. The other columns have the same meaning as in Table 1 and can be used for comparison.
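As noted above, we relied on the M-files of [13]; purely as an illustration of the generation procedure just described, here is a NumPy sketch (the function name and the use of NumPy's random generator are our own choices).

```python
import numpy as np

def make_F_hphard(n, rng=None):
    """Builds F(x) = (A A^T + B + D) x + q with the random data described above."""
    rng = np.random.default_rng() if rng is None else rng
    A = rng.uniform(-5.0, 5.0, size=(n, n))            # dense square matrix
    U = np.triu(rng.uniform(-5.0, 5.0, size=(n, n)), k=1)
    B = U - U.T                                        # skew-symmetric, entries in ]-5, 5[
    D = np.diag(rng.uniform(0.0, 3.0, size=n))         # positive diagonal matrix
    q = rng.uniform(-500.0, 0.0, size=n)
    M = A @ A.T + B + D                                # positive definite, so F is strongly monotone
    return lambda x: M @ x + q
```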


About this article

Cite this article

Haddou, M., Maheux, P. Smoothing Methods for Nonlinear Complementarity Problems. J Optim Theory Appl 160, 711–729 (2014). https://doi.org/10.1007/s10957-013-0398-1
