
A nonmonotone semismooth inexact Newton method

SILVIA BONETTINI∗, FEDERICA TINTI

Dipartimento di Matematica, Università di Modena e Reggio Emilia, Italy

Abstract
In this work we propose a variant of the inexact Newton method
for the solution of semismooth nonlinear systems of equations. We
introduce a nonmonotone scheme, which couples the inexact features
with nonmonotone strategies. For the nonmonotone scheme we
present convergence theorems. Finally, we show how these strategies
can be applied in the variational inequality context, and we present
some numerical examples.
Keywords: Semismooth Systems, Inexact Newton Methods, Nonmonotone
Convergence, Variational Inequality Problems, Nonlinear Programming
Problems.

1 Introduction
An efficient method for the solution of the nonlinear system of equations

F(x) = 0,    (1)

where F : R^n → R^n is a continuously differentiable function, is the inexact
Newton method. The idea of this method was first presented in [4],
with local convergence properties; then, in [6], the authors proposed a global
version of the method. Furthermore, the inexact Newton method has also been
proposed for the solution of nonsmooth equations (see for example [12],
[7]). The whole class of these methods can be described in a unified way by

∗ Corresponding author. Email: bonettini.silvia@unimo.it.
This research was supported by the Italian Ministry for Education, University and
Research (MIUR), FIRB Project RBAU01JYPN.


saying that an inexact Newton method is any method which generates a
sequence satisfying the following two properties:

‖F(x_k) + G(x_k, s_k)‖ ≤ η_k ‖F(x_k)‖    (2)

and

‖F(x_k + s_k)‖ ≤ (1 − β(1 − η_k)) ‖F(x_k)‖,    (3)

where G : R^n × R^n → R^n is a given iteration function, x_{k+1} = x_k + s_k, η_k is
the forcing term, i.e. a scalar parameter chosen in the interval [0, 1), and β
is a positive number fixed in (0, 1).
In the smooth case, we could choose G(x, s) = ∇F(x)^T s, where ∇F(x)^T
is the Jacobian matrix of F, while if F is just semismooth we can take
G(x, s) = Hs, where H is a matrix of the B-subdifferential of F at x (for the
definitions of semismooth function and B-subdifferential see for example [19]
and [20]). Condition (2) relates the residual of the equation

F(x_k) + G(x_k, s_k) = 0    (4)

to the outer residual, given by the quantity ‖F(x_k)‖, while (3) implies that
the ratio of the norms of the vector F computed at two successive iterates
is less than (1 − β(1 − η_k)), a quantity less than one.
It is worth stressing that the two conditions are related to each other by
means of the forcing term η_k.
Algorithms with inexact features offer many advantages, from both the
theoretical and the practical point of view. Indeed, global convergence
theorems can be proved under standard assumptions. Furthermore,
condition (2) tells us that an adaptive tolerance can be introduced in the
solution of the iteration function equation (4), saving unnecessary
computations when we are far from the solution.
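For illustration only, the following Python sketch (a hypothetical helper, not from the paper, whose implementation is in FORTRAN 90) checks whether a candidate step satisfies (2) and (3):

    import numpy as np

    def accepts_inexact_step(F, G, x_k, s_k, eta_k, beta):
        """Check the inexact Newton conditions (2) and (3) for a step s_k.
        F     : callable returning the residual F(x)
        G     : callable returning the iteration function G(x, s)
        eta_k : forcing term in [0, 1)
        beta  : fixed parameter in (0, 1)
        """
        norm_Fk = np.linalg.norm(F(x_k))
        # (2): the inner (linear model) residual is small relative to ||F(x_k)||
        cond2 = np.linalg.norm(F(x_k) + G(x_k, s_k)) <= eta_k * norm_Fk
        # (3): sufficient decrease of the residual norm at the new iterate
        cond3 = np.linalg.norm(F(x_k + s_k)) <= (1 - beta * (1 - eta_k)) * norm_Fk
        return cond2 and cond3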
A further relaxation of the requirements can be obtained by allowing
nonmonotone choices. Nonmonotone strategies (see for example [10]) are
well known in the literature for their effectiveness in the choice of the
steplength in many line-search algorithms.
In [1], nonmonotone convergence was proved in the smooth case for an
inexact Newton line-search algorithm, and the numerical experience shows
that nonmonotone strategies can be useful in this kind of algorithm to
avoid stagnation of the iterates in a neighborhood of some “critical” points.
In this paper we modify the general inexact Newton framework (2) and (3)
in a nonmonotone way, by substituting (2) and (3) with the following
conditions:
‖F(x_k) + G(x_k, s_k)‖ ≤ η_k ‖F(x_{ℓ(k)})‖    (5)

and

‖F(x_k + s_k)‖ ≤ (1 − β(1 − η_k)) ‖F(x_{ℓ(k)})‖,    (6)

where F is a semismooth function and, given N ∈ N, x_{ℓ(k)} is the element
with the following property:

‖F(x_{ℓ(k)})‖ = max_{0 ≤ j ≤ min(N,k)} ‖F(x_{k−j})‖.    (7)
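In other words, ‖F(x_{ℓ(k)})‖ is the largest residual norm among the last min(N, k) + 1 iterates. A minimal sketch, assuming the residual norms are stored in a Python list as the iteration proceeds:

    def reference_norm(res_norms, k, N):
        """Return ||F(x_l(k))|| as defined in (7): the maximum of
        ||F(x_{k-j})|| for j = 0, ..., min(N, k).
        res_norms : list with res_norms[i] = ||F(x_i)|| for i = 0, ..., k
        """
        return max(res_norms[max(0, k - N): k + 1])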

Our approach is to generalize the global method for smooth equations
presented in [1], providing convergence theorems under analogous assumptions.
In the following section we recall some basic definitions and results
for semismooth functions; in section 3 we describe the general scheme
of our nonmonotone semismooth inexact method and we prove the
convergence theorems; in section 4 we apply the method to a particular
semismooth system arising from variational inequalities and nonlinear
programming problems and, in section 5, we report the numerical results.

2 The semismooth case


We now consider the nonlinear system of equations (1) with a nonsmooth
operator F; in particular, we focus on the case in which the system (1) is
semismooth.
In order to introduce the notion of semismoothness, we also recall the
definitions of B-subdifferential and generalized gradient. We consider a
vector-valued function F : R^n → R^n, with F(x) = [f_1(x), . . . , f_n(x)]^T, as
above, and we assume that each component f_i satisfies a Lipschitz condition
near a given point x. This means that a function F : R^n → R^n is said to be
locally Lipschitz near a given point x ∈ R^n if there exists a positive number
δ such that each f_i satisfies

|f_i(x_1) − f_i(x_2)| ≤ l_i ‖x_1 − x_2‖   ∀ x_1, x_2 ∈ N_δ(x),   l_i ∈ R,

where N_δ(x) is the set {y ∈ R^n : ‖y − x‖ < δ}, and L = (l_1, . . . , l_n) is
called the rank of F.
The function F : R^n → R^n is locally Lipschitz if it is locally Lipschitz near
every x ∈ R^n.
Rademacher’s Theorem asserts that F is differentiable almost everywhere
(i.e. each f_i is differentiable almost everywhere) on any neighborhood of x
in which F is locally Lipschitz.
We denote by Ω_F the set of points at which F fails to be differentiable.

Definition 2.1 (B-subdifferential, [19])
Let F : R^n → R^n be a locally Lipschitz function near a given point x (with
x ∈ Ω_F). The B-subdifferential of F at x is

∂_B F(x) = { Z ∈ R^{n×n} : ∃ {x_k} with x_k ∉ Ω_F, x_k → x and lim_{k→∞} ∇F(x_k)^T = Z }.

Definition 2.2 (Clarke’s generalized Jacobian, [3])
Let F : R^n → R^n be a locally Lipschitz function. Clarke’s generalized
Jacobian of F at x is

∂F(x) = co ∂_B F(x),

where co denotes the convex hull in the space R^{n×n}.
Remark: Clarke’s generalized Jacobian is the convex hull of all matrices Z
obtained as the limit of sequences of the form ∇F(x_i)^T, where x_i → x and
x_i ∉ Ω_F.
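For instance, for n = 1 and F(x) = |x| we have Ω_F = {0}; sequences approaching 0 from the right give ∇F(x_k) = 1 and from the left give ∇F(x_k) = −1, hence ∂_B F(0) = {−1, 1}, while Clarke’s generalized Jacobian is the convex hull ∂F(0) = [−1, 1].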

Now we can finally define semismooth functions, as follows.

Definition 2.3 ([20]) Let F : R^n → R^n be locally Lipschitz at x ∈ R^n.
We say that F is semismooth at x if

lim_{Z ∈ ∂F(x + t v′), v′ → v, t ↓ 0} Z v′

exists for all v ∈ R^n.

The following definition of a BD-regular vector plays a crucial role in
establishing global convergence results for several iterative methods.

Definition 2.4 ([16]) Let F : R^n → R^n. We say that a point x ∈ R^n is
BD-regular for F (F is BD-regular at x) if F is locally Lipschitz at x and
all the elements of the B-subdifferential ∂_B F(x) are nonsingular.
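For instance, F(x) = |x| is BD-regular at 0, since ∂_B F(0) = {−1, 1} contains only nonsingular elements, whereas F(x) = x² is not, since ∂_B F(0) = {0}.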

The next results play an important role in establishing the global
convergence of semismooth Newton methods.

Proposition 2.1 ([19]) If F : R^n → R^n is BD-regular at x∗, then there
exist a positive number δ and a constant K > 0 such that, for all x ∈ N_δ(x∗)
and all H ∈ ∂_B F(x), H is nonsingular and

‖H^{−1}‖ ≤ K.

Proposition 2.2 ([16]) If F : R^n → R^n is semismooth at a point x ∈ R^n,
then for any ε > 0 there exists a δ > 0 such that

‖F(y) − F(x) − H (y − x)‖ ≤ ε ‖y − x‖

for all H ∈ ∂_B F(y) and all y ∈ N_δ(x).

3 Nonmonotone semismooth inexact Newton methods

The general paradigm for nonmonotone semismooth Newton methods
can be introduced, as in the smooth case, by means of the properties which
the sequence of iterates has to satisfy. Thus, according to the notation
used in (7), we require that for each k the two following conditions hold:

‖F(x_k) + H_k s_k‖ ≤ η_k ‖F(x_{ℓ(k)})‖    (8)

and

‖F(x_k + s_k)‖ ≤ (1 − β(1 − η_k)) ‖F(x_{ℓ(k)})‖,    (9)

where x_{k+1} = x_k + s_k, H_k ∈ ∂_B F(x_k), η_k ∈ [0, 1) and β is a positive
parameter fixed in (0, 1).
We call a vector s_k which satisfies (8) a nonmonotone semismooth inexact
Newton step at the level η_k.
For a sequence satisfying (8) and (9) it is possible to prove the following
convergence result, which is fundamental for the convergence proofs of the
algorithms presented below. For the analogous smooth result see
[6, Theorem 3.3] and [17].

Theorem 3.1 Let F : R^n → R^n be a locally Lipschitz function. Let {x_k} be a
sequence such that lim_{k→∞} F(x_k) = 0 and for each k the following conditions
hold:

‖F(x_k) + H_k s_k‖ ≤ η ‖F(x_{ℓ(k)})‖,    (10)

‖F(x_{k+1})‖ ≤ ‖F(x_{ℓ(k)})‖,    (11)

where H_k ∈ ∂_B F(x_k), s_k = x_{k+1} − x_k and η < 1. If x∗ is a limit point of
{x_k}, then F(x∗) = 0 and, if F is BD-regular at x∗, then the sequence {x_k}
converges to x∗.

Proof. If x∗ is a limit point of the sequence {x_k}, there exists a subsequence
{x_{k_j}} of {x_k} convergent to x∗. By the continuity of F, we obtain

F(x∗) = F( lim_{j→∞} x_{k_j} ) = lim_{j→∞} F(x_{k_j}) = 0.

Furthermore, since {x_{ℓ(k)}} is a subsequence of {x_k}, the sequence {F(x_{ℓ(k)})}
also converges to zero as k diverges. From Proposition 2.1, there exist δ > 0
and a constant K such that each H ∈ ∂_B F(x) is nonsingular and ‖H^{−1}‖ ≤ K
for any x ∈ N_δ(x∗); we can suppose that δ is sufficiently small that
Proposition 2.2 implies

‖F(y) − F(x∗) − H_y (y − x∗)‖ ≤ (1/(2K)) ‖y − x∗‖

for y ∈ N_δ(x∗) and for any H_y ∈ ∂_B F(y). Then for any y ∈ N_δ(x∗) we have

‖F(y)‖ = ‖H_y (y − x∗) + F(y) − F(x∗) − H_y (y − x∗)‖
       ≥ ‖H_y (y − x∗)‖ − ‖F(y) − F(x∗) − H_y (y − x∗)‖
       ≥ (1/K) ‖y − x∗‖ − (1/(2K)) ‖y − x∗‖
       = (1/(2K)) ‖y − x∗‖.

Then

‖y − x∗‖ ≤ 2K ‖F(y)‖    (12)

holds for any y ∈ N_δ(x∗). Now let ε ∈ (0, δ/4); since x∗ is a limit point of
{x_k}, there exists a k sufficiently large that

x_k ∈ N_{δ/2}(x∗)

and

x_{ℓ(k)} ∈ S_ε ≡ { y : ‖F(y)‖ < ε / (K(1 + η)) }.

Note that since x_{ℓ(k)} ∈ S_ε, then also x_{k+1} ∈ S_ε, because ‖F(x_{k+1})‖ ≤
‖F(x_{ℓ(k)})‖. For the direction s_k, by (10), (11) and since ‖H_k^{−1}‖ ≤ K, the
following inequality holds:

‖s_k‖ ≤ ‖H_k^{−1}‖ (‖F(x_k)‖ + ‖F(x_k) + H_k s_k‖)
      ≤ 2K (‖F(x_{ℓ(k)})‖ + η ‖F(x_{ℓ(k)})‖)
      = 2K (1 + η) ‖F(x_{ℓ(k)})‖ < 2ε < δ/2.

Since s_k = x_{k+1} − x_k, the previous inequality implies ‖x_{k+1} − x∗‖ < δ, and
from (12) we obtain

‖x_{k+1} − x∗‖ ≤ 2K ‖F(x_{k+1})‖ < 2K ε / (K(1 + η)) < δ/2,

which implies x_{k+1} ∈ N_{δ/2}(x∗). Therefore x_{ℓ(k+1)} ∈ S_ε, since ‖F(x_{ℓ(k+1)})‖ ≤
‖F(x_{ℓ(k)})‖. It follows that, for any j sufficiently large, x_j ∈ N_δ(x∗) and,
from (12),

‖x_j − x∗‖ ≤ 2K ‖F(x_j)‖.

Since F(x_j) converges to 0, we can conclude that x_j converges to x∗. ∎

3.1 A line-search semismooth inexact Newton algorithm

In this section we describe a line-search algorithm: once a semismooth
inexact Newton step has been computed, the steplength is reduced by a
backtracking procedure until an acceptance rule is satisfied, according to
the following scheme.

Algorithm 3.1

Set x_0 ∈ R^n, β ∈ (0, 1), 0 < θ_min < θ_max < 1, η_max ∈ (0, 1), k = 0.

For k = 0, 1, 2, . . .

    Choose H_k ∈ ∂_B F(x_k) and determine η̄_k ∈ [0, η_max] and s̄_k that
    satisfy
        ‖H_k s̄_k + F(x_k)‖ ≤ η̄_k ‖F(x_{ℓ(k)})‖.
    Set α_k = 1.
    While ‖F(x_k + α_k s̄_k)‖ > (1 − α_k β(1 − η̄_k)) ‖F(x_{ℓ(k)})‖
        Choose θ ∈ [θ_min, θ_max].
        Set α_k = θ α_k.
    Set x_{k+1} = x_k + α_k s̄_k.

The steplength is represented by the damping parameter α_k, which is reduced
until the backtracking condition

‖F(x_k + α_k s̄_k)‖ ≤ (1 − α_k β(1 − η̄_k)) ‖F(x_{ℓ(k)})‖    (13)

is satisfied. Condition (13) is more general than the Armijo condition
employed, for example, in [7], since it does not require the differentiability
of the merit function Ψ(x) = (1/2)‖F(x)‖².
The final inexact Newton step is given by s_k = α_k s̄_k, and it satisfies
conditions (8) and (9) with forcing term η_k = 1 − α_k (1 − η̄_k).
We will simply assume that at each iterate k it is possible to compute a
vector s̄_k which is an inexact Newton step at the level η̄_k (see for example
assumption A1 in [12] for a sufficient condition). The next lemma shows
that, under this assumption, the sequence generated by Algorithm
3.1 satisfies conditions (8) and (9).
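For illustration only, the following Python sketch mirrors Algorithm 3.1 (the paper’s code is FORTRAN 90). The callables F, pick_H (returning some H_k ∈ ∂_B F(x_k)) and inexact_step (an inner solver returning a step whose residual is below the given tolerance) are hypothetical placeholders, and the forcing-term rule is just one admissible choice:

    import numpy as np

    def nonmonotone_inexact_newton(F, pick_H, inexact_step, x0, N=5,
                                   beta=1e-4, theta=0.5, eta_max=0.9,
                                   tol=1e-8, max_iter=500):
        """A sketch of Algorithm 3.1 with a fixed backtracking factor theta."""
        x = np.asarray(x0, dtype=float)
        res_norms = [np.linalg.norm(F(x))]          # ||F(x_0)||, ||F(x_1)||, ...
        for k in range(max_iter):
            if res_norms[-1] <= tol:
                break
            ref = max(res_norms[-(N + 1):])         # ||F(x_l(k))||, see (7)
            H = pick_H(x)                           # an element of the B-subdifferential
            eta_bar = min(1.0 / (k + 1), eta_max)   # one admissible forcing term
            # s_bar is assumed to satisfy ||H s_bar + F(x)|| <= eta_bar * ref, cf. (8)
            s_bar = inexact_step(H, F(x), eta_bar * ref)
            alpha, n_back = 1.0, 0
            # backtracking until the acceptance rule (13) holds
            while np.linalg.norm(F(x + alpha * s_bar)) > \
                    (1.0 - alpha * beta * (1.0 - eta_bar)) * ref:
                alpha *= theta
                n_back += 1
                if n_back > 30:                     # safeguard, as in Section 5
                    raise RuntimeError("too many backtracking reductions")
            x = x + alpha * s_bar
            res_norms.append(np.linalg.norm(F(x)))
        return x

Note that the nonmonotone reference value ref replaces ‖F(x_k)‖ in both the inner stopping rule and the acceptance test, which is precisely what distinguishes (8) and (9) from (2) and (3).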

Lemma 3.1 Let β ∈ (0, 1); suppose that there exist η̄ ∈ [0, 1) and s̄ satisfying

‖F(x_k) + H_k s̄‖ ≤ η̄ ‖F(x_{ℓ(k)})‖.

Then there exist α_max ∈ (0, 1] and a vector s such that

‖F(x_k) + H_k s‖ ≤ η ‖F(x_{ℓ(k)})‖    (14)

‖F(x_k + s)‖ ≤ (1 − βα(1 − η)) ‖F(x_{ℓ(k)})‖    (15)

hold for any α ∈ (0, α_max], where η = 1 − α(1 − η̄) ∈ [η̄, 1).

Proof. Let s = α s̄. Then we have

‖F(x_k) + H_k s‖ = ‖F(x_k) − α F(x_k) + α F(x_k) + α H_k s̄‖
                ≤ (1 − α) ‖F(x_k)‖ + α ‖F(x_k) + H_k s̄‖
                ≤ (1 − α) ‖F(x_{ℓ(k)})‖ + α η̄ ‖F(x_{ℓ(k)})‖
                = η ‖F(x_{ℓ(k)})‖,

so (14) is proved. Now let

ε = (1 − β)(1 − η̄) ‖F(x_{ℓ(k)})‖ / ‖s̄‖    (16)

and let δ > 0 be sufficiently small (see Proposition 2.2) that

‖F(x_k + s) − F(x_k) − H_k s‖ ≤ ε ‖s‖    (17)

whenever ‖s‖ ≤ δ. Choosing α_max = min(1, δ/‖s̄‖), for any α ∈ (0, α_max]
we have ‖s‖ ≤ δ and then, using (16) and (17), we obtain the following
inequality:

‖F(x_k + s)‖ ≤ ‖F(x_k + s) − F(x_k) − H_k s‖ + ‖F(x_k) + H_k s‖
             ≤ ε α ‖s̄‖ + η ‖F(x_{ℓ(k)})‖
             = ((1 − β)(1 − η̄) α + (1 − α(1 − η̄))) ‖F(x_{ℓ(k)})‖
             = (1 − β α (1 − η̄)) ‖F(x_{ℓ(k)})‖
             ≤ (1 − β α (1 − η)) ‖F(x_{ℓ(k)})‖,

which completes the proof. ∎

A consequence of the previous lemma is that, at each iterate k, the
backtracking loop of Algorithm 3.1 terminates in a finite number of steps.
Indeed, at each iterate k the backtracking condition (13) is satisfied for
α < α_max, where α_max depends on k. Since the value of α_k is reduced by a
factor θ ≤ θ_max < 1, there exists a positive integer p such that (θ_max)^p < α_max,
and so the while loop terminates after at most p steps. When, at some
iterate k, it is impossible to determine the next point x_{k+1} satisfying (8)
and (9), we say that the algorithm breaks down. Thus, Lemma 3.1 yields
that, if it is possible to compute the semismooth inexact Newton step s̄_k
satisfying (8), then Algorithm 3.1 does not break down and it is well defined.

Theorem 3.2 Suppose that {x_k} is the sequence generated by the
nonmonotone semismooth Algorithm 3.1, with 2β < 1 − η_max. Assume that the
following conditions hold:

A1 There exists a limit point x∗ of the sequence {x_k} such that F is
BD-regular at x∗;

A2 At each iterate k it is possible to find a forcing term η̄_k and a vector
s̄_k such that the inexact residual condition (8) is satisfied;

A3 For every sequence {x_k} converging to x∗, every convergent sequence
{s_k} and every sequence {λ_k} of positive scalars converging to zero,

lim sup_{k→+∞} [Ψ(x_k + λ_k s_k) − Ψ(x_{ℓ(k)})] / λ_k ≤ lim_{k→+∞} F(x_k)^T H_k s_k,

where Ψ(x) = (1/2)‖F(x)‖², whenever the limit on the left-hand side
exists;

A4 For every sequence {x_{k_j}} such that α_{k_j} converges to zero, ‖s̄_{k_j}‖
is bounded.

Then F(x∗) = 0 and the sequence {x_k} converges to x∗.
Proof. Assumption A1 implies that the norm of the vector s̄_k is
bounded in a neighborhood of the point x∗. Indeed, from Proposition 2.1,
there exist a positive number δ and a constant K such that ‖H_k^{−1}‖ ≤ K
for any H_k ∈ ∂_B F(x_k), for any x_k ∈ N_δ(x∗).
Thus, the following conditions hold:

‖s̄_k‖ ≤ ‖H_k^{−1}‖ (‖F(x_k)‖ + ‖F(x_k) + H_k s̄_k‖)
      ≤ K (‖F(x_{ℓ(k)})‖ + η_max ‖F(x_{ℓ(k)})‖)
      = K (1 + η_max) ‖F(x_{ℓ(k)})‖
      ≤ K (1 + η_max) ‖F(x_0)‖.

Furthermore, condition A2 ensures that Algorithm 3.1 does not
break down; thus it generates an infinite sequence.
Now we consider separately the two following cases:

a) there exists a set of indices K such that {x_k}_{k∈K} converges to x∗ and
lim inf_{k→+∞, k∈K} α_k = 0;

b) for any subsequence {x_k}_{k∈K} converging to x∗ we have
lim inf_{k→+∞, k∈K} α_k = τ > 0.
a) Since {‖F(x_{ℓ(k)})‖} is a monotone nonincreasing, bounded sequence,
there exists L ≥ 0 such that

L = lim_{k→∞} ‖F(x_{ℓ(k)})‖.    (18)

From (7) we have that ‖F(x_{ℓ(k)})‖ ≥ ‖F(x_k)‖, thus

L ≥ lim_{k→+∞, k∈I} ‖F(x_k)‖,    (19)

where I is a set of indices such that the limit on the right-hand side exists.
Since α_k is the final value after the backtracking loop, we must have

‖F(x_k + (α_k/θ) s̄_k)‖ > (1 − (α_k/θ) β (1 − η̄_k)) ‖F(x_{ℓ(k)})‖,    (20)

which yields

lim_{k→+∞, k∈K} ‖F(x_k + (α_k/θ) s̄_k)‖ ≥ lim_{k→+∞, k∈K} (1 − (α_k/θ) β (1 − η̄_k)) ‖F(x_{ℓ(k)})‖.    (21)

If we choose K as the set of indices with property a), exploiting the
continuity of F, recalling that η̄_k is bounded, that ‖s̄_k‖ is bounded and
subsequencing to ensure the existence of the limit of α_k, we obtain ‖F(x∗)‖ ≥ L.
On the other hand, from (19) we also have L ≥ ‖F(x∗)‖; thus it follows
that

L = ‖F(x∗)‖.    (22)
Furthermore, by squaring both sides of (20), we obtain the following
inequalities:

‖F(x_k + (α_k/θ) s̄_k)‖² > (1 − (α_k/θ) β (1 − η̄_k))² ‖F(x_{ℓ(k)})‖²
                         ≥ (1 − 2 (α_k/θ) β (1 − η̄_k)) ‖F(x_{ℓ(k)})‖².

This yields

‖F(x_k + (α_k/θ) s̄_k)‖² − ‖F(x_{ℓ(k)})‖² > −2 (α_k/θ) β (1 − η̄_k) ‖F(x_{ℓ(k)})‖².    (23)

Dividing both sides by α_k/θ, passing to the limit and exploiting
assumptions A3 and A4, we obtain

lim_{k→+∞, k∈K} F(x_k)^T H_k s_k ≥ lim_{k→+∞, k∈K} [ ‖F(x_k + (α_k/θ) s̄_k)‖² − ‖F(x_{ℓ(k)})‖² ] / (α_k/θ)
                                 ≥ lim_{k→+∞, k∈K} −2 β (1 − η̄_k) ‖F(x_{ℓ(k)})‖².    (24)

Since (22) holds and taking into account that η̄_k ≥ 0, we have

lim_{k→+∞, k∈K} F(x_k)^T H_k s_k ≥ −2β ‖F(x∗)‖².    (25)

On the other hand, we have

F(x_k)^T H_k s̄_k = F(x_k)^T [−F(x_k) + F(x_k) + H_k s̄_k]
                 = −‖F(x_k)‖² + F(x_k)^T [F(x_k) + H_k s̄_k]
                 ≤ −‖F(x_k)‖² + ‖F(x_k)‖ · ‖F(x_k) + H_k s̄_k‖
                 ≤ −‖F(x_k)‖² + η_max ‖F(x_{ℓ(k)})‖²,    (26)

thus we can write

lim_{k→+∞} F(x_k)^T H_k s̄_k ≤ lim_{k→+∞} ( −‖F(x_k)‖² + η_max ‖F(x_{ℓ(k)})‖² )

and, considering the subsequence {x_k}_{k∈K}, it follows that

lim_{k→+∞, k∈K} F(x_k)^T H_k s̄_k ≤ −(1 − η_max) ‖F(x∗)‖².    (27)

From (25) and (27) we deduce

−2β ‖F(x∗)‖² ≤ −(1 − η_max) ‖F(x∗)‖².

Since we set (1 − η_max) > 2β, we must have ‖F(x∗)‖ = 0.
This implies that lim_{k→+∞} ‖F(x_{ℓ(k)})‖ = 0 and, consequently, from (7), we
have

lim_{k→+∞} ‖F(x_k)‖ = 0.

Thus, the convergence of the sequence is ensured by Theorem 3.1.


b) Writing the backtracking condition for the iterate ℓ(k), we obtain

‖F(x_{ℓ(k)})‖ ≤ (1 − α_{ℓ(k)−1} β (1 − η̄_{ℓ(k)−1})) ‖F(x_{ℓ(ℓ(k)−1)})‖.    (28)

When k diverges, we can write

L ≤ L − L · lim_{k→∞} α_{ℓ(k)−1} β (1 − η̄_{ℓ(k)−1}),    (29)

where L is defined as in (18).
Since β is a constant and 1 − η̄_j ≥ 1 − η_max > 0 for any j, (29) yields

L · lim_{k→∞} α_{ℓ(k)−1} ≤ 0,

which implies

L = 0

or

lim_{k→∞} α_{ℓ(k)−1} = 0.    (30)

Suppose that L ≠ 0, so that (30) holds. Defining ℓ̂(k) = ℓ(k + N + 1), so
that ℓ̂(k) > k, we show by induction that for any j ≥ 1 we have

lim_{k→∞} α_{ℓ̂(k)−j} = 0    (31)

and

lim_{k→∞} ‖F(x_{ℓ̂(k)−j})‖ = L.    (32)

For j = 1, since {α_{ℓ̂(k)−1}} is a subsequence of {α_{ℓ(k)−1}}, (30) implies (31).
Thanks to assumption A4, we also obtain

lim_{k→∞} ‖x_{ℓ̂(k)} − x_{ℓ̂(k)−1}‖ = 0.    (33)

By exploiting the Lipschitz property of F, from |‖F(x)‖ − ‖F(y)‖| ≤ ‖F(x) −
F(y)‖ and (33) we obtain

lim_{k→∞} ‖F(x_{ℓ̂(k)−1})‖ = L.    (34)

Assume now that (31) and (32) hold for a given j. We have

‖F(x_{ℓ̂(k)−j})‖ ≤ (1 − α_{ℓ̂(k)−(j+1)} β (1 − η̄_{ℓ̂(k)−(j+1)})) ‖F(x_{ℓ(ℓ̂(k)−(j+1))})‖.

Using the same arguments employed above, since L > 0, we obtain

lim_{k→∞} α_{ℓ̂(k)−(j+1)} = 0

and so

lim_{k→∞} ‖x_{ℓ̂(k)−j} − x_{ℓ̂(k)−(j+1)}‖ = 0,

lim_{k→∞} ‖F(x_{ℓ̂(k)−(j+1)})‖ = L.

Thus, we conclude that (31) and (32) hold for any j ≥ 1. Now, for any k,
we can write

‖x_{k+1} − x_{ℓ̂(k)}‖ ≤ Σ_{j=1}^{ℓ̂(k)−k−1} α_{ℓ̂(k)−j} ‖s̄_{ℓ̂(k)−j}‖,

so that, since ℓ̂(k) − k − 1 ≤ N, we have

lim_{k→∞} ‖x_{k+1} − x_{ℓ̂(k)}‖ = 0.    (35)

Furthermore, we have

‖x_{ℓ̂(k)} − x∗‖ ≤ ‖x_{ℓ̂(k)} − x_{k+1}‖ + ‖x_{k+1} − x∗‖.    (36)

Since x∗ is a limit point of {x_k} and (35) holds, (36) implies that x∗ is
a limit point of the sequence {x_{ℓ̂(k)}}. From (33) we conclude that x∗ is a
limit point also of the sequence {x_{ℓ̂(k)−1}}, which contradicts the assumption
made: indeed, since {x_{ℓ̂(k)−1}} converges to x∗ along a subsequence,
hypothesis b) implies that α_{ℓ̂(k)−1} should be bounded away from zero there.
Hence we necessarily have L = 0, which implies

lim_{k→∞} ‖F(x_k)‖ = 0.

Now Theorem 3.1 completes the proof. ∎
The previous theorem is proved under the assumptions A1–A4:
hypothesis A4 is analogous to the one employed in [1] in the smooth case,
while A3 is the nonmonotone, and weaker, version of assumption (A4)
in [12]. This hypothesis is not required in the smooth case, thanks to the
stronger properties of the function F and of its Jacobian ∇F(x)^T (see §3.2.10
in [14]).

4 An application: a nonmonotone semismooth inexact Newton method for the solution of Karush–Kuhn–Tucker systems

In this section we consider a particular semismooth system of equations
derived from the optimality conditions arising from variational inequalities
or nonlinear programming problems.
We consider the classical variational inequality problem VIP(C,V), which is
to find x∗ ∈ C such that

⟨V(x∗), x − x∗⟩ ≥ 0   ∀ x ∈ C,    (37)

where C is a nonempty closed convex subset of R^n, ⟨·, ·⟩ is the usual inner
product in R^n and V : R^n → R^n is a continuous function.
When V is the gradient mapping of a real-valued function f : R^n → R, the
problem VIP(C,V) becomes the stationary point problem of the following
optimization problem:

min f(x)
s.t. x ∈ C.    (38)

We assume, as in [18], that the feasible set C can be represented as follows:

C = { x ∈ R^n : h(x) = 0, g(x) ≥ 0, Π_l x ≥ l, Π_u x ≤ u },    (39)

where h : R^n → R^{neq}, g : R^n → R^m, Π_l ∈ R^{nl×n} and Π_u ∈ R^{nu×n}; Π_l (or
Π_u) denotes the matrix formed by the rows of the identity matrix whose
indices are those of the entries of x which are bounded below (above);
nl and nu denote, respectively, the number of entries of the vector x subject
to lower and upper bounds.
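For example, Π_l just selects the components of x with a lower bound; a minimal sketch (the index set is illustrative):

    import numpy as np

    def selection_matrix(n, bounded_idx):
        """Rows of the n-by-n identity matrix with the given indices,
        so that (Pi @ x) extracts the bounded components of x."""
        return np.eye(n)[bounded_idx, :]

    # e.g. n = 4 with x_0 and x_2 bounded below: Pi_l @ x = (x_0, x_2)
    Pi_l = selection_matrix(4, [0, 2])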
We consider the following conditions, representing the Karush–Kuhn–Tucker
(KKT) optimality conditions of VIP(C,V) or of the nonlinear programming
problem (38):

L(x, λ, µ, κ_l, κ_u) = 0
h(x) = 0
µ^T g(x) = 0,   g(x) ≥ 0,   µ ≥ 0    (40)
κ_l^T (Π_l x − l) = 0,   Π_l x − l ≥ 0,   κ_l ≥ 0
κ_u^T (u − Π_u x) = 0,   u − Π_u x ≥ 0,   κ_u ≥ 0

where L(x, λ, µ, κ_l, κ_u) = V(x) − ∇h(x)λ − ∇g(x)µ − Π_l^T κ_l + Π_u^T κ_u is the
Lagrangian function. Here ∇h(x)^T and ∇g(x)^T are the Jacobian matrices
of h(x) and g(x), respectively.
In order to rewrite the KKT conditions as a nonlinear system of equations,
we make use of Fischer's function [8], ϕ : R² → R, defined by

ϕ(a, b) := √(a² + b²) − a − b.

The main property of this function is the following characterization of its
zeros:

ϕ(a, b) = 0  ⇔  a ≥ 0, b ≥ 0, ab = 0.
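A direct NumPy transcription of Fischer's function, with a numerical check of this characterization on a few illustrative pairs:

    import numpy as np

    def fischer(a, b):
        """Fischer's function: phi(a, b) = sqrt(a^2 + b^2) - a - b."""
        return np.hypot(a, b) - a - b

    # phi(a, b) = 0  iff  a >= 0, b >= 0 and ab = 0
    assert fischer(0.0, 3.0) == 0.0       # a = 0, b >= 0
    assert fischer(2.0, 0.0) == 0.0       # b = 0, a >= 0
    assert fischer(1.0, 1.0) != 0.0       # a, b > 0, so ab != 0
    assert fischer(-1.0, 0.0) != 0.0      # a < 0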
Therefore, the KKT conditions (40) can be equivalently written as the
nonlinear system of equations

V(x) − ∇h(x)λ − ∇g(x)µ − Π_l^T κ_l + Π_u^T κ_u = 0
h(x) = 0
ϕ_I(µ, g(x)) = 0
ϕ_l(κ_l, Π_l x − l) = 0
ϕ_u(κ_u, u − Π_u x) = 0

or, in more concise form,

Φ(w) = 0,    (41)

where w = (x^T, λ^T, µ^T, κ_l^T, κ_u^T)^T; ϕ_I : R^{2m} → R^m, with ϕ_I(µ, g(x)) :=
(ϕ(µ_1, g_1), . . . , ϕ(µ_m, g_m))^T ∈ R^m; ϕ_l : R^{2nl} → R^{nl}, with ϕ_l(κ_l, Π_l x − l) ∈
R^{nl}, and ϕ_u : R^{2nu} → R^{nu}, with ϕ_u(κ_u, u − Π_u x) ∈ R^{nu}, both defined
componentwise in the same way.
Note that the functions ϕ_I, ϕ_l, ϕ_u are not differentiable at the origin, so that
the system (41) is a semismooth reformulation of the KKT conditions (40).
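For concreteness, here is a hypothetical Python sketch of how Φ(w) could be assembled from user-supplied problem data (all function and parameter names are placeholders; the original implementation is FORTRAN 90):

    import numpy as np

    def fischer(a, b):
        """Fischer's function, applied componentwise to arrays."""
        return np.hypot(a, b) - a - b

    def Phi(w, V, h, g, Jh, Jg, Pi_l, Pi_u, l, u, n, neq, m):
        """Sketch of the semismooth reformulation (41) of the KKT conditions (40).
        Jh(x) and Jg(x) return the Jacobian matrices nabla h(x)^T and nabla g(x)^T."""
        nl, nu_ = Pi_l.shape[0], Pi_u.shape[0]
        # unpack w = (x, lambda, mu, kappa_l, kappa_u)
        x   = w[:n]
        lam = w[n:n + neq]
        mu  = w[n + neq:n + neq + m]
        kl  = w[n + neq + m:n + neq + m + nl]
        ku  = w[n + neq + m + nl:n + neq + m + nl + nu_]
        lagr = V(x) - Jh(x).T @ lam - Jg(x).T @ mu - Pi_l.T @ kl + Pi_u.T @ ku
        return np.concatenate([
            lagr,                        # L(x, lambda, mu, kappa_l, kappa_u) = 0
            h(x),                        # h(x) = 0
            fischer(mu, g(x)),           # phi_I(mu, g(x)) = 0
            fischer(kl, Pi_l @ x - l),   # phi_l(kappa_l, Pi_l x - l) = 0
            fischer(ku, u - Pi_u @ x),   # phi_u(kappa_u, u - Pi_u x) = 0
        ])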

The system (41) can be solved by the semismooth inexact Newton method
[7], given by

w_{k+1} = w_k + α_k Δw_k,

where w_0 is a convenient starting point, α_k is a damping parameter and
Δw_k is the solution of the following linear system:

H_k Δw = −Φ(w_k) + r_k,    (42)

where H_k ∈ ∂_B Φ(w_k) and r_k is the residual vector, which satisfies the
condition

‖r_k‖ ≤ η_k ‖Φ(w_k)‖.

As shown in [18], by permuting the rows of the matrix H_k and of the
right-hand side, and changing the sign of the fourth row, the system (42)
can be written as follows:

[ R_{κl}    0        0       R_l Π_l                          0     ] [ Δκ_l ]     [ ϕ_l(κ_l, Π_l x − l)  ]
[ 0         R_{κu}   0       R_u Π_u                          0     ] [ Δκ_u ]     [ ϕ_u(κ_u, u − Π_u x)  ]
[ 0         0        R_µ     R_g (∇g(x))^T                    0     ] [ Δµ   ] = − [ ϕ_I(µ, g(x))         ] + P r
[ Π_l^T    −Π_u^T    ∇g(x)   −∇V(x) + ∇²g(x)µ + ∇²h(x)λ       ∇h(x) ] [ Δx   ]     [ α                    ]
[ 0         0        0       (∇h(x))^T                        0     ] [ Δλ   ]     [ h(x)                 ]

where P r is the permuted residual vector and −α = V(x) − ∇h(x)λ −
∇g(x)µ − Π_l^T κ_l + Π_u^T κ_u;

R_g = diag(r_{g1}, . . . , r_{gm}),  with

(r_g)_i = g_i / √(µ_i² + g_i²) − 1   if (g_i(x), µ_i) ≠ (0, 0),
(r_g)_i = −1                         if (g_i(x), µ_i) = (0, 0);

R_µ = diag(r_{µ1}, . . . , r_{µm}),  with

(r_µ)_i = µ_i / √(µ_i² + g_i²) − 1   if (g_i(x), µ_i) ≠ (0, 0),
(r_µ)_i = −1                         if (g_i(x), µ_i) = (0, 0);

R_l = diag(r_{l1}, . . . , r_{l,nl}),  with

(r_l)_i = ((Π_l x)_i − l_i) / √((κ_l)_i² + ((Π_l x)_i − l_i)²) − 1   if ((Π_l x)_i − l_i, (κ_l)_i) ≠ (0, 0),
(r_l)_i = −1                                                         if ((Π_l x)_i − l_i, (κ_l)_i) = (0, 0);

R_{κl} = diag(r_{κl,1}, . . . , r_{κl,nl}),  with

(r_{κl})_i = (κ_l)_i / √((κ_l)_i² + ((Π_l x)_i − l_i)²) − 1   if ((κ_l)_i, (Π_l x)_i − l_i) ≠ (0, 0),
(r_{κl})_i = −1                                               if ((κ_l)_i, (Π_l x)_i − l_i) = (0, 0);

R_u = diag(r_{u1}, . . . , r_{u,nu}),  with

(r_u)_i = −(u_i − (Π_u x)_i) / √((κ_u)_i² + (u_i − (Π_u x)_i)²) − 1   if (u_i − (Π_u x)_i, (κ_u)_i) ≠ (0, 0),
(r_u)_i = −1                                                          if (u_i − (Π_u x)_i, (κ_u)_i) = (0, 0);

R_{κu} = diag(r_{κu,1}, . . . , r_{κu,nu}),  with

(r_{κu})_i = (κ_u)_i / √((κ_u)_i² + (u_i − (Π_u x)_i)²) − 1   if ((κ_u)_i, u_i − (Π_u x)_i) ≠ (0, 0),
(r_{κu})_i = −1                                               if ((κ_u)_i, u_i − (Π_u x)_i) = (0, 0).
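These diagonal entries are, up to the −1 shift, the partial derivatives of Fischer's function, with the conventional value −1 when the argument pair vanishes. A vectorized sketch for (r_g)_i and (r_µ)_i (the other four matrices are analogous):

    import numpy as np

    def r_g_r_mu(g_val, mu):
        """Diagonal entries (r_g)_i and (r_mu)_i of R_g and R_mu.
        Where (g_i(x), mu_i) = (0, 0), both entries are set to -1."""
        norm = np.hypot(mu, g_val)
        safe = np.where(norm == 0.0, 1.0, norm)    # avoid division by zero
        r_g  = np.where(norm == 0.0, -1.0, g_val / safe - 1.0)
        r_mu = np.where(norm == 0.0, -1.0, mu / safe - 1.0)
        return r_g, r_mu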

Now we define the merit function Ψ : R^{m+n+neq+nl+nu} → R, Ψ(w) =
(1/2)‖Φ(w)‖². The differentiability of the function Ψ(w) plays a crucial role in
the globalization strategy of the semismooth inexact Newton method proposed
in [7]. In the approach followed here, this property is not required, since the
convergence Theorem 3.2 can be proved without assuming this hypothesis,
by means of the backtracking rule proposed in [6] and employed also in [19].
We now introduce the nonmonotone inexact Newton algorithm, as follows:

Algorithm 4.1

Step 1 Choose w_0 = (x_0, λ_0, µ_0, κ_{l0}, κ_{u0}) ∈ R^{m+n+neq+nl+nu}, with λ_0 = 0,
    µ_0 = 0, κ_{l0} = 0, κ_{u0} = 0; choose θ ∈ (0, 1), β ∈ (0, 1/2), fix η_max < 1
    and set k = 0.

Step 2 (Stopping criterion)
    If ‖Φ(w_k)‖ ≤ tol then stop; else go to Step 3.

Step 3 (Search direction Δw)
    Select an element H_k ∈ ∂_B Φ(w_k).
    Find a direction Δw_k and a parameter η_k ∈ [0, η_max] such that

    ‖H_k Δw_k + Φ(w_k)‖ ≤ η_k ‖Φ(w_{ℓ(k)})‖.    (43)

Step 4 (Linesearch)
    Compute the minimum integer h such that, with α_k = θ^h, the following
    condition holds:

    ‖Φ(w_k + α_k Δw_k)‖ ≤ (1 − β α_k (1 − η_k)) ‖Φ(w_{ℓ(k)})‖.    (44)

Step 5 Compute w_{k+1} = w_k + α_k Δw_k and go to Step 2.

It is straightforward to observe that Algorithm 4.1 is a special case of
Algorithm 3.1. Furthermore, as shown in [7], the merit function Ψ(w) is
differentiable and ∇Ψ(w) = H^T Φ(w) for H ∈ ∂_B Φ(w) (see Proposition 4.2 in [7]).
This yields that the hypothesis A3 holds (see [12]).
Moreover, we assume that H_k in (43) is nonsingular and that all the iterates
w_k belong to a compact set. As a consequence, we have that the norm of
the direction is bounded: indeed, for any k, from (43) we obtain

‖Δw_k‖ ≤ M (1 + η_max) ‖Φ(w_0)‖,

where M = max_k ‖H_k^{−1}‖.

5 Numerical results

In this section we report some numerical experiments, obtained by coding
Algorithm 4.1 in FORTRAN 90, using double precision, on an HP zx6000
workstation with an Itanium2 1.3 GHz processor and 2 GB of RAM, running
the HP-UX operating system.
In particular, the input parameters are β = 10⁻⁴, θ = 0.5, tol = 10⁻⁸, and we
declare a failure of the algorithm when the tolerance tol is not reached after
500 iterations or when, in order to satisfy the backtracking condition (44),
more than 30 reductions of the damping parameter have been performed.
The forcing term η_k has been adaptively chosen as

η_k = max( 1/(1 + k), 10⁻⁸ ).

The solution of the linear system in (43) is computed by the LSQR method
(Paige and Saunders, [15]) with a suitable preconditioner proposed in [18].
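As an illustration of this setup, a sketch of the forcing-term rule together with an inexact solve via SciPy's LSQR (the preconditioner of [18] is omitted, and mapping η_k onto LSQR's stopping tolerances is a rough simplification):

    import numpy as np
    from scipy.sparse.linalg import lsqr

    def forcing_term(k):
        """Adaptive forcing term eta_k = max(1/(1+k), 1e-8)."""
        return max(1.0 / (1 + k), 1e-8)

    def inexact_direction(H_k, Phi_k, eta_k):
        """Approximately solve H_k dw = -Phi_k, stopping early so that
        roughly ||H_k dw + Phi_k|| <= eta_k * ||Phi_k|| (cf. (43))."""
        dw = lsqr(H_k, -Phi_k, atol=eta_k, btol=eta_k)[0]
        return dw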

We have considered a test set composed of the nonlinear programming
problems and complementarity problems listed in Table 1, where we also report
the number of variables n, the number of equality and inequality constraints,
neq and m respectively, and the number of lower and upper bounds, nl and
nu respectively.
Tables 2 and 3 summarize the results obtained on this test set in terms of
the number of external and inner iterations, reported in the rows labelled “ext.”
and “inn.” respectively, and of the number of backtracking reductions
(the rows labelled “back”). Our aim is to compare the performance
of Algorithm 4.1 with different monotonicity degrees, corresponding to
the columns of the tables, for N = 1, 3, 5, 7.
It is worth stressing that the case N = 1 is the usual monotone case.
Tables 2 and 3 show that the nonmonotone strategies can produce a noticeable
decrease both in the number of inner iterations and in the number of
backtracking reductions. Furthermore, in some cases, the number of external
iterations is also reduced when nonmonotone choices are employed.
This fact could be explained by observing that different choices of the
parameter N imply different values of the inner tolerance, and by recalling that
the direction Δw_k satisfying property (43) depends on the inner tolerance
itself. On the other hand, a too large value of the parameter N could, in
some cases, produce a degenerate behaviour of the algorithm, as we observed
for example in the MCP problem duopoly.

Table 1: Test Problems


NLP Problem Ref. n neq m nl nu
harkerp2 [2] 100 0 0 100 0
himmelbk [2] 24 14 0 24 0
optcdeg2 [2] 295 197 0 197 0
optcdeg3 [2] 295 197 0 197 99
optcntrl [2] 28 19 1 20 10
aug2dc [2] 220 96 0 18 0
minsurf [5] 225 0 0 225 0
marine [5] 175 152 0 15 0
steering [5] 294 236 0 61 60
dtoc2 [2] 294 196 0 0 0
dtoc6 [2] 298 149 0 0 0
lukvle8 [11] 300 298 0 0 0
blend * 24 14 0 24 0
branin * 2 2 0 2 0
kowalik * 4 0 0 4 4
osbornea [2] 5 0 0 5 5
rosenbr [2] 2 0 0 0 0
hs6 [2] 2 0 1 0 0
mitt105 [13, Ex.5.5] 65 45 0 65 65
α = 0.01, N = 5
mitt305 [13, Ex.4] 70 45 0 25 50
α = 0.001, N = 5
mitt405 [13, Ex.3] 50 25 0 25 50
α = 0.001, N = 5
MCP Problem Ref. n neq m nl nu
ehl-kost [9] 101 0 0 100 0
ehl-def [9] 101 0 0 100 0
bertsek [9] 15 0 0 10 0
choi [9] 13 0 0 0 0
josephy [9] 4 0 0 4 0
bai-haung [9] 4900 0 0 4900 0
bratu [9] 5929 0 0 5625 5625
duopoly [9] 69 0 0 63 0
ehl-k40 [9] 41 0 0 40 0
hydroc06 [9] 29 0 0 11 0
lincont [9] 419 0 0 170 0
opt-cont31 [9] 1024 0 0 512 512
opt-cont127 [9] 4096 0 0 2048 2048
opt-cont511 [9] 16384 0 0 8192 8192

*http://scicomp.ewha.ac.kr/netlib/ampl/models/nlmodels/

Table 2: Nonmonotone results in the NLP problems


NLP Problem N =1 N =3 N =5 N =7
harkerp2 ext. 105 104 104 104
inn. 471 404 415 438
back 37 20 14 8
himmelbk ext. 22 23 27 25
inn. 114 96 149 110
back 22 60 83 58
optcdeg2 ext. - - 87 75
inn. - - 294 278
back - - 313 289
optcdeg3 ext. - 100 73 68
inn. - 266 158 134
back - 315 133 82
optcntrl ext. 31 25 23 23
inn. 78 54 45 45
back 135 70 54 54
aug2dc ext. 7 7 7 7
inn. 12 7 7 7
back 0 0 0 0
minsurf ext. 5 5 5 5
inn. 10 8 8 5
back 0 0 0 0
marine ext. 94 68 67 72
inn. 436 296 284 287
back 473 327 287 309
steering ext. 11 - - -
inn. 30 - - -
back 37 - - -
dtoc2 ext. 7 7 7 7
inn. 11 8 8 8
back 1 1 1 1
dtoc6 ext. 21 9 9 9
inn. 21 9 9 9
back 39 1 0 0
lukvle8 ext. - 20 21 25
inn. - 418 365 455
back - 34 34 34
blend ext. 23 20 19 19
inn. 79 112 82 75
back 225 98 72 72
branin ext. 8 8 8 8
inn. 8 8 8 8
back 11 2 2 2
kowalik ext. 37 11 22 20
inn. 37 11 22 20
back 149 1 14 6
osbornea ext. 121 16 16 16
inn. 121 16 16 16
back 443 0 0 0
rosenbr ext. 180 14 9 9
inn. 180 14 9 9
back 1121 4 2 2
hs6 ext. 8 7 7 7
inn. 8 7 7 7
back 14 11 11 11
mitt105 ext. 11 9 8 8
inn. 19 13 9 9
back 13 4 0 0
mitt305 ext. 31 24 24 20
inn. 71 47 46 37
back 195 105 102 68
mitt405 ext. 32 26 19 19
inn. 77 54 39 39
back 227 144 81 81
− the algorithm does not converge

References
[1] S. Bonettini (2005). A nonmonotone inexact Newton method, Optimization
Methods and Software, 20(4–5).

[2] I. Bongartz, A. R. Conn, N. Gould and Ph. L. Toint (1995). CUTE:
Constrained and Unconstrained Testing Environment, ACM Transactions
on Mathematical Software, 21, 123–160.

[3] F. H. Clarke (1983). Optimization and Nonsmooth Analysis, John Wiley &
Sons, New York.

[4] R. S. Dembo, S. C. Eisenstat and T. Steihaug (1982). Inexact Newton
methods, SIAM Journal on Numerical Analysis, 19, 400–408.

[5] E. D. Dolan, J. J. Moré and T. S. Munson (2004). Benchmarking
optimization software with COPS 3.0, Technical Report ANL/MCS-TM-273,
Argonne National Laboratory, Illinois, USA.

[6] S. C. Eisenstat and H. F. Walker (1994). Globally convergent inexact
Newton methods, SIAM Journal on Optimization, 4, 393–422.

Table 3: Nonmonotone results in the MCP problems


MCP Problem N =1 N =3 N =5 N =7
ehl-kost ext. 14 12 14 16
inn. 104 50 50 48
back 17 0 0 0
ehl-def ext. 14 12 14 16
inn. 103 50 50 48
back 17 0 0 0
bertsek ext. 6 6 6 6
inn. 8 7 6 6
back 0 0 0 0
choi ext. 5 5 5 5
inn. 5 5 5 5
back 0 0 0 0
josephy ext. 6 6 7 7
inn. 8 7 7 7
back 2 2 2 2
bai-haung ext. 6 6 6 6
inn. 13 9 9 9
back 0 0 0 0
bratu ext. 5 5 5 5
inn. 10 6 6 6
back 0 0 0 0
duopoly ext. 44 38 48 ∗
inn. 135 127 140 ∗
back 225 158 81 ∗
ehl-k40 ext. ∗ ∗ 203 333
inn. ∗ ∗ 1292 2056
back ∗ ∗ 2252 3230
hydroc06 ext. 5 5 5 5
inn. 8 6 6 6
back 1 1 1 1
lincont ext. 46 32 33 34
inn. 385 165 144 170
back 220 89 77 73
opt-cont127 ext. 10 10 10 10
inn. 49 33 21 19
back 0 0 0 0
opt-cont31 ext. 9 9 9 11
inn. 101 52 42 42
back 0 0 0 0
opt-cont511 ext. 8 8 8 8
inn. 22 15 14 14
back 0 0 0 0
∗ maximum number of backtracking reductions reached

[7] F. Facchinei, A. Fischer and C. Kanzow (1996). Inexact Newton methods
for semismooth equations with applications to variational inequality
problems, in G. Di Pillo and F. Giannessi (eds.), Nonlinear Optimization
and Applications, Plenum Press, New York, 125–139.

[8] A. Fischer (1992). A special Newton-type optimization method,
Optimization, 24, 269–284.

[9] S. P. Dirkse and M. C. Ferris (1995). A collection of nonlinear mixed
complementarity problems, Optimization Methods and Software, 5, 123–156.

[10] L. Grippo, F. Lampariello and S. Lucidi (1986). A nonmonotone line
search technique for Newton's method, SIAM Journal on Numerical
Analysis, 23, 707–716.

[11] L. Lukšan and J. Vlček (1999). Sparse and partially separable test problems
for unconstrained and equality constrained optimization, Technical
Report 767, Institute of Computer Science, Academy of Sciences of the
Czech Republic.

[12] J. M. Martinez and L. Qi (1995). Inexact Newton methods for solving
nonsmooth equations, Journal of Computational and Applied Mathematics,
60, 127–145.

[13] H. D. Mittelmann and H. Maurer (1999). Optimization techniques for
solving elliptic control problems with control and state constraints: Part
1. Boundary control, Computational Optimization and Applications,
16, 29–55.

[14] J. M. Ortega and W. C. Rheinboldt (1970). Iterative Solution of Nonlinear
Equations in Several Variables, Academic Press, New York.

[15] C. C. Paige and M. A. Saunders (1982). LSQR: An algorithm for sparse
linear equations and sparse least squares, ACM Transactions on
Mathematical Software, 8(1), 43–71.

[16] J. S. Pang and L. Qi (1993). Nonsmooth equations: Motivations and
algorithms, SIAM Journal on Optimization, 3, 443–465.

[17] W. C. Rheinboldt (1998). Methods for Solving Systems of Nonlinear
Equations, Second Edition, SIAM, Philadelphia.

[18] V. Ruggiero and F. Tinti (2005). A preconditioner for solving large-scale
variational inequality problems by a semismooth inexact approach,
Tech. Rep. 69, Dipartimento di Matematica, Università di Modena
e Reggio Emilia, Modena.

[19] L. Qi (1993). A convergence analysis of some algorithms for solving
nonsmooth equations, Mathematics of Operations Research, 18, 227–244.

[20] L. Qi and J. Sun (1993). A nonsmooth version of Newton's method,
Mathematical Programming, 58, 353–367.
