ELSEVIER
Applied Numerical Mathematics 18 (1995) 461-471
A cell-averaging Chebyshev spectral method for the controlled
Duffing oscillator
Gamal N. Elnagar a, M.A. Kazemi b,*
a Department of Mathematics, The University of South Carolina, Spartanburg, SC 29303, USA
b Department of Mathematics, University of North Carolina at Charlotte, Charlotte, NC 28223, USA
Abstract
In order to maintain spectral accuracy, the grids on which a physical problem is to be solved must also be
obtained by spectrally accurate techniques. This paper presents a spectral method of solving the controlled Duffing
oscillator. The method is based upon constructing the Mth-degree interpolation polynomials, using Chebyshev
nodes, to approximate the state and the control vectors. The differential and integral expressions which arise from
the system dynamics and the performance index are converted into an algebraic nonlinear programming problem.
The results of computer simulation studies compare favorably to optimal solutions obtained by closed-form analysis
and/or by other numerical schemes.
1. Introduction
In recent times there has been much interest in the use of spectral methods for the solution of
nonlinear physical and engineering problems. This trend is more evident in the area of
optimization in which highly accurate solutions are needed. However, serious analytical and
numerical difficulties, such as the accumulation of roundoff and truncation errors, need to be overcome
before optimal control approaches will find widespread practical implementation, especially for
nonlinear optimal control problems such as the controlled Duffing oscillator problem. The
controlled Duffing oscillator, which is known to describe many important oscillating phenomena in nonlinear engineering systems [10,12], has received considerable attention in the past
decade.
In the classical development, it is well known that the variational method of optimal control
theory, which typically consists of the calculus of variations and Pontryagin's methods [14], can
* Corresponding author.
0168-9274/95/$09.50 © 1995 Elsevier Science B.V. All rights reserved
SSDI 0168-9274(95)00075-5
be used to derive a set of necessary conditions that must be satisfied by an optimal control law
and its associated state-control equations. These necessary conditions of optimality lead to a
(generally nonlinear) two-point boundary-value problem (2PBVP) that must be solved to
determine the explicit expression for the optimal control. Except in some special cases, the
solution of this 2PBVP is difficult, and in some cases not practical to obtain. In general the
solution of these distinct problems requires different numerical methods which will increase the
computational time and effort.
Van Dooren and Vlassenbroeck [17] introduced a direct method for the controlled Duffing
oscillator. This method requires that the state and the control variables, the system dynamics,
the boundary conditions, and the performance index be expanded in Chebyshev series with
unknown coefficients. The resulting system of nonlinear equations has to be solved, for these
unknowns, by some kind of iterative method. The unknowns which evolve from the Chebyshev
series expansion of the performance index have to be calculated at each step of the iteration.
As a result, a rather large and complicated nonlinear system of equations has to be solved to
obtain accuracy of comparable order.
The purpose of this paper is to present an alternative computational method for solving the
controlled Duffing oscillator. This approach draws upon the power of well-developed nonlinear
programming techniques and computer codes (see, e.g., [2,13,15,16]) to determine optimal
solutions of nonlinear systems. Central to the idea is the construction of the Mth-degree
interpolating polynomial, using Chebyshev nodes, to approximate the state and the control
vectors. As a result, the derivatives $\dot x(t)$ and $\ddot x(t)$ of the state vector $x(t)$ are approximated by
the analytic derivatives of the corresponding interpolating polynomial, and the performance
index is discretized using a cell-averaging technique. In this way, it is possible to formulate the
problem as an algebraic nonlinear programming problem. An important benefit of recasting the
problem as a nonlinear programming problem is that it eliminates the requirement of:
(1) solving a 2PBVP;
(2) finding a closed-form expression for the necessary and sufficient conditions of optimality.
In Section 2 we discuss the controlled linear oscillator. Section 3 is devoted to the
development of our approach, the Chebyshev collocation method and a cell-averaging technique. In Section 4 the proposed method is applied to approximate the solution of the
controlled linear oscillator whose exact solution can be obtained by using Pontryagin's maximum principle method [14]. Section 5 is devoted to the study of the controlled Duffing
oscillator problem which is converted into a nonlinear mathematical programming problem
(NLMP). Some of the previous results from the controlled linear oscillator may be used as
starting values, needed to initiate the iterative procedure.
2. The controlled linear oscillator
Consider the optimal control problem of a linear oscillator as in [17]: Find the control vector
$U(\tau)$ which minimizes the performance index

$$J = \frac{1}{2}\int_{-T}^{0} U^2(\tau)\,\mathrm{d}\tau \tag{2.1}$$
subject to
$$\ddot X(\tau) + w^2 X(\tau) = U(\tau), \qquad -T \le \tau \le 0 \quad (T \text{ is known}) \tag{2.2}$$

with

$$X(-T) = x_0, \quad \dot X(-T) = \dot x_0, \quad X(0) = 0, \quad \dot X(0) = 0. \tag{2.3}$$

Eq. (2.2) is equivalent to the following first-order system of differential equations:

$$\dot X_1(\tau) = X_2(\tau), \qquad \dot X_2(\tau) = -w^2 X_1(\tau) + U(\tau). \tag{2.4}$$

Therefore (2.3) is equivalent to

$$X_1(-T) = x_0, \quad X_2(-T) = \dot x_0, \quad X_1(0) = 0, \quad X_2(0) = 0. \tag{2.5}$$
The problem is to find the control vector $U(\tau)$ which minimizes (2.1) subject to (2.4) and (2.5).
Application of Pontryagin's maximum principle method [14] to this optimal control problem yields the following exact solution:

$$X_1(\tau) = \frac{1}{2w^2}\big[A\,w\tau\sin w\tau + B(\sin w\tau - w\tau\cos w\tau)\big],$$
$$X_2(\tau) = \frac{1}{2w}\big[A(\sin w\tau + w\tau\cos w\tau) + B\,w\tau\sin w\tau\big],$$
$$U(\tau) = A\cos w\tau + B\sin w\tau,$$
$$J = \frac{1}{8w}\big[2wT(A^2+B^2) + (A^2-B^2)\sin 2wT - 4AB\sin^2 wT\big], \tag{2.6}$$

where

$$A = \frac{2w\big[x_0 w^2 T\sin wT - \dot x_0(wT\cos wT - \sin wT)\big]}{w^2T^2 - \sin^2 wT}, \qquad
B = \frac{2w^2\big[\dot x_0 T\sin wT + x_0(\sin wT + wT\cos wT)\big]}{w^2T^2 - \sin^2 wT}. \tag{2.7}$$
3. The numerical method
In the most common Chebyshev collocation method, the collocation points (Chebyshev
nodes) in the interval [ - 1 , 1] are chosen to be the extrema (see, e.g., [5,7])
$$t_j = \cos\frac{j\pi}{M} \qquad (j = 0, 1, 2, \ldots, M) \tag{3.1}$$

of the Mth-order Chebyshev polynomial $T_M(t) = \cos(M\cos^{-1} t)$, $-1 \le t \le 1$.
In order to construct the interpolants of the state vector $x(t)$ and the control vector $u(t)$ at
the point $t$ we define the Lagrange polynomials

$$\phi_k(t) = \frac{(-1)^{k+1}(1-t^2)\,T'_M(t)}{C_k M^2 (t - t_k)}
= \frac{2}{M C_k}\sum_{j=0}^{M}\frac{1}{C_j}\,T_j(t_k)\,T_j(t) \qquad (k = 0, 1, \ldots, M), \tag{3.2}$$

with $C_0 = C_M = 2$ and $C_k = 1$ for $1 \le k \le M-1$. It is readily verified that
$$\phi_k(t_j) = \delta_{kj} = \begin{cases} 1, & k = j,\\ 0, & k \ne j.\end{cases} \tag{3.3}$$
Associated with the $M+1$ Chebyshev nodes $t_j$ is a unique Mth-degree interpolating polynomial, which we denote by $v^M(t) := I_M(v)(t) = \sum_{l=0}^{M} v(t_l)\,\phi_l(t)$, so that $v^M(t_i) = v(t_i)$.
The Mth-degree interpolation polynomials to $x(t)$ and $u(t)$ are given by

$$x^M(t) = \sum_{l=0}^{M} a_l\,\phi_l(t), \tag{3.4}$$
$$u^M(t) = \sum_{l=0}^{M} b_l\,\phi_l(t). \tag{3.5}$$
The relationship between $\dot x^M(t)$ and $x^M(t)$ at the Chebyshev nodes $t_k$, $k = 0, 1, \ldots, M$, can be
obtained by differentiating (3.4). The result is a matrix multiplication given in [4,5]:

$$\dot x^M(t_k) = \sum_{l=0}^{M} D^{(1)}_{kl}\,a_l, \tag{3.6}$$
where $D^{(1)} = (D^{(1)}_{kl})$ is an $(M+1)\times(M+1)$ Chebyshev first-derivative matrix:

$$D^{(1)}_{kl} = \begin{cases}
\dfrac{C_k}{C_l}\,\dfrac{(-1)^{k+l}}{t_k - t_l}, & k \ne l,\\[2mm]
\dfrac{2M^2+1}{6}, & k = l = 0,\\[2mm]
-\dfrac{2M^2+1}{6}, & k = l = M,\\[2mm]
-\dfrac{t_k}{2(1-t_k^2)}, & 1 \le k = l \le M-1.
\end{cases} \tag{3.7}$$
Similarly, at the Chebyshev nodes $t_k$, $k = 0, 1, \ldots, M$, the relationship between $\ddot x^M(t)$ and
$x^M(t)$ is given by

$$\ddot x^M(t_k) = \sum_{l=0}^{M} D^{(2)}_{kl}\,a_l, \tag{3.8}$$
where $D^{(2)} = (D^{(2)}_{kl})$ is an $(M+1)\times(M+1)$ Chebyshev second-derivative matrix:
$$D^{(2)}_{kl} = \begin{cases}
\dfrac{M^4-1}{15}, & k = l,\ k = 0, M,\\[2mm]
-\dfrac{(M^2-1)(1-t_k^2) + 3}{3(1-t_k^2)^2}, & k = l,\ 1 \le k \le M-1,\\[2mm]
\dfrac{2}{3}\,\dfrac{(-1)^l}{C_l}\,\dfrac{(2M^2+1)(1-t_l) - 6}{(1-t_l)^2}, & k = 0,\ 1 \le l \le M,\\[2mm]
\dfrac{2}{3}\,\dfrac{(-1)^{M+l}}{C_l}\,\dfrac{(2M^2+1)(1+t_l) - 6}{(1+t_l)^2}, & k = M,\ 0 \le l \le M-1,\\[2mm]
\dfrac{(-1)^{k+l}}{C_l}\,\dfrac{t_k^2 + t_k t_l - 2}{(1-t_k^2)(t_k - t_l)^2}, & 1 \le k \le M-1,\ 0 \le l \le M,\ k \ne l.
\end{cases} \tag{3.9}$$
In general, Eqs. (3.4)-(3.9) can be used to discretize a nonlinear dynamical system of the
form
$$f\big(t, x(t), \dot x(t), \ddot x(t)\big) = u(t), \qquad t \in [-1, 1], \tag{3.10}$$
as follows: Substituting (3.4), (3.5), (3.6) and (3.8) in (3.10) and collocating at the Chebyshev
nodes $t_j$, we obtain the following system of nonlinear algebraic equations:

$$f\big(t_j, x^M(t_j), \dot x^M(t_j), \ddot x^M(t_j)\big) - u^M(t_j)
= f\big(t_j, a_j, a_j^{(1)}, a_j^{(2)}\big) - b_j = 0, \tag{3.11}$$

where $a_j^{(1)}$ and $a_j^{(2)}$ are the $j$th components of $D^{(1)}[a_0\ a_1 \cdots a_M]^{\mathrm T}$ and $D^{(2)}[a_0\ a_1 \cdots a_M]^{\mathrm T}$, respectively.
The cell average of a function $F(t)$, $t \in [-1, 1]$, is defined by (see [4])

$$\bar F_{j-1/2} = \frac{1}{t_{j-1} - t_j}\int_{t_j}^{t_{j-1}} F(s)\,\mathrm{d}s, \qquad j = 1, \ldots, M. \tag{3.12}$$
The cell averages $\bar F_{1/2}, \bar F_{3/2}, \ldots, \bar F_{M-1/2}$ are related to $F_0 = F(t_0), \ldots, F_M = F(t_M)$ through an
$M \times (M+1)$ matrix whose entries $d_{jl}$, $1 \le j \le M$, $0 \le l \le M$, are given by

$$d_{jl} = \psi_l(t_{j-1/2}), \tag{3.13}$$

where

$$t_{j-1/2} = \cos\frac{(j-1/2)\pi}{M}, \qquad
\psi_l(t) = \frac{1}{C_l M}\Big[1 + \sum_{k=2}^{M} \cdots \Big],$$
with

$$U_k(t) = \frac{1}{k+1}\,T'_{k+1}(t), \qquad k = 1, \ldots, M, \tag{3.14}$$

the Chebyshev polynomial of the second kind.
Regarding the rate of convergence, accuracy, and the error estimates at the Chebyshev nodes
of the above scheme, refer to [4,5,11].
4. Solving the controlled linear oscillator
In order to use the Chebyshev nodes we introduce the transformation $\tau = \tfrac12 T(t-1)$. The
optimal control problem (2.1)-(2.3) may then be restated as:

Minimize
$$J = \frac{T}{4}\int_{-1}^{1} u^2(t)\,\mathrm{d}t, \tag{4.1}$$

subject to
$$\ddot x(t) = \frac{T^2}{4}\big[-w^2 x(t) + u(t)\big], \tag{4.2}$$

with
$$x(-1) = x_0, \quad x(1) = 0, \quad \dot x(-1) = \dot x_0, \quad \dot x(1) = 0. \tag{4.3}$$
Substituting (3.4), (3.5) and (3.8) in (4.2), the system dynamics reduces to

$$G_k(\alpha, \beta, T) = \frac{4}{T^2}\,a_k^{(2)} + w^2 a_k - b_k = 0, \qquad k = 0, 1, \ldots, M, \tag{4.4}$$

where $\alpha := (a_0, a_1, \ldots, a_M)$ and $\beta := (b_0, b_1, \ldots, b_M)$.
Setting $F(t) = \tfrac14 T u^2(t)$ and using (3.12) and (3.13), we obtain the following discretization of (4.1):

$$J^M(\beta, T) = \frac{T}{4}\sum_{j=1}^{M}\int_{t_j}^{t_{j-1}} u^2(t)\,\mathrm{d}t
= \frac{T}{4}\sum_{j=1}^{M}(t_{j-1} - t_j)\sum_{l=0}^{M} d_{jl}\,b_l^2
= \frac{T}{4}\sum_{l=0}^{M} d_l^{*}\,b_l^2, \tag{4.5}$$

where $d_l^{*} = \sum_{j=1}^{M}(t_{j-1} - t_j)\,d_{jl}$.
Thus, the optimal control problem has been converted into the following quadratic programming problem:

Minimize $J^M(\beta, T)$, subject to $G_k(\alpha, \beta, T) = 0$, $k = 0, 1, \ldots, M$.

Note that $x(t_0) = x(1) = a_0 = 0$ and $x(t_M) = x(-1) = a_M = x_0$ are given.
The quadratic programming solution algorithm developed by Gill and Murray [9], considered
to be one of the most efficient algorithms for quadratic programming problems, was implemented to determine the optimal state and control parameters of the cell-averaging Chebyshev
approach.
Another approach to solving this linear quadratic problem is the Lagrange multiplier method used in [17]. This method consists of forming the Lagrangian

$$L = J^M(\beta, T) + \lambda_0\,a_0^{(1)} + \lambda_1\big(a_M^{(1)} - \dot x_0\big)$$
$$\quad = J^M(\beta, T)
+ \lambda_0\big[D^{(1)}_{00}, D^{(1)}_{01}, \ldots, D^{(1)}_{0M}\big]\,[a_0, a_1, \ldots, a_M]^{\mathrm T}
+ \lambda_1\Big(\big[D^{(1)}_{M0}, D^{(1)}_{M1}, \ldots, D^{(1)}_{MM}\big]\,[a_0, a_1, \ldots, a_M]^{\mathrm T} - \dot x_0\Big), \tag{4.6}$$
where $\lambda_0$, $\lambda_1$ are Lagrange multipliers. Note that the number of unknowns in the Lagrangian
(4.6) can be reduced by substituting

$$b_k = \frac{4}{T^2}\,a_k^{(2)} + w^2 a_k
= \frac{4}{T^2}\big[D^{(2)}_{k0}, D^{(2)}_{k1}, \ldots, D^{(2)}_{kM}\big]\,[0, a_1, \ldots, a_{M-1}, x_0]^{\mathrm T} + w^2 a_k,
\qquad k = 0, 1, \ldots, M, \tag{4.7}$$

in the discretized performance index (4.5).
The necessary conditions for a minimum are

$$\frac{\partial L}{\partial a_i} = 0, \qquad i = 1, \ldots, M-1, \tag{4.8}$$
$$\frac{\partial L}{\partial \lambda_n} = 0, \qquad n = 0, 1. \tag{4.9}$$

Eqs. (4.8) and (4.9) are linear algebraic equations which can be solved for the unknowns $a_i$, $\lambda_0$
and $\lambda_1$ by a linear algebraic solver, such as a Gaussian elimination routine.
5. Solving the controlled Duffing oscillator
The optimal control of the Duffing oscillator is described by the nonlinear differential
equation

$$\ddot x(\tau) + w^2 x(\tau) + \epsilon x^3(\tau) = u(\tau), \qquad -T \le \tau \le 0, \tag{5.1}$$
subject to the same boundary conditions as in the linear case with the same performance index.
Of course, the exact solution in this case is not known.
By applying the cell-averaging method introduced in Section 4, the following modifications
have to be taken into account. The approximated nonlinear system dynamics becomes

$$f\big(x^M(t), \ddot x^M(t)\big) = u^M(t), \qquad t \in [-1, 1],
\qquad \text{where } f(x, y) := \frac{4}{T^2}\,y + w^2 x + \epsilon x^3. \tag{5.2}$$
At the Chebyshev nodes we obtain the following nonlinear system of equations:

$$H_k(\beta, \alpha, T) = -b_k + \frac{4}{T^2}\,a_k^{(2)} + w^2 a_k + \epsilon a_k^3 = 0, \qquad k = 0, 1, \ldots, M. \tag{5.3}$$
In the form of a nonlinear programming problem, we have:

Minimize $J^M(\beta, T)$, subject to $H_k(\beta, \alpha, T) = 0$, $k = 0, 1, \ldots, M$.
The methods proposed in [13,15] have been implemented on a Sun SPARC II workstation
to solve this nonlinear programming problem. Results obtained from both methods have been
found to be quite close. For a more complete discussion of nonlinear programming and optimization, the reader is referred to [1,3,6,8].
It is well known that nonlinear programming algorithms only guarantee determination of a
local minimum. The identification of the global minimum usually requires the trial-and-error
introduction of different initial guesses. However, in the simulation studies reported below, the
initial guess (starting values) for the nonlinear programming problem has been taken from the
controlled linear oscillator, i.e., $\epsilon = 0$. Using this procedure, the optimal state-control parameters of the proposed approach have been determined accurately.
6. The numerical results
In Tables 1 and 2 we report, for the controlled linear oscillator, the cell-averaging
approximations $x^M(t_j)$, $u^M(t_j)$ of order $M = 5$ and 6, and the error estimates
$e^M(t_j) := |x^M(t_j) - x(t_j)|$ and $\tilde e^M(t_j) := |u^M(t_j) - u(t_j)|$, $j = 0, 1, \ldots, M$,
with the following choice of the numerical values of the parameters in the standard case:

$$w = 1, \quad T = 2, \quad x^M(-1) = 0.5, \quad \dot x^M(-1) = -0.5. \tag{6.1}$$
A comparison between the fifth-order cell-averaging approximation and the exact solution
shows that, for $j = 0, 1, \ldots, 5$, the errors satisfy $e^5(t_j) \le 10^{-6}$ and $\tilde e^5(t_j) \le 10^{-5}$, and the error at the
boundary conditions is zero. As $M$ increases, the errors $e^M(t_j)$ and $\tilde e^M(t_j)$ decrease significantly
and the results rapidly tend to the exact values.
Table 1
Approximations of x(t) for the controlled linear oscillator (the pointwise errors e^M(t_j) are zero at the boundary nodes and at most 10^-6, as discussed above)

x^5(t_j)        x^6(t_j)
0.00000000      0.00000000
0.00088397      0.00044147
0.03769183      0.01405975
0.19615789      0.09285472
0.43690546      0.26277819
0.50000000      0.46320729
                0.50000000
Table 2
Approximations of u(t) for the controlled linear oscillator

u^5(t_j)        e~^5(t_j)    u^6(t_j)        e~^6(t_j)
-0.00830143     < 10^-6      -0.00830133     < 10^-7
 0.14970098     < 10^-6       0.097188235    < 10^-7
 0.38191395     < 10^-5       0.284447723    < 10^-5
 0.49157494     < 10^-5       0.474991244    < 10^-5
 0.56952499     < 10^-6       0.549118501    < 10^-5
 0.58875212     < 10^-6       0.518114438    < 10^-7
                              0.588753947    < 10^-7
One of the important advantages of the use of the cell-averaging Chebyshev method is that
the rate of convergence of $u^M(t)$ to $u(t)$ and $x^M(t)$ to $x(t)$ is faster than any power of $1/M$;
consult [4,5]. Indeed, it is well known that spectral projections like $I_M$ (see Section 3)
provide highly accurate approximations of $x(t)$ and $u(t)$, provided that $x(t)$ and $u(t)$ are
sufficiently smooth. Therefore, by proceeding to higher-order cell-averaging Chebyshev
approximations, the results obtained rapidly tend to the exact results. The cell-averaging
Chebyshev approximation of order six is a very accurate approximation of the exact solution.
All approximations and error estimates have been computed with very high precision on a
Sun SPARC II workstation.
Table 3 reports, for various values of the parameters $w$, $T$, $x^M(-1)$ and $\dot x^M(-1)$, the
maximum error, with respect to the exact solutions, on the approximated coefficients and on
the performance index, obtained with the cell-averaging approximation of order $M = 8$ and with the
method of [17] of order $M = 10$. By increasing the value of some of the above parameters,
holding the other parameters fixed, we found that the accuracy is relatively lower than the
accuracy obtained using the standard case.

In Table 4, a comparison is made between the solutions of [17], of order $M = 4$, 7, and 10,
and the solutions obtained by the proposed method, of order $M = 5$ and 6, with the standard
case (6.1), for the controlled linear oscillator.

Table 5 presents the cell-averaging approximations for the controlled Duffing oscillator, with
Table 3
Maximum error versus parameter variations

Method of [17] (M = 10, m1 = 15, N = 30):
Parameters                          Max. error on a_j    Max. error on b_j    |J - J^M|
Standard case                       5.3 x 10^-11         1.2 x 10^-8          2.7 x 10^-15
w = 2                               7.3 x 10^-8          1.6 x 10^-5          9.7 x 10^-11
T = 3                               3.5 x 10^-9          3.4 x 10^-7          6.6 x 10^-14
x^M(-1) = 1, xdot^M(-1) = -1        1.1 x 10^-10         2.4 x 10^-8          1.1 x 10^-14

Cell-averaging (M = 8):
Parameters                          Max. error on a_j    Max. error on b_j    |J - J^M|
Standard case                       < 10^-13             < 10^-10             < 10^-9
w = 2                               < 10^-11             < 10^-9              < 10^-15
T = 3                               < 10^-10             < 10^-16             < 10^-10
x^M(-1) = 1, xdot^M(-1) = -1        < 10^-13             < 10^-14             < 10^-16
Table 4
A comparison between the cell-averaging method and the method of [17], for the controlled linear oscillator

Method            M     PREC      J^M
Method of [17]    4     10^-6     0.184917
                  7     10^-8     0.18485854
                  10    10^-10    0.1848585424
Cell-averaging    5     10^-8     0.18485871
                  6     10^-10    0.1848585424

Exact J: 0.1848585424
Table 5
A comparison between the cell-averaging method and the method of [17], for the controlled Duffing oscillator

Method            M     Exec. time (s)   PREC      SFK        MEBC   J^M
Method of [17]    4     0.79             10^-6     --         --     0.187531
                  7     1.9              10^-8     --         --     0.18744484
                  10    4.7              10^-10    --         --     0.1874448561
Cell-averaging    4     0.53             10^-6     < 10^-5    0      0.187493
                  6     0.98             10^-8     < 10^-7    0      0.18744487
                  8     2.3              10^-10    < 10^-10   0      0.1874448561
the standard case (6.1), using the proposed method of order $M = 4$, 6, and 8. Shown in the table are

$$\mathrm{SFK} = \sum_{k=0}^{M} |F_k|
= \sum_{k=0}^{M}\Big|b_k - \frac{4}{T^2}\,a_k^{(2)} - w^2 a_k - \epsilon a_k^3\Big|,$$

the maximum error at the boundary conditions (MEBC), the precision on $\alpha$, $\beta$ and $\lambda$ imposed
in order to stop the iterative procedures (PREC), and the execution time (Exec. time) required to attain such precision. A comparison is made with the method
of [17].
Table 6
The cell-averaging precision versus epsilon variations

epsilon = 0.5:
M     Exec. time (s)   PREC      SFK        MEBC   J^M
5     1.2              10^-6     < 10^-5    0      0.193561
7     1.9              10^-8     < 10^-7    0      0.19353034
9     2.7              10^-10    < 10^-10   0      0.1935303316

epsilon = 0.75:
M     Exec. time (s)   PREC      SFK        MEBC   J^M
6     1.24             10^-6     < 10^-5    0      0.197940
8     2.6              10^-8     < 10^-7    0      0.19791864
10    2.91             10^-10    < 10^-10   0      0.1979186274
The coefficient $\epsilon$ of the nonlinearity has been taken as $\epsilon = 0.15$. The effect of the parameter
$\epsilon$ characterizing the nonlinearity has been investigated, and the numerical findings are reported
in Table 6. As $\epsilon$ increases, a larger value of the order $M$ of the cell-averaging Chebyshev
approximations is needed in order to obtain the same precision.
7. Conclusions
In this paper, a cell-averaging Chebyshev method has been used to generate the optimal
solution of the controlled Duffing oscillator. With the availability of this methodology, it will
now become possible to investigate the spectral solution of nonlinear optimal control problems
which may describe many nonlinear physical and engineering problems.
Acknowledgment
The authors wish to express their sincere thanks to the referees for valuable suggestions
which improved the final manuscript.
References
[1] M. Avriel, Nonlinear Programming: Analysis and Methods (Prentice-Hall, Englewood Cliffs, NJ, 1976).
[2] A.D. Belegundu and J.S. Arora, A study of mathematical programming methods for structural optimization,
Part II, Internat. J. Numer. Methods Engrg. (1985) 1601-1623.
[3] G.S.G. Beveridge and R.S. Schechter, Optimization Theory and Practice (McGraw-Hill, New York, 1970).
[4] W. Cai, D. Gottlieb and A. Harten, Cell-averaging Chebyshev methods for hyperbolic problems, Comput. Math.
Appl. 24 (5/6) (1992) 37-49.
[5] C. Canuto, M.Y. Hussaini, A. Quarteroni and T.A. Zang, Spectral Methods in Fluid Dynamics (Springer-Verlag,
New York, 1988).
[6] A. Corana, M. Marchesi, C. Martini and S. Ridella, Minimizing multimodal functions of continuous variables
with the simulated annealing algorithm, ACM Trans. Math. Software 13 (3) (1987) 262-280.
[7] P.J. Davis and P. Rabinowitz, Methods of Numerical Integration (Academic Press, London, 2nd ed., 1984).
[8] R.L. Fox, Optimization Methods for Engineering Design (Addison-Wesley, Reading, MA, 1971).
[9] P.E. Gill and W. Murray, Linearly constrained problems including linear and quadratic programming, in:
Jacobs, ed., The State of the Art in Numerical Analysis (Academic Press, New York, 1977).
[10] J.C. Gille, P. Decaulne and M. Pelegrin, Systèmes Asservis Non Linéaires (Bordas, Paris, 1975).
[11] D. Gottlieb, M.Y. Hussaini and S.A. Orszag, Theory and Applications of Spectral Methods for Partial Differential
Equations (SIAM, Philadelphia, PA, 1984).
[12] A.H. Nayfeh and D.T. Mook, Nonlinear Oscillations (Wiley, New York, 1979).
[13] J.A. Nelder and R. Mead, A simplex method for function minimization, Computer J. 7 (1965) 308-313.
[14] L.S. Pontryagin, V. Boltyanskii, R. Gamkrelidze and E. Mischenko, The Mathematical Theory of Optimal
Processes (Interscience, New York, 1962).
[15] M.J.D. Powell, An efficient method for finding the minimum of a function of several variables without
calculating derivatives, Computer J. 7 (1964) 155-162.
[16] J.N. Siddall, Optimal Engineering Design: Principles and Applications (Marcel Dekker, New York, 1982).
[17] R. Van Dooren and J. Vlassenbroeck, Chebyshev series solution of the controlled Duffing oscillator, J.
Comput. Phys. 47 (1982) 321-329.