ECM6Lecture11cVietnam 2014
Lecture 11c
Nonlinear Regression
Brian G. Higgins
Department of Chemical Engineering & Materials Science
University of California, Davis
April 2014, Hanoi, Vietnam
ECM6Lecture11cVietnam_2014.nb
Y = a0 + a1 X1 + a2 X2 + … + aN XN
Here the unknown parameters ai appear linearly in the regression model, but this is not always the case.
For example, suppose we have vapor pressure versus temperature data that we want to use to determine the coefficients in an Antoine equation:
P(T) = Exp[a - b/(T + c)]
Note that we cannot transform this equation into a linear model. Nevertheless, we can still define a least
squares solution to determine the parameters. That is
E2(P) = Σ_{k=1}^{M} (P(T_k) - P_k)^2
and then look for the parameters such that E2(P) is minimized. Again the local minimum with respect to
the parameters a, b, c is given by
∂E2/∂a = 0,  ∂E2/∂b = 0,  ∂E2/∂c = 0
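As a cross-check on the calculus, the objective E2 and its analytic partial derivatives can be sketched in Python. This is an illustrative re-implementation, not part of the original notebook; the data list is the one used in the example below.

```python
import math

# Vapor pressure data (T in K, P) from the lecture example
data = [(280, 8.615), (290, 13.866), (300, 21.696), (310, 32.588),
        (320, 47.845), (330, 68.14697), (340, 94.879), (350, 129.325),
        (360, 172.806), (370, 226.774), (380, 292.951), (390, 372.788),
        (400, 468.248)]

def antoine(T, a, b, c):
    """Antoine model P(T) = exp(a - b/(T + c))."""
    return math.exp(a - b / (T + c))

def E2(a, b, c):
    """Sum of squared errors over the data set."""
    return sum((antoine(T, a, b, c) - P) ** 2 for T, P in data)

def grad_E2(a, b, c):
    """Analytic partials dE2/da, dE2/db, dE2/dc from the chain rule."""
    ga = gb = gc = 0.0
    for T, P in data:
        f = antoine(T, a, b, c)
        r = f - P
        ga += 2 * r * f                        # df/da = f
        gb += 2 * r * (-f / (T + c))           # df/db = -f/(T+c)
        gc += 2 * r * (b * f / (T + c) ** 2)   # df/dc = b f/(T+c)^2
    return ga, gb, gc
```

A central finite-difference quotient, e.g. (E2(a+h, b, c) - E2(a-h, b, c))/(2h), can be compared against grad_E2 to confirm the derivative formulas.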
These equations are not linear, however. Nonetheless, we can still solve them using a Newton iteration
method. Let us consider the following example:
We would like to fit this data to the Antoine model given by (14). As before, we define the data and the
necessary functions to extract the temperature and vapor pressure values from the data list. We
also define our Antoine model and the expression for the square of the errors E2.
Mathematica Code
In[1]:=
data = {{280, 8.615}, {290, 13.866}, {300, 21.696}, {310, 32.588}, {320, 47.845},
   {330, 68.14697}, {340, 94.879}, {350, 129.325}, {360, 172.806},
   {370, 226.774}, {380, 292.951}, {390, 372.788}, {400, 468.248}};

Antoine[T_] := Exp[a - b/(T + c)]
Here is what the square of the errors E2 looks like using the Antoine Model:
In[9]:=
E2[Antoine, M]
Out[9]= (the symbolic sum of squared errors; lengthy output omitted)
ECM6Lecture11cVietnam_2014.nb
In[11]:=
eqns = {D[E2[Antoine, M], a] == 0, D[E2[Antoine, M], b] == 0, D[E2[Antoine, M], c] == 0}
Out[11]= (three nonlinear equations in a, b, c, involving terms such as b/(280 + c)^2 through b/(400 + c)^2; lengthy output omitted)
This is a set of nonlinear transcendental equations. We can solve them using Newton's
method (i.e., using FindRoot), but we need to supply initial guesses for the parameters.
In[15]:=
sol = FindRoot[eqns, {{a, 16}, {b, 3200}, {c, -37}}, MaxIterations -> Infinity]
Out[15]= {a -> 14.0599, b -> 2827.25, c -> -42.6153}
In this case we have increased the number of iterations allowed from the default value of 100 to Infinity.
Note that the residual E2 is quite small (≈ 0.02) but not vanishingly small.
In[19]:=
E2[Antoine, M] /. sol
Out[19]= 0.0192802
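FindRoot applies Newton's method to the three gradient equations. The same fit can be reproduced outside Mathematica; the sketch below is an illustrative Python re-implementation (it uses a damped Gauss-Newton iteration with step-halving, which is not necessarily what FindRoot does internally) starting from the same guesses {16, 3200, -37}.

```python
import math

# Vapor pressure data (T, P) from the lecture example
DATA = [(280, 8.615), (290, 13.866), (300, 21.696), (310, 32.588),
        (320, 47.845), (330, 68.14697), (340, 94.879), (350, 129.325),
        (360, 172.806), (370, 226.774), (380, 292.951), (390, 372.788),
        (400, 468.248)]

def sse(p):
    """E2: sum of squared residuals of the Antoine model; inf on overflow."""
    a, b, c = p
    try:
        return sum((math.exp(a - b / (T + c)) - P) ** 2 for T, P in DATA)
    except OverflowError:
        return float("inf")

def solve3(A, rhs):
    """Solve a 3x3 linear system by Gaussian elimination with partial pivoting."""
    M = [row[:] + [r] for row, r in zip(A, rhs)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda i: abs(M[i][col]))
        M[col], M[piv] = M[piv], M[col]
        for i in range(col + 1, 3):
            t = M[i][col] / M[col][col]
            for j in range(col, 4):
                M[i][j] -= t * M[col][j]
    x = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):
        x[i] = (M[i][3] - sum(M[i][j] * x[j] for j in range(i + 1, 3))) / M[i][i]
    return x

def fit_antoine(p, iters=200):
    """Damped Gauss-Newton: solve (J^T J) d = -J^T r, halve the step until E2 drops."""
    steps = [list(p)]                      # record the parameter history
    for _ in range(iters):
        a, b, c = p
        JTJ = [[0.0] * 3 for _ in range(3)]
        JTr = [0.0] * 3
        for T, P in DATA:
            f = math.exp(a - b / (T + c))
            J = [f, -f / (T + c), b * f / (T + c) ** 2]  # df/da, df/db, df/dc
            r = f - P
            for i in range(3):
                JTr[i] += J[i] * r
                for j in range(3):
                    JTJ[i][j] += J[i] * J[j]
        d = solve3(JTJ, [-g for g in JTr])
        lam = 1.0
        while sse([p[i] + lam * d[i] for i in range(3)]) > sse(p) and lam > 1e-12:
            lam /= 2                       # backtrack until the error decreases
        p = [p[i] + lam * d[i] for i in range(3)]
        steps.append(list(p))
    return p, steps
```

From these starting values the iteration should drive E2 down toward the ≈ 0.019 residual reported above; it also records the parameter history at each step, analogous to the steps variable discussed below.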
Here is a plot of the data and the model with the fitted parameters:
In[16]:= (plot of the data points and the fitted Antoine model, P(T) versus T for 280 ≤ T ≤ 400; graphic omitted)
In[20]:= (an alternative fit of the Antoine model Exp[a - b/(T + c)]; output garbled in the source and omitted)
Note that these parameters are the same as those our code produced, but the number of iterations was smaller.
In[21]:= (plot of the data and the fitted model, P(T) versus T for 280 ≤ T ≤ 400; graphic omitted)
The values of the parameters {a,b,c} during the iteration are stored in the variable steps; the final value
of the fit parameters is given by the variable fit. The data in steps is of the form
{{a1, b1, c1}, {a2, b2, c2}, …, {an, bn, cn}}
We can plot this data in 3D by using ListPlot3D
In[25]:=
plt1 = ListPlot3D[steps]
Out[25]= (3D plot of the iteration history; graphic omitted)
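In Python, a history list of this form would typically be transposed into one series per parameter before plotting. A hypothetical sketch (the iterate values below are made-up placeholders, not the notebook's actual steps data):

```python
# Hypothetical iteration history of the form {{a1,b1,c1}, ..., {an,bn,cn}}
steps = [(16.0, 3200.0, -37.0),
         (15.2, 3010.0, -39.8),
         (14.1, 2830.0, -42.6)]

# zip(*...) transposes the list: one series per parameter, ready for plotting
a_hist, b_hist, c_hist = zip(*steps)
```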
The following shows the location of the optimum value for the parameters in the space of all values
obtained during the iteration.
ECM6Lecture11cVietnam_2014.nb
In[26]:= (plot highlighting the optimum within the iterates in parameter space; graphic omitted)
We can also use the previous definition for the least squares function and evaluate it at the iteration
points during the computation
In[27]:=
res = E2[Antoine, M]
Out[27]= (symbolic expression omitted)
To do so we create a set of rules from the data stored in the variable steps:
In[28]:=
paramRules =
steps . 8a1_Real, b1_Real, c1_Real< 8Rule@a, a1D, Rule@b, b1D, Rule@c, c1D<;
Then we plot the least squares error as a function of the iteration number:
In[39]:= (plot of E2 versus iteration number n for 0 ≤ n ≤ 100; E2 falls from roughly 400 000 toward zero; graphic omitted)
After about 90 iterations the value of the least squares error settles down to a small number. The total
number of iterations is
In[35]:= (total number of iterations; numerical output not recovered)
Here is a blow-up of the region near the minimum value of the least squares error:
In[38]:= (zoomed plot of E2 versus iteration number n for 96 ≤ n ≤ 116; graphic omitted)
References
These notes and the examples were adapted from the following texts:
M. J. Maron, Numerical Analysis: A Practical Approach, 2nd Edition, Macmillan Publishing Company, 1987
A. J. Pettofrezzo, Introductory Numerical Analysis, Dover Publications, 1984