Regression and Interpolation
CS 740
Ajit Rajwade
Problem statements
• Consider a set of N points {(xi,yi)}, 1 ≤ i ≤ N.
• Suppose we know that these points actually
lie on a function of the form y = f(x;a) where
f(.) represents a function family and a
represents a set of parameters.
• For example: f(x) is a linear function of x, i.e.
of the form y = f(x) = mx+c. In this case, a =
(m,c).
• Example 2: f(x) is a quadratic function of x, i.e. of the
form y = f(x) = px^2 + qx + r. In this case, a = (p,q,r).
Polynomial Regression
• A polynomial of degree n is a function of the
form:
$$y = \sum_{i=0}^{n} a_i x^i = a_0 + a_1 x + a_2 x^2 + \ldots + a_n x^n$$
Polynomial regression
• Let us assume that x is the independent
variable, and y is the dependent variable.
• In the point set {(xi,yi)} containing N points, we
will assume that the x-coordinates of the points are
available accurately, whereas the y-coordinates are
affected by measurement error, called noise.
Least Squares Polynomial regression
• If we assume the polynomial is linear, we have
yi = mxi + c + ei, where ei is the noise in yi. We
want to estimate m and c.
• We will do so by minimizing the following
w.r.t. m and c:
$$\sum_{i=1}^{N} (y_i - m x_i - c)^2$$
• More generally, for a polynomial of degree n, stacking the N equations
yi = a_n x_i^n + ... + a_1 x_i + a_0 + e_i in matrix-vector form gives y = Xa + e:

$$\begin{pmatrix} y_1 \\ y_2 \\ \vdots \\ y_N \end{pmatrix} = \begin{pmatrix} x_1^n & \cdots & x_1^2 & x_1 & 1 \\ x_2^n & \cdots & x_2^2 & x_2 & 1 \\ \vdots & & \vdots & \vdots & \vdots \\ x_N^n & \cdots & x_N^2 & x_N & 1 \end{pmatrix} \begin{pmatrix} a_n \\ \vdots \\ a_2 \\ a_1 \\ a_0 \end{pmatrix} + \begin{pmatrix} e_1 \\ e_2 \\ \vdots \\ e_N \end{pmatrix}$$

$$y \approx Xa \;\Rightarrow\; a = (X^T X)^{-1} X^T y,$$

provided there are at least n non-coincident, non-collinear points; basically, $X^T X$ must be non-singular.
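To make this concrete, here is a minimal NumPy sketch (my own illustration, not from the slides) that builds X for a chosen degree n and solves the normal equations; in practice np.polyfit or np.linalg.lstsq is preferred for numerical stability.

```python
import numpy as np

def polyfit_normal_equations(x, y, n):
    """Fit a degree-n polynomial by least squares via the normal equations.
    Columns of X are [x^n, ..., x^2, x, 1], matching the slide's ordering."""
    X = np.vander(x, n + 1)                  # Vandermonde matrix, highest power first
    return np.linalg.solve(X.T @ X, X.T @ y)  # a = (X^T X)^{-1} X^T y

# Toy usage: noisy samples from y = 2x + 1 (made-up data)
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 50)
y = 2 * x + 1 + 0.05 * rng.standard_normal(x.size)
print(polyfit_normal_equations(x, y, 1))     # approximately [2, 1]
```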
Least Squares Polynomial regression
• Regardless of the order of the polynomial, we
are dealing with a linear form of regression –
because in each case, we are solving an
equation of the form y ≈ Xa.
Least Squares Polynomial Regression:
Geometric Interpretation
• The least squares solution seeks a set of polynomial
coefficients that minimize the sum of the squared
difference between the measured y coordinate of
each point and the y coordinate if the assumed
polynomial model was strictly obeyed, i.e.
$$\min_{\{a_j\}} \sum_{i=1}^{N} \left( y_i - \sum_{j=0}^{n} a_j x_i^j \right)^2 \;\equiv\; \min_{a} \|y - Xa\|_2^2$$
• In other words, we are trying to find a model that
minimizes vertical distances (distances along the Y axis).
Weighted least squares: Row weighting
$$\min_{\{a_j\}} \sum_{i=1}^{N} \left( y_i - \sum_{j=0}^{n} a_j x_i^j \right)^2 \;\equiv\; \min_{a} \|y - Xa\|_2^2$$

Different instances of y get different weights (the higher the weight, the more importance given to that instance!):

$$\min_{\{a_j\}} \sum_{i=1}^{N} w_i \left( y_i - \sum_{j=0}^{n} a_j x_i^j \right)^2 \;\equiv\; \min_{a} \|W(y - Xa)\|_2^2, \qquad W = \mathrm{diag}(w_1, w_2, \ldots, w_N)$$

$$Wy \approx WXa \;\Rightarrow\; a = \left( (WX)^T WX \right)^{-1} (WX)^T Wy$$
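A short NumPy sketch (my own, following the matrix formula above) of the row-weighted fit; x, y, w are assumed to be 1-D arrays of length N.

```python
import numpy as np

def weighted_polyfit(x, y, w, n):
    """Row-weighted least squares, following the slide's matrix formula:
    a = ((WX)^T WX)^{-1} (WX)^T W y  with  W = diag(w_1, ..., w_N)."""
    X = np.vander(x, n + 1)
    WX = w[:, None] * X                  # same as diag(w) @ X, but cheaper
    Wy = w * y
    return np.linalg.solve(WX.T @ WX, WX.T @ Wy)
```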
Regularization
• In some cases, the system of equations may be ill-
conditioned.
• In such cases, there may be more than one
solution.
• It is common practice to choose a “simple” or
“smooth” solution.
• This practice of choosing a “smooth” solution is
called regularization.
• Regularization is a common method used in
several problems in machine learning, computer
vision and statistics.
Regularization: Ridge regression
Penalize (or discourage) solutions in which the magnitude of the vector a is too high:

$$\min_{a} \|y - Xa\|_2^2 + \lambda \|a\|_2^2$$

The parameter λ is called the regularization parameter. The larger the value of λ, the greater the encouragement to find a smooth solution.

Taking the derivative w.r.t. a and setting it to zero, we get this regularized solution for a:

$$(X^T X + \lambda I)\,a = X^T y$$

Substituting the SVD $X = USV^T$:

$$(VS^2V^T + \lambda I)\,a = VSU^Ty$$
$$V(S^2 + \lambda I)V^Ta = VSU^Ty$$
$$(S^2 + \lambda I)V^Ta = SU^Ty$$
$$(V^Ta)_i = \frac{S_{ii}\,U_i^Ty}{S_{ii}^2 + \lambda}, \quad i = 1, \ldots, r \;\;\Rightarrow\;\; a = \sum_{i=1}^{r} \frac{S_{ii}\,(U_i^Ty)}{S_{ii}^2 + \lambda}\, V_i$$

where $U_i$ and $V_i$ denote the i-th columns of U and V, and r is the rank of X.
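A hedged NumPy sketch of the ridge solution computed exactly as in the SVD derivation above (the function name is my own); the factor S_ii / (S_ii^2 + λ) shows how directions with small singular values are damped.

```python
import numpy as np

def ridge_svd(X, y, lam):
    """Ridge solution a = V (S^2 + lam I)^{-1} S U^T y via the thin SVD of X."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    coeffs = s * (U.T @ y) / (s**2 + lam)   # S_ii (U_i^T y) / (S_ii^2 + lam)
    return Vt.T @ coeffs
```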
Regularization: Tikhonov regularization
$$\min_{a} \|y - Xa\|_2^2 + \lambda \|Ba\|_2^2$$

$$(X^T X + \lambda B^T B)\,a = X^T y$$

Penalize (or discourage) solutions in which the magnitude of the vector Ba is too high. The
parameter λ is called the regularization parameter; the larger the value of λ, the greater the
encouragement to find a smooth solution. The matrix B can be chosen in different ways;
very commonly, one penalizes large values of the gradient of the vector a, given as follows:

$$\nabla a = (a_1,\; a_2 - a_1,\; a_3 - a_2,\; \ldots,\; a_{n-1} - a_{n-2},\; a_n - a_{n-1})$$
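A small sketch (my own construction, not from the slides) of Tikhonov-regularized least squares; here B is taken as the plain first-difference matrix, a slight variant of the ∇a shown above, and the regularized normal equations (X^T X + λ B^T B) a = X^T y are solved directly.

```python
import numpy as np

def tikhonov_fit(X, y, lam):
    """Tikhonov-regularized least squares with B = first-difference operator,
    i.e. (Ba)_i = a_{i+1} - a_i, solved via (X^T X + lam B^T B) a = X^T y."""
    p = X.shape[1]
    B = np.diff(np.eye(p), axis=0)       # (p-1) x p first-difference matrix
    return np.linalg.solve(X.T @ X + lam * (B.T @ B), X.T @ y)
```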
Total Least Squares
• Consider the equation y ≈ Xa for polynomial
regression.
• We want to find a, in the case where both y
and X have errors (in the earlier case, only y
had errors).
• So we seek to find a new version of y (denoted
y*) close to y, and a new version of X
(denoted X*) close to X, such that y*=X*a.
• Thus we want (see next slide):
Find $(y^*, X^*)$ to minimize

$$\|y - y^*\|_2^2 + \|X - X^*\|_F^2 \quad \text{such that} \quad y^* = X^*a.$$

Let $Z = [X \;\; y]$ and $Z^* = [X^* \;\; y^*]$. The size of $X^*$ is $N \times (n+1)$; the size of $Z^*$ is $N \times (n+2)$.

Since $y^* = X^*a$, we have $Z^* \begin{pmatrix} a \\ -1 \end{pmatrix} = 0$. As the vector $\begin{pmatrix} a \\ -1 \end{pmatrix}$ is not all zeros, the column rank of $Z^*$ is less than $n+2$.

To minimize $\|Z - Z^*\|_F^2$, $Z^*$ is the best rank-$(n+1)$ approximation to $Z$, given by

$$Z^* = \sum_{i=1}^{n+1} \sigma_i u_i v_i^T \quad \text{from the SVD of } Z \text{ (Eckart-Young theorem).}$$
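A minimal NumPy sketch (my own illustration) of the TLS estimate: by the construction above, a can be read off the right singular vector of Z = [X y] corresponding to the smallest singular value, assuming its last entry is nonzero.

```python
import numpy as np

def total_least_squares(X, y):
    """Total least squares: a comes from the right singular vector of Z = [X y]
    associated with the smallest singular value (standard TLS construction;
    assumes its last entry is nonzero)."""
    Z = np.column_stack([X, y])
    _, _, Vt = np.linalg.svd(Z)
    v = Vt[-1]                   # right singular vector for the smallest sigma
    return -v[:-1] / v[-1]       # solves [X* y*] [a; -1] = 0
```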
Interpolation
• Given a set of N points lying on a curve, we
can perform linear interpolation by simply
joining consecutive points with a line.
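For reference, piecewise-linear interpolation of such a point set is a one-liner with NumPy's np.interp; the sample points below are made up for illustration.

```python
import numpy as np

xs = np.array([0.0, 1.0, 2.0, 3.0])   # known x-coordinates (must be sorted)
ys = np.array([1.0, 3.0, 2.0, 5.0])   # known y-coordinates
print(np.interp(1.5, xs, ys))         # value on the segment joining (1,3) and (2,2): 2.5
```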
Interpolation
• The problem with this approach is that the slopes
of adjacent lines change abruptly at each data-point – called a C1 discontinuity.
• This can be resolved by stitching piecewise
quadratic polynomials between consecutive point
pairs and requiring that
  – Adjacent polynomials actually pass through the
    point (interpolatory condition)
  – Adjacent polynomials have the same slope at
    that point (C1 continuity condition)
Interpolation
• We could go one step further and require that
  – Adjacent polynomials have the same second derivative
    at that point (C2 continuity condition)
• This requires that the polynomials be piecewise cubic
(degree 3) at least.
• This type of interpolation is called cubic-spline
interpolation and is very popular in graphics and image
processing.
• The data-points where the continuity conditions are
imposed are called knots or control points.
• A cubic spline is a piecewise polynomial that is twice
continuously differentiable (including at the knots).
Cubic spline interpolation
• A curve is parameterized as y = f(t) where ‘t’ is
the independent variable.
• Consider N points (ti,yi) where i ranges from 1
to N.
• We will have N-1 cubic polynomials, each of
the form yi(t) = ai + bi t + ci t^2 + di t^3, where i
ranges from 1 to N-1.
• Thus the overall function y(t) is equal to yi(t)
where t lies in between ti and ti+1.
Cubic spline interpolation
• The total number of unknowns here is 4(N-1)
(why?).
• To determine them, we need to specify 4(N-1)
equations.
• As the curve must pass through the N points,
we get N equations as follows:
$$y_1 = a_1 + b_1 t_1 + c_1 t_1^2 + d_1 t_1^3$$
$$y_2 = a_1 + b_1 t_2 + c_1 t_2^2 + d_1 t_2^3$$
$$\vdots$$
$$y_i = a_{i-1} + b_{i-1} t_i + c_{i-1} t_i^2 + d_{i-1} t_i^3$$
$$\vdots$$
$$y_N = a_{N-1} + b_{N-1} t_N + c_{N-1} t_N^2 + d_{N-1} t_N^3$$
Cubic spline interpolation
• The interpolatory conditions at the N-2
interior points yield N-2 equations of the
form:
$$y_2 = a_1 + b_1 t_2 + c_1 t_2^2 + d_1 t_2^3 = a_2 + b_2 t_2 + c_2 t_2^2 + d_2 t_2^3$$
$$\vdots$$
$$y_i = a_{i-1} + b_{i-1} t_i + c_{i-1} t_i^2 + d_{i-1} t_i^3 = a_i + b_i t_i + c_i t_i^2 + d_i t_i^3$$
$$\vdots$$
$$y_{N-1} = a_{N-2} + b_{N-2} t_{N-1} + c_{N-2} t_{N-1}^2 + d_{N-2} t_{N-1}^3 = a_{N-1} + b_{N-1} t_{N-1} + c_{N-1} t_{N-1}^2 + d_{N-1} t_{N-1}^3$$
Cubic spline interpolation
• The C1 continuity conditions at the N-2
interior points yield N-2 equations of the
form:
$$b_1 + 2c_1 t_2 + 3d_1 t_2^2 = b_2 + 2c_2 t_2 + 3d_2 t_2^2$$
$$\vdots$$
$$b_{i-1} + 2c_{i-1} t_i + 3d_{i-1} t_i^2 = b_i + 2c_i t_i + 3d_i t_i^2$$
$$\vdots$$
$$b_{N-2} + 2c_{N-2} t_{N-1} + 3d_{N-2} t_{N-1}^2 = b_{N-1} + 2c_{N-1} t_{N-1} + 3d_{N-1} t_{N-1}^2$$
Cubic spline interpolation
• The C2 continuity conditions at the N-2
interior points yield N-2 equations of the
form:
$$2c_1 + 6d_1 t_2 = 2c_2 + 6d_2 t_2$$
$$\vdots$$
$$2c_{i-1} + 6d_{i-1} t_i = 2c_i + 6d_i t_i$$
$$\vdots$$
$$2c_{N-2} + 6d_{N-2} t_{N-1} = 2c_{N-1} + 6d_{N-1} t_{N-1}$$
• So far this gives N + 3(N-2) = 4(N-1) - 2 equations. The two remaining
equations come from boundary conditions; a common choice (the "natural"
cubic spline) sets the second derivative to zero at the two endpoints:

$$2c_1 + 6d_1 t_1 = 0, \qquad 2c_{N-1} + 6d_{N-1} t_N = 0$$

• Stacking all of these conditions gives a sparse, banded 4(N-1) × 4(N-1)
linear system in the unknowns (ai, bi, ci, di), which can be solved directly.
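As an illustration (my own construction following the slides' equations, not code from the course), the sketch below assembles and solves the full system with NumPy using natural boundary conditions; in practice scipy.interpolate.CubicSpline(t, y, bc_type='natural') computes the same spline.

```python
import numpy as np

def natural_cubic_spline_coeffs(t, y):
    """Set up and solve the 4(N-1) x 4(N-1) system described on the slides.
    Returns an array of shape (N-1, 4) whose i-th row holds (a_i, b_i, c_i, d_i)."""
    t, y = np.asarray(t, float), np.asarray(y, float)
    N = len(t)
    M = 4 * (N - 1)
    A = np.zeros((M, M))
    rhs = np.zeros(M)

    def cols(i):  # columns holding (a_i, b_i, c_i, d_i) for piece i = 0..N-2
        return slice(4 * i, 4 * i + 4)

    row = 0
    # Pass-through + interpolatory conditions, regrouped equivalently:
    # each cubic piece passes through both of its endpoints (2(N-1) equations).
    for i in range(N - 1):
        for tt, yy in ((t[i], y[i]), (t[i + 1], y[i + 1])):
            A[row, cols(i)] = [1.0, tt, tt**2, tt**3]
            rhs[row] = yy
            row += 1
    # C1 and C2 continuity at the N-2 interior knots (2(N-2) equations).
    for i in range(1, N - 1):
        tt = t[i]
        A[row, cols(i - 1)] = [0.0, 1.0, 2 * tt, 3 * tt**2]
        A[row, cols(i)] = [0.0, -1.0, -2 * tt, -3 * tt**2]
        row += 1
        A[row, cols(i - 1)] = [0.0, 0.0, 2.0, 6 * tt]
        A[row, cols(i)] = [0.0, 0.0, -2.0, -6 * tt]
        row += 1
    # Natural boundary conditions: zero second derivative at the two endpoints.
    A[row, cols(0)] = [0.0, 0.0, 2.0, 6 * t[0]]
    A[row + 1, cols(N - 2)] = [0.0, 0.0, 2.0, 6 * t[-1]]

    return np.linalg.solve(A, rhs).reshape(N - 1, 4)
```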
References
• Scientific Computing, Michael Heath
• http://en.wikipedia.org/wiki/Total_least_squares