ECE 515 Class Notes

Lecture Notes on
CONTROL SYSTEM THEORY
AND DESIGN
Tamer Basar, Sean P. Meyn, and William R. Perkins
University of Illinois at Urbana-Champaign
DRAFT
Not to be quoted or referenced
without the consent of the authors
August 28, 2009
Preface
This is a collection of the lecture notes of the three authors for a first-year graduate course on control system theory and design (ECE 515, formerly ECE 415) at the ECE Department of the University of Illinois at Urbana-Champaign. This is a fundamental course on the modern theory of dynamical systems and their control, and builds on a first-level course in control that emphasizes frequency-domain methods (such as the course ECE 486, formerly ECE 386, at UIUC). The emphasis in this graduate course is on state space techniques, and it encompasses modeling, analysis (of structural properties of systems, such as stability, controllability, and observability), synthesis (of observers/compensators and controllers) subject to design specifications, and optimization. Accordingly, this set of lecture notes is organized in four parts, with each part dealing with one of the issues identified above. Concentration is on linear systems, with nonlinear systems covered only in some specific contexts, such as stability and dynamic optimization. Both continuous-time and discrete-time systems are covered, with the former, however, in much greater depth than the latter.
The notions of control and feedback, in precisely the sense they will be treated and interpreted in this text, pervade our everyday operations, oftentimes without us being aware of it. Leaving aside the facts that the human body is a large (and very complex) feedback mechanism, and an economy without its built-in (and periodically fine-tuned) feedback loops would instantaneously turn to chaos, the most common examples of control systems in the average person's everyday life are the thermostat in one's living room, or the cruise control in one's automobile. Control systems of a similar nature can be found in almost any of today's industries. The most obvious examples are the aerospace industry, where control is required in fly-by-wire positioning of ailerons, or in the optimal choice of trajectories in space flight. The chemical industry requires good control designs to ensure safe and accurate production of specific products, and the paper industry requires accurate control to produce high quality paper. Even in applications where control has not found any use, this may be a result of inertia within industry rather than lack of need! For instance, in the manufacture of semiconductors currently many of the fabrication steps are done without the use of feedback, and only now are engineers seeing the difficulties that such open-loop processing causes.
There are only a few fundamental ideas that are required to take this course, other than a good background in linear algebra and differential equations. All of these ideas revolve around the concept of feedback, which is simply the act of using measurements as they become available to control the system of interest. The main objective of this course is to teach the student some fundamental principles within a solid conceptual framework, that will enable her/him to design feedback loops compatible with the information available on the states of the system to be controlled, and by taking into account considerations such as stability, performance, energy conservation, and even robustness. A second objective is to familiarize her/him with the available modern computational, simulation, and general software tools that facilitate the design of effective feedback loops.
TB, SPM, WRP
Urbana, January 2007
Contents

Preface

I  System Modeling and Analysis

1  State Space Models
   1.1  An electromechanical system
   1.2  Linearization about an equilibrium state
   1.3  Linearization about a trajectory
   1.4  A two link inverted pendulum
   1.5  An electrical circuit
   1.6  Transfer Functions & State Space Models
   1.7  Exercises

2  Vector Spaces
   2.1  Fields
   2.2  Vector Space
   2.3  Bases
   2.4  Change of basis
   2.5  Linear Operators
   2.6  Linear operators and matrices
   2.7  Eigenvalues and Eigenvectors
   2.8  Inner Products
   2.9  Orthogonal vectors and reciprocal basis vectors
   2.10 Adjoint transformations
   2.11 Exercises

3  Solutions of State Equations
   3.1  LTI state space models
   3.2  Other descriptions of the state transition matrix
   3.3  Change of state variables
   3.4  Numerical examples
   3.5  Cayley-Hamilton Theorem
   3.6  Linear Time-Varying Systems
   3.7  Fundamental matrices
   3.8  Peano-Baker Series
   3.9  Exercises

II  System Structural Properties

4  Stability
   4.1  Stability in the sense of Lyapunov
   4.2  Lyapunov's direct method
   4.3  Region of asymptotic stability
   4.4  Stability of linear state space models
   4.5  Stability subspaces
   4.6  Linearization of nonlinear models and stability
   4.7  Input-output stability
   4.8  Exercises

5  Controllability
   5.1  A preview: The LTI discrete-time case
   5.2  The general LTV continuous-time case
   5.3  Controllability using the controllability matrix
   5.4  Other tests for controllability
   5.5  Exercises

6  Observability, Duality and Minimality
   6.1  The observability matrix
   6.2  LTV models and the observability grammian
   6.3  Duality
   6.4  Kalman canonical forms
   6.5  State space models and their transfer functions
   6.6  Realization of MIMO transfer functions
   6.7  Exercises

III  Feedback

7  Pole Placement
   7.1  State feedback
   7.2  Observers
   7.3  Observer feedback
   7.4  Reduced-order (Luenberger) observers
   7.5  Exercises

8  Tracking and Disturbance Rejection
   8.1  Internal model principle
   8.2  Transfer function approach
   8.3  Exercises

9  Control Design Goals
   9.1  Performance
   9.2  Measurements
   9.3  Robustness and sensitivity
   9.4  Zeros and sensitivity
   9.5  Exercises

IV  Optimal Control

10  Dynamic Programming and the HJB Equation
   10.1  Problem formulation
   10.2  Hamilton-Jacobi-Bellman equations
   10.3  A solution to the LQR problem
   10.4  The Hamiltonian matrix
   10.5  Infinite horizon regulator
   10.6  Return difference equation
   10.7  Exercises

11  An Introduction to the Minimum Principle
   11.1  Minimum Principle and the HJB equations
   11.2  Minimum Principle and Lagrange multipliers
   11.3  The penalty approach
   11.4  Application to LQR
   11.5  Nonlinear examples
   11.6  Exercises
List of Figures

1.1  Magnetically Suspended Ball
1.2  Trajectory of a 2D nonlinear state space model
1.3  The Pendubot
1.4  Coordinate description of the pendubot
1.5  Continuum of equilibrium positions for the Pendubot
1.6  A simple RLC circuit
1.7  Simulation diagram for the simple RLC circuit
1.8  Controllable Canonical Form
1.9  Observable Canonical Form
1.10 Partial fraction expansion of a transfer function

4.1  Typical solutions of the linear state space model
4.2  Is this model stable, or unstable?
4.3  If x0 ∈ Bδ(xe), then x(t) = φ(t; x0) ∈ Bε(xe) for all t ≥ 0.
4.4  Frictionless pendulum
4.5  Stable and unstable equilibria.
4.6  V(x(t)) represents the height of the lifted trajectory
4.7  V(x(t)) is decreasing with time if ∇V(x) · f(x) ≤ 0
4.8  The vectors ∇V(x) and f(x) meet at ninety degrees
4.9  The sub-level set of V
4.10 The graph of f remains in a tube of radius M

5.1  The controlled model can be steered to any state
5.2  The state can never leave this line, regardless of the control
5.3  The modes of the model are decoupled

6.1  If the input does not affect the internal temperature...
6.2  If the input to this circuit is zero...
6.3  The Kalman Controllability Canonical Form
6.4  The Kalman Observability Canonical Form
6.5  A block diagram realization of P.

7.1  Observer design
7.2  The separation principle

8.1  Observer/state feedback

9.1  An error feedback configuration
9.2  A Nyquist plot for the Pendubot
9.3  Sensitivity function for the Pendubot

10.1  If a better control existed...
10.2  Solutions to the ARE for a scalar model
10.3  The loop transfer function
10.4  The symmetric root locus
10.5  The symmetric root locus
10.6  The loop transfer function
10.7  The symmetric root locus for the Pendubot
10.8  Nyquist plot for the Pendubot

11.1  An optimization problem on R^2 with a single constraint
11.2  A perturbation of the function z ∈ D[t0, t1].
11.3  Optimal state trajectory for the bilinear model
11.4  Optimal trajectories for the minimum time problem
Part I

System Modeling and Analysis
Chapter 1
State Space Models
After a first course in control system design one learns that intuition is a starting point in control design, but that intuition may fail for complex systems such as those with multiple inputs and outputs, or systems with nonlinearities. In order to augment our intuition to deal with problems as complex as high speed flight control, or flow control for high speed communication networks, one must start with a useful mathematical model of the system to be controlled. It is not necessary to "take the system apart", that is, to model every screw, valve, and axle. In fact, to use the methods to be developed in this book it is frequently more useful to find a simple model which gives a reasonably accurate description of system behavior. It may be difficult to verify a complex model, in which case it will be difficult to trust that the model accurately describes the system to be controlled. More crucial in the context of this course is that the control design methodology developed in this text is model based. Consequently, a complex model of the system will result in a complex control solution, which is usually highly undesirable in practice.

Although this text treats nonlinear models in some detail, the most far reaching approximation made is linearity. It is likely that no physical system is truly linear. The reasons that control theory is effective in practice even when this ubiquitous assumption fails are that (i) physical systems can frequently be approximated by linear models; and (ii) control systems are designed to be robust with respect to inaccuracies in the system model. It is not simply luck that physical systems can be modeled with reasonable accuracy using linear models. Newton's laws, Kirchhoff's voltage and current laws, and many other laws of physics give rise to linear models. Moreover, in this chapter we show that a generalized Taylor series expansion allows the approximation of even a grossly nonlinear model by a linear one.
The linear models we consider are primarily of the state space form

    ẋ = Ax + Bu
    y = Cx + Du,      (1.1)

where x is a vector signal in R^n, y and u are the output and input of the system evolving in R^p and R^m, respectively, and A, B, C, D are matrices of appropriate dimensions. If these matrices do not depend on the time variable t, the corresponding linear system will be referred to as linear time invariant (LTI), whereas if any one of these matrices has time-varying entries, then the underlying linear system will be called linear time varying (LTV). If both the input and the output are scalar, then we refer to the system as single input-single output (SISO); if either a control or output are of dimension higher than one, then the system is multi input-multi output (MIMO).
Our first example now is a simple nonlinear system which we approximate by a linear state space model of the form (1.1) through linearization.
Figure 1.1: Magnetically Suspended Ball. The input u(t) is the magnet current, and the output y(t) is the distance from the reference height r.
1.1 An electromechanical system
The magnetically suspended metallic ball illustrated in Figure 1.1 is a simple example which illustrates some of the important modeling issues addressed in this chapter. The input u is the current applied to the electro-magnet, and the output y is the distance between the center of the ball and some reference height. Since positive and negative inputs are indistinguishable at the output of this system, it follows that this cannot be a linear system. The upward force due to the current input is approximately proportional to u²/y², and hence from Newton's law for translational motion we have

    ma = m ÿ = mg − c u²/y²,

where g is the gravitational constant and c is some constant depending on the physical properties of the magnet and ball.
This input-output model can be converted to (nonlinear) state space form using x1 = y and x2 = ẏ:

    ẋ1 = x2,    ẋ2 = g − (c/m) u²/x1²,

where the latter equation follows from the formula ẋ2 = ÿ. This pair of equations forms a two-dimensional state space model

    ẋ1 = x2 = f1(x1, x2, u)                      (1.2)
    ẋ2 = g − (c/m) u²/x1² = f2(x1, x2, u)        (1.3)

It is nonlinear, since f2 is a nonlinear function of (x1, x2)ᵀ. Letting x = (x1, x2)ᵀ and f = (f1, f2)ᵀ, the state equations may be written succinctly as

    ẋ = f(x, u).
The motion of a typical solution to a nonlinear state space model in R² is illustrated in Figure 1.2.
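For readers who wish to experiment, the nonlinear model is easy to simulate directly. The following Matlab sketch integrates ẋ = f(x, u) with ode45; the parameter values m, c, g and the constant input level are arbitrary choices made only to complete the example, not values from the text.

    % Simulate the nonlinear ball model with a constant current input.
    m = 0.1; c = 1e-4; g = 9.8;             % illustrative parameters
    u = @(t) 0.35;                          % constant current (assumed)
    f = @(t,x) [x(2); g - (c/m)*u(t)^2/x(1)^2];
    [t, x] = ode45(f, [0 1], [0.05; 0]);    % initial height 0.05, at rest
    plot(t, x(:,1)), xlabel('t'), ylabel('x_1(t) = y(t)')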
1.2 Linearization about an equilibrium state
Suppose that a fixed value u_e, say positive, is applied, and that the state x_e = (x_e1, x_e2)ᵀ has the property that

    f(x_e1, x_e2, u_e) = ( f1(x_e1, x_e2, u_e), f2(x_e1, x_e2, u_e) )ᵀ = 0.

From the definition of f we must have x_e2 = 0, and

    x_e1 = sqrt(c/(mg)) u_e,

which is unique when restricted to be positive. The state x_e is called an equilibrium state since the velocity vector f(x, u) vanishes when x = x_e and u = u_e.

Figure 1.2: Trajectory of a nonlinear state space model in two dimensions: ẋ = f(x, u)
If the signals x1(t), x2(t) and u(t) remain close to the fixed point (x_e1, x_e2, u_e), then we may write

    x1(t) = x_e1 + δx1(t)
    x2(t) = x_e2 + δx2(t)
    u(t)  = u_e  + δu(t),

where δx1(t), δx2(t), and δu(t) are small-amplitude signals. From the state equations (1.2) and (1.3) we then have

    δẋ1 = x_e2 + δx2(t) = δx2(t)
    δẋ2 = f2(x_e1 + δx1, x_e2 + δx2, u_e + δu)
Applying a Taylor series expansion to the right hand side (RHS) of the second equation above gives

    δẋ2 = f2(x_e1, x_e2, u_e)
          + (∂f2/∂x1)|_(x_e1, x_e2, u_e) δx1
          + (∂f2/∂x2)|_(x_e1, x_e2, u_e) δx2
          + (∂f2/∂u)|_(x_e1, x_e2, u_e) δu + H.O.T.

After computing partial derivatives we obtain the formulae

    δẋ1 = δx2
    δẋ2 = 2 (c/m) (u_e²/x_e1³) δx1 − 2 (c/m) (u_e/x_e1²) δu + H.O.T.
Letting δx denote the bivariate signal δx(t) = (δx1(t), δx2(t))ᵀ we may write the linearized system in state space form:

    δẋ = [ 0  1 ; α  0 ] δx + [ 0 ; β ] δu
    δy = δx1,

where

    α = 2 (c/m) u_e²/x_e1³,    β = −2 (c/m) u_e/x_e1².

This linearized system is only an approximation, but one that is reasonable and useful in control design as long as the state δx and the control δu remain small. For example, we will see in Chapter 4 that local stability of the nonlinear system is guaranteed if the simpler linear system is stable.
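To make the linearization concrete, the following sketch forms α and β for an assumed operating current and checks the eigenvalues of the linearized A matrix; since one eigenvalue is +sqrt(α) > 0, the equilibrium is unstable, as intuition about a ball hanging under a magnet suggests. The numerical values are illustrative assumptions.

    % Linearized ball model about (x_e1, 0); parameters are illustrative.
    m = 0.1; c = 1e-4; g = 9.8;
    u_e  = 0.35;                       % assumed equilibrium current
    x_e1 = sqrt(c/(m*g))*u_e;          % equilibrium height
    alpha =  2*(c/m)*u_e^2/x_e1^3;
    beta  = -2*(c/m)*u_e /x_e1^2;
    A = [0 1; alpha 0];  B = [0; beta];
    eig(A)                             % +/- sqrt(alpha): one unstable pole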
1.3 Linearization about a trajectory
Consider now a general nonlinear model

    ẋ(t) = f(x(t), u(t), t)

where f is a continuously differentiable (C¹) function of its arguments. Suppose that for an initial condition x0 given at time t0 (i.e., x(t0) = x0), and a given input u_n(t), the solution of the nonlinear state equation above exists and is denoted by x_n(t), which we will refer to as the nominal trajectory corresponding to the nominal input u_n(t), for the given initial state. For example, in the control of a satellite orbiting the earth, the nominal trajectory x_n(t) might represent a desired orbit (see Exercise 7 below). We assume that the input and state approximately follow these nominal trajectories, and we again write

    x(t) = x_n(t) + δx(t)
    u(t) = u_n(t) + δu(t)

From a Taylor series expansion we then have

    ẋ = ẋ_n + δẋ = f(x_n, u_n, t) + [ ∂f/∂x |_(x_n, u_n, t) ] δx + [ ∂f/∂u |_(x_n, u_n, t) ] δu + H.O.T.,

where the first Jacobian matrix is denoted A(t) and the second B(t). Since we must have ẋ_n = f(x_n, u_n, t), this gives the state space description

    δẋ = A(t) δx + B(t) δu,

where the higher order terms have been ignored.
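When analytic Jacobians are inconvenient, A(t) and B(t) can be approximated by finite differences along the nominal trajectory. The sketch below uses central differences; the dynamics f and the nominal point are stand-ins chosen only to make the snippet self-contained.

    % Finite-difference Jacobians about a nominal pair (x_n, u_n) at time t.
    f  = @(x, u, t) [x(2); -sin(x(1)) + u];   % stand-in dynamics
    xn = [pi/4; 0]; un = sin(pi/4); t = 0;    % assumed nominal point
    n = numel(xn); nu = numel(un); h = 1e-6;
    A = zeros(n, n); B = zeros(n, nu);
    for i = 1:n
        e = zeros(n, 1); e(i) = h;
        A(:, i) = (f(xn + e, un, t) - f(xn - e, un, t)) / (2*h);
    end
    for j = 1:nu
        e = zeros(nu, 1); e(j) = h;
        B(:, j) = (f(xn, un + e, t) - f(xn, un - e, t)) / (2*h);
    end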
1.4 A two link inverted pendulum
Below is a photograph of the pendubot found at the robotics laboratory at the University of Illinois, and a sketch indicating its component parts. The pendubot consists of two rigid aluminum links: link 1 is directly coupled to the shaft of a DC motor mounted to the end of a table. Link 1 also includes the bearing housing for the second joint. Two optical encoders provide position measurements: one is attached at the elbow joint and the other is attached to the motor. Note that no motor is directly connected to link 2; this makes vertical control of the system, as shown in the photograph, extremely difficult!
Figure 1.3: The Pendubot. The sketch labels the motor and encoder 1 at the base joint, encoder 2 at the elbow joint, and links 1 and 2.
Since the pendubot is a two link robot with a single actuator, its dynamic equations can be derived using the so-called Euler-Lagrange equations found in numerous robotics textbooks [11]. Referring to the figure, the equations of motion are

    d11 q̈1 + d12 q̈2 + h1 + φ1 = τ      (1.4)
    d21 q̈1 + d22 q̈2 + h2 + φ2 = 0      (1.5)

where q1, q2 are the joint angles, τ is the applied torque, and

    d11 = m1 l_c1² + m2 (l1² + l_c2² + 2 l1 l_c2 cos(q2)) + I1 + I2
    d22 = m2 l_c2² + I2
    d12 = d21 = m2 (l_c2² + l1 l_c2 cos(q2)) + I2
    h1  = −m2 l1 l_c2 sin(q2) q̇2² − 2 m2 l1 l_c2 sin(q2) q̇2 q̇1
    h2  = m2 l1 l_c2 sin(q2) q̇1²
    φ1  = (m1 l_c1 + m2 l1) g cos(q1) + m2 l_c2 g cos(q1 + q2)
    φ2  = m2 l_c2 g cos(q1 + q2)

The definitions of the variables qi, l1, l_ci can be deduced from Figure 1.4.
Figure 1.4: Coordinate description of the pendubot: l1 is the length of the first link, and l_c1, l_c2 are the distances to the centers of mass of the respective links. The variables q1, q2 are joint angles of the respective links.
This model may be written in state space form as a nonlinear vector differential equation, ẋ = f(x, u), where x = (q1, q2, q̇1, q̇2)ᵀ, and f is defined from the above equations. For a range of different constant torque inputs τ, this model admits various equilibria. For example, when τ = 0, the vertical downward position x_e = (−π/2, 0, 0, 0) is an equilibrium, as illustrated in the right hand side of Figure 1.3. When τ = 0 it follows from the equations of motion that the upright vertical position x_e = (+π/2, 0, 0, 0) is also an equilibrium. It is clear from the photograph given in the left hand side of Figure 1.3 that the upright equilibrium is strongly unstable in the sense that with τ = 0, it is unlikely that the physical system will remain at rest. Nevertheless, the velocity vector vanishes, f(x_e, 0) = 0, so by definition the upright position is an equilibrium when τ = 0. Although complex, we may again linearize these equations about the vertical equilibrium. The control design techniques introduced later in the book will provide the tools for stabilization of the pendubot in this unstable vertical configuration via an appropriate controller.
Figure 1.5: There is a continuum of different equilibrium positions for the Pendubot corresponding to different constant torque inputs τ.
1.5 An electrical circuit

A simple electrical circuit is illustrated in Figure 1.6. Using Kirchhoff's voltage and current laws we may obtain a state space model in which the current through the inductor and the capacitor voltage become state variables.

Figure 1.6: A simple RLC circuit

Figure 1.7: A simulation diagram for the corresponding state space model, with x1 = the voltage across the capacitor, and x2 = the current through the inductor.
Kirchhoff's Current Law (KCL) gives

    x2 = C ẋ1 + (1/R) x1,

which may be written as

    ẋ1 = −(1/(RC)) x1 + (1/C) x2      (1.6)

From Kirchhoff's Voltage Law (KVL) we have

    x1 + L ẋ2 = u,

or,

    ẋ2 = −(1/L) x1 + (1/L) u.      (1.7)

Equations (1.7) and (1.6) thus give the system of state space equations

    ẋ = [ −1/(RC)   1/C
          −1/L      0   ] x + [ 0
                                1/L ] u
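As a quick check, the circuit model can be entered in Matlab and its step response inspected. The component values below are arbitrary choices for illustration, and ss/step require the Control System Toolbox.

    % State space model of the RLC circuit with assumed component values.
    R = 1e3; L = 0.1; C = 1e-6;
    A = [-1/(R*C)  1/C; -1/L  0];
    B = [0; 1/L];
    Cout = [1 0];  D = 0;          % capacitor voltage x1 taken as output
    sys = ss(A, B, Cout, D);
    step(sys)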
1.6 Transfer Functions & State Space Models
In the previous examples we began with a physical description of the system to be controlled, and through physical laws obtained a state space model of the system. In the first and second examples, a linear model could be obtained through linearization, while in the last example, a linear model was directly obtained through the KVL and KCL circuit laws. For a given LTI system, a state space model which is constructed from a given transfer function G(s) is called a realization of G. Realization of a given G is not unique, and in fact there are infinitely many such realizations, all of which are however equivalent from an input-output point of view. In this section we show how to obtain some of the most common realizations starting from a transfer function model of a system.
Consider the third-order model

    y⁽³⁾ + a2 ÿ + a1 ẏ + a0 y = b2 ü + b1 u̇ + b0 u      (1.8)

where the coefficients ai, bi are arbitrary real numbers. The corresponding transfer function description is

    G(s) = Y(s)/U(s) = B(s)/A(s) = (b2 s² + b1 s + b0) / (s³ + a2 s² + a1 s + a0)

where Y and U are the Laplace transforms of the signals y and u, respectively. The numerator term B(s) = b2 s² + b1 s + b0 can create complexity in determining a state space model. So, as a starting point, we consider the zero-free system where B(s) ≡ 1, which results in the model

    w⁽³⁾ + a2 ẅ + a1 ẇ + a0 w = u

An equivalent simulation diagram description for this system is:
[Simulation diagram: a chain of three integrators generates ẅ, ẇ, and w from w⁽³⁾ = u − a2 ẅ − a1 ẇ − a0 w.]
With zero initial conditions one has the relation Y(s) = B(s)W(s), so that the signal y can be obtained by adding several interconnections to the above simulation diagram, yielding the simulation diagram depicted in Figure 1.8. Letting the outputs of the integrators form states for the system we obtain the state space model

    x1 = w,   ẋ1 = x2
    x2 = ẇ,   ẋ2 = x3
    x3 = ẅ,   ẋ3 = −a2 x3 − a1 x2 − a0 x1 + u

and

    y = b0 x1 + b1 x2 + b2 x3.
This may be written in matrix form as

    ẋ = [  0   1   0
           0   0   1
          −a0 −a1 −a2 ] x + [ 0
                              0
                              1 ] u

    y = [ b0  b1  b2 ] x + [0] u.

This final state space model is called the controllable canonical form (CCF), one of the most important system descriptions from the point of view of analysis.
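As a sketch, the CCF matrices for a concrete (placeholder) choice of coefficients can be checked against the transfer function using ss2tf, which recovers B(s) and A(s) up to round-off:

    % CCF realization of G(s) = (b2 s^2 + b1 s + b0)/(s^3 + a2 s^2 + a1 s + a0).
    a0 = 6; a1 = 11; a2 = 6;           % placeholder coefficients
    b0 = 4; b1 = 1;  b2 = 0;
    A = [0 1 0; 0 0 1; -a0 -a1 -a2];
    B = [0; 0; 1];
    C = [b0 b1 b2];  D = 0;
    [num, den] = ss2tf(A, B, C, D)     % num ~ [0 b2 b1 b0], den = [1 a2 a1 a0]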
Several alternative descriptions can be obtained by manipulating the transfer function G(s) before converting to the time domain, or by defining states in different ways. One possibility is to take the description

    (s³ + a2 s² + a1 s + a0) Y(s) = (b0 + b1 s + b2 s²) U(s)      (1.9)

and divide throughout by s³ to obtain

    (1 + a2/s + a1/s² + a0/s³) Y(s) = (b0/s³ + b1/s² + b2/s) U(s)
Figure 1.8: Controllable Canonical Form

Figure 1.9: Observable Canonical Form
Rearranging terms then gives

    Y = (1/s³)(b0 U − a0 Y) + (1/s²)(b1 U − a1 Y) + (1/s)(b2 U − a2 Y)

We may again describe this equation using a simulation diagram, as given in Figure 1.9. As before, by letting x1, x2, x3 denote the outputs of the integrators we obtain a state space model which now takes the form

    ẋ1 = x2 − a2 x1 + b2 u
    ẋ2 = x3 − a1 x1 + b1 u
    ẋ3 = −a0 x1 + b0 u
    y = x1

or in matrix form

    ẋ = [ −a2  1  0
          −a1  0  1
          −a0  0  0 ] x + [ b2
                            b1
                            b0 ] u

    y = [1 0 0] x + [0] u.

This final form is called the observable canonical form (OCF).
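A matching sketch with the same placeholder coefficients confirms that the OCF realization has the same transfer function as the CCF one, illustrating that realizations are not unique:

    % OCF realization of the same G(s); compare with the CCF sketch above.
    a0 = 6; a1 = 11; a2 = 6;  b0 = 4; b1 = 1; b2 = 0;
    A = [-a2 1 0; -a1 0 1; -a0 0 0];
    B = [b2; b1; b0];
    C = [1 0 0];  D = 0;
    [num, den] = ss2tf(A, B, C, D)     % same num, den as the CCF realization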
In the example above, the degree n0 of the denominator of G(s) is 3, and the degree m0 of the numerator is 2, so that n0 > m0. In this case the model is called strictly proper. In the case where n0 = m0, the D matrix in (1.1) will be non-zero in any state space realization. To see this, try adding the term b3 s³ to the right hand side of (1.9), or solve Exercise 10 of this chapter.
Both controllable and observable canonical forms admit natural generalizations to the n-dimensional case. For a SISO LTI system, let the input-output transfer function be given by

    G(s) = Y(s)/U(s) = B(s)/A(s)
         = (b_{n-1} s^{n-1} + ... + b1 s + b0) / (s^n + a_{n-1} s^{n-1} + ... + a2 s² + a1 s + a0)

Then, the n-dimensional state space realization in controllable canonical form is identified by the following A, B, and C matrices:

    A = [  0     1     0   ...   0
           0     0     1   ...   0
           :     :     :         :
          −a0   −a1   −a2  ...  −a_{n-1} ],   B = [ 0, ..., 0, 1 ]ᵀ,

    C = [ b0  b1  ...  b_{n-1} ].

The observable canonical form, on the other hand, will have the following A, B, and C matrices:

    A = [ −a_{n-1}   1   0   ...   0
          −a_{n-2}   0   1   ...   0
            :        :   :         :
          −a0        0   0   ...   0 ],   B = [ b_{n-1}, ..., b1, b0 ]ᵀ,

    C = [ 1  0  ...  0 ].
Another alternative is obtained by applying a partial fraction expansion to the transfer function G:

    G(s) = b(s)/a(s) = d + Σ_{i=1}^n k_i / (s − p_i),

where {p_i : 1 ≤ i ≤ n} are the poles of G, which are simply the roots of a. A partial expansion of this form is always possible if all of the poles are distinct. In general, a more complex partial fraction expansion must be employed. When a simple partial fraction expansion is possible, as above, the system may be viewed as a parallel network of simple first order simulation diagrams; see Figure 1.10.

The significance of this form is that it yields a strikingly simple system description, a set of decoupled dynamic equations:

    ẋ1 = p1 x1 + k1 u
     :
    ẋn = pn xn + kn u

This gives the state space model

    ẋ = [ p1  0  ...  0
          0   p2 ...  0
          :           :
          0   0  ...  pn ] x + [ k1, k2, ..., kn ]ᵀ u

    y = [1, ..., 1] x + [d] u.

This is often called the modal form, and the states x_i(t) are then called modes. It is important to note that this form is not always possible if the roots of the denominator polynomial are not distinct. Exercises 12 and 13 below address some generalizations of the modal form.
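The expansion coefficients can be computed with Matlab's residue, and the modal form assembled directly. The transfer function below is a placeholder with distinct poles, chosen so the residues are easy to check by hand.

    % Modal form via partial fractions:
    % G(s) = (5s+7)/((s+1)(s+2)) = 2/(s+1) + 3/(s+2).
    num = [5 7];  den = conv([1 1], [1 2]);
    [k, p, d] = residue(num, den);     % residues k_i, poles p_i, direct term d
    if isempty(d), d = 0; end          % strictly proper: no direct term
    A = diag(p);  B = k;  C = ones(1, numel(p));
    [numc, denc] = ss2tf(A, B, C, d)   % recovers G(s)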
Figure 1.10: Partial fraction expansion of a transfer function, realized as a parallel network of first order blocks with gains k1, k2, k3, poles p1, p2, p3, and direct term d.
Matlab Commands

Matlab is not well suited to nonlinear problems. However, the Matlab program Simulink can be used for simulation of both linear and nonlinear models. Some useful Matlab commands for system analysis of linear systems are

RLOCUS calculates the root locus, e.g., rlocus(num,den), or rlocus(A,B,C,D).

STEP computes the step response of a linear system.

BODE computes the Bode plot.

NYQUIST produces the Nyquist plot.

TF2SS gives the CCF state space representation of a transfer function model, but in a different form than given here.

SS2TF computes the transfer function of a model, given any state space representation.

RESIDUE may be used to obtain the partial fraction expansion of a transfer function.
Summary and References

State space models of the form

    ẋ = f(x, u)
    y = g(x, u)

occur naturally in the mathematical description of many physical systems, where u is the input to the system, y is the output, and x is the state variable. In such a state space model, each of these signals evolves in Euclidean space. The functions f and g are often linear, so that the model takes the form

    ẋ = Ax + Bu
    y = Cx + Du.

If not, the functions f, g may be approximated using a Taylor series expansion which again yields a linear state space model. The reader is referred to [7] for more details on modeling from a control perspective.

A given linear model will have many different state space representations. Three methods for constructing a state space model from its transfer function have been illustrated in this chapter:

(a) Construct a simulation diagram description, and define the outputs of the integrators as states. This approach was used to obtain the controllable canonical form, but it is also more generally applicable.

(b) Manipulate the transfer function to obtain a description which is more easily described in state space form. For instance, a simple division by s^n led to the observable canonical form.

(c) Express the transfer function as a sum of simpler transfer functions. This approach was used to obtain a modal canonical form.

These three system descriptions, the modal, controllable, and observable canonical forms, will be applied in control analysis in later chapters.

Much more detail on the synthesis of state space models for linear systems may be found in Chapter 6 of [6], and Chapter 3 of [5].
1.7 Exercises
1.7.1 You are given a nonlinear input-output system which satisfies the nonlinear differential equation:

    ÿ(t) = −2y − (y² + 1)(ẏ + 1) + u.

(a) Obtain a nonlinear state-space representation.

(b) Linearize this system of equations around its equilibrium output trajectory when u(·) ≡ 0, and write it in state space form.
1.7.2 Repeat Exercise 1 with the new system

    ÿ(t) = −2y − (y² + 1)(ẏ + 1) + u + 2u̇.
1.7.3 Obtain state equations for the following circuit. For the states, use the voltage across the capacitor, and the current through the inductor.

[Circuit diagram: two voltage sources u1(t) and u2(t), a resistor R, an inductor L carrying current i_L(t), and a capacitor C with voltage v_C(t).]
1.7.4 In each circuit below,

(a) Obtain a transfer function and a state space realization.

(b) Sketch a frequency response.

(c) Use the step command in Matlab to obtain a step response.

[Circuit diagrams, each with input u(t) and output y(t): (a) a lead network, (b) a lag network, and (c) a notch network, built from resistors R, R/3 and capacitors C, 3C.]
1.7.5 Consider the mass-spring system shown below.

[Diagram: masses m1 and m2 coupled by springs k1, k1, k2 and a damper b; a force u(t) = f(t) acts on m1, and y(t) is the horizontal position of m2.]

Assume that a force is acting on m1, and let the horizontal position of m2 represent the output of this system.

(a) Derive a set of differential equations which describes this input-output system. To solve this problem you will require Newton's law of translational motion, and the following facts: (i) The force exerted by a spring is proportional to its displacement, and (ii) the force exerted by a frictional source is proportional to the relative speed of the source and mass.

(b) Find the transfer function for the system.

(c) Obtain a state space description of the system.
1.7.6 The n-dimensional nonlinear vector differential equation ẋ = f(x) has a unique solution from any x ∈ R^n if the function f has continuous partial derivatives. To see that just continuity of f is not sufficient for uniqueness, and that some additional conditions are needed, consider the scalar differential equation

    ẋ = −√(1 − x²),   x(0) = 1.

Show that this differential equation with the given initial condition has at least two solutions: One is x(t) ≡ 1, and another one is x(t) = cos(t).
1.7.7 Consider a satellite in planar orbit about the earth. The situation is modeled as a point mass m in an inverse square law force field, as sketched below. The satellite is capable of thrusting (using gas jets, for example) with a radial thrust u1 and a tangential (θ direction) thrust u2. Recalling that acceleration in polar coordinates has a radial component (r̈ − r θ̇²), and a tangential component (r θ̈ + 2 ṙ θ̇), Newton's Law gives

    m (r̈ − r θ̇²) = −k/r² + u1
    m (r θ̈ + 2 ṙ θ̇) = u2,

where k is a gravitational constant.

(a) Convert these equations to (nonlinear) state space form using x1 = r, x2 = ṙ, x3 = θ, x4 = θ̇.

(b) Consider a nominal circular trajectory r(t) = r0, θ(t) = ω0 t, where r0 and ω0 are constants. Using u1(t) = u2(t) = 0, obtain expressions for the nominal state variables corresponding to the circular trajectory. How are k, r0, m, and ω0 related?

(c) Linearize the state equation in (a) about the state trajectory in (b). Express the equations in matrix form in terms of r0, ω0 and m (eliminate k).

[Sketch: point mass m at radius r and angle θ from the earth.]
1.7.8 Using Matlab or Simulink, simulate the nonlinear model for the magnetically suspended ball.

(a) Using proportional feedback u = k1 y + k2 r, can you stabilize the ball to a given reference height r? Interpret your results by examining a root locus diagram for the linearized system.

(b) Can you think of a better control law? Experiment with other designs.
Transfer Functions & State Space Models
1.7.9 A SISO LTI system is described by the transfer function

    G(s) = (s + 4) / ((s + 1)(s + 2)(s + 3))

(a) Obtain a state space representation in the controllable canonical form;

(b) Now obtain one in the observable canonical form;

(c) Use partial fractions to obtain a representation with a diagonal state matrix A (modal form).

In each of (a)-(c) draw an all-integrator simulation diagram.
1.7.10 A SISO LTI system is described by the transfer function

    G(s) = (s³ + 2) / ((s + 1)(s + 3)(s + 4))

(a) Obtain a state space representation in the controllable canonical form;

(b) Now obtain one in the observable canonical form;

(c) Use partial fractions to obtain a representation of this model with a diagonal state matrix A.

In each of (a)-(c) draw an all-integrator simulation diagram.
1.7.11 For the multiple input-multiple output (MIMO) system described by the pair of differential equations

    y1⁽³⁾ + 2 ÿ1 + 3 y2 = u1 + u̇1 + u2
    ÿ2 − 3 ẏ2 + ẏ1 + y2 + y1 = u2 + u̇3 − u3

obtain a state space realization by choosing y1 and y2 as state variables. Draw the corresponding simulation diagram.
1.7.12 This exercise generalizes modal form to the case where some eigenvalues are repeated. For each of the following transfer functions, obtain a state-space realization for the corresponding LTI system by breaking H into simple additive terms, and drawing the corresponding simulation diagrams for the sum. Choose the outputs of the integrators as state variables.

(a) H(s) = 2s² / (s³ − s² + s − 1)

(b) H(s) = (s² + s + 1) / (s³ + 4s² + 5s + 2)
1.7.13 This exercise indicates that a useful generalization of the modal form may be constructed when some eigenvalues are complex.

(a) Use partial fractions to obtain a diagonal state space representation for a SISO LTI system with transfer function

    G(s) = (s + 6) / (s² + 2s + 2).

Note that complex gains appear in the corresponding all-integrator diagram.

(b) Given a transfer function in the form

    G(s) = (αs + β) / ((s − λ1)(s − λ2))

and a corresponding state space realization with A matrix

    A = [ σ  ω
         −ω  σ ],

where σ ≤ 0, ω > 0, compute the eigenvalues λ1, λ2 of the matrix A, and find the relationships between λ1, λ2 and σ, ω. In view of this, complete the state space realization by obtaining the B and C matrices, and draw the corresponding simulation diagram.

(c) For the transfer function H(s) = (s² + s + 1) / (s³ + 4s² + 5s + 2), obtain a state-space realization for the corresponding LTI system by breaking H into simple additive terms, and drawing the corresponding simulation diagrams for the sum.

(d) Apply your answer in (b) to obtain a state space realization of G(s) in (a) with only real coefficients.
Chapter 2
Vector Spaces
Vectors and matrices, and the spaces where they belong, are fundamental to the analysis and synthesis of multivariable control systems. The importance of the theory of vector spaces in fact goes well beyond the subject of vectors in finite-dimensional spaces, such as R^n. Input and output signals may be viewed as vectors lying in infinite dimensional function spaces. A system is then a mapping from one such vector to another, much like a matrix maps one vector in a Euclidean space to another. Although abstract, this point of view greatly simplifies the analysis of state space models and the synthesis of control laws, and is the basis of much of current optimal control theory. In this chapter we review the theory of vector spaces and matrices, and extend this theory to the infinite-dimensional setting.
2.1 Fields
A field is any set F of elements for which the operations of addition, subtraction, multiplication, and division are defined. It is also assumed that the following axioms hold for any α, β, γ ∈ F:

(a) α + β ∈ F and α · β ∈ F.

(b) Addition and multiplication are commutative:

    α + β = β + α,   α · β = β · α.

(c) Addition and multiplication are associative:

    (α + β) + γ = α + (β + γ),   (α · β) · γ = α · (β · γ).

(d) Multiplication is distributive with respect to addition:

    (α + β) · γ = α · γ + β · γ.

(e) There exists a unique null element 0 such that 0 · α = 0 and 0 + α = α.

(f) There exists a unique identity element 1 such that 1 · α = α.

(g) For every α ∈ F there exists a unique element β ∈ F such that α + β = 0; this unique element is sometimes referred to as the additive inverse or negative of α, and is denoted as −α.

(h) To every α ∈ F which is not the element 0 (i.e., α ≠ 0), there corresponds an element γ such that α · γ = 1; this element is referred to as the multiplicative inverse of α, and is sometimes written as α⁻¹.

Fields are a generalization of R, the set of all real numbers. The next example is the set of all complex numbers, denoted C. These are the only examples of fields that will be used in the text, although we will identify others in the exercises at the end of the chapter.
2.2 Vector Space
A vector space X is a set of vectors on which vector addition and scalar multiplication are defined, generalizing the concept of the vector space R^n. In this abstract setting a vector space over a field F, denoted (X, F), is defined as follows:

(a) For every x1, x2 ∈ X, the vector sum x1 + x2 ∈ X.

(b) Addition is commutative: For every x1, x2 ∈ X, the sum x1 + x2 = x2 + x1.

(c) Addition is associative: For every x1, x2, x3 ∈ X,

    (x1 + x2) + x3 = x1 + (x2 + x3).

(d) The set X contains a vector θ such that θ + x = x for all x ∈ X.

(e) For every x ∈ X, there is a vector y ∈ X such that x + y = θ.

(f) For every x ∈ X, α ∈ F, the scalar product αx ∈ X.

(g) Scalar multiplication is associative: for every α, β ∈ F, and x ∈ X,

    α(βx) = (αβ)x.
Below is a list of some of the vector spaces which are most important in applications:

(R^n, R): the real vector space of n-dimensional real-valued vectors.

(C^n, C): the complex vector space of n-dimensional complex-valued vectors.

(C^n[a, b], R): the vector space of real-valued continuous functions on the interval [a, b], taking values in R^n.

(D^n[a, b], R): the vector space of real-valued piecewise-continuous functions on the interval [a, b], taking values in R^n.

(L_p^n[a, b], R): the vector space of functions on the interval [a, b], taking values in R^n, which satisfy the bound

    ∫_a^b |f(t)|^p dt < ∞,   f ∈ L_p[a, b].

(R(C), R): the vector space of rational functions b(s)/a(s) of a complex variable s, with real coefficients.

A subspace of a vector space X is a subset of X which is itself a vector space with respect to the operations of vector addition and scalar multiplication. For example, the set of complex n-dimensional vectors whose first component is zero is a subspace of (C^n, C), but R^n is not a subspace of (C^n, C).
2.3 Bases
A set of vectors S = {x1, ..., xn} in (X, F) is said to be linearly independent if the equality

    α1 x1 + α2 x2 + ... + αn xn = θ

holds for a set of n elements {αi : 1 ≤ i ≤ n} ⊂ F only if α1 = α2 = ... = αn = 0. If the set S contains an infinite number of vectors, then we call S linearly independent if every finite subset of S is linearly independent, as defined above.
28 CHAPTER 2. VECTOR SPACES
In the case where (A, T) = (R
n
, R), we will have linear independence of
x
1
, . . . , x
n
if and only if
det[x
1
x
2
x
n
] = det
_

_
x
1
1
x
n
1
.
.
.
.
.
.
x
1
n
x
n
n
_

_ ,= 0.
The maximum number of linearly independent vectors in (A, T) is called
the dimension of (A, T). For example, (R
n
, R) and (C
n
, C) both have di-
mension n. What is the dimension of (C
n
, R)? (see Exercise 6).
More interesting examples can be found in function spaces. If for example (X, F) = (C[0, 1], R), where C[0, 1] is the set of real-valued continuous functions on [0, 1], then we can easily find a set S of infinite size which is linearly independent. One such set is the collection of simple polynomials S = {t, t², t³, ...}. To see that S is linearly independent, note that for any n,

    Σ_{i=1}^n αi t^i = θ   only if   αi = 0, 1 ≤ i ≤ n,

where θ ∈ C[0, 1] is the function which is identically zero on [0, 1]. We have thus shown that the dimension of (C[0, 1], R) is infinite.
A set of linearly independent vectors S = {e1, ..., en} in (X, F) is said to be a basis of X if every vector in X can be expressed as a unique linear combination of these vectors. That is, for any x ∈ X, one can find {αi : 1 ≤ i ≤ n} such that

    x = α1 e1 + α2 e2 + ... + αn en.

Because the set S is linearly independent, one can show that for any vector x, the scalars αi are uniquely specified in F. The n-tuple {α1, ..., αn} is often called the representation of x with respect to the basis {e1, ..., en}. We typically denote a vector x ∈ R^n by

    x = (x1, ..., xn)ᵀ.
There are two interpretations of this equation:

(a) x is a vector (in R^n), independent of basis.

(b) x is a representation of a vector with respect to the natural basis:

    x = x1 (1, 0, ..., 0)ᵀ + x2 (0, 1, 0, ..., 0)ᵀ + ... + xn (0, ..., 0, 1)ᵀ.
The following theorem is easily proven in R^n using matrix manipulations, and the general proof is similar:

Theorem 2.3.1. In any n-dimensional vector space, any set of n linearly independent vectors qualifies as a basis.
In the case of Euclidean space (X, F) = (R^n, R), with {e1, ..., en} a given basis, any vector x ∈ R^n may be expressed as

    x = α1 e1 + ... + αn en,

where the αi are all real scalars. This expression may be equivalently written as x = Eα, where

    E = [ e11 ... e1n
           :        :
          en1 ... enn ],   α = (α1, ..., αn)ᵀ,

and the columns of E are the basis vectors ei. Inversion then shows that α is uniquely given by

    α = E⁻¹ x      (2.1)

Here E⁻¹ stands for the matrix inverse of E (i.e., E⁻¹E = EE⁻¹ = I, where I is the identity matrix), which exists since the ei are linearly independent.
Consider the numerical example with e1 = (1, 1)ᵀ, e2 = (0, 1)ᵀ, and x = (2, 5)ᵀ. Then we have x = α1 e1 + α2 e2, with

    (α1, α2)ᵀ = [ 1 0 ; 1 1 ]⁻¹ (2, 5)ᵀ = [ 1 0 ; −1 1 ] (2, 5)ᵀ = (2, 3)ᵀ.
2.4 Change of basis
Suppose now we are given two sets of basis vectors:

    {e1, ..., en}   and   {ē1, ..., ēn}.

A vector x in X can be represented in two possible ways, depending on which basis is chosen:

    x = α1 e1 + ... + αn en = Σ_{k=1}^n αk ek      (2.2)

or,

    x = ᾱ1 ē1 + ... + ᾱn ēn = Σ_{k=1}^n ᾱk ēk.      (2.3)

Since each ei ∈ X, there exist scalars {pki : 1 ≤ k, i ≤ n} such that for any i,

    ei = p1i ē1 + ... + pni ēn = Σ_{k=1}^n pki ēk.

From (2.2) it then follows that the vector x may be represented as

    x = Σ_{i=1}^n αi ( Σ_{k=1}^n pki ēk ) = Σ_{k=1}^n ( Σ_{i=1}^n pki αi ) ēk.      (2.4)

In view of (2.3) and (2.4) we have by subtraction

    Σ_{k=1}^n ( Σ_{i=1}^n pki αi − ᾱk ) ēk = θ.

By linear independence of {ēk}, this implies that each coefficient in brackets is 0. This gives a matrix relation between the coefficients αi and ᾱi:

    ᾱk = Σ_{i=1}^n pki αi,   k = 1, ..., n,

or, using compact matrix notation,

    ᾱ = (ᾱ1, ..., ᾱn)ᵀ = [ p11 ... p1n
                            :        :
                           pn1 ... pnn ] (α1, ..., αn)ᵀ = Pα.

The transformation P maps F^n → F^n, and is one to one and onto. It therefore has an inverse P⁻¹, so that α can also be computed through ᾱ:

    α = P⁻¹ ᾱ.

For the special case where (X, F) = (R^n, R), the two sets of basis vectors can be stacked to form matrices E and Ē as in (2.1), giving

    x = Eα = Ē ᾱ.

Hence the transformation P can be computed explicitly: ᾱ = Ē⁻¹ E α, so that P = Ē⁻¹ E. The inverse Ē⁻¹ again exists by linear independence.
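A small numerical sketch illustrates P = Ē⁻¹E and ᾱ = Pα, reusing the basis from the example of Section 2.3 together with a second basis chosen arbitrarily:

    % Change of basis in R^2; columns of E and Ebar hold the two bases.
    E    = [1 0; 1 1];          % basis {e1, e2} from Section 2.3
    Ebar = [1 1; 0 1];          % an arbitrary second basis
    x = [2; 5];
    alpha    = E \ x;           % representation in {e_i}
    P        = Ebar \ E;        % P = inv(Ebar)*E
    alphabar = P * alpha;       % representation in {ebar_i}
    Ebar * alphabar             % reproduces x, as it must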
2.5 Linear Operators
A linear operator 𝒜 is simply a function from one vector space (X, F) to another (Y, F) which is linear. This means that for any scalars α1, α2, and any vectors x1, x2,

    𝒜(α1 x1 + α2 x2) = α1 𝒜(x1) + α2 𝒜(x2).

For a linear operator 𝒜, or a general function from a set X into a set Y, we adopt the terminology

    X : Domain of the mapping 𝒜
    Y : Co-Domain of the mapping 𝒜

When 𝒜 is applied to every x ∈ X, the resulting set of vectors in Y is called the range (or image) of 𝒜, and is denoted by R(𝒜):

    R(𝒜) := ∪_{x ∈ X} 𝒜(x).

[Diagram: a point x in X is mapped by 𝒜 to 𝒜(x) in R(𝒜) ⊂ Y.]

The rank of 𝒜 is defined to be the dimension of R(𝒜).
In the special case of a linear operator 𝒜 : R^n → R^m defined as 𝒜(x) = Ax for an m × n matrix A,

    y = Ax = [a1 ... an] (x1, ..., xn)ᵀ = x1 a1 + ... + xn an,

the range of A is the set of all possible linear combinations of the columns ai of A. That is, the space spanned by the columns of A. The dimension of R(A) is then the maximum number of linearly independent columns of A.
Theorem 2.5.1. For a linear operator 𝒜, the set R(𝒜) is a subspace of Y.

Proof. To prove the theorem it is enough to check closure under addition and scalar multiplication. Suppose that y1, y2 ∈ R(𝒜), and that α1, α2 ∈ F. Then by definition of R(𝒜) there are vectors x1, x2 such that

    𝒜(x1) = y1,   𝒜(x2) = y2,

and then by linearity of the mapping 𝒜,

    𝒜(α1 x1 + α2 x2) = α1 𝒜(x1) + α2 𝒜(x2) = α1 y1 + α2 y2.

Hence, α1 y1 + α2 y2 ∈ R(𝒜), which establishes the desired closure property. ∎
By assumption, the set Y contains a zero vector, which could be mapped from numerous x ∈ X. The set of all such x ∈ X is called the nullspace of 𝒜, denoted N(𝒜):

    N(𝒜) := {x ∈ X such that 𝒜(x) = θ}.

Theorem 2.5.2. For any linear operator 𝒜 : X → Y, the nullspace N(𝒜) is a subspace of X.

Proof. Again we check that N(𝒜) is closed under addition and scalar multiplication. Suppose that x1, x2 ∈ N(𝒜), and that α1, α2 ∈ F. Then by definition,

    𝒜(x1) = 𝒜(x2) = θ.

Again by linearity of 𝒜 it is clear that 𝒜(α1 x1 + α2 x2) = θ, which proves the theorem. ∎
2.6 Linear operators and matrices
Suppose that V and W are two finite-dimensional vector spaces over the field F, with bases {v1, ..., vn} and {w1, ..., wm} respectively. If 𝒜 : V → W, then 𝒜 may be represented by a matrix. To see this, take any v ∈ V, and write

    v = Σ_{i=1}^n αi vi

for scalars αi ∈ F. By linearity we then have

    𝒜(v) = 𝒜( Σ_{i=1}^n αi vi ) = Σ_{i=1}^n αi 𝒜(vi).

But for any i we have that 𝒜(vi) ∈ W, which implies that for some scalars aji,

    𝒜(vi) = Σ_{j=1}^m aji wj,   1 ≤ i ≤ n.

From the form of v we must therefore have

    𝒜(v) = Σ_{i=1}^n αi Σ_{j=1}^m aji wj = Σ_{j=1}^m ( Σ_{i=1}^n aji αi ) wj      (2.5)

Recall that the vector w = 𝒜(v) in W has a unique representation

    𝒜(v) = Σ_{j=1}^m βj wj.

Consequently, the terms in parentheses in (2.5) are identical to the βj:

    βj = Σ_{i=1}^n aji αi,   j = 1, ..., m.

From this we see how the representations of v and w are transformed through the linear operator 𝒜:

    β = (β1, ..., βm)ᵀ = [ a11 ... a1n
                            :        :
                           am1 ... amn ] (α1, ..., αn)ᵀ = Aα,

so that A is the representation of 𝒜 with respect to the bases {vi} and {wj}.
The special case where 𝒜 is a mapping of (X, F) into itself is of particular interest. A question that frequently arises is: if the linear operator is represented by a matrix A of elements in F, how is A affected by a change of basis? Let x_b = 𝒜(x_a), and write

    x_a = Σ_{i=1}^n αi ei = Σ_{i=1}^n ᾱi ēi,
    x_b = Σ_{i=1}^n βi ei = Σ_{i=1}^n β̄i ēi,

where the α and β are related by

    β = Aα,   β̄ = Ā ᾱ.

To see how A and Ā are related, recall that there is a matrix P such that

    ᾱ = Pα,   β̄ = Pβ.

Combining these four equations gives

    PAα = Pβ = β̄ = Ā ᾱ = Ā Pα.

Since α is arbitrary, we conclude that PA = ĀP, and hence we can also conclude that

    Ā = PAP⁻¹,   A = P⁻¹ Ā P.

When these relationships hold, we say that the matrices A and Ā are similar.
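A two-line numerical check, using an arbitrary A and the change-of-basis matrix from the sketch in Section 2.4, shows that Ā = PAP⁻¹ represents the same operator and in particular shares its eigenvalues:

    % Similar matrices represent one operator in two bases.
    A = [0 1; -2 -3];           % an arbitrary operator on R^2
    E = [1 0; 1 1];  Ebar = [1 1; 0 1];
    P    = Ebar \ E;
    Abar = P * A / P;           % Abar = P*A*inv(P)
    eig(A), eig(Abar)           % identical spectra: {-1, -2}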
2.7 Eigenvalues and Eigenvectors
For a linear operator 𝒜 : X → X on an arbitrary vector space (X, F), a scalar λ is called an eigenvalue of 𝒜 if there exists a non-zero vector x for which

    𝒜(x) = λx.      (2.6)

The vector x in (2.6) is then called an eigenvector.

Let X = C^n, F = C, and let A be a matrix representation for 𝒜. If an eigenvalue λ of 𝒜 does exist, then one may infer from the equation

    (A − λI)x = θ

that the matrix A − λI is singular. For nontrivial solutions, we must then have

    Δ(λ) := det(λI − A) = 0.      (2.7)

The function Δ(·) is called the characteristic polynomial of the matrix A, and (2.7) is known as the characteristic equation. The characteristic polynomial is a polynomial of degree n, which must therefore have n roots. Any root of Δ is an eigenvalue, so at least in the case of operators on (C^n, C), eigenvalues always exist. Note that if Ā is some other matrix representation for 𝒜, since A and Ā are necessarily similar, Ā has the same characteristic polynomial as A. Hence, the eigenvalues do not depend on the specific representation picked.
If the roots of the characteristic polynomial are distinct, then the vector space C^n admits a basis consisting entirely of eigenvectors:

Theorem 2.7.1. Suppose that λ1, ..., λn are the distinct eigenvalues of the n × n matrix A, and let v1, ..., vn be the associated eigenvectors. Then the set {vi, i = 1, ..., n} is linearly independent over C.

When the eigenvalues of A are distinct, the modal matrix defined as

    M := [v1 ... vn]

is nonsingular. It satisfies the equation AM = MΛ, where

    Λ = diag(λ1, ..., λn),

and therefore A is similar to the diagonal matrix Λ:

    Λ = M⁻¹AM.      (2.8)
Unfortunately, not every matrix can be diagonalized, and hence the eigenvectors do not span C^n in general. Consider the matrix

    A = [ 1 1 2
          0 1 3
          0 0 2 ]

The characteristic polynomial for A is

    Δ(λ) = det(λI − A) = (λ − 1)(λ − 1)(λ − 2).

So the eigenvalues are λ1 = 1, λ2 = 1, λ3 = 2. Considering the eigenvector equation

    (A − λ1 I)x = [ 0 1 2
                    0 0 3
                    0 0 1 ] x = θ,

we see that x = (1, 0, 0)ᵀ and its constant multiples are the only eigenvectors associated with λ1. It then follows that there does not exist a state transformation P for which

    [ 1 0 0
      0 1 0
      0 0 2 ] = PAP⁻¹.

To obtain a nearly diagonal representation of A we use generalized eigenvectors, defined as follows. Search for a solution x to

    (A − λ1 I)^k x = θ

such that (A − λ1 I)^{k−1} x ≠ θ. Then x is called a generalized eigenvector of grade k.
Note that it then follows that the vector (A
1
I)x is a generalized
eigenvector of grade k 1. In the example,
(A
1
I)
2
x =
_
_
0 1 2
0 0 3
0 0 1
_
_
_
_
0 1 2
0 0 3
0 0 1
_
_
x
=
_
_
0 0 5
0 0 3
0 0 1
_
_
x.
So x =
_
_
0
1
0
_
_
is a generalized eigenvector of grade k.
Letting y = (A
1
I)x, we have
(A
1
I)y = (A
1
I)
2
x = 0.
So y is an eigenvector to A, and x, y are linearly independent. In fact, y =
(1, 0, 0)

is the eigenvector computed earlier. To obtain an approximately


2.8. INNER PRODUCTS 37
diagonal form, let x
1
= y, x
2
= x, x
3
any eigenvector corresponding to

3
= 2. We then have
Ax
1
=
1
x
1
Ax
2
= Ax = y +
1
x = x
1
+
1
x
2
Ax
3
=
3
x
3
Letting $M = [\, x_1 \mid x_2 \mid x_3 \,]$ it follows that
\[
    AM = M \begin{bmatrix} \lambda_1 & 1 & 0 \\ 0 & \lambda_1 & 0 \\ 0 & 0 & \lambda_3 \end{bmatrix}
    = \left[\; M \begin{bmatrix} \lambda_1 \\ 0 \\ 0 \end{bmatrix}
    \;\middle|\; M \begin{bmatrix} 1 \\ \lambda_1 \\ 0 \end{bmatrix}
    \;\middle|\; M \begin{bmatrix} 0 \\ 0 \\ \lambda_3 \end{bmatrix} \;\right]
    = MJ,
\]
where
\[
    J = \begin{bmatrix} \lambda_1 & 1 & 0 \\ 0 & \lambda_1 & 0 \\ 0 & 0 & \lambda_3 \end{bmatrix} = M^{-1} A M.
\]
This representation of $A$ with respect to a basis of generalized eigenvectors is known as the Jordan form.
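If the Symbolic Math Toolbox is available, Matlab can produce this decomposition directly (a sketch; jordan belongs to that toolbox, and the ordering of the blocks it returns may differ from the hand computation above):

    A = [1 1 2; 0 1 3; 0 0 2];
    [M, J] = jordan(A);    % M holds (generalized) eigenvectors, J the Jordan form
    % J contains a 2x2 Jordan block for the repeated eigenvalue 1 and a
    % 1x1 block for the eigenvalue 2; M satisfies A*M = M*J.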
2.8 Inner Products
Inner products are frequently applied in the solution of optimization problems because they give a natural measure of distance between vectors. This abstract notion of distance can often be interpreted as a cost in applications to finance, or as energy in mechanical systems. Inner products also provide a way of defining angles between vectors, and thereby introduce geometry to even infinite-dimensional models where any geometric structure is far from obvious at first glance.

To define an inner product we restrict our attention to a vector space $X$ over the complex field $\mathbb{C}$. An inner product is then a complex-valued function of two vectors, denoted $\langle \cdot\,, \cdot \rangle$, such that the following three properties hold:

(a) $\langle x, y \rangle = \overline{\langle y, x \rangle}$ (complex conjugate).

(b) $\langle x, \alpha_1 y_1 + \alpha_2 y_2 \rangle = \alpha_1 \langle x, y_1 \rangle + \alpha_2 \langle x, y_2 \rangle$, for all $x, y_1, y_2 \in X$, $\alpha_1, \alpha_2 \in \mathbb{C}$.

(c) $\langle x, x \rangle \geq 0$ for all $x \in X$, and $\langle x, x \rangle = 0$ if and only if $x = 0$.
In the special case where $X = \mathbb{C}^n$ we typically define
\[
    \langle x, y \rangle = x^* y,
\]
where $x^*$ denotes the complex conjugate transpose of the vector $x$. Another important example is the function space $L_p[a, b]$ with $p = 2$. It can be shown that the formula
\[
    \langle f, g \rangle := \int_a^b \overline{f(t)}\, g(t)\, dt, \qquad f, g \in L_2[a, b],
\]
defines an inner product on $L_2[a, b]$.
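Both inner products are straightforward to evaluate numerically. A minimal Matlab sketch (integral is Matlab's built-in adaptive quadrature routine; the vectors and functions here are arbitrary illustrations):

    x = [1+2i; 3];  y = [2; 1i];
    ip_vec = x' * y;                    % x' is the conjugate transpose in Matlab
    f = @(t) exp(1i*t);  g = @(t) t.^2;
    ip_fun = integral(@(t) conj(f(t)).*g(t), 0, 1);   % <f, g> on L2[0, 1]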
The most obvious application of the inner product is in the formulation of a norm. In general, the norm of a vector $x$ in a vector space $(X, \mathbb{C})$, denoted $\|x\|$, is a real-valued function of a vector $x$ such that

1. $\|x\| \geq 0$, and $\|x\| = 0$ if and only if $x = 0$.

2. $\|\alpha x\| = |\alpha|\, \|x\|$, for any $\alpha \in \mathbb{C}$.

3. $\|x + y\| \leq \|x\| + \|y\|$.

The third defining property is known as the triangle inequality.

In $\mathbb{R}^n$ we usually define the norm as the usual Euclidean norm,
\[
    \|x\| = \sqrt{x^T x} = \Bigl( \sum_{i=1}^n x_i^2 \Bigr)^{1/2},
\]
which we will henceforth write as $|x|$, reserving the notation $\|\cdot\|$ for the norm of an infinite-dimensional vector. This Euclidean norm can also be defined using the inner product:
\[
    |x| = \sqrt{\langle x, x \rangle}. \tag{2.9}
\]
In fact, one can show that the expression (2.9) defines a norm in an arbitrary (finite- or infinite-dimensional) inner product space.
We define the norm of a vector $f \in L_p[a, b]$ as
\[
    \|f\|_{L_p} := \Bigl( \int_a^b |f(t)|^p \, dt \Bigr)^{1/p}.
\]
In the case $p = 2$, this norm is derived from the inner product on $L_2[a, b]$, but for general $p$ this norm is not consistent with any inner product.
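A numerical version of this definition, again a minimal sketch using integral with an arbitrary illustrative function:

    f = @(t) sin(t);  a = 0;  b = pi;  p = 2;
    Lp_norm = integral(@(t) abs(f(t)).^p, a, b)^(1/p);
    % for p = 2 this agrees with sqrt(<f, f>) computed from the inner product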
2.9 Orthogonal vectors and reciprocal basis vectors
Two vectors $x, y$ in an inner product space $(X, \mathbb{C})$ are said to be orthogonal if $\langle x, y \rangle = 0$. This concept has many applications in optimization, and orthogonality is also valuable in computing representations of vectors. To see the latter point, write
\[
    x = \sum_{i=1}^n \alpha_i v_i,
\]
where $\{v_i,\ i = 1, \ldots, n\}$ is a basis for $(X, \mathbb{C})$. Taking the inner product of both sides with each basis vector, we then have
\[
    \langle v_j, x \rangle = \Bigl\langle v_j, \sum_{i=1}^n \alpha_i v_i \Bigr\rangle = \sum_{i=1}^n \alpha_i \langle v_j, v_i \rangle, \qquad j = 1, \ldots, n.
\]
This may be written explicitly as
\begin{align*}
    \langle v_1, x \rangle &= \alpha_1 \langle v_1, v_1 \rangle + \alpha_2 \langle v_1, v_2 \rangle + \cdots + \alpha_n \langle v_1, v_n \rangle \\
    &\;\;\vdots \\
    \langle v_n, x \rangle &= \alpha_1 \langle v_n, v_1 \rangle + \alpha_2 \langle v_n, v_2 \rangle + \cdots + \alpha_n \langle v_n, v_n \rangle
\end{align*}
or in matrix form
\[
    \begin{bmatrix} \langle v_1, x \rangle \\ \vdots \\ \langle v_n, x \rangle \end{bmatrix}
    = \underbrace{\begin{bmatrix}
        \langle v_1, v_1 \rangle & \cdots & \langle v_1, v_n \rangle \\
        \vdots & & \vdots \\
        \langle v_n, v_1 \rangle & \cdots & \langle v_n, v_n \rangle
    \end{bmatrix}}_{G}
    \begin{bmatrix} \alpha_1 \\ \vdots \\ \alpha_n \end{bmatrix}
\]
The $n \times n$ matrix $G$ is called the Grammian. Its inverse gives a formula for the representation $\alpha$:
\[
    \alpha = G^{-1} \begin{bmatrix} \langle v_1, x \rangle \\ \vdots \\ \langle v_n, x \rangle \end{bmatrix}
\]
If the basis is orthogonal, then $G$ is diagonal, in which case the computation of the inverse $G^{-1}$ is straightforward.
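In $(\mathbb{C}^n, \mathbb{C})$ with $\langle x, y \rangle = x^* y$, the Grammian and the representation take a few lines of Matlab (a minimal sketch; the basis and the vector $x$ are arbitrary examples):

    V = [1 0 1; 1 1 0; 0 1 1];   % columns are the basis vectors v1, v2, v3
    x = [3; 2; 1];
    G = V' * V;                  % Grammian: G(j,i) = <v_j, v_i>
    b = V' * x;                  % right-hand side: b(j) = <v_j, x>
    alpha = G \ b;               % representation of x in the basis
    % sanity check: V*alpha reproduces x (alpha also equals V\x directly)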
A basis is said to be orthonormal if
\[
    \langle v_j, v_i \rangle = \delta_{ij} = \begin{cases} 1, & i = j \\ 0, & i \neq j \end{cases}
\]
In this case $G = I$ (the identity matrix), so that $G^{-1} = G$.
A basis $\{r_i\}$ is said to be reciprocal to (or dual to) the basis $\{v_i\}$ if
\[
    \langle r_i, v_j \rangle = \delta_{ij}, \qquad i = 1, \ldots, n, \quad j = 1, \ldots, n. \tag{2.10}
\]
If a dual basis is available, then again the representation of $x$ with respect to the basis $\{v_i\}$ is easily computed. For suppose that $x$ is represented by $\alpha$:
\[
    x = \sum_{i=1}^n \alpha_i v_i.
\]
Then, by the dual property and linearity we have
\[
    \langle r_j, x \rangle = \Bigl\langle r_j, \sum_{i=1}^n \alpha_i v_i \Bigr\rangle = \sum_{i=1}^n \alpha_i \langle r_j, v_i \rangle.
\]
Since $\langle r_j, v_i \rangle = \delta_{ij}$, this shows that $x = \sum_{i=1}^n \langle r_i, x \rangle v_i$. Of course, to apply this formula we must have the reciprocal basis $\{r_i\}$, which may be as difficult to find as the inverse Grammian.
In the vector space $(\mathbb{C}^n, \mathbb{C})$ define the matrices
\[
    M = [\, v_1 \ \cdots \ v_n \,], \qquad R = \begin{bmatrix} r_1 \\ \vdots \\ r_n \end{bmatrix}.
\]
From the defining property (2.10) of the dual basis, we must have $RM = I$, so that $R = M^{-1}$.
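So in the finite-dimensional case the reciprocal basis is just one matrix inversion away (a minimal sketch with a real, arbitrary example basis):

    M = [1 0 1; 1 1 0; 0 1 1];   % columns are the basis vectors v_i
    x = [3; 2; 1];
    R = inv(M);                  % rows of R are the reciprocal basis vectors r_i
    alpha = R * x;               % alpha(i) = <r_i, x>, the representation of x
    % R*M is the identity matrix, which is precisely the duality <r_i, v_j> = delta_ij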
2.10 Adjoint transformations
Suppose that $\mathcal{A}$ is a linear operator from the vector space $(X, \mathbb{C})$ to another vector space $(Y, \mathbb{C})$, that is, $\mathcal{A} \colon X \to Y$. Then the adjoint is a linear operator working in the reverse direction:
\[
    \mathcal{A}^* \colon Y \to X.
\]
Its definition is subtle since it is not directly defined through $\mathcal{A}$. We say that $\mathcal{A}^*$ is the adjoint of $\mathcal{A}$, if for any $x \in X$ and any $y \in Y$,
\[
    \langle \mathcal{A}(x), y \rangle = \langle x, \mathcal{A}^*(y) \rangle.
\]
To illustrate this concept, let us begin with the finite dimensional case
\[
    X = \mathbb{C}^n, \qquad Y = \mathbb{C}^m,
\]
and suppose that $\mathcal{A}$ is defined through an $m \times n$ matrix $A$, so that $\mathcal{A}(x) = Ax$, $x \in X$. We may then compute the adjoint using the definition of the inner products on $X$ and $Y$ as follows:
\[
    \langle \mathcal{A}(x), y \rangle = (Ax)^* y = \bar{x}^T \bar{A}^T y = \langle x, \bar{A}^T y \rangle.
\]
Thus, the adjoint of $\mathcal{A}$ is defined through the complex conjugate transpose of $A$:
\[
    \mathcal{A}^*(y) = \bar{A}^T y = A^* y.
\]
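The defining identity $\langle \mathcal{A}(x), y \rangle = \langle x, \mathcal{A}^*(y) \rangle$ is easy to spot-check numerically (a minimal sketch with randomly generated complex data; the dimensions are arbitrary):

    n = 4;  m = 3;
    A = randn(m, n) + 1i*randn(m, n);
    x = randn(n, 1) + 1i*randn(n, 1);
    y = randn(m, 1) + 1i*randn(m, 1);
    lhs = (A*x)' * y;       % <A x, y> on C^m
    rhs = x' * (A'*y);      % <x, A* y> on C^n; A' is the conjugate transpose
    % lhs and rhs agree exactly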
Matlab Commands

Matlab is virtually designed to deal with the numerical aspects of the vector space concepts described in this chapter. Some useful commands are

INV to compute the inverse of a matrix.

DET to compute the determinant.

EIG to find eigenvalues and eigenvectors.

RANK to compute the rank of a matrix.
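For instance, revisiting the non-diagonalizable matrix of Section 2.7 (a minimal sketch; all four commands are built in):

    A = [1 1 2; 0 1 3; 0 0 2];
    d  = det(A);         % = 2, the product of the eigenvalues
    Ai = inv(A);         % exists since det(A) is nonzero
    [V, D] = eig(A);     % eigenvalues 1, 1, 2 on the diagonal of D
    r = rank(V);         % 2, exposing the missing third eigenvector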
Summary and References

In this chapter we have provided a brief background on several different topics in linear algebra, including

(a) Fields and vector spaces.

(b) Linear independence and bases.

(c) Representations of vectors, and how these representations change under a change of basis.

(d) Linear operators and matrices.

(e) Inner products and norms.

(f) Adjoint operators.

(g) Eigenvectors.

Good surveys on linear algebra and matrix theory may be found in Chapter 2 of [6], or Chapters 4 and 5 of [5].
2.11 Exercises
2.11.1 Determine conclusively which of the following are fields:

(a) The set of integers.

(b) The set of rational numbers.

(c) The set of polynomials of degree less than 3 with real coefficients.

(d) The set of all $n \times n$ nonsingular matrices.

(e) The set $\{0, 1\}$ with addition being binary "exclusive-or" and multiplication being binary "and".
2.11.2 Define rules of addition and multiplication such that the set consisting of three elements $\{a, b, c\}$ forms a field. Be sure to define the zero and unit elements.
2.11.3 Let $(X, \mathcal{F})$ be a vector space, and $Y \subset X$ a subset of $X$. If $Y$ satisfies the closure property
\[
    y_1, y_2 \in Y,\ \alpha_1, \alpha_2 \in \mathcal{F} \implies \alpha_1 y_1 + \alpha_2 y_2 \in Y,
\]
show carefully using the definitions that $(Y, \mathcal{F})$ is a subspace of $(X, \mathcal{F})$.
Linear independence and bases
2.11.4 Which of the following sets are linearly independent?

(a) $\begin{bmatrix} 1 \\ 4 \\ 2 \end{bmatrix}, \begin{bmatrix} 0 \\ 2 \\ 3 \end{bmatrix}, \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}$ in $(\mathbb{R}^3, \mathbb{R})$.

(b) $\begin{bmatrix} 1 \\ 2 \\ 3 \end{bmatrix}, \begin{bmatrix} 4 \\ 5 \\ 6 \end{bmatrix}, \begin{bmatrix} 7 \\ 8 \\ 9 \end{bmatrix}, \begin{bmatrix} 7912 \\ 314 \\ 0.098 \end{bmatrix}$ in $(\mathbb{R}^3, \mathbb{R})$.

(c) $\begin{bmatrix} 1 \\ 4j \\ 2 \end{bmatrix}, \begin{bmatrix} 0 \\ 2 \\ j \end{bmatrix}, \begin{bmatrix} j \\ 0 \\ 0 \end{bmatrix}$ in $(\mathbb{C}^3, \mathbb{C})$.

(d) $\sin(t)$, $\cos(t)$, $t$ in $(C[0, 1], \mathbb{R})$ - the set of real-valued continuous functions on $[0, 1]$ over the real field.

(e) $e^{jt}$, $\sin(t)$, $\cos(t)$, $t$ in $(C[0, 1], \mathbb{C})$ - the set of complex-valued continuous functions on $[0, 1]$ over the complex field.
2.11.5 Determine which of the following sets of vectors are linearly independent in $\mathbb{R}^3$ by computing the determinant of an appropriate matrix.

(a) $\begin{bmatrix} 1 \\ 0 \\ 2 \end{bmatrix}, \begin{bmatrix} 2 \\ 0 \\ 1 \end{bmatrix}, \begin{bmatrix} 0 \\ 5 \\ 1 \end{bmatrix}$.

(b) $\begin{bmatrix} 4 \\ 5 \\ 1 \end{bmatrix}, \begin{bmatrix} 1 \\ 2 \\ 1 \end{bmatrix}, \begin{bmatrix} 2 \\ 1 \\ 3 \end{bmatrix}$.
2.11.6 For the vector space $(\mathbb{C}^n, \mathbb{R})$,

(a) Verify that this is indeed a vector space.

(b) What is its dimension?

(c) Find a basis for $(\mathbb{C}^n, \mathbb{R})$.
2.11.7 Let $\mathbb{R}^{2 \times 2}$ be the set of all $2 \times 2$ real matrices.

(a) Briefly verify that $\mathbb{R}^{2 \times 2}$ is a vector space under the usual matrix addition and scalar multiplication.

(b) What is the dimension of $\mathbb{R}^{2 \times 2}$?

(c) Find a basis for $\mathbb{R}^{2 \times 2}$.

2.11.8 Is the set $\{I, A, A^2\}$ linearly dependent or independent in $(\mathbb{R}^{2 \times 2}, \mathbb{R})$, with
\[
    A = \begin{bmatrix} 1 & 1 \\ 0 & 2 \end{bmatrix}?
\]
Representations of vectors
2.11.9 Given the basis
\[
    v_1 = \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix}, \quad v_2 = \begin{bmatrix} 2 \\ 0 \\ 0 \end{bmatrix}, \quad v_3 = \begin{bmatrix} 1 \\ 0 \\ 1 \end{bmatrix}
\]
and the vector
\[
    x = \begin{bmatrix} 3 \\ 2 \\ 1 \end{bmatrix} = \alpha_1 v_1 + \alpha_2 v_2 + \alpha_3 v_3,
\]

(a) Compute the reciprocal basis.

(b) Compute the Grammian.

(c) Compute the representation of $x$ with respect to $\{v_i\}$ using your answer to (a).

(d) Compute the representation of $x$ with respect to $\{v_i\}$ using your answer to (b).
Linear operators and matrices
2.11.10 Compute the null space, range space, and rank of the following matrices.

(a) $A = \begin{bmatrix} 1 & 1 & 2 \\ 0 & 2 & 2 \\ 0 & 3 & 3 \end{bmatrix}$.

(b) $A = \begin{bmatrix} 1 & 3 & 2 & 1 \\ 2 & 0 & 1 & 1 \\ 1 & 1 & 0 & 1 \end{bmatrix}$.
2.11.11 Let $b \in \mathbb{R}^n$ and $A \in \mathbb{R}^{n \times m}$. Give necessary and sufficient conditions on $A$ and $b$ in order that the linear system of equations $Ax = b$ has a solution $x \in \mathbb{R}^m$.
2.11.12 Let $\mathcal{A} \colon \mathbb{R}^3 \to \mathbb{R}^3$ be a linear operator. Consider the two sets $B = \{b_1, b_2, b_3\}$ and $C = \{c_1, c_2, c_3\}$ below:
\[
    B = \left\{ \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}, \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix}, \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix} \right\}, \qquad
    C = \left\{ \begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix}, \begin{bmatrix} 0 \\ 1 \\ 1 \end{bmatrix}, \begin{bmatrix} 1 \\ 0 \\ 1 \end{bmatrix} \right\}.
\]
It should be clear that these are bases for $\mathbb{R}^3$.

(a) Find the transformation $P$ relating the two bases.

(b) Suppose the linear operator $\mathcal{A}$ maps
\[
    \mathcal{A}(b_1) = \begin{bmatrix} 2 \\ 1 \\ 0 \end{bmatrix}, \quad \mathcal{A}(b_2) = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix}, \quad \mathcal{A}(b_3) = \begin{bmatrix} 0 \\ 4 \\ 2 \end{bmatrix}.
\]
Write down the matrix representation of $\mathcal{A}$ with respect to the basis $B$ and also with respect to the basis $C$.
2.11.13 Find the inverse of the block matrix
\[
    A = \begin{bmatrix} I & B \\ 0 & I \end{bmatrix},
\]
where $B$ is a matrix.
2.11.14 Consider the set $P_n$ of all polynomials of degree strictly less than $n$, with real coefficients, where $x \in P_n$ may be written $x = a_0 + a_1 t + \cdots + a_{n-1} t^{n-1}$.

(a) Verify that $P_n$ is a vector space, with the usual definitions of polynomial addition and scalar multiplication.

(b) Explain why $\{1, t, \ldots, t^{n-1}\}$ is a basis, and thus why $P_n$ is $n$-dimensional.

(c) Suppose $x = 10 - 2t + 2t^2 - 3t^3$. Find the representation $\alpha$ of $x$ with respect to the basis in (b) for $n = 4$.

(d) Consider differentiation, $\frac{d}{dt}$, as an operator $\mathcal{A} \colon P_n \to P_{n-1}$. That is, $\mathcal{A}(x) = \frac{d}{dt} x$. Show that $\mathcal{A}$ is a linear operator, and compute its null space and range space.

(e) Find $A$, the matrix representation of $\mathcal{A}$ for $n = 4$, using the basis in (b). Use your $A$ and $\alpha$ to compute the derivative of $x$ in (c).
Inner products and norms
2.11.15 Let $(V, \mathbb{C})$ be an inner product space.

(a) Let $x, y \in V$ with $x$ orthogonal to $y$. Prove the Pythagorean theorem:
\[
    \|x + y\|^2 = \|x\|^2 + \|y\|^2.
\]

(b) Prove that in an inner product space,
\[
    \|x + y\|^2 + \|x - y\|^2 = 2\|x\|^2 + 2\|y\|^2,
\]
where $\|\cdot\|$ is the norm induced by the inner product. This is called the Parallelogram law. Can you give a geometric interpretation of this law in $\mathbb{R}^2$?
Adjoint operators
2.11.16 If $\mathcal{A} \colon X \to Y$, where $X$ and $Y$ are inner product spaces, the adjoint $\mathcal{A}^*$ is a mapping $\mathcal{A}^* \colon Y \to X$. Hence, the composition $Z = \mathcal{A}^* \circ \mathcal{A}$ is a mapping from $X$ to itself. Prove that $\mathcal{N}(Z) = \mathcal{N}(\mathcal{A})$.
2.11.17 For $1 \leq p < \infty$, let $L_p$ denote functions $f \colon (-\infty, \infty) \to \mathbb{C}$ such that $\int_{-\infty}^{\infty} |f(s)|^p \, ds < \infty$. For $p = \infty$, $L_p$ denotes bounded functions $f \colon (-\infty, \infty) \to \mathbb{C}$. The set $L_p$ is a vector space over the complex field. Define the function $\mathcal{A} \colon L_p \to L_p$ as $\mathcal{A}(f) = a * f$, where $*$ denotes convolution. We assume that for some constants $C < \infty$, $c > 0$, we have the bound $|a(t)| \leq C e^{-c|t|}$ for all $t \in \mathbb{R}$. This is sufficient to ensure that $\mathcal{A} \colon L_p \to L_p$ for any $p$.

(a) First consider the case where $p = \infty$, and let $f_\omega(t) = e^{j\omega t}$, where $\omega \in \mathbb{R}$. Verify that $f_\omega$ is an eigenvector of $\mathcal{A}$. What is the corresponding eigenvalue?

(b) In the special case $p = 2$, $L_p$ is an inner product space with
\[
    \langle f, g \rangle_{L_2} = \int \overline{f(s)}\, g(s) \, ds, \qquad f, g \in L_2.
\]
Compute the adjoint $\mathcal{A}^* \colon L_2 \to L_2$, and find conditions on $a$ under which $\mathcal{A}$ is self adjoint.
2.11.18 Let $X = \mathbb{R}^n$ with the usual inner product. Let $Y = L_2^n[0, \infty)$, the set of functions $f \colon [0, \infty) \to \mathbb{R}^n$ with $\int_0^{\infty} |f(s)|^2 \, ds < \infty$. We define the inner product as before:
\[
    \langle f, g \rangle_Y = \int f^T(s)\, g(s) \, ds, \qquad f, g \in Y.
\]
For an $n \times n$ Hurwitz matrix $A$, consider the differential equation $\dot{x} = Ax$. By stability, for each initial condition $x_0 \in X$, there exists a unique solution $x \in Y$. Define $\mathcal{A} \colon X \to Y$ to be the map which takes the initial condition $x_0 \in X$ to the solution $x \in Y$.

(a) Explain why $\mathcal{A}$ is a linear operator.

(b) What is the null space $\mathcal{N}(\mathcal{A})$? What is the rank of $\mathcal{A}$?

(c) Compute the adjoint $\mathcal{A}^*$.
Eigenvectors
2.11.19 Find the eigenvalues of the matrix
\[
    A = \begin{bmatrix} 1 & 1 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 4 & 5 & 1 & 5 \\ 1 & 2 & 0 & 1 \end{bmatrix}.
\]
2.11.20 An $n \times n$ matrix $A$ is called positive definite if it is symmetric,
\[
    A = A^* = \bar{A}^T,
\]
and if for any $x \neq \theta$, $x \in \mathbb{R}^n$,
\[
    x^* A x > 0.
\]
The matrix $A$ is positive semi-definite if the strict inequality in the above equation is replaced by $\geq$. Show that for a positive definite matrix,

(a) Every eigenvalue is real and strictly positive.

(b) If $v_1$ and $v_2$ are eigenvectors corresponding to different eigenvalues $\lambda_1$ and $\lambda_2$, then $v_1$ and $v_2$ are orthogonal.
2.11.21 For a square matrix $X$ suppose that (i) all of the eigenvalues of $X$ are strictly positive, and (ii) the domain of $X$ possesses an orthogonal basis consisting entirely of eigenvectors of $X$. Show that $X$ is a positive definite matrix (and hence that these two properties completely characterize positive definite matrices).

Hint: Make a change of basis using the modal matrix $M = [\, v_1 \ \cdots \ v_n \,]$, where $\{v_i\}$ is an orthonormal basis of eigenvectors.
2.11.22 Left eigenvectors $\ell_i$ of an $n \times n$ matrix $A$ are defined by $\ell_i A = \lambda_i \ell_i$, where the $\ell_i$ are row vectors ($1 \times n$ matrices) and the $\lambda_i$ are the eigenvalues of $A$. Assume that the eigenvalues are distinct.

(a) How are the $\ell_i$ related to the ordinary (right) eigenvectors of $A^*$?

(b) How are the $\ell_i$ related to the reciprocal (dual) eigenvectors of $A$?