AA547 Lecture 1
Lecture 1: Introduction
Lecturer: L.J. Ratliff
Disclaimer: These notes have not been subjected to the usual scrutiny reserved for formal publications.
They may be distributed outside this class only with the permission of the Instructor.
1.1 Today
• Introduction to the broad concepts of modeling and analysis of engineering systems
– modeling
– analysis and control
– verification
– simulation
– validation
• Course overview
We are interested in studying the dynamical behavior of systems in order to analyze and control them.
Examples of systems exhibiting dynamical behavior, with possible inputs to be controlled and outputs to be
measured, are listed below.
This course is about linear systems theory, and a core aspect of that is controlling linear systems.
Control Theory: focuses on modeling systems and designing inputs to adjust their behavior (e.g., stabilize
a system, track a trajectory, etc.)
Key theoretical tool of classical control: the Fourier/Laplace transform, i.e., frequency-domain analysis
(root locus, frequency response)
Return to differential equations beginning in the ’60s to address:
• numerical simulation
• many inputs/outputs
• ill-defined inputs/outputs
• non-linearities
• optimality
Modern control theory (∼1950s), i.e., the state-space approach: overcame some limitations of classical
control, enabling, e.g., control of fighter jets (the related "state-space" approach to ODEs is over 100 years
old; control theorists simply adopted it).
Examples of quantities we might model, measure, or control include:
• engine speed/velocity
• position, yaw/pitch/roll, and their velocities (aircraft and quadrotor)
• disk speed/velocity
• species population, food supply, predator population, etc.
• temperature along a cooling line
(This is not comprehensive, as there are other types of systems combining various aspects in the diagram;
however, this picture gives a sense of the broad categories of dynamical systems.)
[Diagram: dynamical systems and their mathematical models, each divided into linear and non-linear classes.]
A mathematical model of a system might take one of several forms:
• a database/look-up table containing all inputs and resulting outputs (What if the output depends on the
input history? What if the desired input is not in the table?)
• a function/routine in computer code
• a set of mathematical equations
As indicated in our diagram, we will study continuous-time and discrete-time finite-dimensional systems
described by ordinary differential equations or difference equations.
ẋ = f(t, x, u), x ∈ R^n, u ∈ R^m
y = g(t, x, u), y ∈ R^p
• x is the state,
• u is the (control) input,
• y is the output (observation)
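As a concrete sketch (a toy example of our own choosing, not one from the notes), the state-space form above can be represented in code as a pair of functions f and g, which a numerical solver then consumes:

```python
# Toy scalar system x_dot = -x + u, y = x, written in the general
# state-space form x_dot = f(t, x, u), y = g(t, x, u).
def f(t, x, u):
    return -x + u          # dynamics: f(t, x, u)

def g(t, x, u):
    return x               # output map: g(t, x, u)

def euler_step(f, t, x, u, dt):
    # One forward-Euler step, just to show how a solver consumes f.
    return x + dt * f(t, x, u)

# Starting from x = 1 with zero input, one step of size 0.1:
x_next = euler_step(f, 0.0, 1.0, 0.0, 0.1)
print(x_next, g(0.0, x_next, 0.0))   # 0.9 0.9
```

The system itself is nothing more than these two maps; everything else (simulation, analysis, control design) is built on top of them.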
What about apparently more exotic systems with higher order derivatives?
For instance, consider
z^(n) = f(t, z, z^(1), z^(2), . . . , z^(n−1))
where z (n) indicates the n–th derivative of the function z(t). For simplicity, assume z ∈ R. Define new state
variables
x1 = z, x2 = z^(1), . . . , xn = z^(n−1)
Then, we have
ẋ1 = x2
ẋ2 = x3
⋮
ẋn = f(t, x1, x2, . . . , xn)
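This change of variables is mechanical, so it can be written once and reused. The sketch below (our own code, with names of our own choosing) builds the first-order vector field from f:

```python
# Given f(t, z, z', ..., z^(n-1)), return the first-order vector field
# F(t, x) for the state x = [x1, ..., xn] = [z, z', ..., z^(n-1)]:
# the derivative of (x1, ..., x_{n-1}) is (x2, ..., xn), and the
# derivative of xn is f evaluated at the state.
def companion_field(f):
    def F(t, x):
        return x[1:] + [f(t, *x)]
    return F

# Example: z'' = -z, i.e., f(t, z, zdot) = -z; state x = [z, zdot].
F = companion_field(lambda t, z, zdot: -z)
print(F(0.0, [1.0, 0.0]))   # [0.0, -1.0]: xdot1 = x2, xdot2 = -x1
```

Any standard ODE solver, which expects a first-order system, can now integrate F directly.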
Consider, for example, a mass m attached to a spring, and let z denote the displacement of the mass from
its equilibrium position. The spring exerts a restoring force −kz, where k is a positive constant called the
spring constant. If we ignore any external resisting forces (due to air resistance or friction) then, by
Newton's Second Law (force equals mass times acceleration), we have
m z̈ = −k z
Let x1 = z and x2 = ż. Then, ẋ1 = ż = x2 and ẋ2 = z̈ = −(k/m) z = −(k/m) x1. Hence,
ẋ1 = x2
ẋ2 = −(k/m) x1
[Figure: mass m on a spring, shown at the equilibrium position (z = 0) and displaced by z.]
In this course, we will focus on (finite-dimensional) linear time-varying (LTV) systems. Notationally,
these look as follows.
Continuous Time:
ẋ = A(t)x + B(t)u, x ∈ R^n, u ∈ R^m
y = C(t)x + D(t)u, y ∈ R^p
where
• t ∈ R: time
• x(t) ∈ R^n: state (vector)
• u(t) ∈ R^m: input or control
• y(t) ∈ R^p: output
• A(t) ∈ R^{n×n}: dynamics (matrix)
• B(t) ∈ R^{n×m}: input matrix
• C(t) ∈ R^{p×n}: output or sensor matrix
• D(t) ∈ R^{p×m}: feedthrough matrix
The equations are often written with the time dependence of the matrices suppressed:
ẋ = Ax + Bu
y = Cx + Du
Discrete Time:
x(k + 1) = A(k)x(k) + B(k)u(k), x(k) ∈ R^n, u(k) ∈ R^m
y(k) = C(k)x(k) + D(k)u(k), y(k) ∈ R^p
Finally, we will further specialize our results to linear time-invariant (LTI) systems, where the matrices
are constant:
ẋ = Ax + Bu, x ∈ R^n, u ∈ R^m
y = Cx + Du, y ∈ R^p
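In discrete time, an LTI system is just a matrix recursion, which makes it easy to simulate. The sketch below (a 2×2 example of our own choosing, with plain lists instead of a matrix library) implements one step of x(k+1) = Ax(k) + Bu(k), y(k) = Cx(k) + Du(k):

```python
# Matrix-vector product over plain nested lists.
def mat_vec(M, v):
    return [sum(m_ij * v_j for m_ij, v_j in zip(row, v)) for row in M]

# One step of the discrete-time LTI recursion.
def step(A, B, C, D, x, u):
    x_next = [a + b for a, b in zip(mat_vec(A, x), mat_vec(B, u))]
    y = [c + d for c, d in zip(mat_vec(C, x), mat_vec(D, u))]
    return x_next, y

A = [[1.0, 1.0], [0.0, 1.0]]   # a discrete double integrator (unit step)
B = [[0.0], [1.0]]
C = [[1.0, 0.0]]               # observe position only
D = [[0.0]]

x, y = step(A, B, C, D, [0.0, 0.0], [1.0])
print(x, y)   # [0.0, 1.0] [0.0]: the input first enters the velocity state
```

Iterating `step` over an input sequence produces the full state and output trajectories; this is the discrete analogue of integrating the continuous-time equations.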