
EE547: Fall 2018

Lecture 1: Introduction
Lecturer: L.J. Ratliff

Disclaimer: These notes have not been subjected to the usual scrutiny reserved for formal publications.
They may be distributed outside this class only with the permission of the Instructor.

1.1 Today
• Introduction to the broad concepts of modeling and analysis of engineering systems
– modeling
– analysis and control
– verification
– simulation
– validation
• Course overview

1.2 Introduction: What are dynamical systems?

We are interested in studying the dynamical behavior of systems in order to analyze and control them.
Examples of systems exhibiting dynamical behavior, with possible inputs to be controlled and outputs to be
measured, include:

system             | input                       | output
-------------------|-----------------------------|----------------------------------
steam engine       | amount of fuel              | engine speed
aircraft           | rudder and elevator angles  | altitude
quadrotor          | thrust & torque             | position
hard drive disks   | motor voltage               | disk speed
rainforest ecology | yearly rainfall             | population of species of interest
cooling system     | flow-rate of coolant        | temperature along cooling line

To study such systems we need a model. What is in a model?

• A mathematically precise description of system behavior


• Dynamic models specify how the system changes over time
• Balancing act:
– Can’t model movement of every atom of a system
– Therefore, make simplifications
– Need to retain essential properties (salient features) (e.g., a model that predicts negative population, or coolant temperature below absolute zero, is probably not useful)

1.3 Beyond Classical Control?

This course is about linear systems theory; a core aspect of that is controlling linear systems.

Control Theory: focuses on modeling systems and designing inputs to adjust behavior—e.g., stabilize, track
a trajectory, etc.

Classical control (developed largely pre-1960s) largely adopts an input-output approach:

reference ──▶ controller ──▶ System ──▶ output
                  ▲                       │
                  └─────── feedback ──────┘

Key theoretical tool: the Fourier/Laplace transform, i.e., frequency-domain analysis (root locus, frequency response).
Return to differential equations beginning in the ’60s to address:

• numerical simulation
• many inputs/outputs
• ill-defined inputs/outputs
• non-linearities
• optimality

Modern control theory (from roughly the 1950s onward), i.e., the state-space approach: overcame some limitations of classical control, enabling, e.g., control of fighter jets. (The related "state-space" approach to ODEs is over 100 years old; control theorists simply adopted it.)

• The system/model state is defined to capture all relevant information about the past


• State often denoted by x ∈ Rn , where n is state-space dimension.

What is the state for the above examples?

• engine speed/velocity
• position and yaw/pitch/roll and velocities (aircraft and quadrotor)
• disk speed/velocity
• species population, food supply, predator population, etc.
• temperature along cooling line

1.4 Dynamical Systems (non-comprehensive) Taxonomy

(This taxonomy is not comprehensive, as there are other types of systems that combine various aspects in the diagram; however, it gives a sense of the broad categories of dynamical systems.)

dynamical systems (mathematical models)
├─ continuous time
│   ├─ non-linear
│   │   ├─ time-varying
│   │   └─ time-invariant
│   └─ linear
│       ├─ time-varying
│       └─ time-invariant
├─ discrete time
│   ├─ non-linear
│   │   ├─ time-varying
│   │   └─ time-invariant
│   └─ linear
│       ├─ time-varying
│       └─ time-invariant
└─ discrete state
This course:
given linear differential or difference equations, conduct rigorous analysis and design.

1.5 Finite Dimensional Systems as Models

How to describe a dynamical system? Some options:

• Database/look-up table containing all inputs and resulting outputs. (What if output depends on input
history? What if desired input is not in table?)
• Function/routine in computer code
• Set of mathematical equations

As indicated in our diagram, we will study continuous-time and discrete-time finite-dimensional systems
described by ordinary differential equations or difference equations.

So, in continuous time this might look like the following.



Continuous-Time: (t ∈ R+ := [0, ∞))

ẋ = f (t, x, u), x ∈ Rn , u ∈ Rm
y = g(t, x, u), y ∈ Rp

We have the following nomenclature:

• x is the state,
• u is the (control) input,
• y is the output (observation)

Discrete-Time: (k ∈ N = {0, 1, 2, . . .})

x[k + 1] = f (k, x, u), x ∈ Rn , u ∈ Rm


y[k] = g(k, x, u), y ∈ Rp
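In code, simulating such a discrete-time model is just a loop that alternates the output map and the state update. The particular scalar f and g below are hypothetical examples, not from the notes:

```python
def f(k, x, u):
    # hypothetical state update: geometric decay plus input
    return 0.9 * x + u

def g(k, x, u):
    # hypothetical output map: observe the state directly
    return x

x = 1.0          # initial state x[0]
ys = []
for k in range(5):
    u = 0.0      # zero input, for illustration
    ys.append(g(k, x, u))   # y[k] = g(k, x[k], u[k])
    x = f(k, x, u)          # x[k+1] = f(k, x[k], u[k])

print(ys)  # state decays geometrically: 1.0, 0.9, 0.81, ...
```

Any f and g with these signatures slot into the same loop, which is exactly why the general form above is so convenient.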

What about apparently more exotic systems with higher-order derivatives?
For instance, consider

z^(n) = f(t, z, z^(1), z^(2), . . . , z^(n−1))

where z^(n) denotes the n-th derivative of the function z(t). For simplicity, assume z ∈ R. Define new state variables

x1 = z, x2 = z^(1), . . . , xn = z^(n−1)
Then, we have

ẋ1 = x2
ẋ2 = x3
  ⋮
ẋn = f(t, x1, x2, . . . , xn)

Thus, without loss of generality (w.l.o.g), we study first-order differential equations.
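This reduction is easy to mechanize. The sketch below (with a hypothetical second-order f, so n = 2) builds the first-order vector field F for x = (z, ż) and takes a few forward-Euler steps to confirm the reduced form is directly usable:

```python
def f(t, z, zdot):
    # hypothetical second-order dynamics: a damped oscillator
    return -4.0 * z - 0.5 * zdot

def F(t, x):
    # x = [x1, x2] = [z, zdot]; xdot = [x2, f(t, x1, x2)]
    return [x[1], f(t, x[0], x[1])]

# a few forward-Euler steps as a quick check on the reduction
x, t, dt = [1.0, 0.0], 0.0, 0.01
for _ in range(100):
    dx = F(t, x)
    x = [x[0] + dt * dx[0], x[1] + dt * dx[1]]
    t += dt
```

For n > 2 the pattern is identical: the first n − 1 components of F simply shift the state, and only the last component calls f.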

Example Systems: Vibrating Springs


We consider the motion of an object with mass m at the end of a spring that is either vertical (as in Figure 1.1) or horizontal on a level surface (as in Figure 1.2). Hooke's Law says that if the spring is stretched (or compressed) z units from its natural length, then it exerts a force that is proportional to z, that is,

restoring force = −kz

where k is a positive constant called the spring constant. If we ignore any external resisting forces (due to air resistance or friction), then by Newton's Second Law (force equals mass times acceleration) we have

mz̈ = −kz or mz̈ + kz = 0

Let x1 = z and x2 = ż. Then ẋ1 = ż = x2 and ẋ2 = z̈ = −(k/m)z = −(k/m)x1. Hence,

ẋ1 = x2
ẋ2 = −(k/m)x1
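As a quick sanity check (a sketch, not part of the notes' development), these first-order spring equations can be integrated with forward Euler and compared against the known analytic solution z(t) = z(0) cos(ωt), ω = √(k/m); the values of k and m below are illustrative:

```python
import math

k, m = 4.0, 1.0                 # illustrative spring constant and mass
omega = math.sqrt(k / m)        # natural frequency, here 2.0

x1, x2 = 1.0, 0.0               # start stretched by z(0) = 1, at rest
t, dt = 0.0, 1e-4
while t < 1.0:
    # forward-Euler step of x1dot = x2, x2dot = -(k/m) x1
    x1, x2 = x1 + dt * x2, x2 + dt * (-(k / m) * x1)
    t += dt

print(x1, math.cos(omega * t))  # Euler result tracks z(t) = cos(2t)
```

With a small step size the numerical x1 agrees with cos(ωt) to well under a percent over this horizon, which is a useful check that the first-order rewrite preserved the dynamics.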

Figure 1.1: Vertical Pull (mass m hanging from a spring, displaced z from the equilibrium position)

Figure 1.2: Horizontal Pull (mass m on a level surface, displaced z from 0)

In this course, we will focus on (finite-dimensional) linear time-varying (LTV) systems. What do these look like notationally?

Continuous Time:

ẋ = A(t)x + B(t)u, x ∈ Rn , u ∈ Rm
y = C(t)x + D(t)u, y ∈ Rp

where
• t ∈ R: time
• x(t) ∈ Rn : state (vector)
• u(t) ∈ Rm : input or control
• y(t) ∈ Rp : output
• A(t) ∈ Rn×n : dynamics (matrix)
• B(t) ∈ Rn×m : input matrix
• C(t) ∈ Rp×n : output or sensor matrix
• D(t) ∈ Rp×m : feedthrough matrix
These equations are often written compactly, suppressing the time argument, as

ẋ = Ax + Bu
y = Cx + Du

• A continuous-time (CT) LDS is a first-order vector differential equation

• also called state equations, or an m-input, n-state, p-output LDS

Discrete Time:

x[k + 1] = A[k]x[k] + B[k]u[k], x ∈ Rn , u ∈ Rm


y[k] = C[k]x[k] + D[k]u[k], y ∈ Rp

Finally, we will further specialize our results to linear time-invariant (LTI) systems.

Continuous Time LTI:

ẋ = Ax + Bu, x ∈ Rn , u ∈ Rm
y = Cx + Du, y ∈ Rp

where A ∈ Rn×n , B ∈ Rn×m , C ∈ Rp×n , and D ∈ Rp×m are static matrices.

Discrete Time LTI:

x[k + 1] = Ax[k] + Bu[k], x ∈ Rn , u ∈ Rm


y[k] = Cx[k] + Du[k], y ∈ Rp
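As an illustration, simulating a discrete-time LTI system needs nothing more than repeated matrix-vector products; the matrices below are illustrative (n = 2, m = p = 1), not tied to any example above:

```python
# Simulate x[k+1] = A x[k] + B u[k], y[k] = C x[k] + D u[k]
# for a hypothetical 2-state, single-input, single-output system.
A = [[0.5, 1.0],
     [0.0, 0.5]]
B = [[0.0], [1.0]]
C = [[1.0, 0.0]]
D = [[0.0]]

def matvec(M, v):
    # plain matrix-vector product over nested lists
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

x = [0.0, 0.0]                  # zero initial state
ys = []
for k in range(4):
    u = [1.0]                   # unit-step input
    ys.append(matvec(C, x)[0] + matvec(D, u)[0])   # y[k]
    x = [a + b for a, b in zip(matvec(A, x), matvec(B, u))]  # x[k+1]

print(ys)  # → [0.0, 0.0, 1.0, 2.0]
```

In practice one would use a numerical library for the linear algebra; the nested-list version just makes the recursion explicit.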

Some other points:


• most linear systems encountered are time-invariant: A, B, C, D are constant as above, i.e., they don't depend on t
• when there is no input u (hence, no B or D), the system is called autonomous
• very often there is no feedthrough, i.e., D = 0
• when u(t) and y(t) are scalar, the system is called single-input, single-output (SISO); when the input and output dimensions exceed one, it is called multiple-input, multiple-output (MIMO)
History Lesson:
• parts of LDS theory can be traced to 19th century
• builds on classical circuits & systems (1920s on) (transfer functions . . . ) but with more emphasis on linear
algebra
• first engineering application: aerospace, 1960s
• transitioned from a specialized topic to a ubiquitous one in the 1980s (just like digital signal processing, information theory, . . . )
Many dynamical systems are nonlinear, yet
• most techniques for nonlinear systems are based on linear methods, e.g., linearization to determine stability or to construct extended Kalman filters
• methods for linear systems often work unreasonably well in practice for nonlinear systems
• if you don't understand linear dynamical systems, you certainly can't understand nonlinear dynamical systems
