
Lecture Notes on Digital Control

Giacomo Baggio, Mauro Bisiacco,
Augusto Ferrante and Francesco Ticozzi
Contents

1 Introduction 5
1.1 Digital Control . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.2 Advantages and Disadvantages of Digital Control . . . . . . . . . 6

2 Discrete-time Signals and Systems and Z-Transform 9


2.1 Input-output analysis of discrete-time LTI systems . . . . . . . . . 9
2.2 Discrete-time signals . . . . . . . . . . . . . . . . . . . . . . . . . 10
2.3 Discrete-time linear SISO systems . . . . . . . . . . . . . . . . . . 11
2.4 Z-Transform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
2.5 Properties of the Z-Transform . . . . . . . . . . . . . . . . . . . . 16
2.6 Inverse Z-Transform . . . . . . . . . . . . . . . . . . . . . . . . . 25
2.6.1 Existence of the inverse Z-Transform . . . . . . . . . . . . 25
2.6.2 Computation of the inverse Z-Transform . . . . . . . . . . 26

3 Notions of Sampling Theory 33


3.1 Sampling by Amplitude Modulation . . . . . . . . . . . . . . . . . 33
3.2 Sampling and relations between Z-Transform and the Laplace
transform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
3.3 Connections Between the Laplace Transform and the Z-transform 37
3.4 Anti-Aliasing Filters . . . . . . . . . . . . . . . . . . . . . . . . . 39
3.5 Comments on quantization . . . . . . . . . . . . . . . . . . . . . . 41

4 Analysis of Discrete-Time Systems 43


4.1 Stability of Discrete-Time Systems . . . . . . . . . . . . . . . . . 47
4.2 Criteria for Stability: Schur Polynomials . . . . . . . . . . . . . . 47
4.2.1 Jury Test . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
4.2.2 Bilinear (or Möbius) Transform . . . . . . . . . . . . . . . 49

5 Interconnections between continuous-time and discrete-time systems 51
5.1 Conversion between continuous and discrete systems . . . . . . . . 53
5.2 Stability of interconnections . . . . . . . . . . . . . . . . . . . . . 57
5.2.1 Stability of a negative feedback loop . . . . . . . . . . . . 57
5.2.2 Internal stability of an interconnection . . . . . . . . . . . 61

6 Frequency response 63
6.1 Nyquist plot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64

7 Control problem and controller design 65


7.1 The control problem . . . . . . . . . . . . . . . . . . . . . . . . . 65
7.1.1 Specifications on the asymptotic regime: Tracking . . . . . 65
7.1.2 Performance specifications on the transient regime . . . . . 68
7.1.3 Translation of the time-domain performance specifications:
a brief summary . . . . . . . . . . . . . . . . . . . . . . . . 70
7.1.4 Control system design . . . . . . . . . . . . . . . . . . . . 73

8 Digital controller synthesis: Emulation methods 83


8.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
8.2 Digital conversion of analog C(s) design (emulation) . . . . . . . . 84
8.3 P.I.D. controllers . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
8.4 Reminders about the phase margin based synthesis . . . . . . . . 95
8.5 Elementary networks design . . . . . . . . . . . . . . . . . . . . . 96
8.6 P.I.D. - P.I. - P.D. design based on the phase margin . . . . . . . 98
8.6.1 P.I.D. design . . . . . . . . . . . . . . . . . . . . . . . . . 98
8.6.2 P.D. design . . . . . . . . . . . . . . . . . . . . . . . . . . 99
8.6.3 P.I. design . . . . . . . . . . . . . . . . . . . . . . . . . . . 99

9 Digital Controllers Synthesis: Direct Synthesis Methods 101


9.1 Discrete-time direct synthesis by “canceling” . . . . . . . . . . . . 101
9.2 Dahlin’s method . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
9.3 Choice of W (z) based on dominant pole arguments . . . . . . . . 105
9.4 Direct Synthesis from a different perspective . . . . . . . . . . . . 107
9.5 Smith delay compensation . . . . . . . . . . . . . . . . . . . . . . 108
9.6 Recalling Diophantine equations . . . . . . . . . . . . . . . . . . . 111
9.7 Controller design via diophantine equations . . . . . . . . . . . . 112
9.8 Digital controller synthesis: Deadbeat tracking . . . . . . . . . . . 114
9.9 Examples of dead-beat tracking for constant signals . . . . . . . . 118
9.10 Dead-beat control for P deriving from a sampling/holding . . . . 120

A Table of Most Common Z Transforms 123

B Table of Most Common Laplace Transforms 125

C Notions of Control in Continuous-Time 127


C.1 Routh Test . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
C.2 Root Locus . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128
C.3 Nyquist Plot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130
Chapter 1

Introduction

1.1 Digital Control


Nowadays the vast majority of control systems, especially in industrial applica-
tions, are implemented on digital devices (µP, DSP, FPGA, etc.), which use and
process sampled and quantized signals (see Fig.1.1). It is thus of paramount
importance for any control system engineer to have a deep understanding of the
functioning principles of digital control systems. This book aims to introduce the
main approaches to the design of digital control systems, as well as the principles
and mathematical tools that are needed to master them.
Digital control systems are sometimes misleadingly regarded as mere approxi-
mations of controllers for analog systems (as traditionally designed for electrical,
hydraulic, and chemical systems and plants): this is an unsatisfactory view, how-
ever, as it prevents one from fully exploiting the potential of digital control
implementations.
We shall see that when the controller is obtained by approximation
(or emulation) of an analog design, the performance of the controlled system is,
in the best-case scenario, close to that of the continuous controller.
On the other hand, when the controller is designed directly to operate in the
digital domain, its performance can potentially surpass that of an analog controller.
The possibility of obtaining dynamical behaviours that have no analogue in the
analog world, such as finite impulse response systems, opens up new opportunities
in terms of control design.
In the light of these considerations, it is only natural that a book in digital
control be of hybrid nature, borrowing from the approaches of both methodolog-
ical and applied courses. The first part of the book will be dedicated to the
study of mathematical methods for the modeling and analysis of sampled-data
signals and systems, mimicking the familiar route typically taken to introduce
classical control in the frequency domain. In particular, the Z-transform will
be introduced and used to manipulate discrete-time linear systems, developing a
dedicated input-output transfer function approach and modal analysis. A par-
ticular emphasis will be on the effect of sampling, not only on signals but also on

systems and their interconnections. As almost always happens in engineering,
the theoretical foundations will be interspersed with good-practice considerations
and approximate formulas that stem from experience and provide tentative, first-
attempt choices for achieving the desired control performance. In the second
part the focus will be on design techniques for digital feedback controllers for both
intrinsic discrete-time systems and sampled continuous-time systems. Emulation
techniques will be introduced first, followed by direct digital design ones. The
control problems that will be treated will include not only regulation (set-point
control), but also asymptotic tracking or finite-time, dead-beat control.

[Figure 1.1 shows a block diagram with three parts: a Synthesis Block (control algorithm design and adjustment, based on process/signal models and process/signal analysis), an Information Acquisition/Processing Block (state estimation), and a Physical Block (control algorithm, actuator, process, sensors), connected through the control input u, the output y and the measurement noise n.]

Figure 1.1: A paradigmatic architecture for a digital control system, including
subsystems dedicated to signal processing, identification and adaptation. [1]

1.2 Advantages and Disadvantages of Digital Control
The main advantages connected to the use of digital control systems include:

1. On the hardware level:

• flexibility (the same type of controller can be interfaced to different
  types of physical systems, even simultaneously);
• standardization and robustness of the components;
• reliability;
• time invariance (negligible aging of digital components);
• less noise;

• smaller dimensions and lower costs (due to the use of mass-produced


standard components);
• Easy maintenance and substitution;

2. On the design and software level:

• better performance achievable by the controlled system;


• simpler algorithms to be implemented;
• possible use of Finite Impulse Response – FIR systems;
• easily generated reference signals;
• ease of reconfiguration;
• operation scheduling;
• easier monitoring and data acquisition;
• possible inclusion of a user interface.

On the other hand, there are some disadvantages, to be carefully considered in
the design and implementation phases, related to:
1. the technology and its functioning environment:

• digital devices need power;


• digital devices do not work in extreme conditions (extreme tempera-
tures, pressures, radiations, etc.);

2. digital signal processing:

• quantization is nonlinear, and potentially introduces noise and other


undesirable effects that are hard to model and account for (e.g. limit
cycles);
• there can be problems introduced by sampling (e.g. aliasing);
• digital devices typically exhibit some delay in the acquisition and pro-
cessing of signals;
• difficult real-time processing for high-frequency signals;

3. installation and maintenance:

• testing can be difficult;


• state-of-the-art technologies change quickly;
• the development of dedicated software requires significant resources in
terms of time and costs;
• these devices require dedicated programming languages;
• it can be difficult to re-train personnel accustomed to analog systems.
Chapter 2

Discrete-time Signals and Systems and Z-Transform

2.1 Input-output analysis of discrete-time LTI systems
In basic control courses, continuous-time systems and their interconnections have
been analyzed. In particular, the input-output (I/O) behaviour of continuous-
time systems may be conveniently described by using the Laplace transform.
The success of the continuous-time I/O analysis is a consequence of the following
assumptions and facts:

1. We consider linear, time-invariant, causal systems with lumped param-


eters that are described by ordinary differential equations (ODE). These
equations are homogeneous for autonomous systems (i.e. systems without
inputs), while in the general case when inputs are present, the correspond-
ing ODE is not homogeneous.

2. By employing the Laplace transform, the solution of the ODE and the
analysis of the structure of said solution can be handled by simple algebraic
techniques.

3. Linearity of the systems implies that the solution can be additively de-
composed as the sum of two signals: the free response depending only on
the initial conditions and the forced response depending only on the input
signal.

4. The I/O behaviour of the system, i.e. the properties of the function map-
ping a certain input signal to the corresponding forced response can be
analyzed by considering the transfer function (TF) of the system. For the
linear, time-invariant, causal systems with lumped parameters the TF is a
rational function of the complex variable. Such variable is usually denoted
by s.

5. The position of the poles of the TF in the complex plane accounts for
many important properties of the systems including BIBO stability, and
important features of the step response of the system.

The first part of this book is devoted to introducing the tools needed for
deriving a similar approach for discrete-time signals and systems and for sampled
signals.

2.2 Discrete-time signals


Definition 2.1. A discrete-time signal x(k) is a sequence of real or complex
values (or samples), i.e. a function x : Z (or Z+) → R (or C), k ↦ x(k).¹

Some relevant sets of signals are:

1. Bounded signals: this is the vector space

      ℓ∞ := { x(k) : ∃M < +∞ s.t. |x(k)| ≤ M ∀k ∈ Z }.

   This vector space is endowed with the norm:

      ‖x‖_∞ := inf M = sup_k |x(k)|.

   With this norm ℓ∞ is a Banach space, i.e. all Cauchy sequences in ℓ∞
   converge to an element of ℓ∞.

2. Finite-energy signals: this is the vector space

      ℓ2 := { x(k) : Σ_k |x(k)|² < ∞ }.

   This vector space is endowed with the inner product:

      ⟨x, y⟩ = Σ_k x*(k) y(k),

   which induces the norm:

      ‖x‖_2 = ( Σ_k x*(k) x(k) )^{1/2} = ( Σ_k |x(k)|² )^{1/2}.

   Again, all Cauchy sequences in ℓ2 converge to an element of ℓ2, so that
   ℓ2 is a Hilbert space.
¹ The symbol Z+ denotes the set of nonnegative integers (so that 0 ∈ Z+).

3. Absolutely summable signals: this is the vector space

      ℓ1 := { x(k) : Σ_k |x(k)| < ∞ }.

   It is endowed with the norm:

      ‖x‖_1 = Σ_k |x(k)|

   and is a Banach space with respect to this norm.

Remark 2.1. More generally, we can define the vector space of sequences with
summable p-th power, with 1 ≤ p ≤ ∞:

   ℓp := { x(k) : Σ_k |x(k)|^p < ∞ },   1 ≤ p ≤ ∞.

Each of these spaces is endowed with the norm

   ‖x‖_p := ( Σ_k |x(k)|^p )^{1/p}

and is a Banach space with respect to this norm. An interesting feature, which
has no counterpart for continuous-time signals, is the following strict inclusion:

   ℓp ⊊ ℓs,   ∀ 1 ≤ p < s ≤ ∞.

2.3 Discrete-time linear SISO systems


Let us consider a system understood as a transformation mapping an input se-
quence {u(k)}_{k=−∞}^{k=+∞} to an output sequence {y(k)}_{k=−∞}^{k=+∞}. By imposing that the
system is linear we get the following relation:

   y(k) = Σ_{j≠k} a_j(k) y(j) + Σ_l b_l(k) u(l).   (2.1)

If we further impose that the system is causal, i.e. the output at each “time” k
only depends on the input at times l ≤ k, we get the following representation
   y(k) = Σ_{j=−∞}^{k−1} a_j(k) y(j) + Σ_{l=−∞}^{k} b_l(k) u(l).   (2.2)

In practice, to effectively implement any signal processing algorithm only a finite


number of samples can be stored on the limited memory of the processor. By
imposing this constraint to (2.2) we limit the class of systems to those for which

the output at each “time” k only depends on the last n samples of the output
itself and on the last m + 1 samples of the input (where n and m are natural
numbers and for the input we consider m + 1 samples because the sample at “the
present time” k of the input may be used). In this case, in place of (2.2) we have
the equation
   y(k) = Σ_{j=k−n}^{k−1} a_j(k) y(j) + Σ_{l=k−m}^{k} b_l(k) u(l).   (2.3)

Finally, if we impose that the system is time-invariant, i.e. its behaviour does
not change in time, then the coefficients aj (k) only depend on the difference k − j
and the coefficients bl (k) only depend on the difference k − l, so that

   y(k) = Σ_{j=k−n}^{k−1} a_{k−j} y(j) + Σ_{l=k−m}^{k} b_{k−l} u(l)
        = Σ_{j=1}^{n} a_j y(k − j) + Σ_{l=0}^{m} b_l u(k − l).   (2.4)

We will develop the I/O analysis for systems of the form (2.4). Systems of this
form are often obtained by discretizing continuous-time equations modeling, for
example, classical mechanical or electrical systems. We shall see, however, that there are
discrete-time systems described by (2.4) that cannot be obtained by discretizing
a continuous-time linear system of dynamical equations.
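For readers who want to experiment, the recursion (2.4) is straightforward to implement in code. The following Python sketch (not part of the original notes; the coefficients and the step input are arbitrary illustrative choices) simulates (2.4) directly:

```python
# Minimal sketch: simulating the difference equation (2.4).
# Coefficients a_j, b_l and the input are arbitrary illustrative choices.
import numpy as np

def simulate(a, b, u, y0=None):
    """y(k) = sum_{j=1..n} a[j-1]*y(k-j) + sum_{l=0..m} b[l]*u(k-l)."""
    n, m = len(a), len(b) - 1
    y = np.zeros(len(u))
    past_y = list(y0) if y0 is not None else [0.0] * n   # y(-1), ..., y(-n)
    for k in range(len(u)):
        acc = 0.0
        for j in range(1, n + 1):                         # output feedback terms
            acc += a[j - 1] * (y[k - j] if k - j >= 0 else past_y[j - k - 1])
        for l in range(m + 1):                            # input terms (u = 0 for k < 0)
            acc += b[l] * (u[k - l] if k - l >= 0 else 0.0)
        y[k] = acc
    return y

a = [1.5, -0.7]      # a_1, a_2
b = [0.1, 0.05]      # b_0, b_1
u = np.ones(30)      # unit-step input
print(simulate(a, b, u)[:5])
```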

Example 2.1. Let us consider the following linear differential equation:

   a_2 (d²/dt²) y(t) + a_1 (d/dt) y(t) = b_0 u(t).   (2.5)
We now describe how this equation can be “discretized”, i.e. approximated by a
discrete-time system. To this aim we sample the continuous-time signals u(·), y(·)
and derive a difference equation that is approximately satisfied by the sampled
signals: the key point being that when the sampling time tends to 0 also the
approximation error tends to zero.
Let T be the sampling time and assume that u(·) and y(·) are sufficiently
smooth signals (more precisely, u(·), y(·) are of class at least C^n, n being the
order of the ODE, so that in this case n = 2). Then, for T sufficiently small, we
can approximate arbitrarily well the differential operator d/dt with the difference
quotient ∆/T, where the discrete difference operator ∆ acts on a function f(t) as
follows:
   ∆f(t) = f(t) − f(t − T).   (2.6)

We define the discrete-time signals

ỹ(k) := y(kT ), ũ(k) := u(kT ), k ∈ Z, (2.7)



and with straightforward algebraic manipulations we get

   (d/dt) y(t)|_{t=kT} ≃ (∆/T) y(t)|_{t=kT} = ( ỹ(k) − ỹ(k − 1) ) / T,   (2.8)

   (d²/dt²) y(t)|_{t=kT} ≃ (∆²/T²) y(t)|_{t=kT} = ( ỹ(k) − 2ỹ(k − 1) + ỹ(k − 2) ) / T².   (2.9)

By plugging these expressions into (2.5) we get the following difference equation

   ã_0 ỹ(k) + ã_1 ỹ(k − 1) + ã_2 ỹ(k − 2) = b_0 ũ(k),   (2.10)

where

   ã_0 := a_2/T² + a_1/T,   ã_1 := −2a_2/T² − a_1/T,   ã_2 := a_2/T².
Notice, in passing, that the coefficients of the discrete-time system obtained
by discretizing a continuous-time one depend on the sampling time T, which
therefore has a relevant impact on the numerical properties of the discrete-time system. ♦
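The discretization of Example 2.1 can be checked numerically. The sketch below (not part of the original notes; the values a2 = 1, a1 = 2, b0 = 1, the unit-step input and the sampling time are illustrative) builds the coefficients of (2.10) and compares the resulting recursion with a reference solution of (2.5) computed with scipy:

```python
# Sketch: discretize (2.5) via (2.10) and compare with a reference ODE solution.
import numpy as np
from scipy.integrate import solve_ivp

a2, a1, b0 = 1.0, 2.0, 1.0
T, K = 0.01, 500                            # sampling time and number of samples

# Coefficients of the difference equation (2.10)
at0 = a2 / T**2 + a1 / T
at1 = -2 * a2 / T**2 - a1 / T
at2 = a2 / T**2

y = np.zeros(K)
for k in range(K):                          # zero initial conditions, u(k) = 1
    ym1 = y[k - 1] if k >= 1 else 0.0
    ym2 = y[k - 2] if k >= 2 else 0.0
    y[k] = (b0 * 1.0 - at1 * ym1 - at2 * ym2) / at0

# Reference: a2*y'' + a1*y' = b0 with y(0) = y'(0) = 0, solved numerically
sol = solve_ivp(lambda t, x: [x[1], (b0 - a1 * x[1]) / a2],
                (0, (K - 1) * T), [0.0, 0.0], t_eval=np.arange(K) * T)
print("max error:", np.max(np.abs(y - sol.y[0])))   # shrinks as T -> 0
```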

2.4 Z-Transform
The Z-Transform may be viewed as the discrete-time counterpart of the Laplace
Transform. It is a linear operator mapping sequences in Z+ to functions of the
complex variable z ∈ C.

Definition 2.2. Let f : Z+ → R (or C), k ↦ f(k), be a discrete-time signal. We
define the (unilateral)² Z-Transform of f to be the sum of the following series,
for the values of the complex variable z for which the series converges:

   Z[f] = F(z) := Σ_{k=0}^{∞} f(k) z^{-k}.   (2.11)

Example 2.2. Let f (k) be the discrete impulse also known as Kronecker delta:
   f(k) = δ(k) = { 1,  k = 0
                   0,  otherwise }     (2.12)

The corresponding Z-Transform is

   Z[δ] = 1,   (2.13)

which converges, and hence is well defined, for all complex z. ♦

² There also exists the bilateral Z-Transform, defined by Z[f] = F(z) := Σ_{k=−∞}^{+∞} f(k) z^{-k}.
  We will only use the unilateral Z-Transform as we mainly consider causal signals.

We now focus on the convergence of the Z-Transform defined in (2.11). The


following result holds:

Theorem 2.1. Let f : Z+ → R (or C), k ↦ f(k), be a discrete-time sequence and
z ∈ C. Then there exists ρ0 ∈ [0, +∞] such that the series

   Σ_{k=0}^{∞} f(k) z^{-k}

1. is absolutely convergent outside the circle of radius ρ0 centered at the origin
   of C, i.e. ∀z ∈ {z ∈ C : |z| > ρ0};

2. is divergent ∀z ∈ {z ∈ C : |z| < ρ0}.

The “radius of convergence”³ ρ0 ∈ [0, +∞] is given by:

   ρ0 = lim sup_{k→∞} |f(k)|^{1/k}.   (2.14)

Some remarks on the previous result are in order:

1. Recall that the lim sup, or superior limit, of a sequence g(k) is always well
   defined (also when the sequence does not admit a limit) and is given by the
   following procedure: given g(k), define the new sequence

      l(k) := sup_{h≥k} g(h).

   Then

      lim sup_{k→∞} g(k) := lim_{k→∞} l(k).

   Notice that l(k) is by construction monotonic non-increasing, so that it
   necessarily admits a limit, which can be finite or infinite. Therefore, formula
   (2.14) always provides a value of ρ0.

2. When the sequence |f(k)|^{1/k} has a limit, this limit necessarily coincides with
   the superior limit, so that, in these cases, we can use the simpler formula

      ρ0 = lim_{k→+∞} |f(k)|^{1/k}.   (2.15)

   In other words, ρ0 can be computed by using the simpler formula (2.15) if
   and only if the limit in the right-hand side of (2.15) exists.

³ Notice that ρ0 may well be +∞: this means that the series does not converge for any
  complex value, so that the Z-Transform is not defined for sequences corresponding to ρ0 = +∞.

3. Formulas (2.14) and (2.15) hinge on the root test. An alternative formula,
   based on the ratio test, is the following:

      ρ0 = lim_{k→+∞} |f(k + 1)| / |f(k)|.   (2.16)

   Also this formula is usually much easier to compute than (2.14), but it holds
   if and only if the limit in its right-hand side exists.

4. Theorem 2.1 discusses the convergence of the series Σ_{k=0}^{∞} f(k) z^{-k} only
   for |z| > ρ0 and for |z| < ρ0. Indeed, there are no general results on
   the convergence of the series on the circle of radius ρ0: depending on the
   specific sequence and on the specific z such that |z| = ρ0, the series can be
   convergent or non-convergent.

Definition 2.3. The radius of convergence (r.c.) of the Z-Transform in (2.11) is
the constant ρ0 defined by (2.14). The region of convergence (see Figure 2.1) of
the Z-Transform is the set

   {z ∈ C : |z| > ρ0}.

The radius of convergence can be computed by the simplified formulas (2.15)
and (2.16) if and only if the corresponding limits in the right-hand side exist.

Figure 2.1: Convergence region (highlighted in cyan) Rc = {z ∈ C : |z| > ρ0} of
the Z-Transform.

Example 2.3. Next we provide some examples of computation of Z-Transforms


and of the corresponding radius of convergence:

• Let f (k) be the discrete unit step (also known as Heaviside function)
   f(k) = δ₋₁(k) := { 1,  k ≥ 0
                      0,  k < 0 }     (2.17)

The corresponding Z-Transform is a geometric series of ratio z −1 ; therefore,


we have
     Z[f] = Σ_{k=0}^{+∞} z^{-k} = 1/(1 − z^{-1}) = z/(z − 1),   (2.18)

     ρ0 = 1.   (2.19)

  In this case we obtained ρ0 by using the results on the geometric series.
  The same result can be obtained by using formulas (2.15) and (2.16).
• Let f(k) = p^k, p ∈ C. The corresponding Z-Transform is a geometric series
  of ratio p z^{-1}; therefore, we have

     F(z) = Σ_{k=0}^{+∞} p^k z^{-k} = 1/(1 − p z^{-1}) = z/(z − p),   (2.20)

     ρ0 = |p|.   (2.21)

  Observe that sequences of this kind are obtained by sampling continuous-
  time exponential signals e^{λt} with fixed sampling time T: e^{λTk} = p^k with
  p = e^{λT}.
• Let f(k) = 2^{k²}. The corresponding Z-Transform Z[f] is not defined. In
  fact, ρ0 = +∞, as can be checked by using formula (2.16).
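The formulas (2.14)–(2.16) can also be explored numerically. A minimal sketch (not part of the original notes; the sequences below are arbitrary illustrative choices):

```python
# Numerical sketch of the radius-of-convergence formulas (2.14)-(2.16).
import numpy as np

k = np.arange(1, 400)

# f(k) = p^k  ->  rho0 = |p|
p = 0.8
f = p ** k
print("root test :", np.abs(f[-1]) ** (1.0 / k[-1]))   # ~ 0.8
print("ratio test:", np.abs(f[-1] / f[-2]))            # ~ 0.8

# f(k) = k * 3^k  ->  rho0 = 3 (the polynomial factor does not matter)
f = k * 3.0 ** k
print("ratio test:", np.abs(f[-1] / f[-2]))            # ~ 3
```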

Challenge 2.1. Provide a sequence f(k) for which the radius of convergence of the
corresponding Z-Transform cannot be computed with the simplified formulas
(2.15) and (2.16), so that formula (2.14) must be employed. In particular, prove
that for such an f(k) the limits in (2.15) and (2.16) do not exist, and compute the
limit in (2.14).

2.5 Properties of the Z-Transform


Next we describe the main properties of the Z-Transform. To this aim we use the
notation f(k) −Z→ F(z), or f −Z→ F(z), to indicate that F(z) is the Z-Transform of
f(k). Also, when possible, we denote discrete-time signals with lower-case letters
and use the corresponding upper-case letters for their Z-Transforms.
1. Symmetry: If f : Z+ → C and f −Z→ F(z), then⁴

      f*(k) −Z→ Σ_k f*(k) z^{-k} = F*(z*).   (2.22)

   If f is real-valued, then (2.22) implies F(z) = F*(z*) or, equivalently,
   F(z*) = F*(z).

   ⁴ We use the notation f* to denote the complex conjugate of f. If f is a vector (or a
     vector-valued function), f* denotes the conjugate transpose of f.

2. Linearity: Let F1(z), F2(z) be the Z-Transforms of f1, f2 : Z+ → C, and
   ρ1, ρ2 be the corresponding radii of convergence. Then, for all c1, c2 ∈ C
   we have

      f := c1 f1 + c2 f2 −Z→ c1 F1(z) + c2 F2(z),   (2.23)

   and the corresponding radius of convergence ρ0 satisfies

      ρ0 ≤ max{ρ1, ρ2}.   (2.24)

Example 2.4. Let f(k) = cos(ϑk) = ½ (e^{jϑk} + e^{−jϑk}). By employing (2.20)
and the linearity of the Z-Transform, we get

   Z[f] = ½ ( z/(z − e^{jϑ}) + z/(z − e^{−jϑ}) )
        = z(z − cos ϑ) / (z² − 2 cos ϑ · z + 1).   (2.25)

Similarly, for g(k) = sin(ϑk) we get

   Z[g] = z sin ϑ / (z² − 2 cos ϑ · z + 1).   (2.26)
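As a quick numerical sanity check of (2.25), one can truncate the defining series (2.11) at a point outside the unit circle and compare it with the closed form. A short sketch (not part of the original notes; the values of ϑ and z are illustrative):

```python
# Sketch: checking the cosine transform (2.25) by truncating the series (2.11)
# at a point with |z| > rho0 = 1.
import numpy as np

theta = 0.7
z = 1.5 + 0.8j                          # |z| = 1.7 > 1
k = np.arange(0, 3000)

truncated = np.sum(np.cos(theta * k) * z ** (-k.astype(float)))
closed    = z * (z - np.cos(theta)) / (z**2 - 2 * np.cos(theta) * z + 1)
print(abs(truncated - closed))          # small truncation error
```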

3. Translation in k: We now discuss two fundamental properties that will
   be used several times in this book. Let f(k) : Z+ → R (or C) and Z[f] = F(z):

• Time advance: Let g(k) := f(k + a), a ≥ 0. The Z-Transform of g is

     Z[g] = Σ_{k=0}^{+∞} f(k + a) z^{-k}
          = z^a Σ_{k=0}^{+∞} f(k + a) z^{-(k+a)}
          = z^a F(z) − Σ_a,   where  Σ_a := Σ_{j=0}^{a−1} f(j) z^{a−j}.   (2.27)

  Notice that Σ_a is a polynomial in z that cancels the first a terms of
  z^a F(z) = z^a f(0) + z^{a−1} f(1) + ··· = Σ_a + f(a) + f(a + 1) z^{-1} + f(a + 2) z^{-2} + ··· .

• Time delay: Let g(k) := f(k − r), r ≥ 0. We have

     Z[g] = Σ_{k=0}^{+∞} f(k − r) z^{-k}
          = z^{-r} Σ_{k=0}^{+∞} f(k − r) z^{-(k−r)}
          = z^{-r} F(z) + Σ_r,   where  Σ_r := Σ_{l=1}^{r} f(−l) z^{-(r−l)}.   (2.28)

  Notice that Σ_r ≡ 0 if and only if f(k) is causal, i.e. f(k) = 0 for
  all k < 0. Otherwise, Σ_r is non-zero and it is a polynomial in z^{-1} of
  degree equal to r − 1.
Remark 2.2. We may formally introduce the temporal translation
operator q as the map from the set of sequences f : Z → R (or C) to itself,
defined by:
   q[f(k)] := f(k + 1).   (2.29)
Clearly this map is invertible and we have

   q^{-1}[f(k)] = f(k − 1).   (2.30)

This map, corresponding to a one-step time advance, is illustrated in
Figure 2.2. Its inverse q^{-1} is the one-step time delay operator and,
clearly, q ∘ q^{-1} is the identity operator.
[Figure 2.2 shows the sequence f(k) together with its one-step delayed version q^{-1}[f(k)] and its one-step advanced version q[f(k)].]

Figure 2.2: Temporal translation of the sequence f(k).

In view of (2.27), and (2.28), we may think that the action of the
operator q corresponds, in the Z-Transform domain, to multiplication
by z and, similarly, that the action of the operator q −1 corresponds,
in the Z-Transform domain, to multiplication by z −1 . This is wrong
and would lead to gross errors. In fact, we are considering the unilat-
eral Z-Transform which is defined for signals whose domain is Z+ , or,

equivalently, only accounts for the causal part of signals whose domain
is the whole Z.
For example, we easily see that the Z-Transform F (z) of discrete-
time signal f (k), does not contain any information about the term
f (−1). On the other hand, g(k) defined by g(k) := q −1 f (k) is such that
g(0) = f (−1) so that its Z-Transform (that contains the information
on g(0)) cannot be obtained from F (z) unless, as observed before, we
know that f is causal so that g(0) = f (−1) = 0. Otherwise we need
to include the term Σr defined in (2.28).
In conclusion, when computing the Z-Transform of translated signals
we need to take great care of the samples entering or exiting the
domain Z+ of the transform. For this reason, the terms Σ_a and Σ_r appear
in formulas (2.27) and (2.28), respectively.

4. Periodic repetition: Let g(k) be a discrete-time signal defined on Z.
   Assume that g(k) is causal, i.e. g(k) = 0 for k < 0. Let N be a positive
   integer and let f(k) be the periodic repetition of g(k) with period N, i.e.

      f(k) := Σ_{i=0}^{∞} g(k − iN).   (2.31)

   An interesting special case is the one in which g(k) is zero also for all k ≥ N
   (see Figure 2.3). In this case f(k) is clearly periodic of period N.
   The Z-Transform of the periodic repetition can be easily obtained by using
   linearity:

      F(z) := Z[f(k)] = Σ_{i=0}^{∞} z^{-iN} G(z) = G(z)/(1 − z^{-N}) = Θ_N(z) G(z),
      where  Θ_N(z) := z^N/(z^N − 1).   (2.32)

This formula shows that the action of the operator of periodic repetition
corresponds, in the transform domain, to multiplication by ΘN (z).

Figure 2.3: Signal g(k) vanishing for k < 0 and for k ≥ N = 5. The corresponding
periodic repetition of period N = 5 gives a causal signal f(k) that equals g(k)
for 0 ≤ k ≤ N − 1 = 4 and for which these 5 samples are repeated periodically
for k ≥ 5.

Example 2.5. Let f(k) : Z+ → {0, 1} be the periodic signal

   f(k) := { 1,  k even
             0,  k odd }

This signal can be seen as the periodic repetition of the discrete impulse
(2.12) with period N = 2. Therefore, from (2.32) we get

   F(z) := Z[f(k)] = Z[ Σ_{i=0}^{∞} δ(k − 2i) ] = z²/(z² − 1).

Figure 2.4: Signal f(k) used in Example 2.5.

5. Scaling in the z-domain: Given the signal f(k), let F(z) = Z[f(k)]
   and ρ0 be the corresponding radius of convergence. Then

      p^k f(k) −Z→ Σ_{k=0}^{∞} f(k) p^k z^{-k} = F(z/p),   r.c. ρ0' = |p| ρ0.   (2.33)

This property will be very useful for the analysis of modes of discrete-time


systems.
Example 2.6. Let f(k) := λ^k cos(ϑk). By combining formula (2.33) with
the Z-Transform of the cosine function (2.25), we get

   F(z) := Z[f(k)] = (z/λ)((z/λ) − cos ϑ) / ( (z/λ)² − 2 cos ϑ (z/λ) + 1 )
         = z(z − λ cos ϑ) / (z² − 2 cos ϑ · λz + λ²).   (2.34)

Notice that if λ is real and 0 < λ < 1, then f(k) may be viewed as the
sampled version of a continuous-time damped oscillation. ♦

6. Discrete integration: Given the signal f (k), let F (z) = Z[f (k)]. The
discrete integral of f (k) is defined as
   g(k) := Σ_{l=0}^{k} f(l).

The Z-Transform of this discrete integral is easily obtained as⁵

   Σ_{l=0}^{k} f(l) −Z→ Σ_{k=0}^{∞} z^{-k} ( Σ_{l=0}^{k} f(l) ) = Σ_{l=0}^{∞} z^{-l} f(l) ( Σ_{k=l}^{∞} z^{l−k} )
                      = 1/(1 − z^{-1}) · F(z) = z/(z − 1) · F(z).   (2.35)
Example 2.7. Let f(k) = k be the discrete-time ramp signal. Consider the
discrete unit step δ₋₁(k) defined in (2.17) and its discrete integral g(k) :=
Σ_{l=0}^{k} δ₋₁(l): we easily see that f(k) = k = g(k − 1). Therefore, from (2.35)
and by taking the one-step delay into account, it follows that:

   F(z) = z^{-1} ( z/(z − 1) · z/(z − 1) ) = z/(z − 1)².   (2.36)

Notice that here g(k) is a causal signal, so that g(k − 1) −Z→ z^{-1} G(z). ♦

7. Discrete derivative: Given the signal f (k), let F (z) = Z[f (k)]. Let ∆
be the discrete derivative operator defined in (2.6) and g(k) := ∆f (k) =
f (k) − f (k − 1). By using (2.28) we get

   Z[g(k)] = Z[∆f(k)] = Z[f(k) − f(k − 1)]
           = F(z) − z^{-1} F(z) − f(−1)
           = (1 − z^{-1}) F(z) − f(−1).   (2.37)

In general, the Z-Transform of the discrete derivative of order n may be


obtained by a similar computation:
" n   #
X n
Z[∆n f (k)] = Z (−1)l f (k − l)
l=0
l
n−1 n  
l n
X X
−1 n −h
= (1 − z ) F (z) + z (−1) f (h − l). (2.38)
h=0 l=h+1
l

Remark 2.3. Notice that the (double) sum

   S(z) := Σ_{h=0}^{n−1} z^{-h} Σ_{l=h+1}^{n} (−1)^l C(n,l) f(h − l)

is a polynomial in z^{-1} of degree at most n − 1. Its coefficients are linear
combinations of the terms f(−ℓ), ℓ = 1, . . . , n, which play the same role as the
initial conditions in the (unilateral) Laplace Transform of a continuous-time
signal. For the n-th order discrete derivative ∆^n, the sum S(z) depends on
the n samples f(−ℓ), for ℓ = 1, . . . , n. This corresponds to the formula
for the Laplace transform of the derivative of order n of a continuous-time
signal, where n initial conditions are required (i.e. the limits for t → 0⁻ of
the signal and of its first n − 1 derivatives). In discrete time the idea
is similar, but to specify the first difference at k = 0 we need the sample
f(−1), to specify the second difference at k = 0 we need the two samples
f(−1) and f(−2), and so on. In general, to specify the n-th difference at
k = 0 we need the n samples f(−ℓ), for ℓ = 1, . . . , n.

⁵ Notice that the formula for the Z-Transform of the discrete integral is the same as (2.32)
  with periodic repetition of period N = 1. The reader is invited to explain this fact.
k = 0 we need the n samples f (−`), for ` = 1, . . . , n.

8. Differentiation of the Z-Transform: Consider a signal f(k), and let
   F(z) = Z[f(k)] be its Z-Transform and ρ0 its radius of convergence. Then
   F(z) is an analytic⁶ function in the region of convergence. By computing
   the derivative of F(z), we find

      (d/dz) F(z) = Σ_{k=1}^{∞} (−k) f(k) z^{-k−1} = −z^{-1} Z[k f(k)].   (2.39)

   As a consequence, we have:

      Z[k f(k)] = −z (d/dz) F(z).   (2.40)

   ⁶ We recall that a function of the complex variable z is analytic in an open set if in this set
     it can be locally represented as the sum of a convergent power series. We recall also that F(z)
     is analytic in an open set if and only if it is differentiable (as a function of the complex
     variable z) in this set and, in this case, F(z) is differentiable infinitely many times in the same set.

Example 2.8. We now show how some relevant Z-Transforms can be computed
by using (2.40).

• Let f(k) = k² for k ≥ 0; then

     Z[f(k)] = Z[k · k] = −z (d/dz) [ z/(z − 1)² ]
             = −z [ 1/(z − 1)² − 2z/(z − 1)³ ]
             = z(z + 1)/(z − 1)³.   (2.41)

 
• Let f(k) = C(k,2). This notation has to be understood as f(k) := k!/(2!(k−2)!)
  for k ≥ 2 and f(k) := 0 for k < 2, so that, in particular, f(k) is causal. We have

     Z[f(k)] = Z[ k(k − 1)/2 ] = Z[ k²/2 − k/2 ]
             = ½ [ z(z + 1)/(z − 1)³ − z(z − 1)/((z − 1)²(z − 1)) ]
             = z/(z − 1)³.   (2.42)
 
• Let f(k) = C(k,l). Similarly to the previous example, this notation has
  to be understood as f(k) := k!/(l!(k−l)!) for k ≥ l and f(k) := 0 for k < l.
  We have

     Z[f(k)] = z/(z − 1)^{l+1}.   (2.43)

  Proof. The proof of (2.43) is carried out by induction on l.
  Base case: for l = 0, 1 relation (2.43) holds, as Z[C(k,0)] = Z[δ₋₁(k)] = z/(z − 1)
  and Z[C(k,1)] = Z[k] = z/(z − 1)²  (see formulas (2.17) and (2.36)).
  Induction step: assume that (2.43) holds for l − 1. We can write

     Z[C(k,l)] = Z[ k!/(l!(k − l)!) ] = Z[ (k/l) C(k−1, l−1) ],

  and by using the differentiation formula (2.40) and the translation
  formula (2.28),⁷ we get

     Z[C(k,l)] = −(z/l) (d/dz) [ z^{-1} z/(z − 1)^{l} ] = z/(z − 1)^{l+1}.

 
k k−l
• Let f (k) = p ; in view of (2.43) and of (2.33), we get the im-
l
portant expression
1 z/p z
Z[f (k)] = l l+1
= . (2.44)
p (z/p − 1) (z − p)l+1

9. Asymptotic behaviour (of the Z-Transform): Consider a signal
   f(k), and let F(z) = Z[f(k)] be its Z-Transform and ρ0 its radius of
   convergence. If ρ0 ∈ R, i.e. if it is finite, then

      lim_{|z|→+∞} F(z) = f(0).   (2.45)

   ⁷ Notice that C(k, l−1) is, by definition, causal.

Remark 2.4. Formula (2.45) must be understood as follows: the limit in


the left-hand side is independent of the way in which |z| diverges and is
equal to the “first sample” f (0) of the sequence f (k).
Remark 2.5. Notice the analogy between the previous formula and the
Initial Value Theorem for the Laplace Transform.
Example 2.9. Let f(k) := cos(2k) + 3^k; in view of (2.20) and (2.25), its
Z-Transform is

   F(z) = z(z − cos 2)/(z² − 2 cos 2 · z + 1) + z/(z − 3).

We have

   lim_{|z|→+∞} F(z) = 1 + 1 = 2 = f(0).

Remark 2.6. Relation (2.45) may be generalized as follows: if f (k) van-


ishes for all k < r, then

   lim_{|z|→+∞} z^r F(z) = f(r).   (2.46)

10. Final Value Theorem: Consider a signal f (k), and let F (z) = Z[f (k)]
be its Z-Transform. If limk→∞ f (k) exists and is finite, then

   lim_{k→∞} f(k) = lim_{z→1} (1 − z^{-1}) F(z).   (2.47)

Example 2.10. Next we show how the Final Value Theorem can be used,
and that if the assumptions of the theorem do not hold, formula (2.47) provides
a wrong result!

• Let f(k) := 0.9^k + 1. Since lim_{k→∞} f(k) = 1, we can use formula (2.47),
  which gives

     lim_{z→1} ((z − 1)/z) ( z/(z − 0.9) + z/(z − 1) ) = 1.

• Let f(k) := sin(ϑk) with ϑ ≠ hπ, h ∈ Z+. The limit lim_{k→∞} f(k) does
  not exist, and formula (2.47) (which cannot be used) wrongly gives

     lim_{k→∞} f(k) = lim_{z→1} (1 − z^{-1}) F(z) = 0.

• Let f(k) := k 2^k. In this case the limit lim_{k→∞} f(k) exists but is +∞
  (and so is not finite). Again, formula (2.47) (which cannot be used)
  wrongly gives

     lim_{z→1} ((z − 1)/z) (−z) (d/dz) [ z/(z − 2) ] = lim_{z→1} (z − 1) · 2/(z − 2)² = 0.
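The behaviour discussed in Example 2.10 can be reproduced numerically by evaluating (1 − z⁻¹)F(z) at a point close to z = 1. A minimal sketch (not part of the original notes; the test point and the value of ϑ are illustrative):

```python
# Sketch: the Final Value Theorem (2.47) on the sequences of Example 2.10.
import numpy as np

def fvt(F, z=1.0 + 1e-6):
    """Evaluate (1 - z^-1) F(z) for z slightly outside 1 (crude limit)."""
    return (1 - 1 / z) * F(z)

# f(k) = 0.9^k + 1: the limit exists and equals 1
F1 = lambda z: z / (z - 0.9) + z / (z - 1)
print(fvt(F1))            # ~ 1, matches lim f(k)

# f(k) = sin(theta*k): no limit, yet the formula still returns 0
theta = 0.7
F2 = lambda z: z * np.sin(theta) / (z**2 - 2 * np.cos(theta) * z + 1)
print(fvt(F2))            # ~ 0, but lim f(k) does not exist
```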



11. Convolution: Let f (k), g(k) be two causal signals and F (z), G(z) be the
corresponding Z-Transforms. We define the discrete convolution of f (k)
and g(k) as
   h(k) := f(k) ⊗ g(k) := Σ_{l=−∞}^{+∞} f(l) g(k − l).   (2.48)

Let H(z) be the Z-Transform of h(k). Then

H(z) = F (z)G(z). (2.49)

Proof. Since f is causal, we have:

   h(k) = Σ_{l=−∞}^{+∞} f(l) g(k − l) = Σ_{l=0}^{+∞} f(l) g(k − l).

By computing the Z-Transform, we get

   H(z) = Σ_{k=0}^{+∞} Σ_{l=0}^{+∞} f(l) g(k − l) z^{-k}
        = Σ_{l=0}^{+∞} f(l) ( Σ_{k=0}^{+∞} g(k − l) z^{-k} )
        = Σ_{l=0}^{+∞} f(l) ( Σ_{k'=−l}^{+∞} g(k') z^{-k'−l} )
        = Σ_{l=0}^{+∞} f(l) z^{-l} ( Σ_{k'=0}^{+∞} g(k') z^{-k'} )
        = F(z) G(z),

where k' := k − l and the last-but-one equality is a consequence of the fact that
g is causal.
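The convolution property (2.49) is easy to verify numerically on truncated causal sequences. A short sketch (not part of the original notes; the two sequences and the test point are arbitrary illustrative choices):

```python
# Sketch: checking the convolution property (2.49) with numpy.
import numpy as np

K = 200
k = np.arange(K)
f = 0.5 ** k                      # causal, rho0 = 0.5
g = 0.8 ** k * np.cos(0.3 * k)    # causal, rho0 = 0.8

h = np.convolve(f, g)[:K]         # h(k) = sum_l f(l) g(k-l), k = 0..K-1

z = 1.2 + 0.5j                    # |z| = 1.3 > 0.8
Z = lambda x: np.sum(x * z ** (-np.arange(len(x)).astype(float)))
print(abs(Z(h) - Z(f) * Z(g)))    # small (truncation error only)
```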

2.6 Inverse Z-Transform


Next we focus on the following inverse problem: given a function F (z) of the
complex variable, compute a signal f (k) such that F (z) = Z[f (k)]. The first
issue is clearly existence.

2.6.1 Existence of the inverse Z-Transform


Before computing inverse Z-Transforms we must confirm that this inverse indeed
exists, i.e. that the map is bijective, or, equivalently, both injective and surjective.
Injectivity: recall that the Z-Transform is a linear map defined on the
vector space of causal sequences. To show that this map is injective we must prove
that if f1(k) and f2(k) are two causal sequences having the same Z-Transform
F(z), then f1(k) ≡ f2(k). Since the Z-Transform is a linear map, we may simplify
the proof by observing that the Z-Transform of d(k) := f1(k) − f2(k) is D(z) ≡ 0.
Therefore it is sufficient to show that if the Z-Transform of a causal signal
d(k) is D(z) ≡ 0, then d(k) ≡ 0. To this end, we first observe that, by causality,
d(k) = 0 for all k < 0. Moreover, by using (2.45), we get:

   d(0) = lim_{|z|→+∞} D(z) = 0.   (2.50)

Hence,

   0 ≡ D(z) = Σ_{k=1}^{∞} d(k) z^{-k} = z^{-1} Σ_{k=1}^{∞} d(k) z^{-k+1} = z^{-1} Σ_{h=0}^{∞} d(h + 1) z^{-h}.   (2.51)

Therefore, by setting D1(z) := z D(z) ≡ 0 and d1(k) to be the causal sequence
d1(k) := d(k + 1), we have

   0 ≡ D1(z) := z D(z) = Σ_{k=0}^{∞} d1(k) z^{-k},   (2.52)

or, equivalently, 0 ≡ D1(z) is the Z-Transform of d1(k). We can now use (2.45)
again to obtain 0 = d1(0) := d(1). By iterating this argument, we obtain
d(2) = 0, d(3) = 0, and so on; inductively, d(k) = 0 for all k, which concludes
the proof.
We remark that the previous argument can be adapted to find the first l
samples (l being an arbitrary integer) of the inverse Z-Transform.
Surjectivity: this is a delicate issue because, to prove surjectivity, we must
specify the co-domain of the Z-Transform understood as a linear map. We have
already seen that if F(z) is a Z-Transform then it is an analytic function in an
open set of the form {z ∈ C : |z| > ρ0}, so the functions analytic in such a region
form a natural candidate co-domain for the Z-Transform. We have the following result.
Theorem 2.2. Any function of the complex variable that is analytic in a region
of the form {z ∈ C : |z| > ρ0}, with ρ0 ≥ 0, is the Z-Transform of a causal
sequence.
As a consequence of the previous theorem and of injectivity, we have that the
Z-Transform can be inverted, and for any function F(z) that is analytic outside
a circle there exists a unique causal f(k) such that F(z) = Z[f(k)]. We are left
with the task of computing such an f(k).

2.6.2 Computation of the inverse Z-Transform


We present three different methods:
Iterative method: this method draws inspiration from the previous proof
of injectivity and allows one to compute iteratively the samples f(k) of the inverse
Z-Transform of an analytic function F(z). To this aim we iteratively use formula

(2.45) and get:

   f(0) = lim_{|z|→+∞} F(z),      F1(z) := z [F(z) − f(0)],
   f(1) = lim_{|z|→+∞} F1(z),     F2(z) := z [F1(z) − f(1)],
   ...                                                             (2.53)
   f(k) = lim_{|z|→+∞} Fk(z),     F_{k+1}(z) := z [Fk(z) − f(k)],
   ...
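For a proper rational F(z), the iterative scheme (2.53) reduces to a polynomial long division in z⁻¹, which is easy to implement. The following Python helper is an illustrative sketch (not part of the original notes):

```python
# Sketch: the iterative scheme (2.53) for a proper rational F(z) = num(z)/den(z),
# implemented as repeated "subtract f(k), multiply by z" steps on the numerator.
import numpy as np

def inverse_z_samples(num, den, K):
    """First K samples of the inverse Z-Transform of num(z)/den(z).
    num, den: coefficients in descending powers of z, deg(num) <= deg(den)."""
    num = np.asarray(num, float) / den[0]
    den = np.asarray(den, float) / den[0]
    num = np.concatenate([np.zeros(len(den) - len(num)), num])  # pad to deg(den)
    f = []
    for _ in range(K):
        f.append(num[0])                      # f(k) = lim F_k(z) for |z| -> +inf
        num = num - f[-1] * den               # subtract f(k), then ...
        num = np.append(num[1:], 0.0)         # ... multiply by z (coefficient shift)
    return np.array(f)

# Example: F(z) = z/(z - 0.5) should give f(k) = 0.5^k
print(inverse_z_samples([1.0, 0.0], [1.0, -0.5], 6))
```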

Integral method: this method is very general and has great conceptual
importance, but is seldom practically viable. The first sample is computed as
before:

   f(0) = lim_{|z|→+∞} F(z).   (2.54)

For the other samples, we have the formulas:

   f(k) = (1/(2πj)) ∮_γ F(z) z^{k−1} dz,   k = 1, 2, . . .   (2.55)

where γ is a circle traversed counterclockwise, centered at the origin of C and
with radius larger than the radius ρ0 of the circle outside which F(z) is analytic
(actually, γ can be taken to be any positively oriented Jordan curve lying in the
region where F(z) is analytic).
Inverse Z-Transform of proper rational functions: if F(z) is a
proper rational function, i.e. if F(z) = N(z)/D(z) is the ratio of two polynomials
with deg[D(z)] ≥ deg[N(z)], we can compute its inverse Z-Transform explicitly,
as shown below. We hasten to observe that, as a consequence of the following
argument, we can also characterize the set of sequences f(k) whose Z-Transform
is rational.
Let us start by writing F(z) in the form

   F(z) = ( Σ_{l=0}^{m} b_l z^l ) / ( Σ_{l=0}^{n} a_l z^l ),   (2.56)

and let r := n − m ≥ 0 be the difference between the degree of the denominator
and that of the numerator, i.e. the so-called relative degree of F(z); moreover,
without loss of generality, we assume that the denominator is monic, i.e. a_n = 1.
Divide both sides of (2.56) by z and define

   F1(z) := F(z)/z = ( Σ_{l=0}^{m} b_l z^l ) / ( z Σ_{l=0}^{n} a_l z^l ).   (2.57)

By factoring the denominator of F1(z) into polynomials of first degree, we get:

   F1(z) = N(z) / Π_{i=0}^{N} (z − p_i)^{n_i},   (2.58)

where p0 = 0 is the root at 0 of z Σ_{l=0}^{n} a_l z^l. Notice that p0 is not present if
F(0) = 0. Otherwise, the multiplicity n0 of p0 is equal to the multiplicity of the
pole at 0 of F(z) plus 1, so that if F(z) has neither poles nor zeros at 0, then
the multiplicity n0 of p0 is equal to 1.
Let us now compute the partial fraction decomposition of F1(z):

   F1(z) = Σ_{i=0}^{N} Σ_{l=0}^{n_i−1} A_{i,l} / (z − p_i)^{l+1}.   (2.59)

The coefficients A_{i,l} can be computed, for example, as:

   A_{i,n_i−1} = lim_{z→p_i} (z − p_i)^{n_i} F1(z);   (2.60)

   A_{i,n_i−2} = lim_{z→p_i} (z − p_i)^{n_i−1} [ F1(z) − A_{i,n_i−1}/(z − p_i)^{n_i} ];   (2.61)

   A_{i,n_i−3} = lim_{z→p_i} (z − p_i)^{n_i−2} [ F1(z) − A_{i,n_i−1}/(z − p_i)^{n_i} − A_{i,n_i−2}/(z − p_i)^{n_i−1} ];   (2.62)

   ...

   A_{i,l} = lim_{z→p_i} (z − p_i)^{l+1} [ F1(z) − Σ_{k=l+2}^{n_i} A_{i,k−1}/(z − p_i)^{k} ].   (2.63)

We now multiply (2.59) by z and write separately the terms corresponding
to the pole at 0. We get:

   F(z) = Σ_{i=1}^{N} Σ_{l=0}^{n_i−1} A_{i,l} z/(z − p_i)^{l+1} + Σ_{l=0}^{n_0−1} A_{0,l} / z^{l},   (2.64)

where the generic terms of the two sums are denoted by F_{p_i,l}(z) and F_{0,l}(z), respectively.

By recalling equations (2.13) and (2.44), we immediately recognise by inspection
the sequences whose transforms are the functions F_{p_i,l}(z) and F_{0,l}(z):

   F_{p_i,l}(z) = A_{i,l} z/(z − p_i)^{l+1}  −Z⁻¹→  (A_{i,l}/p_i^l) C(k,l) p_i^k,   (2.65)

   F_{0,l}(z) = A_{0,l} z^{-l}  −Z⁻¹→  A_{0,l} δ(k − l).   (2.66)

By taking into account that the inverse Z-Transform is linear, from (2.65), (2.66)
and (2.64) it follows that:

   f(k) = Σ_{i=1}^{N} Σ_{l=0}^{n_i−1} (A_{i,l}/p_i^l) C(k,l) p_i^k + Σ_{l=0}^{n_0−1} A_{0,l} δ(k − l),   k ≥ 0.   (2.67)

Remark 2.7. Notice that as the inverse Z-Transform is by construction a causal


sequence, the expression (2.67) holds only for k ≥ 0 while for k < 0, f (k) = 0.

Remark 2.8. Notice that the relative degree r of F (z) has an important in-
terpretation. In fact, we easily see that exactly the first r samples of f (k), i.e.,
f (0), f (1), . . . , f (r − 1) are zero. In other words f (r) is the first non-zero sample
of f (k). Thus, r can be viewed as the “inner delay” of f (k). This fact may be
used as a convenient “sanity check” after computing the inverse Z-Transform.

Remark 2.9. Notice that if F(z) is, as it will always be in our setting, a real
rational function (i.e. the coefficients of the numerator and denominator of F are
real), then for any complex pole p = ρe^{jϑ} of F(z), its complex conjugate p* = ρe^{−jϑ}
is also a pole of F(z), and p and p* have the same multiplicity. Moreover, in the
partial fraction expansion (2.59) the terms A_l/(z − p)^{l+1} and A_l'/(z − p*)^{l+1} have complex
conjugate coefficients, i.e. A_l' = A_l*. Therefore, the inverse Z-Transforms of the
two complex conjugate terms A_l z/(z − p)^{l+1} and A_l* z/(z − p*)^{l+1} appearing in (2.64) sum
to a real signal. We can compute this signal explicitly. In fact, by denoting by ℜ and ℑ
the real and the imaginary parts, we easily get:

   Z^{-1}[ A_l z/(z − p)^{l+1} + A_l* z/(z − p*)^{l+1} ]                                          (2.68)
     = (A_l/p^l) C(k,l) p^k + (A_l*/p*^l) C(k,l) p*^k = ρ^{k−l} C(k,l) [ A_l e^{j(k−l)ϑ} + A_l* e^{−j(k−l)ϑ} ]
     = (1/ρ^l) C(k,l) ρ^k [ 2ℜ(A_l e^{−jlϑ}) cos(ϑk) − 2ℑ(A_l e^{−jlϑ}) sin(ϑk) ].                (2.69)
In particular, if p is a simple pole (n_p = 1), the two terms Az/(z − p) and A*z/(z − p*)
sum to

   Az/(z − p) + A*z/(z − p*) = z( α(z − ρ cos ϑ) − βρ sin ϑ ) / ( z² − 2ρ cos ϑ · z + ρ² ),   (2.70)

with

   2A = α + jβ = M e^{jϕ}.   (2.71)

Therefore, we have

   Z^{-1}[ Az/(z − p) + A*z/(z − p*) ] = M ρ^k cos(kϑ + ϕ).   (2.72)

Let us see an example of computation of the inverse Z-Transform of a rational


proper function.

Example 2.11. Let

   F(z) = ( 3z⁴ + 8z³ + 7z² − 26z + 26 ) / ( z(z − 1)(z + 2)²(z² − 2z + 2) ).

We immediately see that the relative degree of F (z) is r = 2 and that F (z) has
a pole at 0. Since this pole corresponds to a one-step delay, we can simplify the

procedure by decoupling this delay. More precisely:


1. We define the new function

      F0(z) := z F(z) = ( 3z⁴ + 8z³ + 7z² − 26z + 26 ) / ( (z − 1)(z + 2)²(z² − 2z + 2) ).

2. We compute the inverse Z-Transform f0 (k) of F0 .


3. The inverse Z-Transform of F will be obtained simply by f (k) = f0 (k − 1).
For the second step, we define

   F1(z) = F0(z)/z (= F(z)) = A/z + B/(z − 1) + C1/(z + 2) + C2/(z + 2)² + D/(z − p) + D*/(z − p*),

where p := 1 + j = √2 exp(jπ/4) and p* = 1 − j = √2 exp(−jπ/4) are the roots
of (z² − 2z + 2), and, by using (2.60), (2.61), (2.62), and (2.63), we get:

   A  = lim_{z→0} z F1(z) = −13/4,
   B  = lim_{z→1} (z − 1) F1(z) = 2,
   C2 = lim_{z→−2} (z + 2)² F1(z) = 3/2,
   C1 = lim_{z→−2} (z + 2) [ F1(z) − C2/(z + 2)² ] = 5/4,
   D  = lim_{z→p} (z − p) F1(z) = −j.

Thus

   F0(z) = z F1(z) = A + B z/(z − 1) + C1 z/(z + 2) + C2 z/(z + 2)² + D z/(z − p) + D* z/(z − p*),

so that

   f0(k) = A δ(k) + B δ₋₁(k) + C1 (−2)^k + C2 k(−2)^{k−1} + D p^k + D* (p*)^k.

We know that f0 is a real-valued signal, but the previous representation does not
highlight this fact. Hence, we regroup the two complex conjugate terms D p^k and
D*(p*)^k as D p^k + D*(p*)^k = 2ℜ[D p^k]. By writing D as D = α + jβ and p as
p = ρ exp(jθ), we get:

   D p^k + D*(p*)^k = 2ℜ[D p^k] = 2ℜ[(α + jβ) ρ^k (cos(kθ) + j sin(kθ))]
                    = 2α ρ^k cos(kθ) − 2β ρ^k sin(kθ).

In this specific case, α = 0, β = −1, ρ = √2 and θ = π/4, so that:

   f0(k) = −(13/4) δ(k) + 2 δ₋₁(k) + (5/4)(−2)^k + (3/2) k(−2)^{k−1} + 2(√2)^k sin(kπ/4).   (2.73)

Notice that this expression holds only for k ≥ 0. Notice also that the relative
degree of F0 is equal to 1, so that we must have f0(0) = 0: by plugging k = 0 into
(2.73), we easily get f0(0) = −13/4 + 2 + 5/4 = 0, which is a reassuring sanity check.
Finally, we obtain f(k) as f0(k − 1):

   f(k) = f0(k − 1) = −(13/4) δ(k − 1) + 2 δ₋₁(k − 1) + (5/4)(−2)^{k−1} + (3/2)(k − 1)(−2)^{k−2}
                      + 2(√2)^{k−1} sin((k − 1)π/4),   k > 0.

Clearly, for k = 0 (and for k < 0) we have f(k) = 0. Indeed, the expression
(2.73) of f0(k) only holds for k ≥ 0.
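The result of Example 2.11 can be double-checked numerically: the samples of the inverse Z-Transform of F0(z) coincide with the impulse response of the discrete-time transfer function F0(z). A sketch using scipy (not part of the original notes):

```python
# Sketch: checking the closed-form expression (2.73) against the impulse
# response of F0(z) computed numerically with scipy.
import numpy as np
from scipy import signal

num = [3, 8, 7, -26, 26]
den = np.polymul(np.polymul([1, -1], np.polymul([1, 2], [1, 2])),
                 [1, -2, 2])                         # (z-1)(z+2)^2(z^2-2z+2)

K = 15
_, (f0_num,) = signal.dimpulse((num, den, 1), n=K)   # samples of Z^-1[F0]
f0_num = f0_num.ravel()

k = np.arange(K)
f0_formula = (-13/4) * (k == 0) + 2 * (k >= 0) + (5/4) * (-2.0)**k \
             + (3/2) * k * (-2.0)**(k - 1) + 2 * np.sqrt(2)**k * np.sin(k * np.pi / 4)
print(np.max(np.abs(f0_num - f0_formula)))           # ~ 0
```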
Chapter 3

Notions of Sampling Theory

3.1 Sampling by Amplitude Modulation


Dealing with sampled signals has several advantages. Beside those already men-
tioned in Chapter 1, we also mention that sampled signals are easier to process,
correct, code, and transmit.
Example 3.1 (APD noise). An Avalanche Photo Diode (APD) is a sensor that
can detect the presence of a single photon by amplifying the photoelectric effect.
One of the relevant issues with these sensors is related to the so-called dark counts,
i.e. thermally-generated pulses that (wrongly) detect a photon even if the device
is in complete darkness. Usually the average number of dark counts per unit of
time is assumed to be constant, so that if the APD is left on for the whole interval
of time [0, T], the average number of dark counts is proportional to the length T
of the interval. Therefore it is much more convenient to transmit the information
coded in “pulses” of length ∆t separated by a “dark” period t_d in such a way
that ∆t ≪ T0 := ∆t + t_d. In this way the APD can remain active only for the
fraction ∆t/T0 of the time and, since ∆t ≪ T0, the effect of the dark counts can
be neglected. ♦

There exist several types of sampling. We mention:


• Pulse Amplitude Modulation (PAM) sampling;

• Pulse Width Modulation (PWM) sampling;

• Pulse Frequency Modulation (PFM) sampling.


In control theory PAM sampling is used because it is (at least in the ideal case)
a linear transformation which is an important and highly desired property.
Let f (t) be a continuous-time signal and consider a PAM sampling transform
represented by the block in Figure 3.1. The sampling time T is assumed to be
constant and we denote by fh (t) the output signal. The latter can be obtained
by modulating the input signal with a carrier signal having a comb structure.
More precisely, the carrier signal is the periodic repetition with period T of a

[Figure 3.1 shows the sampling block: input f(t), a switch driven by a clock of period T, output fh(t).]

Figure 3.1: Sampling block.

box signal having height 1/h and support [−h/2, h/2], with h < T. In this way
fh(t) can be understood as a signal that is equal to the input scaled by the factor
1/h in each of the intervals [kT − h/2, kT + h/2], k ∈ Z, and is zero outside these
intervals. Therefore, for all k ∈ Z, ∫_{kT−T/2}^{kT+T/2} fh(t) dt is equal to the average value of
f(t) on the interval [kT − h/2, kT + h/2]. This value converges to f(kT) as h → 0.
The latter case can be viewed as the modulation of the input signal with a Dirac
comb carrier signal, i.e. a train of pulses of the form:

   ϕ(t) = Σ_{k=−∞}^{+∞} δ(t − kT).

The corresponding output is denoted by fδ(t) and is clearly given by:

   fδ(t) = Σ_{k=−∞}^{∞} f(t) δ(t − kT) = Σ_{k=−∞}^{∞} f(kT) δ(t − kT).   (3.1)

If f(t) is a causal function (i.e. f(t) = 0 ∀t < 0), formula (3.1) becomes

   fδ(t) = Σ_{k=0}^{∞} f(t) δ(t − kT) = Σ_{k=0}^{∞} f(kT) δ(t − kT).   (3.2)

Remark 3.1. Notice that technically the Dirac “delta function” is not a function
but a distribution defined by the following integral action on the test functions f
(i.e. infinitely differentiable functions having compact support):
   ∫_{−∞}^{+∞} δ(t − τ) f(τ) dτ = f(t).

From (3.2) it easily follows that:


   ∫_{kT−T/2}^{kT+T/2} fδ(τ) dτ = f(kT),   ∀k.   (3.3)

Remark 3.2. It is now clear that the continuous-time signal fδ (t) and the
discrete-time signal f˜(k) = f (kT ) contain exactly the same information: we can
obtain one from the other by using (3.2) or (3.3). Therefore fδ is an intuitive and
mathematically consistent way of representing the information of a discrete-time
signal by using a continuous-time one.
We will also see that fδ provides a powerful tool to connect the Z-transform
and the Laplace transform.

3.2 Sampling and relations between Z-Transform


and the Laplace transform
Remark 3.3. The (unilateral) Laplace transform is defined, for functions f :
R+ → R (or C), as:

   L[f(t)] = ∫_{0}^{+∞} f(τ) e^{−sτ} dτ.   (3.4)

As a consequence, we have:

   L[δ(t)] = 1,   L[δ(t − kT)] = e^{−kTs}.

Some relevant Laplace transforms are listed in Appendix B.


Let f(t) be a causal function and consider the corresponding function fδ(t)
defined by (3.2). We have

   Fδ(s) := L[fδ(t)] = Σ_{k=0}^{∞} f(kT) e^{−kTs} = [ Σ_{k=0}^{∞} f(kT) z^{-k} ]_{z := e^{Ts}} = [ Z[f̃(k)] ]_{z := e^{Ts}}.   (3.5)

In other words, the Laplace transform of fδ(t) is the Z-transform of f̃(k) := f(kT)
evaluated at z = e^{Ts}.
Next we analyze in greater detail the connections between F (s) = L[f (t)],
Fδ (s) = L[fδ (t)] and F̃ (z) = Z[f˜(k)]. In particular, we look for an explicit
connection between a continuous-time causal signal f (t) (or its Laplace transform
F (s)) and the Laplace transform Fδ (s) of the corresponding modulated signal
fδ (t). In this way we can complete the following diagram.

   f(t)   ──L──>   F(s)
    │ ⊗ϕ(t)          │ ??
   fδ(t)  ──L──>   Fδ(s)
    │ ≈              │ z = e^{Ts}
   f̃(k)   ──Z──>   F̃(z)

We can write (3.2) as

   fδ(t) = Σ_{k=0}^{∞} f(t) δ(t − kT) = f(t) ϕ(t),   (3.6)

where ϕ(t) := Σ_{k=−∞}^{+∞} δ(t − kT) (since f(t) is causal, in the previous sum the
terms corresponding to negative values of k do not give any contribution). Since

ϕ(t) is periodic of period T, it can be written as a Fourier series:

   ϕ(t) = Σ_{k=−∞}^{+∞} a_k e^{jkΩt},   (3.7)

where

   a_k = (1/T) ∫_{−T/2}^{T/2} ϕ(t) e^{−jkΩt} dt = (1/T) ∫_{−T/2}^{T/2} δ(t) e^{−jkΩt} dt = 1/T.

Notice that a_k is independent of k. By plugging these values into (3.7), we get

   ϕ(t) = (1/T) Σ_{k=−∞}^{+∞} e^{jkΩt}.   (3.8)

By taking the latter into account, (3.6) becomes

   fδ(t) = Σ_{k=0}^{∞} f(t) δ(t − kT) = f(t) ϕ(t) = (1/T) Σ_{k=−∞}^{+∞} f(t) e^{jkΩt}.   (3.9)

We can now compute the Laplace transform of both sides of (3.9). We get:

   Fδ(s) := L[fδ(t)] = (1/T) Σ_{k=−∞}^{+∞} F(s − jkΩ).   (3.10)

Therefore, Fδ(jω) is periodic of period Ω (known as the angular frequency of the
sampling). Indeed, Fδ(jω) is obtained by periodic repetition of F(jω). There-
fore, under suitable conditions on the support of F(jω), F(s) can be computed
from Fδ(s) (which means that we can reconstruct the original signal from the sam-
pled one) by means of a low-pass filter. This is better clarified by the following
result.

Figure 3.2: The solid line represents |F(jω)| for a band-limited signal (with band
ωB smaller than Ω/2). The dashed line represents |F(jω)| for a signal with unlimited
band: we see that in this case the multiple copies of the signal generated by the
sampling (corresponding to periodic repetition in the frequency domain) interfere
with one another, so that the original signal F(jω) can no longer be recovered
from the periodic repetition.

Theorem 3.1 (Shannon, 1949). A continuous-time signal f (t) can be recovered


from its samples f˜(k) = f (kT ) by interpolation if and only if it is band-limited
(i.e. there exists ωB such that |F (jω)| = 0 for all ω such that |ω| > ωB ) and the
angular frequency Ω of the sampling satisfies the condition Ω > 2ωB .
When the conditions of the previous Theorem are satisfied, F (s) can be easily
recovered by using the formula

F (s) = A0 (s)Fδ (s) (3.11)

where A0(s) is the transfer function of an ideal low-pass filter defined by

   A0(jω) = { T,  if |ω| ≤ Ω/2
              0,  otherwise }

The frequency Ω/2 is known as the Nyquist frequency.
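A minimal numerical illustration of aliasing (not part of the original notes; the sampling time and frequencies below are arbitrary illustrative choices): a sinusoid above the Nyquist frequency produces exactly the same samples as a lower-frequency alias.

```python
# Sketch: a sinusoid above Omega/2 is indistinguishable from its alias
# after sampling with period T.
import numpy as np

T = 0.1                        # sampling time
Omega = 2 * np.pi / T          # sampling angular frequency (~62.8 rad/s)
w = 50.0                       # signal frequency, above Omega/2 (~31.4 rad/s)
w_alias = w - Omega            # frequency folded into the primary band

k = np.arange(20)
samples       = np.cos(w * k * T)
alias_samples = np.cos(w_alias * k * T)
print(np.max(np.abs(samples - alias_samples)))   # ~ 0: the samples coincide
```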

3.3 Connections Between the Laplace Transform


and the Z-transform
As we have already observed in (3.5) a central role in the connection between the
spectral representations of a continuous-time signal and of its sampled version is
played by the map z = esT which (once T is fixed) maps the complex plane where
the Laplace variable s is defined to the complex plane where the Z-variable z is
defined. This map is of crucial importance in digital control and it is therefore
important to clarify its meaning. To this end let us start with a motivating
example.
Example 3.2. Consider the continuous-time signal f(t) = e^{at}, t ≥ 0. The
corresponding Laplace transform is F(s) = 1/(s − a). On the other hand, the sampled
version of f(t) is f̃(k) = (e^{aT})^k, k ≥ 0, and its Z-transform is F̃(z) = z/(z − e^{aT}),
where T is the sampling time. Thus, not only are both F(s) and F̃(z) rational
functions, but the pole e^{aT} of F̃(z) is obtained by mapping the pole a of F(s) via
the map z = e^{sT}.
This is a particular case of a more general result: if f(t) = q_n(t) e^{at}, t ≥ 0,
with q_n(t) being a polynomial in t of degree equal to n, then F(s) = Q_n(s)/(s − a)^{n+1},
with Q_n(s) being a polynomial in s of degree at most equal to n. The sampled
version of f(t) has the form f̃(k) = q̃_n(k)(e^{aT})^k, k ≥ 0, so that its Z-transform is
F̃(z) = z Q̃_n(z)/(z − e^{aT})^{n+1}. Again, the pole e^{aT} of F̃(z) is obtained by mapping the pole a
of F(s) via the map z = e^{sT}, and the multiplicity of the two poles is the same.
It is possible to further generalize this result and show that if f (t) is a
continuous-time signal whose Laplace transform F (s) is a rational function and
f˜(k), k ≥ 0 is its sampled version, then the Z-transform F̃ (z) of f˜(k) is still a
rational function whose poles can be obtained from the poles of F (s) by the map
z = esT . ♦
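Relation (3.5) and Example 3.2 can be checked numerically by truncating the series defining Fδ(s) and comparing it with F̃(z) evaluated at z = e^{sT}. A short sketch (not part of the original notes; the values of a, T and s are illustrative):

```python
# Sketch: numerical check of (3.5) for f(t) = e^{a t}.
import numpy as np

a, T = -1.0, 0.2
s = 0.5 + 2.0j                              # test point with Re(s) > a

# Laplace transform of f_delta(t): truncated version of the series in (3.5)
k = np.arange(0, 2000)
F_delta = np.sum(np.exp(a * k * T) * np.exp(-s * k * T))

# Z-transform of the sampled signal, evaluated at z = e^{sT}
z = np.exp(s * T)
F_tilde = z / (z - np.exp(a * T))
print(abs(F_delta - F_tilde))               # small truncation error
```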

[Figure 3.3 shows the s-plane, with the primary stripe between −Ω/2 and Ω/2 and several numbered points, and the z-plane with the images of those points relative to the unit circle.]

Figure 3.3: Correspondence between some points on the primary stripe and their
images according to the map e^{sT}.

To analyze the map z = e^{sT}, let us consider s = σ + jω. First of all, we observe
that e^{(s+j2lπ/T)T} = e^{sT} e^{j2lπ} = e^{sT} for all l ∈ Z. In other words, the map z = e^{sT}
is periodic of period Ω := 2π/T along any vertical line of the complex plane.
Therefore, to study the map z = e^{sT} we can restrict attention to the primary
stripe

   S := { s : s = σ + jω,  −Ω/2 < ω ≤ Ω/2 }.

It is easy to see that, once restricted to S, the map z = e^{sT} is injective, i.e. if s1, s2 ∈ S and s1 ≠ s2 then e^{s1 T} ≠ e^{s2 T}. Moreover the image of the map is the whole complex plane except for the origin, i.e. for all z̄ ∈ C \ {0} there exists s̄ ∈ S such that z̄ = e^{s̄T}. The origin z̄ = 0 can be achieved as the image of the map only at the limit for s tending to −∞. More precisely, we have

lim_{Re(s)→−∞} e^{sT} = 0.

It is easy to check that the intersection of S with the left half complex plane is mapped to the open unit disk, the intersection of S with the imaginary axis is mapped to the unit circle S^1, and the intersection of S with the right half complex plane is mapped to the open region outside the unit circle, as depicted in Figure 3.3.
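These region correspondences can be checked numerically; the following sketch (an addition, assuming numpy and an arbitrary sampling time) maps random points of the primary stripe through z = e^{sT}.

    import numpy as np

    T = 0.1
    Omega = 2 * np.pi / T
    rng = np.random.default_rng(0)

    sigma = rng.uniform(-5.0, 5.0, 1000)
    omega = rng.uniform(-Omega / 2, Omega / 2, 1000)
    z = np.exp((sigma + 1j * omega) * T)

    print(np.all(np.abs(z[sigma < 0]) < 1))    # left half of the stripe -> open unit disk
    print(np.all(np.abs(z[sigma > 0]) > 1))    # right half -> outside the unit circle
    print(np.allclose(np.abs(np.exp(1j * omega * T)), 1.0))   # imaginary axis -> unit circle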

3.4 Anti-Aliasing Filters


In practice, none of the signals considered in control problems is perfectly band-
limited: for example causal signals can be band-limited only if they are zero. For
this reason aliasing is inevitable. Thus, before sampling, often the continuous-
time signal is filtered with a low-pass filter in order to reduce the effects of the
aliasing. This filter is known as Anti-Aliasing (A.A.) filter. Ideally, the frequency
response A(jω) of the filter should be a perfect “box”:

A(jω) := 1 if |ω| ≤ Ω/2,   and   A(jω) := 0 if |ω| > Ω/2.

This ideal low-pass response, however, cannot be realized in practice. In fact,


this response does not even correspond to a causal filter. Therefore, we need
to resort to approximate filters that can be causally realized. The approximate
filter, however, introduces a distortion on the signal. Thus, by adding such a filter
in the control loop we modify the dynamics of the overall closed-loop system.
Remark 3.4. Usually, to practically realize the A.A. filter, a first order low-pass Bessel filter or, better, a second order Butterworth filter is used. These low-pass filters affect (usually negatively) the following aspects of the closed-loop system: stability, noise rejection and transient behavior. Therefore, when designing the control system we need to take into account the presence of the A.A. filter in the control loop.
Let us consider the circuit depicted in Figure 3.4. It realizes a second order Butterworth filter. In particular, by selecting C1 = 2ξC/3, C2 = 3C/(2ξ), ωn = 1/(RC), C² = C1 C2, ξ = 1/√2, the transfer function of the filter is

Vout(s)/Vin(s) = −1/(s² R² C1 C2 + 3RC1 s + 1) = −1/(s²/ωn² + 2ξ s/ωn + 1),   (3.12)

Figure 3.4: Circuit realizing a second order low-pass Butterworth filter.

where ωn is the 3dB bandwidth of the filter.


This filter provides a sensible reduction of the high-frequency components of
the signals but introduces a delay that may have relevant effects on the stability
or, at least, on the phase margin of the system.

To give an idea of the effect of the A.A. filter consider the closed-loop system
in Figure 3.5 and assume for the moment that there is no A.A. filter (i.e. that
A(s) = 1). Assume that the gain crossover frequency is ωc (i.e. |P (jωc )| = 1) and
the phase margin of the system is mϕ and that these values satisfy the design
specs. Now let us introduce the A.A. filter in the loop, i.e. A(s) is no longer
equal to 1 but is given by

A(s) = ωn²/(s² + 2ξωn s + ωn²)   (3.13)
which is the transfer function of the circuit in Figure 3.4 except for the sign that
can be changed simply by inverting the output polarity in the circuit.

Figure 3.5: Closed-loop interconnection with an anti-aliasing filter.

In order for the A.A. filter to provide a good attenuation at the Nyquist
frequency Ω/2 without perturbing too much the phase margin, we must design
the sampling frequency and tune the parameters of the A.A. filter in such a way
that
ωc ≪ ωn ≪ Ω/2.

Under this condition, the gain crossover frequency ωc remains essentially unaf-
fected by the presence of the A.A. filter and we can easily estimate the phase
delay −ϕ introduced by the A.A. filter at this frequency which accounts for how
the phase margin is affected by the filter. In fact, we have

ϕ = arg[A(jωc)] = −arctan( 2ξωn ωc / (ωn² − ωc²) ),   (3.14)

and since we are assuming ωc ≪ ωn, this can be approximated as

ϕ ≃ −2ξωc/ωn.   (3.15)
Let

a := 1/|A(jΩ/2)| = | 1 − (Ω/(2ωn))² + j Ωξ/ωn |.   (3.16)
This parameter accounts for the attenuation of the filter at the Nyquist frequency and therefore it must be at least equal to a minimum value set by the specs in order to effectively deal with the aliasing effect.
Since we are assuming Ω ≫ 2ωn we have the following approximation:

a ≃ (Ω/(2ωn))².   (3.17)

Then ωn ≃ Ω/(2√a), which, plugged into (3.15), yields:

ϕ ≃ −4ξ√a (ωc/Ω).   (3.18)

The latter provides a first approximation of the phase margin loss introduced by
the A.A. filter. This value can be used in the design of the controller.
Example 3.3. Consider the following specs: a = 10, |ϕ| < 0.1, ξ = 1/√2 and ωc = 10 rad/s. Compute the minimum sampling (angular) frequency Ω.
By using (3.18) we easily get

|ϕ| < 0.1 ⇒ Ω > 4ξ√a ωc/0.1 = 400√5 ≃ 900 rad/s.
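The arithmetic of the example can be double-checked with a few lines of Python (an added sketch using numpy; the numbers are those of the example):

    import numpy as np

    a, xi, wc, phi_max = 10.0, 1 / np.sqrt(2), 10.0, 0.1

    Omega_min = 4 * xi * np.sqrt(a) * wc / phi_max     # from (3.18)
    print(Omega_min)                                   # ~894 rad/s, i.e. roughly 900 rad/s

    # sanity check with the exact expressions (3.14) and (3.16)
    wn = Omega_min / (2 * np.sqrt(a))                  # from (3.17)
    phi = -np.arctan(2 * xi * wn * wc / (wn**2 - wc**2))
    att = abs(1 - (Omega_min / (2 * wn))**2 + 1j * Omega_min * xi / wn)
    print(phi, att)                                    # ~ -0.1 rad and ~10, as required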

3.5 Comments on quantization


In digital systems not only time is discrete: the values of the signals are quantized.
At each time k the quantized signal can only assume a finite number of values
known as quantization levels. While discretizing time is a linear process, quanti-
zation introduces a “nasty” non-linearity. To deal with it, we treat quantization
as noise, i.e. we consider the quantized signal to be the sum of the original signal
and a noise n(k) for which we have the bound −d/2 < n(k) ≤ d/2, with d being
the difference between two consecutive quantization levels.

Figure 3.6: Quantization of the sampled signal x(k).
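The noise model described above is easy to verify numerically; the sketch below (an addition, assuming numpy and an arbitrary quantization step) checks that the quantization error never exceeds d/2 in magnitude.

    import numpy as np

    d = 0.25                                  # distance between quantization levels
    x = np.sin(0.3 * np.arange(100))          # sampled signal x(k)
    xq = d * np.round(x / d)                  # uniformly quantized signal
    n = xq - x                                # quantization "noise"
    print(np.max(np.abs(n)) <= d / 2)         # True: consistent with the bound above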


Chapter 4

Analysis of Discrete-Time Systems

As discussed before, we are interested in LTI systems described by difference


equations of the form

y(k) = Σ_{i=1}^{N} a_i y(k − i) + Σ_{l=0}^{N} b_l u(k − l).   (4.1)

Next we analyze the properties of these systems in the frequency domain. To


this aim we assume that u(k) is a causal input, i.e. u(k) = 0 for all k < 0. By
taking the Z-Transform of both sides of (4.1) and using the backward translation
property (2.28), we get:

Y(z) := Z[y(k)] = Ã(z) Y(z) + B̃(z) U(z) + C̃(z),   (4.2)

where Ã(z) := Σ_{i=1}^{N} a_i z^{−i}, B̃(z) := Σ_{l=0}^{N} b_l z^{−l} and C̃(z) := Σ_{r=1}^{N} Σ_{k=1}^{r} a_r y(−k) z^{−r+k}.

Notice that C̃(z) is a polynomial in z −1 whose coefficients are linear combinations


of the “initial conditions” y(k), k = −1, −2, . . . , −N of the output. Moreover,
since u(k) is assumed to be causal, the term analogous to C̃(z) involving the
“initial conditions” of u(k) is missing.
Now we can solve (4.2) for Y (z) and we get:

Y(z) = (B(z)/A(z)) U(z) + C(z)/A(z) = H(z)U(z) + Yl(z) = Yf(z) + Yl(z),   (4.3)

where H(z) := B(z)/A(z), Yf(z) := H(z)U(z), Yl(z) := C(z)/A(z), and where we have set A(z) := z^N [1 − Ã(z)] (which is a monic polynomial of degree N in z), B(z) := z^N B̃(z) (which is a polynomial of degree at most N in z) and C(z) := z^N C̃(z) (which is a polynomial of degree at most N in z). The


proper rational function H(z) is known as the transfer function (T.F.) of the
discrete-time system.
In (4.3) the Z-Transform Y(z) of the output of the system has been decomposed as the sum of two terms: Yf(z) and Yl(z). The former, known as the forced response, only depends (linearly) on the input of the system while the latter, known as the free response, only depends (linearly) on the initial conditions {y(−l)}_{l=1}^{N}.
The forced response in the time domain is defined by yf(k) := Z^{−1}[Yf(z)] = Z^{−1}[H(z)U(z)] and can be easily obtained by using the convolution result (2.49):

yf (k) = h(k) ⊗ u(k), (4.4)

where
h(k) = Z^{−1}[H(z)],   (4.5)
is the so-called impulse response of the system (the reason is obvious as when
u(k) = δ(k) is the discrete impulse, U (z) = 1, so that Yf (z) = H(z) and, finally
yf (k) = h(k)).
The transfer function H(z) is rational and proper so that its inverse transform
h(k) can be obtained by the procedure described in §2.6.2. Next, we recall the
main steps of this procedure: we first compute the partial fraction decomposition
of H(z)
z
:
N n −1
i 0 n
H(z) X X 1 X A0,l
= Ai,l l+1
+
z i=0 l=0
(z − pi ) l=0
z l+1

Then we have
N n i −1 n0
X X z X
H(z) = Ai,l l+1
+ A0,l z −l
i=0 l=0
(z − pi ) l=0

and, finally, by taking the inverse Z-Transform:


N n i −1   n0
X X Ai,l k k X
h(k) = l
pi + A0,l δ(k − l)
i=0 l=0
p i l l=0
N n i
X X Ai,l−1   n0
k k X
= p + A0,l δ(k − l) + A δ(k) (4.6)
p l
i l i
| {z } | 0,0{z }
i=0 l=0 l=1
| {z } FIR modes discrete impulse
IIR modes
Some comments are in order:

1. The expression in (4.6) shows that the impulse response of the system can be additively decomposed as the sum of an impulse and a linear combination of discrete functions of the form \binom{k}{l} p_i^k and δ(k − l), where p_i are the non-zero poles of the system's transfer function. The modes δ(k − l) are associated with the poles of H(z) at the origin and are non-zero only at the discrete time instant l: for this reason they are called the modes of the Finite Impulse Response (FIR) part of the transfer function H(z). On the contrary, the modes of the form \binom{k}{l} p_i^k are associated with the non-zero poles p_i of H(z) and are non-zero for all k ≥ l: for this reason they are called the modes of the Infinite Impulse Response (IIR) part of the transfer function H(z).
The position of the poles in the complex plane is associated with the asymptotic character of the corresponding modes as follows (a numerical sketch is given after these comments):

• if |p_i| < 1 ⇒ the corresponding modes are convergent (i.e. they decay to zero as k diverges);
• if |p_i| > 1 ⇒ the corresponding modes are divergent (i.e. they explode as k diverges);
• if |p_i| = 1 and the pole multiplicity is greater than 1 ⇒ the corresponding modes are divergent;
• if |p_i| = 1 and the pole multiplicity is equal to 1 ⇒ the corresponding modes are bounded but not convergent (i.e. as k diverges, they remain bounded but they do not converge to zero).

2. If the transfer function H(z) has only the pole p = 0 and n_0 is the pole multiplicity, then it has the form

H(z) = N(z)/z^{n_0}

with N(z) being a polynomial of degree at most n_0. In this case, H(z) can be rewritten as

H(z) = Σ_{l=0}^{n_0} A_l/z^l

with A_{n_0} ≠ 0. The impulse response is therefore

h(k) = Σ_{l=0}^{n_0} A_l δ(k − l).   (4.7)

The sum in (4.7) is finite so that h(k) is identically zero after n_0 steps. In this case the system is purely FIR. Notice that this is a situation that has no analogue in continuous time.

3. The transfer function H(z) is rational and proper; in fact, it can be written
as the ratio H(z) = B(z)/A(z), with deg[A(z)] ≥ deg[B(z)]. The relative degree
r := deg[A(z)] − deg[B(z)] of H(z) has a very important meaning. Indeed
let us consider an input u(k) and let U (z) be its Z-Transform. Since U (z)
is analytic outside the radius of convergence we have that lim|z|→∞ U (z)
exists and is finite and it is zero if and only if u(0) = 0. Therefore, by using

the argument in (2.53) we immediately see that the forced response yf (k)
of the system to the input u(k) is such that

yf (0) = yf (1) = · · · = yf (r − 1) = 0;

moreover, yf (r) 6= 0 if and only if u(0) 6= 0. Then r has the meaning of the
intrinsic delay of the system. We recall that in continuous-time a delay is
only produced by a non-rational transfer function.
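The numerical sketch announced above (an addition; it assumes scipy and the hypothetical transfer function H(z) = 1/(z²(z − 0.5)), chosen for illustration) computes the impulse response of a system with relative degree r = 3, one IIR pole in 0.5 and a double pole at the origin.

    import numpy as np
    from scipy import signal

    num = [1.0]                    # H(z) = 1/(z^3 - 0.5 z^2), relative degree r = 3
    den = [1.0, -0.5, 0.0, 0.0]
    k, y = signal.dimpulse((num, den, 1), n=8)
    print(y[0].ravel())
    # [0. 0. 0. 1. 0.5 0.25 0.125 0.0625]: the first r = 3 samples are zero
    # (intrinsic delay), then the IIR mode 0.5^k associated with the pole in 0.5 appears.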

Let us now discuss a simple example highlighting an important issue that, if


neglected, may lead to blunders.

Example 4.1. Consider the following system:

y(k) + ay(k − 1) = u(k) + au(k − 1).

Its transfer function is:


H(z) = (z + a)/(z + a) = 1.   (4.8)
One may be led to think that y(k) = u(k) but this is wrong! This is true only for the forced response yf(k) = u(k); for the full response of the system we need to take into account also the free response yl(k) and hence the initial condition y(−1). In this case the free response is yl(k) = (−a)^{k+1} y(−1), so that when |a| > 1 the output diverges whenever y(−1) ≠ 0 (a simulation sketch is given after the comments below). We can conclude that

• When we consider the transfer function, we are restricting attention to the part of the behaviour of the system associated with the forced response yf(k).

• If the polynomials A(z) := z N [1 − Ã(z)] and B(z) = z N B̃(z), where Ã(z)


and B̃(z) are defined in (4.2), have common zeros, then part of the system’s
dynamics does not appear in the input-output behaviour of the system; in
fact, the only relevant factors for the latter are the poles of the transfer
function, i.e. the zeros of A(z) that are not “canceled” by zeros of B(z).
On the other hand, also the zeros of A(z) that are canceled by zeros of
B(z) are relevant for the system’s dynamics as they correspond to modes
appearing in the free response of the system.

• Even if we restrict attention to the input-output behaviour of the system,


zeros of A(z) that are canceled by zeros of B(z) are problematic if those
zeros have magnitude greater than or equal to 1. In fact, if as a consequence
of small variations of the parameters, the cancellation is not perfect, new
modes emerge in the input-output behaviour that do not converge to zero.
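The danger of the cancellation in Example 4.1 can be made concrete with a short simulation (an added sketch, assuming numpy and the illustrative value a = 1.5):

    import numpy as np

    # y(k) + a*y(k-1) = u(k) + a*u(k-1): the transfer function is 1, yet the
    # cancelled mode (-a)^k still appears in the free response.
    a, N = 1.5, 20
    u = np.ones(N)                  # any bounded input
    y = np.zeros(N)
    y_prev, u_prev = 1.0, 0.0       # nonzero initial condition y(-1) = 1
    for k in range(N):
        y[k] = -a * y_prev + u[k] + a * u_prev
        y_prev, u_prev = y[k], u[k]
    print(y[:6])                    # the output diverges even though H(z) = 1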

4.1 Stability of Discrete-Time Systems


Definition 4.1 (Asymptotic Stability). An LTI system (Σ) is said to be asymp-
totically stable if

lim_{k→+∞} yl(k) = 0   ∀ initial condition {y(−1), y(−2), . . . },   (4.9)

where yl (k) is the free response of the system.

As a consequence of (4.3) we have that (Σ) is asymptotically stable if and only if all the zeros of the polynomial A(z) have magnitude strictly smaller than 1, i.e. they are inside the open unit disk. In fact, we can always find initial conditions of y such that the polynomial C(z) does not cancel the zeros of A(z).

Definition 4.2 (BIBO Stability). An LTI system (Σ) is said to be BIBO (Bounded
Input Bounded Output) stable if for any (causal) bounded input signal u(k), the corresponding forced response yf is bounded, i.e.

∀ u ∈ ℓ^∞_+ ,   yf(k) = h(k) ⊗ u(k) ∈ ℓ^∞_+ .   (4.10)

The forced response yf(k) = Σ_{l=0}^{k} h(l)u(k−l) = Σ_{l=−∞}^{+∞} h(l)u(k−l) is a linear functional of the input u(k). Therefore, BIBO stability may be mathematically formulated as the fact that this functional maps ℓ^∞ to ℓ^∞, i.e. ℓ^∞ is an invariant subspace of the functional. It can be shown that this is equivalent to the fact that the impulse response of the system is in ℓ^1, i.e. Σ_{k=0}^{+∞} |h(k)| < +∞. The latter condition is clearly equivalent to the fact that all the poles of the transfer function H(z) are inside the unit disk, i.e. |p_i| < 1 for all the poles p_i of H(z).
As a consequence, we have the following implications:

• (Σ) asymptotically stable =⇒ (Σ) BIBO stable: in fact, the poles of H(z)
are necessarily a subset of the zeros of A(z).

• If the common zeros of A(z) and B(z), if any, have all magnitude less than
1 then: (Σ) asymptotically stable ⇐⇒ (Σ) BIBO stable.

4.2 Criteria for Stability: Schur Polynomials


As much as in continuous-time BIBO-stability and asymptotic stability of an
LTI finite-dimensional system can be tested by checking whether or not a cer-
tain polynomial is Hurwitz (i.e. all its zeros are in the open left half-plane), in
discrete-time a similar condition holds. In fact, we have seen that the system
is asymptotically stable if and only if all the zeros of the polynomial A(z) are

strictly inside the unit disk and the system is BIBO-stable if and only if all the ze-
ros of the denominator of a coprime representation of the transfer function H(z)
are strictly inside the unit disk. Therefore the only difference with respect to
the continuous-time case is that the region of the complex plane where the zeros
must be is the open unit disk instead of the left half-plane. To check stability of
a discrete-time system it would then be useful to have a discrete-time counterpart
of the Routh-Hurwitz test, i.e. a test that, given the coefficients of a polynomial,
checks whether or not all its zeros are in the open unit disk without computing
said zeros. To discuss this issue the following definition comes in handy.
Definition 4.3. A polynomial A(z) is said to be a Schur polynomial if all its
zeros are inside the unit disk of the complex plane.
We now describe two different approaches to check whether or not a given
polynomial is a Schur polynomial (without computing its zeros):
1. Jury test: it is an algebraic test that, given the coefficients of a polynomial,
provides a necessary and sufficient condition for the polynomial to be Schur.

2. Bilinear transform: it is a transform that given a polynomial A(z), provides


a new polynomial Ã(s) such that A(z) is Schur if and only if Ã(s) is Hurwitz.

4.2.1 Jury Test


Let A(z) = Σ_{k=0}^{n} a_k z^{n−k} and, without loss of generality, assume that a0 > 0. The
following is known as the Jury table associated with A(z). Here, the coefficients

a0 a1 a2 ··· ··· ··· an


an an−1 an−2 ··· ··· ··· a0
b0 b1 b2 ··· ··· bn−1 0
bn−1 bn−2 bn−3 ··· ··· b0 0
c0 c1 c2 ··· cn−2 0 0
cn−2 cn−3 cn−4 ··· c0 0 0
.. .. .. .. .. .. ..
. . . . . . .
.. .. .. .. .. .. ..
. . . . . . .
r0 0 ··· ··· ··· ··· 0

bi are obtained from the coefficients ai by


 
b_i := (1/a0) det [ a0  a_{n−i} ; a_n  a_i ].   (4.11)

The coefficients ci are obtained from the coefficients bi by following the same
procedure, and so on for the subsequent rows of the table. Notice that from
(4.11) we get bn = 0, as shown in the table. The following result provides a
simple test to check whether or not a given polynomial is a Schur polynomial.

Theorem 4.1. Consider a polynomial A(z) and the corresponding Jury table
built as just shown. Then the polynomial A(z) is Schur if and only if all the
coefficients a0 , b0 , c0 ,. . . , r0 have the same sign.

If we denote by {pk }nk=1 the zeros of A(z), the previous theorem can be sum-
marised by the following formula:

|p_k| < 1, k = 1, . . . , n ⇐⇒ b0/a0 > 0, c0/a0 > 0, . . . , r0/a0 > 0.   (4.12)

Example 4.2. Let us consider the polynomial:


A(z) := z³ + z² + z + 1/2,

and the corresponding Jury Table (Table 4.1), obtained as described above.

1 1 1 1/2
1/2 1 1 1
3/4 1/2 1/2 0
1/2 1/2 3/4 0
5/12 1/6 0 0
1/6 5/12 0 0
7/20 0 0 0

Table 4.1: Jury Table associated with Example 4.2.

In this case, all the relevant coefficients of the first column of the table are
positive and we can conclude that A(z) is a Schur polynomial. By computing
explicitly the zeros of A(z) we get the three zeros p1,2 = −0.1761 ± j0.8607,
p3 = −0.6487, with magnitudes |p1,2 | = 0.878 and |p3 | = 0.647 which are, as
expected, smaller than 1. ♦
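The construction of the Jury table is easy to automate. The following sketch (an addition, assuming numpy; it is not meant as a robust implementation, e.g. it assumes that the leading entries of the rows never vanish) reproduces the first column of Table 4.1 and compares the outcome with the zeros computed numerically.

    import numpy as np

    def jury_first_column(coeffs):
        # coeffs: polynomial coefficients in descending powers, with coeffs[0] > 0
        row = np.asarray(coeffs, dtype=float)
        first = [row[0]]
        while len(row) > 1:
            rev = row[::-1]
            # next row, element-wise version of (4.11): (a0*a_i - a_n*a_{n-i})/a0
            row = (row[0] * row - row[-1] * rev)[:-1] / row[0]
            first.append(row[0])
        return first

    A = [1, 1, 1, 0.5]                     # Example 4.2
    col = jury_first_column(A)
    print(col)                             # [1.0, 0.75, 0.4166..., 0.35]
    print(all(c > 0 for c in col))         # True: A(z) is Schur
    print(np.max(np.abs(np.roots(A))))     # ~0.878 < 1, as found above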

4.2.2 Bilinear (or Möbius) Transform


This transform, that will be denoted by M(·), is a map of the complex plane deprived of the point z = −1 onto the complex plane itself. Moreover the image of
this map, restricted to the set {z : |z| < 1} is the open left complex half-plane
{s : Re(s) < 0} and, conversely, any point in {s : Re(s) < 0} is the image of a
point in {z : |z| < 1} via the map M (·). The map is defined by:
C → C,   z ↦ s = M(z) := (z − 1)/(z + 1).   (4.13)
Notice that
z = M^{−1}(s) = (1 + s)/(1 − s).

Figure 4.1: Bilinear (or Möbius) Transform.

Therefore, M(z) is finite and well defined for all z ≠ −1 and can take any complex value except 1.
Next we prove that |z| < 1 if and only if <(M(z)) < 0. Let z = a + jb, so that |z| = √(a² + b²). We have

M(z) = (z − 1)/(z + 1) = (a² − 1 + b² + j2b)/((a + 1)² + b²),

so that

<(M(z)) = (a² − 1 + b²)/((a + 1)² + b²).
Therefore, we have

• |z| < 1 if and only if < (M (z)) < 0.

• |z| > 1 if and only if < (M (z)) > 0 and M (z) 6= 1.

• |z| = 1 and z 6= −1 if and only if < (M (z)) = 0.

• z = −1 if and only if M(z) = ∞.

By using the bilinear transform, we can analyze the BIBO-stability of a ratio-


nal function H(z) understood as the transfer function of a discrete-time system.
In fact, we can compute the continuous-time transfer function

Hc(s) := H(z)|_{z = M^{−1}(s) = (1+s)/(1−s)}

and analyze its stability by employing the Routh-Hurwitz test (see Appendix
C.1).
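A quick numerical check of the equivalence between |z| < 1 and <(M(z)) < 0 (an added sketch, assuming numpy and reusing the polynomial of Example 4.2):

    import numpy as np

    A = [1, 1, 1, 0.5]             # Schur polynomial of Example 4.2
    z = np.roots(A)
    s = (z - 1) / (z + 1)          # Mobius images of the zeros
    print(np.all(np.abs(z) < 1))   # True
    print(np.all(s.real < 0))      # True: the mapped zeros lie in the open left half-plane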
Chapter 5

Interconnections between continuous-time and discrete-time systems

Interconnections between discrete-time systems can be handled exactly as those


for continuous time systems. In fact, in view of linearity and of the convolution
result, all the block-diagram representations of continuous-time and discrete-time
systems follow exactly the same rules. For example, consider the top diagram
of Figure 5.1, whose closed-loop transfer function is, as is well known, W(s) = C(s)P(s)/(1 + C(s)P(s)). In the discrete time case, we can consider the same diagram (see the bottom diagram of Figure 5.1) whose closed-loop transfer function can be easily seen to be

W(z) = C(z)P(z)/(1 + C(z)P(z)),   (5.1)
where the structure of W (z) is obtained by following exactly the same steps as
those used to obtain W (s) in continuous-time.

Figure 5.1: Feedback interconnections of continuous-time systems (top diagram)


and of discrete-time systems (bottom diagram).

The rest of this section will be dedicated to the much more delicate analysis
of closed-loop control systems where the controller is modelled as a discrete-
time system and the to-be-controlled plant as a continuous-time one. Of course, to

connect these systems we need to use a sampling block and a Digital/Analogue


(D/A) interface.
Let us consider the block diagram depicted in Figure 5.2, where A/D and D/A blocks are present. The most typical choice for the D/A interface in control architectures is the so called Zero Order Holder (ZOH), usually denoted by a block H0. Its input is a discrete-time signal ũ(k) and the corresponding output is a piece-wise constant continuous-time signal ū(t) that takes the value ū(t) = ũ(k) for all t ∈ [kT, (k + 1)T), with T being the sampling time.

Figure 5.2: Interconnection of continuous-time and discrete-time systems with


D/A and A/D converters.

We can write the relation between the discrete-time signal ũ(k) and the cor-
responding continuous-time signal ū(t) as:
ū(t) = Σ_{k=0}^{+∞} ũ(k) (δ_{−1}(t − kT) − δ_{−1}(t − (k + 1)T)).   (5.2)

Since the input of H0 is a discrete-time signal and its output is a continuous-


time signal we cannot define a transfer function. We can, however, consider a
continuous-time version of H0 , that we denote by H00 that has the same output
of H0 , but whose input is a continuous-time version of ũ(k) i.e. the pulse signal
defined as

u_δ(t) := Σ_k ũ(k) δ(t − kT).

It is now easy to obtain the (continuous-time) transfer function of H00 . In fact,


the Laplace transform of ū(t) is
Ū(s) = Σ_{k=0}^{+∞} ũ(k) (1/s)(e^{−ksT} − e^{−(k+1)sT})
     = ((1 − e^{−sT})/s) Σ_{k=0}^{+∞} ũ(k) e^{−ksT},   (5.3)

where the last sum is U_δ(s) := L[u_δ(t)],

with T being the sampling time. It immediately follows from (5.3) that the
transfer function of the ZOH in the Laplace variable s is

H00(s) = (1 − e^{−sT})/s.   (5.4)

This result will be useful below when we compute the overall discrete transfer
function of the series interconnection obtained by cascading H0 , P (s) and a sam-
pling block.
Remark 5.1. Some comments are in order.
• In place of the ZOH we could use higher order holders, for example the first
order holder (FOH). These devices however, perform a derivative action
with the disadvantage of amplifying the noise.
• ZOHs are fast, cheap and easy to implement, as they can be built by using op-amps and resistors.
• As we will see in more detail in the following, on average the ZOH introduces a delay corresponding to half of the sampling time T.

5.1 Conversion between continuous and discrete systems
For digital control design we are interested in two opposite types of conversions:
1. Discrete-time model of a continuous-time system: this is represented in
Figure 5.3 and is used when we need to design the controller in the discrete
domain but the to-be-controlled plant is a continuous-time one.
2. Continuous-time model of a discrete-time controller : this is represented in
Figure 5.4 and is used when we need to design a discrete-time controller
emulating the behaviour of a given continuous time one.

Figure 5.3: Discrete-time model of a continuous-time system.

In other words, in the first case we are interested in a discrete-time transfer


function F (z) describing the behaviour of the cascade of systems depicted at the
bottom of Figure 5.3. In the second case we seek an approximation C(s) of
the behaviour of the cascade of systems depicted at the bottom of Figure 5.4.
The remainder of this section will be dedicated to the solution of the first
problem. Indeed, we shall compute the transfer function F (z) and show that it
accounts exactly for the behaviour of the cascade of systems depicted at the
bottom of Figure 5.3, i.e. no approximations are needed.

Figure 5.4: Continuous-time model of a discrete-time controller.

Consider the cascade depicted in Figure 5.5 and assume that F (s) is the
transfer function of a causal system. We also assume (and this is a fundamental
assumption) that the sampling and the hold devices are synchronous, i.e. that
they work with the same clock of period T .

Figure 5.5: Block diagram of the cascade ”ZOH”, F (s) and ”Sampling”.

Let H(s) := F (s)H00 (s), where H00 (s) is the continuous-time version of the
ZOH as defined above. We first compute the forced output of the filter with
transfer function H(s) fed with the pulse input

u_δ(t) = Σ_{l≥0} ũ(l) δ(t − lT).

Let h(t) := L^{−1}[H(s)]; we have:

y(t) = ∫_0^t h(t − τ) u_δ(τ) dτ
     = ∫_0^t h(t − τ) ( Σ_{l≥0} ũ(l) δ(τ − lT) ) dτ
     = Σ_{l≥0} ũ(l) ( ∫_0^t h(t − τ) δ(τ − lT) dτ ).   (5.5)

Notice that
∫_0^t h(t − τ) δ(τ − lT) dτ = h(t − lT) if lT ≤ t,   and 0 if lT > t.

Notice also that both the system of transfer function F (s) and the continuous-
time version of the ZOH are causal so that their cascade, whose transfer func-
tion is H(s) := F (s)H00 (s), is a causal system as well, i.e. its impulse response

h(t) := L−1 [H(s)] is a causal function i.e. it vanishes for all negative values of t.
Therefore, we have
∫_0^t h(t − τ) δ(τ − lT) dτ = h(t − lT).
0

By plugging this expression in formula (5.5) and by setting f˜(k) := h(kT ), we


get the sampled version of y(t) that is given by

ỹ(k) = y(kT) = Σ_{l≥0} f̃(k − l) ũ(l).   (5.6)

By taking the Z-Transform on both members of (5.6), in view of the convolution


Theorem, we get:
Ỹ (z) = F̃ (z)Ũ (z), (5.7)
where F̃ (z) is the Z-Transform of f˜ which, in turn, is the sampled version of
h(t) = L−1 [H0 (s)F (s)].
Let S[·; T ] denote the sampling operator with time T which is defined by:

S[f (t); T ] = f˜(k) = f (kT ), k ∈ Z+ .

We will also use the brief notation ST [·] in place of S[·, T ].


By taking into account (5.4) and the linearity of the operators of inverse
Laplace Transform, Z-Transform and sampling ST [·], we now have

F̃(z) = Z[ S[ L^{−1}[ H00(s)F(s) ] ; T ] ]
      = Z[ S[ L^{−1}[ ((1 − e^{−sT})/s) F(s) ] ; T ] ]
      = (1 − z^{−1}) Z[ S[ L^{−1}[ F(s)/s ] ; T ] ].   (5.8)

Often, with abuse of notation, the expression F̃(z) = (1 − z^{−1}) Z[F(s)/s] is used.
It is important to observe that the transfer function F̃(z) obtained in (5.8) involves no approximation, so that we have an exact formula for the discrete-time transfer function of the cascade depicted in Figure 5.5. As we
shall discuss below, if F (s) is a rational function then F̃ (z) turns out to be also
rational and this is an important and surprising result as the transfer function
H00 (s) is not rational.

Remark 5.2. Some comments are in order.

• It is important to observe that the cascade where we first hold a discrete-time signal and then sample the result performs the identity operator on the set of discrete-time signals. This is essentially the reason why the function F̃(z) obtained in (5.8) involves no approximation. On the contrary

the cascade of the same two operators in the opposite order is not the
identity: its output, indeed, is always a piece-wise constant function which
is typically different from the input signal which is, in general, not a piece-
wise constant signal. Indeed, the latter cascade is a projection whose output
retains only the information corresponding to values of the input at the
sample points.

• In spite of the previous observation, the cascade in Figure 5.4 (in which
some of the information coded in the input is indeed lost and hence does
not appear in the output) can be used to approximate a continuous-time
system.

• Equation (5.8) has a very important interpretation. In fact, by taking into account that L[δ_{−1}(t)] = 1/s and Z[δ_{−1}(k)] = (1 − z^{−1})^{−1}, (5.8) can be rewritten as

Z^{−1}[ (z/(z − 1)) F̃(z) ] ≡ S[ L^{−1}[ F(s)/s ] ; T ].   (5.9)

Thus the step-response of the discrete-time system of transfer function F̃(z) coincides with the sampled version of the step-response of the original continuous-time system of transfer function F(s).

• As already observed, when F (s) is rational, F̃ (z) is also rational and, by


using the rules for the inverse Laplace Transform and for the Z-Transform,
we easily see that the poles of F (s) are mapped into poles of F̃ (z) by
following the map
s = p_i → z = e^{p_i T}
that we have already seen in the sampling chapter. A more delicate question
concerns the zeros: what is the relation between the zeros of F (s) and those
of F̃ (z)? Not only there is no simple way to describe how the zeros of F (s)
are mapped into those of F̃ (z), but it may also well happen that the number
of zeros of F (s) is different from the number of zeros of F̃ (z): typically, F̃ (z)
has more zeros than F (s); these extra zeros are called sampling zeros.

• The presence of the sampling zeros may be intuitively explained as follows:


if the transfer function F(s) is rational and strictly proper, its step-response y(t) = L^{−1}[F(s)/s] is zero for t = 0 and is non-zero for almost all values of
t > 0, as it is a linear combination of the modes of the system and of the
step signal. Then, except for very special choices of the sampling time T ,
the relative degree of the transfer function F̃ (z) is 1, independently on the
relative degree of F (s). This notwithstanding, it could happen that

– F(s) is not rational and has a delay, so that it has the form F(s) = e^{−T′s} F0(s) with T′ > T;
– F(s) is rational but its step-response y(t) = L^{−1}[F(s)/s] is such that y(T) = 0.

In these cases the relative degree is at least equal to 2.


Let us see an example of computation of (5.8).
Example 5.1. Let G(s) := 2/((s + 1)(s + 2)). Compute the corresponding discrete trans-
fer function obtained by cascading a sampling block of period T , G(s), and a
ZOH.
Solution. By developing the computations in (5.8), we have
G(s)/s = 1/s − 2/(s + 1) + 1/(s + 2)   →(L^{−1})→   δ_{−1}(t) − 2e^{−t} + e^{−2t},

so that

G̃(z) = ((z − 1)/z) Z[ δ_{−1}(k) − 2(e^{−T})^k + (e^{−2T})^k ]
      = ((z − 1)/z) ( z/(z − 1) − 2z/(z − e^{−T}) + z/(z − e^{−2T}) )
      = (e^{−T} − 1)² (z + e^{−T}) / ( (z − e^{−T})(z − e^{−2T}) ).
Notice that the relative degree of G(s) is 2 while the discrete-time counterpart
G̃(z) has relative degree equal to 1. One sampling zero z1 = −e−T is present in
G̃(z) while G(s) has no zeros. ♦
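The closed-form result of Example 5.1 can be cross-checked against a standard ZOH discretization routine. The sketch below (an addition; it assumes scipy and the arbitrary sampling time T = 0.5) compares scipy's zero-order-hold conversion with the formula obtained above.

    import numpy as np
    from scipy import signal

    T = 0.5
    num, den = [2.0], [1.0, 3.0, 2.0]                 # G(s) = 2/((s+1)(s+2))
    numd, dend, _ = signal.cont2discrete((num, den), T, method='zoh')
    print(np.ravel(numd), dend)

    # closed-form coefficients from Example 5.1
    e1, e2 = np.exp(-T), np.exp(-2 * T)
    print([(e1 - 1)**2, (e1 - 1)**2 * e1])            # numerator (e^{-T}-1)^2 (z + e^{-T})
    print([1.0, -(e1 + e2), e1 * e2])                 # denominator (z - e^{-T})(z - e^{-2T})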

5.2 Stability of interconnections


5.2.1 Stability of a negative feedback loop
Consider the block diagram depicted in Figure 5.6, and let C(z) = Nc(z)/Dc(z) and P(z) = Np(z)/Dp(z) be rational proper transfer functions. The closed-loop transfer function is

W(z) = kC(z)P(z)/(1 + kC(z)P(z)) = k Nc(z)Np(z) / ( Dc(z)Dp(z) + k Nc(z)Np(z) ),   (5.10)

where we set N(z) := Nc(z)Np(z) and D(z) := Dc(z)Dp(z).

Figure 5.6: Negative feedback with gain k.

To analyze the stability of W (z) we consider two separate cases:



1. k is fixed: in this case we resort to the Jury stability criterion or to the


bilinear transform;

2. k can change and must be selected: root locus.

To draw the root locus we use the same rules of the continuous-time case (clearly
the form of the locus does not depend on the fact that the complex variable is
named z instead of s). The important difference, however, is in the interpretation
of the result. Indeed, the critical points are no longer those in which the locus
intersects the imaginary axis; now they are the intersections between the locus and
the unit circle {z : |z|2 = <(z)2 + =(z)2 = 1}. In fact, now the stability region
is the one contained inside the unit circle, i.e. {z : |z|2 = <(z)2 + =(z)2 < 1}.

Figure 5.7: Root locus (in red) in the z and s domains with the corresponding
critical points.

To compute the critical values kcr of the gain k, we need to solve the equation

kcr N (ejϕcr ) + D(ejϕcr ) = 0. (5.11)

in the two unknowns kcr and ϕcr . This is particularly easy when the form of the
locus allows to conclude that z = ±1 are critical points.

Example 5.2. Let N(z) = c ∈ R+ and D(z) = z + 1/2. The relative degree of N(z)/D(z) is r = 1 so that the locus has an asymptote lying in the negative real half-line. The (positive) root locus is depicted in Figure 5.8.

Figure 5.8: Root locus (in red) for the example 5.2.

In this case, equation (5.11) is easy to solve and gives


kcr c + (−1) + 1/2 = 0 ⇒ kcr = 1/(2c).

Example 5.3. Let us consider the block diagram depicted at the top of Figure 5.9, where P̃(s) := 1/(s + 1) (we use the notation P̃(s) in such a way that P can be
reserved for the discrete-time system). We want to compute the corresponding
discrete-time system depicted at the bottom of Figure 5.9 and discuss the stability
of the closed-loop system. By using equation (5.8), we get:
Figure 5.9: Block diagrams for Example 5.3.
P(z) = (1 − z^{−1}) Z[ S[ L^{−1}[ P̃(s)/s ] ; T ] ]
     = (1 − z^{−1}) Z[ S[ L^{−1}[ 1/s − 1/(s + 1) ] ; T ] ]
     = (1 − z^{−1}) Z[ S[ δ_{−1}(t) − e^{−t} ; T ] ]
     = (1 − z^{−1}) ( z/(z − 1) − z/(z − e^{−T}) )
     = (1 − e^{−T})/(z − e^{−T}).
The open-loop transfer function is
k′ (z/(z − 1)) P(z) = k′ (z/(z − 1)) · (1 − e^{−T})/(z − e^{−T}) =: k N(z)/D(z) = k z/((z − 1)(z − e^{−T})),   (5.12)
where we have defined k := k′(1 − e^{−T}). We want to analyze stability of (5.12) in terms of k′ and T. To this aim we draw the locus (in terms of the new parameter k) and, after computing the critical values of k, we will easily solve the problem in terms of the original parameters k′ and T.
The root locus is depicted in Figure 5.10, where we see that ϕcr = π, so that
(5.11) implies
0 = kcr N(−1) + D(−1) = −kcr + (−1 − 1)(−1 − e^{−T}) ⇒ kcr = 2(1 + e^{−T}) ⇒ k′cr = 2(1 + e^{−T})/(1 − e^{−T}).   (5.13)

Figure 5.10: Root locus (in red) for the example 5.3.

From (5.13), we immediately see that the critical value k′cr depends on T and gets smaller and smaller as T increases. On the contrary, as T tends to 0 the critical value k′cr tends to infinity, so that the closed loop system tends to be stable for any positive value of the gain. ♦
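The critical gain (5.13) can be verified numerically, e.g. with the following sketch (an addition, assuming numpy and the arbitrary choice T = 1):

    import numpy as np

    T = 1.0
    kcr = 2 * (1 + np.exp(-T))                       # critical k from (5.13)
    for k in [0.9 * kcr, kcr, 1.1 * kcr]:
        # closed-loop characteristic polynomial (z-1)(z-e^{-T}) + k z
        den = [1.0, k - 1 - np.exp(-T), np.exp(-T)]
        print(round(k, 3), round(np.max(np.abs(np.roots(den))), 4))
    # the largest pole magnitude is < 1 below kcr, exactly 1 at kcr and > 1 above it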

In general, to compute the points where the root locus crosses the unit cir-
cumference {z : |z|2 = <(z)2 + =(z)2 = 1} we can use the bilinear transform
(4.13) which, as we have seen, bijectively maps the unit circumference deprived of
the point −1, to the imaginary axis of the complex plane. Notice that the missing
point −1 can be treated separately by using (5.11). By resorting to the bilinear
transform, equation (5.11) may be rewritten as
   
kN(z) + D(z) = 0 ⇒ kN((1 + w)/(1 − w)) + D((1 + w)/(1 − w)) = 0.   (5.14)

By setting N′(w) := N((1 + w)/(1 − w)) and D′(w) := D((1 + w)/(1 − w)), we get

kN′(w) + D′(w) = 0,

and we can compute the critical values kcr of k by using the Routh Criterion. Once obtained kcr, we can compute the values jωcr such that kcr N′(jωcr) + D′(jωcr) = 0, and hence the values of z for which the root locus crosses the unit circumference are easily obtained as zcr = e^{jϕcr} = (1 + jωcr)/(1 − jωcr).
.
Alternatively, we can compute the points where the root locus crosses the
unit circumference by resorting to the Jury Criterion (after constructing a Jury
table whose elements are functions of k).

Example 5.4. Consider the closed-loop system depicted in Figure 5.11. Com-
pute the corresponding critical values of the gain k.
In this case the denominator of the closed-loop transfer function is Q(z) :=
k + 2z 2 − 3z + 1. By using the bilinear transform, we get

Q((1 + w)/(1 − w)) = 2 (1 + w)²/(1 − w)² − 3 (1 + w)(1 − w)/(1 − w)² + (k + 1) (1 − w)²/(1 − w)²

Figure 5.11: Closed-loop system of Example 5.4: negative unity feedback around the open-loop transfer function k/((z − 1)(2z − 1)).

Figure 5.12: Root locus (in red) for the example 5.4.

whose numerator

NQ (w) = (6 + k)w2 + 2(1 − k)w + k,

is a Hurwitz polynomial if and only if 0 < k < 1. Therefore, kcr,1 = 0 ⇒ wcr,1 = 0 ⇒ zcr,1 = 1, and kcr,2 = 1 ⇒ wcr,2,3 = ±j/√7 ⇒ zcr,2,3 = (3 ± j√7)/4. ♦

5.2.2 Internal stability of an interconnection


BIBO stability of an interconnection is a very weak and sneaky property. Indeed,
one is led to think that the behaviour of a BIBO-stable system is docile; however, if this stability is achieved thanks to pole-zero cancellations in the instability region, disasters will almost certainly happen in practice. For this reason, when
dealing with interconnections a much stronger property is always imposed:

Definition 5.1. Let us consider an interconnection with p blocks and let Pl (z),
l = 1, 2, . . . , p, be the transfer function of the l−th block. Let us perturb ad-
ditively the input of each block by adding an auxiliary input ul , l = 1, 2, . . . , p.
Let yl denote the output of the l-th block. The original interconnection is said to
be internally stable if for all i, l = 1, 2, . . . , p, the overall transfer function Wil (z)
from the input ul to the output yi is BIBO-stable.
Example 5.5. Consider the block diagram in Figure 5.13, where F(z) := (z + 2)/(z − 1/2) and G(z) := (z − 1/2)/((z + 2)(z − 1/3)). Notwithstanding the fact that the transfer function from r to y is BIBO-stable:

Y(z)/R(z) = 1/(z + 2/3),

Figure 5.13: Block diagram of Example 5.5: negative feedback loop with F(z) and G(z) in the forward path, the disturbance n(k) entering at the input of G(z).

the transfer function from n to y is not stable because of the pole in −2:

Y(z)/N(z) = (z − 1/2)/((z + 2)(z + 2/3)).

Therefore, the interconnection is not internally stable and an unstable behaviour


would almost surely emerge in practice. ♦

Testing internal stability may be a long process as the BIBO-stability of p²


transfer functions must be checked. There is a very interesting case in which this
process can be simplified thanks to the following result.

Proposition 5.1. Consider an interconnection made with a single negative feed-


back loop. Let Pl(z) = Nl(z)/Dl(z), l = 1, 2, . . . , p, be the transfer function of the l-th block of the interconnection and assume that Nl(z) and Dl(z) are co-prime polynomials for each l = 1, 2, . . . , p. Then the interconnection is internally stable if and only if

1. D̄(z) := Π_l Nl(z) + Π_l Dl(z) is a Schur polynomial (i.e. all its roots are inside the unit circle of the complex plane),

2. deg[D̄(z)] = deg[Π_l Dl(z)].
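Applied to the blocks of Example 5.5, Proposition 5.1 immediately detects the lack of internal stability; a small numerical check (an addition, assuming numpy) is the following:

    import numpy as np

    N1, D1 = np.poly1d([1, 2]), np.poly1d([1, -0.5])                          # F(z)
    N2, D2 = np.poly1d([1, -0.5]), np.poly1d([1, 2]) * np.poly1d([1, -1/3])   # G(z)

    Dbar = N1 * N2 + D1 * D2
    print(Dbar.roots)                        # contains z = -2: D-bar is not Schur
    print(Dbar.order == (D1 * D2).order)     # True: the degree condition holds,
    # but condition 1 fails, so the loop is not internally stable, in agreement
    # with the unstable transfer function from n to y found in Example 5.5.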
Chapter 6

Frequency response

One of the key tools for the design of continuous-time controllers and for the
closed-loop stability analysis are the frequency response and its graphical repre-
sentations, i.e. the Bode plots and the Nyquist plot.
In discrete-time similar results hold. Let G(z) be the transfer function of a
LTI system and let u(k) = u0 cos(kϑ0 + Ψ0 ) be its input. If the system is BIBO-
stable then the corresponding (forced) output tends to become a pure oscillation
with the same frequency ϑ0 of the corresponding input:
y(k) → y0 cos(kϑ0 + χ0) as k → ∞,   (6.1)

where

y0 = |G(e^{jϑ0})| u0,   (6.2)
χ0 = Ψ0 + arg[G(e^{jϑ0})].   (6.3)

Similarly to the continuous-time case we set

M(ϑ0) := |G(e^{jϑ0})|,   ϕ(ϑ0) := arg[G(e^{jϑ0})],

and the function

G(e^{jϑ}) = M(ϑ) e^{jϕ(ϑ)},   (6.4)

is called the frequency response of the system.
If the system is asymptotically stable (instead of only BIBO-stable) then the
result holds for the whole output of the system (with arbitrary initial conditions)
instead of the sole forced output.

Remark 6.1. Since in (6.4), ϑ enters in G(·) via the periodic function ejϑ , the
frequency response is clearly a periodic function of period 2π. Moreover, it is also
symmetric: G(e−jϑ ) = G(ejϑ )∗ . Therefore, we need only to study the function in
the interval [0, π].

The graphs of M(ϑ) and ϕ(ϑ) in the frequency interval [0, π] are the discrete-time Bode plots and are analogous to the continuous-time ones.
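The frequency response is obtained by simply evaluating G(z) on the unit circle; the following sketch (an addition, assuming numpy and the hypothetical system G(z) = (z + 0.5)/(z − 0.8)) computes M(ϑ) and ϕ(ϑ) on a grid of frequencies in [0, π].

    import numpy as np

    num, den = [1.0, 0.5], [1.0, -0.8]        # G(z) = (z + 0.5)/(z - 0.8), BIBO-stable
    theta = np.linspace(0, np.pi, 5)
    G = np.polyval(num, np.exp(1j * theta)) / np.polyval(den, np.exp(1j * theta))

    M, phi = np.abs(G), np.angle(G)           # magnitude and phase of the frequency response
    print(np.round(M, 3))                     # M(0) = G(1) = 1.5/0.2 = 7.5
    print(np.round(phi, 3))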

6.1 Nyquist plot


As in the continuous-time case, the Nyquist plot is the representation in the
complex plane of the parametric curve G(ejϑ ) (parametrized in ϑ):

G(ejϑ ) = <(G(ejϑ )) + j=(G(ejϑ )), ϑ ∈ [0, π]. (6.5)

The Nyquist criterion holds verbatim in the discrete-time case:

Proposition 6.1 (Nyquist Criterion). Consider a discrete-time LTI system cor-


responding to a negative feedback loop where the open-loop transfer function is
G(z). Let

• N : number of times that the Nyquist contour of G(ejϑ ) encircles clockwise


the point −1 + j0;

• P : number of poles of G(z) having magnitude larger than 1 (counted with


multiplicity).

Then the closed loop system is BIBO-stable if and only if N = −P .

The proof of this result may be obtained by invoking the corresponding continuous-time result (the classical Nyquist criterion). In fact, by using the bilinear transform, we can set Ĝ(w) := G(z)|_{z=(1+w)/(1−w)}, so that the discrete Nyquist plot of G(e^{jϑ}) coincides with the classical Nyquist plot of Ĝ(jω).

Corollary 6.1. Under the assumptions of Proposition 6.1, if we have P = 0 the


closed loop system is BIBO-stable if and only if N = 0.
Chapter 7

Control problem and controller design

7.1 The control problem


A control problem is specified by the following ingredients:
1. A nominal model for the system to be controlled and the uncertainty asso-
ciated to it, where applicable;

2. The control variables and their constraints;

3. Performance specifications on the controlled system and its output vari-


ables. Typically these include:

• Stability (asymptotic, BIBO, internal) of the controlled system;


• A regulation and/or tracking task (the central aim of the control design) and the associated specifications on the asymptotic regime;
• Specification on the transitory regime, expressed either in the time or
the frequency domain. These address the required dynamic precision
and trade-off between the promptness and the filtering properties of
the controlled system.

A solution to the control problem is a choice of the control variables, possibly


dependent on the available outputs, such that the performance specifications are
met.
Having already discussed how to guarantee the desired stability properties in
§4.1, the performance specification on the asymptotic and transitory regime will
be the subject of the next sections.

7.1.1 Specifications on the asymptotic regime: Tracking


The most general approach to study and design control systems to track some
target class of reference signals is based on the internal model principle. Consider

a tracking problem for a control interconnection (either continuous or discrete)


as the one described in Fig. 7.1.


Figure 7.1: Control interconnection used in tracking a reference signal R.

Let R be the Z- or L-transform of the reference signal to be tracked. Assume R is a rational function of the form

R = NR/DR.   (7.1)

The transfer function of the direct chain is:

CP = NDC/DDC.   (7.2)
In this way, the transform of the error variable E can be written as:
E = NE/DE
  = R − (CP/(1 + CP)) R
  = (1/(1 + CP)) R
  = (DDC NR)/((NDC + DDC) DR).   (7.3)
In the continuous-time case, asymptotic tracking is achieved if
|y(t) − r(t)| → 0 as t → +∞,   (7.4)

and this is possible if and only if the polynomial DE is Hurwitz. In discrete time,
tracking is achieved if
|y(k) − r(k)| → 0 as k → +∞,   (7.5)
and this is the case if and only if DE has roots with modulus strictly less than
one, i.e. inside the unit disc in the complex plane. For the sake of simplicity,
with a slight abuse of terminology, such a DE is said to be stable. Note that from
(7.3) we have that roots(DE) ⊆ roots(NDC + DDC) ∪ roots(DR), due to possible cancellations. Being NDC + DDC stable, because of the closed-loop stability requirements, it has roots inside the unit circle. Thus, two possibilities remain to be discussed:
1. DR is stable,

2. DR is not stable.

In the first case tracking is already guaranteed, while to address the second it is
convenient to factorize DR in a stable and an unstable part, SR and IR respec-
tively:
DR = SR IR . (7.6)
In order to eliminate the effect of the non-convergent modes associated to IR on
the error E it is necessary to cancel such a factor in (7.3). Assuming NR and DR coprime, the only possible cancellation is between DDC and IR. Summing up, if p_i^R is an unstable root of DR:

• if it is also a root of DDC, then asymptotic tracking is guaranteed;

• otherwise, asymptotic tracking will not be achieved.

This conclusion is a formulation of the so-called internal model principle for LTI
systems in an input-output description, in discrete time.

Proposition 7.1 (Internal model principle). For a stable feedback interconnec-


tion as in Fig. 7.1, asymptotic tracking of a reference signal r(k) with rational Z-transform R(z) = NR/DR, where NR and DR are coprime, is achieved if and only if the unstable roots of DR are also roots of DDC.
Notice that the condition roots(IR) ⊆ roots(DDC) also guarantees asymptotic tracking for all signals whose Z-transform has unstable poles included in the roots of IR. An equivalent result holds for continuous-time systems and their transfer functions.

Example 7.1 (Type of the loop and tracking of steps and ramps). Consider
again the control interconnection of Figure 7.1. In the continuous-time case,
if the reference is a ramp of order l, R(s) = A/s^l, in order to attain asymptotic
tracking it is necessary to add a number of poles in zero ∆ = l − n0 in C(s), with
n0 the number of poles in zero already present in P (s). This is typically proven
using the final value theorem, see e.g. [7]. We now treat the discrete-time case
using Proposition 7.1: consider the ramp of order l

R(z) = A z^{l̃}/(z − 1)^l,

with l̃ < l, so that l − l̃ > 0 is the intrinsic delay of the signal. In order to obtain asymptotic
tracking with zero error, from the internal model principle we have that

if P(z) = NP(z)/(D̃P(z)(z − 1)^{nP})   =⇒   C(z) = NC(z)/(D̃C(z)(z − 1)^{∆}),

with ∆ := l − nP. It is of course critical to choose a D̃C(z) such that NC(z)NP(z) + (z − 1)^l D̃P(z)D̃C(z) is stable. ♦
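A minimal simulation illustrating the internal model principle (an added sketch, assuming numpy; the plant P(z) = 1/(z − 0.5) and the integral controller C(z) = Kc z/(z − 1) are hypothetical choices, with Kc = 0.5 giving a stable loop):

    import numpy as np

    Kc, N = 0.5, 60
    r = np.ones(N)                       # step reference: unstable pole z = 1 in DR
    y = np.zeros(N)
    u_prev, y_state = 0.0, 0.0
    for k in range(N):
        y[k] = y_state
        e = r[k] - y[k]                  # tracking error
        u = u_prev + Kc * e              # integrator: the pole z = 1 is in the loop
        y_state = 0.5 * y_state + u      # plant: y(k+1) = 0.5 y(k) + u(k)
        u_prev = u
    print(np.max(np.abs(r[-5:] - y[-5:])) < 1e-6)   # True: the error converges to zero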

Example 7.2 (Tracking of sinusoidal signals). Given an input u(k) = u0 cos(ϑ0 k+


Ψ0 ) from (6.1) we know that the system’s forced response is still a sinusoidal signal
with amplitude and phase determined by the harmonic response of the system:
y(k) = y0 cos(ϑ0 k + χ0 ).
In order to have asymptotic tracking, we need u0 ≡ y0 and Ψ0 ≡ χ0 .
If Ψ0 ≡ 0, the input Z transform is given by (2.25)
U(z) = u0 z(z − cos ϑ0)/(z² − 2z cos ϑ0 + 1).
By exploiting the trigonometric addition formulas it is easy to derive the Z of
the input also in the case Ψ0 ≠ 0:

u0 cos(ϑ0 k + Ψ0) = u0 cos(ϑ0 k) cos Ψ0 − u0 sin(ϑ0 k) sin Ψ0,

U(z) = (u0 cos(Ψ0) z(z − cos ϑ0) − u0 sin(Ψ0) z sin ϑ0)/(z² − 2z cos ϑ0 + 1).
In both cases the U(z) denominator is the same, with poles p1,2 = cos ϑ0 ± j√(1 − cos²ϑ0) = e^{±jϑ0}.
Here, due to the internal model principle, if P (z) has no pole in e±jϑ0 , C(z)
must be chosen of the form
C(z) = NC(z)/(D̃C(z)(z² − 2z cos ϑ0 + 1)),
and such that
NC (z)NP (z) + D̃C (z)(z 2 − 2z cos ϑ0 + 1)DP (z)
is stable. ♦

7.1.2 Performance specifications on the transient regime


The performance specifications on the transient regime aim to impose the de-
sired promptness and precision in the response of the controlled system to input
changes. As it has been mentioned earlier there are two typical approaches to
the quantification of such qualities of the response:
• in the time domain: the specifications are subsumed in a small number
of relevant parameters characterizing the response of the system to a step
input and are depicted in Fig. 7.2:
– rise time tr: the time the system takes to move from 10% to 90% of the asymptotic value of the response to a step;
– settling time at 5% ta,5% : the time starting from which the response of
the system is bounded within ±5% of the asymptotic value1 ;
1. The settling time is also denoted as ts,5%, and values different from 5% might be used.

– overshoot mp : if in the transient regime the response surpasses tem-


porarily its asymptotic value, it is the maximum overshoot (in percent)
that the response attains.
Intuitively it is possible to think that tr prescribes the desired promptness,
mp the precision and ta,5% the duration of the transient itself.


Figure 7.2: Specifications on the transitory regime in the time domain: mp, tr and ta,5%.

• in the frequency domain: in this approach a desired “shape” for the fre-
quency response W (jω) of the controlled system is specified (Fig. 7.3).
Typically a value of |W (jω)| ' 1 is sought for ω < B1 , if the expected ref-
erence signals have limited bandwidth B1 ∼ B3dB =: B, while |W (jω)| ' 0
is desirable for ω > B2 , a band beyond which only noise is expected. Tol-
erances ∆1 and ∆2 prescribe the areas in which |W (jω)| must remain. In
particular, the maximum absolute value M of the frequency response should
not overly exceed 1.
Empirical relations can be found which link the key parameters of the two
approaches:

Btr ' 0.4 (7.7)


1 + mp ' M (7.8)

Remark 7.1. Other performance specifications, or ways to express these, can also be used, e.g.,
• internal stability;
• Optimality: among all possible inputs or control laws one is chosen so that a given cost functional J(y, u) is minimized. The cost functional is typically either a quadratic function that can be interpreted as an “energy” for the signals, or the time in which a certain task is achieved.

Figure 7.3: Specifications on the transitory regime in the frequency domain: ∆1, ∆2, B1 and B2.

• Robustness: a control law is sought which ensures that a set of performance


specifications is guaranteed for a whole class of systems, or input signals.
The latter is used to model the uncertainty on the system.

7.1.3 Translation of the time-domain performance specifications: a brief summary
Assume that a controller C has to be designed for a plant P (s) to be used in a
feedback interconnection, so that the closed-loop transfer function W (s) satisfies
certain performance indexes. A common approach is to approximate W (s) with
a second-order transfer function
W(s) = k ωn²/(s² + 2ξωn s + ωn²),   (7.9)

where

• ωn is the natural frequency,

• ξ is the damping ratio,

• k the asymptotic gain.

The poles of (7.9) are


p1,2 = −ωn ξ ± √(ξ²ωn² − ωn²) = −ωn (ξ ± √(ξ² − 1)).   (7.10)

These will represent the dominant poles of the actual W (s). If ξ ≥ 1 the two
poles are real (overdamped system), if ξ < 1 they are a complex-conjugate pair
(underdamped system).

The step response for the second order system is given by

w_{−1}(t) = δ_{−1}(t) − (e^{σt}/√(1 − ξ²)) sin(ωd t + ϕ),   (7.11)

where σ := Re(p1,2) = −ωn ξ, ωd := |Im(p1,2)| = ωn√(1 − ξ²) and ϕ is such that sin ϕ = √(1 − ξ²) and cos ϕ = ξ.
If the derivative of (7.11) and its first zero are computed, corresponding to
the first extremal point, the peak time tp and the peak value 1 + mp are found:

tp = π/ωd   (7.12)

mp = e^{−ξπ/√(1−ξ²)}   (7.13)

Furthermore, from the envelopes 1 ± e^{σt} it is possible to compute the settling times at 5% and, where needed, at other values, e.g.:

ta,5% ≃ 3/|σ|   (7.14)

ta,1% ≃ 4.6/|σ|   (7.15)

For the computation of the rise time different approaches exist:

• if the value of ξ is known, the following approximate relation, valid for ξ ∈ [0.5, 0.7], can be used:

tr = 1.8/ωn   (7.16)

• it is alternatively possible to use the relation:

tr B ≃ 0.35 ÷ 0.4   (7.17)

from which one can also compute

ωn = 2πB / √(1 − 2ξ² + √(2 − 4ξ² + 4ξ⁴))   (7.18)

• it is computed using the following:

ωn tr ≃ ϕ/√(1 − ξ²)   (7.19)

Given specifications on tr , ta,5% , mp , the region of the complex plane in which


to place the two poles so that these requirements are satisfied corresponds to

ωn ≥ 1.8/tr
ξ ≥ |ln mp| / √(ln²mp + π²)        (7.20)
|σ| ≥ 3/ta,5%
and it is illustrated in Fig.7.4.

Figure 7.4: Admissible regions for the poles of a second-order system satisfying
a given set of specifications in the time domain.
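The translation of the time-domain specifications into the region of Fig. 7.4 amounts to three inequalities; the sketch below (an addition, assuming numpy and purely illustrative numbers) computes the corresponding bounds.

    import numpy as np

    mp, tr, ta5 = 0.10, 0.5, 2.0          # 10% overshoot, 0.5 s rise time, 2 s settling time

    xi_min = abs(np.log(mp)) / np.sqrt(np.log(mp)**2 + np.pi**2)   # from (7.20)
    wn_min = 1.8 / tr                                              # from (7.16)
    sigma_min = 3.0 / ta5                                          # from (7.14)
    print(round(xi_min, 2), round(wn_min, 2), round(sigma_min, 2)) # 0.59, 3.6, 1.5
    # the dominant poles must lie in the intersection of the three regions of Fig. 7.4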

7.1.4 Control system design


The key steps for designing a digital control system are the following ones:

1. acquiring (or building) a model (either linear or linearized, continuous-time


or discrete-time);

2. acquiring and translating the requirements, like

• the required asymptotic behavior;


• |σ|, ξ, ωn , for a second order system, evaluated through either time-
domain requirements (mp , tr , ta ) or frequency-domain specifications
(B1 , B2 , ∆1 , ∆2 ), as described in the previous section;

3. choosing both the control architecture and the controller structure;

4. choosing the sampling time;

5. designing/tuning of free parameters;

6. performance evaluation (through simulations);

7. choosing sensors, actuators, A/D and D/A converters;

8. engineering evaluation.

The first two points have been already discussed in the previous sections, so in
the following we’ll focus on phases 3, 4 and 5.

Control architecture choice


Four main control structures exist: open-loop control, closed-loop control, closed-
loop control based on two degrees of freedom with a pre-filter, closed-loop control
as just described, with an additional feedforward structure. We want to mention the pros and cons of each of these solutions, by referring to the following architecture characteristics (expressed in the frequency-domain, Fig. 7.5):

• denoting with W the whole scheme transfer function, the tracking property
requires |W (iω)| ' 1 for ω < B1 ;

• for a good disturbance rejection, we need |W (iω)| < ∆2 for ω > B2 ;

• denoting with WNi the i−th disturbance transfer function, we need it to be


as small as possible (internal stability and disturbances rejection);

• we need system robustness: the most useful parameter for quantifying ro-
bustness is the sensitivity function:
S_P^W = lim_{∆P→0} (∆W/W)/(∆P/P) = ∂ ln W/∂ ln P;   (7.21)

Figure 7.5: Desired characteristics in the frequency-domain.

• for avoiding actuator overloading, if A = U/R, we'll look for A “small enough”.

1. Open-loop control (Fig. 7.6). This is the simplest control architecture, which we have to resort to whenever an output measurement is not available, so that the output can't be used as a feedback signal. In order to satisfy the requirements listed above, it is necessary to have

• C ≃ 1/P in the frequency-range 0 − B1;
• C ≪ 1 in the frequency-range B2 − +∞.

What is impossible to modify are the three relations WN1 ≃ P, WN2 ≃ 1 and S_P^W = 1, so in particular robustness is prevented. Moreover, A = C ≃ 1/P risks overloading the actuator whenever P is “small” (which means that some zeros of P lie in the bandwidth of interest).


Figure 7.6: Open-loop control architecture (with possible disturbances).

2. Closed-loop control (feedback) (Fig. 7.7). This makes the various transfer
   functions given by, respectively, W = CP/(1+CP), W_N1 = P/(1+CP), W_N2 = 1/(1+CP)
   and S_P^W = S_C^W = 1/(1+CP). Each of these transfer functions has to be BIBO-stable
   if internal stability of the whole system is required. The tracking property
   translates into |C| ≫ 1/|P|, which means |CP| ≫ 1, in the frequency range
   0 − B1, besides |C||P| ≪ 1 in the frequency range B2 − +∞. This way,
   for ω < B1 both W_N1, W_N2 ≃ 0 and S_P^W, S_C^W ≃ 0 hold true. The actuator
   transfer function A = C/(1+CP) has to be carefully evaluated, since ω < B1
   implies |A| ≃ 1/|P|, which in case of P "small enough" could overload the
   actuator, while ω > B2 implies |A| ≃ |C|. Summarizing, closed-loop
   control leads to significant improvements w.r.t. the open-loop structure.
   Furthermore, C can embed internal model components, so leading to an
   asymptotically exact tracking of the reference signal.²


Figure 7.7: Closed-loop control architecture (with possible disturbances).

3. Closed-loop control based on two degrees of freedom with a pre-filter (Fig. 7.8).
   The dynamic control action is split up between L, C, H. The various
   transfer functions are given by W = PCL/(1+HPC), W_N1 = P/(1+HPC) and
   W_N2 = S_P^W = 1/(1+HPC), respectively, while the actuator transfer function
   is given by A = LC/(1+HPC). This way, L allows the freedom of suitably split-
   ting the control tasks in such a way that A can remain small enough, C
   is devoted to endowing internal model components, while the H's task is
   usually that of obtaining stabilization. A careful design of the block L is
   needed, as S_L^W = 1.


Figure 7.8: Closed-loop control based on two degrees of freedom with a pre-filter
(with possible disturbances).

4. Closed-loop control based on two degrees of freedom with a pre-filter and


an additional feedforward (Fig. 7.9). Let’s notice that if e(k) (the error
which appears to be the input of C) is zero, then the scheme works like
an open-loop control through the feedforward block F , while the feedback
takes place whenever “some things are going bad” (disturbances presence,
unsatisfying output behavior, etc). Intuitively:

• F takes care of the nominal behavior, if P is assumed to be known
  (F ≃ 1/P for ω < B1, for the tracking property);
² However, in case of unstable plants P, some limitations arise, especially w.r.t. disturbance
rejection (cf. Bode's integrals).

• H takes care of the scheme stabilization;


• C is needed for endowing the internal model components;
• L is needed for pre-filtering reasons, in such a way that L ' 1 for
ω < B1 .

   The various transfer functions are given by W = (F + LC)P/(1 + CPH); W_N1 and W_N2
   are exactly the same as in the previous case without feedforward, respectively,
   while the actuator transfer function splits into two components, the first
   one due to feedforward, the second one due to feedback: A = F + LC/(1 + CPH) =
   A_F + A_R. Finally, a careful calibration of all open-loop components is
   needed for obtaining |F||P| ≃ 1.


Figure 7.9: Closed-loop control based on two degrees of freedom with a pre-filter
and an additional feedforward (with possible disturbances).

Choosing the sampling time

This phase is typical of digital control for continuous-time plants. The sampl-
ing time T has both to satisfy some constraints and to be chosen based on some
criteria, which we deal with in the sequel.

Main constraint. Suppose a µP-based system is used, and let Tc denote the time
required by the control algorithm for performing computations and other functions,
such as those listed below:

• input signal test (e(k) ∈ [e_min, e_max]);

• digital filtering (e.g. “smoothing” in derivatives computation);

• signal conditioning (BIAS elimination, values adaptation);

• estimation (or extraction) of the signal values (if only ỹ(k) = f (y(k)) is
available, to compute f −1 (ỹ(k)) is needed);

• I/O interfaces management (log’s creation, data storing, data visualization);

• possible µP sharing (multitasking) in case of more than one process to
  control (if generating the output to deliver to the i-th process requires time
  T_i, then T ≥ Σ_i T_i).

The obvious requirement is T ≥ Tc .

Criteria for the choice of T. A detailed list follows, dealing with the various
criteria to be taken into account when choosing the sampling time T:

• Bounding variability (roughness) of the control signal. If at time kT the
  input ũ(k) is generated by the controller, the system behaves like an open-
  loop system until the next sample is elaborated. Given the scheme depicted
  in Fig. 7.10, if T is too large (w.r.t. the P(s), N(s), R(s) bandwidths), or
  in case of instability of some of these transfer functions,
  then ẽ(k + 1) could be too large as well, so u(k + 1) could be too, and
  (possibly) very different from u(k). Several troubles could then arise:

– input signal saturation;


– arising of mechanical resonances;
– arising of limit cycles due to non-linearities.

These are often critical problems in many applications (as an example,


biomedical ones).

Figure 7.10: Architecture of a digital control system.

• Random disturbance rejection. Let's consider both the continuous-time and
  discrete-time schemes in Fig. 7.11, and let w be a white noise with zero mean
  and unit variance. Assume that C(s) makes the closed-loop
  transfer function an approximate version of a second-order system endowed
  with a pair of dominant poles associated with a damping factor ξ = 0.7 and
  a bandwidth ω_b = 2 rad/s. Assume that the corresponding digital controller
  C(z) has been designed for reproducing the same closed-loop properties.
  As shown in Fig. 7.12, it holds that

  – for Ω/ω_b < 10, with Ω the sampling frequency, the noise sensitivity is
    worse in the discrete system;

  – for Ω/ω_b > 20, the index E[|y(k)|²]/E[|y(t)|²] is lower than 1.2, which implies a
    reduced noise sensitivity in the discrete system.

  As a rule of thumb for obtaining satisfactory rejection of noise inputs, we should
  assume Ω > 20 ω_b.


Figure 7.11: Comparison of the random disturbance rejection obtained in both a continuous and a discrete system.

Figure 7.12: Behavior of E[|y(k)|²]/E[|y(t)|²] as Ω/ω_b varies.

• System dynamics and related delays. Assume that at a certain time instant
  t_g either a step reference signal appears or a disturbance w starts
  acting. Let (k − 1)T and kT be two subsequent sample acquisition times.
  The controller "sees" the step signal (or the disturbance) only at time kT
  (Fig. 7.13). So a delay given by ∆ = kT − t_g is unavoidable and, in the
  worst case (the step signal arises just after the time instant (k − 1)T
  has passed), ∆ = T. We already saw that the system "quickness" depends
  on the rise time t_r, and that delay phenomena have a negative impact on
  the system transient performance. In order to overcome that, we usually
  assume
        T ≤ t_r / 10,                                                    (7.22)

  or, equivalently, Ω ≥ 2π·10/t_r.

  Example 7.3 (first order system). Let G(s) = k/(1 + sτ), with τ ∈ R+ the
  system time constant. We know that t_r ≃ 2.3τ, so in order to obtain
  t_r ≥ 10T it should hold τ/T ≥ 4.3. By introducing the cut-off frequency


Figure 7.13: Delay caused by the sampling-time T on the input step (δ−1 (t − tg ))
response.

  ω_t = 1/τ, in the frequency domain the previous inequality translates into
  Ω/ω_t = 2πτ/T ≥ 25 ÷ 30.                                               ♦
  Example 7.4 (second order system). Let G(s) = ω_n²/(s² + 2ξω_n s + ω_n²). The rise
  time can be estimated by a formula equivalent to those introduced in §7.1.3,

        t_r ≃ e^{φ/tan(φ)} / ω_n,                                        (7.23)

  where we recall that cos(φ) = ξ. If our target is to obtain t_r ≥ 10T, from
  (7.23) it follows
        ω_n T ≤ e^{φ/tan(φ)} / 10.

  For instance, ξ = 1/√2 implies φ = π/4 and tan φ = 1. From the above
  condition we have: ω_n T ≤ 0.2, Ω/ω_n ≥ 32.                            ♦

• Delay generated by the interpolator. As briefly mentioned in §5.1, the zero-
  order interpolator leads to a "delay" of T/2 (as an order of magnitude), as
  depicted in Fig. 7.14.


Figure 7.14: Delay induced by ZOH on the output signal. The signal y(t) - blue -
represents the non-delayed signal, while ỹ(t) - red dashed - is the approximation
of the delayed signal due to the holder.

  Referring to Fig. 7.10, the plant transfer function is expressed
  as P̃(s) ≃ e^{−sT/2} P(s). This implies a clockwise rotation of the Nyquist
  plot and, consequently, a decrease in the phase margin given by

        ∆φ = ω_a / (2/T),                                                (7.24)

  with ω_a the crossover frequency of the open-loop transfer function. If φ_M de-
  notes the maximum admissible decrease in the phase margin, the following
  inequality has to hold

        ω_a / (2/T) ≤ φ_M    ⟹    Ω ≥ π ω_a / φ_M.                       (7.25)

• Anti-aliasing filter effects. This issue has already been addressed in Chapter
  3, referring in particular to second-order Butterworth filters. In order
  to obtain an attenuation a at frequency Ω/2 with damping factor ξ, an
  estimate of the phase margin variation φ is given by (3.18). Therefore, the
  presence of an anti-aliasing filter adds to the interpolator delay a
  further delay, generated by the anti-aliasing filter itself, so that, denoting as
  usual by φ_M the maximum admissible decrease in the phase margin, we need
  to impose

        φ_M ≥ ω_a/(2/T) + 4ξ√a · ω_a/Ω,                                  (7.26)

  from which, by T = 2π/Ω, and solving in the unknown Ω,

        Ω ≥ ω_a [π + 4ξ√a] / φ_M,                                        (7.27)

  where ω_a is the crossover frequency of the open-loop transfer function and
  φ_M is the maximum admissible decrease for the phase margin.
• Parameter variation sensitivity. Given the first-order system G(s) = k/(1 + sτ),
  its discretized version assumes the form G(z) = b/(z − a), with a = e^{−T/τ} and
  b = k(1 − e^{−T/τ}). If the knowledge of τ is not so precise, or if τ represents
  a varying parameter, for ∆τ small enough it holds

        ∆a ≃ (da/dτ) ∆τ    ⟹    ∆a/a ≃ (T/τ)(∆τ/τ).                      (7.28)

  Hence the sensitivity to parameter variations is linear w.r.t. T and, for T large
  enough, if the same C(z) is maintained the system behavior becomes worse.
  (A short numerical sketch combining these sampling-time criteria is given after
  this list.)
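The following minimal Python sketch collects the main bounds discussed above, namely the rise-time rule (7.22), the interpolator-delay bound (7.25) and the anti-aliasing bound (7.27), and reports the resulting parameter-sensitivity factor T/τ of (7.28). All numerical values are illustrative assumptions, not taken from these notes.

```python
# Minimal sketch: pick Omega as the most demanding of the sampling-time criteria.
import math

t_r     = 0.5                   # desired rise time [s]
omega_a = 2.0 / t_r             # crossover frequency estimate, cf. (8.49)
phi_M   = math.radians(5)       # max admissible phase-margin loss [rad]
xi, a   = 0.7, 0.1              # anti-aliasing filter damping and attenuation at Omega/2
tau     = 1.0                   # plant time constant, for the sensitivity check

Omega_rise  = 2 * math.pi * 10 / t_r                                 # from T <= t_r/10
Omega_zoh   = math.pi * omega_a / phi_M                              # Eq. (7.25)
Omega_alias = omega_a * (math.pi + 4 * xi * math.sqrt(a)) / phi_M    # Eq. (7.27)

Omega = max(Omega_rise, Omega_zoh, Omega_alias)
T = 2 * math.pi / Omega
print(f"Omega >= {Omega:.1f} rad/s  ->  T <= {T*1e3:.1f} ms")
print(f"pole sensitivity factor T/tau = {T/tau:.3f}  (Delta_a/a ~ (T/tau)*Delta_tau/tau)")
```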

We are going to conclude this section by listing in Table 7.1 some practical
considerations regarding typical sampling-time choices in relation to the physical
system of interest.

to-be-controlled variables                      order of magnitude of T
------------------------------------------------------------------------
tank liquid level, temperature                  seconds, fractions of a second
pressure, flow rate                             tens of ms
voltage, electric current                       ∼ ms
power electronics                               tens of µs
solid-state electronics, QM applications        ∼ ns

Table 7.1: Some examples of the order of magnitude of the sampling time T.
Chapter 8

Digital controller synthesis: Emulation methods

8.1 Introduction
We are going to deal with the problem of designing a discrete-time controller
C(z) for a given continuous-time plant P (s), as shown in Fig. 8.1.

Figure 8.1: C(z) synthesis.

Various methodologies exist for designing a suitable C(z):


• Emulation methods. An approximate method based on the following
  procedure:

  – P̃(s) = e^{−sT/2} P(s) is considered, to take care of the delay introduced
    by the ZOH;
  – C(s) is thereafter designed, by continuous-time reasoning;
  – C(s) is "translated" into the z domain, C̃(z) ≃ C(s).

• Synthesis via bilinear transformation. It requires the following
  steps:

  – P̃(z) = (1 − z⁻¹) Z[ S[ L⁻¹[ P(s)/s ] ]; T ] is first of all evaluated;

  – the Tustin (bilinear) transformation is adopted:¹
    P̃₁(w) := P̃(z)|_{z = (1 + wT/2)/(1 − wT/2)};

¹ The "scaling factor" 2/T has no relevance from a mathematical viewpoint, but the reason
why we introduced it will be clear in the sequel.

  – C̃₁(w) is designed in the continuous-time domain;

  – the inverse transformation is finally used: C̃(z) = C̃₁(w)|_{w = (2/T)(z−1)/(z+1)}.

  This method allows one to obtain an exact pole placement, as no approxima-
  tions have been introduced.

• Direct synthesis in the discrete-time domain. In this case the re-
  quired steps are the following ones:

  – a domain change is first done: P̃(z) = (1 − z⁻¹) Z[ S[ L⁻¹[ P(s)/s ] ]; T ];
  – C(z) is designed directly in the discrete domain, by means of either a
    direct synthesis formula or a diophantine equation.

In the following sections both the first and the third method will be discussed in
detail.

8.2 Digital conversion of analogic C(s)’s design


(emulation)
Given a desired C(s), we want to implement it in the z domain as C̃(z). This will
need some approximations due to the finite bandwidth related to both sampling
and ZOH’s delay.

1. First group of methods. A transfer function implements a system of differen-
   tial/integral equations, so we need to approximate the time derivative d/dt
   (in the s domain) with a suitable operator in the z domain. Alternatively,
   we can resort to the pole mapping z = e^{sT}. Three main methods exist:

   • Euler forward method (EF).

         dx(t)/dt ≃ (x(t + T) − x(t)) / T,                               (8.1)

     with T the sampling time. In the transform domain it becomes

         s ≃ (z − 1)/T.                                                  (8.2)

     Moreover, from z = e^{sT} it follows that we are actually resorting to the
     approximation
         e^{sT} ≃ 1 + sT;                                                (8.3)
   • Euler backward method (EB).

         dx(t)/dt ≃ (x(t) − x(t − T)) / T,                               (8.4)

     which corresponds to
         s ≃ (1 − z⁻¹)/T,                                                (8.5)

     and leads to the approximation
         e^{sT} ≃ 1/(1 − sT);                                            (8.6)
   • Tustin's method. It corresponds to a trapezoidal approximation of the
     time derivative:

         d/dt [ (x(t + T) + x(t)) / 2 ] ≃ (x(t + T) − x(t)) / T.         (8.7)

     This way z = e^{sT} is approximated by

         z = e^{sT} ≃ (1 + sT/2) / (1 − sT/2),                           (8.8)

     from which
         s = (2/T)(z − 1)/(z + 1).                                       (8.9)

     Let's remark that Tustin's transformation is exactly the bilinear
     one, except for a "scaling" factor.
   After having chosen one of the previous approximations, we can compute

         C̃(z) = C(s)|_{s given by (8.2), (8.5) or (8.9)}.                (8.10)

   Particular care has to be devoted to stability, and therefore to the mapping
   of the left-hand half plane ℜ(s) < 0 in the various cases, as clearly explained
   in Fig. 8.2. It is self-evident that the EF method is not recommended, as it
   could render the discretized transfer function unstable (unless T is really very
   "small"). A short comparison of the three substitutions on a simple C(s) is
   sketched below.
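As a quick check of the three substitutions, the following minimal sketch uses scipy.signal.cont2discrete on a hypothetical first-order controller C(s) = (s + 1)/(s + 10); both C(s) and the sampling time are illustrative assumptions, not taken from these notes.

```python
# Minimal sketch: EF, EB and Tustin discretizations of a hypothetical C(s).
from scipy import signal
import numpy as np

num, den = [1.0, 1.0], [1.0, 10.0]       # C(s) = (s + 1)/(s + 10)
T = 0.05                                  # sampling time [s]

for method in ("euler", "backward_diff", "bilinear"):    # EF, EB, Tustin
    numd, dend, _ = signal.cont2discrete((num, den), T, method=method)
    print(f"{method:14s} num = {np.ravel(numd)}  den = {dend}  poles = {np.roots(dend)}")
```

For this stable C(s) all three poles stay inside the unit circle; with faster continuous poles or larger T the EF pole can leave the unit disc, in agreement with the mapping of Fig. 8.2.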
2. Matched Pole-Zero method (MPZ). Knowing the exact pole map-
   ping, we could try to resort to it directly, and use approximations only
   for the zeros mapping. This is the philosophy underlying the MPZ
   method: given C(s) we can:

   • Evaluate poles and zeros of C(s) and express it in Evans' form

         C(s) = K_E ∏_{i=1}^{m} (s − ẑ_i) / [ s^ν ∏_{i=1}^{n} (s − p̂_i) ],        (8.11)

     where the pole in the origin has been highlighted for reasons which
     will be clarified later.


Figure 8.2: Left-hand half plane mapping through the three methods.

   • map the computed poles and zeros through z = e^{sT}, so obtaining C₁(z):

         C₁(z) := ∏_{i=1}^{m} (z − e^{ẑ_i T}) / [ (z − 1)^ν ∏_{i=1}^{n} (z − e^{p̂_i T}) ].     (8.12)

   • equip C₁(z) with l further zeros at −1, to diminish the delay:

         C₂(z) = C₁(z) (z + 1)^l.                                        (8.13)

From a practical perspective, if r := n + ν − m is the relative degree of


C1 (z), we can assume l = r, if no delay is desired, otherwise l = r − 1,
if a delay step in the controller is needed, for allowing the µP to have
time enough for implementing the control algorithm (l < r − 1 is
almost never chosen). It’s worthwhile to remark that zeros at z = −1
are needed whenever the relative degree of C1 (z) (which is the same of
C(s)) is greater than zero. This way C(s) is endowed with “zeros at
infinity”, i.e. some zeros at infinite frequency. Zeros at z = −1 are the
discrete-time counterpart of that, as the maximum discrete frequency
is given by ejπ = −1.
• evaluate KD in such a way that C(s) and C̃(z) have the same zero-
frequency gain
C̃(z) = KD C2 (z), (8.14)
which requires to distinguish between two cases:

     ν = 0. This implies that C(s) is devoid of poles at s = 0, so that
     C₂(z) has no poles at z = 1, and we simply have to equate the
     zero-frequency gains (C(0) = C̃(1)):

         C(0) = K_E ∏_{i=1}^{m} (−ẑ_i) / ∏_{i=1}^{n} (−p̂_i)
              = C̃(1) = K_D 2^l ∏_{i=1}^{m} (1 − e^{ẑ_i T}) / ∏_{i=1}^{n} (1 − e^{p̂_i T});      (8.15)

     from which

         K_D = K_E [ ∏_{i=1}^{n} (1 − e^{p̂_i T}) ∏_{i=1}^{m} (−ẑ_i) ] / [ 2^l ∏_{i=1}^{m} (1 − e^{ẑ_i T}) ∏_{i=1}^{n} (−p̂_i) ].     (8.16)

     ν ≠ 0. This happens whenever C(s) is endowed with a pole at s = 0
     (and therefore C₂(z) is endowed with a pole at z = 1 with the same
     multiplicity ν). Since C(0) and C₂(1) are no longer well-defined,
     the matching of the zero-frequency gains refers only to the poles different
     from 0 and from 1 of C(s) and C₂(z), respectively. To take into account
     the disregarded pole (with the corresponding multiplicity ν), the gain needs
     to be multiplied by T^ν, so that:

         K_D = K_E T^ν [ ∏_{i=1}^{n} (1 − e^{p̂_i T}) ∏_{i=1}^{m} (−ẑ_i) ] / [ 2^l ∏_{i=1}^{m} (1 − e^{ẑ_i T}) ∏_{i=1}^{n} (−p̂_i) ].   (8.17)

Remark 8.1. The following comments are in order.

(i) (8.17) holds true even in the case ν = 0 (the separate analysis has been
    made only for clarity's sake).

(ii) Multiplying by T^ν is related to the rules dealing with the system type. For
     instance, consider a standard unity-feedback scheme in which the controller
     C(s) is followed by the plant P(s),
     where C(s) has a simple pole at s = 0 and Bode gain K_B, while the plant
     P(s) is devoid of poles at s = 0 and has unit Bode gain. Assume the closed-
     loop is BIBO-stable and driven by the reference signal r(t) = t · 1(t) (the

unit ramp with Laplace transform R(s) = 1/s2 ). By being 1 the system
type, the steady-state error is the inverse of KB , the open-loop gain (the
Bode gain of C(s)P (s)), i.e.

1
er = . (8.18)
KB

The corresponding discrete-time version is represented in the next figure

r̃(k) - k - ỹ(k)
+ C̃(z) - P̃ (z) -

6

and, again, assume the BIBO-stabilty of the closed-loop system and let P̃ (z)
be the sample/hold version of P (s), so that P̃ (z) is devoid of poles at z = 1
and P̃ (1) = 1. The reference signal sampled version is r̃(k) = T k · δ−1 (k),
so that
z
R̃(z) = T . (8.19)
(z − 1)2
Therefore, if C̃(z) has a simple pole at z = 1 and C̃0 (z) := (z − 1)C̃(z) is
KB0 −valued at z = 1, i.e. C̃0 (1) = KB0 , from the final value theorem we get

T
ẽr = (8.20)
KB0
1
which is equal to the steady-state error er = KB
of the continuous-time case
if we assume
KB0 = T KB (8.21)
in perfect agreement with (8.17). A similar reasoning should be developed
in case of C(s) with a pole at s = 0 with multiplicity ν and with a canonical
reference signal whose transform is R(s) = 1/sν+1 .

Example 8.1. We are now going to show two applications of the MPZ
method.

• Let C(s) = k_c (s + a)/(s + b). Then

        C₁(z) = k_d (z − e^{−aT})/(z − e^{−bT}) = C₂(z),

  with k_d such that

        lim_{s→0} C(s) = k_c a/b = k_d (1 − e^{−aT})/(1 − e^{−bT}) = lim_{z→1} C₁(z)
        ⇒   k_d = k_c (a/b)(1 − e^{−bT})/(1 − e^{−aT}).

• Let C(s) = k_c (s + a)/(s(s + b)). If we want zero relative degree,

        C₂(z) = k_d (z + 1)(z − e^{−aT}) / [ (z − 1)(z − e^{−bT}) ].

  k_d is evaluated through

        k_d = k_c (T/2)(a/b)(1 − e^{−bT})/(1 − e^{−aT}).
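A minimal numerical sketch of the first case follows; the values k_c = 2, a = 1, b = 10, T = 0.1 are illustrative assumptions, not taken from these notes.

```python
# Minimal sketch of the MPZ mapping for C(s) = kc (s+a)/(s+b); relative degree is
# already zero, so l = 0 and only the gain matching of (8.16) is needed.
import math

def mpz_first_order(kc, a, b, T):
    """Return (kd, zero, pole) of C~(z) = kd (z - e^{-aT})/(z - e^{-bT})."""
    zero, pole = math.exp(-a * T), math.exp(-b * T)
    # match the zero-frequency gains: C(0) = kc*a/b  and  C~(1) = kd (1-zero)/(1-pole)
    kd = kc * (a / b) * (1 - pole) / (1 - zero)
    return kd, zero, pole

print(mpz_first_order(kc=2.0, a=1.0, b=10.0, T=0.1))
# kd ~ 1.328: indeed kd*(1-zero)/(1-pole) = 0.2 = C(0)
```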

Example 8.2. Let C(s) be a proper rational transfer function and, for
simplicity's sake, assume that C(s) is devoid of zeros at s = 0. Let C_t(z)
be the discrete version of C(s) obtained via Tustin's method, and let
C_m(z) be the discrete version of C(s) obtained via the MPZ method (by
choosing zero relative degree). Denote by p_{it}, i = 1, ..., n₁, and by z_{it},
i = 1, ..., n₁, the poles and the zeros, respectively, of C_t(z), and by p_{im},
i = 1, ..., n₁, and z_{im}, i = 1, ..., n₁, the poles and the zeros, respectively,
of C_m(z). It is required to prove that

        lim_{T→0} (p_{it} − p_{im})/T = 0,   ∀ i = 1, ..., n₁,           (8.22)

and
        lim_{T→0} (z_{it} − z_{im})/T = 0,   ∀ i = 1, ..., n₁.           (8.23)

Moreover, if K_{Et} and K_{Em} denote the Evans' gains of C_t(z) and C_m(z),
respectively, we want to prove that also

        lim_{T→0} (K_{Et} − K_{Em})/K_{Et} = 0                           (8.24)

holds true.
Proof. Express C(s) in Evans form, by pointing out the (possible) pole at
s = 0 (obviously, ν = 0 in case that pole is absent):

        C(s) = K_E ∏_{i=1}^{m} (s − z_i) / [ s^ν ∏_{i=1}^{n} (s − p_i) ].         (8.25)

By evaluating
        C_t(z) = C( (2/T)(z − 1)/(z + 1) ),                              (8.26)

after simple computations we get

   C_t(z) = K_E (T/2)^{n+ν−m} [ ∏_{i=1}^{m} (1 − z_i T/2) / ∏_{i=1}^{n} (1 − p_i T/2) ]
            · [ ∏_{i=1}^{m} (z − z_{it}) ] (z + 1)^{n+ν−m} / { [ ∏_{i=1}^{n} (z − p_{it}) ] (z − 1)^ν },   (8.27)

where the prefactor multiplying the rational part is K_{Et}, and

        z_{it} := (1 + z_i T/2)/(1 − z_i T/2),   i = 1, ..., m,          (8.28)
and
        p_{it} := (1 + p_i T/2)/(1 − p_i T/2),   i = 1, ..., n.          (8.29)

From (8.27) it easily follows that

        z_{it} = −1,   i = m + 1, ..., n₁ = n + ν,                       (8.30)
and
        p_{it} = 1,    i = n + 1, ..., n₁ = n + ν,                       (8.31)

and, moreover,

        K_{Et} = K_E (T/2)^{n+ν−m} ∏_{i=1}^{m} (1 − z_i T/2) / ∏_{i=1}^{n} (1 − p_i T/2).   (8.32)

Let's now compute C_m(z). By the above procedure, and taking into account
that l = r = n + ν − m is needed because of the zero relative degree
requirement, we obtain

   C_m(z) = K_{Em} [ ∏_{i=1}^{m} (z − z_{im}) ] (z + 1)^{n+ν−m} / { [ ∏_{i=1}^{n} (z − p_{im}) ] (z − 1)^ν },   (8.33)

where
        z_{im} := e^{z_i T},   i = 1, ..., m,                            (8.34)
and
        p_{im} := e^{p_i T},   i = 1, ..., n.                            (8.35)

So K_{Em}, as expected from (8.17), is given by

   K_{Em} = K_E T^ν ∏_{i=1}^{n} (1 − e^{p_i T}) ∏_{i=1}^{m} (−z_i) / [ 2^{n+ν−m} ∏_{i=1}^{m} (1 − e^{z_i T}) ∏_{i=1}^{n} (−p_i) ].   (8.36)

Moreover, from (8.33) it clearly holds

        z_{im} = −1,   i = m + 1, ..., n₁ = n + ν,                       (8.37)
and
        p_{im} = 1,    i = n + 1, ..., n₁ = n + ν.                       (8.38)

We are now in a position to prove what we claimed before. Let's start
with (8.22). It's obvious for i = n + 1, ..., n₁ = n + ν, so only its validity
for i = 1, ..., n has to be verified. In fact, for any i = 1, ..., n, we have:

   lim_{T→0} (p_{it} − p_{im})/T
     = lim_{T→0} [ (1 + p_i T/2)/(1 − p_i T/2) − e^{p_i T} ] / T
     = lim_{T→0} [ 1 + p_i T/2 − e^{p_i T}(1 − p_i T/2) ] / [ T(1 − p_i T/2) ]
     = lim_{T→0} [ p_i/2 − p_i e^{p_i T}(1 − p_i T/2) − e^{p_i T}(−p_i/2) ] / (1 − p_i T) = 0,   (8.39)

where, in the last-but-one step, we resorted to de l'Hôpital's rule.
Furthermore, (8.23) easily follows from similar reasonings.
Only (8.24) remains to be proved, and that follows from

   lim_{T→0} (K_{Et} − K_{Em})/K_{Et}
     = lim_{T→0} [ K_E (T/2)^{n+ν−m} / K_{Et} ]
       · [ ∏_{i=1}^{m} (1 − z_i T/2) / ∏_{i=1}^{n} (1 − p_i T/2)
           − T^{m−n} ∏_{i=1}^{n} (e^{p_i T} − 1)/p_i / ∏_{i=1}^{m} (e^{z_i T} − 1)/z_i ]
     = lim_{T→0} [ ∏_{i=1}^{n} (1 − p_i T/2) / ∏_{i=1}^{m} (1 − z_i T/2) ]
       · [ ∏_{i=1}^{m} (1 − z_i T/2) / ∏_{i=1}^{n} (1 − p_i T/2)
           − ∏_{i=1}^{n} (e^{p_i T} − 1)/(p_i T) / ∏_{i=1}^{m} (e^{z_i T} − 1)/(z_i T) ] = 0.
Let’s remark that, as T vanishes (tends to zero), all poles/zeros, both of


Ct (z) and of Cm (z) (except for the zeros at z = −1 which are fixed and

independent of T ), tend to 1. So, it isn’t surprising that as T vanishes


poles/zeros of Ct (z) tend to become equal to those of Cm (z). However, our
result is more powerful. In fact, while the differences pit − 1 and pim − 1
vanish as T does (except for the zeros at z = −1), and the differences zit − 1
and zim −1 do the same, the differences pit −pim and zit −zim vanish quicker
than T .
Similar considerations could be developed about the gains KEt and KEm . ♦

Remark 8.2. Let's list some important remarks:

• The MPZ method is equivalent to the approximate Tustin's one from
  a performance perspective, while both EF and EB are worse;

• depending on the value assumed by the sampling frequency Ω, we usually
  observe that
  – for Ω ≤ 5ω_b, with ω_b the system bandwidth, instability often appears;
  – for Ω ≤ 10ω_b, the system is significantly underdamped;
  – for Ω ≥ 20ω_b, good performances are usually obtained;
  – for Ω ≥ 30ω_b, the performances become almost those of the continuous
    system;

• The most significant contribution to the errors is due to the ZOH. This
  contribution can be included in the design procedure by resorting to
  a Padé approximation of e^{−sT/2}, e.g. an approximation of degree (0, 1):
  G_ZOH(s) = (2/T)/(s + 2/T). This allows us to design a controller for P̃(s) =
  G_ZOH(s)P(s), instead of P(s).

3. Step response invariance method. We already talked about it when
   dealing with translating the plant P(s) from the continuous case to the
   discrete one (§5.1). The same method applies for translating C(s), even
   though some approximations arise.²
   Good results are guaranteed only for piecewise constant signals
   whose switching instants are all sampling instants. In this case it holds

        C̃(z) = (1 − z⁻¹) Z[ S[ L⁻¹[ C(s)/s ] ]; T ].                     (8.40)

   So, as a general statement, this method leads to worse results w.r.t. ei-
   ther Tustin or MPZ. A better method would be that of imposing a ramp
   response invariance; however this method would be similar, at a per-
   formance level, to Tustin's one, which is simpler to implement.

   ² Let's recall that the hold/sample composition (which acts as the identity on discrete
   signals) differs from the sample/hold composition (which is not the identity on continuous
   signals).

8.3 P.I.D. controllers

P.I.D.'s are controllers endowed with three actions (proportional, integral and
derivative) which can be implemented both in an analog structure and in a
digital one. These widespread regulators (they are actually used in ∼ 80%
of industrial control applications, thanks to both their standard structure and
the existence of simple "tuning" procedures) can be built mainly either in the
"parallel" (non-interacting) structure or in their "series" version.
1. Parallel implementation (Fig. 8.3). The model is expressed by

        C_PID,p(s) = K_P + K_I/s + K_D s = K_P ( 1 + 1/(sT_I) + sT_D ),          (8.41)

   with K_P the proportional gain and

        T_D := K_D/K_P,   advance (derivative) time,                     (8.42)
        T_I := K_P/K_I,   integral action time.                          (8.43)

Figure 8.3: Parallel implementation of a P.I.D.

   Since (8.41) is an improper transfer function, because of the term sT_D,
   we need to introduce an additional pole at high frequency, both for physical
   realizability and for noise filtering:

        C′_PID,p(s) = K_P ( 1 + 1/(sT_I) + sT_D/(1 + sT_L) )
                    = K_P [ s²(T_I T_L + T_I T_D) + s(T_I + T_L) + 1 ] / [ sT_I (1 + sT_L) ],
                      T_D/10 ≤ T_L ≤ T_D/3.                              (8.44)

2. Series implementation (Fig. 8.4). The controller transfer function can be
   rewritten in the form

        C′_PID,s(s) = K_P (sT_I + 1)(sT_D + 1) / [ sT_I (1 + sT_L) ],   T_D/10 ≤ T_L ≤ T_D/3.   (8.45)

   This implementation is often preferred, as the expressions of the zeros, z₁ = −1/T_I
   and z₂ = −1/T_D, are more convenient when dealing with Bode plots.


Figure 8.4: Series implementation of a P.I.D.

Now we have to extend the previous considerations to the discrete-time case,
where discrete-time versions of the three actions are often considered (a minimal
implementation sketch is given after this list).

• Proportional action: K_P → K_P^d.

• Derivative action: by the EB method, C_D(s) := U_D(s)/E(s) = sT_D/(1 + sT_L)
  becomes, after some computations,

        C_D(z) = [ T_D/(T + T_L) ] (1 − z⁻¹) / ( 1 − [T_L/(T + T_L)] z⁻¹ ),       (8.46)

  which in the time domain can be rewritten as

        u_D(k) = [T_L/(T + T_L)] u_D(k − 1) + [T_D/(T + T_L)] (e(k) − e(k − 1)).  (8.47)

• Integral action: by the EB again, we have

        u_I(k + 1) = u_I(k) + (T/T_I) e(k + 1),                          (8.48)

  which means 1/s ≃ Tz/(z − 1), i.e. the Z-transform of the step.
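Here is a minimal sketch, with purely illustrative gains and times (not taken from these notes), of how (8.47) and (8.48) can be combined into a single digital P.I.D. update in the parallel form (8.41); it is only an illustration of the difference equations, not a prescribed implementation.

```python
# Minimal sketch of a discrete P.I.D. built from Eqs. (8.47)-(8.48).
class DiscretePID:
    def __init__(self, Kp, Ti, Td, Tl, T):
        self.Kp, self.Ti, self.Td, self.Tl, self.T = Kp, Ti, Td, Tl, T
        self.uI = 0.0        # integral state
        self.uD = 0.0        # filtered derivative state
        self.e_old = 0.0

    def step(self, e):
        # integral action, Euler backward, Eq. (8.48)
        self.uI += (self.T / self.Ti) * e
        # filtered derivative action, Euler backward, Eq. (8.47)
        a = self.Tl / (self.T + self.Tl)
        self.uD = a * self.uD + (self.Td / (self.T + self.Tl)) * (e - self.e_old)
        self.e_old = e
        # parallel form (8.41): u = Kp * (e + uI + uD)
        return self.Kp * (e + self.uI + self.uD)

pid = DiscretePID(Kp=1.2, Ti=0.8, Td=0.2, Tl=0.02, T=0.01)
print([round(pid.step(1.0), 3) for _ in range(5)])   # response to a constant error
```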
Remark 8.3 (the role played by the actions P, I, D). Some remarks have to be made
about the role that each of the three actions plays.

• P action:
  – the larger K_P, the quicker the system becomes;
  – the larger K_P, the more instability problems can arise.

• I action:
  – it allows one to annihilate the steady-state error (in case the system does
    not already have a pole at s = 0);
  – the phase margin gets worse, with possible consequences on the system
    stability;
  – saturation problems can easily arise.

• D action:
  – the phase margin gets better;
  – both rise and settling time decrease;
  – the actuator signal could become too large;
  – the output noise is amplified.

8.4 Reminders about the phase margin based synthesis

In the continuous-time case, a well-known technique is available for the controller
design. This technique will be extended to the discrete-time case through the
techniques developed in §8.2. We know how to translate closed-loop (W) time-domain
requirements (e.g. t_r, t_a,5%, m_p) into open-loop (G := CP) frequency-domain
requirements (crossover frequency ω_a* and phase margin m_φ*). Let the following
assumptions hold true (cf. Fig. 8.5):

• the Nyquist plot of P exhibits a single crossing point on the (negative) real
  axis;

• the Nyquist plot of P exhibits a single crossing point on the unit circle (cor-
  responding to the crossover frequency ω_a);

• the Nyquist plot of P ends at the origin, which means that P(s) is a strictly
  proper rational function.

Figure 8.5: P (s)’s Nyquist plot.

Assume we want the open-loop system to have both a (desired) crossover
frequency ω_a* and a (desired) phase margin m_φ*. In case P doesn't satisfy these
requirements, we need to design C in such a way that they are satisfied
by the open-loop transfer function CP. So the design of C(s) can be explained as
a sequence of steps:

1. translation of the requirements on t_r and m_p in terms of ω_a* and m_φ*, via
   the useful formulas:

        ω_a* ≃ 2/t_r,                                                    (8.49)
        m_φ* ≃ 1.04 − 0.8 m_p;                                           (8.50)

2. from the steady-state requirements (or from internal model components
   which need to be added), a first version of the controller is designed:

        C′(s) = k_RP / s^{ν_RP}.                                         (8.51)

   We can imagine this term being a "fictitious" part of the plant P(s):

        P′(s) = C′(s)P(s);                                               (8.52)

3. by either analytical reasoning or Bode plots, we evaluate the terms
   arg(P′(jω_a*)) and |P′(jω_a*)|;

4. thereafter, C(s) is designed in such a way that

        C(jω_a*) = M e^{jφ},                                             (8.53)

   with

        M := |C(jω_a*)| = 1/|P′(jω_a*)|,
        φ := arg(C(jω_a*)) = m_φ* − π − arg(P′(jω_a*)).

   Therefore, it easily follows that the open-loop transfer function now sat-
   isfies the requirements on both phase margin and crossover frequency
   (clearly, closed-loop stability must be preserved first of all). C(s) can now be
   designed by resorting either to one of the actions, or to elementary networks, or
   to a standard controller, as explained in the next sections. Obviously, the
   whole controller will be the product of the steady-state requirements part
   and of the transient requirements (phase margin and crossover frequency
   adjustment) part:

        C″(s) = C(s)C′(s).                                               (8.54)

8.5 Elementary networks design

A usual way of designing C(s) is related to the choice of a suitable elemen-
tary action/network. The choice depends on both the absolute value/phase
of P′(jω_a*) and the requirements to be fulfilled. Table 8.1 explains how to
choose the network (the various actions are discussed in detail in the sequel).

                                 |P′(jω_a*)| < 1        |P′(jω_a*)| > 1
  arg(P′(jω_a*)) > m_φ* + π      proportional action    attenuator network
  arg(P′(jω_a*)) < m_φ* + π      anticipator network    saddle network

Table 8.1: Correctors' action.



• Proportional action. The transfer function is a pure gain C_p(s) = K.

• Attenuator network. The transfer function is

        C_att(s) = (1 + αTs)/(1 + Ts),   0 < α < 1, T > 0.               (8.55)

  By imposing (8.53) we get

        α = M (cos φ − M)/(1 − M cos φ),                                 (8.56)
        T = (1/ω_a*) (M cos φ − 1)/(M sin φ).                            (8.57)

  Let's remark that α > 0 requires M < cos φ, even though this condition is not
  so restrictive: if it doesn't hold, it suffices to assume a smaller value for φ,
  which leads to a larger total phase margin m_φ. The performance could then be
  a little bit worse, but stability is guaranteed anyway.
• Anticipator network. In this case we have

        C_ant(s) = (1 + Ts)/(1 + αTs),   0 < α < 1, T > 0,               (8.58)

  with

        α = (M cos φ − 1)/(M (M − cos φ)),                               (8.59)
        T = (1/ω_a*) (M − cos φ)/sin φ.                                  (8.60)

  The condition α > 0 implies M > 1/cos φ, which, differently from the previous
  case, absolutely has to be satisfied.
• Anticipator/attenuator (saddle) network. The transfer function is now

        C_saddle(s) = [ (1 + αT₁s)/(1 + T₁s) ] · [ (1 + T₂s)/(1 + αT₂s) ],   0 < α < 1, T₁ > T₂ > 0.   (8.61)

  It is needed when we have, at the same time, both |P′(jω_a*)| > 1 and
  arg(P′(jω_a*)) < m_φ* + π. In order to evaluate the unknown parameters
  in (8.61) we can follow the simple procedure explained below (a numerical
  sketch of these inversion formulas is given after this list):

  – the condition M < cos φ has to be checked first; otherwise the problem
    has no solution;
  – the following parameter is evaluated:

        m := M (cos φ − M)/(1 − M cos φ);                                (8.62)

  – the ratio k is chosen such that

        k > 1/m;                                                         (8.63)

  – α is computed through

        α = (km − 1)/(k − m);                                            (8.64)

  – now we compute

        x₊ = [ A + √(A² − 4α²k²) ] / (2α²k²),   A := [ M²(α² + k²) − 1 − α²k² ] / (1 − M²);   (8.65)

  – so finally getting

        T₂ = (1/ω_a*) √x₊,   T₁ = kT₂;                                   (8.66)

5. a final check of the desired requirements is needed; if they are not met, we can
   recursively repeat the whole procedure after having changed the values of ω_a*
   and m_φ*.
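A minimal numerical sketch of the inversion formulas (8.56)–(8.60) follows; the value of P′(jω_a*) and the desired phase margin are illustrative assumptions, not taken from these notes.

```python
# Minimal sketch: given P'(j w_a*) and m_phi*, compute the required correction
# M e^{j phi} of (8.53) and the parameters of an attenuator or anticipator network.
import cmath, math

def required_correction(P1_jwa, m_phi_star):
    """M and phi that C(j w_a*) must provide, given the complex value P'(j w_a*)."""
    M = 1.0 / abs(P1_jwa)
    phi = m_phi_star - math.pi - cmath.phase(P1_jwa)
    return M, phi

def attenuator(M, phi, wa):          # Eqs. (8.56)-(8.57), requires M < cos(phi)
    alpha = M * (math.cos(phi) - M) / (1 - M * math.cos(phi))
    T = (M * math.cos(phi) - 1) / (M * math.sin(phi)) / wa
    return alpha, T

def anticipator(M, phi, wa):         # Eqs. (8.59)-(8.60), requires M > 1/cos(phi)
    alpha = (M * math.cos(phi) - 1) / (M * (M - math.cos(phi)))
    T = (M - math.cos(phi)) / math.sin(phi) / wa
    return alpha, T

# Example: P'(j w_a*) = 0.4 e^{-j2.8}, desired phase margin 50 deg, w_a* = 4 rad/s
M, phi = required_correction(0.4 * cmath.exp(-2.8j), math.radians(50))
print(M, math.degrees(phi))          # M = 2.5, phi ~ +30 deg -> anticipator network
print(anticipator(M, phi, wa=4.0))   # alpha in (0,1) and T > 0, as required
```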

8.6 P.I.D. - P.I. - P.D. design based on the phase margin

By applying exactly the same arguments we dealt with in the previous section,
standard controllers can be designed too.

8.6.1 P.I.D. design

Starting from

        C_PID(s) = (K_I/s) ( 1 + T_I s + T_I T_D s² ),                   (8.67)

we can choose the term K_I/s according to the steady-state requirements, while
the rest has to be designed according to the above procedure. Letting P′(s) :=
(K_I/s)P(s), the values of M and φ (defined in (8.53)) are given by (the added
integrator causes a further phase delay of π/2):

        M = ω_a* / (K_I |P(jω_a*)|),   φ = m_φ* + 3π/2 − arg(P(jω_a*)).  (8.68)

By equating both real and imaginary parts of

        C(jω_a*) = 1 + jT_I ω_a* − T_I T_D (ω_a*)² = M e^{jφ},           (8.69)



the values of T_I, T_D easily follow:

   ℑ:  T_I ω_a* = M sin φ              ⇒   T_I = M sin φ / ω_a*,         (8.70)
   ℜ:  1 − T_I T_D (ω_a*)² = M cos φ   ⇒   T_D = (1 − M cos φ)/(M sin φ ω_a*).   (8.71)

Remark 8.4. In order to make the design of C(s) possible, we need what follows:

• T_D > 0, which implies M < 1/cos φ;

• T_I > 0, which implies φ ∈ [0, π];

• a further high-frequency pole (cf. the term 1/(1 + sT_L) in (8.44)) is needed
  to obtain properness of C.
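A minimal numerical sketch of (8.70)–(8.71) follows; the values of M, φ and ω_a* are illustrative assumptions, not taken from these notes.

```python
# Minimal sketch of the P.I.D. tuning formulas (8.70)-(8.71).
import math

def pid_from_phase_margin(M, phi, wa):
    """T_I, T_D of C(s) = (K_I/s)(1 + T_I s + T_I T_D s^2), given the required M e^{j phi}."""
    TI = M * math.sin(phi) / wa                               # Eq. (8.70)
    TD = (1 - M * math.cos(phi)) / (M * math.sin(phi) * wa)   # Eq. (8.71)
    return TI, TD

# Example: M = 1.5, phi = 60 deg at w_a* = 3 rad/s
TI, TD = pid_from_phase_margin(1.5, math.radians(60), 3.0)
print(TI, TD)    # both must come out positive (cf. Remark 8.4: M < 1/cos(phi), phi in [0, pi])
```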

8.6.2 P.D. design

Starting from
        C_PD(s) = K_P (1 + T_D s),                                       (8.72)

after the evaluation of the crossover frequency, (8.72) leads to

        C_PD(jω_a*) = K_P + jω_a* K_P T_D = M e^{jφ}.                    (8.73)

Again, by equating real and imaginary parts, we get

   ℜ:  K_P = M cos φ,                                                    (8.74)
   ℑ:  T_D = tan φ / ω_a*.                                               (8.75)

This time T_D > 0 ⇒ φ ∈ [0, π/2] is needed (analogously to the anticipator
network).

8.6.3 P.I. design

Starting from
        C_PI(s) = K_P ( 1 + 1/(sT_I) ),                                  (8.76)

after the evaluation of ω_a* we obtain

        C_PI(jω_a*) = K_P + K_P/(jω_a* T_I) = M e^{jφ}.                  (8.77)

Again, by equating real and imaginary parts, we get

   ℜ:  K_P = M cos φ,                                                    (8.78)
   ℑ:  T_I = −cotan φ / ω_a*.                                            (8.79)

This time T_I > 0 ⇒ φ ∈ [−π/2, 0] has to be fulfilled (analogously to the attenu-
ator network).
Chapter 9

Digital Controllers Synthesis: Direct Synthesis Methods

9.1 Discrete-time direct synthesis by "canceling"

Figure 9.1: Closed loop system in presence of disturbances.

Let's consider the closed-loop control architecture depicted in Fig. 9.1, and as-
sume the desired system behavior and requirements have been translated into some
desired closed-loop transfer function W(z). A simple way of exactly obtaining
W(z) passes through expressing C(z) in terms of both W(z) and P(z), as follows.
From
        W(z) = C(z)P(z) / (1 + C(z)P(z)),

the desired controller can be obtained as

        C(z) = [ W(z)/(1 − W(z)) ] · 1/P(z).                             (9.1)

However this naive solution can't be considered satisfactory, as some troubles
arise, namely:

• C(z) could turn out to be highly complex (although the computation capa-
  bilities of current µPs could overcome that problem, it might be preferable
  to deal with simpler controllers);

• C(z) could turn out not to be a proper (causal) controller, and the conditions
  on W(z) (our degree of freedom in the design) which might prevent that
  aren't so clear;

• some zero-pole cancellations could arise, leading to stability problems and/or
  tricky issues in case of both an approximate model and defective knowledge
  about the plant.

First of all, let's investigate what conditions need to be satisfied in order to
obtain a causal system, i.e. C(z) to be a proper transfer function. By defining

        P = N_P/D_P,   C = N_C/D_C,   W = N_W/D_W,                       (9.2)

where all numerators and denominators (N_i, D_i) are assumed to be pairwise coprime
and of degrees n_i and d_i respectively, for i = P, W, C, and by substituting into
the expression of C(z) given by (9.1), it easily follows that

        C = N_C/D_C = N_W D_P / [ (D_W − N_W) N_P ],                     (9.3)

with
        n_C = n_W + d_P − ∆,   d_C = d_W + n_P − ∆,                      (9.4)

where ∆ represents the number of cancellations (between N_C and D_C), and where we
assumed the degree of D_W − N_W to still be d_W.¹ In case the last assumption is
not satisfied, one needs to consider d′_W = d_W − ∆′, where ∆′ represents the
degree loss in D_W − N_W due to cancellations between the maximal degree terms
in D_W and N_W. So, for guaranteeing properness of C, the following relations
have to hold true:

   n_C ≤ d_C   ⇔   n_W + d_P ≤ d′_W + n_P   ⇔   d_W − n_W ≥ d′_W − n_W ≥ d_P − n_P,   (9.5)

where d_W − n_W = r_W and d_P − n_P = r_P.

In conclusion, the admissible W's must have a relative degree r_W (which cor-
responds to the closed-loop impulse response delay) not smaller than that of P. In
the case of typical P(z)'s, obtained from continuous systems via sampling and holding,
it holds r_P = 1 and therefore r_W ≥ 1.
Now let’s switch to the W internal stability problem. Because of Proposition
5.1 we have to verify that

D̄ := NP NC + DP DC is stable and deg(D̄) = deg(DP DC ). (9.6)

Equivalently, all the transfer functions between inputs (and possibly auxiliary
inputs acting on the various blocks inputs) and internal variables have to be
1
This always holds true if W is either strictly proper or is proper with negative Evans’ gain.

stable ones. More explicitly (by referring to Fig. 7.7 again), the following transfer
functions have to be stable ones
Y CP
=W = , (9.7)
R 1 + CP
Y P DC
= =W , (9.8)
N1 1 + CP NC
U 1 DP DC
= =W , (9.9)
N1 1 + CP NP NC
U C DP
= =W . (9.10)
R 1 + CP NP
It is worthwhile to notice that the disturbance N2 hasn’t been taken into account,
as it could be considered as a version of R (except for a sign change). So the
conditions we are searching for are
• from (9.7) W has to be stable (which, on the other hand, is a fundamental
requirement on W );
• from (9.10) the unstable roots of NP must be canceled by NW ’s roots;
• from (9.8) it follows
DC NW (DW − NW )NP
W = , (9.11)
NC DW NW DP
which easily implies that all unstable poles of P must be DW − NW ’s roots
too;
• from the previous conditions, BIBO-stabilty of (9.9) is ensured.
Remark 9.1. As final remarks:

• the direct synthesis method is simple to apply, unless P is unstable. In that
  case N_W, D_W have to be chosen in order to guarantee that D_W − N_W has
  the suitable roots;

• if we are interested in including internal model components, they can be
  embedded in a fictitious way in the plant P(z) as follows. Let R = N_R/D_R =
  N_R/(D_R^S D_R^U) be the signal we want to track, with D_R^S stable and D_R^U
  unstable. It suffices to define P̃(z) = P(z)/D_R^U(z) and thereafter apply the
  above procedure;²

• since internal stability implies that all poles belong to the open unit disc,
  and the poles depend continuously on the coefficients, small enough
  coefficient variations preserve stability. This means that small modeling
  errors appearing in P(z) don't affect stability (provided they are small
  enough).

² In this respect we have to be somewhat careful, as the situation corresponds to the "diffi-
cult" case which requires including the unstable poles of P̃ into D_W − N_W.

Some open questions remain:

1. how can the closed-loop transfer function W be chosen?

2. how can tracking requirements be embedded?

3. how can we take into account delays, which could appear either in the plant
   P to be controlled or in the measurement processes?

4. are these methods in accordance with standard P.I.D. & Co. design?

We'll answer these questions, starting with the description of some methods which
apply to reasonable choices of W.

9.2 Dahlin's method

The idea underlying this method is that of trying to find a W(z) which, in some
sense, is similar to its continuous counterpart W(s), given by

        W(s) = e^{−st_d} / (1 + sτ),                                     (9.12)

with t_d the closed-loop system delay and 1/τ a pole related to the system rise
time (the smaller τ, the quicker the system). Let's assume t_d to be an integer
multiple of the sampling time T, i.e. t_d = nT. From the step response invariance
methodology, expressed by (5.8), it follows that

        W(z) = (1 − z⁻¹) Z[ S ∘ L⁻¹[ W(s)/s ]; T ]
             = z^{−n} (1 − e^{−T/τ}) / (z − e^{−T/τ}).                   (9.13)

The computation of n requires one to look at the relative degree r_P of P(z), as
(9.5) has to hold true:

        n + 1 ≥ r_P   ⇒   n ≥ r_P − 1.                                   (9.14)

This way the controller C(z) takes the form

        C(z) = [ W(z)/(1 − W(z)) ] · 1/P(z)
             = (1 − e^{−T/τ}) / [ −1 + e^{−T/τ} + z^n (z − e^{−T/τ}) ] · 1/P(z),   (9.15)

where, implicitly, P(z) has been assumed to be minimum-phase, in order to
guarantee internal stability.
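A minimal sketch of (9.15) for a hypothetical first-order, minimum-phase plant follows; the plant P(z) = 0.2/(z − 0.8), the sampling time and the time constant τ are illustrative assumptions, not taken from these notes.

```python
# Minimal sketch of Dahlin's formula (9.15), with n = 0 (r_P = 1) and tau = 2T.
import numpy as np

T, tau, n = 0.1, 0.2, 0
a = np.exp(-T / tau)                       # e^{-T/tau}

# W(z) = z^{-n}(1 - a)/(z - a)  ->  C(z) = (1 - a) / [z^n (z - a) - (1 - a)] * 1/P(z)
num_W = np.array([1 - a])
den_frac = np.polysub(np.polymul(np.r_[1.0, np.zeros(n)], [1.0, -a]), [1 - a])

# divide by P(z) = 0.2/(z - 0.8): multiply by (z - 0.8)/0.2
num_C = np.polymul(num_W, [1.0, -0.8]) / 0.2
den_C = den_frac                            # here: z - 1, i.e. a PI-like controller
print("C(z) numerator:", num_C, " denominator:", den_C)
```

Note how, for n = 0, the denominator reduces to z − 1, so the controller automatically contains the internal model of the step.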

9.3 Choice of W(z) based on dominant pole arguments

As is well known, in the continuous-time case the transient requirements have the
form
        m_p ≤ m_p*,   t_r ≤ t_r*,   t_a ≤ t_a*.                          (9.16)

As a consequence, the admissible region for dominant pole placement is depicted
in Fig. 7.4. The discrete-time "translation" of the approximated transfer function
W(s) = 1 / ( 1 + 2ξ s/ω_n + (s/ω_n)² ) leads to

        W_d(z) = (1 − z⁻¹) Z[ S ∘ L⁻¹[ W(s)/s ]; T ]
               = K (z − z₁) / [ (z − p_d1)(z − p_d2) ],                  (9.17)

with p_d1,2 = e^{p_1,2 T} and p_1,2 = −ω_n (ξ ± √(ξ² − 1)). More precisely, the following
algorithm can be implemented:

• from the requirements on m_p, t_r, and t_a, T can be chosen and the
  corresponding admissible region in the s domain can be evaluated;

• poles can be placed (possibly close, even though not too close, to the
  region boundary);

• the discrete-time poles are evaluated through p_d1,2 = e^{p_1,2 T};

• a suitable delay has to be added to W(z) in order to guarantee properness
  of C(z), and z₁ in (9.17) is considered too, to take into account unstable
  zeros (if there are any); otherwise it suffices to choose z₁ with small absolute
  value;

• K is chosen in such a way to obtain a unit zero-frequency gain.
The question now is: how does the admissible region change when passing from
the continuous case to the discrete one?

• From the rise-time requirement t_r* it follows ω_n ≥ 1.8/t_r*. The region
  corresponding to this inequality is mapped as in Fig. 9.2.

• From the settling-time requirement t_a* it follows σ = ω_n ξ ≥ const./t_a*. The
  corresponding situation is depicted in Fig. 9.3.

• From the maximum-peak requirement m_p*, the following condition has to
  be verified: ξ > ξ* = −ln m_p* / √( (ln m_p*)² + π² ),³ which can be graphically
  expressed in the z domain as in Fig. 9.4.

³ Recall that cos(φ) = ξ.


Figure 9.2: Mapping from s plane to z one of the region related to t∗r .


Figure 9.3: Mapping from s plane to z one of the region related to t∗a .


Figure 9.4: Mapping from s plane to z one of the region related to m∗p .

By intersecting the three regions described above, the admissible region for
placing the W (z)’s poles is obtained (see Fig. 9.5).

Remark 9.2. It is worthwhile to remark that in the discrete-time case a pole
at z = 0 is equivalent to a continuous-time pole at s = −∞. This choice leads to
a dead-beat controller, which achieves exact tracking of the reference signal after
a finite number of steps.


Figure 9.5: An example of admissible region for pole placement in the z plane.

9.4 Direct Synthesis from a different perspective

Let's consider a different viewpoint for deriving the direct synthesis formula, start-
ing from a different interconnection, which holds true both for continuous-time
and discrete-time systems. Given the system connection depicted in Fig. 9.6, let
P be the (real) plant to be controlled, P̃ a suitable approximation of P that the
controller designer has available, and C′ a given preliminary controller.
It is straightforward to see that, whenever the plant model equals the
real plant, P̃ ≡ P, the variable b is equal to zero, so the open-loop control
with C′ = W/P would lead to the desired controlled behavior W.

Figure 9.6: Systems connection for the direct synthesis.

The block connection in Fig. 9.6 can be equivalently replaced with that in
Fig. 9.7, where it is easier to notice that, denoting by C the positive-feedback
connection of C′ and P̃, it holds true that

        C = C′/(1 − C′P̃) = (W/P)/(1 − (W/P)P̃)  =(with P̃ ≡ P)=  [ W/(1 − W) ] · 1/P,     (9.18)

which is nothing more than the well-known direct synthesis formula.


Figure 9.7: Equivalent systems connection for the controller direct synthesis.

9.5 Smith delay compensation

A wide class of plants obeys

        P(s) = e^{−st_d} P′(s),                                          (9.19)

which means that a delay t_d is present. In most cases the tracking target is
adapted to the delay, in the following sense:

        lim_{t→+∞} [ y(t) − r(t − t_d) ] = 0.                            (9.20)

As a case study (see [4]), a delaying system, for which a controller leading to
(9.20) is needed, is now presented.

Example 9.1 (Temperature control for a fluid in a duct). Let the physical scheme
described in Fig.9.8 be given.

Figure 9.8: Temperature control for a fluid in a duct.

Let R_T be the thermistor measuring the duct temperature, and let y(t) be
proportional to the measured temperature. The duct is endowed with a resistor R
for controlling its temperature, thanks to an input voltage u(t) which modulates
the delivered power. An approximate model for the transfer function of the plant
to be controlled is (without taking into account any delay)

        P′(s) = k/(1 + sτ).

If the fluid speed in the duct is assumed to be constant, the temperature measure
turns out to be delayed by t_d = l/v. So, taking into account this delay, the plant
to be controlled becomes

        P(s) = k e^{−st_d}/(1 + sτ).

A proportional controller (with gain K_P) leads to the interconnection
expressed by the scheme depicted in Fig. 9.9. From the Nyquist plots of both P′(s)
and P(s) (Fig. 9.10) the arising of instability for K_P large enough is self-evident,
so an alternative control scheme is required.

Figure 9.9: Proportional control scheme.

Figure 9.10: Nyquist plot of P′(s) (left) and of its delayed version P(s) (right).
For large enough gain values the P(s) diagram encircles the point −1 + j0, leading
to instability (red dotted line).

The Smith compensator (predictor) is now introduced. Referring to
Fig. 9.11, we would like to obtain

        B(s)/U(s) = P′(s),                                               (9.21)

which would correspond to a "canceling" of the delay appearing in P(s). It would
be simple to do if P′(s)U(s) were known, but that is not the case, as the
output of P′ can't be directly measured.
Let P(s) = P′(s)e^{−st_d} be the plant endowed with the delay, so that (9.21)
implies

   P(s) − D(s) = P′(s)   ⇒   D(s) = −(1 − e^{−st_d})P′(s) = e^{−st_d}P′(s) − P′(s).   (9.22)


Figure 9.11: Smith’s predictor control scheme.

By assuming that some estimates P̃′(s) and t̃_d of both P′(s) and t_d are available,
the control scheme in Fig. 9.11 can be reduced to that in Fig. 9.12. The same
figure shows that the external feedback loop "disappears" when the estimates are
exact.

Figure 9.12: Equivalent Smith's predictor control scheme.

This way, C′(s) can be designed regardless of the delay P(s) is endowed
with. As a matter of fact, the feedback signal b, if P̃′ ≡ P′ and
t̃_d ≡ t_d, coincides with the output of the plant P′ deprived of the delay.
By evaluating the whole system transfer function

Y (s) C 0 (s)P (s)


W (s) = =
R(s) 1 + C 0 (s)(P (s) − D(s))
C 0 (s)P 0 (s) −std
= e , (9.23)
1 + C 0 (s)P 0 (s)

it follows that (9.23) can be factorized into two terms: the first one without any
delay, and the second one expressing 0a delay equal to the desired one. So, a
0 C (s)P 0 (s)
correct design of C (i.e. satisfying 1+C 0 (s)P 0 (s) ' 1) leads to reach the tracking
target (9.20).

Remark 9.3. The previous analysis holds true both for continuous-time and
discrete-time systems. However, substantial differences arise in practical sit-
uations:

• A delay e^{−st_d} is far from trivial to realize by means of electric networks,
  so in the continuous-time case a rational approximation of the exponential is
  preferred (e.g., the Padé approximation (1, 1): e^{−st_d} ≃ (1 − st_d/2)/(1 + st_d/2));

• On the contrary, in the discrete-time case such functions are easily built:
  it suffices to store the function values in suitable look-up tables placed in
  the µP. So we can resort to the scheme in Fig. 9.13, with e^{−st̃_d} ∼ z^{−N},
  NT = t̃_d and

        C(z) = C′(z) / [ 1 + (1 − z^{−N}) C′(z) P̃′(z) ]

  (a minimal implementation sketch of this scheme is given after Fig. 9.13).


Figure 9.13: Discrete-time version of the Smith’s predictor structure.
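A minimal sketch of the scheme in Fig. 9.13 follows, with the inner loop C(z) realized directly as a difference-equation update rather than as a single transfer function. The first-order model P̃′(z) = b/(z − a), the pure gain used for C′(z) and all numbers are illustrative assumptions, not taken from these notes.

```python
# Minimal sketch of a discrete Smith predictor: u = C'(e - (1 - z^{-N}) P~' u).
from collections import deque

def smith_predictor(Cprime, a, b, N):
    """Return a stateful controller u = f(e) realizing C(z) of Fig. 9.13."""
    state = {"xp": 0.0, "buf": deque([0.0] * N, maxlen=N)}
    def controller(e):
        xp = state["xp"]                    # output of the delay-free model P~'
        xp_delayed = state["buf"][0]        # same signal delayed by N samples (z^{-N})
        e_inner = e - (xp - xp_delayed)     # inner feedback of Fig. 9.13
        u = Cprime * e_inner                # C'(z) taken as a simple gain here
        state["buf"].append(xp)             # shift the delay line
        state["xp"] = a * xp + b * u        # model update: x(k+1) = a x(k) + b u(k)
        return u
    return controller

ctrl = smith_predictor(Cprime=2.0, a=0.8, b=0.2, N=5)
print([round(ctrl(1.0), 3) for _ in range(8)])   # controller output for a constant error
```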

9.6 Recalling Diophantine equations

So far an important aspect of direct synthesis has been neglected: whenever the
plant P(z) to be controlled is equipped with unstable poles, they must be roots of
D_W(z) − N_W(z) in order to ensure internal stability. The techniques considered
so far apply to simple W(z)'s, and in general how to enforce this constraint
hasn't been clarified.
The denominator of the closed-loop transfer function W, when cancellations
are absent, takes the form

        N_C N_P + D_C D_P = D_W,

which is nothing more than a diophantine (or Bézout) equation in the (polyno-
mial) unknowns X = D_C, Y = N_C, which are the compensator parameters to
be evaluated. All transfer functions between inputs/disturbances and internal
variables of the closed-loop system have poles which form a subset of the roots of
D_W. Therefore, by ensuring that D_W is a stable (Schur) polynomial we obtain
the internal stability of the interconnection.
These equations are well known and widely studied in the literature, and necessary
and sufficient conditions for their solvability are available.
See the file: note15.pdf
We give up any requirement about the zeros, as the direct synthesis method
we are going to explain (see below) focuses only on poles. In other words, no
zero allocation will be taken into account in order to ease the interconnection
stability. Nevertheless, it is worthwhile to recall that zeros can dramatically in-
fluence the system performances, in particular when P is not minimum phase.

9.7 Controller design via diophantine equations

More in detail, this analytical direct synthesis method works as follows: given
the interconnection in Fig. 9.14, we have to impose that the closed-loop transfer
function is

        W(z) = Y_o(z)/R(z) = N_C N_P / (N_C N_P + D_C D_P)  =!  N_W(z)/D*(z),        (9.24)

where D*(z) takes into account the desired pole placement, and X := D_C and
Y := N_C are unknown polynomials to be evaluated. As a counterpart, N_W is a
free parameter, whose structure is completely unconstrained and which will be a
by-product of the synthesis procedure.

Figure 9.14: Closed-loop control interconnection.

Internal stability requires BIBO-stability of all the transfer functions W, W/C, W/P,
W/(CP), and it's easy to see that finding X and Y which make D* stable solves our
problem.⁴

So, we have to investigate the existence of polynomials X and Y which solve

        N_P Y + D_P X = D*.                                              (9.25)

Remark 9.4. Some remarks:

• as previously discussed, once X and Y have been found, the compensator
  C = Y/X makes the whole interconnection internally stable;

• (9.25) doesn't allow one to place the zeros of W(z).

Proposition 9.1. Let N_P and D_P be coprime polynomials (devoid of common
zeros), and let n := deg(D_P). Assume that deg(X) = n − 1 and deg(Y) ≤ n − 1.
Choose any polynomial D* with deg(D*) = 2n − 1. Then the following facts hold
true:

• a solution C = Y/X does exist;

• that solution is a proper rational function.

⁴ Actually it could be proved that this condition for internal stability is not only sufficient
but necessary too.

Remark 9.5. The problem here is that deg(D*) = 2n − 1 could be
"too large", and hence we would have to impose requirements on 2n − 1 poles.
However, a solution is easily found by assuming

        D*(z) = D_dom(z) ∆(z),                                           (9.26)

with D_dom(z) containing the dominant poles, while ∆(z) has only "quick poles"
(e.g. poles at z = 0).

Let⁵

        D_P(z) = z^n + a₁z^{n−1} + ··· + a_n,                            (9.27)
        N_P(z) = b₀z^n + b₁z^{n−1} + ··· + b_n,                          (9.28)
        X(z)   = x₁z^{n−1} + x₂z^{n−2} + ··· + x_n,                      (9.29)
        Y(z)   = y₁z^{n−1} + y₂z^{n−2} + ··· + y_n,                      (9.30)
        D*(z)  = c₁z^{2n−1} + c₂z^{2n−2} + ··· + c_{2n}.                 (9.31)

Solving the diophantine equation (9.25) is equivalent to solving a linear system
of the form⁶
        A x = b,                                                         (9.32)

where A is the 2n × 2n Sylvester-type matrix whose first n columns contain shifted
copies of the coefficient vector (1, a₁, ..., a_n) of D_P and whose last n columns
contain shifted copies of the coefficient vector (b₀, b₁, ..., b_n) of N_P: the j-th
column (j = 1, ..., n) carries the coefficients of D_P in rows j, ..., j + n, and the
(n + j)-th column carries the coefficients of N_P in the same rows, while

        x := (x₁, ..., x_n, y₁, ..., y_n)ᵀ,    b := (c₁, ..., c_{2n})ᵀ.

So (9.32) implies that the coefficients of the polynomials X and Y are given by

        x = A⁻¹ b.                                                       (9.33)
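A minimal sketch of (9.32)–(9.33) follows: the Sylvester-type matrix is built from the coefficients of D_P and N_P and the linear system is solved numerically. The plant and the choice of D* below are illustrative assumptions, not taken from these notes.

```python
# Minimal sketch: build the Sylvester-type matrix of (9.32) and solve for X, Y.
import numpy as np

def solve_diophantine(DP, NP, Dstar):
    """DP, NP: coefficient lists of degree n (DP monic); Dstar: 2n coefficients of
    the desired degree-(2n-1) polynomial. Returns (X, Y), both of degree n-1,
    such that DP*X + NP*Y = Dstar."""
    n = len(DP) - 1
    A = np.zeros((2 * n, 2 * n))
    for j in range(n):                      # shifted copies of the coefficients
        A[j:j + n + 1, j] = DP              # first n columns  <- D_P
        A[j:j + n + 1, n + j] = NP          # last n columns   <- N_P
    sol = np.linalg.solve(A, np.asarray(Dstar, dtype=float))
    return sol[:n], sol[n:]                 # coefficients of X and of Y

# Example: P(z) = 1/(z^2 - 3z + 2) (unstable pole at z = 2), D*(z) = z^3
X, Y = solve_diophantine(DP=[1, -3, 2], NP=[0, 0, 1], Dstar=[1, 0, 0, 0])
print("X:", X, "Y:", Y)
# Verify DP*X + NP*Y == Dstar:
print(np.polyadd(np.polymul([1, -3, 2], X), np.polymul([0, 0, 1], Y)))
```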

Remark 9.6 (Asymptotic tracking via diophantine equations). The problem of


introducing internal model dynamics is simply overcome, as it suffices to replace
⁵ D_P(z) is assumed to be monic, without loss of generality.
⁶ Under the coprimeness assumption on N_P and D_P, the matrix A in (9.32) is non-singular.

D_P in (9.25) with D̃_P = D_P D_R^U, where D_R^U takes into account the g unstable
poles of the reference signal. This way, the degree of D̃_P increases up to n + g.
Solving the linear system associated with the diophantine equation, with the
constraint of having both a minimal number of unknowns and a proper controller,
reduces to choosing:

        D̃_P(z) = z^{n+g} + a₁z^{n+g−1} + ··· + a_{n+g},                 (9.34)
        N_P(z)  = b₀z^n + b₁z^{n−1} + ··· + b_n,                         (9.35)
        X(z)    = x₁z^{n−g−1} + x₂z^{n−g−2} + ··· + x_{n−g},             (9.36)
        Y(z)    = y₁z^{n−1} + y₂z^{n−2} + ··· + y_n,                     (9.37)
        D*(z)   = c₁z^{2n−1} + c₂z^{2n−2} + ··· + c_{2n}.                (9.38)

This way, the increase in the degree of D_P is compensated by a corresponding
decrease in the degree of X. Let's notice that the matrix blocks are no longer
square. By solving the system, X and Y are found, leading to a proper compensator
which solves our problem:

        C = Y / (X D_R^U).

9.8 Digital controller synthesis: Deadbeat tracking
Consider the following definition.

Definition 9.1. A controller is deadbeat (DB) with respect to a given class of reference signals if the tracking error for any signal within the class vanishes after a finite number of steps.

Definition 9.2. A DB controller is minimum time if the number of steps required to annihilate the error is minimal (without losing controller causality).

Given the usual control scheme in Fig. 9.15, let us assume that:

• the reference signal transform is a rational function R(z) = NR(z)/DR(z), with DR(z) monic;

• DR(z) is known and its degree is nR.

Moreover, let nP be the degree of DP, the denominator of P.


The Z-transform of the error is

E(z) = R(z)/(1 + C(z)P(z)) = (1 − W(z))R(z),   (9.39)

Figure 9.15: Closed-loop control scheme.

and it vanishes after a finite number of steps if and only if it is a polynomial in z^{−1}.^7 In other words:

E(z) = NE(z)/z^k.   (9.40)
Two possibilities are available, associated with the two equivalent decompositions of E(z) in (9.39), respectively.

Diophantine equation synthesis: The decomposition

E(z) = R(z)/(1 + C(z)P(z)) = (DC DP NR)/((DC DP + NC NP) DR),   (9.41)

suggests resorting to the asymptotic tracking method via diophantine equations seen above, but this requires including the whole reference signal denominator in D̃P. So we have to solve:
D̃P X + NP Y = DW,   (9.42)

with D̃P = DP DR of degree nP + nR, while X has degree nP − 1, NP is expressed as a polynomial of degree nP, and Y is assumed to have degree nP + nR − 1. The DB tracking requirement translates into DW = z^{2nP + nR − 1}. Then we have

C = Y/(X DR),   (9.43)
and we only need to verify that

E = R/(1 + CP)   (9.44)
  = (X DR DP NR)/((X DR DP + NP Y) DR)   (9.45)
  = (X NR DP)/DW = (X NR DP)/z^{2nP + nR − 1}.   (9.46)

Synthesis by canceling: The second approach resorts to the alternative method which also allows assigning the numerator of the closed-loop transfer function W. In this respect, it appears to be more convenient and instructive, but careful attention must be paid whenever P has zeros/poles with absolute value greater than or equal to 1.
7 We know that Z^{−1}[ Σ_{l=0}^{r} a_l z^{−l} ] = Σ_{l=0}^{r} a_l δ(k − l).

We first need to express R(z) in a more manageable form:

R(z) = (z^{−nR} NR(z))/(z^{−nR} DR(z)) = ÑR(z^{−1})/D̃R(z^{−1}).   (9.47)

So (9.39) is equivalently rewritten as

E(z) = (1 − W(z)) ÑR(z^{−1})/D̃R(z^{−1}).   (9.48)

Requiring that E(z) is a polynomial in z^{−1} reduces to

(1 − W(z))/D̃R(z^{−1}) = Q(z^{−1}),   (9.49)

with Q(z^{−1}) a polynomial of degree q, with no constraints for now. This implies

E(z) = ÑR(z^{−1})Q(z^{−1}).   (9.50)

It is easily seen that the smaller q = deg Q(z^{−1}) is, the quicker the deadbeat response (in the sense of the minimum number of steps).
From (9.49) it follows

W(z) = NW/DW = 1 − D̃R(z^{−1})Q(z^{−1})
     = (z^κ − z^κ D̃R(z^{−1})Q(z^{−1}))/z^κ
     = (z^κ − DR(z)Q̃(z))/z^κ,   (9.51)

where κ := nR + q and Q̃(z) := z^q Q(z^{−1}) have been defined. Therefore, from (9.51), a necessary condition for the DB response is expressed in terms of the poles of W(z), which all have to be zero. From the direct synthesis formula (9.1), the desired controller is

C(z) = W(z)/(1 − W(z)) · 1/P(z)
     = NW(z)/(DW(z) − NW(z)) · DP(z)/NP(z)
     = NW(z)DP(z)/(DR(z)Q̃(z)NP(z)).   (9.52)

It is easy to see that the open-loop transfer function C(z)P(z) = NW(z)/(DR(z)Q̃(z)) is endowed, as expected, with the internal model components (related to the whole reference signal denominator).
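As a small numerical illustration of (9.51)–(9.52) (our own sketch; the step reference with q = 0 and Q̃(z) = 1 anticipates the case worked out by hand in Section 9.9, and the plant below is a hypothetical stable, minimum-phase one with relative degree 1):

    import numpy as np

    # Step reference: DR(z) = z - 1 (nR = 1); choose q = 0 and Qtilde(z) = 1 (monic)
    DR = np.array([1.0, -1.0])
    Qt = np.array([1.0])
    kappa = (len(DR) - 1) + (len(Qt) - 1)          # kappa = nR + q = 1

    # NW(z) = z^kappa - DR(z) Qtilde(z),  DW(z) = z^kappa   (cf. (9.51), (9.53))
    DW = np.r_[1.0, np.zeros(kappa)]
    NW = np.polysub(DW, np.polymul(DR, Qt))        # -> [0, 1], i.e. W(z) = 1/z

    # Hypothetical plant P(z) = NP/DP = 0.5/(z - 0.2): stable, minimum phase, rP = 1
    NP = np.array([0.5])
    DP = np.array([1.0, -0.2])

    # C(z) = NW DP / (DR Qtilde NP), cf. (9.52); loop gain C P = NW/(DR Qtilde)
    C_num = np.polymul(NW, DP)                     # -> z - 0.2
    C_den = np.polymul(np.polymul(DR, Qt), NP)     # -> 0.5 (z - 1): internal model
    print("NW:", NW, " C num:", C_num, " C den:", C_den)

The resulting C(z) = (z − 0.2)/(0.5(z − 1)) coincides with what formula (9.65) below gives for this plant, and the loop gain NW/(DR Q̃) = 1/(z − 1) indeed contains the internal model of the step.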

Only one question remains open: which W(z) are obtainable by resorting to DB controllers? From (9.51) it follows that

NW(z) = z^κ − DR(z)Q̃(z),   (9.53)

with Q̃ an arbitrary polynomial. So we can obtain a suitable desired NW(z) if we are able to solve the following equation in the unknown Q̃(z):

DR(z)Q̃(z) = z^κ − NW(z),   (9.54)

without losing either the controller causality or the closed-loop internal stability.
Remark 9.7. Let

DR(z) = a_{nR} z^{nR} + a_{nR−1} z^{nR−1} + · · · + a_0,   (9.55)
NW(z) = b_κ z^κ + b_{κ−1} z^{κ−1} + · · · + b_0,   (9.56)
Q̃(z) = x_q z^q + x_{q−1} z^{q−1} + · · · + x_0,   (9.57)

and note that (9.54) can be rewritten in terms of a linear system of the form

    [ a_{nR}    0        ···   0      ]               [ 1 − b_κ   ]
    [ a_{nR−1}  a_{nR}   ⋱     ⋮      ]  [ x_q     ]  [ −b_{κ−1}  ]
    [ ⋮         ⋱        ⋱     0      ]  [ x_{q−1} ]  [ ⋮         ]
    [ a_0       a_1      ⋱     a_{nR} ]  [ ⋮       ] = [ ⋮        ]          (9.58)
    [ 0         a_0      ⋱     ⋮      ]  [ x_0     ]  [ ⋮         ]
    [ ⋮         ⋱        ⋱     a_1    ]               [ ⋮         ]
    [ 0         ···      0     a_0    ]               [ −b_0      ]

where the coefficient matrix has size (κ + 1) × (q + 1).

However, a problem remains: the previous system is not always solvable, because the q + 1 unknowns are in general not sufficient for matching κ + 1 = nR + q + 1 arbitrary coefficients.

Other constraints which W(z) has to satisfy are the following:

• the relative degree of W(z) has to be chosen so as to guarantee realizability (i.e. causality) of C(z). Recall that

– the causality condition on C(z) is satisfied if W(z) has relative degree at least equal to that of P(z);

– P(z) (in case it has been obtained by sampling/holding) typically has relative degree equal to 1.

From (9.51), since DR(z) is assumed monic w.l.o.g., it follows that W(z) can have relative degree (at least) 1 only if Q̃(z) is monic too.^8

Remark 9.8. Q̃(z) monic ensures properness of C(z); however, strict properness is not ensured. Q̃(z) monic, together with a suitable choice of the remaining coefficients, can guarantee strict properness of C(z): this requires NW(z) = z^κ − DR(z)Q̃(z) to contain only terms of degree at most κ − 2, which implies an increase in the number of steps required for annihilating the error (recall E(z) = ÑR(z^{−1})Q(z^{−1})).

• a further constraint is related to asymptotic tracking, which requires

lim_{z→1} W(z) = (z^κ − DR(z)Q̃(z))/z^κ |_{z=1} = 1 ⇐⇒ NW(1) = 1.   (9.59)

• Finally, the free parameters in Q(z) can be used in order to:

(1) endow the numerator of W with the (possible) unstable zeros of P;

(2) make the unstable poles of P roots of DW − NW. In this case, DW − NW is exactly equal to DR Q̃. So, unstable poles of P destroy internal stability unless the corresponding factors are inserted into Q̃.

Remark 9.9. As far as the last situation is concerned, care is needed at the roots of DR(z), since at those values of z the choice of Q has no effect: there NW(z) = z^κ is fixed, while DW − NW = DR Q̃ vanishes in any case. This implies that neither the zeros of W nor the roots of DW − NW can be freely placed at the poles of the reference signal we want to track in the dead-beat sense.
Some examples follow, referring to simple situations.

9.9 Examples of dead-beat tracking for constant signals
This particular case corresponds to R(z) = z/(z − 1) = 1/(1 − z^{−1}), so that the degree of the denominator DR(z) = z − 1 is nR = 1. Since ÑR(z^{−1}) = 1, it holds that

E(z) = ÑR(z^{−1})Q(z^{−1}) = Q(z^{−1}),   (9.60)
so the degree q of Q determines the number of steps required for annihilating the error. As previously mentioned, it is better to give Q the minimum possible degree, in order to obtain the "best" DB controller (in the minimum-number-of-steps sense).
8 In fact NW(z) = z^κ − DR(z)Q̃(z) = z^κ − z^{nR+q} + (terms of degree ≤ κ − 1) = z^κ − z^κ + (terms of degree ≤ κ − 1).

• Case q = 0. It holds that

Q(z^{−1}) = a = const.   (9.61)
⇒ NW(z) = z − (z − 1)a = (1 − a)z + a   (9.62)
⇒ W(z) = ((1 − a)z + a)/z,   (9.63)
and the constraints to be satisfied are

– W(1) = 1 ⇔ NW(1) = 1, a condition which is always verified;

– C(z) properness. If, as usual, we assume that the relative degree of P(z) is 1, then W(z) must have relative degree at least 1. Since, for the W(z) in (9.63), this requires deg(NW(z)) = 0, the choice a = 1 is necessary.

So, in conclusion,

W(z) = z^{−1},   (9.64)

i.e., W(z) becomes a pure unit delay, and the controller takes the expression

C(z) = W(z)/(1 − W(z)) · 1/P(z) = 1/(z − 1) · DP(z)/NP(z).   (9.65)

Note, however, that this W(z) is obtainable only for minimum-phase plants P(z) (otherwise internal stability is lost) with relative degree rP = 1.

• Case q = 1. This way

Q(z^{−1}) = a + bz^{−1}   (9.66)
⇒ NW(z) = z^2 − (z − 1)(az + b)   (9.67)
⇒ W(z) = (z^2 − (z − 1)(az + b))/z^2.   (9.68)
The constraints are

– W(1) = 1, always verified;

– C(z) properness: the relative degree of W(z) must be at least 1, so Q̃(z) has to be monic, and therefore a = 1.

Now we can choose b with different goals in mind:

1. to obtain a relative degree of W(z) equal to 2: since NW(z) = z^2 − (z − 1)(z + b) = z(1 − b) + b, this holds if and only if b = 1;
2. to place an unstable zero in W (for internal stability purposes): that zero is z_b = b/(b − 1), for b ≠ 1;

3. to place an unstable root in DW − NW (for the same purpose as above): this root is z_b = −b, for b ≠ 1.
• Case q = 2. Now Q̃(z) is given by

Q̃(z) = az^2 + bz + c.   (9.69)

As in the previous cases, let a = 1 so that the relative degree of W(z) is at least 1 (hence C(z) is proper): this way

NW(z) = z^3 − (z − 1)(z^2 + bz + c) = z^2(1 − b) + z(b − c) + c,   (9.70)
so that
– b = 1 and c = 1 imply a relative degree of W(z) equal to 3;

– b = 1 and c ≠ 1 imply a relative degree of W(z) equal to 2 and, moreover, W(z) has one zero at z_c = c/(c − 1);

– b ≠ 1 implies a relative degree of W(z) equal to 1 and, moreover, W(z) has two zeros at z_{bc1,2} = (−(b − c) ± sqrt((b − c)^2 − 4(1 − b)c))/(2(1 − b)).
Alternatively, b, c could be used for inserting unstable roots in DW − NW ,
when needed to obtain the closed-loop internal stability.

For q > 2 a similar reasoning applies.


Remark 9.10. If different canonical reference signals are considered,

R(z) = NR(z)/(z − 1)^{l+1},   (9.71)

it follows that

W(z) = (z^{l+1+q} − (z − 1)^{l+1} Q̃(z))/z^{l+1+q}.   (9.72)

Once again, since DR(z) is monic, Q̃(z) has to be monic too, in order to obtain a relative degree of W(z) greater than 0. Furthermore, the condition W(1) = 1 is automatically guaranteed by the factor (z − 1)^{l+1}. If Q̃(z) is desired to be constant (degree zero), then Q̃(z) = 1, which yields a relative degree of W(z) equal to 1.
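As a quick numerical check of (9.72) (our own sketch, for the unit ramp, i.e. l = 1 and NR(z) = z, with the minimal choice of a constant, monic Q̃):

    import numpy as np

    l, q = 1, 0                                   # ramp reference, Qtilde of degree 0
    Qt = np.array([1.0])                          # Qtilde(z) = 1 (monic)
    DR = np.poly([1.0, 1.0])                      # (z - 1)^(l+1) = z^2 - 2z + 1
    DW = np.r_[1.0, np.zeros(l + 1 + q)]          # z^(l+1+q)
    NW = np.polysub(DW, np.polymul(DR, Qt))       # z^(l+1+q) - (z-1)^(l+1) Qtilde(z)
    print("NW:", NW)                              # -> [0, 2, -1], i.e. NW(z) = 2z - 1
    print("NW(1) =", np.polyval(NW, 1.0))         # -> 1.0, so W(1) = 1 as required

The resulting W(z) = (2z − 1)/z^2 has relative degree 1, and the tracking error for the unit ramp, E(z) = ÑR(z^{−1})Q(z^{−1}) = z^{−1}, vanishes from the second step onward.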

9.10 Dead-beat control for P deriving from sampling/holding
In Section 9.8 we dealt with the problem of designing a DB compensator for a given plant P(z). What happens between two consecutive samples when P(z) is the sample/hold version of a continuous-time plant P(s)? We discuss this with reference to a significant example, recalling the scheme depicted in Fig. 9.16.

Figure 9.16: Feedback control for a plant obtained via sample/hold.

Example 9.2. Assume P(s) = 1/(s(s + 1)), and let the goal be that of designing a DB compensator for the step response of the corresponding sample/hold plant. Without loss of generality, assume T = 1. From (5.8),

P̃(z) = (1 − z^{−1}) Z[ S ∘ L^{−1}[P(s)/s]; T ]
     = (1 − 2e^{−1} + z e^{−1})/((z − 1)(z − e^{−1}))
     = (0.264 + 0.368z)/((z − 1)(z − 0.368)),
so the relative degree of P̃(z) is rP = 1. Moreover, P̃(z) has no unstable zeros, while it has an unstable pole at z = 1. Internal stability is therefore lost unless W(z) = NW(z)/DW(z) is chosen with DW(z) − NW(z) having a zero at z = 1. The DB requirement leads to W(z) = z^{−1}, and the controller, from the direct synthesis formula (9.1), becomes

C(z) = z^{−1}/(1 − z^{−1}) · (z − 1)(z − 0.368)/(0.264 + 0.368z)
     = (z − 0.368)/(0.264 + 0.368z).
The behavior of the variable y(t) (the output of P(s)) is shown in Fig. 9.17 (upper graph), together with the response y(k). Notice that y(t) exhibits oscillations (ripples), even though the correct value (1) is attained at the sampling instants. We can also note that:

• in general, stability between sampling instants cannot be guaranteed;

• decreasing the sampling period requires larger inputs;

• if the physical limitations of the actuator (i.e., the saturation levels) are exceeded, the ideal behavior is no longer obtained and, in particular, the dead-beat property is definitely lost.
Conditions ensuring DB control without ripples exist (they are available in various textbooks and in the attached notes). They are strongly related both to state-space representations and to the conversion of elementary modes from continuous time to discrete time.


Figure 9.17: Qualitative behavior of y(t) (grey dashed) and of y(k) (blue) for the step response (upper figure), and behavior of the actuating input u(t), which drives the plant P(s) (lower figure).
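A minimal simulation sketch of Example 9.2 at the sampling instants (our own verification code, not part of the notes), obtained by iterating the difference equations of C(z) and P̃(z) with T = 1 and a unit-step reference: it reproduces the dead-beat behavior y(k) = 1 for k ≥ 1 and shows the alternating control samples u(k) responsible for the intersample ripple of Fig. 9.17.

    import numpy as np

    # Discrete loop of Example 9.2 at the sampling instants (T = 1):
    #   plant      Ptilde(z) = (0.368 z + 0.264)/(z^2 - 1.368 z + 0.368)
    #   controller C(z)      = (z - 0.368)/(0.368 z + 0.264)
    # Arrays are padded with two leading zeros so that i-1, i-2 address past samples.
    N = 8
    r = np.ones(N)                                   # unit-step reference
    y = np.zeros(N + 2); u = np.zeros(N + 2); e = np.zeros(N + 2)
    for k in range(N):
        i = k + 2
        # plant difference equation (depends only on past inputs and outputs)
        y[i] = 1.368 * y[i - 1] - 0.368 * y[i - 2] + 0.368 * u[i - 1] + 0.264 * u[i - 2]
        e[i] = r[k] - y[i]
        # controller: 0.368 u(k) + 0.264 u(k-1) = e(k) - 0.368 e(k-1)
        u[i] = (e[i] - 0.368 * e[i - 1] - 0.264 * u[i - 1]) / 0.368
    print("y(k):", np.round(y[2:], 3))   # 0, 1, 1, 1, ... : dead-beat at the samples
    print("u(k):", np.round(u[2:], 3))   # samples of alternating sign, decaying

The alternation of u(k) corresponds to the pole of C(z) at z ≈ −0.72 (the zero of P̃(z)); held constant between samples by the zero-order hold, it is what produces the intersample ripple of y(t) sketched in Fig. 9.17.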
Appendix A

Table of Most Common Z Transforms

f(k), k ∈ Z+                          | Z[f(k)] = F(z)                              | Rc
δ(k)                                  | 1                                           | ∀z ∈ C
δ(k − n), n ∈ Z+                      | z^{−n}                                      | ∀z ∈ C, z ≠ 0
δ−1(k)                                | z/(z − 1)                                   | |z| > 1
k δ−1(k)                              | z/(z − 1)^2                                 | |z| > 1
k^2 δ−1(k)                            | z(z + 1)/(z − 1)^3                          | |z| > 1
binom(k, l), l ≥ 0                    | z/(z − 1)^{l+1}                             | |z| > 1
p^k δ−1(k), p ∈ C                     | z/(z − p)                                   | |z| > |p|
binom(k, l) p^{k−l}, l ≥ 0, p ∈ C     | z/(z − p)^{l+1}                             | |z| > |p|
cos(ϑk) δ−1(k)                        | z(z − cos ϑ)/(z^2 − 2 cos ϑ · z + 1)        | |z| > 1
sin(ϑk) δ−1(k)                        | z sin ϑ/(z^2 − 2 cos ϑ · z + 1)             | |z| > 1
p^k cos(ϑk) δ−1(k), p ∈ C             | z(z − p cos ϑ)/(z^2 − 2p cos ϑ · z + p^2)   | |z| > |p|
p^k sin(ϑk) δ−1(k), p ∈ C             | z p sin ϑ/(z^2 − 2p cos ϑ · z + p^2)        | |z| > |p|
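The geometric-series entries of the table are easy to check numerically; a tiny sketch (ours) compares a truncated sum of p^k z^{−k} with the closed form z/(z − p) at a point inside the region of convergence.

    p, z = 0.6, 1.2 + 0.5j                               # any z with |z| > |p|
    partial = sum(p**k * z**(-k) for k in range(200))    # truncated sum over k >= 0
    print(partial, z / (z - p))                          # the two values agree to machine precision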
Appendix B

Table of Most Common Laplace Transforms

f(t), t ∈ R+                       | L[f(t)] = F(s)               | Rc
δ(t)                               | 1                            | ∀s ∈ C
δ(t − τ), τ ∈ R+                   | e^{−τ s}                     | ∀s ∈ C
δ−1(t)                             | 1/s                          | Re(s) > 0
t δ−1(t)                           | 1/s^2                        | Re(s) > 0
(t^n/n!) δ−1(t), n ∈ N             | 1/s^{n+1}                    | Re(s) > 0
e^{αt} δ−1(t), α ∈ R               | 1/(s − α)                    | Re(s) > α
(t^n/n!) e^{αt} δ−1(t), α ∈ R      | 1/(s − α)^{n+1}              | Re(s) > α
cos(ϑt) δ−1(t)                     | s/(s^2 + ϑ^2)                | Re(s) > 0
sin(ϑt) δ−1(t)                     | ϑ/(s^2 + ϑ^2)                | Re(s) > 0
e^{αt} cos(ϑt) δ−1(t), α ∈ R       | (s − α)/((s − α)^2 + ϑ^2)    | Re(s) > α
e^{αt} sin(ϑt) δ−1(t), α ∈ R       | ϑ/((s − α)^2 + ϑ^2)          | Re(s) > α
Appendix C

Notions of Control in Continuous-Time

C.1 Routh Test


Let

A(s) = a_n s^n + a_{n−1} s^{n−1} + · · · + a_1 s + a_0

be a given polynomial. It is said to be a Hurwitz polynomial if all its zeros are in the open left half-plane. The Routh Test is based on a table (Routh table) with n + 1 rows: the first and second rows are defined by:

a_n      a_{n−2}  a_{n−4}  …
a_{n−1}  a_{n−3}  a_{n−5}  …

Each of the subsequent rows is obtained as a function of the elements in the two rows before it, as shown in the example below. Consider the three consecutive rows:

p_{i+2}  p_i      p_{i−2}  …
q_{i+1}  q_{i−1}  q_{i−3}  …
r_i      r_{i−2}  r_{i−4}  …

Then r_j is given by

r_j = − (1/q_{i+1}) det [ p_{i+2}  p_j ; q_{i+1}  q_{j−1} ] = p_j − (p_{i+2}/q_{i+1}) q_{j−1}.   (C.1)

This expression also holds for j = 0, 1, with the proviso that all the elements of the two preceding rows having negative index are taken to be zero.
With reference to the table just defined, we have the following result (known as the Routh Theorem):
Theorem C.1 (Routh). The following hold:

1. A(s) is Hurwitz if and only if the construction of the table can be completed
(i.e. none of the elements in the first column of the table except the last
one is zero) and all the elements in the first column of the table have the
same sign (strictly).

2. If the construction of the table can be completed, then the number n_n of zeros of A(s) (counted with multiplicities) that are in the open left half-plane is equal to the number of consecutive pairs of elements having (strictly) the same sign in the first column of the table.

3. If the construction of the table can be completed, then the number n_p of zeros of A(s) (counted with multiplicities) that are in the open right half-plane is equal to the number of sign changes in the first column of the table.

4. If the construction of the table can be completed, A(s) has at most one simple zero on the imaginary axis. This zero, if present, is 0, and it is present if and only if the last element in the first column of the table is zero.
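A direct transcription of the construction above into Python (a minimal sketch, ours; it assumes the table can be completed, i.e. no zero shows up in the first column before the last row):

    import numpy as np

    def routh_table(coeffs):
        # coeffs = [a_n, a_{n-1}, ..., a_0], in descending powers of s
        n = len(coeffs) - 1
        width = n // 2 + 1
        table = np.zeros((n + 1, width))
        table[0, :len(coeffs[0::2])] = coeffs[0::2]   # first row: a_n, a_{n-2}, ...
        table[1, :len(coeffs[1::2])] = coeffs[1::2]   # second row: a_{n-1}, a_{n-3}, ...
        for i in range(2, n + 1):
            for j in range(width - 1):
                # next-row element, cf. (C.1): p_j - (p_{i+2}/q_{i+1}) q_{j-1},
                # with missing (out-of-range) elements taken to be zero
                table[i, j] = (table[i - 2, j + 1]
                               - (table[i - 2, 0] / table[i - 1, 0]) * table[i - 1, j + 1])
        return table

    # Example: A(s) = s^3 + 2 s^2 + 3 s + 1
    print(routh_table([1.0, 2.0, 3.0, 1.0]))

For this example the first column turns out to be 1, 2, 2.5, 1: no sign changes, so A(s) is Hurwitz by item 1 of the theorem.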

C.2 Root Locus


The root locus is a useful tool to analyze the stability of a closed-loop system and how its poles vary in the complex plane as the controller gain changes. Mathematically, this reduces to the problem of determining how the zeros of the polynomial

p_K(s) := D(s) + K N(s),   (C.2)

vary as a function of the parameter K ∈ R. Here D(s) = ∏_{i=1}^{n} (s − p_i) and N(s) = ∏_{i=1}^{m} (s − z_i) are given monic polynomials with n := deg[D(s)] ≥ m := deg[N(s)].
With reference to p_K(s) in (C.2), we define the positive root locus to be the set

L+ := {s ∈ C : ∃K ≥ 0 s.t. p_K(s) = 0}.   (C.3)

Similarly, we define the negative root locus to be the set

L− := {s ∈ C : ∃K < 0 s.t. p_K(s) = 0}.   (C.4)

The complete root locus is finally defined as

L := L+ ∪ L− = {s ∈ C : ∃K ∈ R s.t. p_K(s) = 0}.   (C.5)

The following result provides some simple rules allowing one to draw a qualitative sketch of L+ and L−.

Theorem C.2. With reference to L+ , the following properties hold:

1. L+ is symmetric with respect to the real axis.



2. L+ is formed by n curves (branches) originating, for K = 0, from the zeros


pi of the polynomial D(s). These branches are continuous curves in the
complex plane.

3. As K → ∞, m of the branches tend to the zeros zi of the polynomial N (s)


and the remaining n − m diverge to infinity.

4. The n − m diverging branches tend to infinity along n − m asymptotes. All such asymptotes originate from the same point σ_c defined by

σ_c = ( Σ_{i=1}^{n} p_i − Σ_{i=1}^{m} z_i ) / (n − m).

Moreover, the angles that the asymptotes form with the real axis are

ϕ_k = (2k + 1)π/(n − m),   k = 0, 1, . . . , n − m − 1.

5. Let z_j be a zero of N(s) and µ be its multiplicity. Then, as K → ∞, µ branches tend to z_j, with arrival angles at z_j given by

β_j = (1/µ)[ Σ_{i=1}^{n} arg(z_j − p_i) − Σ_{i=1, z_i ≠ z_j}^{m} arg(z_j − z_i) − (2k + 1)π ],   k = 0, 1, . . . , µ − 1.

Let p_j be a zero of D(s) and µ be its multiplicity. Then µ branches originate from p_j, with departure angles at p_j given by

α_j = (1/µ)[ − Σ_{i=1, p_i ≠ p_j}^{n} arg(p_j − p_i) + Σ_{i=1}^{m} arg(p_j − z_i) + (2k + 1)π ],   k = 0, 1, . . . , µ − 1.

6. The intersection between L+ and the real axis is the set of all real points
having to their right an overall odd number of zeros of D(s) and of N (s)
(counted with multiplicity).

7. s* is a multiple point of L+ with multiplicity µ ≥ 2 if and only if there exists K ≥ 0 such that p_K(s) and its first µ − 1 derivatives (with respect to s) vanish for s = s*.
With reference to L− , the following properties hold:
1. L− is symmetric with respect to the real axis.

2. L− is formed by n curves (branches) originating, for K = 0, from the zeros


pi of the polynomial D(s).

3. As K → −∞, m of the branches tend to the zeros zi of the polynomial


N (s) and the remaining n − m diverge to infinity.

4. The n − m diverging branches tend to infinity along n − m asymptotes. All such asymptotes originate from the same point σ_c defined by

σ_c = ( Σ_{i=1}^{n} p_i − Σ_{i=1}^{m} z_i ) / (n − m).

Moreover, the angles that the asymptotes form with the real axis are

ϕ_k = 2kπ/(n − m),   k = 0, 1, . . . , n − m − 1.

5. Let z_j be a zero of N(s) and µ be its multiplicity. Then, as K → −∞, µ branches tend to z_j, with arrival angles at z_j given by

β_j = (1/µ)[ Σ_{i=1}^{n} arg(z_j − p_i) − Σ_{i=1, z_i ≠ z_j}^{m} arg(z_j − z_i) − 2kπ ],   k = 0, 1, . . . , µ − 1.

Let p_j be a zero of D(s) and µ be its multiplicity. Then µ branches originate from p_j, with departure angles at p_j given by

α_j = (1/µ)[ − Σ_{i=1, p_i ≠ p_j}^{n} arg(p_j − p_i) + Σ_{i=1}^{m} arg(p_j − z_i) + 2kπ ],   k = 0, 1, . . . , µ − 1.

6. The intersection between L− and the real axis is the set of all real points
having to their right an overall even number of zeros of D(s) and of N (s)
(counted with multiplicity).

7. s* is a multiple point of L− with multiplicity µ ≥ 2 if and only if there exists K ≤ 0 such that p_K(s) and its first µ − 1 derivatives (with respect to s) vanish for s = s*.

8. l := deg[D(s) − N(s)] of the branches are continuous curves in the complex plane, while the other n − l diverge to infinity as K → −1.
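The branches of L+ can be visualized numerically by sweeping K ≥ 0 and collecting the roots of D(s) + K N(s); a minimal sketch (ours, with an arbitrary example pair D, N and using matplotlib) follows.

    import numpy as np
    import matplotlib.pyplot as plt

    D = np.poly([0.0, -1.0, -4.0])          # D(s): open-loop poles at 0, -1, -4
    N = np.poly([-2.0])                     # N(s): one zero at -2 (so n - m = 2)

    roots = []
    for K in np.logspace(-2, 3, 600):       # sweep K >= 0 (positive root locus)
        roots.append(np.roots(np.polyadd(D, K * N)))   # zeros of p_K(s) = D + K N
    roots = np.array(roots)

    plt.plot(roots.real, roots.imag, "b.", markersize=2)
    plt.plot(np.roots(D).real, np.roots(D).imag, "kx", label="poles (K = 0)")
    plt.plot(np.roots(N).real, np.roots(N).imag, "ko", label="zeros (K -> inf)")
    plt.axhline(0, color="gray", lw=0.5); plt.axvline(0, color="gray", lw=0.5)
    plt.xlabel("Re"); plt.ylabel("Im"); plt.legend(); plt.title("Positive root locus")
    plt.show()

For this example, two branches diverge along vertical asymptotes centered at σ_c = (0 − 1 − 4 − (−2))/2 = −1.5, in accordance with property 4.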

C.3 Nyquist Plot


Consider a transfer function W(s). Its Nyquist plot is a parametric curve in the complex plane indexed by ω ∈ R. The real and imaginary coordinates of this curve are Re(W(jω)) and Im(W(jω)), respectively. Since W(jω) enjoys conjugate symmetry, i.e. W(−jω) coincides with the complex conjugate of W(jω) (for W(s) rational with real coefficients), once the Nyquist plot has been drawn for ω ≥ 0 it is easy to complete the curve by symmetry with respect to the real axis. Notice that |W(jω)| and arg[W(jω)] are the polar coordinates of the points of the Nyquist plot. Hence the Nyquist plot may be easily obtained from the Bode plot by taking into account the following rules:

1. The points for which the Nyquist plot crosses the unit circle correspond to
the values of the frequency ω for which the Bode magnitude plot crosses
the x-axis (0 dB).

2. The points for which the Nyquist plot crosses the positive real axis cor-
respond to the values of the frequency ω for which the Bode phase plot
crosses the horizontal lines with ordinate 2πk, k ∈ Z.

3. The points for which the Nyquist plot crosses the negative real axis cor-
respond to the values of the frequency ω for which the Bode phase plot
crosses the horizontal lines with ordinate π + 2πk, k ∈ Z.

4. The points for which the Nyquist plot crosses the positive imaginary axis correspond to the values of the frequency ω for which the Bode phase plot crosses the horizontal lines with ordinate π/2 + 2πk, k ∈ Z.

5. The points for which the Nyquist plot crosses the negative imaginary axis correspond to the values of the frequency ω for which the Bode phase plot crosses the horizontal lines with ordinate −π/2 + 2πk, k ∈ Z.

6. If W (s) has poles on the imaginary axis then |W (jω)| diverges for some
values of the frequency ω so that the Nyquist plot is an open curve.

7. If W (s) is a rational function without poles at the origin, then the Nyquist
plot starts, for ω = 0, from the real point of abscissa KB , with KB being
the Bode gain of W (s).

8. If W (s) is a strictly proper rational function, then the Nyquist plot tends
to the origin for ω → ∞.

9. If W(s) is a proper rational function and all its poles and zeros have strictly negative real part (minimum phase system), then the phase arg[W(jω)] tends, for ω → ∞, to −(n − m)π/2, where n and m are the numbers (counted with multiplicity) of poles and zeros of W(s), respectively.

10. If W (s) is a proper rational function but it is not strictly proper then the
Nyquist plot tends, for ω → ∞, to the real point of abscissa KE , with KE
being the Evans gain of W (s).

11. Both for ω → 0 and for ω → ∞, the tangent to the Nyquist plot of a rational function tends to form an angle with the real axis which is an integer multiple of π/2.
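In practice the Nyquist plot is drawn by evaluating W(jω) on a frequency grid and exploiting the conjugate symmetry for ω < 0; a minimal sketch (ours, with a hypothetical strictly proper W(s) having no poles on the imaginary axis) is reported below.

    import numpy as np
    import matplotlib.pyplot as plt

    num = np.array([10.0])                       # W(s) = 10/((s+1)(s+2)(s+5))
    den = np.poly([-1.0, -2.0, -5.0])

    w = np.logspace(-2, 3, 2000)                 # frequency grid (rad/s), omega > 0
    W = np.polyval(num, 1j * w) / np.polyval(den, 1j * w)

    plt.plot(W.real, W.imag, "b", label="omega > 0")
    plt.plot(W.real, -W.imag, "b--", label="omega < 0 (mirror image)")
    plt.plot(W.real[0], W.imag[0], "ko")         # lowest frequency: close to the Bode gain K_B = 1 (rule 7)
    plt.axhline(0, color="gray", lw=0.5); plt.axvline(0, color="gray", lw=0.5)
    plt.xlabel("Re W(jw)"); plt.ylabel("Im W(jw)"); plt.legend(); plt.title("Nyquist plot")
    plt.show()

Since this W(s) is strictly proper and minimum phase with n − m = 3, the curve approaches the origin with phase tending to −3π/2, consistently with rules 8 and 9 above.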
