Chapter 1

Process Control
1.1 Introduction
Process control is very important in the chemical process industry. Control is necessary to operate the processes such that energy and raw materials are utilized in the most economical and efficient ways. At the same time the product must fulfill its specifications and the processes must operate in a safe way. In this chapter we give an overview of what is meant by control and give examples of how process control is used in chemical plants. Section 1.2 gives a short overview of why control is necessary and how it is used. The idea of feedback is introduced in Section 1.3. To be able to design control systems it is necessary to understand how the processes work. Models of different kinds and complexities are thus of great importance. This is exemplified in Section 1.4 using some typical unit operations. Typical process control concepts are illustrated using a simple example in Section 1.5. An overview of the course is given in Section 1.6.
the total system is as good as possible. The performance can be measured in many different ways. For instance, we may want to minimize the variation in the output or minimize the energy consumption. The main idea of process control is feedback, which implies that the measured signals are used to calculate the control signals, i.e., that information is fed back into the process. This turns out to be a very important and useful concept that can be applied frequently. In our daily life we are surrounded by many different control systems. Examples are:

- Thermostats in showers
- Heating and cooling systems in buildings
- Temperature and concentration controls within the human body
- Frequency and voltage control in power networks
- Speed and path following when riding bikes or driving cars

Some of these control systems are easy to understand and explain while others are very difficult to analyze, but they are all based on the idea of feedback. A control system in a chemical plant has many different purposes:

- Product specifications. The outgoing product must fulfill the specifications with respect to, for instance, quality or purity.
- Safety. The plant must be operated such that the safety regulations for personnel and equipment are met.
- Operational constraints. Due to environmental and process considerations it is necessary to keep certain variables within tight bounds.
- Economy. The control system must ensure that energy and raw materials are used as economically as possible.

These tasks can be seen as plant-wide goals that must be fulfilled. To do so it is necessary to divide the operation of the plant into subtasks and subgoals. These are then further subdivided until we reach the bottom line, which consists of simple control loops with few inputs and outputs and with quite well defined goals. The main building blocks in process control are simple feedback controllers.
The purpose of the course is to develop an understanding of how these building blocks can be used to create a control system for a large plant.
Process design
The purpose of the process design is to determine suitable operational conditions for a plant or unit process. Examples are feed concentrations, temperatures, and throughput. These conditions may then determine the sizes of tanks and pipes in the plant. The process design must also be done such that the
process has sufficiently many degrees of freedom for control. This means that the control system has sufficiently many signals that can be manipulated such that it is possible to maintain the plant at the desired operating points. The possibility of achieving good control is very much influenced by the process design. A good design will help the control engineer to make a good control system. A bad design can on the other hand hamper the construction of a good control system for the plant. It is therefore important that control aspects are taken into consideration already at the design stage of the process.
In the process design we try to find the optimum operating conditions for the process. The optimization has to be done with respect to efficiency and economy, and within the constraints for the process. It is usually done on a plant-wide basis. The control system is then designed to keep the process at the desired operating point.
The main purpose of a typical control system is to ensure that the output signal is as close to the reference signal as possible. This should be done despite the influence of disturbances and uncertainties in the knowledge about the process. The main advantages of feedback are:

- It makes it possible to follow variations in the reference value.
- It reduces the influence of disturbances.
- It reduces the influence of uncertainties in the process model.
- It makes it possible to change the response time of the closed loop system compared with the open loop system.
switched on or off. To obtain better control the heater can be controlled by a thyristor. It is then possible to make a continuous change in the power. The thermostat is an example of a feedback system, where the control signal is determined through feedback from the measured output. The feedback is negative since an increase in the water temperature results in a decrease in the power.

To describe processes from a control point of view we often use block diagrams. Figure 1.3 shows a block diagram for the thermostat system. The block diagram is built up of rectangles and circles, which are connected by lines. The lines represent signals. The rectangles or blocks show how the signals influence each other. The arrows show the cause-effect direction. The circle with the summation symbol shows how the error signal is obtained by taking the difference between the desired temperature and the measured temperature. Notice that one symbol in the block diagram can represent several parts of the process. This gives a good way to compress information about a process. The block diagram gives an abstract representation of the process and shows the signal flow or information flow in the process.

The example above shows how a simple control system is built up using feedback. The next example gives a more quantitative illustration of the benefits of feedback.

Example 1.2 (Simple feedback system)
Consider the process in Figure 1.4. Assume that the process is described by a pure gain Kp, i.e.

y = x + d = Kp u + d   (1.1)

where d is an output disturbance. The gain Kp determines how the control signal u is amplified by the process. Let the desired set point be yr. One way to try to reach this value is to use the control law

u = Kc yr   (1.2)

with Kc = 1/Kp. The output then becomes

y = yr + d
The control law (1.2) is an open loop controller, since it does not use the measurement of y. We see, however, that the controller does not reduce the effect of the disturbance d. Further, if Kc is fixed, a change in the process gain from Kp to Kp + ΔKp will change the output to
y = (1 + ΔKp/Kp) yr + d
The error between the desired value and the output is then
e = yr − y = −(ΔKp/Kp) yr − d
This implies that a 10% change in Kp results in a 10% change in the error, i.e., there is no reduction in the sensitivity to errors in the knowledge about the process. Let us now measure the temperature, calculate the deviation from the desired value and use the controller
u = Kc(yr − y) = Kc e   (1.3)
This is a proportional controller, since the control signal is proportional to the error e between the reference value and the measured output. The controller (1.3) is also a feedback controller with negative feedback from the measured output. The proportional factor Kc is the gain of the controller. The total system is now described by
y = Kp u + d = Kp Kc(yr − y) + d
Solving for y gives
y = (Kc Kp / (1 + Kc Kp)) yr + (1 / (1 + Kc Kp)) d   (1.4)

e = yr − y = (1 / (1 + Kc Kp)) yr − (1 / (1 + Kc Kp)) d   (1.5)
From (1.4) it follows that the output will be close to the reference value if the total gain Ko = Kc Kp is high. This gain is called the loop gain. Further, the error is inversely proportional to the return difference Kd = 1 + Kc Kp. The loop gain can be interpreted in the following way. Assume that the process loop is cut at one point (the loop is opened) and that a signal s is injected at the cut. The signal that comes back at the other end of the cut is s1 = −Kc Kp s, and the difference between injected and returned signal is
s − s1 = (1 + Kc Kp) s = Kd s
This explains why Kd is called the return difference.
The influence of the disturbance is inversely proportional to the return difference. It will thus be possible to obtain good performance by having a high gain in the controller. The negative feedback makes the closed loop system insensitive to changes in the process gain Kp. A change in Kp to Kp + ΔKp gives (for d = 0)
y = (1 − 1/(1 + Kc(Kp + ΔKp))) yr

e = (1/(1 + Kc(Kp + ΔKp))) yr

The influence of a change in the process gain is now highly reduced when the loop gain is high. To summarize, it is found that the negative feedback controller reduces the influence of the disturbance and the sensitivity to uncertainties in the process parameters.

The example above shows that the stationary error decreases when the controller gain increases. In practice, there is always a limit on how much the gain can be increased. One reason is that the magnitude of the control signal increases with the gain, see (1.3). Another reason is that the closed loop system may become unstable due to dynamics in the process. These two aspects will be dealt with in detail further on. Figure 1.4 is the prototype block diagram for a control system. It can represent many different systems. In the course we will concentrate on how to model the process and how to determine which type of controller should be used.
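The static calculations in Example 1.2 can be checked numerically. The following sketch compares the open loop controller (1.2) with the proportional feedback controller (1.3); the values of Kp, Kc, d and yr are made up for illustration, they are not from the text.

```python
# Numerical check of Example 1.2 (illustrative values only).
Kp = 2.0   # process gain
d = 0.5    # output disturbance
yr = 1.0   # set point

# Open loop control (1.2): u = Kc*yr with Kc = 1/Kp
y_open = Kp * (1.0 / Kp) * yr + d          # = yr + d, the disturbance passes through

# Feedback control (1.3): u = Kc*(yr - y); solving y = Kp*u + d statically
# gives y = (Kc*Kp*yr + d) / (1 + Kc*Kp), cf. equation (1.4)
Kc = 50.0
y_closed = (Kc * Kp * yr + d) / (1 + Kc * Kp)

print(y_open)    # 1.5, the full effect of d
print(y_closed)  # close to 1.0, d is divided by the return difference 1 + Kc*Kp

# Sensitivity to a 10% error in the process gain (d = 0)
Kp_true = 1.1 * Kp
e_open = yr - Kp_true * (1.0 / Kp) * yr    # the full 10% error remains
e_closed = yr / (1 + Kc * Kp_true)         # error reduced by the return difference
print(e_open, e_closed)
```

The high controller gain Kc = 50 makes the return difference 1 + Kc Kp = 101, so both the disturbance and the gain error are attenuated by roughly two orders of magnitude.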
Figure 1.5 Tank with area A, level h, effective outlet area a and outflow qout.
Consider the tank system in Figure 1.5. It is assumed that the outflow qout can be manipulated and that the inflow qin is determined by a previous part of the process. The inflow can be considered as a disturbance and the outflow as the control or input variable. The level in the tank is the output of the process. The change in the level of the tank is determined by the net inflow to the tank. Assume that the tank has a constant area A. The rate of change of the volume V is determined by the mass balance equation

dV/dt = A dh/dt = qin − qout   (1.6)

where h is the level in the tank. The system is thus described by a first order differential equation. Equation (1.6) gives a dynamic relation between the input signal qout and the output signal h. It is the rate of change of the output that is influenced by the input, not the output directly.

Example 1.5 (Chemical reactor)
Figure 1.6 is a schematic drawing of a chemical reactor. The flow and concentrations into the reactor can change, as well as the flow out of the reactor. The reaction is assumed to be exothermic and the heat is removed using the cooling jacket. The reaction rate depends on the concentrations and the temperature in the reactor. To make a crude model of the reactor it is necessary to model the mass and energy balances. The detailed model will be discussed in Chapter 2.

The example shows that models quickly increase in complexity and that the relations between the inputs and outputs can be quite complex. The examples above hint that the modeling phase can be difficult and contain many difficult judgments. The resulting model will be a dynamical system described by linear or nonlinear ordinary differential or partial differential equations. Modeling and analysis of such processes are two important problems on the way towards a good control system.
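The mass balance (1.6) is easy to simulate. A minimal sketch using Euler integration, with illustrative parameter values of our own choosing:

```python
# Euler simulation of the tank mass balance (1.6): A*dh/dt = q_in - q_out.
# All numbers are illustrative, not taken from the text.
A = 1.0        # tank area [m^2]
h = 0.5        # initial level [m]
q_in = 0.02    # inflow, treated as a disturbance [m^3/s]
q_out = 0.01   # outflow, the manipulated variable [m^3/s]
dt = 1.0       # integration step [s]

for _ in range(100):
    h += dt * (q_in - q_out) / A   # net inflow raises the level

print(h)   # 0.5 + 100*0.01 = 1.5 m
```

With a constant imbalance q_in > q_out the level just keeps rising, which illustrates why the outflow must be manipulated by a controller.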
1.5 An Example
Before starting to develop systematic methods for analysis and design of control systems we will give a flavor of the problems by presenting a simple example that illustrates many aspects of feedback control.
Figure 1.7 Schematic picture of the tank process with control flow qA, area A, level h and effective outlet area a.
Problem Description

Consider the problem of controlling the water level in a tank. A schematic picture of the process is shown in Figure 1.7. The tank has two inlets and one outlet. The flow qA, which can be adjusted, is the control variable. The flow qB, which comes from a process upstream, is considered as a disturbance. The objective is to design a control system that is able to keep the water at a desired level in spite of variations in the flow qB. It is also required that the water level can be changed to a desired level on demand from the operator.

Modeling
The first step in the design procedure is to develop a mathematical model that describes how the tank level changes with the flows qA and qB. Such a model can be obtained from a mass balance as was done in Example 1.4. A mass balance for the process gives
A dh/dt = −qout + qA + qB = −a√(2gh) + qA + qB   (1.7)
Figure 1.8 Relation between outlet flow and level, and its linear approximation.
where A is the cross section of the tank, a the effective outlet area and h the tank level. When writing equation (1.7) a momentum balance (Bernoulli's equation) has been used to determine the outlet velocity as √(2gh). This means that the outflow is given by

qout = a√(2gh)   (1.8)

As a first step in the analysis of the model we will assume that the inflows are constant, i.e. qA = qA0 and qB = qB0. The level in the tank will then also settle at a constant level h0. This level can be determined from (1.7). Introducing h = h0, qA = qA0 and qB = qB0 into the equation we find that

−a√(2gh0) + qA0 + qB0 = 0

Notice that the derivative dh0/dt is zero because h0 is constant. The system is then said to be in steady state, which means that all variables are constant with values that satisfy the model. Solving the above equation gives

h0 = (qA0 + qB0)² / (2ga²)   (1.9)

This equation gives the relation between the tank level and the inflow and outflow in steady state. The model (1.7) is difficult to deal with because it is nonlinear. The source of the nonlinearity is the nonlinear relation between the outflow qout and the level h. This function is shown in Figure 1.8. For small deviations around a constant level h0 the square root function can be approximated by its tangent, i.e.,

qout = a√(2gh) ≈ a√(2gh0) + a√(g/(2h0)) (h − h0) = qout0 + a√(g/(2h0)) (h − h0)

Introduce the deviations from the equilibrium values

y = h − h0
u = qA − qA0
v = qB − qB0
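The steady state relation (1.9) and the tangent approximation of the outflow can be illustrated numerically. In the sketch below all parameter values (g, a, qA0, qB0) are chosen for illustration only:

```python
import math

# Steady-state level (1.9) and the tangent (linear) approximation of the
# outflow q_out = a*sqrt(2*g*h). Parameter values are illustrative.
g = 9.81
a = 0.01                  # effective outlet area [m^2]
qA0, qB0 = 0.02, 0.01     # steady-state inflows [m^3/s]

h0 = (qA0 + qB0) ** 2 / (2 * g * a ** 2)   # equation (1.9)

def q_out(h):
    return a * math.sqrt(2 * g * h)

def q_out_lin(h):
    # tangent at h0, as in the linearization above
    return q_out(h0) + a * math.sqrt(g / (2 * h0)) * (h - h0)

print(h0)                     # the steady-state level
print(q_out(h0), qA0 + qB0)   # outflow balances the inflows at h0, by construction
for dh in (0.01 * h0, 0.1 * h0):
    h = h0 + dh
    print(q_out(h), q_out_lin(h))   # close for small deviations from h0
```

For a 1 % level deviation the tangent reproduces the nonlinear outflow almost exactly; for larger deviations the approximation error grows, as Figure 1.8 suggests.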
Equation (1.7) can then be approximated by

A d(h − h0)/dt ≈ −a√(2gh0) − a√(g/(2h0)) (h − h0) + qA + qB
             = −a√(g/(2h0)) (h − h0) + (qA − qA0) + (qB − qB0)

This equation can be written as

dy/dt = −αy + β(u + v)   (1.10)

where

α = (a/A)√(g/(2h0)) = a√(2gh0)/(2Ah0) = (qA0 + qB0)/(2Ah0)
β = 1/A

For physical reasons we have α > 0. Notice that 2α is the ratio between the total flow through the tank and the tank volume. Since (1.10) is of first order it can be solved analytically. The solution is

y(t) = e^{−αt} y(0) + ∫0^t e^{−α(t−τ)} β(u(τ) + v(τ)) dτ

For constant inputs u(t) = u0 and v(t) = v0 this reduces to

y(t) = e^{−αt} y(0) + β(u0 + v0) ∫0^t e^{−α(t−τ)} dτ
     = e^{−αt} y(0) + (β/α)(1 − e^{−αt})(u0 + v0)

From this solution we can draw several conclusions. First assume that u0 + v0 = 0. The output y(t) will then approach zero independently of the initial value y(0). The speed of the response is determined by α. A small value of α gives a slow response, while a large value of α gives a fast response. Also the response to changes in u and v is determined by the open-loop parameter α. We have thus verified that there will be a unique steady state solution, at least if the linear approximation is valid. From the solution we also see that the output in steady state is changed by Kp = β/α if the input is changed by one unit. Kp is called the static gain of the process.
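The analytic solution can be verified against a simple Euler integration of (1.10) for constant inputs. The numbers for α, β, u0 and v0 below are illustrative:

```python
import math

# Check of the analytic solution of (1.10), dy/dt = -alpha*y + beta*(u + v),
# against Euler integration for constant inputs. All values are illustrative.
alpha, beta = 0.2, 1.0
y0 = 1.0
u0, v0 = 0.5, -0.1
t_end, dt = 10.0, 1e-3

def y_analytic(t):
    # e^(-alpha*t)*y(0) + (beta/alpha)*(1 - e^(-alpha*t))*(u0 + v0)
    return math.exp(-alpha * t) * y0 + (beta / alpha) * (1 - math.exp(-alpha * t)) * (u0 + v0)

y = y0
for _ in range(int(t_end / dt)):
    y += dt * (-alpha * y + beta * (u0 + v0))

print(y, y_analytic(t_end))         # the two agree closely
print((beta / alpha) * (u0 + v0))   # steady-state value Kp*(u0 + v0) = 5.0*0.4 = 2.0
```

The final value confirms the interpretation of Kp = β/α as the static gain: a constant input offset u0 + v0 moves the steady state by Kp times that offset.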
Proportional Control
A typical control task is to keep the level at a constant value yr in spite of variations in the inlet flow qB. It may also be of interest to set yr to different values. The value yr is therefore called the reference value or set point. A simple controller for the process (1.10) is given by
u = Kc(yr − y)   (1.11)
This is called a proportional controller because the control action is proportional to the difference between the desired value yr (the set point) and the actual value of the level. Eliminating u between equations (1.10) and (1.11) gives

dy/dt = −αy + βKc(yr − y) + βv
      = −(α + βKc) y + βKc yr + βv   (1.12)

which is a first order differential equation of the same form as (1.10). If there are no disturbances, i.e. v = 0, the equilibrium value of the level is obtained by putting dy/dt = 0 in (1.12). This gives

y = (βKc/(α + βKc)) yr = (Kp Kc/(1 + Kp Kc)) yr   (1.13)

where Kp = β/α is the static gain of the process. The level y will thus not be equal to the desired level yr. They will, however, be close if the product Kp Kc is large. Compare with the discussion based on static models in Section 1.3.
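Equation (1.13) shows the steady state offset of pure proportional control. A small numerical sketch, with illustrative values of α and β:

```python
# Steady-state level under proportional control, equation (1.13):
# y = Kp*Kc/(1 + Kp*Kc) * yr. The values of alpha and beta are illustrative.
alpha, beta = 0.2, 1.0
Kp = beta / alpha   # static process gain
yr = 1.0

for Kc in (1.0, 10.0, 100.0):
    y_ss = Kp * Kc / (1 + Kp * Kc) * yr
    print(Kc, y_ss, yr - y_ss)   # the offset shrinks as Kc grows, but never vanishes
```

With Kp = 5 the offset drops from about 17 % at Kc = 1 to about 0.2 % at Kc = 100, yet it is never exactly zero, which motivates the integral action introduced next.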
A drawback with the proportional controller (1.11) is that the level will not be at the desired value in steady state. This can be remedied by replacing the proportional controller by the controller
u = Kc(yr ; y ) + Ki (yr ( ) ; y ( )) d
0
Zt
(1.14)
This is called a proportional and integrating (PI) controller because the control action is a sum of two terms. The first is proportional to the control error and the second to the time integral of the control error. The error must be zero if there is a steady state solution. This can be seen from (1.14): if u is constant then the error e = yr − y must be zero, because if it is not then u cannot be constant. Notice that we do not know if there exists a steady state solution. The output may, for example, oscillate or grow without settling. To analyze the closed loop system the variable u is eliminated between (1.10) and (1.14). Taking the derivatives of these equations we get
d²y/dt² + (α + βKc) dy/dt + βKi y = βKc dyr/dt + βKi yr + β dv/dt   (1.15)

This equation describes how the level y is related to the set point yr and the disturbance flow v.
We first observe that if the disturbance flow v and the set point are constant, a possible steady state value (all derivatives are zero) for the output is

y = yr

The controller will thus give a closed loop system such that the output has the correct value. Equation (1.15) is a second order differential equation with constant coefficients. Systematic methods for dealing with such equations of arbitrarily high order will be developed in Chapter 3. Before those methods have been developed we can still understand how the system behaves by making a simple analogy. The motion of a mass with damping and a restraining spring is also described by equation (1.15). The mass is 1, the damping coefficient is α + βKc and the spring constant is βKi. The behavior of the system can be understood from this mechanical analog. If the spring constant is positive it will push the mass towards the equilibrium. If the spring constant is negative the mass will be pushed away from the equilibrium. The motion will be damped if the damping coefficient is positive, but not if the damping is negative. From this intuitive reasoning we find that the system is stable if the spring constant and the damping coefficient are positive, i.e.,

α + βKc > 0
βKi > 0

The damping can thus be increased by increasing Kc and the spring coefficient can be increased by increasing Ki. With a negative sign of Ki the spring coefficient is negative and the system will not reach an equilibrium. This corresponds to positive feedback. It follows from Equation (1.15) that changes in the disturbance flow will result in changes of the level. The control laws (1.11) and (1.14) are pure feedback strategies, i.e., they will generate control actions only when an error has occurred. Improved control laws can be obtained if the disturbance v is measured. Replacing the proportional control law (1.11) with

u = Kc(yr − y) − v   (1.16)

we find that Equation (1.12) is replaced by

dy/dt + αy = βKc(yr − y)

The effects of the disturbance flow are thus completely eliminated.
The control law (1.16) is said to have feedforward action from the measured disturbance. The PI controller can be modified in a similar manner.
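The behavior of the PI controller (1.14), with and without a feedforward term as in (1.16), can be illustrated by Euler simulation of the linear tank model (1.10). All parameter values below are illustrative, and the gains are chosen to satisfy the stability conditions α + βKc > 0 and βKi > 0:

```python
# Euler simulation of the linear tank model (1.10) under the PI controller
# (1.14), with and without feedforward of the measured disturbance v.
# All numbers are illustrative.
alpha, beta = 0.2, 1.0
Kc, Ki = 2.0, 0.5
yr, v = 1.0, 0.3      # set point and constant disturbance flow
dt, n = 0.01, 20000   # 200 time units

def simulate(feedforward):
    y, integral = 0.0, 0.0
    for _ in range(n):
        e = yr - y
        u = Kc * e + Ki * integral
        if feedforward:
            u -= v                       # cancel the measured disturbance
        integral += dt * e               # integral term of (1.14)
        y += dt * (-alpha * y + beta * (u + v))
    return y

print(simulate(False))   # integral action removes the steady-state offset
print(simulate(True))    # feedforward also removes the transient effect of v
```

Both runs settle at y = yr despite the constant disturbance; the difference shows up only in the transient, which the feedforward version suppresses.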
function of the system. Behavioral representations are introduced in Section 2.3. They represent the process behavior for specific process inputs such as step disturbances. These representations may be obtained in a conceptually appealing way, just by introducing the desired form of disturbance at the process input and measuring the output. Such a representation is called an input-output representation. Once the process is operating, a behavioral representation may be obtained and used to quantitatively represent the process behavior for this type of disturbance. Mathematical models are used for quantitative description of a system. These models describe the approximate function of the system elements in mathematical form. A combination of block diagrams and mathematical models is an effective method to provide an overview and a detailed description of the function of a system. The system function can then be related to the real process using a process flow sheet and a general process layout. Mathematical modeling principles are introduced in Section 2.4. In Section 2.5 models for a number of processes are given, which lead to sets of ordinary differential equations. Process examples which lead to models consisting of sets of partial differential equations are given in Section 2.6. The state space model is introduced as a generalization of the above process models in Section 2.7. Typical process disturbances are introduced subsequently. The nonlinear terms are linearized, leading to a general linear state space model which will be used throughout the notes. A state space model gives a detailed description of the internal structure of a system. In contrast, input-output models only describe the relationship between the process inputs and outputs. The idea of also modeling disturbances is introduced in Section 2.8. Two different types of disturbances are discussed: stochastic (random) and deterministic disturbances.
In addition, typical operation forms of chemical processes are defined. In Section 2.9 a few methods are given for determination of a mathematical model from experimental data. This methodology is in contrast to the derivation of models from a physical-chemical basis used in Sections 2.5 and 2.6. Combinations of the approaches are also given. This whole subject is called process or system identification. Finally, in Section 2.10 a summary of the material presented in this chapter is given.
Figure 2.3 Process flowsheet for a simple plant with a reactor, heat exchanger, and a distillation column.
Process flow sheets

There are no standards or rules for these drawings. However, good pictures are easily identified. Within the process industry a limited number of unit operations and reactor types are used. Therefore a set of almost international standard symbols has been introduced for the different components. Similar standards have been introduced by the power generation and the sanitation industries. A process flow sheet is illustrated in Figure 2.3.
Figure 2.4 Simplified process and instrumentation (PI) diagram for the process in Figure 2.3. Signal connections are indicated with dashed lines. Controllers are indicated by circles with text: CC concentration control, FC flow rate control, LC level control and TC temperature control.
To illustrate the combined process and instrumentation system a set of standard symbols has been proposed and is applied within the process and control industry. These diagrams are often referred to as PI (Process and Instrumentation) diagrams. A simple example is shown in Figure 2.4. The PI diagram is a factual representation of the combined process and instrumentation system hardware. For understanding process control and its design the information flow is essential, whereas the specific implementation technology, e.g. electrical, mechanical or pneumatic, is only of secondary importance. Therefore a stylized pictorial representation has been developed to describe the information flow in a control system. This pictorial representation is called a block diagram and is illustrated in Figure 2.5. The fundamental principle is to enhance the signal or information flow and to suppress other details. The elements of a control system are represented by rectangular blocks. The blocks are connected by lines with arrows. The direction of the arrows illustrates cause and effect in the interaction among the various elements. Designations of the variables represented by the lines are shown next to the lines. Each block thus has inputs and outputs. The inputs are control signals and disturbances and the outputs are controlled signals and measurements, as indicated in Figure 2.5b. The input-output relationship may be shown within a block. A circle is used to indicate a summation, where the signs may be shown next to the arrows. Simple rules for manipulating block diagrams are given in Chapter 3. The block diagram is an abstract representation of a system, the usage of which requires practice in relating the abstract picture to the technical reality. The abstract representation has the big advantage that many different technical systems have the same block diagram.
Thus their fundamental control problems are the same as long as the processes fulfill the validity assumptions for the block diagram.
Figure 2.5 Block diagram for a simple process in a) open loop, and b) closed loop.
Operator training. In addition, mathematical models can be very useful for simulation of hazardous conditions, and hence for establishing safety procedures. Often the models used for such purposes must be more detailed than needed for control system tuning simulations. The latter example of simulating hazardous conditions illustrates that the needed complexity of a model depends upon the goal of the model usage. When using and developing models it is very important to keep this goal in mind in order to develop a model of suitable complexity. When developing models for control system simulation and control tuning it is often essential to be able to investigate the speed of the control system response and the stability of the controlled system. Dynamic models are thus needed at least for the dynamically dominating parts of the process. Mainly two different views on mathematical models are used in process control. The simplest considers an external representation of a process, which is conceptually closely connected to the block diagram, where the process is considered as a black box. The mathematical model then specifies an output for each value of the input. Such an external model is called an input-output model. If available, such a model can provide an efficient representation of the process behavior around the operating point where it is obtained. Input-output models were introduced in the previous section and will be used throughout the course. In the alternative view a process is modeled using fundamental conservation principles together with physical, chemical and engineering knowledge of the detailed mechanisms. With these models one attempts to find key variables, state variables, which describe the characteristic system behavior. Thus such a model may be used to predict the future development of the process. This type of model is called an internal model or a state space model.
Development of state space models is the main subject of the remainder of this chapter. The contents of this section are organized as follows. The basis for development of conservation law models is first outlined, then the degrees of freedom of a model and thereafter dimensionless variables are introduced. A number of models are developed for understanding process dynamics for process control. The examples are organized such that first two relatively simple processes are modeled, in a few different settings. This approach is chosen in order to illustrate some of the effects of process design upon process dynamics. Some of these effects can have consequences for the resulting control design. This aspect is illustrated by investigating the degrees of freedom available for control in some of the first examples. Thereafter the concepts of equilibrium and rate based models are introduced. These concepts are each illustrated by one or two examples. All the models developed in this section contain ordinary differential equations, possibly in combination with algebraic equations. The complexity of the models increases through the section, from just a single linear differential equation to sets of a possibly large number of nonlinear equations for a distillation column.
Linear systems
Linear systems play an important role in analysis and design of control systems. We will thus first introduce the useful concepts of linearity, time invariance and stability. More formal definitions are given in Chapter 3. A process is linear if linear combinations of inputs give the corresponding linear combinations of outputs.
A process is time invariant if it behaves identically for the same inputs or disturbances at all times. Stability implies that small disturbances and changes in the process or inputs lead to small changes in the process variables. Linearity implies, for instance, that doubling the amplitude of the input gives twice as large an output. In general, stability is a property of a particular solution, and not a system property. For linear, time invariant processes, however, it can be shown that if one solution is stable then any other solution will also be stable; thus in these special cases a process may be called stable. Stable processes were often called self-regulating in the early control literature, since they dampen out the effect of disturbances.
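The superposition property of a linear, time invariant process can be checked numerically on a first-order system. The model and numbers below are illustrative:

```python
# Superposition check on a linear, time invariant first-order process
# dy/dt = -a*y + u (Euler simulated; model and numbers are illustrative).
a, dt, n = 0.5, 0.01, 1000

def response(u):
    y, out = 0.0, []
    for _ in range(n):
        y += dt * (-a * y + u)
        out.append(y)
    return out

y1 = response(1.0)
y2 = response(2.0)   # input with doubled amplitude

# Doubling the input doubles the output at every time instant
max_dev = max(abs(s2 - 2 * s1) for s1, s2 in zip(y1, y2))
print(max_dev)           # essentially zero
print(y2[-1] / y1[-1])   # very close to 2.0
```

For a nonlinear process, such as the tank with the square-root outflow, the same check would fail, which is precisely why the linearization around an operating point is needed.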
To describe the static and dynamic behavior of processes, mathematical models are derived using the extensive quantities which according to the laws of classical physics (i.e. excluding particle physics phenomena) are conserved. The extensive quantities most important in process control are:

- Total mass
- Component mass
- Energy
- Momentum

The variables representing the extensive quantities are called state variables since they represent the system state. The changes in the system or process state are determined by the conservation equations. The conservation principle for extensive quantity K may be written for a selected volume element V of a process phase
(Accumulation of K in V per unit time) = (Flow of K into V per unit time) − (Flow of K out of V per unit time) + (Generation of K in V per unit time)   (2.1)
The balance must be formulated for each of the relevant phases of the process, and expressions for the quantity transfer rates between the phases must be formulated and included in the flow terms. Note that by convention an inflow of quantity into a phase is positive on the right hand side of (2.1). The type of differential equation resulting from application of the conservation principle depends upon the a priori assumptions. If the particular phase can be assumed well mixed, such that there are no spatial gradients, and the quantity is scalar (e.g. temperature or concentration), then the conservation balance may be formulated for the whole volume occupied by the phase. In that case an ordinary differential equation results. Other types of models, i.e. where spatial gradients are essential, yield partial differential equations. These are illustrated in Section 2.6. Some of the most often applied conservation balances for the case of ideal mixing are listed below. Note that the momentum balance usually plays no role in this case.
Total mass balance:

d(ρV)/dt = ρin vin − ρ v   (2.2)

Component mass balance:

d(cV)/dt = cin vin − c v + r V   (2.3)

Total energy balance:

dE/dt = ρin vin Hin − ρ v H + Q + W   (2.4)

where
c    molar concentration (moles/unit volume)
E    total energy, E = U + K + P
v    volume flow rate
H    enthalpy per unit mass
K    kinetic energy
P    potential energy
Q    net heat received from adjacent phases
r    net reaction rate (production)
ρ    mass density
U    internal energy
V    volume
W    net work done on the phase

The energy balance can often be simplified. When the process is not moving, dE/dt = dU/dt, and for a liquid phase furthermore dU/dt ≈ dH/dt. The specific enthalpy is often simplified using a mean mass heat capacity, e.g. for a liquid phase Hj = Cp,j (Tj − T0), where T0 is the reference temperature for the mean heat capacity. For accurate calculations an appropriate polynomial should be used to evaluate the specific heat capacity Cp,j; see e.g. Smith and Van Ness (1987).
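As a numerical illustration, the total and component mass balances (2.2)-(2.3) can be integrated directly. The sketch below (plain Python, forward Euler) uses purely hypothetical parameter values, constant density, and no reaction (r = 0):

```python
# Well-mixed tank: total mass (2.2) and component mass (2.3),
# constant density and r = 0. All parameter values are hypothetical.
dt = 0.01                  # time step (h)
V, c = 1.0, 0.0            # initial volume (m^3) and concentration (mol/m^3)
v_in, v_out = 0.5, 0.5     # volume flow rates in and out (m^3/h)
c_in = 2.0                 # inlet concentration (mol/m^3)

for _ in range(int(20 / dt)):            # simulate 20 h (10 residence times)
    dV = v_in - v_out                    # (2.2) with constant density
    dn = c_in * v_in - c * v_out         # (2.3) with r = 0: d(cV)/dt
    n = c * V + dn * dt                  # component amount after one step
    V += dV * dt
    c = n / V

print(round(c, 3))   # the concentration approaches c_in = 2.0
```

With equal in- and outflow the volume stays constant and the concentration approaches the inlet concentration with the residence time V/v = 2 h as time constant.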
Static relations
If the process variables follow input variable changes without any delay, then the process is said to be static. The behavior may then be described by the above equations with the time derivatives set equal to zero. In that case the model for well mixed systems is seen to be a set of algebraic equations. If these equations are linear in the dependent variables (see Chapter 3), then this set of equations has only one solution, provided the model is well posed, i.e. the number of equations corresponds to the number of dependent variables; see Appendix C. If the static equations are nonlinear then multiple solutions may be possible. The well-posedness of the model may be investigated by determining its number of degrees of freedom. Since this analysis also provides information about the number of free variables for control, it is most useful for control design.
Once a model is derived, the consistency between the number of variables and equations may be checked by analyzing the degrees of freedom. This may be done by identifying the following items in the model: The number of input and disturbance variables: Nu
Chapter 2
The number of dependent variables: Nx
The number of equations: Ne
The number of degrees of freedom Nf is determined by

Nf = Nx + Nu − Ne   (2.5)
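The bookkeeping of (2.5) is trivial to automate; the sketch below is merely illustrative, and the variable counts must of course be read off the model at hand:

```python
def degrees_of_freedom(n_u, n_x, n_e):
    """Return N_f = N_x + N_u - N_e together with a short verdict, per (2.5)."""
    n_f = n_x + n_u - n_e
    if n_f < 0:
        verdict = "overdetermined: in general no solution"
    elif n_f == 0:
        verdict = "solvable, but no freedom left for control"
    else:
        verdict = f"specify {n_f} variable(s) to solve"
    return n_f, verdict

# Level tank of Example 2.1: inputs v_in and v, dependent variable h, one equation
print(degrees_of_freedom(n_u=2, n_x=1, n_e=1))   # -> (2, 'specify 2 variable(s) to solve')
```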
Note that the input and disturbance variables are determined by the external world. The degrees of freedom are determined only by the number of variables and the equations relating them, and do not depend on the parameters in the model. In order to solve for the dependent variables it is necessary to specify Nf operating variables, which usually are inputs and disturbances to the process. The following possibilities exist:

Nf < 0: There are more equations than variables, and in general there is no solution to the model.

Nf = 0: The model is directly solvable for the dependent variables, but there are no degrees of freedom left to control or regulate the process.

Nf > 0: There are more variables than equations, so infinitely many solutions are possible. Nf variables or additional equations must be specified in order to obtain a finite number of solutions (a single one in the linear case).

Note that usually Nf = Nu; the degrees of freedom thus pinpoint the possible controls and disturbances. If Nf ≠ Nu then either the model or the process design should be reconsidered. Note that when an input variable is selected as an actuator variable to fulfill a measured control objective, the control loop introduces an additional equation, thus reducing the number of degrees of freedom by one. Later it will be clear that the selection of control variables is one of the important choices in process control design.

Dimensionless variables

Once the model is formulated, some of its properties may be revealed more easily by introducing dimensionless variables. This is achieved by determining a nominal operating point to define a set of nominal operating conditions, which is then used as a basis for the dimensionless variables. One advantage of this procedure is that it may be directly possible to identify time constants and gains in the process models, and thus obtain direct insight into the process behavior. This methodology will be illustrated for several of the models.
The following example of liquid flow through a storage tank illustrates the development of a model. The example also illustrates the possible influence of three different types of downstream equipment upon the model. Finally the example discusses different choices of manipulated variable for controlling the level.

Example 2.1|Liquid level in storage tank
The objective of modeling the level tank in Figure 2.6 is to understand its dynamics and to be able to control the tank level. The inflow vin is assumed to be determined by upstream equipment, whereas three cases will be considered for the outflow. The level of the tank may be modeled using a total mass balance over the tank contents. The liquid volume is V = Ah, where h is the tank surface level over the outlet level and A the cross sectional area, which is assumed independent of the tank level. Subscript in denotes the inlet and v the volume flow rate:

d(ρAh)/dt = ρin vin − ρ v

Assuming constant mass density the following balance results

A dh/dt = vin − v   (2.6)

Compare Example 1.4 and Section 1.5. Although this clearly is a first order differential equation, the behavior depends upon what determines the in- and outflows. In order to illustrate the machinery, the degrees of freedom will be analyzed and dimensionless variables will be introduced. The degrees of freedom in equation (2.6) are determined as follows:

The possible input and disturbance variables vin and v: Nu = 2
The model variable h: Nx = 1
The equation: Ne = 1

Hence Nf = Nu + Nx − Ne = 2 + 1 − 1 = 2. Thus only if the in- and outflows are determined or known one way or another may this model be solved. The simple tank system is described by one differential equation; we then say that the system is of first order. Equation (2.6) gives a generic model for a storage tank. The inflow is determined by the upstream equipment, while the outflow will depend on the tank and its outlet. If the inflow is given then the outflow needs to be determined. Here three cases will be considered.
Figure 2.7 Step response of level tank with laminar outflow. Different values of the static gain (R = 0.5, 1) and the residence time (τp = 0.1, 1) are shown.
Subexample 2.1.1|Laminar outflow
First there is assumed to be a flow resistance in the outflow line. This resistance may be exerted by the tube itself or by a valve, as indicated in Figure 2.6. In analogy with Ohm's law the outflow may in this case be written v = h/R, where R is the linear valve resistance. Inserted into (2.6) the following model results

A dh/dt = vin − (1/R) h   (2.7)
The steady state solution, indicated with superscript 0, is obtained by setting the time derivative equal to zero, thus

h0 = R vin0   (2.8)

From (2.8) the parameter R is seen to describe how strongly a change in the steady state inlet flow rate affects the steady state level. Thus R is properly called the static gain of the process. The dynamic model may now be written

τp dh/dt = −h + R vin   (2.9)

where τp = AR. Notice that τp has dimension time; it is called the time constant of the process. Equation (2.9) is a linear constant coefficient first order differential equation which can be integrated directly. If the inlet flow disturbance is a unit step at time zero, and the tank level initially was zero, then the solution is directly determined to be

h(t) = R (1 − e^(−t/τp))   (2.10)
This response is shown in Figure 2.7, together with the inlet flow disturbance, for a number of parameter values. Note that the curves may all be represented by one curve by scaling the axes as h/R and t/τp. The larger the gain R, the larger the steady state level becomes. A small τp gives a fast response, while a long time constant gives a slow response.
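The analytic solution (2.10) can be checked against a direct numerical integration of (2.9). A minimal sketch, with the hypothetical values R = 1 and τp = 1:

```python
import math

R, tau_p = 1.0, 1.0      # static gain and time constant (hypothetical values)
dt = 1e-3
h = 0.0                  # tank initially empty
for _ in range(int(5 * tau_p / dt)):      # integrate (2.9) for a unit step in v_in
    h += dt / tau_p * (-h + R * 1.0)

h_exact = R * (1.0 - math.exp(-5.0))      # (2.10) evaluated at t = 5*tau_p
print(abs(h - h_exact) < 1e-3)            # True: the two solutions agree
```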
Subexample 2.1.2|Turbulent outflow
Let a be the cross sectional area of the outlet. If the outlet is small we can use Bernoulli's equation, which gives the outflow v = a√(2gh), where g is the gravitational constant. This implies that the outflow is proportional to the square root of the tank level, and also that the pressure drop over the valve resistance is ΔP ∝ v². Thus the following model results by inserting into equation (2.6)

A dh/dt = vin − c√h   (2.11)
In this case a nonlinear first order differential equation results. The steady state solution is h0 = (vin0/c)², so the static gain in this case varies with the size of the disturbance. In this particular case the nonlinear model equation (2.11) with a step input may be solved analytically. In general, however, it is necessary to resort to numerical methods in order to solve a nonlinear model.

Subexample 2.1.3|Externally fixed outflow
The outflow v is now determined externally, e.g. by a pump or by a constant pressure difference between the level tank and the connecting unit. Equation (2.6) now becomes

A dh/dt = vin − v   (2.12)

or

τ dh/dt = u

where both v and vin are independent of h, and further τ = A and u = vin − v. If the in- and outflows are constant then, depending upon the sign of the right hand side, the tank level will rise or fall until the tank floods or runs empty. The level will only stay constant if the in- and outflows match exactly. This type of process is called a pure integration or a pure capacitance, because the integration rate is solely determined by the input variables. Here τ is the time needed to change the volume by vin − v, assuming a constant flow rate difference. Note that for (2.12) both the in- and outflows may be disturbances or externally set variables. Such a purely integrating process is most often controlled, since it does not have a unique steady state level.
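The integrating character of (2.12) is easily seen numerically: with a constant mismatch between in- and outflow the level drifts linearly and never settles. The numbers below are hypothetical:

```python
A = 1.0                    # tank cross sectional area (m^2), hypothetical
v_in, v_out = 1.1, 1.0     # constant 10% flow mismatch (m^3/h)
dt = 0.01
h = 0.5                    # initial level (m)
for _ in range(int(10 / dt)):          # simulate 10 h
    h += dt * (v_in - v_out) / A       # (2.12): pure integration of the mismatch

print(round(h, 2))   # 1.5: the level rose linearly by 0.1 m/h for 10 h
```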
The behavior of the three exit flow situations modeled in Subexamples 2.1.1-3 is illustrated in Figure 2.8 for a) the filling of the tanks and b) the response to a step disturbance in inlet flow rate.
Subexample 2.1.4|Dimensionless variables
Dimensionless variables are introduced using design information about the desired nominal operating point hnom. Dividing (2.6) by the nominal production rate vnom and introducing the dimensionless variables xh = h/hnom, uv,in = vin/vnom and uv = v/vnom gives

τ dxh/dt = uv,in − uv   (2.13)
where τ = A hnom/vnom is seen to be the nominal volume divided by the nominal volume flow rate. Thus τ is the nominal liquid filling time for the tank if the outflow is zero.

Subexample 2.1.5|Control of liquid storage process
In practice liquid storage processes are most often controlled, either by a control system or by design, e.g. of an outlet weir to maintain the level. The first possibility is considered in this case. The control objective is selected to be keeping the level around a desired value. The level may easily be measured by a float or by a differential pressure cell with input lines
Figure 2.8 Dynamic behavior of level tanks with exit flow rate determined by: valve with laminar flow, valve with turbulent flow, and pump. a) Filling of the tanks to a nominal volume. b) Response to a +5% increase in inlet flow rate. All parameters are 1, except the difference between pump in- and outflow which is 10%.
from the top and bottom of the tank. From the model development the following inputs, disturbances and degrees of freedom result:
      Laminar     Turbulent    Pump
Nu    vin         vin          vin, v
Nx    h           h            h
Ne    1: (2.7)    1: (2.11)    1: (2.12)
Nf    1           1            2
In the first two cases the outlet flow is described by an equation involving the dependent variable and the outlet resistance. A possible manipulated variable could be the inlet flow rate in all three cases; in that case the outlet flow rate would be a disturbance in Subexample 2.1.3. In the case of outlet flow through a resistance, variations in this resistance may be a possible disturbance, which, however, is not included in the model as a variable but as a parameter embedded in the coefficients of the process model. Now consider the inlet flow rate as a disturbance. Then in the case of externally fixed outflow, the exit flow rate is most conveniently selected as manipulated variable to fulfill the control objective. For the case of flow through an outlet resistance, varying this resistance by changing the valve position is a feasible way to control the process. This selection of manipulated variable, however, leads to a variable process gain and time constant in the laminar flow case; see (2.9). Note that this selection of control variable introduces an equation between the flow resistance and the level: R = f(h). Thus a process model with time varying coefficients results. Note also that in this case the selection of manipulated variable does not reduce the number of degrees of freedom, since both Nu and Ne are increased by 1. However, a rather complex model|and process|results. In order to avoid such complications one may decide to change the design into one where the outflow is determined by the external world, even though a valve is used in the outlet, by measuring the outlet flow rate and controlling the valve position to give the desired flow rate. Such a loop is often used in practice.

Subexample 2.1.6|Two tanks in series
Let two tanks be connected in series such that the outflow of the upper tank is the inflow to the lower tank. The system is then described by the two differential equations for the levels
Figure 2.9 Process diagram of a continuous stirred tank (CST) heater or cooler.
in the tank, compare (2.11),

A1 dh1/dt = −c1 √h1 + vin
A2 dh2/dt = c1 √h1 − c2 √h2
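A numerical sketch of the cascade, with all parameters hypothetically set to 1 and vin = 0.5, shows both levels settling at the common steady state h = (vin/c)², as expected from Subexample 2.1.2:

```python
import math

A1 = A2 = c1 = c2 = 1.0   # hypothetical parameter values
v_in = 0.5
dt = 1e-3
h1 = h2 = 0.0             # both tanks initially empty
for _ in range(int(50 / dt)):
    dh1 = (v_in - c1 * math.sqrt(h1)) / A1                 # upper tank
    dh2 = (c1 * math.sqrt(h1) - c2 * math.sqrt(h2)) / A2   # lower tank, fed by the upper
    h1 += dt * dh1
    h2 += dt * dh2

print(round(h1, 3), round(h2, 3))   # both approach (v_in/c)^2 = 0.25
```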
The next example introduces a type of operation which occurs often in various forms in chemical plants. First a model for the heated liquid is developed. Thereafter two different types of heating are modeled, i.e. electrical resistance heating and heating by condensing steam in a submerged coil. For the latter model, which consists of two differential equations, an engineering assumption is introduced which reduces the model to one algebraic and one differential equation. Finally the degrees of freedom are investigated for the different heaters.

Example 2.2|Continuous stirred tank (CST) heater or cooler
A continuous stirred tank heater or cooler is shown in Figure 2.9. This type of operation occurs often in various forms in chemical plants. It is assumed that the liquid level is constant, i.e. v = vin. Thus only a total energy balance over the tank contents is needed to describe the process dynamics. In the first model it is simply assumed that the energy flux into the tank is Q kW. Assuming negligible heat losses and neglecting the contribution of shaft work from the stirrer, the following energy balance results
d(ρV Cp T)/dt = ρin vin Cp Tin − ρ v Cp T + Q

Assuming constant heat capacity and mass density, the balance may be simplified to

ρ V Cp dT/dt = ρ v Cp (Tin − T) + Q   (2.14)
In the case of a heater the energy may be supplied from various sources; here first electrical resistance heating and then steam condensing within a heating spiral are considered.
Subexample 2.2.1|Electrical resistance heating
In this case the heat capacitance of the heating element is assumed negligible compared to the total heat capacitance of the process. The model (2.14) can now be written in the form
τ dT/dt = −T + Tin + K Q   (2.15)

where τ = V/v and K = 1/(ρ v Cp), which is similar to (2.9). Fixing Tin = Tin0 and Q = Q0 gives the steady state solution

T0 = Tin0 + K Q0

Thus the static gain K = 1/(ρ v Cp) is inversely proportional to the steady state flow rate.

Subexample 2.2.2|Dimensionless variables
Introduce a nominal heating temperature ΔTnom and a nominal volumetric flow rate vnom. Equation (2.14) may be divided by ρ vnom Cp ΔTnom = Qnom, which is the nominal or design power input:

(V/vnom) d(T/ΔTnom)/dt = (v/vnom) (Tin − T)/ΔTnom + Q/Qnom

Introduce the dimensionless variables xT = T/ΔTnom, xT,in = Tin/ΔTnom, uv = v/vnom, u = Q/Qnom, and the parameter τnom = V/vnom, i.e. the liquid residence time. The above equation may then be written

τnom dxT/dt = uv (xT,in − xT) + u   (2.16)
The steady state solution of (2.16) is

xT0 = xT,in0 + u0/uv0 = xT,in0 + kp u0

Thus the static gain kp = 1/uv0 is inversely proportional to the steady state flow rate.

Subexample 2.2.3|Stirred heater
If there is no through-flow of liquid, i.e. a stirred heater, then equation (2.14) with v = 0 gives

τ dT/dt = Q   (2.17)

Note that here τ = ρ V Cp. Just as in Subexample 2.1.3 a pure integration results. In this case, however, the assumption of negligible heat loss is no longer reasonable when the system is cooled by the environment. A heat loss through the tank wall and insulation may then be modeled as Qa = UA(T − Ta), where UA is the product of the total heat transfer coefficient U and the total transfer area A, and index a denotes ambient conditions. Thus the energy balance becomes

ρ V Cp dT/dt = Q − UA(T − Ta)

or

τ dT/dt = −T + Ta + K Q   (2.18)

where τ = (ρ V Cp)/(UA) and K = 1/(UA). Heating systems often have much longer cooling than heating times, since Q influences the temperature much more strongly than Qa. This type of process design gives rise to a control design problem: should the control be designed to be effective when heating up, or when the process operates around its steady state temperature, where it cools slowly but can perhaps be heated fast? The main problem is that the actuator (energy input) only has effect in one direction. Often this problem is solved by splitting the heating power and using only a small heater when operating around steady state conditions.
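The heating/cooling asymmetry can be illustrated by integrating (2.18) with the heater at full power and then switched off. All numbers are hypothetical; the point is only that the cooling episode takes several times longer than the heating episode:

```python
tau, K, T_a = 100.0, 2.0, 20.0   # tau = rho*V*Cp/(U*A), K = 1/(U*A); hypothetical
Q_max = 100.0                    # heater power, hypothetical

def time_to_reach(T0, T_target, Q):
    """Integrate (2.18), tau*dT/dt = -T + T_a + K*Q, until T crosses T_target."""
    T, t, dt = T0, 0.0, 0.01
    while (T < T_target) if T_target > T0 else (T > T_target):
        T += dt / tau * (-T + T_a + K * Q)
        t += dt
    return t

t_heat = time_to_reach(20.0, 80.0, Q_max)   # heating at full power
t_cool = time_to_reach(80.0, 30.0, 0.0)     # heater off, ambient cooling only
print(t_cool > 3 * t_heat)                  # True: cooling is much slower
```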
Subexample 2.2.4|CST with steam coil
The energy is assumed to be supplied by saturated steam, such that the steam pressure uniquely determines the steam condensation temperature Tc, where index c denotes condensation. In this case the heat capacity of the steam coil may be significant, thus requiring formulation of an energy balance for two phases: the liquid phase in the tank and the wall of the steam coil. The energy transferred from the condensing steam to the coil wall is modeled using a film model
Qc = hc,w Ac,w (Tc − Tw)
Q = hw,l Aw,l (Tw − T)
where hi,j is a film heat transfer coefficient between phases i and j. Thus the two balances become, first for the tank contents (index l), based upon equation (2.14),
Vl ρl Cp,l dT/dt = vl ρl Cp,l (Tin − T) + hw,l Aw,l (Tw − T)   (2.19)

and, for constant heat capacity and density of the wall (index w),

Vw ρw Cp,w dTw/dt = hc,w Ac,w (Tc − Tw) − hw,l Aw,l (Tw − T)   (2.20)
The process is now described by two first order differential equations. The input or manipulated variable is now Tc and the disturbance is Tin. The wall temperature Tw is an internal variable in the model.

Subexample 2.2.5|Quasi-stationarity of the wall temperature
If the wall to liquid heat capacity ratio
b = (Vw ρw Cp,w) / (Vl ρl Cp,l)
is small, then the wall temperature may be assumed almost steady, i.e. virtually instantaneously determined by the steam and liquid temperatures. This approximation is only valid if dTw/dt is small; for systems where it does not hold, special investigations are necessary. Provided the assumption is valid, the quasi-steady wall temperature may be found from the quasi-steady version of equation (2.20)

0 = −(hc,w Ac,w + hw,l Aw,l) Tw + hc,w Ac,w Tc + hw,l Aw,l T

or

Tw = (hc,w Ac,w Tc + hw,l Aw,l T) / (hc,w Ac,w + hw,l Aw,l)   (2.21)

Inserting into (2.19) gives a single differential equation in which the two film resistances act in series:

Vl ρl Cp,l dT/dt = vl ρl Cp,l (Tin − T) + U′A′ (Tc − T),   where 1/(U′A′) = 1/(hc,w Ac,w) + 1/(hw,l Aw,l)
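A small numerical check of (2.21), with hypothetical film coefficients, confirms that the quasi-steady wall temperature is the weighted mean at which the two film fluxes balance:

```python
hA_cw = 500.0          # h_cw * A_cw, steam-to-wall (W/K), hypothetical
hA_wl = 300.0          # h_wl * A_wl, wall-to-liquid (W/K), hypothetical
T_c, T = 120.0, 60.0   # steam and liquid temperatures (deg C)

T_w = (hA_cw * T_c + hA_wl * T) / (hA_cw + hA_wl)   # (2.21)
print(round(T_w, 1))              # 97.5: closer to T_c since hA_cw > hA_wl

Q_in = hA_cw * (T_c - T_w)        # flux from steam into the wall
Q_out = hA_wl * (T_w - T)         # flux from wall into the liquid
print(abs(Q_in - Q_out) < 1e-9)   # True: the quasi-steady wall stores no energy
```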
In the above four cases it seems most relevant to select a representative variable for the energy input as actuator, to achieve a control objective such as keeping the process temperature constant. Note that in Subexample 2.2.5 one of the equations is algebraic.
Evaporator
The following example introduces a single stage evaporator, a process which may be modeled using three differential equations. First such a model is derived without specifying the heating source. Then condensing vapor is used, and the degrees of freedom for control are discussed. This discussion presents
the problem of selecting among a number of different actuators in order to satisfy the control objectives. This problem setting will later in the course turn out to be typical for chemical processes. Finally a model for a multistage evaporator is given, where the number of differential equations is proportional to the number of stages in the evaporator. Often four to six stages are applied in industrial practice.

Example 2.3|Evaporator
An evaporator is schematically shown in Figure 2.10. This sketch may be considered a crude representation of evaporators, differing in details such as energy source or type of liquid to be evaporated, i.e. whether the main product is a concentrated solution or the produced saturated vapor. The general feature of evaporators is that a liquid/vapor mixture is produced by the supply of energy through a heating surface. The produced vapor is separated from the liquid and drawn off to the downstream unit. The feed solution is assumed to contain a dissolved component to be concentrated, with feed concentration cin (kg/kg solution). The following additional simplifying assumptions are made:
i) Vapor draw-off or consumption conditions are specified, thus setting the vapor production rate V (kg/h).
ii) The produced vapor is saturated.
iii) Low pressure, i.e. ρg « ρl.
iv) Negligible heat losses.
Indices l and g denote liquid and gas phase respectively. The assumption of saturated vapor implies that the evaporator pressure is determined by the temperature. Following assumption iii) only liquid phase mass balances will be considered. Thus three balances are required to model this evaporator: total mass, component mass and total energy. Again the energy input is initially assumed to be Q kW. In this example the writing is simplified by using mass units, i.e. the liquid holdup is the mass W kg, the mass flow rate is F kg/h and the flow rate of the produced vapor is V kg/h. The liquid mass density is assumed constant.
dW/dt = Fin − F − V   (2.22)

d(Wc)/dt = Fin cin − F c

d(W Hl)/dt = Fin Hl,in − F Hl − V Hg + Q

Note that the energy balance is expanded compared to Example 2.2 due to
the produced vapor, and that the energy content in the gas phase has been neglected in the accumulation term. The latter is reasonable due to assumption iii), which gives Hg = Cp,g (T − T0) + λ, where the heat of vaporization λ is assumed independent of temperature, which is reasonable at low pressure. The component and energy balances may be reduced by differentiating the left hand side and using the total mass balance (2.22):

W dc/dt = Fin cin − F c − c dW/dt = Fin (cin − c) + c V   (2.23)

W dHl/dt = Fin (Hl,in − Hl) − V (Hg − Hl) + Q   (2.24)

The evaporator is thus described by the three differential equations (2.22), (2.23), and (2.24). The final form of the model depends upon what determines the vapor production rate V.
Subexample 2.3.1|Condensing vapor heated evaporator
In this case the produced vapor is assumed condensed at a surface through which heat is exchanged to a lower temperature Tc in a downstream process unit. An energy balance over the condensation surface (index c) gives
V (Hg − Hl) = Uc Ac (T − Tc)   (2.25)
where the heat capacity of the heat exchanging wall has been considered negligible, and a total heat transfer coefficient Uc and area Ac are used. Provided Tc is known, the above equation defines the vapor rate, which may be introduced into the equations above. Together with the vapor-supplied energy Q = Uv Av (Tv − T) and the expressions for the specific enthalpies of the liquid and gas phases, equation (2.24) becomes

W Cp,l dT/dt = Fin Cp,l (Tin − T) − Uc Ac (T − Tc) + Uv Av (Tv − T)   (2.26)
Thus a model for an evaporator stage consists of the three differential equations (2.22), (2.23), and (2.26).

Subexample 2.3.2|Degrees of freedom for evaporator stage
The variables and equations are directly listed as:
Nu   6:  Fin, F, cin, Tin, Tc, Tv
Nx   3:  W, c, T
Ne   3:  (2.22), (2.23), (2.26)
Thus the degrees of freedom are 6. Assuming the inlet conditions cin and Tin to be determined by upstream equipment leaves four variables as candidates for actuators: F, Fin, Tc and Tv. The control objectives may be formulated as controlling: liquid level, liquid temperature, outlet concentration, and vapor production rate. Thus the control configuration problem consists in choosing a suitable loop configuration, i.e. selecting suitable pairs of measurement and actuator.

Subexample 2.3.3|Dimensionless variables
Dimensionless variables will now be introduced in the evaporator model. Defining the nominal feed flow rate Fnom, nominal concentration cnom and a nominal liquid heat flow Qnom = Fnom Cp,l ΔTnom, normalized variables can be introduced into equations (2.22), (2.23), (2.24), and (2.26):
τ dxw/dt = uf,in − uf − xV   (2.27)
where τ = Wnom/Fnom, xw = W/Wnom, uf,in = Fin/Fnom, uf = F/Fnom, and xV = V/Fnom. The component balance is normalized by dividing with the nominal concentrate production rate Fnom cnom and defining xc = c/cnom:

τ xw dxc/dt = uf,in (xc,in − xc) + xV xc   (2.28)

Similarly for the energy balance, by dividing with Qnom and defining xT = (T − T0)/ΔTnom etc.:

τ xw dxT/dt = uf,in (xT,in − xT) − (τ/κc)(xT − xT,c) + (τ/κv)(xT,v − xT)   (2.29)

where κi = (Cp,l Wnom)/(Ui Ai) for i = c and v. The vapor production rate may be normalized similarly:
xV = V/Fnom = (Uc Ac)/(Cp,l Wnom) · (Wnom/Fnom) · (Cp,l ΔTnom)/(Hg − Hl) · (xT − xT,c) = (τ/κc) g (xT − xT,c)   (2.30)

where g = (Cp,l ΔTnom)/(Hg − Hl) often will be relatively small due to a large heat of
vaporization. Thus a model for an evaporator stage consists of the four last equations.

Subexample 2.3.4|Multistage evaporator
The evaporator model may be directly extended to a multistage evaporator. The model for the j'th stage, using the same assumptions as above, can be formulated directly from (2.27)-(2.30). For simplicity it is assumed that the stages have identical parameter values. Noting that both the entering liquid (old index in) and the supply vapor (old index v) come from stage j−1, whereas the produced vapor is condensed in stage j+1, the following model results
τ dxw,j/dt = uf,j−1 − uf,j − xV,j
τ xw,j dxc,j/dt = uf,j−1 (xc,j−1 − xc,j) + xV,j xc,j
τ xw,j dxT,j/dt = uf,j−1 (xT,j−1 − xT,j) − (τ/κc)(xT,j − xT,j+1) + (τ/κv)(xT,j−1 − xT,j)
xV,j+1 = (τ/κc) g (xT,j − xT,j+1)

The above equations for j = 1, 2, ..., N, with specified inlet (j = 0) and downstream (j = N+1) conditions, may be solved directly using a differential equation solver.

Subexample 2.3.5|Degrees of freedom for multistage evaporator
Assuming N stages, and with inlet conditions indexed in (i.e. j = 0), the following table may be formed:

Nu   N + 4:  uf,j, uf,in, xc,in, xT,in, xV,in
Nx   4N:     xw,j, xc,j, xT,j, xV,j
Ne   4N:

Thus Nf = N + 4. These free variables may be utilized to control: each liquid level, the production rate of concentrate, and the concentrate concentration. This leaves two degrees to be determined by the external world; these could be the inlet concentration xc,in and temperature xT,in.
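As a minimal numerical sketch of the multistage model, the holdup balances alone can be integrated with the flows held constant (hypothetical values, τ = 1). When the stage flows balance, the integrating holdups stay put, which is precisely why each level must be controlled:

```python
# Holdup balances of the multistage model, forward Euler, tau = 1.
# Flow values are hypothetical; x_V[j] is the vapor drawn from stage j.
N = 3
u_f = [1.0, 0.9, 0.8, 0.7]   # u_f[0] is the feed; u_f[j] leaves stage j
x_V = [0.1, 0.1, 0.1]        # vapor production per stage
x_w = [1.0] * N              # normalized holdups
dt = 0.01

for _ in range(1000):
    for j in range(N):
        # tau * dx_w,j/dt = u_f,j-1 - u_f,j - x_V,j
        x_w[j] += dt * (u_f[j] - u_f[j + 1] - x_V[j])

print([round(w, 2) for w in x_w])   # [1.0, 1.0, 1.0]: flows balance, levels hold
```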
The processes in the above examples have been simple, and their behavior has been determined by transfer rate coefficients. Such processes are called rate processes. When developing and designing chemical processes, two basic questions in general are: can the process occur, and how fast? The first question is dealt with in courses on thermodynamics, the second in courses on chemical reaction engineering and transport phenomena. In a
number of industrially important processes, such as distillation, absorption, extraction and equilibrium reactions, a number of phases are brought into contact. When the phases are not in equilibrium there is a net transfer between them, and the rate of transfer depends upon the departure from equilibrium. The approach often taken when modeling equilibrium processes is based upon equilibrium assumptions. Equilibrium is ideally a static condition in which all potentials balance and no changes occur. Thus in engineering practice the assumption is an approximation, which is justified when it does not introduce significant error into the calculation. In recent years rate based models have been developed for some of these multiphase processes, which has made it possible to check the equilibrium models. Especially for multicomponent mixtures it may be advantageous to go to rate based models, which, however, are much more elaborate to solve. Modeling of equilibrium processes is illustrated in the following with a flash and a distillation column. The previous example with an evaporator also used an equilibrium approach, through the assumption of equilibrium between the vapor and liquid phases. Rate processes are also illustrated in the last example of this section with a continuous stirred tank reactor.
Equilibrium processes
As a phase equilibrium example, which is of widespread usage in chemical processes, vapor-liquid equilibrium is considered. The equilibrium condition is equality of the chemical potential between the vapor and liquid phases. For an ideal gas phase (Dalton's law) and an ideal liquid phase (Raoult's law) this equality implies equality between the partial pressures:

P yj = xj Pjsat   and   P = Σj xj Pjsat   (sum over j = 1, ..., Nc)

where P is the total pressure, xj and yj are the j'th component liquid and equilibrium vapor mole fractions, Nc is the number of components, and Pjsat is the pure component vapor pressure. The pure component vapor pressure is a function of temperature only, and may be calculated by the empirical Antoine equation

ln Pjsat = A − B/(T + C)

where A, B and C are constants obtainable from standard sources. In practice, few systems have ideal liquid phases, and at elevated pressures not even ideal gas phases. For many systems it is possible to evaluate the thermodynamic properties using models based upon grouping of atoms and/or upon statistical properties of molecules. These models may provide results in terms of equilibrium vaporization ratios or K-values

yj = Kj(T, P, x1, ..., xNc, y1, ..., yNc) xj

The usage of thermodynamic models to yield K-values is advisable for complex systems. To obtain simple, i.e. fast, calculations for processes with few components at not too high pressure, the relative volatility may be used

αij = (yi/xi) / (yj/xj)
Figure 2.11 Schematic diagram of a distillation column: trays 1 to N with holdups Mi, liquid flows Li, xi and vapor flows Vi−1, yi−1; feed F, zf; reflux L, xD from the accumulator MD; top product D, xD.
The relative volatilities are reasonably constant for several systems. Note that for a binary mixture the light component vapor composition becomes

y = αx / (1 + (α − 1)x)   (2.35)
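Equation (2.35) is easily evaluated; the sketch below uses a hypothetical relative volatility α = 2.5:

```python
def y_equilibrium(x, alpha):
    """Binary equilibrium vapor composition from relative volatility, eq. (2.35)."""
    return alpha * x / (1.0 + (alpha - 1.0) * x)

print(round(y_equilibrium(0.5, 2.5), 3))                  # 0.714: vapor enriched in light component
print(y_equilibrium(0.0, 2.5), y_equilibrium(1.0, 2.5))   # 0.0 1.0: endpoints preserved
```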
Distillation column
A typical equilibrium process is a distillation column.

Example 2.4|Distillation column for separating a binary mixture
A schematic process diagram for a distillation column is shown in Figure 2.11. The main unit operations in the process are the reboiler, tray column, condenser, and the accumulator from which the reflux and the distillate are drawn. A model will be developed for a binary distillation column. It is straightforward to generalize the model to more components by including additional component balances and using an extra index on the concentrations. The main simplifying assumptions are:
i) Thermal equilibrium on the trays
ii) Equimolar counterflow, i.e. Vi−1 = Vi = V, a constant molar gas flow rate
iii) Liquid feed enters at its bubble point on the feed tray
iv) Ideal mixing is assumed in each of reboiler, trays, condenser and accumulator
v) A total condenser is used
vi) Negligible vapor holdup, i.e. moderate pressure

In the present example the pressure will be assumed fixed by a vent to atmosphere from the accumulator. In a closed distillation column with a total condenser the pressure would be determined mainly by the energy balances around the reboiler and condenser, but these are omitted in this model. Thus the balances included here are total mass and light component on each tray, written for the i'th tray with molar holdup Mi and liquid phase light component concentration xi. Introducing LN+1 = L, xN+1 = xD, and with y0 determined by the liquid concentration in the reboiler, we get

dMi/dt = Li+1 − Li + δ(i − f) F   (2.36)

d(Mi xi)/dt = Li+1 xi+1 − Li xi + V (yi−1 − yi) + δ(i − f) F zf   (2.37)

where δ(i − f) = 1 for i = f and 0 for i ≠ f, F is the feed flow rate, zf the feed composition, and yi the vapor phase concentration of the light component on tray i. The liquid flow rate from each tray may be calculated using, e.g., the Francis weir correlation

Li = k ρi ((Mi − Mo)/(ρi A))^1.5   (2.38)
where
ρi   molar liquid density (kmole/m³)
Mo   liquid holdup at zero liquid flow (kmole)
A    tray cross sectional area (m²)
k    coefficient (m^1.5/s)

If there is complete equilibrium between the gas and liquid phases on a tray, then yi = yi*, where the equilibrium composition yi* is determined using an appropriate thermodynamic model. Most often, however, the gas and liquid on a tray are not completely in equilibrium; this additional effect may be accounted for by using the Murphree tray efficiency
Em = (yi − yi−1) / (yi* − yi−1)   (2.39)

Often the tray efficiency varies between the rectifying and stripping sections.
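The Murphree efficiency interpolates the actual vapor composition between the incoming vapor and equilibrium; solving (2.39) for yi gives the one-liner below (the compositions used are hypothetical):

```python
def murphree_vapor(y_below, y_eq, E_m):
    """Actual tray vapor composition from (2.39): y_i = y_{i-1} + E_m*(y_i* - y_{i-1})."""
    return y_below + E_m * (y_eq - y_below)

# Hypothetical tray: vapor enters at y = 0.40, equilibrium would give 0.70, E_m = 75%
print(round(murphree_vapor(0.40, 0.70, 0.75), 3))   # 0.625: three quarters of the way
```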
Figure 2.12 Stirred tank reactor (STR) with possibility for feed addition (vin, cA,in) and product withdrawal.
Reboiler (index B)

dM_B/dt = L_1 - (V + B)    (2.40)
d(M_B x_B)/dt = L_1 x_1 - V y_0 - B x_B    (2.41)

where V, the boil up rate or vapor flow rate, is determined by the energy input to the reboiler.

Condenser and accumulator

The condenser model is simple for a total condenser with assumption vi). Here the vapor is simply assumed to be condensed to liquid with the same composition, i.e. x_c = y_N. The accumulator or distillate drum (index D) model becomes

dM_D/dt = V - (L + D)    (2.42)
d(M_D x_D)/dt = V x_c - (L + D) x_D    (2.43)

where subscript D thus also indicates the distillate composition. The model is described by 2(N + 2) first order differential equations. The main disturbances will be the feed flow rate and composition: F and z_f. The possible manipulated variables are V, L, D and B; these may be combined in a number of ways to control the four degrees of freedom. (It is left as an exercise to show that there indeed are four degrees of freedom.) Two actuators should be used to maintain the levels in the reboiler and distillate drum, and the remaining two actuators for controlling the top and bottom compositions as needed. This problem in distillation control will be dealt with later in this course.
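For illustration, the tray, reboiler and accumulator component balances can be collected into one ODE right-hand side. The sketch below adds simplifying assumptions not made in the text: constant molar holdups (so the total mass balances (2.36), (2.40) and (2.42) are trivially satisfied and the liquid flows are constant), a constant relative volatility equilibrium, and E_m = 1; all parameter values are hypothetical.

```python
import numpy as np
from scipy.integrate import solve_ivp

N, f = 10, 5             # number of trays and feed tray (assumed)
alpha = 2.5              # constant relative volatility (extra assumption)
F, zf = 1.0, 0.5         # feed flow rate and composition
V, L = 3.0, 2.5          # boilup and reflux flows
D, B = V - L, L + F - V  # distillate and bottoms keeping both levels constant
M, MB, MD = 0.5, 5.0, 5.0    # constant molar holdups (simplification)

def y_eq(x):                 # equilibrium vapor composition, Em = 1 assumed
    return alpha * x / (1.0 + (alpha - 1.0) * x)

def L_liq(i):                # liquid flow leaving tray i; feed adds to liquid
    return L + F if i <= f else L

def rhs(t, xs):
    # xs = [x_B, x_1 .. x_N, x_D]; component balances (2.37), (2.41), (2.43)
    xB, xD = xs[0], xs[N + 1]
    y = [y_eq(xs[i]) for i in range(N + 1)]      # y_0 .. y_N
    dx = np.empty(N + 2)
    dx[0] = (L_liq(1) * xs[1] - V * y[0] - B * xB) / MB
    for i in range(1, N + 1):
        feed = F * zf if i == f else 0.0
        dx[i] = (L_liq(i + 1) * xs[i + 1] - L_liq(i) * xs[i]
                 + V * (y[i - 1] - y[i]) + feed) / M
    dx[N + 1] = (V * y[N] - (L + D) * xD) / MD   # x_c = y_N, total condenser
    return dx

sol = solve_ivp(rhs, (0.0, 100.0), np.full(N + 2, zf), rtol=1e-8)
xB_ss, xD_ss = sol.y[0, -1], sol.y[N + 1, -1]
```

Starting all compositions at z_f, the light component accumulates at the top and is depleted at the bottom, as expected from the column configuration.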
Stirred tanks are often used for carrying out chemical reactions. Here a few examples will be given to illustrate some of the control problems. First a general model will be formulated; subsequently it will be specialized to a cooled exothermic continuous reactor.
Example 2.5|Stirred tank reactor (STR)
A simple stirred tank is shown in Figure 2.12. The tank can be cooled and/or heated in a, for the moment, unspecified manner. The liquid phase is assumed
well mixed, and the mass density ρ is assumed constant. Thus the following general balances result for a liquid phase reaction A → B with the reaction rate r (mol/(m³ s)) for conversion of the reactant.

Total mass

d(ρV)/dt = ρ_in v_in - ρ v

Reactant mass

dn_A/dt = d(V c_A)/dt = v_in c_A,in - v c_A - V r

If a solvent is used an additional component mass balance is needed. For the product the mass balance becomes

dn_B/dt = d(V c_B)/dt = v_in c_B,in - v c_B + V r

Note that

ρ = M_A c_A + M_B c_B + M_S c_S = Σ_{j=A,B,S} M_j c_j

where subscript S designates solvent.

Energy

dH/dt = v_in ρ_in Ĥ_in - v ρ Ĥ + Q + W    (2.44)

Since the mass density is assumed constant, the total and reactant mass balance equations are modified as in the evaporator example

τ dx_v/dt = u_v,in - u_v    (2.45)
τ dx_A/dt = u_v,in (x_A,in - x_A) - τ r / c_A,nom    (2.46)

The variables have been made dimensionless through normalization. The energy balance may be reduced, albeit with some effort, as follows. Noting that enthalpy is a state function H = H(P, T, n_j), j = 1, …, N, the total differential may be written

dH = (∂H/∂P) dP + (∂H/∂T) dT + Σ_j (∂H/∂n_j) dn_j = ρ Ĉ_p V dT + Σ_j H̄_j dn_j

where H̄_j is the partial molar enthalpy of component j. Note that the enthalpy dependence on pressure has been assumed negligible, which is reasonable for liquid mixtures at normal pressure. The time derivative of the enthalpy may be determined using the component balances.
Since

H̄_j(T) = H̄_j(T_0) + M_j C_p,j (T - T_0)

the above summations may be written

Σ_j c_j H̄_j(T) = Σ_j c_j H̄_j(T_0) + Σ_j c_j M_j C_p,j (T - T_0) = ρ Ĥ(T_0) + ρ Ĉ_p (T - T_0)

where the enthalpy and heat capacity for the mixture have been defined as Ĥ(T_0) = Σ_j c_j H̄_j(T_0)/ρ and Ĉ_p = Σ_j c_j M_j C_p,j/ρ. Thus

τ dx_T/dt = u_v,in (x_T,in - x_T) + (-ΔH(T)) r τ/(ρ Ĉ_p T_nom) + (Q + W) τ/(V ρ Ĉ_p T_nom)    (2.47)

where constant properties have been assumed.
An example reaction which can display many of the facets of chemical reactors is a simple exothermic conversion A → products. This reaction scheme may be an approximation for more complicated reactions at special conditions. A simplified reactor model will now be developed. The reaction rate is assumed to be first order in the reactant and to follow an Arrhenius temperature dependence with activation energy E_a, thus

r = k exp(-E_a/(RT)) c_A = k exp(-γ) exp(γ(1 - T_nom/T)) c_A    (2.48)

where T_a = E_a/R, R is the gas constant and γ = T_a/T_nom. With these definitions τ_r = 1/(k exp(-γ)) becomes the reaction time constant at the nominal conditions. The dimensionless concentration is x_A = c_A/c_A,nom. Inserting the reaction rate in the component balance equation (2.46) gives

τ dx_A/dt = u_v,in (x_A,in - x_A) - Da x_A exp(γ(1 - 1/x_T))    (2.49)

where Da = τ/τ_r is called the Damköhler number. Even though the reaction is exothermic it may initially require heating in order to initiate the reaction; then cooling may be required to control the reaction rate. The heating and cooling is assumed achieved by manipulating the temperature of a heating/cooling medium. This medium may be in an external shell or in an internal coil. The model for the heating/cooling medium energy balance will depend upon the actual cooling system; here it
will simply be assumed that the medium temperature T_c is perfectly controlled. Thus the rate of removing energy from the medium is

Q = UA(T_c - T)

Inserting the reaction rate expression and the energy removal rate into the energy balance equation gives, when shaft energy is assumed negligible,

τ dx_T/dt = u_v,in (x_T,in - x_T) + (ΔT_adb/T_nom) Da x_A exp(γ(1 - 1/x_T)) + β_c (x_Tc - x_T)    (2.50)

ΔT_adb = (-ΔH) c_A,nom/(ρ Ĉ_p)    (2.51)

where ΔT_adb is the dimensional adiabatic temperature rise and β_c = UA/(v_nom ρ Ĉ_p). Thus the CSTR model consists of the reactant mass balance equation (2.49) and equations (2.50) and (2.51). This model is clearly nonlinear due to the exponential temperature dependence in the reaction rate expression, and this dependence will be a function of γ and of the adiabatic temperature rise for the particular reaction.
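A minimal simulation sketch of the dimensionless CSTR model, equations (2.49) and (2.50); all parameter values are illustrative assumptions, not taken from the text.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative dimensionless parameters (assumed)
tau, Da, gamma = 1.0, 0.1, 20.0
dTadb_over_Tnom = 0.25         # adiabatic temperature rise / T_nom
beta_c = 1.5                   # cooling parameter UA/(v_nom*rho*Cp)
x_Ain, x_Tin, x_Tc = 1.0, 1.0, 1.0
u_vin = 1.0

def cstr(t, x):
    xA, xT = x
    rate = Da * xA * np.exp(gamma * (1.0 - 1.0 / xT))   # dimensionless rate
    dxA = (u_vin * (x_Ain - xA) - rate) / tau            # eq. (2.49)
    dxT = (u_vin * (x_Tin - xT) + dTadb_over_Tnom * rate
           + beta_c * (x_Tc - xT)) / tau                 # eq. (2.50)
    return [dxA, dxT]

sol = solve_ivp(cstr, (0.0, 20.0), [1.0, 1.0], rtol=1e-8)
xA_ss, xT_ss = sol.y[:, -1]
```

With this mild Damköhler number the reactor settles at a single steady state slightly above the coolant temperature; at larger Da and ΔT_adb the same model can exhibit the multiple steady states mentioned below.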
(Figure: condensing vapor heated double pipe heat exchanger, with process liquid flow v_in = v, inlet temperature T_in(t), outlet temperature T(t, l = L), and condensate leaving the shell.)
The residence or delay time inside the pipe is τ = AL/v. This delay implies that any change occurring in the stream properties, such as concentration or energy content, at the tube inlet will only be seen at the outlet after the delay or dead time; thus for the dependent variable x

x(t, l = L) = x(t - τ, l = 0)
where τ is the fluid residence time, u_v = v/v_nom, τ_fw = ρ C_p A/(h_i P_i), τ_wf = ρ_w C_p,w A_w/(h_i P_i) and τ_wc = ρ_w C_p,w A_w/(h_y P_y). To solve this model the initial state functions must be known, i.e. x_T(t = 0, z) and x_Tw(t = 0, z). If the solution starts at steady state these may be obtained by solution of the steady state versions of the above two dynamic equations. In addition an inlet condition must be specified, i.e. x_T(t, z = 0). The solution of a partial differential equation model requires an approach to handle the partial derivatives. This is usually done through an approximation scheme, as illustrated later in this section. To control a double pipe heat exchanger the condensing vapor pressure might be used as actuator. The disturbances may in that case be the inlet temperature, the liquid flow rate, i.e. u_v, and perhaps varying heat transfer parameters as fouling develops.
Example 2.8|Process furnace
A process furnace is shown schematically in Figure 2.15. Process furnaces are used to heat vapors for subsequent endothermic processing. Fuel is combusted in the fire box which encloses the vapor containing tube. In specially designed furnaces reactions may occur in the furnace tubes, e.g. thermal cracking of high viscosity tars to short chain unsaturated hydrocarbons, particularly ethylene. In this example, however, no reactions are assumed to occur in the furnace tube vapor phase. To simplify the modeling the following main assumptions are used:
i) The fire box is isothermal.
ii) A sufficient amount of air is present to allow complete combustion of the fuel.
iii) The heat loss through the insulation is described by the fire box insulation efficiency η_B.
iv) The vapor is assumed ideal.
v) Material properties are assumed constant.
vi) A single tube is assumed to be representative, i.e. the influence of neighboring tubes is neglected.
With these assumptions energy balances for the vapor, the tube wall and the fire box are needed. The length along the tubes is indicated by l as in the previous example.
Vapor

ρ A Δl C_p ∂T(t, l)/∂t = F C_p (T(t, l) - T(t, l + Δl)) + h_i P_i Δl (T_w(t, l) - T(t, l))

Tube wall

ρ_w A_w Δl C_p,w ∂T_w(t, l)/∂t = ε σ P_y Δl (T_B^4(t) - T_w^4(t, l)) - h_i P_i Δl (T_w(t, l) - T(t, l))

Fire box

W_B C_B dT_B(t)/dt = η_B F_f ΔH_f - F_f B_g C_p,g T_B - ε σ P_y ∫_0^L (T_B^4(t) - T_w^4(t, l)) dl
where
F     mass flow rate of vapor, kg/s
C_p   heat capacity of vapor, J/(kg K)
ε     thermal emissivity of the tube surface
σ     the Stefan–Boltzmann constant, J/(s m² K⁴)
W_B   fire box mass, kg
C_B   fire box specific thermal capacity, J/(kg K)
F_f   fuel mass flow rate, kg/s
ΔH_f  fuel heating value, J/kg
B_g   mass fraction of gas produced by fuel combustion
With assumption iv) the pressure P = ρRT/M is constant (where R is the gas constant and M the mean molecular weight); thus the accumulation term in the vapor phase energy balance is zero. The dimensionless distance z = l/L and dependent variables x_Ti = T_i/T_nom are introduced. In addition a thermal residence time for the tube is introduced

τ_w = ρ_w C_p,w A_w L/(C_p F_nom)    (2.54)

Thus the three equations above become, first for the vapor,

0 = -u_F ∂x_T(t, z)/∂z + c (x_Tw(t, z) - x_T(t, z))    (2.55)

where c = h_i P_i L/(C_p F_nom) and u_F = F/F_nom.
Tube wall

τ_w ∂x_Tw/∂t = c_rad (x_TB^4(t) - x_Tw^4(t, z)) - c (x_Tw(t, z) - x_T(t, z))    (2.56)

where c_rad = ε σ P_y T_nom³ L/(C_p F_nom).
Fire box

τ_B ∂x_TB/∂t = u_Ff (η_B - c_gf x_TB)/r - c_rad ∫_0^1 (x_TB^4(t) - x_Tw^4(t, z)) dz    (2.57)

where

τ_B = W_B C_B/(C_p F_nom),  u_Ff = F_f/F_f,nom,  c_gf = B_g C_p,g T_nom/ΔH_f,  r = C_p F_nom T_nom/(F_f,nom ΔH_f)

The latter expression, r, is the ratio of the nominal vapor energy flow to that combusted.
The partial differential equations describing the dynamics of distributed processes may be solved numerically in a multitude of ways. Here only a simple procedure, which approximates the spatial derivatives locally with finite differences, will be illustrated. In this simple approach the length coordinate is subdivided into N equidistant sections of length Δz = 1/N. The first order derivative may then be approximated using a backward first order difference quotient

∂x/∂z |_i = (x_i - x_{i-1})/Δz + O(Δz)    (2.58)

or using a central difference quotient

∂x/∂z |_i = (x_{i+1} - x_{i-1})/(2Δz) + O(Δz²)    (2.59)

which has a smaller truncation error than the backward difference quotient. The usage of the backward approximation is illustrated below on the condensing vapor heated tubular exchanger from one of the previous examples.
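The truncation orders of the two difference quotients (2.58) and (2.59) are easy to check numerically; the sketch below differentiates sin(z), for which the exact derivative is known.

```python
import math

# Checking the truncation orders of the backward (2.58) and central (2.59)
# difference quotients on f(z) = sin(z), whose exact derivative is cos(z).
f, dfdz = math.sin, math.cos
z = 1.0
errs = {}
for dz in (0.1, 0.05):
    backward = (f(z) - f(z - dz)) / dz
    central = (f(z + dz) - f(z - dz)) / (2 * dz)
    errs[dz] = (abs(backward - dfdz(z)), abs(central - dfdz(z)))

# Halving dz roughly halves the backward error (first order) and
# quarters the central error (second order).
```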
The partial differential equation model of the tubular heat exchanger example is directly converted into the following set of ordinary differential equations using equation (2.58)

dx_Ti(t)/dt = -(u_v N/τ)(x_Ti(t) - x_T,i-1(t)) + (x_Tw,i(t) - x_Ti(t))/τ_fw    (2.60)
dx_Tw,i(t)/dt = (x_Tc(t) - x_Tw,i(t))/τ_wc - (x_Tw,i(t) - x_Ti(t))/τ_wf    (2.61)

where i = 1, 2, …, N and x_T0(t) = x_Tin(t). This set of equations may be solved directly using a solver for ordinary differential equations. Note that the approximate model for the fluid phase is equivalent to approximating the tubular plug flow with a sequence of CST heat exchangers, each of size τ/N. Perhaps due to this physical analogy the backward difference is quite popular when approximating distributed models, even though this approximation is numerically rather inefficient. Several more sophisticated approximation methods exist for partial differential equation models. In one type of methods the partial derivatives are approximated globally using polynomials or other suitable functions, in which case the internal points are no longer equidistantly distributed over the spatial dimension. Such methods may often yield much higher accuracy with just a few internal points than that obtained with finite difference methods using several scores of internal points. Thus such approximations may be preferred when an efficient low order approximate model is required.
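The discretized exchanger model (2.60)-(2.61) can be integrated directly with a standard ODE solver; the following sketch uses assumed parameter values.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Method-of-lines simulation of eqs. (2.60)-(2.61).
# All parameter values are illustrative assumptions.
N = 20
tau, tau_fw, tau_wf, tau_wc = 1.0, 0.5, 0.8, 0.4
u_v, x_Tin, x_Tc = 1.0, 1.0, 1.3      # flow, inlet and condensing temperatures

def rhs(t, x):
    xT, xTw = x[:N], x[N:]
    xT_up = np.concatenate(([x_Tin], xT[:-1]))   # x_{T,i-1}, with x_T0 = x_Tin
    dxT = -u_v * N / tau * (xT - xT_up) + (xTw - xT) / tau_fw
    dxTw = (x_Tc - xTw) / tau_wc - (xTw - xT) / tau_wf
    return np.concatenate((dxT, dxTw))

x0 = np.full(2 * N, 1.0)                          # start at the inlet temperature
sol = solve_ivp(rhs, (0.0, 10.0), x0, rtol=1e-8)
outlet = sol.y[N - 1, -1]                         # x_T at z = 1
```

At steady state the fluid temperature rises monotonically along the tube toward, but never beyond, the condensing temperature.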
Vector notation
In the two preceding sections a number of differential equations have been developed for approximate modeling of the dynamic behavior of a number of processes. In general a set of differential equations results:

dx_1/dt = f_1(x_1, x_2, …, x_n, u_1, u_2, …, u_m)
dx_2/dt = f_2(x_1, x_2, …, x_n, u_1, u_2, …, u_m)
⋮
dx_n/dt = f_n(x_1, x_2, …, x_n, u_1, u_2, …, u_m)

This set of differential equations may be efficiently represented using vector notation, i.e. column vectors for the process states x, the process inputs u, and the right hand sides f:

x = (x_1, x_2, …, x_n)^T,  u = (u_1, u_2, …, u_m)^T,  f = (f_1, f_2, …, f_n)^T

Thus giving the state vector differential equation

dx/dt = f(x, u)    (2.62)

The states are the variables which accumulate mass, energy, momentum etc. Given the initial values of the states and the input signals it is possible to predict the future values of the states by integrating the state equations. Note that some of the time derivatives may have been selected to be fixed at zero, e.g. by a quasi-stationarity assumption. In that case these equations may be collected at the end of the state vector by suitable renumbering. In the case of, e.g., more complicated thermodynamics, additional algebraic equations may have to be satisfied as well. However, for the present all equations will be assumed to be ordinary differential equations. The models for measurements on the process, which are assumed to be algebraic, may similarly be collected in a measurement vector
y = h(x, u)    (2.63)
The tank with turbulent outflow is discussed in Subexample 2.1.2. The system is described by (2.11), which can be written as

dx/dt = -(c/A)√x + (1/A) u = f(x, u)
Consider the continuous stirred tank with steam coil in Subexample 2.2.4. The system is described by (2.19) and (2.20). Introduce the states and the inputs

x = (x_1, x_2)^T = (T, T_w)^T,  u = (u_1, u_2)^T = (T_c, T_in)^T

This gives the state equations

dx/dt = Ax + Bu

where

A = [ -v_l/V_l - h_w A_w/(V_l ρ_l C_p,l)    h_w A_w/(V_l ρ_l C_p,l)
      h_w A_w/(V_w ρ_w C_p,w)              -(h_c A_c + h_w A_w)/(V_w ρ_w C_p,w) ]

B = [ 0                             v_l/V_l
      h_c A_c/(V_w ρ_w C_p,w)       0       ]

The system is a linear second order system with two input signals. In general, the differential and algebraic equations (2.62) and (2.63) will be nonlinear. For testing control designs it is often essential to use reasonable approximate process models, which most often consist of such a set of nonlinear differential equations. This set may be solved using numerical methods collected in so called dynamic simulation packages. One such package is Simulink, which is used together with Matlab.
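In place of a Simulink/Matlab setup, the same kind of simulation can be sketched in Python with scipy; the matrix entries and inputs below are assumed illustrative numbers, not values computed from physical data.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Simulating a second order state space model dx/dt = A x + B u with
# constant inputs u = (Tc, Tin). Entries are illustrative assumptions.
A = np.array([[-2.0, 1.0],
              [4.0, -5.0]])
B = np.array([[0.0, 1.0],
              [1.0, 0.0]])
u = np.array([80.0, 20.0])     # coil temperature Tc, inlet temperature Tin

sol = solve_ivp(lambda t, x: A @ x + B @ u, (0.0, 10.0), [20.0, 20.0],
                rtol=1e-8)
x_ss = np.linalg.solve(A, -B @ u)   # steady state: 0 = A x + B u
```

Since both eigenvalues of A are negative, the simulated trajectory settles at the steady state x_ss.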
When operating continuous processes the static process behavior may be determined by solving the steady state equations obtained by setting the time derivative to zero in equation (2.62)

0 = f(x^0, u^0)    (2.64)

The steady state solution is labeled with superscript 0: x^0 and u^0. E.g. the operating conditions u^0 may be specified and then x^0 is determined by solving equation (2.64). The solution of this set of equations is trivial if f contains only linear functions. Then just one solution exists, which may easily be determined by solving the n linear equations. In general, however, f contains nonlinear functions, which makes it possible to obtain several solutions. In that case it may be important to be able to specify a reasonable initial guess in order to determine the possible operating states. Large program packages called process simulators are used in the chemical industry for solving large sets of steady state equations for chemical plants. Such plants often contain recycle streams, and special methods are used to facilitate the solution procedure. To develop an understanding of process dynamics and process control it is very useful to investigate the behavior of equations (2.62) and (2.63) around steady state solutions, where many continuous processes actually operate. The mathematical advantage of limiting the investigation to local behavior is that in a sufficiently small neighborhood around the steady state, linear models are often sufficient to describe the behavior. Such linear models are amenable to explicit solution, to control design, and to usage in regulation and control. Linearization is carried out by performing a Taylor series expansion of the nonlinear function around the steady state solution.
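Equation (2.64) can be solved numerically with a root finder. As a sketch, the scalar system of Example 2.12 below has two steady states for u^0 = 1, and the initial guess determines which one is found.

```python
from scipy.optimize import fsolve

# Solving 0 = f(x0, u0), eq. (2.64), with a root finder. The nonlinear f
# is the scalar system used in Example 2.12; with u0 = 1 it has two
# steady states, x0 = +2 and x0 = -2.
def f(x, u):
    return (x**2 - 3.0) * u - u**3

u0 = 1.0
x_pos = fsolve(lambda x: f(x, u0), x0=1.0)[0]    # guess near +2
x_neg = fsolve(lambda x: f(x, u0), x0=-1.0)[0]   # guess near -2
```

This illustrates why a reasonable initial guess matters when f is nonlinear: each guess converges to a different operating state.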
Linearization

For the i'th equation this may be written

dx_i/dt = f_i(x, u) = f_i(x_1, x_2, …, x_n, u_1, u_2, …, u_m)
        = f_i(x^0, u^0) + (∂f_i/∂x_1)|_0 (x_1 - x_1^0) + … + (∂f_i/∂x_n)|_0 (x_n - x_n^0)
          + (∂f_i/∂u_1)|_0 (u_1 - u_1^0) + … + (∂f_i/∂u_m)|_0 (u_m - u_m^0)
          + higher order terms
        ≈ f_i(x^0, u^0) + (∂f_i/∂x_1)|_0 (x_1 - x_1^0) + … + (∂f_i/∂x_n)|_0 (x_n - x_n^0)
          + (∂f_i/∂u_1)|_0 (u_1 - u_1^0) + … + (∂f_i/∂u_m)|_0 (u_m - u_m^0)

where the partial derivatives, denoted |_0, are evaluated at x = x^0 and u = u^0. The expansion has been truncated to include only linear terms in the variables. These terms have constant coefficients given by the partial derivatives evaluated at the steady state solution. Inserting the steady state solution equation (2.64) and defining variables which describe only deviations from the
steady state solution: x_j = x_j - x_j^0 with j = 1, …, n and u_j = u_j - u_j^0 with j = 1, …, m (with a slight abuse of notation the same symbols now denote the deviations), the above equation may be written

dx_i/dt = (∂f_i/∂x_1)|_0 x_1 + … + (∂f_i/∂x_n)|_0 x_n + (∂f_i/∂u_1)|_0 u_1 + … + (∂f_i/∂u_m)|_0 u_m
        = a_i1 x_1 + … + a_in x_n + b_i1 u_1 + … + b_im u_m    (2.65)

where the partial derivatives evaluated at the steady state have been denoted by the constants

a_ij = (∂f_i/∂x_j)|_0  and  b_ij = (∂f_i/∂u_j)|_0

Thus by performing a linearization a constant coefficient differential equation has been obtained from the original nonlinear differential equation. This result may be generalized to all n differential equations in equation (2.62), giving

dx_1/dt = a_11 x_1 + … + a_1n x_n + b_11 u_1 + … + b_1m u_m
⋮
dx_i/dt = a_i1 x_1 + … + a_in x_n + b_i1 u_1 + … + b_im u_m
⋮
dx_n/dt = a_n1 x_1 + … + a_nn x_n + b_n1 u_1 + … + b_nm u_m

This set of equations may be written as the linear constant coefficient vector differential equation

dx/dt = Ax + Bu    (2.66)

where the deviation vectors are

x = (x_1 - x_1^0, x_2 - x_2^0, …, x_n - x_n^0)^T,  u = (u_1 - u_1^0, u_2 - u_2^0, …, u_m - u_m^0)^T

and the constant coefficient matrices are

A = [ a_11 a_12 … a_1n          B = [ b_11 b_12 … b_1m
      a_21 a_22 … a_2n                b_21 b_22 … b_2m
      ⋮    ⋮    ⋱  ⋮                  ⋮    ⋮    ⋱  ⋮
      a_n1 a_n2 … a_nn ]              b_n1 b_n2 … b_nm ]
Thus the general nonlinear state space differential equation (2.62) has been linearized to obtain the general linear state space equation (2.66). In analogy herewith the nonlinear measurement equation (2.63) may be linearized as
y_i = h_i(x, u) = h_i(x_1, x_2, …, x_n, u_1, u_2, …, u_m)
    = h_i(x^0, u^0) + (∂h_i/∂x_1)|_0 (x_1 - x_1^0) + … + (∂h_i/∂x_n)|_0 (x_n - x_n^0)
      + (∂h_i/∂u_1)|_0 (u_1 - u_1^0) + … + (∂h_i/∂u_m)|_0 (u_m - u_m^0)
      + higher order terms
    ≈ h_i(x^0, u^0) + (∂h_i/∂x_1)|_0 (x_1 - x_1^0) + … + (∂h_i/∂x_n)|_0 (x_n - x_n^0)
      + (∂h_i/∂u_1)|_0 (u_1 - u_1^0) + … + (∂h_i/∂u_m)|_0 (u_m - u_m^0)

Now introducing deviation variables for the states and inputs as above, and here also for the measurements, i.e. y_i = y_i - y_i^0 where y_i^0 = h_i(x^0, u^0), gives the following constant coefficient measurement equation

y_i = c_i1 x_1 + … + c_in x_n + d_i1 u_1 + … + d_im u_m    (2.67)

where the partial derivatives evaluated at the steady state are denoted by the constants

c_ij = (∂h_i/∂x_j)|_0  and  d_ij = (∂h_i/∂u_j)|_0

Note that the d_ij constants only appear when an input affects the output directly. Note also that if a state is measured directly then the corresponding row of the C matrix contains only one nonzero element. Equation (2.67) may also be conveniently written as an algebraic vector equation, in analogy with the state space equation (2.66) above, as

y = Cx + Du    (2.68)

where the constant matrices are

C = [ c_11 c_12 … c_1n          D = [ d_11 d_12 … d_1m
      c_21 c_22 … c_2n                d_21 d_22 … d_2m
      ⋮    ⋮    ⋱  ⋮                  ⋮    ⋮    ⋱  ⋮
      c_p1 c_p2 … c_pn ]              d_p1 d_p2 … d_pm ]
Thus the general nonlinear state space model, consisting of the two vector equations developed earlier in this section, has been approximated by two linear constant coefficient vector equations, which are called the general linear state space model

dx/dt = Ax + Bu
y = Cx + Du    (2.69)

This general linear state space model may be used for describing the behavior of chemical processes in a neighborhood around the desired steady state solution.
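When the partial derivatives are tedious to obtain analytically, the matrices in (2.69) can be approximated numerically; the following is a generic finite difference sketch (an illustration, not a method prescribed by the text).

```python
import numpy as np

# Numerical linearization of dx/dt = f(x, u), y = h(x, u) around (x0, u0):
# the entries a_ij = (df_i/dx_j)|_0 etc. of eq. (2.69) are approximated
# with central differences.
def jacobian(fun, z0, eps=1e-6):
    z0 = np.asarray(z0, dtype=float)
    f0 = np.asarray(fun(z0))
    J = np.zeros((f0.size, z0.size))
    for j in range(z0.size):
        dz = np.zeros_like(z0)
        dz[j] = eps
        J[:, j] = (np.asarray(fun(z0 + dz)) - np.asarray(fun(z0 - dz))) / (2 * eps)
    return J

def linearize(f, h, x0, u0):
    A = jacobian(lambda x: f(x, u0), x0)
    B = jacobian(lambda u: f(x0, u), u0)
    C = jacobian(lambda x: h(x, u0), x0)
    D = jacobian(lambda u: h(x0, u), u0)
    return A, B, C, D
```

Applied to the system of Example 2.12 at x^0 = 2, u^0 = 1, this reproduces the analytic result A = 4, B = -2, C = 4, D = 0.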
Example 2.12|Linearization
Consider the nonlinear system

dx/dt = (x² - 3)u - u³ = f(x, u)
y = x² = h(x, u)

We will now linearize around the stationary values obtained when u^0 = 1. The stationary points are defined by

x^0 = ±2,  u^0 = 1,  y^0 = 4

Notice that the system has two possible stationary points for the chosen value of u^0. The functions f and h are approximated by

∂f/∂x = 2xu,  ∂f/∂u = (x² - 3) - 3u²

and

∂h/∂x = 2x,  ∂h/∂u = 0

When x^0 = -2 we get

dx/dt = -4x - 2u
y = -4x

and for x^0 = 2 we get

dx/dt = 4x - 2u
y = 4x

Example 2.13|Liquid level storage tank
The storage tank with turbulent outflow is described in Subexample 2.1.2. Let the system be described by

dh/dt = -(a/A)√(2gh) + v_in/A

where v_in is the input volume flow, h the level, A the cross sectional area of the tank, and a the cross sectional area of the outlet pipe. The output is the level h. We will now determine the linearized model for the stationary point obtained with the constant inflow v_in^0. From the state equation we get

v_in^0 = a√(2gh^0)    (2.70)

A Taylor series expansion gives

dh/dt = -(a/A)√(2gh^0) + v_in^0/A - (a/(2A))√(2g/h^0)(h - h^0) + (1/A)(v_in - v_in^0)

where the first two terms cancel by (2.70). Introduce the deviations

x = h - h^0,  u = v_in - v_in^0,  y = h - h^0
Figure 2.16 A comparison between the linearized system (full lines) and the nonlinear
tank model (dashed lines). The outflow is shown when the inflow is changed from 1 to a new constant value. The tank is linearized around the level 1.
dx/dt = -αx + βu
y = x    (2.71)

where

α = (a/(2A))√(2g/h^0) = a√(2gh^0)/(2Ah^0) = v_in^0/(2V^0),  β = 1/A,  V^0 = Ah^0

For small deviations from the stationary inflow v_in^0 the tank can thus be described by the linear equation (2.71). Figure 2.16 shows how well the approximation describes the system. The approximation is good for small changes around the stationary point. Figure 2.17 shows a comparison when the inflow is turned off. Notice that the linearized model can give negative values of the level.
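The comparison in Figures 2.16 and 2.17 can be reproduced in simulation; the tank dimensions below are assumed values.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Nonlinear tank dh/dt = -(a/A)*sqrt(2*g*h) + v_in/A versus its
# linearization (2.71) around h0 = 1, with the inflow turned off as in
# Figure 2.17. The dimensions a, A are illustrative assumptions.
g, a, A = 9.81, 0.01, 0.1
h0 = 1.0
v0 = a * np.sqrt(2 * g * h0)            # stationary inflow, eq. (2.70)
alpha, beta = v0 / (2 * A * h0), 1.0 / A

v_in = 0.0                               # inflow turned off

def nonlinear(t, h):
    return [-(a / A) * np.sqrt(2 * g * max(h[0], 0.0)) + v_in / A]

def linear(t, x):                        # deviation model, eq. (2.71)
    return [-alpha * x[0] + beta * (v_in - v0)]

t_end = 5.0
h_nl = solve_ivp(nonlinear, (0, t_end), [h0], rtol=1e-6).y[0, -1]
h_lin = h0 + solve_ivp(linear, (0, t_end), [0.0], rtol=1e-6).y[0, -1]
```

The nonlinear tank simply empties, while the linearized model drifts to a (physically impossible) negative level, which is exactly the discrepancy shown in Figure 2.17.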
Figure 2.17 A comparison between the linearized system (full lines) and the nonlinear
tank model (dashed lines). The level is shown when the inflow is turned off.
used to investigate process and control dynamics. The other disturbance type is more complicated and no models will be given. Thereafter the occurrence of disturbances as seen from a control loop is discussed; this leads to the introduction of two standard types of control problems. Finally various operation forms for processes are described and viewed in the perspective of the two standard control problems and combinations thereof. Disturbances can have different origins when seen from a controlled process; three or four origins will be discussed below.

Disturbance character

The simple disturbance types have simple shapes which are also simple to model mathematically. Four of these are shown in Figure 2.18. The pulse is used to model a disturbance of finite duration and with finite area (energy). The pulse may often be approximated by its mathematical idealization, the unit impulse of infinitely short duration and unit area shown in Figure 2.18c. These simple disturbances may be modeled as outputs of dynamic systems with an impulse input. Thus the analysis of a process with disturbances may be reduced to the standard problem of analyzing the impulse response of a system which contains the combined model of the disturbance and the process. In some cases disturbances have a much more complex character and it may be advantageous to use a statistical description. Even for this type of disturbance the concept of modeling disturbances as outputs of dynamical systems is most useful. The generic disturbance used is then so called white noise, i.e. an irregular statistical signal with a specified energy content.
Figure 2.18 Four of the most usual disturbance functions. a) Step b) (Rectangular) pulse c) Impulse d) Sine wave.
Figure 2.19 Block diagram of a closed loop showing possible disturbance sources: the reference input, load disturbances acting on the process, and measurement noise at the output.
Process disturbances

Disturbances can enter at different places in a control loop. Three places are indicated in Figure 2.19. One disturbance location is seen when a change in production quality is desired. Here the purpose of the control system is to track the desired change in quality as closely as possible, and possibly to perform the grade change as fast as possible. This type of control problem is called servo or tracking control. Clearly a process start-up may be considered a, perhaps complex, set of servo problems. The command signals applied usually have a simple form, though. A second location for disturbances is load disturbances which act upon the process. Such loads may be, e.g., varying cooling requirements to a cooling flow control loop on a CST-reactor, or varying requirements of base addition to a fermentation where pH is controlled by base addition. This second type of disturbance gives rise to what is called a regulation or disturbance rejection control problem. In chemical processes changes in loads are often also of simple form, but occur at unknown points in time, e.g. due to events elsewhere in the plant. A third type of disturbance is noise, such as measurement noise, i.e. the difference between a process variable and its measurement, or process noise, where the noise source may be process parameters which vary during operation, e.g. due to fouling or catalyst activity changes. Measurement noise can both have simple form, e.g. step like calibration errors and pulse like measurement errors, and more complex form.
Process plants may be viewed as operating in mainly three different operation forms, all of which often are found within a large plant site.
Continuous operation, where the product is produced continuously at constant quality and rate.
Batch operation, where the product is produced in finite amounts, the size of which is determined by the process equipment. The batch processing time may be from hours to weeks.
Periodic operation, where part of the process is operated deliberately periodically.
The boundaries between these operation forms are not sharp. Continuous operation must be started by a so called startup procedure, consisting of a sequence of operations to bring the process from the inoperative state to the desired operating conditions. At the end of continuous operation the process must go through a shut down sequence as well. Thus continuous operation may be viewed as a batch operation with a very long batch time; there may be several years between startups. Similarly a sequence of successive batch operations may be considered a periodic operation (with a slightly broader definition than given above). Thus the distinction introduced above is determined by the frequency of the transient operations, which is lowest in continuous and highest in periodic operation. During the transient phases the actions to be taken are a sequence of starts of pumps, opening of valves etc., and the purpose of the control loops is to reach specified set points; thus we have a sequence of servo or tracking problems. During the production phase the control problem will be of disturbance rejection type.
Transient analysis
A linear time invariant process may be completely described by its step or impulse response. These responses can be determined by relatively simple experiments. The desired disturbance, a step or (im-)pulse, is introduced when the system is at rest, and the output is recorded or stored. For the impulse response analysis it is important that the pulse duration is short relative to the process response time. The disturbance may be in one of the process variables or, if measurement is difficult, a suitable, e.g. radioactive, tracer may be used. The advantage of transient analysis is that it is simple to perform, but it is sensitive to noise and disturbances. A single disturbance may ruin the experiment.

Frequency analysis

If a sine wave is introduced into a stable linear time invariant process then the output will also be a sine wave with the same frequency but shifted in time. The amplitude ratio between the input and output sine waves and the phase shift between input and output are directly related to the process dynamics and parameters (see Chapter 3). By repeating the experiment for a number of frequencies, the frequency function, consisting of the amplitude ratio and the phase shift as functions of frequency, may be obtained for the process. The advantage of frequency analysis is that the input and output signals may be bandpass filtered to remove slow disturbances and high frequency noise, thus providing relatively accurate data. The frequency function may be determined quite accurately by using correlation techniques. The main disadvantage of frequency analysis is that it can be very time consuming. This disadvantage may be partially circumvented by using an input signal which is a linear combination of many sine waves; this methodology is called spectral analysis. Both transient and frequency analysis require estimation of process parameters from the experimentally determined data. These parameters may be obtained using relatively simple methods, depending upon the particular model. If a linear time invariant model is used then the estimation may be performed relatively simply.
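One point of the frequency function can be extracted from recorded input and output sine waves by a least squares fit, which is closely related to the correlation technique mentioned above. The sketch below uses synthetic data from an assumed first order process G(s) = 1/(s + 1).

```python
import numpy as np

# Estimating one point of the frequency function: fit the output as
# y(t) = a*sin(w*t) + b*cos(w*t); then the amplitude ratio is
# sqrt(a**2 + b**2) and the phase shift is atan2(b, a).
w = 2.0                                   # test frequency, rad/s
t = np.linspace(0, 20, 2000)
u = np.sin(w * t)
# steady state response of G(s) = 1/(s+1): gain 1/sqrt(1+w^2), phase -atan(w)
gain_true, phase_true = 1 / np.sqrt(1 + w**2), -np.arctan(w)
y = gain_true * np.sin(w * t + phase_true)

Phi = np.column_stack((np.sin(w * t), np.cos(w * t)))
a, b = np.linalg.lstsq(Phi, y, rcond=None)[0]
gain_est = np.hypot(a, b)                 # amplitude ratio
phase_est = np.arctan2(b, a)              # phase shift, rad
```

Because the fit only picks out the sine and cosine components at the test frequency, disturbances at other frequencies are largely rejected, which is the practical advantage noted in the text.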
For linear time invariant models an alternative method to frequency and spectral analysis is parametric identification. In this methodology the parameters are determined directly from experimental data. The methodology may be illustrated by the following procedural steps:
Determination of model structure
Design of and carrying out the experiment
Determination of model parameters, i.e. parameter estimation
Model validation
The model structure may be chosen as a black box model, but it is possible to use available process knowledge to reduce the number of unknown parameters. The experiment can be carried out by varying the process inputs using, e.g., sequences of small positive and negative steps, and measuring both inputs and outputs. The experimental design consists in selecting suitable size and frequency content of the inputs. The experiments may, if the process is very sensitive, be carried out in closed loop, where the input signals may
be generated by feedback. An important part of the experimental design is to select appropriate filtering of the data to reduce the influence of disturbances. The unknown model parameters may be determined such that the measurements and the calculated model values coincide as well as possible. Once the model parameters have been obtained it is important to establish the validity of the model, e.g. by predicting data which has not been used for the parameter estimation. For linear time invariant models the parametric identification methodology is as effective as spectral analysis in suppressing the influence of noise.
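A minimal parametric identification sketch: the parameters of an assumed first order discrete time model are estimated by least squares from simulated input-output data, so the estimate can be checked against the known true values.

```python
import numpy as np

# Least squares estimation of (a, b) in the assumed model
# y[k] = a*y[k-1] + b*u[k-1], minimizing the sum of squared
# one-step prediction errors over the recorded data.
rng = np.random.default_rng(0)
a_true, b_true = 0.8, 0.5
u = rng.standard_normal(500)             # exciting input sequence
y = np.zeros(501)
for k in range(1, 501):
    y[k] = a_true * y[k-1] + b_true * u[k-1]

Phi = np.column_stack((y[:-1], u))       # regressors [y[k-1], u[k-1]]
a_hat, b_hat = np.linalg.lstsq(Phi, y[1:], rcond=None)[0]
```

With noise-free data the parameters are recovered essentially exactly; with measurement noise the same estimator gives the best least squares fit, and the validation step then becomes important.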
Neural nets
Neural nets represent a methodology which yields a representation of process dynamics different from those mentioned above. The methodology is behavioral, in that measured input-output data obtained during normal operation are used to capture the process dynamics. The main steps are:
Selection of net structure
Determination of net parameters
At present it seems that many observations are needed to fix the net parameters, which may limit the applicability of the method. But the application of neural nets may be an interesting alternative for complex processes if measurements are inherently slow and conventional mathematical models are not available.
2.10 Summary
We have in this chapter seen how mathematical models can be obtained from physical considerations. The model building will in general lead to state space models that are nonlinear. To be able to analyze the behavior of the process and to design controllers it is necessary to linearize the model. This is done by making a series expansion around a working point or nominal point; we then consider only deviations around this nominal point. Finally a short introduction to the identification problem was given.
3.1 Introduction
The previous chapter showed how to obtain dynamical models for various types of processes. The models can be classified in different ways, for instance
Static or dynamic
Nonlinear or linear
Time-variant or time-invariant
Distributed or lumped
Models were derived for different types of chemical processes. In Chapter 2 we found that two tanks connected in series can be described by a second order system, i.e. by two differential equations. We also found that the exothermic continuous stirred tank reactor can be described by a similar second order system. It is then convenient to handle all systems in a systematic way, independent of the physical background of the model. In this chapter we will take a systems approach and unify the treatment of the models. The models are treated as abstract mathematical or system objects. The derived properties will be valid independent of the origin of the model. The mathematical properties can afterwards be translated into physical properties of the specific systems. The difficulty of making a global analysis of the behavior of general systems leads to the idea of local behavior, or linearization around a stationary point. In most control applications this is a justified approximation, and we will see in this chapter that it leads to considerable simplifications. The class of systems that we will discuss in the following is linear time-invariant systems. The goal of this chapter is to analyze the properties of such systems. We will, for instance, investigate the behavior when the input signals are changed or when disturbances are acting
on the systems. It will then be necessary to solve and analyze ordinary linear differential equations. Linear time-invariant systems have the nice property that they can be completely described by one input-output pair. The properties of the system can thus be determined by looking at the response of the system for one input signal, for instance a step or an impulse. This property will make it possible for us to draw conclusions about the systems without solving the differential equations. This will lead to considerable simplifications in the analysis of process dynamics. Section 3.2 gives formal definitions of linearity and time-invariance. Section 3.3 shows how we can solve differential equations. The most important idea is, however, that we can gain much knowledge about the behavior of the equations without solving them. This is done using the Laplace transform. We will call this the time domain approach. The state equation in matrix form is solved in Section 3.4. The connection between the higher order differential equation form and the state space form is derived in Section 3.5. This section also discusses how to determine the response of the systems to different input signals. Simulation is a good tool for investigating linear as well as nonlinear systems. Simulation is briefly discussed in Section 3.6. Block diagrams are treated in Section 3.7. Another way to gain knowledge about a dynamical system is the frequency domain approach discussed in Section 3.8. The connections between the time domain and the frequency domain methods are given in Section 3.9.
A linear time-invariant system is conveniently described by a set of first order linear differential equations

dx(t)/dt = Ax(t) + Bu(t)
y(t) = Cx(t) + Du(t)     (3.1)

This is called an internal model or state space model. Let n be the dimension of the state vector. We will also call this the order of the system. In Chapter 2 it was shown how simple submodels could be put together into a matrix description as (3.1). This model representation comes naturally from the modeling procedure. The model (3.1) describes the internal behavior of the process. The internal variables or state variables can be concentrations, levels, temperatures etc. in the process. At other occasions it may be sufficient to describe only the relation between the inputs and the outputs. This leads to the concept of external models or input-output models. For simplicity we will assume that there is one input and one output of the system. Equation (3.1) then defines n + 1 relations for n + 2 variables (x, y, and u). We can now use these equations to eliminate all the state variables x and we will get one equation that gives the relation between u and y. This equation will be an n:th order ordinary differential equation.
d^n y/dt^n + a_1 d^(n−1)y/dt^(n−1) + ⋯ + a_n y = b_0 d^m u/dt^m + b_1 d^(m−1)u/dt^(m−1) + ⋯ + b_m u     (3.2)

Later in this section we will give a general method for eliminating x from (3.1) to obtain the input-output model (3.2). The solution of this equation will now be discussed. Before deriving the general solution we will first give the solution to a first order differential equation.
Example 3.2|Solution of a first order differential equation
Consider the first order linear time-invariant differential equation

dx(t)/dt = a x(t) + b u(t)
y(t) = c x(t) + d u(t)     (3.3)
The solution to (3.3) is given by

x(t) = e^(at) x(0) + ∫_0^t e^(a(t−τ)) b u(τ) dτ     (3.4)

That (3.4) is the solution to (3.3) is easily verified by taking the derivative of (3.4). This gives

dx/dt = a e^(at) x(0) + d/dt ( e^(at) ∫_0^t e^(−aτ) b u(τ) dτ )
      = a e^(at) x(0) + a e^(at) ∫_0^t e^(−aτ) b u(τ) dτ + e^(at) e^(−at) b u(t)
      = a e^(at) x(0) + a ∫_0^t e^(a(t−τ)) b u(τ) dτ + b u(t)
      = a x(t) + b u(t)
Figure 3.1 The solution (3.4) for a = −1 and b = 0.5, 1, and 2, when x(0) = 0 and u(t) = 1.

Figure 3.2 The solution (3.4) for a = −0.5, −1, and −2, when b/a = −1, x(0) = 0, and u(t) = 1.
The first term on the right hand side of (3.4) is the influence of the initial value and the second term is the influence of the input signal during the time interval from 0 to t. If a < 0 then the influence of the initial value will vanish as t increases. Further, the shape of the input signal will influence the solution. For instance, if u(t) = u_0 is constant then

x(t) = e^(at) x(0) + (b/a)(e^(at) − 1) u_0

The steady state value is −b u_0/a if a < 0. A change in b changes the steady state value of x, but not the speed of the response, see Figure 3.1. The speed of the response is changed when a is changed, see Figure 3.2. The response becomes faster when a becomes more and more negative. Solutions of differential equations are in general treated in courses in mathematics. One way to obtain the solution is to first find all solutions to the homogeneous equation, i.e. when the driving function is zero, and then find one solution to the inhomogeneous equation. We will not use this method in these lecture notes, since it usually gets quite complicated.
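As a quick numerical illustration (a sketch in Python, with parameter values chosen to match Figure 3.2), the closed-form solution above can be checked against a simple forward-Euler simulation of (3.3):

```python
import math

# First order system dx/dt = a*x + b*u with constant input u0 (Example 3.2).
# Parameters as in Figure 3.2: a = -1, b = 1, x(0) = 0, u0 = 1.
a, b, x0, u0 = -1.0, 1.0, 0.0, 1.0

def x_exact(t):
    """Closed-form solution x(t) = e^(at) x(0) + (b/a)(e^(at) - 1) u0."""
    return math.exp(a * t) * x0 + (b / a) * (math.exp(a * t) - 1.0) * u0

# Forward-Euler simulation of dx/dt = a*x + b*u0
dt, x, t = 1e-4, x0, 0.0
while t < 10.0:
    x += dt * (a * x + b * u0)
    t += dt

print(x_exact(10.0))   # close to the steady state -b*u0/a = 1
print(x)               # the Euler result agrees to several decimals
```

The simulation settles at the steady state −b u_0/a predicted by the formula, and the transient dies out at the rate set by a.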
Laplace transform will thus make it possible to qualitatively determine how the processes will react to different types of input signals and disturbances. The Laplace transform will be used as a tool and the mathematical details are given in courses in mathematics.
Definition
The Laplace transform is a transformation from a scalar variable t to a complex variable s. In process control t is the time. The transformation thus implies that time functions and differential equations are transformed into functions of a complex variable. The analysis of the systems can then be done simply by investigating the transformed variables. The solution of differential equations is reduced to algebraic manipulations of the transformed system and the time functions.
Definition 3.3|Laplace transform
The Laplace transform of the function f(t) is denoted F(s) and is obtained through

F(s) = L{f(t)} = ∫_{0−}^∞ e^(−st) f(t) dt     (3.5)

provided the integral exists. The function F(s) is called the transform of f(t).
Remark. Definition (3.5) is called the single-sided Laplace transform. In the double-sided Laplace transform the lower integration limit in (3.5) is −∞. In process control the single-sided transform has been used by tradition since we mostly can assume that the initial values are zero. The reason is that we study systems subject to disturbances after the system has been in steady state. It is also possible to go back from the transformed variables to the time functions by using inverse transformation.
Definition 3.4|Inverse Laplace transform
The inverse Laplace transform is defined by

f(t) = L^(−1){F(s)} = (1/2πi) ∫_{γ−i∞}^{γ+i∞} e^(st) F(s) ds     (3.6)
Example 3.3|Step function
Consider the step function

f(t) = 0,  t < 0
f(t) = 1,  t ≥ 0

The transform is

F(s) = ∫_{0−}^∞ e^(−st) dt = [−e^(−st)/s]_0^∞ = 1/s
Example 3.4|Ramp function
Consider the ramp function

f(t) = 0,   t < 0
f(t) = at,  t ≥ 0

The transform is

F(s) = ∫_{0−}^∞ a t e^(−st) dt = a/s²

Example 3.5|Sine function
Assume that f(t) = sin(ωt). Then

F(s) = ∫_{0−}^∞ sin(ωt) e^(−st) dt
     = ∫_{0−}^∞ (e^(iωt) − e^(−iωt))/(2i) · e^(−st) dt
     = (1/2i) [ −e^(−(s−iω)t)/(s − iω) + e^(−(s+iω)t)/(s + iω) ]_0^∞
     = ω/(s² + ω²)
The Laplace transform of a function exists if the integral in (3.5) converges. A sufficient condition is that

∫_0^∞ |f(t)| e^(−γt) dt < ∞     (3.7)

for some positive γ. If |f(t)| < M e^(σt) for all positive t then (3.7) is satisfied for γ > σ. The number σ is called the convergence abscissa. From the definition of the Laplace transform it follows that it is linear, i.e.

L{a_1 f_1(t) + a_2 f_2(t)} = a_1 L{f_1(t)} + a_2 L{f_2(t)} = a_1 F_1(s) + a_2 F_2(s)
Since the single-sided Laplace transform is used, F(s) does not contain any information about f(t) for t < 0. This is usually not a drawback in process control since we can define that the systems are in steady state for t < 0 and let the inputs start to influence the systems at t = 0. One of the properties of the Laplace transform that makes it well suited for analyzing dynamic systems is:
Theorem 3.1|Laplace transform of a convolution
Let the time functions f_1(t) and f_2(t) have the Laplace transforms F_1(s) and F_2(s) respectively. Then

L{ ∫_0^t f_1(τ) f_2(t − τ) dτ } = F_1(s) F_2(s)     (3.8)

The theorem implies that the transform of a convolution between two signals is the product of the transforms of the two functions. The integral on the left hand side of (3.8) occurred in (3.4) and will show up when solving (3.1) and (3.2). The following two theorems are useful when analyzing dynamical systems.
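Theorem 3.1 is easy to check numerically. In the following Python sketch (the example functions are chosen here, not taken from the notes), f_1(t) = e^(−t) and f_2(t) = e^(−2t), so F_1(s)F_2(s) = 1/((s+1)(s+2)) = 1/(s+1) − 1/(s+2), whose inverse transform is e^(−t) − e^(−2t). A Riemann-sum approximation of the convolution integral should reproduce this:

```python
import math

# Check L{ f1 * f2 } = F1(s) F2(s) for f1(t) = e^-t, f2(t) = e^-2t.
# The product transform 1/((s+1)(s+2)) has the inverse e^-t - e^-2t.
def f1(t): return math.exp(-t)
def f2(t): return math.exp(-2.0 * t)

def convolution(t, n=20000):
    """Midpoint-rule approximation of int_0^t f1(tau) f2(t - tau) dtau."""
    h = t / n
    return sum(f1((k + 0.5) * h) * f2(t - (k + 0.5) * h) for k in range(n)) * h

t = 1.5
exact = math.exp(-t) - math.exp(-2.0 * t)
print(convolution(t), exact)   # the two values agree to several decimals
```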
Theorem 3.2|Final value theorem
Let F(s) be the Laplace transform of f(t). Then

lim_{t→∞} f(t) = lim_{s→0} s F(s)

if the limit on the left hand side exists.
Theorem 3.3|Initial value theorem
Let F(s) be the Laplace transform of f(t). Then

lim_{t→0} f(t) = lim_{s→∞} s F(s)

if the limit on the left hand side exists. These two theorems can be used to obtain steady state and starting values of solutions to differential equations. To obtain the Laplace transform of a model like (3.2) it is necessary to derive the transform for a time derivative of a signal.
Theorem 3.4|Laplace transform of time derivative
Let f(t) have the Laplace transform F(s). Then

L{ df(t)/dt } = s F(s) − f(0)

where f(0) is the initial value of the function f(t).
Proof: The theorem is proved using partial integration.
If the initial values are zero then taking the time derivative corresponds to multiplication by s. If the differential equation has initial values the expressions become more complex. Appendix A contains a summary of some properties of the Laplace transform. From Table A.1 it is, for instance, seen that integration in the time domain corresponds to division by s in the transformed variables. The Laplace transform will be used together with a table of transform pairs for basic time functions. A table of Laplace transforms is found in Appendix A, Table A.2.
Consider the linear ordinary differential equation with constant coefficients

d^n y/dt^n + a_1 d^(n−1)y/dt^(n−1) + ⋯ + a_n y = b_0 d^m u/dt^m + b_1 d^(m−1)u/dt^(m−1) + ⋯ + b_m u     (3.9)

where u(t) is a known time function such that u(t) = 0 for t < 0. Assume that all initial values are zero. The Laplace transform of (3.9) is

(s^n + a_1 s^(n−1) + ⋯ + a_n) Y(s) = A(s) Y(s) = (b_0 s^m + b_1 s^(m−1) + ⋯ + b_m) U(s) = B(s) U(s)     (3.10)

where U(s) is the Laplace transform of u(t). Solving for Y(s) gives

Y(s) = (B(s)/A(s)) U(s) = G(s) U(s)     (3.11)
G(s) is called the transfer function of the system. Equation (3.11) shows that using the Laplace transform it is possible to split the transform of the output into an input dependent part and a system dependent part. The time function y(t) is now obtained through inverse transformation. This is not done by using (3.6), but by using the table of transform pairs given in Appendix A. The first step is to make a partial fraction expansion of the right hand side of (3.11) into first and second order terms. Assume that

U(s) = B_f(s)/A_f(s)

and consider the roots of the equations

A(s) = 0
A_f(s) = 0

Assume that the roots are p_1, …, p_n and r_1, …, r_r respectively. If the roots are distinct then the Laplace transform of the output can be written as

Y(s) = c_1/(s − p_1) + ⋯ + c_n/(s − p_n) + d_1/(s − r_1) + ⋯ + d_r/(s − r_r)     (3.12)

Using the table in Appendix A and the linearity property the time function is given by

y(t) = c_1 e^(p_1 t) + ⋯ + c_n e^(p_n t) + d_1 e^(r_1 t) + ⋯ + d_r e^(r_r t)

If the roots are complex it is convenient to combine the complex conjugate roots to obtain second order terms with real coefficients. If the root p_i has the multiplicity k_i the time function is of the form

P_{k_i−1}(t) e^(p_i t)

where P_{k_i−1}(t) is a polynomial of degree less than k_i. Compare Table A.2:26.
Example 3.6|Multiple roots of A(s)
Let the system be described by

Y(s) = G(s) U(s) = 1/(s + 2)³ · U(s)

and let the input be an impulse, i.e., U(s) = 1. Using Table A.2:26 gives that the Laplace transform

Y(s) = 1/(s + 2)³

corresponds to the time function

y(t) = (t²/2) e^(−2t)
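The transform pair in Example 3.6 can be checked numerically: evaluating the Laplace integral of y(t) = (t²/2)e^(−2t) at a test point s should reproduce 1/(s+2)³. A small Python sketch (the test value of s is chosen arbitrarily):

```python
import math

# Numerically verify L{ (t^2/2) e^-2t } = 1/(s+2)^3 at the test point s = 1.
def y(t):
    return 0.5 * t * t * math.exp(-2.0 * t)

def laplace(f, s, T=40.0, n=200000):
    """Midpoint-rule approximation of int_0^T e^-st f(t) dt (tail is negligible)."""
    h = T / n
    return sum(math.exp(-s * (k + 0.5) * h) * f((k + 0.5) * h) for k in range(n)) * h

s = 1.0
print(laplace(y, s), 1.0 / (s + 2.0) ** 3)   # both close to 1/27
```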
The methodology of solving differential equations using the Laplace transform is illustrated using two examples.
Example 3.7|Solution of a second order differential equation
Determine the solution to the differential equation

d²y/dt² + 4 dy/dt + 3y = u(t)     (3.13)

where the input u(t) = 5 sin 2t and with the initial conditions

y(0) = 0
dy/dt = 0 when t = 0

The differential equation (3.13) can now be solved using the Laplace transform. Taking the transform of the equation and the driving function gives

(s² + 4s + 3) Y(s) = 10/(s² + 4)

The transform of the output is

Y(s) = 10/((s² + 4s + 3)(s² + 4)) = 10/((s + 1)(s + 3)(s² + 4))
     = 1/(s + 1) − (5/13) · 1/(s + 3) − (8/13) · s/(s² + 4) − (1/13) · 2/(s² + 4)     (3.14)

The table now gives

y(t) = e^(−t) − (5/13) e^(−3t) − (8/13) cos 2t − (1/13) sin 2t

Using the Laplace transform is thus a straightforward and easy way to solve differential equations.
Example 3.8|ODE with initial value
Let the system be described by

dy/dt + ay = u

with y(0) = y_0 and let u be a unit ramp, i.e. u(t) = t, t ≥ 0. Laplace transformation of the system gives, see Appendix A,

s Y(s) − y_0 + a Y(s) = 1/s²

Y(s) = y_0/(s + a) + 1/(s²(s + a))
     = y_0/(s + a) + A/(s + a) + (Bs + C)/s²

The coefficients in the partial fraction expansion must fulfill the relation

As² + Bs² + Bas + Cs + Ca = 1

which gives

A = 1/a²,  B = −1/a²,  C = 1/a

The solution to the equation is now

y(t) = e^(−at) y_0 + (1/a²) e^(−at) − 1/a² + t/a

The influence of the initial value will decrease as t increases if a > 0.
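The closed-form solution of the second order example above can be verified by integrating (3.13) numerically and comparing with the expression obtained from the partial fraction expansion (a Python sketch; the step size and comparison time are chosen ad hoc):

```python
import math

# RK4 integration of d2y/dt2 + 4 dy/dt + 3 y = 5 sin 2t with
# y(0) = y'(0) = 0, compared against the closed-form solution from (3.14).
def deriv(t, y, v):
    """State derivatives for (y, v) with v = dy/dt."""
    return v, 5.0 * math.sin(2.0 * t) - 4.0 * v - 3.0 * y

def rk4(t_end, h=1e-3):
    n = int(round(t_end / h))
    t, y, v = 0.0, 0.0, 0.0
    for _ in range(n):
        k1y, k1v = deriv(t, y, v)
        k2y, k2v = deriv(t + h/2, y + h/2*k1y, v + h/2*k1v)
        k3y, k3v = deriv(t + h/2, y + h/2*k2y, v + h/2*k2v)
        k4y, k4v = deriv(t + h, y + h*k3y, v + h*k3v)
        y += h/6 * (k1y + 2*k2y + 2*k3y + k4y)
        v += h/6 * (k1v + 2*k2v + 2*k3v + k4v)
        t += h
    return y

def y_exact(t):
    """y(t) = e^-t - (5/13)e^-3t - (8/13)cos 2t - (1/13)sin 2t."""
    return (math.exp(-t) - 5.0/13.0 * math.exp(-3.0*t)
            - 8.0/13.0 * math.cos(2.0*t) - 1.0/13.0 * math.sin(2.0*t))

print(rk4(3.0), y_exact(3.0))   # the two values agree to many decimals
```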
The solution thus satisfies (3.1).
Remark. As in Example 3.2 the solution (3.15) consists of two parts. The first part depends on the initial value of the state vector x(0). This part is also called the solution of the free system or the solution to the homogeneous system. The second part depends on the input signal u over the time interval from 0 to t. The exponential matrix is an essential part of the solution. The characteristics of the solution are determined by the matrix A. The matrix e^(At) is called the fundamental matrix or the state transition matrix of the linear system (3.1). The influence of the B matrix is essentially a weighting of the input signals.
We will now show that the eigenvalues of the matrix A play an important role for the solution of (3.1). The eigenvalues of the square matrix A are obtained by solving the characteristic equation

det(λI − A) = 0     (3.16)

The number of roots of (3.16) is the same as the order of A. Let us first
assume that the A matrix in (3.1) is diagonal, i.e.

A = diag[λ_1, λ_2, …, λ_n]

For a diagonal matrix the diagonal elements λ_i are the eigenvalues of A. Notice that we may allow λ_i to be a complex number. The matrix exponential is then

e^(At) = diag[e^(λ_1 t), e^(λ_2 t), …, e^(λ_n t)]

The eigenvalues of the A matrix will thus determine the time functions that build up the solution. Eigenvalues with positive real part give solutions that increase with time, while eigenvalues with negative real part give solutions that decay with time. If A has distinct eigenvalues then it is always possible to find an invertible transformation matrix T such that

T A T^(−1) = Λ = diag[λ_1, …, λ_n]

where Λ is a diagonal matrix with the eigenvalues of A in the diagonal. Introduce a new set of state variables

z(t) = T x(t)     (3.17)

then

dz/dt = T dx/dt = T A x + T B u = T A T^(−1) z + T B u = Λ z + T B u

The solution z(t) of this equation contains exponential functions of the form e^(λ_i t). When we have obtained the solution z(t) we can use (3.17) to determine that x(t) = T^(−1) z(t) will become linear combinations of these exponential functions. We can now conclude that if A has distinct eigenvalues then the solution x(t) to (3.1) will be a linear combination of the exponential functions e^(λ_i t). To give the solution to (3.1) for a general matrix A we need to introduce the concept of multiplicity of an eigenvalue. The multiplicity of an eigenvalue is defined as the number of eigenvalues with the same value. If the matrix A has an eigenvalue λ_i with multiplicity k_i then it can be shown that the corresponding time function in e^(At) becomes

P_{k_i−1}(t) e^(λ_i t)

where P_{k_i−1}(t) is a polynomial in t with a maximum degree of k_i − 1.
Example 3.9|Eigenvalues with multiplicity two
Assume that

A = [ −1   0
       0  −1 ]
This matrix has the eigenvalue −1 with multiplicity 2. The matrix exponential is

e^(At) = [ e^(−t)   0
           0        e^(−t) ]

Now change A to

A = [ −1   1
       0  −1 ]

which also has the eigenvalue −1 with multiplicity 2. The matrix exponential is now

e^(At) = [ e^(−t)   t e^(−t)
           0        e^(−t)  ]
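The t e^(−t) entry in the second case can be checked by summing the series e^(At) = Σ_k (At)^k/k! numerically. A small Python sketch for the 2×2 matrix above (series truncation chosen ad hoc):

```python
import math

# Matrix exponential of a 2x2 matrix by truncating the series sum (At)^k / k!.
def mat_mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def expm(A, t, terms=60):
    At = [[a * t for a in row] for row in A]
    result = [[1.0, 0.0], [0.0, 1.0]]   # identity, the k = 0 term
    power = [[1.0, 0.0], [0.0, 1.0]]
    fact = 1.0
    for k in range(1, terms):
        power = mat_mul(power, At)
        fact *= k
        for i in range(2):
            for j in range(2):
                result[i][j] += power[i][j] / fact
    return result

A = [[-1.0, 1.0], [0.0, -1.0]]   # the Jordan block with eigenvalue -1
E = expm(A, 2.0)
print(E[0][0], E[0][1])   # e^-2 and 2 e^-2, i.e. the t*e^-t entry at t = 2
```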
To summarize, the solution of the free system is a sum of exponential functions P_{k_i−1}(t) e^(λ_i t) where λ_i are the eigenvalues of the matrix A. Real eigenvalues correspond to real exponential functions. The characteristic equation (3.16) can also have complex roots

λ = σ + iω

Such a root corresponds to the solution

e^(λt) = e^(σt + iωt) = e^(σt)(cos ωt + i sin ωt)

How the solutions depend on σ and ω is treated in detail in Section 3.9. Each exponential term defines a mode of the free system. If all eigenvalues of the matrix A have negative real part then all the components of the state vector will approach zero independent of the initial value, when the input is zero. The system is then said to be stable. If A has any eigenvalue with positive real part then the system is unstable, since at least one exponential will increase without bounds. We will later show how we can use the eigenvalues of A to draw conclusions about the system without explicitly solving the differential equations.
The solution (3.15) can also be obtained using the Laplace transform. Taking the transform of (3.1) gives

s X(s) − x(0) = A X(s) + B U(s)
Y(s) = C X(s) + D U(s)

Notice that we include the initial value of x. Solving for X(s) gives

(sI − A) X(s) = x(0) + B U(s)
X(s) = (sI − A)^(−1) x(0) + (sI − A)^(−1) B U(s)     (3.18)

Similar to the scalar case it can be shown that

L^(−1){ (sI − A)^(−1) } = e^(At)

See Appendices A and B. Further the inverse transform of the second term in the right hand side of (3.18) is a convolution

L^(−1){ (sI − A)^(−1) B U(s) } = ∫_0^t e^(A(t−τ)) B u(τ) dτ

We thus get

x(t) = e^(At) x(0) + ∫_0^t e^(A(t−τ)) B u(τ) dτ
y(t) = C x(t) + D u(t)

which is the same as (3.15).
G(s) is the transfer function of the system. The transfer function gives the properties of the system and can be used to determine its behavior without solving the differential equations. We may write

G(s) = C (sI − A)^(−1) B + D = B(s)/A(s)     (3.20)

where B(s) and A(s) are polynomials in the Laplace variable s. Notice that

A(s) = det(sI − A)

i.e. the characteristic polynomial of the matrix A, see (3.16). The degree of A(s) is n, where n is the dimension of the state vector. B(s) is of degree less than or equal to n. By examining the eigenvalues of the matrix A or the roots of the denominator of the transfer function we can determine the time functions that constitute the output of the system.
Remark. Traditionally A and B are used as notations for the matrices in (3.1) as well as for the polynomials in (3.20). When there is a possibility for confusion we will use arguments for the polynomials, i.e. we write A(s) and B(s). From (3.19) and (3.20) we get

(s^n + a_1 s^(n−1) + ⋯ + a_n) Y(s) = (b_0 s^m + b_1 s^(m−1) + ⋯ + b_m) U(s)

where n ≥ m. Taking the inverse transform gives (3.2). The connection between the state space form (3.1) and the input-output form (3.2) is summarized by the following theorem.
Theorem 3.6|From state space to input-output form
The state space model (3.1) has the transfer function

G(s) = C (sI − A)^(−1) B + D = B(s)/A(s)
Remark 1. If the system has several inputs and outputs then G(s) will be a matrix. The elements in G(s) are then rational functions in s. Element (i, j) gives the influence of the j:th input on the i:th output.
Remark 2. There are also ways to go from the input-output form to state space form. This is sometimes referred to as realization theory. The state then introduced will, however, not have any physical meaning.
Example 3.10|From state space to input-output form
Derive the input-output relation between u and y for the system

dx/dt = [ −2  1 ] x + [ 1 ] u
        [  1  0 ]     [ 0 ]

y = [ 1  3 ] x + 2u

First compute

(sI − A)^(−1) = [ s + 2  −1 ]^(−1)  =  1/(s² + 2s − 1) [ s  1
                [ −1      s ]                            1  s + 2 ]

Equation (3.19) gives

Y(s) = [ 1  3 ] (sI − A)^(−1) [ 1 ] U(s) + 2 U(s)
                              [ 0 ]
     = (s + 3)/(s² + 2s − 1) U(s) + 2 U(s)
     = (2s² + 5s + 1)/(s² + 2s − 1) U(s) = (B(s)/A(s)) U(s)

The differential equation relating the input and the output is thus

d²y/dt² + 2 dy/dt − y = 2 d²u/dt² + 5 du/dt + u
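The algebra in Example 3.10 can be double-checked by evaluating C(sI − A)^(−1)B + D at an arbitrary test point s and comparing with the rational expression. A small Python sketch (the 2×2 inverse is written out by hand):

```python
# Evaluate G(s) = C (sI - A)^-1 B + D for the system of Example 3.10
# at a test point and compare with (2s^2 + 5s + 1)/(s^2 + 2s - 1).
A = [[-2.0, 1.0], [1.0, 0.0]]
B = [1.0, 0.0]
C = [1.0, 3.0]
D = 2.0

def transfer(s):
    # Entries of sI - A for the 2x2 case
    m11, m12 = s - A[0][0], -A[0][1]
    m21, m22 = -A[1][0], s - A[1][1]
    det = m11 * m22 - m12 * m21
    # x = (sI - A)^-1 B, written out with the adjugate formula
    x1 = (m22 * B[0] - m12 * B[1]) / det
    x2 = (-m21 * B[0] + m11 * B[1]) / det
    return C[0] * x1 + C[1] * x2 + D

s = 3.0
print(transfer(s))                            # from the state space form
print((2*s*s + 5*s + 1) / (s*s + 2*s - 1))    # from B(s)/A(s): both 17/7
```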
Impulse response
Consider the solution (3.15) to the state space equation. Let the input be an impulse, i.e. u(t) = δ(t). Then

x(t) = e^(At) x(0) + ∫_0^t e^(A(t−τ)) B δ(τ) dτ = e^(At) x(0) + e^(At) B

and

y(t) = C e^(At) x(0) + C e^(At) B + D δ(t)

If the initial value x(0) is zero then

y(t) = C e^(At) B + D δ(t) = h(t)

The function h(t) is called the impulse response of the system. The impulse response can physically be interpreted as the output when the input is an impulse. Once again it is seen that the fundamental matrix e^(At) plays an important role. Consider the solution of the state equation when the initial value of the state is zero and when the input has been applied from −∞. In analogy with (3.15) we get

x(t) = ∫_{−∞}^t e^(A(t−τ)) B u(τ) dτ

and

y(t) = C ∫_{−∞}^t e^(A(t−τ)) B u(τ) dτ = ∫_{−∞}^t h(t − τ) u(τ) dτ

We have now proved the following theorem.
Theorem 3.7|Input-output relation
Assume that the process has the impulse response h(t) and that the initial value is zero. The output generated by the input is then given by

y(t) = ∫_{−∞}^t u(τ) h(t − τ) dτ = ∫_0^∞ u(t − τ) h(τ) dτ     (3.21)

Remark. The impulse response h(t) is also called the weighting function.
Figure 3.3 Impulse responses as function of time. The level of the upper and lower tanks of the two tank system when the input is an impulse in the inflow and when T2 = 1. a) T1 = 0.1 b) T1 = 1 c) T1 = 2.5.
A series connection of two liquid tanks, compare Subexample 2.1.6 and Example 2.13, can be described by the state equations (in linearized form) given in (3.22).

Step response

Figure 3.4 Step response of a first order system. The time axis is in units of the time constant T.
With a unit step input, u(t) = S(t), the solution (3.15) can now be written as

x(t) = e^(At) x(0) + ∫_0^t e^(A(t−τ)) B S(τ) dτ
     = e^(At) x(0) + e^(At) A^(−1)(−e^(−At) + I) B
     = e^(At) x(0) + A^(−1)(e^(At) − I) B

The last equality is obtained since A^(−1) and e^(At) commute, i.e., A^(−1) e^(At) = e^(At) A^(−1). See Appendix B. Finally, for x(0) = 0 we get

y(t) = C A^(−1)(e^(At) − I) B + D,   t ≥ 0

The step response of a system can easily be obtained by making a step change in the input and by measuring the output. Notice that the system must be in steady state when the step change is made. The initial values will otherwise influence the output, compare (3.15). There are several ways to determine a controller for a system based on the knowledge of the step response, see Chapters 4 and 5.
Example 3.12|Step response of a first order system
Consider a first order system with the transfer function

G(s) = K/(1 + Ts)

and determine its step response. The Laplace transform of the output is

Y(s) = K/((1 + Ts)s) = K/s − KT/(1 + Ts)

Using the table in Appendix A we get the step response

y(t) = K(1 − e^(−t/T))     (3.23)

Figure 3.4 shows the step response of the system. The parameter T determines how fast the system reacts and is called the time constant. A long time constant implies that the system is slow. From (3.23) we see that the steady state value is equal to K. We call K the gain of the system. Further y(T) = K(1 − e^(−1)) ≈ 0.63K. The step response thus reaches 63% of its steady state value after one time constant. The steady state value is practically reached after 3-5 time constants. Further the initial slope of the step response is ẏ(0) = K/T. This can also be obtained by the initial value theorem.
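The 63% rule can be confirmed numerically (a Python sketch; the values of K and T are chosen arbitrarily):

```python
import math

# Step response y(t) = K (1 - e^(-t/T)) of G(s) = K/(1 + sT), Eq. (3.23).
K, T = 2.0, 3.0

def step_response(t):
    return K * (1.0 - math.exp(-t / T))

print(step_response(T) / K)       # about 0.632: 63% after one time constant
print(step_response(5 * T) / K)   # about 0.993: practically at steady state
print(K / T)                      # initial slope K/T, cf. the initial value theorem
```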
Example 3.13|Transfer function of the liquid level tank
The linearized model of the liquid level storage tank was derived in Example 2.13, see (2.71). The input is the inflow and the output is the level. The transfer function is

G(s) = β/(s + α)     (3.24)

where

α = a√(2gh0)/(2Ah0)
β = 1/A

Equation (3.24) can be written as

G(s) = (β/α)/(s/α + 1) = K/(sT + 1)

This is of the same form as the transfer function in Example 3.12 with

K = 2h0/(a√(2gh0)) = (1/a)√(2h0/g)
T = 2Ah0/(a√(2gh0)) = (A/a)√(2h0/g)

Figure 3.4 can then represent the step response also for this system. The gain and the time constant depend on the process and the linearizing point h0. An increase in the tank area A increases T. A decrease in the outlet area a will increase K and T and the response will be slower. An increase in the level will also increase the gain and time constant.
We have shown how the response to a general input can be represented using either the step response or the impulse response. Equation (3.21) is an input-output model that is fundamental and will be used several times in the sequel. Applying (3.8) to (3.21) gives

Y(s) = L{ ∫_{−∞}^t u(τ) h(t − τ) dτ } = H(s) U(s)     (3.25)

where H(s) is the Laplace transform of h(t). To obtain the second equality we have assumed that u(t) = 0 for t < 0. Equation (3.25) also implies that the transfer function is the Laplace transform of the impulse response, i.e.

L{h(t)} = H(s) = G(s)

Compare (3.11).
There are several important concepts that can be defined using impulse and step responses.
A system is causal if a change in the input signal at time t_0 does not influence the output at times t < t_0. For a causal system the impulse response has the property

h(t) = 0,   t < 0

Assume that the impulse response is decaying in magnitude when t goes to infinity such that

∫_0^∞ |h(t)| dt < ∞

It then follows from (3.21) that a bounded input gives a bounded output. (Such systems are called input-output stable systems, see Chapter 4.) Previously in this section it was shown that the system will reach stationarity for a constant input if all eigenvalues of A have negative real part. From (3.1) and (3.2) it is easy to compute the steady state gain of the process when the input is constant. In stationarity or steady state all the derivatives are zero. This gives the following expressions for the steady state gain when the input is a unit step

K = −C A^(−1) B + D = b_m/a_n = ∫_0^∞ h(t) dt

The invertibility of A will be discussed in connection with stability in Chapter 4. The steady state gain K gives the relation between the input and the output in steady state. If the input changes by Δu then the output will change by KΔu in steady state. Compare Example 3.2. The processes have been described using state space models and input-output models above. The connection between the two types of descriptions was discussed using the Laplace transform. The input-output model corresponding to (3.1) is

Y(s) = ( C(sI − A)^(−1) B + D ) U(s) = G(s) U(s)     (3.26)

when the initial value is equal to zero. The transfer function G(s) is thus easily computed from the state space representation. The transfer function can be written in several forms

G(s) = (b_0 s^m + b_1 s^(m−1) + ⋯ + b_m)/(s^n + a_1 s^(n−1) + ⋯ + a_n) = B(s)/A(s)     (3.27)

     = K_0 (s − z_1)(s − z_2) ⋯ (s − z_m)/((s − p_1)(s − p_2) ⋯ (s − p_n))     (3.28)

     = K s^(l−k) (1 + T_{z1} s) ⋯ (1 + T_{zm'} s)/((1 + T_{p1} s) ⋯ (1 + T_{pn'} s))     (3.29)

     = c_1/(s − p_1) + ⋯ + c_n/(s − p_n)     (3.30)

In (3.29) l and k denote the numbers of zeros and poles at the origin.
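The equality of the different steady state gain expressions can be illustrated numerically. The Python sketch below uses the first order system G(s) = K/(1 + Ts) of Example 3.12, for which h(t) = (K/T)e^(−t/T) (parameter values chosen arbitrarily):

```python
import math

# Three expressions for the steady state gain of G(s) = K/(1 + Ts):
# -C A^-1 B + D for the state space form dx/dt = -(1/T)x + (K/T)u, y = x,
# b_m/a_n for G(s) = (K/T)/(s + 1/T), and the integral of the
# impulse response h(t) = (K/T) e^(-t/T).
K, T = 2.0, 3.0

gain_state_space = -(1.0 / (-1.0 / T)) * (K / T)   # -C A^-1 B + D with C = 1, D = 0
gain_polynomials = (K / T) / (1.0 / T)             # b_m / a_n

dt = 1e-3   # Riemann sum of the impulse response out to 20 time constants
gain_integral = sum((K / T) * math.exp(-(k + 0.5) * dt / T)
                    for k in range(int(60.0 / dt))) * dt

print(gain_state_space, gain_polynomials, gain_integral)   # all close to K = 2
```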
Figure 3.5 Singularity diagram for the process G(s) = K(s + 1)/(s² + 4s + 8).
Remark. To obtain (3.30) from (3.27) it is assumed that all roots of A(s) are distinct, i.e. have multiplicity 1. If A(s) has complex roots then the p_i (and T_{pi}) are complex. Similarly for B(s). The denominator A(s) of G(s) is called the characteristic polynomial of the process and it is obtained as

A(s) = det(sI − A) = (s − p_1)(s − p_2) ⋯ (s − p_n)

Compare (3.16). The roots p_i of the characteristic polynomial are the poles or the modes of the process. Notice that the poles of the system are the same as the eigenvalues of the matrix A in the state space representation (3.1). The roots of the numerator B(s) of G(s) are the zeros of the process. K = G(0) is the steady state gain of the process. The parameters T_{pi} = −1/p_i in (3.29) are called the time constants of the process. Using (3.30) the impulse response of the process (3.25) is given by

h(t) = L^(−1){G(s)} = c_1 e^(p_1 t) + ⋯ + c_n e^(p_n t)

The poles (or time constants) thus determine which exponential functions are present in the time responses of the system. Apart from the modes of the process we also get the influence of the modes of the input signal, compare (3.12). The parameters c_i determine the weight of each mode. The process zeros are thus an indirect measure of these weights. The poles and the zeros are important for determining the time response of a system. It is therefore common to represent a process graphically by its singularity diagram. The poles are represented by crosses and the zeros by circles in the complex s-plane, see Figure 3.5. To fully specify the transfer function from the singularity diagram it is also necessary to specify the steady-state gain of the system.
Figure 3.6 Step response of the system G(s) = 8(s + 1)/(s² + 4s + 8) plotted using Matlab.
3.6 Simulation
A full solution of a differential equation can be complicated and it can be difficult to get a feel for the nature of the solution. One illustrative way is to use simulation to investigate the properties of a system. Through simulation it is possible to find out how different parameters influence the solution. Simulation is therefore a very good complement to analytical methods. The simulation package Simulink developed by MathWorks is one commonly used general simulation package. Simulink is based on a graphical interface and block diagram symbols. Transfer functions, poles and zeros, step responses, and impulse responses are easily computed directly in Matlab. Simulations will be used throughout the lecture notes to investigate systems and different types of controllers.
Example 3.14|Step response simulation using Matlab
The step response of the system in Figure 3.5 for K = 8 can be obtained in Matlab using the commands
K = 8
A = [1 4 8]
B = [1 1]
sys = tf(K*B, A)
[y, t, x] = step(sys, 5)
plot(t, y)
xlabel('Time')
ylabel('Output')
3.7 Block Diagrams

Figure 3.7 Basic couplings of two blocks G1 and G2.
Figure 3.9 Block diagram with controller, actuator, subprocess 1, subprocess 2, and sensor.
between subsystems it is necessary to treat them as one system. For instance, the two tanks in Figure 3.8(b) must be treated together.
Example 3.15|Subdivision of a system
In the modeling phase it is advantageous to divide the system into several subprocesses. This makes it possible to build up a library of standard units. The units are then connected together into the total system. Figure 3.9 shows a typical block diagram with blocks for subprocesses, sensor, controller and actuator, for instance a valve. The three basic couplings can be used to simplify a complex system to derive the total transfer function from the input to the output. A straightforward way to derive the total transfer function from inputs to the output is to introduce notations for the internal signals and write down the relations between the signals. The extra signals are then eliminated one by one until the final expression is obtained. Another way to simplify the derivation is the so called backward method. The method can be used on systems without inner feedback loops.
Example 3.16|The backward method
Consider the system in Figure 3.10 and derive the total transfer function from U(s) and V(s) to Y(s). Start from the output end of the block diagram and write down the expression for the Laplace transform of the output. This is expressed in signals coming from the left. These signals are then expressed in other signals and so on. For the example we have
Figure 3.10 Block diagram with the blocks H1(s), G1(s), G2(s), and H2(s), the internal signals E(s) and Y1(s), the inputs U(s) and V(s), and the output Y(s).

Figure 3.11 Process Gp(s) with input U(s) and output Y(s).
After some training it is easy to write down the last expression directly without introducing the internal variables Y1 and E. The expression for the Laplace transform of the output is now given as

Y(s) = (G1 G2 H1)/(1 + G1 G2 H2) U(s) + G2/(1 + G1 G2 H2) V(s)
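The three basic couplings (series, parallel, feedback) can also be sketched in code. Below is a small Python sketch (the helper names are hypothetical) that represents a transfer function as a pair of polynomial coefficient lists and composes them:

```python
# Transfer functions as (num, den) coefficient lists in descending
# powers of s, combined with the three basic block couplings.
def poly_mul(p, q):
    """Multiply two polynomials given as coefficient lists."""
    r = [0.0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def poly_add(p, q):
    """Add two polynomials, aligning lowest-order terms."""
    n = max(len(p), len(q))
    p = [0.0] * (n - len(p)) + list(p)
    q = [0.0] * (n - len(q)) + list(q)
    return [a + b for a, b in zip(p, q)]

def series(g1, g2):
    # G1*G2
    (n1, d1), (n2, d2) = g1, g2
    return poly_mul(n1, n2), poly_mul(d1, d2)

def parallel(g1, g2):
    # G1 + G2
    (n1, d1), (n2, d2) = g1, g2
    return poly_add(poly_mul(n1, d2), poly_mul(n2, d1)), poly_mul(d1, d2)

def feedback(g, h):
    # G/(1 + G*H), i.e. G in the forward path with H in a feedback loop
    (ng, dg), (nh, dh) = g, h
    return poly_mul(ng, dh), poly_add(poly_mul(dg, dh), poly_mul(ng, nh))

# Two first order systems: G1 = 1/(s+1), G2 = 2/(s+2)
g1 = ([1.0], [1.0, 1.0])
g2 = ([2.0], [1.0, 2.0])
num, den = series(g1, g2)
print(num, den)   # series connection: 2/(s^2 + 3s + 2)
```

Repeated application of these three operations reduces a block diagram without crossing loops to a single transfer function, exactly as done by hand in Example 3.16.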
Motivation
impulse, or a periodic signal. The steady state influence of a step disturbance is given by the steady state gain G_d(0). The disturbance may also be periodic. Examples of periodic or almost periodic disturbances are

Measurement noise due to hum from the power supply.
Unbalances in rotating mechanical parts.
Influence of daily variation in outdoor temperature.
Variation in feed concentration to a unit process.

In Section 3.5 we showed how a linear system could be characterized by giving its step or impulse response. In this section we will investigate how periodic inputs or disturbances in steady state influence the output of the process. An example is given before the general results are derived.
(3.31)
where a > 0 and assume that the input signal is a sinusoidal signal u0 sin !t. The Laplace transform of the input is 0! U (s) = s2u+ !2
b u0! Y (s) = s + a s2 + !2 1 a;s 0! = !bu 2 + a2 s + a + s2 + ! 2 The rst part on the right hand side has an inverse transform which is a decaying exponential since a > 0. This part will thus vanish as t increases. The second term represents sinusoidal signals with frequency ! . Asymptotically we get a sin !t ; ! cos !t ya (t) = bu0 !2 + a2 ! 2 + a2 = u0 A(! ) sin(!t + '(! )) where A(!) = p 2b 2 (3.32) ! +a '(!) = ; arctan ! (3.33) a +n
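Formulas (3.32) and (3.33) are easy to check with a direct simulation. The Python sketch below (parameter values a, b, ω chosen arbitrarily; forward Euler used only as a simple illustrative integrator) integrates dy/dt = −ay + b·u0 sin ωt well past the transient and compares the output with u0 A(ω) sin(ωt + φ(ω)):

```python
import math

# First-order process dy/dt = -a*y + b*u(t), driven by u(t) = u0*sin(w*t)
a, b, u0, w = 1.0, 2.0, 1.0, 1.0
dt, y = 1e-4, 0.0
n_steps = 300_000                      # integrate to t = 30, far past exp(-a*t)
for n in range(n_steps):
    t = n * dt
    y += dt * (-a * y + b * u0 * math.sin(w * t))
t_end = n_steps * dt

# asymptotic gain and phase according to (3.32) and (3.33)
A = b / math.sqrt(w**2 + a**2)
phi = -math.atan(w / a)
y_pred = u0 * A * math.sin(w * t_end + phi)
print(abs(y - y_pred))                 # small: simulation matches the formulas
```

The agreement improves as the integration step dt is reduced, since forward Euler only approximates the continuous system.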
The derivation above shows that asymptotically the response to a sinusoidal input is a sinusoidal output with the same frequency but with a change in amplitude and phase. The excitation with a single frequency forces the output to vary with the same frequency. To derive the result we used that the system is linear and stable. We can thus regard the sinusoidal response determined by A(ω) and φ(ω) as a frequency dependent gain and phase shift. The responses of the system (3.30) to sinusoidal inputs of different frequencies are shown in
Figure 3.12 The input (dashed) and output (full) when the signal sin ωt is applied to the system (3.30) with zero initial value, when a = 1, b = 2, and (a) ω = 0.2, (b) ω = 1, (c) ω = 5.
Figure 3.12. The influence of the transient is seen in Figure 3.13. In the figure y(0) ≠ 0, but there will be a transient even when y(0) = 0. This is due to the response from the stable pole in −a. The gain and phase as functions of the frequency are given in Figure 3.14. The amplitude of the output A(ω) decreases with increasing input frequency. The phase difference between the input and the output increases with increasing frequency.
Using the same idea as in the example it is possible to derive the sinusoidal response of a general stable system. Let the system have the transfer function G(s), which has all poles in the left half s-plane. The system is of order n. The Laplace transform of the output is now

Y(s) = G(s) u0 ω/(s² + ω²) = Σ_{j=1}^{n} cj/(s − pj) + d1/(s + iω) + d2/(s − iω)    (3.34)
The coefficients d1 and d2 can be obtained by multiplying (3.34) by s + iω and s − iω, respectively, and evaluating the result for s = −iω and s = iω, respectively. This gives

d1 = −(u0/2i) G(−iω)
d2 = (u0/2i) G(iω)

The asymptotic time function is then

ya(t) = (u0/2i) (−G(−iω) e^{−iωt} + G(iω) e^{iωt})    (3.35)
Figure 3.13 The input and output when the signal sin ωt is applied to the system (3.30) when a = 1, b = 2, ω = 5, and y(0) = 2.
Figure 3.14 The frequency dependence (in linear scales) of (a) the gain A(ω) given by (3.32), (b) the phase shift given by (3.33).
G(iω) and G(−iω) are complex numbers that can be expressed in Cartesian or polar coordinates, see Figure 3.15. If the complex number is written in
[Figure 3.15: the complex numbers G(iω) = a + ib and G(−iω) = a − ib in the complex plane.]
Cartesian coordinates, G(iω) = a + ib, then

|G(iω)| = √(a² + b²)
arg G(iω) = φ = arctan(b/a)

In polar coordinates
G(iω) = |G(iω)| e^{iφ}
G(−iω) = |G(−iω)| e^{−iφ} = |G(iω)| e^{−iφ}

where |G(iω)| is the absolute value and φ is the argument of G(iω). Equation (3.35) can now be written as
ya(t) = u0 |G(iω)| (e^{i(ωt+φ)} − e^{−i(ωt+φ)})/2i = u0 |G(iω)| sin(ωt + φ(ω))

We have thus shown that the gain or amplitude function is given by

A(ω) = |G(iω)|    (3.36)

and the argument or phase function is given by

φ(ω) = arg G(iω) = arctan(Im G(iω)/Re G(iω))    (3.37)

The derivation shows that the asymptotic response to a sinusoidal input is a sinusoidal output with the same frequency. The amplitude and phase changes are determined by the transfer function. The function G(iω) is the transfer function evaluated along the positive imaginary axis, i.e., for s = iω. This function is called the frequency function or the frequency response of the process.
The physical interpretation of the frequency function is how easy or difficult it is for signals of different frequencies to pass through the system. This
knowledge is important to be able to determine how signals and disturbances of different nature will influence the output of the system. The frequency curve of the process also determines how fast it is possible to control the system.
Applying sinusoidal signals to a process and measuring the asymptotic result is one way of experimentally determining the transfer function of a process. This is a method that is easy to use, for instance, on electrical circuits and systems. In chemical process control applications it is more difficult to use the frequency response method for identification. One reason is the difficulty of operating the process in open loop when sinusoidal inputs are fed into the system. Further, the time constants are usually quite long. This implies that it takes a long time before the transient has vanished. The procedure must then be repeated for many different frequencies.
Even if the transfer function is not determined by applying periodic signals to the process, the frequency function contains much information about the process. It is thus important to be able to qualitatively determine the response of the system by "looking" at the frequency function. Several design methods are based on manipulations of the frequency function. The frequency function is usually presented graphically using the Nyquist diagram or the Bode diagram.
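Numerically, (3.36) and (3.37) amount to one complex evaluation of the transfer function per frequency. A Python sketch for the first-order system of Example 3.17 (the values of a and b are arbitrary):

```python
import cmath
import math

def gain_and_phase(G, w):
    """Evaluate a transfer function at s = i*w; return (|G(iw)|, arg G(iw))."""
    g = G(1j * w)
    return abs(g), cmath.phase(g)

a, b = 1.0, 2.0
G = lambda s: b / (s + a)              # first-order system from Example 3.17

for w in (0.2, 1.0, 5.0):
    A, phi = gain_and_phase(G, w)
    # agrees with (3.32): A = b/sqrt(w^2 + a^2), and (3.33): phi = -arctan(w/a)
    assert math.isclose(A, b / math.sqrt(w**2 + a**2))
    assert math.isclose(phi, -math.atan(w / a))
```

The same two-line evaluation works for any rational transfer function, which is all that is needed to tabulate points for a Nyquist or Bode diagram.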
Nyquist diagram
The Nyquist diagram is a plot of the frequency function G(iω) in the complex plane. The amplitude and phase (or equivalently the real and imaginary parts) are plotted as functions of the frequency. Drawing the Nyquist diagram by hand is usually time consuming and is preferably done using a computer.
Example 3.18: Nyquist diagram of a first order system
The Nyquist diagram of the first order system in Example 3.17 is shown in Figure 3.16. In this example the frequency function is half a circle with diameter |b/a|.
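The half-circle shape is easy to verify numerically: for a, b > 0, every point of G(iω) = b/(a + iω) lies at distance b/(2a) from the center point b/(2a) on the real axis (a sketch, with arbitrarily chosen a and b):

```python
import math

a, b = 1.0, 2.0
G = lambda w: b / complex(a, w)        # G(i*w) for G(s) = b/(s + a)

# The Nyquist curve of b/(s+a) is a half circle of diameter b/a:
# constant distance b/(2a) from the center b/(2a) + 0j.
center = complex(b / (2 * a), 0.0)
radius = b / (2 * a)
for w in (0.0, 0.1, 1.0, 10.0, 1e6):
    assert math.isclose(abs(G(w) - center), radius)
    assert G(w).imag <= 0.0            # curve stays in the lower half plane for w >= 0
```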
Bode diagram
It is usually quite time consuming to draw the Nyquist curve of a system without using a computer. The construction of the frequency curve is, however, easier if the amplitude and the phase are drawn separately as functions of the frequency using logarithmic diagrams. This is called a Bode diagram. The amplitude curve gives log |G(iω)| as a function of log ω. The phase curve shows φ(ω) as a function of log ω. There are different standard units to measure the amplitude. We will use the unit decilog (dl), defined such that the amplitude in decilog is 10 log |G(iω)|.
Other units are decibel (dB) and neper (N). Their relations are

10 dl = 20 dB = 2.303 N

The frequency is measured in rad/s and the phase in radians or degrees. Further, we introduce the notations octave and decade to indicate a doubling and a tenfold change, respectively, in the frequency.
We will now give some guidelines on how to draw or sketch a Bode diagram. It is first assumed that the transfer function is of finite order. The transfer function can then be factorized as
G(s) = K' (s − z1)(s − z2) ··· (s − zm) / [s^k (s − p1)(s − p2) ··· (s − pl)]
     = K (1 + Tz1 s)(1 + Tz2 s) ··· (1 + Tzm s) / [s^k (1 + Tp1 s)(1 + Tp2 s) ··· (1 + Tpl s)]

Complex conjugated pairs of poles or zeros are kept together in factors of the form

s² + 2ζω0 s + ω0²    or    1 + 2ζsT + (sT)²

A finite order system can thus be written as a product of factors of the form

K,    s^n,    (1 + sT)^n,    (1 + 2ζsT + (sT)²)^n
where n is a positive or negative integer. The logarithm of the transfer function is a sum of logarithms of such factors, i.e.,

log |G1 G2 ··· Gn| = log |G1| + ··· + log |Gn|    (3.38)

Further, for the argument we have the relation

arg(G1 G2 ··· Gn) = arg G1 + ··· + arg Gn    (3.39)

The total Bode diagram is thus composed of the sum of the Bode diagrams of the different factors. Equations (3.38) and (3.39) also explain the choice of the scales of the axes of the Bode diagram.
Bode diagram of G(s) = K. If G(s) = K and K > 0 then log |G(iω)| = log K and arg G(iω) = 0. If K < 0 then arg G(iω) = −π. The Bode diagram thus consists of two straight horizontal lines. Figure 3.17 shows the Bode diagram of G(s) = 5. The magnitude is constant and the phase is 0 for all frequencies.
[Figure 3.17: Bode diagram (magnitude and phase) for the transfer function G(s) = 5.]
Bode diagram of G(s) = s^n. For this factor

log |G(iω)| = n log ω
arg G(iω) = nπ/2

The argument curve is thus a horizontal straight line. The amplitude curve is also a straight line, with the slope n in the logarithmic scales. Figure 3.18 shows the Bode diagram of G(s) = 1/s². The slope of the magnitude is −2 in the log-log scales. The phase is constant, −180°.
Bode diagram of G(s) = (1 + sT)^n. For this type of factor we have

log |G(iω)| = n log √(1 + ω²T²)
arg G(iω) = n arctan ωT

When |ωT| ≪ 1 it follows that log |G(iω)| ≈ 0 and arg G(iω) ≈ 0. For high frequencies, |ωT| ≫ 1, we get log |G(iω)| ≈ n log |ωT| and arg G(iω) ≈ (nπ/2)(T/|T|). The amplitude curve has two asymptotes:

log |G(iω)| = 0    (low frequency asymptote)
log |G(iω)| = n log |ωT|    (high frequency asymptote)

The low frequency asymptote is horizontal and the high frequency asymptote has the slope n. The asymptotes intersect at ω = 1/|T|, which is called the break frequency. The argument curve has two horizontal asymptotes:

arg G(iω) = 0    (low frequency asymptote)
arg G(iω) = (nπ/2)(T/|T|)    (high frequency asymptote)

The Bode diagram for n = 1 is shown in Figure 3.19. There is good agreement between the amplitude curve and the asymptotes except around the break frequency.
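The asymptotes are easy to check numerically. A sketch for n = 1 and an arbitrarily chosen T > 0:

```python
import math

T = 2.0
mag = lambda w: abs(complex(1.0, w * T))   # |G(iw)| for G(s) = 1 + s*T

# low-frequency asymptote: log|G| ~ 0 when w*T << 1
assert abs(math.log10(mag(1e-4 / T))) < 1e-6
# high-frequency asymptote: log|G| ~ log(w*T), slope +1 per decade, when w*T >> 1
w_hi = 1e4 / T
assert abs(math.log10(mag(w_hi)) - math.log10(w_hi * T)) < 1e-6
# at the break frequency w = 1/|T| the true curve deviates most: |G| = sqrt(2)
assert math.isclose(mag(1.0 / T), math.sqrt(2.0))
```

The last assertion quantifies the deviation at the break frequency: the asymptotes predict |G| = 1 there, while the true value is √2, about 1.5 dl.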
Figure 3.18 Bode diagram for the transfer function G(s) = 1/s².
[Figure 3.19: Bode diagram for the transfer function G(s) = 1 + sT.]
Figure 3.20 Bode diagram for the transfer function G(s) = 1/(1 + 2ζsT + s²T²) when ζ = 0, 0.1, 0.2, 0.5, 1, and 2. ζ = 0 gives the upper magnitude curve and the steepest phase curve.
Bode diagram of G(s) = (1 + 2ζsT + (sT)²)^n. Also for this factor the low frequency asymptote of the amplitude curve is horizontal, while the high frequency asymptote is a straight line with slope 2n. The asymptotes intersect at the break frequency ω = 1/T. The argument curve has two horizontal asymptotes:

arg G(iω) = 0    (low frequency asymptote)
arg G(iω) = nπ    (high frequency asymptote)

Figure 3.20 shows the Bode diagram for n = −1. The deviation of the amplitude curve from the asymptotes can be very large and depends on the damping ζ.
The Bode diagram of a rational finite-order transfer function can now be obtained by adding the amplitude and phase curves of the elementary factors.
Figure 3.21 Construction of the Bode diagram for the transfer function G(s) = 0.5(s + 0.1)²/[s²(s² + 0.4s + 0.25)(s + 2)] in Example 3.19.
Example 3.19: Construction of a Bode diagram
Consider the transfer function

G(s) = 0.5(s + 0.1)²/[s²(s² + 0.4s + 0.25)(s + 2)]
     = 0.01(1 + 10s)²/[s²(1 + 2·0.4·(2s) + (2s)²)(1 + 0.5s)]    (3.40)

The diagram is obtained from the elementary factors

G1(s) = 0.01 s⁻²
G2(s) = (1 + 10s)²
G3(s) = (1 + 2·0.4·(2s) + (2s)²)⁻¹
G4(s) = (1 + 0.5s)⁻¹
The break frequencies of the transfer function are ω = 0.1, 0.5, and 2. The asymptotes of the amplitude curve are straight lines between the break frequencies. For low frequencies G(s) ≈ 0.01 s⁻². The slope of the low frequency asymptote is thus −2. It is also necessary to determine one point on the asymptote. At ω = 1 we have

log |G(i)| = log 0.01 = −2 = −20 (dl)

Another way to determine a point on the asymptote is to determine where the asymptote crosses the line |G(iω)| = 1, i.e., where log |G(iω)| = 0. In this case we get ω = 0.1. The low frequency asymptote can now be drawn, see Figure 3.21. The lowest break frequency at ω = 0.1 corresponds to a double zero (the factor G2 above). The slope of the asymptote then increases to 0 after this break frequency, until the next break frequency at ω = 0.5. This corresponds to the factor G3, which has two complex poles. At this frequency the slope decreases to −2. The last break frequency at ω = 2 corresponds to the factor G4, which is a single pole. The slope then becomes −3. The construction of
Figure 3.22 The Bode diagram of the transfer function G(s) = 0.5(s + 0.1)²/[s²(s² + 0.4s + 0.25)(s + 2)] in Example 3.19.
the asymptotes is now finished and can be checked by observing that at high frequencies log |G(iω)| ≈ −3 log ω + log 0.5. The slope of the high frequency asymptote is thus −3 and it has the value |G(iω)| = 0.0005 at ω = 10.
It is easy to construct the asymptotes of the amplitude curve. The final shape of the amplitude curve is obtained by adding the contributions of the different elementary factors. It is only necessary to make adjustments around the break frequencies. To make the adjustments we can use the standard curves in Figures 3.19 and 3.20. The phase curve is obtained by adding the phase contributions of the different factors. This has to be done graphically and is more time consuming than drawing the asymptotes. The complete Bode diagram for the system (3.40) is shown in Figure 3.22. The frequency ωc for which |G(iωc)| = 1 is called the cross-over frequency. For comparison the Nyquist diagram is shown in Figure 3.23.
The example above shows that it is easy to sketch the Bode diagram, but it can be time consuming to draw it in detail. There are many computer programs that can be used to draw Bode diagrams. One example is Matlab. It is, however, important to have a feel for how both Nyquist and Bode diagrams are constructed.
Time delays are important modeling blocks. A time delay has the nonrational transfer function, see Appendix A,
G(s) = e^{−sτ}    (3.41)
Figure 3.23 The Nyquist diagram of the same system as in Figure 3.22.
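The asymptote values used in Example 3.19 can be cross-checked by evaluating |G(iω)| directly (a numerical sketch):

```python
import math

def G(s):
    # transfer function (3.40) of Example 3.19
    return 0.5 * (s + 0.1)**2 / (s**2 * (s**2 + 0.4 * s + 0.25) * (s + 2.0))

# low-frequency asymptote 0.01/w^2 (slope -2)
w = 1e-4
assert math.isclose(abs(G(1j * w)), 0.01 / w**2, rel_tol=1e-2)
# high-frequency asymptote 0.5/w^3 (slope -3)
w = 1e3
assert math.isclose(abs(G(1j * w)), 0.5 / w**3, rel_tol=1e-2)
# and |G(i*10)| is close to the asymptote value 0.0005 quoted in the text
assert math.isclose(abs(G(10j)), 5e-4, rel_tol=0.05)
```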
Figure 3.24 Bode diagram (phase only) for the time delay (3.41), when τ = 1.
The frequency curve is defined by

|G(iω)| = |e^{−iωτ}| = 1
arg G(iω) = arg e^{−iωτ} = −ωτ (rad)
The amplitude curve is thus a horizontal line and the phase is decreasing with increasing frequency. See Figure 3.24.
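A quick numerical check of the delay's frequency function (a sketch with τ = 1; the frequencies are kept below π/τ so that the principal value of the argument equals −ωτ):

```python
import cmath
import math

tau = 1.0
G = lambda w: cmath.exp(-1j * w * tau)   # frequency function of the delay (3.41)

for w in (0.1, 1.0, 3.0):
    assert math.isclose(abs(G(w)), 1.0)  # unit gain at every frequency
    assert math.isclose(cmath.phase(G(w)), -w * tau)
```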
The frequency responses of the time delay (3.41) and of G(s) = 1 have the same amplitude function, but different phase functions. This implies that the phase function is not uniquely determined by the amplitude function. H. W. Bode has shown that the following is true:

arg G(iω1) ≤ (2ω1/π) ∫₀^∞ [log |G(iω)| − log |G(iω1)|]/(ω² − ω1²) dω    (3.42)
Figure 3.25 Step response of the system (3.43) for different values of b.
Equation (3.42) is called Bode's relation. A system for which equality holds is called a minimum phase system. If the phase curve is below the upper limit given by (3.42) the system is called a nonminimum phase system.
Example 3.20: Nonminimum phase systems
The time delay (3.41) and

G(s) = (1 − s)/(1 + s)

are nonminimum phase systems.
Nonminimum phase systems are in general difficult to control. The reason is that such systems have extra phase shifts which will influence the stability properties of a closed loop system. All systems with right half plane zeros are nonminimum phase systems. Many nonminimum phase systems are inverse response systems, i.e. the step response starts in the "wrong direction". This will also make it difficult to control a nonminimum phase system.
Example 3.21: Inverse response system
Consider the system

G(s) = (1 − bs)/(1 + s)²    (3.43)

The step response is shown in Figure 3.25 for some values of b. The system is nonminimum phase and has an inverse response when b > 0.
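The inverse response in Example 3.21 is easy to reproduce numerically. The sketch below simulates a state-space realization of (3.43), x1' = x2, x2' = −x1 − 2x2 + u, y = x1 − b·x2, with forward Euler (an illustrative choice, not part of the example itself):

```python
# Unit step response of G(s) = (1 - b*s)/(1 + s)^2 via a state-space realization:
# x1' = x2, x2' = -x1 - 2*x2 + u, y = x1 - b*x2 (forward Euler sketch)
def step_response(b, t_end, dt=1e-4):
    x1 = x2 = 0.0
    for _ in range(int(t_end / dt)):
        x1, x2 = x1 + dt * x2, x2 + dt * (-x1 - 2.0 * x2 + 1.0)
    return x1 - b * x2

# with b > 0 the output first moves in the "wrong" (negative) direction ...
assert step_response(b=2.0, t_end=0.2) < 0.0
# ... even though the steady state gain G(0) = 1 is positive
assert abs(step_response(b=2.0, t_end=20.0) - 1.0) < 1e-2
# with b < 0 (left half plane zero) there is no inverse response
assert step_response(b=-1.0, t_end=0.2) > 0.0
```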
of the transfer function of the system. It is also useful to look at the Bode diagram of the system. We will here assume that the system is described by a transfer function that is given in any of the forms (3.27)–(3.30), the singularity diagram, or the Bode diagram.
The step response or impulse response are obtained by calculating the inverse Laplace transforms. From the solution of differential equations discussed in Sections 3.3 and 3.4 we know that the different poles pi give rise to time responses of the form exp(pi t). Poles with negative real parts give stable responses (decaying in magnitude), while poles with positive real parts lead to time signals that grow in magnitude without bounds. The steady state value of the step response can be obtained from the final value theorem (Theorem 3.2) or equivalently from G(0), provided the system is stable.
Transients
By the transient of a system we mean the time from the application of the input until the system has reached or almost reached a new steady state value. The transient is dominated by the poles that are closest to the origin, or equivalently by the longest time constants of the system. By determining the poles from the transfer function or from the singularity diagram it is possible to get a feel for the time scale of the system. This knowledge can also be obtained from the break frequencies in the Bode diagram of the system. If the system has several time constants Ti we can define an equivalent time constant of the system as
Teq = Σ_{i=1}^{n} Ti
The transient of the system is then 3 to 5 times the equivalent time constant, compare Example 3.10. Another way to measure the length of the transient is the solution time. This is usually defined as the time it takes before the response is within 5% of its steady state value.
Example 3.22: Solution time of a first order system
Consider the first order system in Example 3.12. The time to reach within 95% of the final value is the 95% solution time, and is given by
K(1 − e^{−Ts/T}) = 0.95 K

or

Ts = −T ln 0.05 ≈ 3T

The solution time to reach 99% of the steady-state value is analogously given by

Ts = −T ln 0.01 ≈ 5T
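The two solution times are one-line computations (a sketch; the value of T is arbitrary, and the book's 3T and 5T are rounded figures):

```python
import math

T = 4.0                                # time constant of the first-order system
ts95 = -T * math.log(0.05)             # 95% solution time
ts99 = -T * math.log(0.01)             # 99% solution time

assert math.isclose(ts95 / T, 3.0, rel_tol=0.01)    # ln 20  = 2.996, i.e. ~3T
assert math.isclose(ts99 / T, 4.61, rel_tol=0.01)   # ln 100 = 4.605, i.e. ~5T
```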
[Figure 3.26: step responses of the system (3.45) for Tz1 = 0, 1, 2.5, and 5.]
Influence of zeros
It is much more difficult to determine the influence of a zero than the influence of a pole. From (3.30) we see that the zeros are a measure of the weight of the different modes of the system. Let the system be described by

G(s) = (1 + Tz1 s) G0(s)    (3.44)

The influence of Tz1 will now be determined. The step response of (3.44) is given by

Y(s) = G0(s)/s + Tz1 G0(s)

The response is thus a weighted sum of the step and impulse responses of the system G0(s). (Remember that the impulse response of a system is the derivative of the step response.) The step response of (3.44) is dominated by the step response of G0 when Tz1 is small. For large values of Tz1 the shape of the step response of (3.44) is dominated by the impulse response of G0.
Example 3.23: Influence of a zero in the step response
Consider the system

G(s) = (1 + Tz1 s) · 2/[(s + 1)(s + 2)]    (3.45)

The step response of (3.45) for different Tz1 is shown in Figure 3.26.
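The decomposition into step plus impulse response can be checked in closed form for this example, since G0(s) = 2/[(s + 1)(s + 2)] has the step response y0(t) = 1 − 2e^(−t) + e^(−2t) (a sketch):

```python
import math

# For G(s) = (1 + Tz1*s)*G0(s) with G0(s) = 2/((s+1)(s+2)), the step response
# is y(t) = y0(t) + Tz1*h0(t): step response of G0 plus Tz1 times its impulse response.
def y0(t):                              # step response of G0
    return 1.0 - 2.0 * math.exp(-t) + math.exp(-2.0 * t)

def h0(t):                              # impulse response of G0 (derivative of y0)
    return 2.0 * math.exp(-t) - 2.0 * math.exp(-2.0 * t)

def y(t, Tz1):
    return y0(t) + Tz1 * h0(t)

# a large Tz1 lets the impulse-response term dominate the early response
assert y(0.5, Tz1=5.0) > y(0.5, Tz1=0.0)
# while the steady state G(0) = 1 is unchanged
assert abs(y(20.0, Tz1=5.0) - 1.0) < 1e-6
```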
[Figure 3.27: the location of the poles of the system (3.46) in the s-plane.]
Frequencies
The Bode diagram shows the amplitude and phase changes when a sinusoidal signal is fed into the system. If the Bode diagram has peaks then the system will amplify signals at these frequencies. Compare Figure 3.20. A step is a signal that contains many different frequencies. Some of these frequencies will be amplified more than others, and this can be seen in the step response of the system. The speed of the response also depends on the frequency response. The higher the frequencies the system lets through, the faster the response of the system will be. The speed of the response will thus depend on the cross-over frequency ωc of the system.
Example 3.24: Second order system
Let the system have the transfer function
G(s) = ω0²/(s² + 2ζω0 s + ω0²)    (3.46)
The location of the poles is shown in Figure 3.27. The step responses for different values of ζ are shown in Figure 3.28. The step response becomes less and less damped as the parameter ζ decreases. This parameter is called the damping of the system. The frequency of the response of the system when ζ = 0 is ω0. The parameter ω0 is called the natural frequency of the system. Using the Laplace transform table in Appendix A we find that the frequency in the step
Figure 3.29 Singularity diagrams for systems with the transfer functions (a) 1/(s + 1), (b) 1/(s(s + 1)), (c) 1/s², (d) 1/(s + 1)⁴, (e) ω0²/(s² + 2ζω0 s + ω0²) when ω0 = 1.5 and ζ = 0.2, (f) −1/(s + 1) + 1/(s + 1)⁴.
response is

ωd = ω0 √(1 − ζ²)

This frequency is called the damped frequency of the system. The peak in the Bode diagram of (3.46), see Figure 3.20, occurs close to ωd. The oscillation period of the response for ζ < 1 is Td = 2π/ωd. The solution time of (3.46) is

Ts ≈ 3/(ζω0),    0 < ζ < 0.9

Figures 3.29–3.32 show the singularity diagrams, the step responses, the impulse responses, and the Bode diagrams for six different systems. From the examples it is possible to see some of the connections between the different representations of a system.
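The damped frequency can be read off directly from a simulated step response: consecutive maxima are spaced one oscillation period Td = 2π/ωd apart. A sketch (forward Euler used only as a simple integrator; ζ and ω0 are the values of Figure 3.29(e)):

```python
import math

# Forward-Euler simulation of the step response of (3.46)
z, w0, dt = 0.2, 1.5, 1e-4
x = v = prev_v = 0.0
peaks = []                              # times of local maxima of the output
for n in range(int(30.0 / dt)):
    acc = w0 * w0 * (1.0 - x) - 2.0 * z * w0 * v
    x, v = x + dt * v, v + dt * acc
    if prev_v > 0.0 >= v:               # velocity sign change: a peak
        peaks.append((n + 1) * dt)
    prev_v = v

wd = w0 * math.sqrt(1.0 - z * z)        # damped frequency
Td = 2.0 * math.pi / wd                 # oscillation period
assert math.isclose(peaks[1] - peaks[0], Td, rel_tol=1e-2)
```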
Figure 3.30 Step responses as functions of time for the systems with the transfer functions in Figure 3.29.
Figure 3.31 Impulse responses as functions of time for the systems with the transfer functions in Figure 3.29.
3.10 Summary
In this chapter we have given many tools which can be used for analysis of process dynamics. Starting from the modeling in Chapter 2 we obtained state space (internal) or input-output models. Using (3.26) it is possible to
Figure 3.32 Bode diagrams for the systems with the transfer functions in Figure 3.29.
transform from the state space form to a higher order differential equation or a transfer function form. Using the models we can analyze the properties of the systems. For instance, we can look at the response by solving the differential equations. This is done using Theorem 3.5 or the Laplace transform method. In Section 3.3, Eq. (3.12), we found that the response of a system subject to a general input signal can be divided into two parts. One part depends on the properties of the system, poles and zeros, and one part depends on the type of input signal that is acting on the system. One key observation is that the eigenvalues of the matrix A in (3.1) are the same as the poles of the transfer function.
The observations above make it possible to get a qualitative feel for how the system will respond to a change in the input signal. This is done without finding the full solution of the differential equations. The steady state gain, poles, and zeros reveal the gross behavior and time scales of a system.
The frequency response G(iω) is another way of representing a process. G(iω) determines how a sinusoidal signal is amplified and delayed when passing through a system. There are also good design tools that work directly with the frequency curve, either in a Nyquist or a Bode diagram. To be able to construct good controllers for a system it is necessary to master the different kinds of representations and to have a knowledge about the relations between them.
Feedback Systems
GOAL: The idea of feedback is introduced. It is shown how feedback can be used to reduce the influence of parameter variations and process disturbances. Feedback can also be used to change transient properties.
We will now investigate the effects of feedback on a simple example. A model for a continuous stirred tank heater was derived in Example 2.2. The process is shown in Figure 4.1. Assuming constant heat capacity and mass density we get the model, see (2.14),

ρ V Cp dT(t)/dt = ρ v Cp (Tin(t) − T(t)) + Q(t)

where ρ is the density, V the volume of the tank, Cp the specific heat of the liquid, and v the volume flow rate in and out of the tank. Let the input be the heat Q into the system and the output the liquid temperature T in the tank. The disturbance is the input temperature Tin. It is convenient to regard
Chapter 4
[Figure 4.1: the continuous stirred tank heater, with inlet flow v at temperature Tin, outlet flow v at temperature T, and added heat Q.]
[Figure 4.2: block diagram of the tank heater (4.2), with first order dynamics 1/(1 + T1 s) from Q and Tin to T.]
all variables as deviations from stationary values. For instance, Tin = 0 then implies that Tin is at its normal value. The system can also be written as

T1 dT(t)/dt + T(t) = K Q(t) + Tin(t)    (4.1)

The process gain K and the time constant T1 of the process are given by

K = 1/(ρ v Cp)
T1 = V/v

Notice that the time constant T1 has dimension time. Taking the Laplace transform of the differential equation (4.1) gives

T(s) = K/(1 + T1 s) Q(s) + 1/(1 + T1 s) Tin(s)    (4.2)

The system can be represented by the block diagram in Figure 4.2. We will assume that the parameters and the time scale are such that K = 1 and T1 = 1. The purpose is to use the input Q(t) to keep the output T(t) close to the reference value Tr(t) despite the influence of the disturbance Tin. We will now investigate the dynamic behavior of the output temperature when the reference value Tr and the disturbance Tin are changed.
Assume that the input Q is zero, i.e. no control, and that the disturbance is a unit step. I.e., the inlet temperature is suddenly increased. Notice that the desired value in this case is Tr = 0. We don't want T to deviate from this value. The Laplace transform of the output is

T(s) = 1/(1 + T1 s) · 1/s

and the time response is, see Table A.2:15,

T(t) = 1 − e^{−t/T1}
Figure 4.3 The temperature of the system (4.2) and the control signal, when the disturbance is a unit step at t = 1, for different values of the gain in a proportional controller: Kc = 0, 1, 2, and 5. The desired reference is Tr = 0. Kc = 0 gives the open loop response.
The temperature in the tank will approach the inlet temperature as a first order system. The output is shown in Figure 4.3 (the curve for Kc = 0). The change in the input temperature will cause a change in the tank temperature. The new steady state value will be the same as the input temperature. Physically this is concluded since all the liquid will eventually be replaced by liquid of the higher temperature. This is also found mathematically from (4.1) by putting the derivative equal to zero and solving for T. The time to reach the new steady state value is determined by the time constant of the open loop system, T1. It takes 3 to 5 time constants for the transient to disappear.
Proportional control
A first attempt to control the system is to let the input heat Q be proportional to the error between the liquid temperature and the reference temperature, i.e. to use the proportional controller

Q(t) = Kc (Tr(t) − T(t))    (4.3)

If the temperature is too low then the heat is increased in proportion to the deviation from the desired temperature. Kc is the (proportional) gain of the controller. Figure 4.3 also shows the response and the control signal when using the controller (4.3) for different values of the controller gain. The steady state error decreases with increasing controller gain Kc. We also notice that the response becomes faster when the gain is increased. This is easily seen by introducing (4.3) into (4.1). This gives

T1 dT(t)/dt + T(t) = K Kc (Tr(t) − T(t)) + Tin(t)
or

T1/(1 + K Kc) · dT(t)/dt + T(t) = K Kc/(1 + K Kc) Tr(t) + 1/(1 + K Kc) Tin(t)    (4.4)

If we assume that Tr = 0 we see that in steady state

T = 1/(1 + K Kc) Tin

The error thus decreases with increasing controller gain. Equation (4.4) is also a first order differential equation, with the new time constant

T1c = T1/(1 + K Kc)

This implies that the speed increases with increasing controller gain. The proportional controller will, for this process, always give a steady state error, independent of how large the controller gain is.
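The steady state error predicted by (4.4) is easy to reproduce with a short simulation. A sketch (forward Euler as a simple illustrative integrator; unit step disturbance and the gains of Figure 4.3):

```python
import math

# Tank heater (4.1) with proportional control Q = Kc*(Tr - T)
K, T1, Tr, Tin = 1.0, 1.0, 0.0, 1.0     # unit step disturbance, reference zero

def steady_state(Kc, t_end=50.0, dt=1e-3):
    T = 0.0
    for _ in range(int(t_end / dt)):
        Q = Kc * (Tr - T)
        T += dt * (K * Q + Tin - T) / T1
    return T

for Kc in (0.0, 1.0, 2.0, 5.0):
    # steady state error Tin/(1 + K*Kc), as predicted by (4.4)
    assert math.isclose(steady_state(Kc), Tin / (1.0 + K * Kc), rel_tol=1e-3)
```

Note that Kc = 0 reproduces the open loop response, and that the error shrinks but never vanishes as Kc grows.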
To get rid of the steady state error due to the step disturbance we have to change the controller structure. One way is to introduce a proportional and integral controller (PI-controller). This controller can be written as

Q(t) = Kc [e(t) + (1/Ti) ∫₀ᵗ e(τ) dτ]    (4.5)

where the error e(t) = Tr(t) − T(t). The first term is proportional to the error and the second to the integral of the error. The second term will keep changing until the error is zero. This implies that if there exists an equilibrium or steady state, then the error must be zero. It may, however, happen that the system never settles. This is the case when the closed loop system becomes unstable. Figure 4.4 shows the response to disturbances when Kc = 1 and for different values of Ti. As soon as the integral term is used the steady state error becomes zero. The speed of the response can be changed by Kc and Ti.
From the basic equations (4.1) and (4.5) it is easy to show that the steady state error always will be zero if Ti is finite. Taking the derivative of the two equations and eliminating dQ/dt gives the closed loop system

T1 d²T/dt² + (1 + K Kc) dT/dt + (K Kc/Ti) T = K Kc dTr/dt + (K Kc/Ti) Tr + dTin/dt

Assuming steady state (i.e. all derivatives equal to zero) gives that T = Tr. Later in the chapter we will give tools to investigate if there exists a steady state solution for the system. Figure 4.5 shows the closed loop performance when the reference value is changed from 0 to 1 at t = 0 and when the disturbance Tin is a unit step at t = 5. The controller is a PI-controller with Kc = 1.8 and Ti = 0.45.
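Adding the integral term to the simulation above shows the stationary error disappearing. A sketch (forward Euler; Kc = 1 and Ti = 0.25 as in one of the curves of Figure 4.4):

```python
# Tank heater (4.1) with the PI-controller (4.5), forward Euler sketch
K, T1, Kc, Ti, Tr, Tin = 1.0, 1.0, 1.0, 0.25, 0.0, 1.0
dt, T, ie = 1e-3, 0.0, 0.0
for _ in range(int(100.0 / dt)):
    e = Tr - T
    ie += dt * e                        # running integral of the error
    Q = Kc * (e + ie / Ti)
    T += dt * (K * Q + Tin - T) / T1

assert abs(T - Tr) < 1e-3               # integral action removes the stationary error
```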
Figure 4.4 Response of the system (4.1) when the disturbance is a unit step at t = 1. The PI-controller (4.5) is used with Kc = 1 and Ti = ∞, 1, 0.25, and 0.04. The desired reference value is Tr = 0.
Figure 4.5 Reference and load disturbance responses for the temperature control system using a PI-controller with Kc = 1.8 and Ti = 0.45. The liquid temperature and the control signal are shown.
Sensitivity reduction
Feedback will decrease the sensitivity of the closed loop system to changes in
Figure 4.6 The sensitivity to a 20% change in the flow through the tank, when the system is controlled with a PI-controller with Kc = 1 and Ti = 0.25.
the dynamics of the process. Consider the process controlled by the proportional controller (4.3). The closed loop gain is given by
Kcl = K Kc/(1 + K Kc)
Assume that K Kc = 5; then a 10% change in K Kc gives Kcl ∈ [0.82, 0.85]. The sensitivity will be less when K Kc is increased. For instance, K Kc = 10 gives Kcl ∈ [0.90, 0.92] after a 10% change in K Kc. The process gain and the time constant are inversely proportional to the flow v. Figure 4.6 shows the closed loop performance after a step in the reference value when a PI-controller is used and when the flow through the tank v is changed 20% compared to the nominal value.
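The sensitivity reduction is a one-line computation (a sketch reproducing the numbers above):

```python
def closed_loop_gain(loop_gain):
    # Kcl = K*Kc/(1 + K*Kc) for the proportionally controlled tank
    return loop_gain / (1.0 + loop_gain)

# a +/-10% change in the loop gain K*Kc moves the closed-loop gain only a few percent
for KKc in (5.0, 10.0):
    lo = closed_loop_gain(0.9 * KKc)
    hi = closed_loop_gain(1.1 * KKc)
    spread = (hi - lo) / closed_loop_gain(KKc)
    assert spread < 0.04                # versus the 20% open-loop spread

assert round(closed_loop_gain(0.9 * 5.0), 2) == 0.82
assert round(closed_loop_gain(1.1 * 5.0), 2) == 0.85
```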
Unmodeled dynamics
We will now increase the complexity of the system model by assuming that the temperature is measured using a sensor with dynamics. It is assumed that the measured value Tm is related to the true temperature by the transfer function

Tm(s) = 1/(1 + Ts s) T(s)

The sensor has the time constant Ts, but the gain is one. Figure 4.7 shows the temperature and the control signal when the PI-controller (4.5) is used with Tm instead of T. The closed loop system will be unstable for some parameter combinations. Which parameter combinations give unstable systems is discussed later in this chapter.
Summary
In this section we have seen that feedback can be used to change the transient and steady state properties of a closed loop system. Also, the influence of the disturbances is reduced through feedback. To obtain a steady state error that is zero it is necessary to have an integrator in the controller. Finally, it was found that the closed loop system may become unstable. The results of the example can be generalized, but the gross features will be the same independent of the process that is controlled.
Figure 4.7 Temperature control using a PI-controller and sensor dynamics. The liquid temperature and the control signal are shown when Tin is a unit step, Ts = 0.1, Kc = 1 and Ti = ∞, 1, 0.25, and 0.04. Compare Figure 4.4.
Stationary errors
Figure 4.8 Simple feedback system with reference and disturbance signal generation.
r (t) and v (t) are impulses.
This relation can always be used as long as the closed loop system is stable. To gain more insight we will assume that

G0(s) = K (1 + q1 s + ... + qm-1 s^(m-1)) / (s^n (1 + p1 s + ... + pm s^m)) = K Q(s) / (s^n P(s))
The open loop system is parameterized to show the number of integrators (i.e., poles at the origin) and a gain K. The open loop system has n integrators. Notice that Q(0) = P(0) = 1. The parameter K can be interpreted as the gain when the integrators have been removed. Further we let
G1(s) = K1 Q1(s) / (s^m P1(s))
G2(s) = K2 Q2(s) / (s^(n-m) P2(s))
where Q1(0) = Q2(0) = P1(0) = P2(0) = 1 and n - m ≥ 0. We first investigate the influence of a reference value. Assume that yr is a step, i.e. Gr = a/s. The final value theorem gives

e(∞) = lim(s→0) s^n P(s) a / (s^n P(s) + KQ(s)) =
  a/(1 + K)   if n = 0
  0           if n ≥ 1

When yr is a ramp, i.e. Gr = b/s^2, we get

e(∞) = lim(s→0) (s^n P(s) / (s^n P(s) + KQ(s))) (b/s) =
  ∞           if n = 0
  b/K         if n = 1
  0           if n ≥ 2
It is possible to continue for more complex reference signals, but the pattern is obvious. The more complex the signal is, the more integrators are needed to get zero steady state error. The calculations are summarized in Figure 4.9. Remark: It must be remembered that the stability of the closed loop system must be tested before the final value theorem is applied.
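The n = 1 entry of the table (a single integrator tracking a ramp leaves the error b/K) can be checked numerically. The sketch below simulates the error equation de/dt = b - Ke, which follows for the loop G0(s) = K/s with the ramp reference yr(t) = bt; the values K = 2 and b = 1 are illustrative assumptions.

```python
def ramp_tracking_error(K=2.0, b=1.0, t_end=10.0, dt=1e-3):
    """Error of the closed loop with one integrator, G0(s) = K/s, tracking
    the ramp yr(t) = b*t. The error obeys de/dt = b - K*e and should
    settle at b/K (forward Euler, illustrative values)."""
    e = 0.0
    for _ in range(int(t_end / dt)):
        e += dt * (b - K * e)
    return e

print(ramp_tracking_error())   # settles near b/K = 0.5
```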
Figure 4.9 The output and reference values as functions of time for different numbers of integrators, n, when the reference signal is a step or a ramp.
Let us now make the same investigation for disturbances when the reference value is zero. Now

E(s) = -(G2(s) / (1 + G0(s))) Gv(s)
     = -(s^n P(s) / (s^n P(s) + KQ(s))) (K2 Q2(s) / (s^(n-m) P2(s))) Gv(s)
     = -(s^m K2 P1(s) Q2(s) / (s^n P(s) + KQ(s))) Gv(s)

Let the disturbance be a step, i.e. Gv = a/s. The final value theorem gives

e(∞) = -lim(s→0) s^m K2 P1(s) Q2(s) a / (s^n P(s) + KQ(s)) =
  -aK2/(1 + K)   if n = 0, m = 0
  -a/K1          if n ≥ 1, m = 0
  0              if n ≥ m, m ≥ 1

When v is a ramp, i.e. Gv = b/s^2, the corresponding limits follow in the same way.
Once again the number of integrators and the gain determine the value of the stationary errors.
Consider the simulation in Figure 4.3, where the stirred tank heater is controlled by a P-controller. In this case G1(s) = KKc and G2(s) = 1/(1 + T1 s). A unit step in Tin gives the stationary error -1/(1 + KKc), which is confirmed by Figure 4.3. It was shown above that it is important to have integrators in the system to eliminate stationary errors. If the process does not have any integrator we can introduce integrators in the controller. Consider the system in Figure 4.8. The transfer function from yr and v to e is given by (4.6). If yr is a step then e will be zero in steady state if there is an integrator in either G1 or G2. If v is a step it is necessary to have an integrator in G1 to obtain zero steady state error. It is not sufficient with an integrator in G2. To eliminate load disturbances it may be necessary to introduce integrators in the controller even if the process contains integrators.
Uncertainty reduction
Above it has been shown how feedback can be used to reduce the influence of disturbances entering the system. We will now show how feedback can be used to make the system less sensitive to variations in parameters of the process.

Example 4.2: Nonlinear valve
Control of flows is very common in chemical process control. Control valves have, however, often a nonlinear characteristic relating the flow to the opening of the valve. For simplicity, we assume that the valve is described by the static nonlinear relation

y = g(u) = u^2,  0 ≤ u ≤ 1

where u is the valve opening and y is the flow through the valve. Small changes in the opening u will give small changes in the flow y. The change is proportional to the derivative of the valve characteristic, i.e.
Δy = g'(u) Δu = 2u Δu
The gain thus changes drastically depending on the opening. When u = 0.1 the gain is 0.2 and when u = 1 it is 2. This nonlinearity can be reduced by measuring the flow and controlling the valve position, see Figure 4.10. Assume that there are no dynamics in the system and that the controller is a proportional controller with gain K. The system is then described by
e = yr - y
y = g(Ke)
Figure 4.11 Input-output relations for the open loop system and the closed loop system when K = 50. Notice that the input to the open loop system is u while it is yr for the closed loop system.
This gives the relation

yr = y + (1/K) g^(-1)(y) = y + (1/K) √y = f(y)

where g^(-1) is the inverse function. The gain of the closed loop system is given through Δyr = f'(y) Δy, or

Δy = (1/f'(y)) Δyr = (2K√y / (1 + 2K√y)) Δyr = (2Ku / (1 + 2Ku)) Δyr
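The static closed loop relation yr = y + √y/K can be inverted numerically. The sketch below is an illustration (not from the text); it uses K = 50, the value of Figure 4.11, and solves for y by bisection, since f(y) = y + √y/K is increasing in y.

```python
def closed_loop_flow(yr, K=50.0):
    """Solve the static loop e = yr - y, u = K*e, y = u**2 for y, i.e. the
    fixed point of yr = y + sqrt(y)/K, by bisection on [0, 1]."""
    lo, hi = 0.0, 1.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if mid + mid ** 0.5 / K < yr:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# the closed loop is nearly linear: y stays close to yr over the whole range
for yr in (0.1, 0.5, 0.9):
    print(yr, round(closed_loop_flow(yr), 3))
```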
If K is sufficiently high the gain of the closed loop system will be close to 1 and almost independent of u. Figure 4.11 shows the input-output relations for the open loop and the closed loop systems.

Consider the block diagram in Figure 4.12. The transfer function from yr to y is given by

Gc(s) = Gf(s) Go(s) / (1 + Go(s) Gy(s))    (4.7)

We will first investigate how variations in the transfer functions influence the response from the reference value to the output. One way to do this is to determine the sensitivity when there is a small change in the transfer function.
Let a transfer function G depend on a transfer function H. We define the relative sensitivity of G with respect to H as

S_H = (dG/dH)(H/G)

This can be interpreted as the relative change in G, dG/G, divided by the relative change in H, dH/H. For the transfer function Gc in (4.7) we have

S_Gf = 1
S_Go = 1 / (1 + Go Gy)
S_Gy = -Go Gy / (1 + Go Gy)

It is seen that a relative change in Gf will give the same relative change in Gc. As long as the loop gain Go Gy is large, S_Go will be small. This will, however, cause S_Gy to become close to one. From a sensitivity point of view it is crucial that the transfer functions Gf and Gy are accurate. These transfer functions are determined by the designer and can be implemented using accurate components. The sensitivity with respect to Go is often called the sensitivity function and is denoted by S. Further, T = -S_Gy is called the complementary sensitivity function. Notice that

S + T = 1

This implies that both sensitivities cannot be small at the same time.
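The identity S + T = 1 and the effect of a large loop gain are easy to check numerically at a single frequency. The loop transfer function used below is an arbitrary illustration, not one from the text.

```python
def sensitivities(L):
    """Sensitivity S and complementary sensitivity T for a loop gain L
    evaluated at one frequency: S = 1/(1+L), T = L/(1+L)."""
    return 1 / (1 + L), L / (1 + L)

w = 0.5                               # rad/s, illustrative frequency
L = 10 / (1j * w * (1j * w + 1))      # hypothetical loop gain Go*Gy = 10/(s(s+1))
S, T = sensitivities(L)
print(abs(S), abs(T), S + T)          # S + T = 1 always holds
```

With a large |L| the sensitivity |S| is small while |T| is close to one, exactly the trade-off described above.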
Figure 4.14 Step response of an integrator with time delay when using an on-off controller.
Example 4.3: On-off control of an integrator with time delay
Assume that the process is an integrator with a time delay. The process is described by

dy(t)/dt = K u(t - T)    (4.9)

Equation (4.9) has the solution

y(t) = y(t0) + K ∫ from t0-T to t-T of u(τ) dτ
The step response of the closed loop system is shown in Figure 4.14. The output of the process is composed of straight lines. Due to the delay the output will change direction T time units after the error has changed sign. This results in an oscillation in the output. The period of the oscillation is
Tp = T(2 + a/b + b/a)
and the peak-to-peak value is
A = (a + b)KT
where umax = a and umin = -b. The output has the mean value
ym = yr + (1/2)(a - b)KT
The mean value is equal to the reference value only if a = b. The amplitude of the oscillation is in many cases so small that it can be tolerated. The amplitude can be decreased by decreasing the span in the control signal. The limit cycle oscillation is characteristic for on-off systems. The origin of the oscillation is the inertia or the dynamics of the system. It takes some time before the output reacts to a change in the control signal. One way to improve the control is to predict the error, to be able to change the sign of the control signal before the error changes sign. The controller thus becomes

u = umax  if ep ≥ 0
    umin  if ep < 0    (4.10)

where ep is the prediction of e. A simple way to predict the error is to make an extrapolation along the tangent of the error function, see Figure 4.15. The prediction is given by

ep(t) = e(t) + Td de(t)/dt    (4.11)
The time Td is called prediction time or prediction horizon. The controller (4.10) using (4.11) has a derivative action, since the control signal depends on the derivative of the error signal. The prediction time Td must now be determined. Intuitively, the prediction horizon should be chosen as the time it takes for the control signal to influence the output of the process. For the process in Example 4.3 it is natural to choose Td = T. The prediction time can be determined experimentally in the following way. Disconnect the controller from the process and measure the output of the process. Manually the control signal is given its maximum value. The output then increases, and the control signal is switched to its minimum value when the output is equal to the reference value. The prediction time is chosen as

Td = k0 Δy / v    (4.12)

where Δy is the overshoot, see Figure 4.16, v is the slope when the output is equal to the reference value, and k0 is an empirically determined factor about equal to one. Since many processes have asymmetric responses one should repeat the experiment approaching the reference value from above, as is also shown in Figure 4.16.
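The limit cycle formulas of Example 4.3 can be verified by simulating the delayed integrator with an on-off controller. The parameter values below (K = 1, T = 0.5, a = 1, b = 0.5) are illustrative assumptions; the predicted peak-to-peak value is (a + b)KT = 0.75.

```python
def simulate_on_off(K=1.0, T=0.5, a=1.0, b=0.5, yr=1.0, dt=1e-3, t_end=10.0):
    """Simulate dy/dt = K*u(t - T) with the on-off law u = a if yr - y >= 0
    else -b (forward Euler with a delay buffer, illustrative values)."""
    delay = int(T / dt)
    u_hist = [0.0] * delay            # the process sees u = 0 before t = T
    y, ys = 0.0, []
    for k in range(int(t_end / dt)):
        u_hist.append(a if yr - y >= 0 else -b)
        y += dt * K * u_hist[k]       # control delayed by T drives the integrator
        ys.append(y)
    return ys

ys = simulate_on_off()
tail = ys[len(ys) // 2:]              # the limit cycle after the start-up transient
print(max(tail) - min(tail))          # close to (a + b)*K*T = 0.75
```

The measured period of the oscillation likewise comes out close to Tp = T(2 + a/b + b/a) = 2.25 for these values.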
The on-off controller is discontinuous for e = 0. This may cause problems when the error is small. The control signal will oscillate between its two values. This is called chattering when the oscillation is fast. The oscillation in the control signal can also be caused by high frequency measurement noise. In many situations the oscillations in the control signal will be harmless, since the oscillation in the output may be very small. If the actuator is mechanical the oscillation may cause wear and should thus be avoided.

Example 4.4: Chattering
Consider the process and controller in Example 4.3 and assume that the time delay is small. The step response of the closed loop system is shown in Figure 4.17. The control signal chatters with a high frequency. The frequency of the chattering is inversely proportional to the time delay. One way to avoid the chattering is to introduce a longer delay in the controller. This can be done by using a relay with hysteresis, see Figure 4.18. If the width of the hysteresis is larger than the peak-to-peak amplitude of the measurement noise then the noise will not give rise to oscillations. The frequency of the oscillation depends on the width of the hysteresis.

An extension of the on-off controller is to introduce more levels than two. In the extreme case we may have infinitely many levels between the maximum and minimum values of the control signal. We then get a controller that is linear for small values of the error. The controller has the form

u = umax                           if e > e0
    ub + (e/(2e0))(umax - umin)    if |e| ≤ e0
    umin                           if e < -e0

where ub = (umax + umin)/2. This is a proportional controller with saturation. The quantity 2e0 determines the interval where the control action is linear and
is called the proportional band. The value u0 should correspond to the control signal that gives the desired reference value. The controller with linear action is more complex than the pure on-off controller, since the actuator now must be able to give a continuous range of control signals to the process.
Summary
On-off control is a simple and robust form of control. The controller, however, often gives a closed loop system where the output and the control signal oscillate. These oscillations can often be tolerated in low performance control systems. The properties of the on-off controller can be improved by introducing derivative or predictive action.
4.4 Stability
When solving differential equations in Chapter 3 we found that the system was unstable if any of the poles has a positive real part. In this section we will discuss methods to determine stability without solving the characteristic equation. We will also give methods to determine the stability of a closed loop system by studying the open loop system. Stability can be defined in different ways and we will give some different concepts. Stability theory can be developed for nonlinear as well as linear systems. In this book we limit the stability definitions to linear time-invariant systems. In Section 3.2 we found that the solution of a differential equation has two parts: one depending on the initial values and one depending on the input signal. We will first determine how the initial values influence the system output. It is thus first assumed that u(t) = 0.

Definition 4.1: Asymptotic stability
A system is asymptotically stable if y(t) → 0 when t → ∞ for all initial values when u(t) = 0.

Definition 4.2: Stability
A system is stable if y(t) is bounded for all initial values when u(t) = 0.

Definition 4.3: Instability
A system is unstable if there is any initial value that gives an unbounded output even if u(t) = 0.

From (3.20) we find that the stability is determined by the eigenvalues of the A matrix, or the roots of the characteristic equation. The system is asymptotically stable if all the roots are strictly in the left half plane. If any root has a real part in the right half plane, then the system is unstable. Finally, if all roots have negative or zero real part then the system may be stable or unstable. If the imaginary roots are simple, i.e. have multiplicity one, then the system is stable. To summarize, we have the following stability criteria for linear time-invariant systems: A system is asymptotically stable if and only if all the roots pi of the characteristic equation satisfy Re pi < 0. If any pi satisfies Re pi > 0 then the system is unstable.
If all roots satisfy Re pi ≤ 0 and possible purely imaginary roots are simple, then the system is stable. For a system with input signal we can define input-output stability in the following way:

Definition 4.4: Input-output stability
A system is input-output stable if a bounded input gives a bounded output for all initial values.

Without proof we give the following theorem:
Theorem 4.1
For linear time-invariant systems asymptotic stability implies stability and input-output stability.
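As a small illustration of Definition 4.1 (not from the text), the sketch below computes the eigenvalues of a 2x2 A matrix from its characteristic polynomial and tests whether all real parts are strictly negative.

```python
import cmath

def eig2(a11, a12, a21, a22):
    """Eigenvalues of a 2x2 matrix from l**2 - trace*l + det = 0."""
    tr = a11 + a22
    det = a11 * a22 - a12 * a21
    d = cmath.sqrt(tr * tr - 4 * det)
    return (tr + d) / 2, (tr - d) / 2

def asymptotically_stable(a11, a12, a21, a22):
    # all eigenvalues strictly in the left half plane
    return all(l.real < 0 for l in eig2(a11, a12, a21, a22))

# dx/dt = A x with A = [[0, 1], [-2, -3]] has eigenvalues -1 and -2
print(eig2(0, 1, -2, -3), asymptotically_stable(0, 1, -2, -3))
```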
Figure 4.20 The output of the system in Figure 4.19 for different magnitudes of the step in the reference signal.
Asymptotic stability is thus the most restrictive concept. In the following we will mean asymptotic stability when discussing stability. The difference concerns only possible poles on the imaginary axis.
Above, stability of a system was defined. This is only possible if the system is linear and time-invariant. For nonlinear systems it is only possible to define stability for a solution of the system. This implies that a nonlinear system may be stable for one input signal, but unstable for another input signal.

Example 4.5: Stability of a nonlinear system
Consider the system in Figure 4.19. The system contains a nonlinearity, which is a saturation in a valve. Figure 4.20 shows the response of the system in Figure 4.19 for different magnitudes of the reference signal. The closed loop system is stable if the magnitude is small, but unstable for large magnitudes.
There are several ways to determine if a system is (asymptotically) stable:
Direct methods: numerical solution of the characteristic equation.
Indirect methods: Routh's algorithm and Nyquist's criterion.
The direct solution of the characteristic equation is easily done for first or second order systems. For higher order systems it is necessary to use a numerical method to determine the poles of the system. Using computers there are good routines for finding the eigenvalues of a matrix.
Stability criteria
Routh's algorithm
The English mathematician Routh published in 1875 an algorithm to test if the roots of a polynomial are in the left half plane or not. The algorithm gives a yes/no answer without solving the characteristic equation. Consider the equation

F(z) = a0 z^n + b0 z^(n-1) + a1 z^(n-2) + b1 z^(n-3) + ... = 0,  a0 > 0    (4.13)

It is assumed that the coefficients ai and bi are real. This implies that the roots of the equation are real or occur in complex conjugate pairs. The equation can then be factorized into factors of the form z + α or z^2 + βz + γ. A necessary condition for all roots of (4.13) to be in the left half plane is that α > 0, β > 0, and γ > 0 for all factors. This implies that all the coefficients in (4.13) must be positive. To obtain sufficient conditions we form the table
a0  a1  a2  ...
b0  b1  b2  ...
c0  c1  c2  ...
d0  d1  d2  ...
...                (4.14)

where

c0 = a1 - a0 b1/b0,  c1 = a2 - a0 b2/b0,  ...,  ci = a(i+1) - a0 b(i+1)/b0, ...
d0 = b1 - b0 c1/c0,  d1 = b2 - b0 c2/c0,  ...,  di = b(i+1) - b0 c(i+1)/c0, ...
Each row is derived from the two previous ones using the simple algorithm illustrated in Figure 4.21. Each new element in a row is obtained from the two previous rows. Make an "imaginary rectangle" whose right hand side is one step to the right of the element that has to be computed. Label the corners 'upper left', 'upper right', 'lower left', and 'lower right'. The computation that is done over and over again can be expressed as:

'new element' = 'upper right' - 'upper left' × 'lower right' / 'lower left'

If any of the 'upper right' or 'lower right' elements does not exist then a zero is introduced. The procedure is repeated until n + 1 rows are obtained, where n is the degree of the equation (4.13). The algorithm fails if any of the elements in the first column becomes zero. All the roots of (4.13) are then not in the left half plane. For n = 4 and n = 5 the table is formed in the following ways:

n = 4:          n = 5:
x x x           x x x
x x 0           x x x
x x             x x 0
x 0             x x
x               x 0
                x

In the tables x represents a number and 0 represents the zeros that have to be introduced to build up the table. We now have:

Theorem 4.2: Routh's stability test
The number of sign changes in a0, b0, c0, d0, ... (i.e. in the first column of (4.14)) is equal to the number of roots of F(z) in the right half plane.

Example 4.6: Routh's table
Let the closed loop system have the characteristic equation

z^5 + 2z^4 + 10z^3 + 30z^2 + 100z + 360 = 0

All the coefficients are positive, so the necessary condition for stability is satisfied. Form the table (4.14):

1     10    100
2     30    360
-5    -80
-2    360
-980  0
360
The first column has two sign changes (from 2 to -5 and from -980 to 360). The equation thus has two roots in the right half plane. The roots are located at 1.78 ± 2.89i, -2.94, and -1.31 ± 2.98i. Observe that we can multiply a row by any positive number without changing the next row or the number of sign changes. This is done in the table below, where the old rows are put within parentheses.

1      10    100
(2     30    360)
1      15    180
(-5    -80)
-1     -16
-1     180
(-196  0)
-1     0
180

The first column still has two sign changes (from 1 to -1 and from -1 to 180).

Routh's algorithm is used to determine if a closed loop system is stable. The algorithm can also be used to determine for which values of a parameter the system is stable. This can be useful, for instance, in connection with programs for symbolic algebraic manipulation. Routh's method does not give any insight into how the system should be changed to make the system stable.

Example 4.7: Routh's table with parameter
Assume that a negative unity feedback loop is closed around the open loop process

G0(s) = K / (s(s + 1)(s + 2))

The characteristic equation of the closed loop system is

s(s + 1)(s + 2) + K = s^3 + 3s^2 + 2s + K = 0    (4.15)

Routh's table becomes

1        2
3        K
2 - K/3
K

The system is thus stable if 2 - K/3 > 0 and if K > 0, i.e. for 0 < K < 6.
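Routh's table is easy to mechanize. The sketch below is a minimal, hypothetical implementation of the recursion in (4.14), counting sign changes in the first column; it reproduces the two right half plane roots of Example 4.6 and the stability boundary K = 6 of Example 4.7.

```python
def routh_rhp_roots(coeffs):
    """Count right half plane roots of a0*s^n + ... + an (a0 > 0) from the
    number of sign changes in the first column of Routh's table (4.14).
    Raises ZeroDivisionError when a first column element becomes zero,
    where the plain algorithm fails."""
    n = len(coeffs) - 1
    rows = [list(coeffs[0::2]), list(coeffs[1::2])]
    while len(rows) < n + 1:
        upper = rows[-2] + [0.0, 0.0]      # pad with the zeros of the text
        lower = rows[-1] + [0.0, 0.0]
        new = [upper[i + 1] - upper[0] * lower[i + 1] / lower[0]
               for i in range(len(rows[-1]))]
        while len(new) > 1 and new[-1] == 0:
            new.pop()                      # drop trailing padding zeros
        rows.append(new)
    col = [r[0] for r in rows]
    return sum(1 for x, y in zip(col, col[1:]) if x * y < 0)

print(routh_rhp_roots([1, 2, 10, 30, 100, 360]))   # 2, as in Example 4.6
print(routh_rhp_roots([1, 3, 2, 5]))               # 0: K = 5 gives a stable loop
print(routh_rhp_roots([1, 3, 2, 7]))               # 2: K = 7 > 6 is unstable
```

As the text notes, the plain algorithm fails when a first column element becomes exactly zero; this sketch simply raises an exception in that case.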
Consider the stirred tank heater with sensor dynamics and let the controller be a PI-controller. For which values of Ti is the closed loop system stable when K = Kc = T1 = 1 and Ts = 0.1? The closed loop system has the characteristic equation

s^3 + 11s^2 + 20s + 10/Ti = 0

and Routh's table becomes

1               20
11              10/Ti
20 - 10/(11Ti)
10/Ti

The system is stable if 0 < 1/Ti < 22. The smallest value of the integration time is Ti ≈ 0.05, compare Figure 4.7.
Nyquist's theorem
Consider the block diagram with the standard configuration in Figure 4.22. The frequency function of the open loop system is G0(iω). If we can find a frequency ωc such that

G0(iωc) = -1    (4.16)

then the system will oscillate with the frequency ωc. This can intuitively be understood in the following way. Cut the loop at the point A and inject a sinusoidal with the frequency ωc. The signal will then in stationarity come back to the point B with the same amplitude and the same phase. The negative feedback will compensate for the minus sign on the right hand side of (4.16). By connecting the points A and B we would get a sustained oscillation. If
the amplitude of the signal in B is less than the amplitude in A we would get a signal with decreasing amplitude when the points are connected. If the amplitude in B is larger then we would get an oscillation with increasing amplitude. This intuitive discussion is almost correct. It is, however, necessary to consider the full frequency function to determine the stability of the closed loop system. Harold Nyquist used the principle of the argument and the theory of analytic functions to show the result rigorously. We will only give a simplified version of the full Nyquist theorem:

Theorem 4.3: Simplified Nyquist theorem
Consider the simple feedback system in Figure 4.22. Assume that
(i) G0(s) has no poles in the right half plane,
(ii) G0 has at most one integrator,
(iii) the static gain of G0 (with the possible integrator removed) is positive.
Then the closed loop system is stable if the point -1 is to the left of the frequency curve (Nyquist curve) G0(iω) for increasing values of ω.

Figure 4.23 shows four cases where it is assumed that the conditions (i)-(iii) in the theorem are fulfilled. Cases a) and c) give stable closed loop systems, while b) and d) give unstable closed loop systems. Nyquist's theorem is very useful since it uses the frequency function. From the curve it is possible to determine for which frequencies the curve is close to -1. This gives a possibility to determine a suitable compensation of the open loop system such that the closed loop system will become stable. Frequency compensation methods are discussed in Chapter 6. It is also important to point out that the closed loop stability is obtained from the open loop frequency function G0(iω). Nyquist's theorem can also be used when the Bode diagram is given. We then have to check that the argument of the frequency function is above -180° when the absolute value is equal to one.
Practical stability
From a practical point of view it is not good to have a system that is close to instability. One way to increase the degree of stability is to require that the poles should be in a restricted region of the left half plane. For instance, we can require that the poles have a real part that is smaller than -α < 0, or that the poles are within a sector in the left half plane. Another way to measure the degree of stability is given by the amplitude margin Am and the phase margin φm. These margins are defined in the Nyquist or Bode diagrams, see Figure 4.24. Let the open loop system have the loop transfer function G0(s) and let ω0 be the smallest frequency such that arg G0(iω0) = -180° and such that arg G0(iω) is decreasing at ω = ω0. The amplitude margin or gain margin is then defined as

Am = 1 / |G0(iω0)|

Further, let the cross-over frequency ωc be the smallest positive frequency such that

|G0(iωc)| = 1

The phase margin is then defined as φm = 180° + arg G0(iωc).
Figure 4.23 Nyquist curves for four cases: a) stable, b) unstable, c) stable, d) unstable.
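For the loop transfer function of Example 4.7 the amplitude margin can be computed numerically. The sketch below is an illustration, not part of the text: it finds ω0 by bisection on the phase and evaluates Am = 1/|G0(iω0)|. The result Am = 6/K is consistent with the Routh result 0 < K < 6.

```python
import math

def gain_margin(K=2.0):
    """Amplitude margin Am = 1/|G0(i*w0)| for G0(s) = K/(s(s+1)(s+2)),
    where w0 solves arg G0(i*w0) = -180 degrees (found by bisection)."""
    def phase(w):
        # arg G0(iw); a positive K contributes no phase
        return -math.pi / 2 - math.atan(w) - math.atan(w / 2)
    lo, hi = 0.01, 100.0                 # phase is monotone decreasing in w
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if phase(mid) > -math.pi:
            lo = mid
        else:
            hi = mid
    w0 = 0.5 * (lo + hi)                 # analytically w0 = sqrt(2)
    mag = K / (w0 * math.hypot(w0, 1.0) * math.hypot(w0, 2.0))
    return 1.0 / mag

print(gain_margin(2.0))   # 6/K = 3: the gain can grow by a factor 3
```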
4.5 Summary
In this chapter we have shown how to analyze the properties of a closed loop system. It was found that speed and robustness towards parameter variations can be increased by increasing the loop gain of the system. In most cases, however, this leads to loss of stability of the closed loop system. Ways to test stability have been introduced. The Nyquist and Bode diagrams give good insight into the properties of the closed loop system. In the analysis of systems we try to derive methods which make it possible to determine properties of the closed loop system without solving the differential equations describing it. These methods thus give a better intuitive feeling for the behavior, and for ways to change the properties, than a straightforward
Figure 4.24 Definition of the amplitude margin Am and the phase margin φm in the Nyquist and Bode diagrams.
mathematical solution of the differential equations. The following chapters introduce design methods which make it possible to alter the behavior of the systems to make them satisfy our specifications.
PID Controllers
GOAL: To discuss different aspects of the PID controller. Tuning procedures and implementation aspects are treated.
5.1 Introduction
Many control problems can be solved using a PID-controller. This controller is named after its function, which can be described as

u(t) = Kc [ e(t) + (1/Ti) ∫ from 0 to t of e(τ) dτ + Td de(t)/dt ] = P + I + D    (5.1)

where u is the controller output and e is the error, i.e. the difference between the command signal yr (the set point) and the process output y (the measured variable). The control action is thus composed of three terms: one part (P) is proportional to the error, another (I) is proportional to the integral of the error, and a third (D) is proportional to the derivative of the error. Special cases are obtained by using only some of the terms, i.e. P, I, PI, or PD controllers. The PI controller is most common. It is also possible to have more complicated controllers, e.g. an additional derivative term, which gives a PIDD or a DPID controller. The name PID controller is often used as a generic name for all these controllers.

The PID controller is very common. It is used to solve many control problems. The controller can be implemented in many different ways. For control of large industrial processes it was very common to have control rooms filled with several hundred PID controllers. The algorithm can also be programmed into a computer system that can control many loops. This is the standard approach today to control of large industrial processes. Many special purpose control systems also use PID control as the basic algorithm. The PID controller was originally implemented using analog techniques. The technology has developed through many different stages: pneumatic, relay and motors, transistors, and integrated circuits. In this development much know-how was accumulated that was embedded into the analog designs. In this process several useful modifications to the "textbook" algorithm given by
(5.1) were made. Many of these modifications were not published, but kept as proprietary techniques by the manufacturers. Today virtually all PID controllers are implemented digitally. Early implementations of digital PID controllers were often a pure translation of the "textbook" algorithm, which left out many of the good extra features of the analog designs. The failures renewed the interest in PID control. It is essential for any user of control systems to master PID control, to understand how it works and to have the ability to use, implement, and tune PID controllers. This chapter provides this knowledge. In Section 5.2 we will discuss the basic algorithm and several modifications of the linear, small-signal behavior of the algorithm. These modifications give significant improvements of the performance. Modifications of the nonlinear, large-signal behavior of the basic algorithm are discussed in Section 5.3. This includes integrator windup and mode changes. Section 5.4 deals with controller tuning. This covers simple empirical rules for tuning as well as tuning based on mathematical models. PID-control of standardized loops is discussed in Section 5.5. In Section 5.6 we discuss implementation of PID controllers using analog and digital techniques, and Section 5.7 gives a summary.
Proportional Action
u = Kc(yr - y) + ub = Kc e + ub    (5.2)

The control signal is thus proportional to the error. Notice that there is also a reset or bias term ub. The purpose of this term is to provide an adjustment so that the desired steady state value can be obtained. Without the reset (integrating) term it is necessary to have an error to generate a control signal that is different from zero. The reset term can be adjusted manually to give the correct control signal and zero error at a desired operating point. Equation (5.2) holds for a limited range only, because the output of a controller is always limited. If we take this into account, the input-output relation of a proportional controller can be represented as in Figure 5.1. The range of input signals where the controller is linear is called the proportional band. Let pB denote the proportional band and umin and umax the limits of the control variable. The following relation then holds:

Kc = (umax - umin) / pB    (5.3)

Compare the discussion of the linear on-off controller in Section 4.3. The proportional band is given in terms of the units of the measured value, or in relative units. In practice it is often used instead of the controller gain. A proportional band of 50% thus implies that the controller has a gain of 2.
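Relation (5.3) in relative units can be captured in a one-line helper; a hypothetical sketch, with both the control range and the band expressed in percent of span:

```python
def gain_from_band(p_band_percent, u_max=100.0, u_min=0.0):
    """Controller gain from the proportional band, relation (5.3), with the
    control range and the band both expressed in percent of span."""
    return (u_max - u_min) / p_band_percent

print(gain_from_band(50.0))    # a 50% proportional band gives Kc = 2
print(gain_from_band(100.0))   # a 100% band gives Kc = 1
```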
Figure 5.2 Illustration of proportional control. The process has the transfer function Gp(s) = 1/(s + 1)^3.
The properties of a proportional controller are illustrated in Figure 5.2. This figure shows the step response of the closed loop system with proportional control. The figure shows clearly that there is a steady state error. The error decreases when the controller gain is increased, but the system then becomes oscillatory. It is easy to calculate the steady state error. The Laplace transform of the error e = yr - y is given by

E(s) = Yr(s) / (1 + Gc(s) Gp(s))
where Yr is the Laplace transform of the command signal. With Gp(0) = 1 and Gc(0) = Kc = 1 we find that the steady state error due to a unit step command signal is 50%, as is seen in the figure.
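The 50% error can be reproduced by a simple Euler simulation of the process Gp(s) = 1/(s + 1)^3 under proportional control with Kc = 1; the sketch below is illustrative only.

```python
def p_control_step(Kc=1.0, t_end=30.0, dt=1e-3):
    """Unit step response of Gp(s) = 1/(s+1)**3 under proportional control
    u = Kc*(yr - y), simulated with forward Euler (illustrative sketch)."""
    x1 = x2 = x3 = 0.0       # three cascaded first order states, y = x3
    yr = 1.0
    for _ in range(int(t_end / dt)):
        u = Kc * (yr - x3)
        x1 += dt * (u - x1)
        x2 += dt * (x1 - x2)
        x3 += dt * (x2 - x3)
    return x3

print(p_control_step())   # settles near Kc/(1 + Kc) = 0.5, a 50% error
```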
Integral Action
Early controllers for process control had proportional action only. The reset adjustment ub in (5.2) was used to ensure that the desired steady state value was obtained. Since it was tedious to adjust the reset manually, there was a strong incentive to automate the reset. One way to do this is illustrated in Figure 5.3. The idea is to low pass filter the controller output to find the bias and add this signal to the controller output. It is straightforward to analyze the system in Figure 5.3. We get

U(s) = Kc E(s) + (1 / (1 + sTi)) U(s)

Solving for U we get

U(s) = Kc (1 + 1/(sTi)) E(s)

Conversion to the time domain gives

u(t) = Kc [ e(t) + (1/Ti) ∫ from 0 to t of e(τ) dτ ] = P + I    (5.4)

which is the input-output relation for a PI controller. The parameter Ti, which has dimension time, is called integral time or reset time. The properties of a PI controller are illustrated in Figure 5.4. The figure illustrates that the idea of automatic reset, or PI control, works very well in this specific case. It is straightforward to show that a controller with integral action will always give a correct steady state. To do this, assume that there exists a steady state. The process input, the output, and the error are then constant. Let e0 denote the error and u0 the process input. It follows from the control law (5.4) that

u0 = Kc (e0 + t e0 / Ti)

This contradicts the assumption that u0 is constant unless e0 is zero. The argument will obviously hold for any controller with integral action. Notice, however, that a stationary solution may not necessarily exist. Another intuitive argument that also gives insight into the benefits of integral control is to observe that with integral action a small control error that has the same sign over a long time period may generate a large control signal. Sometimes a controller of the form

u(t) = Ki ∫ from 0 to t of e(τ) dτ = I    (5.5)

is used. This is called an I controller or a floating controller.
The name floating relates to the fact that with integral control there is not a direct correspondence between the error and the control signal.
Integral Action
Chapter 5
PID Controllers
Figure 5.4 Illustration of PI control. The process has the transfer function Gp(s) = (s + 1)^(−3). The controller has the gain Kc = 1.
Figure 5.5 The predictive nature of proportional and derivative control.
A controller with proportional action has a significant disadvantage because it does not anticipate what is happening in the future. This is illustrated in Figure 5.5, which shows two error curves, A and B. At time t a proportional controller will give the same control action for both error curves. A significant improvement can be obtained by introducing prediction. A simple way to predict is to extrapolate the error curve along its tangent. This means that control action is based on the error predicted Td time units ahead, i.e.

ep(t + Td) = e(t) + Td de(t)/dt    (5.6)
Derivative Action
Figure 5.6 Illustration of the damping properties of derivative action. The process has the transfer function Gp(s) = (s + 1)^(−3). The gain is Kc = 5 and Td is varied.
Basing the proportional control action on the predicted error gives the control law

u(t) = Kc ep(t + Td) = Kc ( e(t) + Td de(t)/dt )    (5.7)

which is a PD controller. With such a controller the control actions for the curves A and B in Figure 5.5 will be quite different. Parameter Td, which has dimension time, is called derivative time. It can be interpreted as a prediction horizon. The fact that control is based on the predicted output implies that it is possible to improve the damping of an oscillatory system. The properties of a controller with derivative action are illustrated in Figure 5.6. This figure shows that the oscillations are more damped when derivative action is used. Notice in Figure 5.6 that the output approaches an exponential curve for large values of Td. This can easily be understood from the following intuitive discussion. If the derivative time is longer than the other time constants of the system the feedback loop can be interpreted as a feedback system that tries to make the predicted error ep small. This implies that
ep = e + Td de/dt = 0
This differential equation has the solution e(t) = e(0) e^(−t/Td). For large Td the error thus goes to zero exponentially with time constant Td. A drawback with derivative action is that parameter Td has to be chosen carefully. Industrial PID controllers often have potentiometers to set the parameters Kc, Ti, and Td. Because of the difficulty in adjusting derivative time Td, the potentiometer for Td is made so that derivative action can be switched
off. In practical industrial installations we often find that derivative action is switched off. Use of derivative action in the controller has demonstrated that prediction is useful. Prediction by linear extrapolation has, however, some obvious limitations. If a mathematical model of a system is available it is possible to predict more accurately. Much of the thrust of control theory has been to use mathematical models for this purpose. This has led to controllers with observers and Kalman filters. This is discussed in Chapter 7. A pure derivative can not and should not be implemented, because it will give a very large amplification of measurement noise. The gain of the derivative must thus be limited. This can be done by approximating the transfer function sTd as follows

sTd ≈ sTd / (1 + sTd/N)    (5.8)

The transfer function on the right approximates the derivative well at low frequencies but the gain is limited to N at high frequencies. The parameter N is therefore called maximum derivative gain. Typical values of N are in the range 10 to 20. The approximation given by Eq. (5.8) gives a phase advance of 90° for low frequencies. For ω = N/Td the phase advance has dropped to 45°. In the basic algorithm given by (5.1) the control action is based on error feedback. This means that the control signal is obtained by filtering the control error. Since the error is the difference between the set point and the measured variable, the set point and the measured variable are treated in the same way. There are several advantages in providing separate treatment of these signals. It was observed empirically that it is often advantageous to not let the derivative act on the command signal, or to let it act on a fraction of the command signal only. The reason for this is that a step change in the command signal will drive the control signal to its limits. This may result in large overshoots in the step response.
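The limiting behavior of the filtered derivative in Eq. (5.8) is easy to check numerically by evaluating its magnitude on the imaginary axis. A small sketch; the values of Td and N are chosen arbitrarily for illustration.

```python
# Gain of the filtered derivative sTd/(1 + sTd/N) of Eq. (5.8),
# evaluated at s = j*omega: ~omega*Td at low frequency, ~N at high.
Td, N = 2.0, 10.0

def deriv_gain(omega):
    s = 1j * omega
    return abs(s * Td / (1 + s * Td / N))

low = deriv_gain(0.01)     # close to omega*Td = 0.02
high = deriv_gain(1000.0)  # close to the gain limit N = 10
```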
To avoid this the derivative term can be modified to

D(s) = sTd / (1 + sTd/N) (γ Yr(s) − Y(s))    (5.9)

If the parameter γ is zero, which is the most common case, the derivative action does not operate on the set point. It has also been found suitable to let only a fraction β of the command signal act on the proportional part. The PID algorithm obtained then becomes

U(s) = Kc [ β Yr(s) − Y(s) + 1/(sTi) (Yr(s) − Y(s)) + sTd / (1 + sTd/N) (γ Yr(s) − Y(s)) ]    (5.10)

where U, Yr, and Y denote the Laplace transforms of u, yr, and y. The idea to provide different signal paths for the process output and the command signal
Figure 5.7 Response of the system to set point changes and load disturbances for a controller with different values of parameter β (here β = 1 and β = 0.5).
is a good way to separate the command signal response from the response to load disturbances. Alternatively it may be viewed as a way to position the closed loop zeros. The advantages of a simple way to separately adjust responses to load disturbances and set points are illustrated in Figure 5.7. In this case parameters Kc, Ti, and Td are chosen to give a good response to load disturbances. With β = 1 the response to set point changes has a large overshoot, which can be adjusted by changing parameter β. There are also several other variations of the PID algorithm that are used in commercial systems. An extra first order lag may be used in series with the controller to obtain a high frequency roll-off. In some applications it has also been useful to include nonlinearities. The proportional term Kc e can thus be replaced by Kc e|e| or by Kc e³. Analogous modifications of the derivative term have also been used. The PID algorithms described so far are called position algorithms because the algorithm gives the controller output. In some cases it is natural to perform the integration outside the algorithm. A typical case is when the controller drives a motor. In such a case the controller should give the velocity of the control signal as an output. Algorithms of this type are called velocity algorithms. The basic form of a velocity algorithm is

sU(s) = Kc [ sE(s) + 1/Ti E(s) − s²Td / (1 + sTd/N) Y(s) ]    (5.11)

Notice that the velocity algorithm in this form can not be used for a controller that has no integral action because it will not be able to keep a stationary value.
Velocity Algorithms
The algorithm given by (5.1), or the modified version (5.10), is called a noninteracting PID controller or a parallel form. There are other parameterizations of the controllers. This is not a major issue in principle, but it can cause significant confusion if we are not aware of it. Therefore we will also give some of the other parameterizations. To avoid unnecessary notation we show the alternative representations in the basic form only. An alternative to (5.1) that is common in commercial controllers is

G'(s) = K'c (1 + sT'i)(1 + sT'd) / ( sT'i (1 + sT'd/N') )    (5.12)

This is called an interacting form or a series form. Notice that this is the form naturally obtained when integral action is implemented as automatic reset. See Figure 5.3. Simple calculations show that the parameters of (5.12) and (5.1) are related by

Kc = K'c (T'i + T'd) / T'i
Ti = T'i + T'd
Td = T'i T'd / (T'i + T'd)    (5.13)

Since the numerator zeros of (5.12) are real, the parameters of the series form can be computed from those of (5.1) only if

Ti ≥ 4Td    (5.14)

Then

K'c = (Kc/2) (1 + √(1 − 4Td/Ti))
T'i = (Ti/2) (1 + √(1 − 4Td/Ti))
T'd = (Ti/2) (1 − √(1 − 4Td/Ti))    (5.15)

Yet another parameterization is

G(s) = K + 1/(sT''i) + sT''d

These forms typically appeared when programmers and other persons with no knowledge about control started to develop controllers. These parameterizations have caused a lot of confusion, as well as lost production, when they have been confused with (5.1) without recognizing that the numerical values of the parameters are different.
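The conversions (5.13) and (5.15) are easy to mechanize. A sketch, assuming the parallel parameters satisfy Ti ≥ 4Td as required by (5.14); the primed (series) parameters are written Kp, Tip, Tdp here.

```python
# Conversion between the series form (5.12) and the parallel form (5.1),
# per Eqs. (5.13) and (5.15).
from math import sqrt

def series_to_parallel(Kp, Tip, Tdp):
    Kc = Kp * (Tip + Tdp) / Tip
    Ti = Tip + Tdp
    Td = Tip * Tdp / (Tip + Tdp)
    return Kc, Ti, Td

def parallel_to_series(Kc, Ti, Td):
    # a series representation exists only if Ti >= 4*Td, Eq. (5.14)
    r = sqrt(1.0 - 4.0 * Td / Ti)
    Kp = 0.5 * Kc * (1 + r)
    Tip = 0.5 * Ti * (1 + r)
    Tdp = 0.5 * Ti * (1 - r)
    return Kp, Tip, Tdp

# round trip: parallel -> series -> parallel recovers the parameters
back = series_to_parallel(*parallel_to_series(2.0, 8.0, 1.0))
```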
Summary
The different control actions P, I, and D have been discussed. An intuitive account of their properties has been given. Proportional control provides the basic feedback action, integral action makes sure that the desired steady state is obtained, and derivative action provides prediction, which can be used to stabilize an unstable system or to improve the damping of an oscillatory system. The advantages of modifying the idealized algorithm given by Equation (5.1) to

U(s) = Kc [ β Yr(s) − Y(s) + 1/(sTi) (Yr(s) − Y(s)) + sTd / (1 + sTd/N) (γ Yr(s) − Y(s)) ]    (5.16)
have been discussed. This formula includes limitation of the derivative gain and facilities for separate control of the response to load disturbances and set point changes. The controller in (5.16) has six parameters: the primary PID parameters Kc, Ti, Td, the maximum derivative gain N, and the parameters β and γ, which are used for set point weighting.
Integrator Windup
The combination of a saturating actuator and a controller with integral action gives rise to a phenomenon called integrator windup. If the control error is so large that the actuator saturates, the feedback path will be broken because the actuator will remain saturated even if the process output changes. The integrator, being an unstable system, may then integrate up to a very large value. When the error changes sign the integral may be so large that it takes considerable time until the integral assumes a normal value again. The phenomenon is also called reset windup. It is illustrated in Figure 5.8, which shows a simulation of a process with a PI controller. The process dynamics can be described as an integrator and the process input is limited to the range −0.25 ≤ u ≤ 0.25. The controller parameters are Kc = 1 and Ti = 1. When a command signal in the form of a unit step is applied the computed control signal is so large that the process actuator saturates immediately at its high limit. Since the process dynamics is an integrator the process output increases
Figure 5.8 Illustration of integrator windup. The upper plot shows the output y and the set point yref, the lower plot the control variable u.
linearly with rate 0.1 and the error also decreases linearly. The control signal will, however, remain saturated even when the error becomes zero because the control signal is given by

u(t) = Kc e(t) + I

The integral has obtained a large value during the transient. The control signal does not leave the saturation until the error has been negative for a sufficiently long time to reduce the value of the integral. The net effect is a large overshoot. When the control signal finally leaves the saturation it changes rapidly and saturates again at the lower actuator limit. In a good PID controller it is necessary to avoid integrator windup. There are several ways to avoid integrator windup. One possibility is to stop updating the integral when the actuator saturates. This is called conditional integration. Another method is illustrated in the block diagram in Figure 5.9. In this method an extra feedback path is provided by measuring the actuator output and forming an error signal es as the difference between the actuator output and the controller output v. This error is fed back to the integrator through the gain 1/Tt. The error signal es is zero when the actuator does not saturate. When the actuator saturates the extra feedback path tries to make the error signal es equal to zero. This means that the integrator is reset so that the controller output tracks the saturation limits. The method is therefore called tracking. The integrator is reset at a rate corresponding to the time constant Tt, which is called the tracking time constant. The advantage of this scheme for anti-windup is that it can be applied to any actuator as long as the actuator output is measured. If the actuator output is not measured the actuator can be modeled and an
Anti-windup
Figure 5.9 Controller with anti-windup. A system where the actuator output is measured is shown in a), and a system where the actuator output is estimated from a mathematical model is shown in b).
equivalent signal can be generated from a mathematical model, as shown in Figure 5.9b). It is thus also useful for actuators having a dead-zone or hysteresis. Figure 5.10 shows the improved behavior obtained with a controller having anti-windup based on tracking. The system simulated is the same as in Figure 5.8. Notice the drastic improvement in performance. Practically all PID controllers can run in at least two modes: manual and automatic. In the manual mode the controller output is manipulated directly by the operator, typically by push buttons that increase or decrease the controller output. When there are changes of modes and parameters it is important to avoid switching transients. Since a controller is a dynamic system, it is necessary to make sure that the state of the system is correct when switching between manual and automatic mode. When the system is in manual mode, the controller produces a control signal that may be different from the manually generated control signal. It is necessary to make sure that the value of the integrator is correct at the time of switching. This is called bumpless transfer. Bumpless switching is easy to obtain for a controller in incremental form. This is shown in Figure 5.11. The integrator is provided with a switch so that the signals are either chosen from
Figure 5.10 Illustration of controller with anti-windup using tracking. Compare with
Figure 5.8.
Figure 5.11 Controllers with bumpless transfer from manual (M) to automatic (A) mode. The controller in a) is incremental. The controllers in b) and c) are special forms of position algorithms. The controller in c) has anti-windup. MCU is a Manual Control Unit.
the manual or the automatic increments. Since the switching only influences the increments there will not be any large transients. A related scheme for a position algorithm where integral action is implemented as automatic reset is shown in Figure 5.11b). Notice that in this case the tracking time constant is equal to T'i. More elaborate schemes have to be used for general PID algorithms in position form. Such a controller is built up of a manual control module and one PID module, each having an integrator. See Figure 5.12.

Figure 5.12 PID controller with bumpless switching between manual (M) and automatic (A) control.
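The windup experiment of Figures 5.8 and 5.10 can be roughly re-created in a few lines. The sketch below uses Euler integration of an integrator process with actuator limits ±0.25 and a PI controller with Kc = 1, Ti = 1, with and without tracking anti-windup; the simulation details are assumptions, not the book's exact setup.

```python
# Rough re-creation of the windup experiment around Figures 5.8 and 5.10:
# an integrator process dy/dt = u, actuator limits +/-0.25, PI controller
# with Kc = 1, Ti = 1. Tt = None disables the tracking anti-windup.
def simulate(Tt=None, h=0.01, t_end=40.0, yr=1.0):
    Kc, Ti = 1.0, 1.0
    y = integral = 0.0
    overshoot = 0.0
    for _ in range(int(t_end / h)):
        e = yr - y
        v = Kc * e + integral             # controller output before saturation
        u = max(-0.25, min(0.25, v))      # saturating actuator
        di = Kc * e / Ti                  # ordinary integral update
        if Tt is not None:
            di += (u - v) / Tt            # tracking feedback resets the integral
        integral += di * h
        y += u * h                        # integrator process
        overshoot = max(overshoot, y - yr)
    return overshoot

without_aw = simulate(Tt=None)   # large overshoot due to windup
with_aw = simulate(Tt=1.0)       # much smaller overshoot
```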
It is also necessary to make sure that there are no unnecessary transients when parameters are changed. A controller is a dynamical system. A change of the parameters of a dynamical system will naturally result in changes of its output even if the input is kept constant. Abrupt changes in the output can be avoided by a simultaneous change of the state of the system. The changes in the output will also depend on the chosen realization. With a PID controller it is natural to require that there be no drastic changes in the output if the parameters are changed when the error is zero. This holds for all incremental algorithms, because the output of an incremental algorithm is zero when the input is zero irrespective of the parameter values. It also holds for a position algorithm with the structures shown in Figures 5.11b) and 5.11c). For a general position algorithm it depends, however, on the implementation. Assume, for example, that the state chosen to implement the integral action is

xI = ∫₀ᵗ e(τ) dτ

The integral term is then

I = (Kc/Ti) xI

A change of Kc or Ti then results in a change of I and thus an abrupt change of the controller output. To avoid bumps when the parameters are changed
Figure 5.13 Determination of parameters R (the slope of the steepest tangent) and L from the unit step response of a process.
the state can instead be chosen as

xI = ∫₀ᵗ (Kc/Ti) e(τ) dτ

The integral term then becomes I = xI. With this realization a sudden change of the parameters will not give a sudden change of the controller output.
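The difference between the two realizations of the integral state can be illustrated numerically. The error history below is hypothetical; the point is only that realization A jumps when Kc is changed while realization B does not.

```python
# Two realizations of the integral state. Realization A stores the raw
# integral of e and forms I = (Kc/Ti)*x; realization B integrates
# (Kc/Ti)*e directly. Hypothetical error samples, Euler integration.
h = 0.1
errors = [1.0, 0.8, 0.5, 0.2, 0.0]
Kc, Ti = 1.0, 2.0

xA = sum(e * h for e in errors)            # A: integral of e
xB = sum(Kc / Ti * e * h for e in errors)  # B: integral of (Kc/Ti)*e

I_A_before, I_B_before = Kc / Ti * xA, xB

Kc = 2.0                                   # operator doubles the gain
I_A_after, I_B_after = Kc / Ti * xA, xB    # A jumps, B is unchanged
```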
5.4 Tuning
To make a system with a PID controller work well it is not sufficient to have a good control algorithm. It is also necessary to determine suitable values of the controller parameters. The parameters are Kc, Ti, Td, N, Tt, β, γ, umin and umax. The primary parameters are the controller gain Kc, the integral time Ti, and the derivative time Td. The maximum derivative gain N can often be given a fixed default value, e.g. N = 10. The tracking time constant Tt can be chosen in the range Td ≤ Tt ≤ Ti. In some implementations it has to be equal to Ti; in other cases it can be chosen freely. Parameters umin and umax should be chosen inside, but close to, the saturation limits. Parameters β and γ should be chosen between 0 and 1. Parameter γ can often be set to zero. There are two different approaches to determine the controller parameters. One method is to install a PID controller and to adjust the parameters until the desired performance is obtained. This is called empirical tuning. The other method is to first find a mathematical model of the process and then apply some control design method to determine the controller parameters. Many good tuning procedures combine the approaches. Much effort has been devoted to finding suitable techniques for tuning PID controllers. Significant empirical experience has also been gathered over many years. Two empirical methods developed by Ziegler and Nichols in 1942, the step response method and the ultimate-period method, have found widespread use. These methods will be described in the following. A technique based on
In this method the unit step response of the process is determined experimentally. For a process where a controller is installed this can be done as follows: connect a recorder to the measured variable, set the controller to manual, change the controller output manually, and calculate scale factors so that the unit step response is obtained. The procedure gives a curve like the one shown in Figure 5.13. Draw the tangent to the step response with the steepest slope. The intersections of the tangent with the axes are determined graphically, see Figure 5.13. The controller parameters are then given by Table 5.1. To carry out the graphical construction it is necessary that the step response is such that it is possible to find a tangent with the steepest slope. The Ziegler-Nichols method was designed to give a good response to load disturbances. The design criterion was to achieve quarter-amplitude damping (QAM), which means that the amplitude of an oscillation should be reduced to one quarter after one period of the oscillation. This corresponds to a relative damping of ζ = 0.22 for a second order system. This damping is quite low. In many cases it would be desirable to have better damping; this can be achieved by modifying the parameters in Table 5.1. A good method should not only give a result, it should also indicate its range of validity. In the step response method the transfer function of the process is approximated by
G(s) = Kp e^(−sL) / (1 + sT)    (5.17)
It has already been shown how parameter L can be determined. This parameter is called the apparent dead-time because it is an approximation of the dead-time in the process dynamics. Parameter T, which is called the apparent time constant, can be determined by approximating the step response as indicated in Figure 5.14. The ratio τ = L/T is useful in order to characterize a control problem. Roughly speaking, a process is easy to control if τ is small and difficult to control if τ is large. Ziegler-Nichols tuning is applicable if

0.15 < L/T < 0.6    (5.18)

Table 5.1 PID parameters obtained from the Ziegler-Nichols step response method. R and L are obtained from Figure 5.13 and a = RL.

Controller    Kc       Ti       Td
P             1/a
PI            0.9/a    L/0.3
PID           1.2/a    2L       0.5L
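Table 5.1 translates directly into code. The sketch below uses the standard Ziegler-Nichols step response values; the slope R chosen in the example call is arbitrary, while L = 1.42 matches the worked example later in the text.

```python
# Ziegler-Nichols step response tuning, Table 5.1, with a = R*L.
def zn_step(R, L, kind="PID"):
    a = R * L
    if kind == "P":
        return {"Kc": 1.0 / a}
    if kind == "PI":
        return {"Kc": 0.9 / a, "Ti": L / 0.3}
    return {"Kc": 1.2 / a, "Ti": 2.0 * L, "Td": 0.5 * L}

params = zn_step(R=0.5, L=1.42, kind="PID")
# Ti = 2.84 and Td = 0.71, consistent with the worked PID example
```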
Figure 5.14 Determination of apparent dead-time L and apparent time constant T from the unit step response of a process.
This method is based on the determination of the point where the Nyquist curve of the open loop system intersects the negative real axis. This is done by connecting the controller to the process and setting its parameters so that pure proportional control is obtained. The gain of the controller is then increased until the closed loop system reaches the stability limit. The gain Ku when this occurs and the period Tu of the resulting oscillation are determined. These parameters are called the ultimate gain and the ultimate period. The controller parameters are then given by Table 5.2. The ultimate sensitivity method is also based on quarter amplitude damping. The numbers in the table can be modified to give a better damped system. Let Kp be the static gain of a stable process. The dimension-free number KpKu can be used to determine if the method can be used. It has been established empirically that the Ziegler-Nichols ultimate sensitivity method can be used if

2 < KpKu < 20    (5.19)

The parameters Ku and Tu are also easy to determine from the Bode diagram of the process. The physical interpretation of Ku is how much the gain can be increased before the system becomes unstable. This implies that the ultimate gain is the same as the amplitude margin of the system. The frequency of the oscillation is the frequency ω0 where the phase is equal to −180°, compare Section 4.4. This gives

Ku = Am,  Tu = 2π/ω0
Table 5.2 PID parameters obtained from the Ziegler-Nichols ultimate sensitivity method.

Controller    Kc        Ti        Td
P             0.5Ku
PI            0.45Ku    Tu/1.2
PID           0.6Ku     Tu/2      Tu/8
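Table 5.2 can likewise be mechanized, and the result checked against the worked example later in the text (Ku = 4, Tu = 6.28).

```python
# Ziegler-Nichols ultimate sensitivity tuning, Table 5.2.
def zn_ultimate(Ku, Tu, kind="PID"):
    if kind == "P":
        return {"Kc": 0.5 * Ku}
    if kind == "PI":
        return {"Kc": 0.45 * Ku, "Ti": Tu / 1.2}
    return {"Kc": 0.6 * Ku, "Ti": Tu / 2.0, "Td": Tu / 8.0}

pi = zn_ultimate(4.0, 6.28, "PI")    # Kc = 1.8, Ti ~ 5.23
pid = zn_ultimate(4.0, 6.28, "PID")  # Kc = 2.4, Ti = 3.14, Td = 0.785
```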
Figure 5.15 Responses for different combinations of controller gain K (big, good, small) and integral time Ti (big, good, small).
Consider the system in Example 3.19. In Example 4.9 we found that Am = 2.4 and that the phase was −180° at ω0 = 0.85 rad/s. This gives Ku = Am = 2.4 and Tu = 2π/ω0 ≈ 7.4.
Manual Tuning
Figure 5.16 Simulation of PI controllers obtained by a) the step response method, b) the ultimate sensitivity method, and c) manual fine tuning. The upper plot shows the set point and measured variable, the lower plot the control variable.
Let us first apply the step response method. In this case we can find the steepest slope of the tangent and the apparent dead-time analytically. Straightforward calculations give L = 1.42. The apparent time constant is obtained from the condition T + L = 4. Hence T = 2.58 and L/T = 0.55. Table 5.1 then gives Kc = 1.88 and Ti = 4.70 for a PI controller. Applying the ultimate sensitivity method we find that ω = 1 gives the intersection with the negative real axis, i.e.
G(i) = −0.25
This implies that Ku = 4 and Tu = 6.28. Since Kp = 1 it follows that KuKp = 4. The ultimate sensitivity method can thus be applied. Table 5.2 gives Kc = 1.8 and Ti = 5.2. In Figure 5.16 we show a simulation of the controllers obtained. In the simulation a unit step command is applied at time zero. At time t = 30 a disturbance in the form of a negative unit step is applied to the process input. The figure clearly shows the oscillatory nature of the responses due to the design rule based on quarter amplitude damping. The damping can be improved by manual tuning. After some experimentation we find the parameters Kc = 0.5 and Ti = 2. This controller gives better damping. The response to the load disturbance is, however, larger than for the Ziegler-Nichols tuning.
Figure 5.17 Simulation of PID controllers obtained by a) the step response method, b) the ultimate sensitivity method and c) by manual fine tuning.
Consider the same system as in the previous example but now use PID control. The step response method, Table 5.1, gives the parameters Kc = 2.63, Ti = 2.85, Td = 0.71 for a PID controller. Applying the ultimate sensitivity method we find from Table 5.2 Kc = 2.4, Ti = 3.14 and Td = 0.785. Figure 5.17 shows a simulation of the controllers obtained. The poor damping is again noticeable. Manual fine tuning gives the parameters Kc = 1.39, Ti = 2.73, Td = 0.793, and β = 0, which gives significantly better damping. Notice that it was necessary to put β = 0 in order to avoid the large overshoot in the response to command signals.
Strictly speaking the Ziegler-Nichols methods are also based on mathematical models. Now we will, however, assume that a mathematical model of the process is available in the form of a transfer function. The controller parameters can then be computed by using some method for control system design. We will use a pole-placement design technique. The case when the process is of first order will now be considered. Assume that the process dynamics can be described by the transfer function

Gp(s) = b / (s + a)    (5.20)

In Laplace transform notation the process is thus described by

(s + a) Y(s) = b U(s)    (5.21)
A PI controller with set point weighting is described by

U(s) = Kc [ β Yr(s) − Y(s) + 1/(sTi) (Yr(s) − Y(s)) ]    (5.22)
Elimination of U(s) between (5.22) and (5.21) gives

Y(s) = Gp(s) Kc (β + 1/(sTi)) Yr(s) − Gp(s) Gc(s) Y(s)

where

Gc(s) = Kc (1 + sTi) / (sTi)

Hence

(1 + Gp(s)Gc(s)) Y(s) = Gp(s) Kc (β + 1/(sTi)) Yr(s)    (5.23)

The characteristic equation for the closed loop system is thus

1 + Gp(s)Gc(s) = 1 + bKc (1 + sTi) / (sTi (s + a)) = 0

which can be simplified to

s(s + a) + bKc s + bKc/Ti = s² + (a + bKc) s + bKc/Ti = 0    (5.24)

The closed loop system is thus of second order. By choosing the controller parameters any characteristic equation of second order can be obtained. Let the desired closed loop poles be −ζω ± iω√(1 − ζ²). The closed loop characteristic equation is then

s² + 2ζωs + ω² = 0    (5.25)

Identification of coefficients of equal powers of s in (5.24) and (5.25) gives

a + bKc = 2ζω
bKc/Ti = ω²

Solving these linear equations we get

Kc = (2ζω − a) / b
Ti = (2ζω − a) / ω²    (5.26)

This method of designing a controller is called pole-placement because the controller parameters are determined in such a way that the closed loop characteristic polynomial has specified poles. Notice that for large values of ω we get

Kc ≈ 2ζω / b
Ti ≈ 2ζ / ω

The controller parameters thus do not depend on a. The reason for this is simply that for high frequencies the transfer function (5.20) can be approximated by G(s) = b/s. The significance of approximating the first order process by an integrator is that for the controlled process the behavior around ω0 is of most relevance. That behavior is often well determined by using G(s) = b/s.
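The design formulas (5.26) are easily verified numerically: substituting Kc and Ti back into the closed loop polynomial (5.24) must reproduce s² + 2ζωs + ω². A sketch with arbitrarily chosen process and design parameters.

```python
# PI pole placement for Gp(s) = b/(s + a), Eq. (5.26). The closed loop
# polynomial (5.24) must come out as s^2 + 2*zeta*omega*s + omega^2.
a, b = 1.0, 2.0            # process parameters (arbitrary)
zeta, omega = 0.7, 4.0     # desired relative damping and frequency

Kc = (2 * zeta * omega - a) / b
Ti = (2 * zeta * omega - a) / omega**2

c1 = a + b * Kc            # should equal 2*zeta*omega = 5.6
c0 = b * Kc / Ti           # should equal omega**2 = 16
```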
With controller parameters given by Equation (5.26) the transfer function from the command signal to the process output is

G(s) = (1 + βTi s) ω² / (s² + 2ζωs + ω²)    (5.27)

The transfer function thus has a zero at

s = −1/(βTi)

The position of the zero can be adjusted by choosing parameter β. Parameter β thus allows us to also position a zero of the closed loop system. Notice that the process (5.20) does not have any zeros. The zero is introduced by the controller if β ≠ 0. For large values of ω the zero becomes s = −ω/(2ζβ). With β = 1 this gives a noticeable increase of the overshoot. To avoid this overshoot the parameter β should be smaller than 0.25/ζ.
The case when the process is of second order will now be considered. Assume that the process dynamics can be described by the transfer function
Gp(s) = b / (s² + a1 s + a2)    (5.28)

In Laplace transform notation the process is thus described by

(s² + a1 s + a2) Y(s) = b U(s)    (5.29)

The process is controlled by a PID controller

U(s) = Kc [ β Yr(s) − Y(s) + 1/(sTi) (Yr(s) − Y(s)) + sTd (γ Yr(s) − Y(s)) ]    (5.30)

To simplify the algebra we are using a simplified form of the controller (N = ∞). Elimination of U(s) between (5.29) and (5.30) gives

(1 + Gp(s)Gc(s)) Y(s) = Gp(s) Kc (β + 1/(sTi) + γ sTd) Yr(s)

where

Gc(s) = Kc (1 + sTi + s²TiTd) / (sTi)

The characteristic equation for the closed loop system is

1 + Gp(s)Gc(s) = 1 + bKc (1 + sTi + s²TiTd) / (sTi (s² + a1 s + a2)) = 0    (5.31)
The closed loop system is thus of third order. An arbitrary characteristic polynomial can be obtained by choosing the controller parameters. Specify the closed loop characteristic equation as

(s² + 2ζωs + ω²)(s + αω) = s³ + (2ζ + α)ωs² + (1 + 2ζα)ω²s + αω³ = 0    (5.32)

Equating coefficients of equal powers of s in (5.31) and (5.32) gives the equations

a1 + bKcTd = (2ζ + α)ω
a2 + bKc = (1 + 2ζα)ω²
bKc/Ti = αω³

Solving these linear equations we get

Kc = ((1 + 2ζα)ω² − a2) / b
Ti = ((1 + 2ζα)ω² − a2) / (αω³)
Td = ((α + 2ζ)ω − a1) / ((1 + 2ζα)ω² − a2)    (5.33)

For large values of ω the parameters become approximately

Kc ≈ (1 + 2ζα)ω² / b
Ti ≈ (1 + 2ζα) / (αω)
Td ≈ (α + 2ζ) / ((1 + 2ζα)ω)    (5.34)
The controller parameters then do not depend on a1 and a2. The reason for this is that the transfer function (5.28) can be approximated by G(s) = b/s² for high frequencies. Notice that with the parameter values (5.34) the polynomial 1 + sTi + s²TiTd has complex zeros. This means that the controller can not be implemented as a PID controller in series form. Compare with Equation (5.12).
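The formulas (5.33) can be verified the same way: the closed loop coefficients from (5.31) must match the desired polynomial (5.32). The process and design parameters below are chosen arbitrarily.

```python
# PID pole placement for Gp(s) = b/(s^2 + a1*s + a2), Eq. (5.33). The
# closed loop polynomial must match
# (s^2 + 2*zeta*omega*s + omega^2)(s + alpha*omega).
a1, a2, b = 1.0, 1.0, 1.0          # process parameters (arbitrary)
zeta, omega, alpha = 0.7, 2.0, 1.0

Kc = ((1 + 2 * zeta * alpha) * omega**2 - a2) / b
Ti = ((1 + 2 * zeta * alpha) * omega**2 - a2) / (alpha * omega**3)
Td = ((alpha + 2 * zeta) * omega - a1) / ((1 + 2 * zeta * alpha) * omega**2 - a2)

c2 = a1 + b * Kc * Td              # should equal (2*zeta + alpha)*omega = 4.8
c1 = a2 + b * Kc                   # should equal (1 + 2*zeta*alpha)*omega**2 = 9.6
c0 = b * Kc / Ti                   # should equal alpha*omega**3 = 8
```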
It has thus been shown that the closed loop poles of a system can be changed by feedback. For the PID controller they are influenced by the parameters Kc, Ti and Td. The zeros of the closed loop system will now be considered.
With controller parameters Kc, Ti and Td given by Equation (5.33) the transfer function from the command signal to the process output is

G(s) = αω³ (1 + βsTi + γs²TiTd) / ( (s² + 2ζωs + ω²)(s + αω) )    (5.35)

The transfer function thus has zeros at the zeros of the polynomial

1 + βsTi + γs²TiTd

The zeros of the transfer function can thus be positioned arbitrarily by choosing the parameters β and γ. Changing the zeros influences the response to command signals. This is called zero placement. Parameter γ is normally zero. The transfer function (5.35) then has a zero at

s = −1/(βTi)

By choosing

β = 1/(αωTi) = ω² / ((1 + 2ζα)ω² − a2)    (5.36)

the zero will cancel the pole at s = −αω and the response to command signals is thus a pure second order response. For large values of ω, (5.36) reduces to

β ≈ 1/(1 + 2ζα)
It has thus been demonstrated that it is straightforward to design PI and PID controllers for first and second order systems by using pole-zero placement. From the examples we may expect that the complexity of the controller increases with the complexity of the model. This guess is indeed correct, as will be shown in Chapter 7. In that chapter we will also give precise conditions for pole-placement. In the general case the transfer function from command signal to process output has two groups of zeros: one group is the process zeros and the other group is the zeros introduced by the controller. Only the zeros introduced by the controller can be placed arbitrarily. The two cases discussed are special in the sense that the two processes have no zeros. In this section we have discussed several methods to determine the parameters of a PID controller. Some empirical tuning rules, the Ziegler-Nichols step response method and the Ziegler-Nichols ultimate sensitivity method, were first described. They are the prototypes for tuning procedures that are widely used in practice. Tuning based on mathematical models was also discussed. A simple method called pole-zero placement was applied to processes that can be described by first and second order models. If the process dynamics can be described by the first order transfer function (5.20) the process can be controlled well by a PI controller. With PI control the closed loop system is of second order and its poles can be assigned arbitrary
Summary
values by choosing parameters Kc and Ti of the PI controller. The closed loop transfer function from command signal to output has one zero, which can be placed arbitrarily by choosing parameter β of the PI controller. If the process dynamics can be described by the second order transfer function (5.28) the process can be controlled well by a PID controller. The closed loop system is of third order and its poles can be placed arbitrarily by choosing parameters Kc, Ti, and Td of the PID controller. The closed loop transfer function from command signal to process output has two zeros. They can be placed arbitrarily by choosing parameters β and γ of the PID controller.
Pressure Control

Gas pressure control does in general only require P control, or PI control with a small amount of integral action. The gain in the controller can be high.

Level Control

Due to splashing and wave formation the measurements are usually noisy, and PI controllers are used. Level control can be used for different purposes. One use of a tank is as a buffer tank. In those cases it is only important to limit the level to the capacity of the tank. This can be done with a low gain P controller or a PI controller with a long reset time. On other occasions, such as in reactors and evaporators, it can be crucial to keep the level within strict limits. A PI controller with a short reset time should then be used.
Temperature Control
Temperature control loops are in general very slow with time constants ranging from minutes to hours. The noise level is in general low, but the processes often have quite considerable time lags. To speed up the temperature control it is important to use a PID controller. The derivative part will introduce a predictive component in the controller.
Figure 5.18 A standard operational amplifier circuit with impedances Z1 and Z2, input voltage V1, and output voltage V2.
Concentration Control
Concentration loops can have long time constants and a variable time delay. The time delays may be due to delays in analyzing equipment. PID controllers should be used if possible. The derivative term sometimes cannot be used due to excessive measurement noise. The controller gain should in general be moderate.
Analog Implementation
The operational amplifier is an electronic amplifier with a very high gain. Using such amplifiers it is easy to perform analog signal processing: signals can be integrated, differentiated, and filtered. Figure 5.18 shows a standard circuit. If the gain of the amplifier is sufficiently large, the current I is zero. Using Ohm's law, we then get the following relation between the input and output voltages

V2/V1 = -Z2/Z1   (5.37)

By choosing the impedances Z1 and Z2 properly, we obtain different types of signal processing. Choosing Z1 and Z2 as resistors, i.e. Z1 = R1 and Z2 = R2, we get

V2/V1 = -R2/R1   (5.38)

Hence we get an amplifier with gain R2/R1. Choosing Z1 as a resistance and Z2 as a capacitance, i.e. Z1 = R and Z2 = 1/(sC), we get

V2/V1 = -1/(RCs)   (5.39)

which implements an integrator. If Z1 is chosen as a series connection of a resistor R1 and a capacitor C1 and Z2 is a resistor R2, we get

Z2/Z1 = R2/(R1 + 1/(sC1)) = R2 C1 s/(1 + R1 C1 s)   (5.40)

This corresponds to the approximate differentiation used in a PID controller.
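The last relation can be spot-checked numerically. The component values below are illustrative choices, not from the text; the check evaluates both sides of (5.40), with the sign of (5.37), at a few frequencies.

```python
# Numeric check that -Z2/Z1, with Z1 = R1 + 1/(s*C1) and Z2 = R2, equals the
# approximate differentiator -R2*C1*s/(1 + R1*C1*s) from (5.40).
R1, R2, C1 = 1.0e4, 1.0e5, 1.0e-6    # illustrative component values

def gain(s):
    Z1 = R1 + 1.0 / (s * C1)
    return -R2 / Z1

def approx_diff(s):
    return -R2 * C1 * s / (1.0 + R1 * C1 * s)

for s in (0.1, 1.0, 10.0, 100.0):
    assert abs(gain(s) - approx_diff(s)) < 1e-9
print("(5.40) checked at sample frequencies")
```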
Figure 5.19 A PID controller built from an operational amplifier with resistors R1, R2, R3, R4 and capacitors C1, C2.
By combining operational amplifiers with capacitors and resistors, we can build up PID controllers, e.g. as shown in Figure 5.19. This circuit implements a controller with the transfer function

G(s) = Kc (1 + sTi)(1 + sTd) / (sTi (1 + sTd/N))   (5.41)

where the controller parameters are given by

Kc = R1 R4/(R(R2 + R3))
Ti = R1 C1
Td = R2 C2
N = R2/R3 + 1

There are many other possibilities to implement the controller. For large integration times the circuit in Figure 5.19 may require unreasonable component values. Other circuits that use more operational amplifiers are then used.
Digital Implementations
A digital computer can neither take derivatives nor compute integrals exactly. To implement a PID controller using a digital computer, it is therefore necessary to make some approximations. The proportional part

P(t) = Kc (yr(t) - y(t))   (5.42)

requires no approximation since it is a purely static relation. The integral term

I(t) = (Kc/Ti) ∫₀ᵗ e(τ) dτ   (5.43)

is approximated by a rectangular approximation, i.e.

I(kh + h) = I(kh) + (Kc h/Ti) e(kh)   (5.44)

The derivative part, given by

(Td/N) dD/dt + D = -Kc Td dy/dt   (5.45)

is approximated by taking backward differences. This gives

D(kh) = Td/(Td + Nh) D(kh - h) - Kc Td N/(Td + Nh) (y(kh) - y(kh - h))   (5.46)
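These three approximations can be collected in a short function. The sketch below follows (5.42), (5.44), and (5.46); the parameter values are illustrative, not from the text.

```python
# One-step evaluation of the discrete P, I, and D terms of (5.42), (5.44),
# and (5.46).  Parameter values are illustrative.
Kc, Ti, Td, N, h = 2.0, 10.0, 1.0, 10.0, 0.1

def pid_terms(yr, y, y_old, I, D_old):
    """Return (P, I_next, D) at one sampling instant."""
    e = yr - y
    P = Kc * e                                # (5.42)
    I_next = I + Kc * h / Ti * e              # (5.44), rectangular rule
    D = (Td / (Td + N * h)) * D_old \
        - (Kc * Td * N / (Td + N * h)) * (y - y_old)   # (5.46)
    return P, I_next, D
```

Note that the derivative acts on the measurement y only, as in (5.45), so a set point step produces no derivative kick.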
Figure 5.20 A simple operating system that executes the control program at each clock interrupt.
This approximation has the advantage that it is always stable and that the sampled-data pole goes to zero when Td goes to zero. The control signal is given as

u(kh) = P(kh) + I(kh) + D(kh)   (5.47)

This form has the pedagogical advantage that the proportional, integral, and derivative terms are obtained separately. There are many other approximations, which are described in detail in textbooks on digital control. To introduce a digital version of the anti-windup scheme, we simply compute a signal

v(kh) = P(kh) + I(kh) + D(kh)   (5.48)

The controller output is then given by

u = sat(v, umin, umax) = umin if v < umin, v if umin ≤ v ≤ umax, umax if v > umax   (5.49)

and the updating of the integral term given by Equation (5.44) is replaced by

I(kh + h) = I(kh) + (Kc h/Ti) e(kh) + (h/Tr)(u(kh) - v(kh))   (5.50)

where Tr is the reset (tracking) time constant.
To implement the controller using a digital computer, it is also necessary to have analog-to-digital converters that convert the set point yr and the measured value y to digital numbers. It is also necessary to have a digital-to-analog converter that converts the computed output u to an analog signal that can be applied to the process. To ensure that the control algorithm is synchronized, it is also necessary to have a clock so that the control algorithm is executed once every h time units. This is handled by an operating system. A simple form of such a system is illustrated in Figure 5.20. The system works like this. The clock gives an interrupt signal each sampling instant. When the interrupt occurs, the following program is executed:

Analog-to-digital (AD) conversion of yr and y
Compute P from (5.42)
Compute D from (5.46)
Compute v = P + I + D
Compute u from (5.49)
Digital-to-analog (DA) conversion of u
Compute I from (5.50)
Wait for next clock pulse
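The steps above can be sketched in Python; the AD/DA conversions and the clock are outside the sketch, and all parameter values are illustrative rather than taken from the text.

```python
# One execution of the interrupt routine: P from (5.42), D from (5.46),
# v and u from (5.48)-(5.49), and the anti-windup integral update (5.50).
# Parameter values are illustrative; Tr is the reset (tracking) time constant.
Kc, Ti, Td, Tr, N, h = 2.0, 10.0, 1.0, 1.0, 10.0, 0.1
umin, umax = -10.0, 10.0
I = D = y_old = 0.0

def control_step(yr, y):
    global I, D, y_old
    P = Kc * (yr - y)                                     # (5.42)
    D = (Td / (Td + N * h)) * D \
        - (Kc * Td * N / (Td + N * h)) * (y - y_old)      # (5.46)
    v = P + I + D                                         # (5.48)
    u = min(max(v, umin), umax)                           # (5.49)
    # DA conversion of u would happen here, before the integral update
    I += Kc * h / Ti * (yr - y) + h / Tr * (u - v)        # (5.50)
    y_old = y
    return u
```

The integral is updated after the output is written, matching the ordering of the step list above.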
100 REM *************************
101 REM *     PID REGULATOR     *
102 REM *  AUTHOR: K J ASTROM   *
103 REM *************************
110 REM FEATURES: DERIVATIVE ON MEASUREMENT, ANTI WINDUP
120 REM VARIABLES: YR SET POINT, Y MEASUREMENT, U REGULATOR OUTPUT
130 REM PARAMETERS: K GAIN, TI INTEGRAL TIME, TD DERIVATIVE TIME,
131 REM   TR RESET TIME CONSTANT, N MAX DERIVATIVE GAIN, H SAMPLING PERIOD,
132 REM   UL LOW OUTPUT LIMIT, UH HIGH OUTPUT LIMIT,
133 REM   B SET POINT WEIGHTING
140 :
200 REM PRECOMPUTATIONS
210 AD=TD/(TD+N*H)
220 BD=K*TD*N/(TD+N*H)
230 BI=K*H/TI
240 BR=H/TR
300 REM MAIN PROGRAM
310 YR=PEEK(100)
320 Y=PEEK(102)
330 P=K*(B*YR-Y)
340 D=AD*D-BD*(Y-YOLD)
350 V=P+I+D
360 U=V
370 IF U<UL THEN U=UL
380 IF U>UH THEN U=UH
390 POKE 200,U
400 I=I+BI*(YR-Y)+BR*(U-V)
410 YOLD=Y
420 RETURN
430 END
When the interrupt occurs, digital representations of set point yr and measured value y are obtained from the analog to digital conversion. The control signal u is computed using the approximations described earlier. The numerical representation of u is converted to an analog signal using the DA converter. The program then waits for the next clock signal. An example of a complete computer code in BASIC is given in Listing 5.1. The subroutine is normally called at line 300. The precomputations starting at line 200 are called when parameters are changed. The purpose of the precomputations is to speed up the control calculations. The command
PEEK(100) reads the analog signal at address 100 and assigns its value to yr. The command POKE 200 executes a DA conversion. These operations require that the AD and DA converters are connected as memory-mapped IO.

Incremental Algorithms

Equation (5.47) is called a position algorithm or an absolute algorithm. In some cases it is advantageous to move the integral action outside the control algorithm. This is natural when a stepper motor is used. The output of the controller should then represent the increments of the control signal, and the motor implements the integrator. Another case is when an actuator with pulse width control is used. In such a case the control algorithm is rewritten so that its output is the increment of the control signal. This is called the incremental form of the controller. A drawback with the incremental algorithm is that it cannot be used for P or PD controllers. If this is attempted the controller will be unable to keep the reference value, because an unstable mode, which corresponds to the integral action, is cancelled.

The sampling interval is an important parameter in a digital control system. It must be chosen sufficiently small so that the approximations used are accurate, but not so small that there will be numerical difficulties. Several rules of thumb for choosing the sampling period for a digital PID controller are given in the literature. There is a significant difference between PI and PID controllers. For PI controllers the sampling period is related to the integration time. A typical rule of thumb is
h/Ti = 0.1 - 0.3

When Ziegler-Nichols tuning is used this implies

h/L = 0.3 - 1

where L is the apparent dead-time, or equivalently

h/Tu = 0.1 - 0.3

where Tu is the ultimate period. With PID control the critical issue is that the sampling period must be so short that the phase lead is not adversely affected by the sampling. This implies that the sampling period should be chosen so that the number hN/Td is in the range of 0.2 to 0.6. With N = 10 this means that for Ziegler-Nichols tuning we have

h/L = 0.01 - 0.03

or

h/Tu = 0.0025 - 0.0075

Controllers with derivative action thus require significantly shorter sampling periods than PI controllers.
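The rules of thumb translate directly into small helper functions; the ranges below are the ones quoted above, and the example values are illustrative.

```python
# Sampling period ranges from the rules of thumb above.
def h_range_pi(Ti):
    """PI control: h/Ti in the range 0.1 - 0.3."""
    return 0.1 * Ti, 0.3 * Ti

def h_range_pid(Td, N=10):
    """PID control: choose h so that h*N/Td lies in 0.2 - 0.6."""
    return 0.2 * Td / N, 0.6 * Td / N

print(h_range_pid(1.0))  # with Td = 1 and N = 10, h lies between 0.02 and 0.06
```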
Commercial digital controllers for few loops often have a short fixed sampling interval on the order of 200 ms. This implies that PI control can be used for processes with ultimate periods larger than 0.6 s, and that PID controllers can be used for processes with ultimate periods larger than 25 s. From the above discussion it may appear advantageous to select the sampling interval as short as possible. There are, however, also drawbacks of choosing a very short sampling period. Consider the calculation of the integral term. Computational problems, such as integration offset, may occur due to the finite precision in the number representation used in the computer. Assume that there is an error e(kh). The integral term is then increased at each sampling instant by

(Kc h/Ti) e(kh)   (5.51)

Assume that the gain is small or that the reset time is large compared to the sampling time. The change in the output may then be smaller than the quantization step in the DA converter. For instance, a 12-bit DA converter (i.e., a resolution of 1/4096) should give sufficiently good resolution for control purposes. Yet if Kc = h = 1 and Ti = 3600, then any error less than 90% of the span of the DA converter gives a calculated change in the integral part that is less than the quantization step. There will be an offset in the output if the integral part is stored with the same number of digits as used in the DA converter. One way to avoid this is to use higher precision in the internal calculations, so that changes smaller than the quantization level of the output are still accumulated. Frequently at least 24 bits are used to implement the integral part in a computer, in order to avoid integration offset. It is also useful to use a longer sampling interval when computing the integral term.
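A numeric sketch of this effect, assuming for illustration that the integral state is truncated to the resolution of a 12-bit DA converter; the values Kc = h = 1 and Ti = 3600 are those of the example above, while the 50% error level is a choice made here.

```python
import math

# Integration offset: with Kc = h = 1 and Ti = 3600, each update changes the
# integral by e/3600.  If the integral state is truncated to the 12-bit
# quantization step q = 1/4096, an error of half the span never accumulates.
q = 1.0 / 4096                  # quantization step of a 12-bit converter, unit span
Kc, h, Ti = 1.0, 1.0, 3600.0
e = 0.5                         # constant control error, 50% of the span

I_exact = I_quant = 0.0
for _ in range(1000):
    inc = Kc * h / Ti * e
    I_exact += inc
    I_quant = math.floor((I_quant + inc) / q) * q  # stored at DA precision
print(I_exact, I_quant)         # the truncated integral stays at zero
```

Accumulating the integral in higher precision (here, the plain float I_exact) removes the offset, which is the point made in the text.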
5.7 Conclusions
In this chapter we have discussed PID control, which is a generic name for a class of controllers that are simple, common, and useful. The basic control algorithm was discussed in Section 5.2, where we discussed the properties of proportional, integral, and derivative control. The proportional action provides the basic feedback function, derivative action gives prediction and improves stability, and integral action is introduced so that the correct steady state values will be obtained. Some useful modifications of the linear behavior of the control algorithm and its large signal properties were discussed in Section 5.3. A particular phenomenon called integral windup was given special attention. This led to another modification of the algorithm by introducing the anti-windup feature. In Section 5.4 we discussed different ways of tuning the algorithm, both empirical methods and methods based on mathematical models. The design method introduced was pole-zero placement. The controller parameters were simply chosen to give desired closed loop poles and zeros. In Section 5.5 we finally discussed implementation of PID controllers. This covered both analog and digital implementation. This chapter may be viewed as a nutshell presentation of key ideas of automatic control. Most of the ingredients are here. More sophisticated controllers differ from PID controllers mainly in the respect that they can predict better. The pole-placement design method can also be applied to more complex controllers. The implementation issues are also similar.
6.1 Introduction
The feedback controller developed in Chapters 4 and 5 is a very powerful tool, which can provide efficient control and thus ensure stable operation of many simple processes. The simple feedback controller suffers, however, from two basic shortcomings: one originates from the necessity of measuring the controlled variable, and another originates in varying operating conditions. A deviation between the controlled variable and the setpoint has to be registered before any corrective action is taken by the controller. If operating conditions vary significantly then the stability of simple feedback may be threatened due to varying process parameters. Methods to deal with some aspects of these limitations are given in this chapter. The fact that a deviation must be registered before any corrective action can be taken implies that simple feedback is not very useful for processes with large delays relative to the dominating time constants. In many processes it may be possible to measure secondary process variables, which show the effect of disturbances before it is seen in the controlled variable. This leads to the idea of using a secondary loop to aid the primary control loop. This usage of multiple loops, called cascade control, is very widespread in the process industries to improve the performance of simple feedback. Other ways of handling disturbances and large delays using process models are discussed in Chapter 7. The ultimate case of this first basic limitation occurs when the desired controlled variable cannot be measured. Then simple feedback cannot be applied, and one must resort to using secondary process measurements to estimate the desired controlled variable. Such methods also require process models.
Chapter 6
There are many instances where it may be advantageous to use nonlinear couplings to improve the behavior of simple feedback control. Four such cases related to varying operating conditions are mentioned below.

In many processes it is known a priori that certain flow rates should stay in a fixed ratio to maintain the product quality. Such a relationship may be achieved by measuring one (or both) flow rates and then, by a type of feedforward, keeping the other flow such that the ratio stays constant. This type of feedforward-like control is called ratio control.

In processes where the operating conditions change over fairly wide ranges, e.g. with market-driven variations in product demands, it may be desirable to retune the controllers quite often in order to achieve acceptable performance. This retuning may be avoided by switching between pretuned controller parameter sets as the production level changes. Scheduling of controller settings in this manner is called gain scheduling. This method is introduced as one way of handling varying linear process characteristics.

In processes where a large operating range is required it may be necessary to change the actuator variable, e.g. from heating to cooling. Such changes may be accomplished using what are called split range controllers.

For processes where the operating regime may change during operation and create undesirable behavior it may be necessary to use interlocks and selectors for protection and safety reasons.

In summary, this chapter deals with improving the behavior of feedback control using first linear and then nonlinear functions, which are built around the simple feedback loop. The main application of these techniques stems from operating the processes under varying conditions. The implementation of these functions has been simplified tremendously by the digital computer. Therefore the techniques described in this chapter have reached widespread usage in the process industries.
The next section introduces the linear cascade controllers. The nonlinear couplings are dealt with in Section 6.3 where ratio control is discussed. Feedforward control is treated in Section 6.4. Section 6.5 deals with varying linear process characteristics. Gain scheduling is treated in Section 6.6. The conclusions of the chapter are summarized in Section 6.7.
Figure 6.1 A process diagram for a process furnace with schematic control system.
A process furnace
A process diagram for a process furnace is illustrated in Figure 6.1. Fuel is supplied to the burners. The energy input is determined by the fuel valve. The process fluid exit temperature is measured, and a control loop to control this temperature by manipulating the fuel valve is shown. The disturbances are the pressure and temperature of the fuel and the temperature of the process fluid. Disturbances in fuel pressure are common. Cancellation of these with the single loop controller shown in Figure 6.1 is slow, since the pressure effect is only seen when the process fluid temperature changes. To improve the control system performance the fuel flowrate may be measured and controlled locally using a secondary controller. The setpoint of this secondary controller is then given by the primary controller, as illustrated in Figure 6.2. This cascade controller has many improved properties compared to the single control loop shown in Figure 6.1. It is intuitively clear that fuel supply pressure variations now may be rapidly compensated by the secondary flow controller, and that this coupling may also be able to compensate for a nonlinear fuel valve characteristic, as demonstrated in Chapter 4. The question remains whether this coupling may be able to improve the performance towards disturbances in the primary loop. To ascertain the intuitive claims and to answer the latter question a little analysis is helpful, as follows in the next subsection.
Figure 6.3 Block diagram for cascade control, with primary loop 1 and secondary loop 2.
As illustrated above, cascade control normally includes a primary controller, which is also called a master controller, and a secondary controller, which is also called a slave controller. To assist the analysis a block diagram for cascade control is drawn in Figure 6.3. Here the following symbols are used:
Ei(s)    Error signal = Yri(s) - Yi(s) for loop i
Di(s)    Disturbance for loop i
Gci(s)   Controller transfer function for loop i
Gdi(s)   Disturbance transfer function for loop i
Gpi(s)   Process transfer function for loop i
Gti(s)   Transmitter transfer function for loop i
Xi(s)    State variable to be measured for loop i
Yi(s)    Measured variable for loop i
Yri(s)   Reference value for measured variable for loop i
In terms of this block diagram the process furnace has the measurement of the flow rate of fuel for the inner loop and of the temperature of the process fluid for the outer loop. Note also that in this block diagram for cascade control the valve transfer function in the process furnace case is Gp2, and that the disturbance transfer function Gd2 depends upon whether a fuel pressure or an inlet temperature disturbance is considered. From this block diagram some properties of cascade control may be developed by formulating a transfer function for the primary loop, and analyzing this transfer function for two extreme cases of secondary controller design. In the following analysis the Laplace variable argument s is omitted from variables and transfer functions in the formulas for clarity of reading. A general transfer function for the cascade controller is formulated using the block diagram manipulation rules. First the transfer function for the secondary loop is formulated. From Figure 6.3 it is noted that

X2 = Gd2 D2 + Gp2 Gc2 (Yr2 - Gt2 X2)

which gives

X2 = Gd2/(1 + Gt2 Gp2 Gc2) D2 + Gp2 Gc2/(1 + Gt2 Gp2 Gc2) Yr2 = Hd2 D2 + Hr2 Yr2   (6.1)
Figure 6.4 Simplified block diagram of the cascade control system in Figure 6.3, with L2 = Gt2 Gp2 Gc2.
Using the two transfer functions Hd2 and Hr2 defined through the last equality, the block diagram may be simplified as shown in Figure 6.4. For the primary loop the transfer function is determined analogously to the secondary loop, using the transfer functions defined in equation (6.1) for the secondary loop

X1 = Gd1 D1 + Gp1 X2

Inserting (6.1) and the relations for the primary controller, Yr2 = Gc1(Yr1 - Y1) and Y1 = Gt1 X1, gives

X1 = Gd1 D1 + (Gp1 Hd2 D2 + Gp1 Hr2 Gc1 Yr1)/(1 + Gt1 Gp1 Hr2 Gc1)   (6.2)

The characteristic equation for the primary loop may be obtained using the transfer functions for the secondary loop in (6.1)

1 + Gt1 Gp1 Hr2 Gc1 = (1 + Gt2 Gp2 Gc2 + Gt1 Gp1 Gp2 Gc2 Gc1)/(1 + Gt2 Gp2 Gc2)   (6.3)

To obtain some insight the primary loop transfer function in (6.2) is investigated in two limiting cases.

i) The secondary controller is not used, i.e. Gc2 = 1 and Gt2 = 0, hence the transfer function simplifies to

X1 = Gd1 D1 + (Gp1 Gd2 D2 + Gp1 Gp2 Gc1 Yr1)/(1 + Gt1 Gp1 Gp2 Gc1)   (6.4)

ii) The secondary controller is perfect. This ideal case of perfect control can be obtained if it is possible to realize a controller with high gain over the relevant frequency range. This may be achieved for a first order process even though the parameters are uncertain. That this ideal control is indeed feasible in a more general sense, for processes without delays and right half plane zeros, is shown in Chapter 7. Realization of perfect control does, however, in this latter case require that the process parameters are perfectly known. Assuming that the transfer functions Gp2 and Gt2 abide by these limitations, the transfer function for the secondary loop, equation (6.1), simplifies to

X2 = 0 · D2 + Gt2^-1 Yr2 = Gt2^-1 Yr2

Thus the primary loop transfer function in equation (6.2) simplifies to

X1 = Gd1 D1 + Gt2^-1 Gp1 Gc1 Yr1/(1 + Gt2^-1 Gt1 Gp1 Gc1)   (6.5)
Comparison of the two extreme-case transfer functions in equations (6.4) and (6.5), in the case of a fast measurement in the secondary loop, shows the possible advantages of cascade over single loop control:

The secondary loop may cancel disturbances in D2 perfectly.

With perfect control in the secondary loop, the cascade controller will be insensitive to parameter variations in Gp2.

In case of significant parameter variations in Gp2, a consequence of the previous point is that it is possible to tune the primary cascade controller in (6.5) better than the single loop controller in (6.4) towards disturbances D1 in the primary loop, as well as for setpoint changes in Yr1.

Cascade control may be extended to include several loops in inner cascade. The control system performance may be improved up to a limit, which is reached when all state variables are measured. In this case cascade control develops into state variable feedback. This subject is dealt with in Chapter 7.

The design considerations deal with selection of the secondary measurement, choice of controller, and controller tuning. The choice of secondary measurement is essential, since the secondary loop dynamics ideally should enable perfect control of this loop. Therefore the following guidelines should be followed when screening measurements for a secondary loop:

i) The secondary loop should not contain any time delay or right half plane zero.
ii) The essential disturbance to be cancelled should attack in the secondary loop.

If it is not possible to fulfill these two requirements then the advantages of cascade control are diminished, and a closer analysis should be carried out to warrant such an application. When the secondary measurement is selected, the control structure is fixed. Then the controller form and tuning parameters remain to be fixed. It is difficult to give general guidelines, since the choices are determined by the process dynamics and the disturbance properties.
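The advantage for disturbances entering the secondary loop can be illustrated with a small discrete-time simulation. The first-order processes, gains, and step disturbance below are illustrative choices, not taken from the text, and both loops use pure P control.

```python
# Compare single-loop and cascade P control for a step disturbance d2
# entering the fast secondary process.  All numbers are illustrative.
def simulate(cascade, dt=0.001, T=10.0):
    tau1, tau2 = 5.0, 0.2        # slow primary, fast secondary process
    Kc1, Kc2 = 5.0, 50.0         # a high gain is feasible in the fast loop
    x1 = x2 = 0.0
    d2 = 1.0                     # step disturbance in the secondary loop
    for _ in range(int(T / dt)):
        yr2 = Kc1 * (0.0 - x1)           # primary controller (setpoint 0)
        u = Kc2 * (yr2 - x2) if cascade else yr2
        x2 += dt * (-x2 + u + d2) / tau2  # secondary process
        x1 += dt * (-x1 + x2) / tau1      # primary process
    return abs(x1)

print(simulate(False), simulate(True))  # cascade leaves a much smaller deviation
```

The high-gain inner loop cancels most of d2 before it reaches the slow primary process, which is the first bullet point above.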
However, some rules of thumb regarding the secondary controller form may be useful. The secondary controller can often be selected as a pure P controller. D-action may be included to improve the bandwidth of the secondary loop. In this case the reference signal should not be differentiated. I-action is probably only beneficial if Gp1 contains significant time delay. Note that by this design of the secondary loop an offset is left to be eliminated by the primary loop, which also handles other slow disturbances. Tuning of cascade controllers is carried out as for simple controllers, one controller at a time. The innermost controller is tuned first. In the case of two controllers in a cascade it is a sensible rule to tune the inner loop without overshoot. If model-based design is applied then the controller form is determined by the process dynamics, and the tuning by the accuracy of the process information, as discussed in Chapter 7. Note that application of perfect control requires precise knowledge of process parameters, except if the process may be
represented by first order dynamics, where a high controller gain may be used to compensate for parameter variations.
Elimination of integrator windup in cascade controllers requires special consideration. The difficulties may be illustrated using Figure 6.3. Integral windup in the secondary loop may be eliminated as described in Chapter 5. This method may, however, not be applied in the primary loop unless its output signal Yr2 also saturates. In other words, information about saturation has to be transferred from the secondary to the primary loop. This problem may be solved by defining two operation modes for the controller:

Regulation: The reference and measurement are inputs and the controller generates a normal output.

Tracking: The reference, measurement, and actuator limits are inputs. In the tracking mode the controller thus functions as a pure observer, which only ensures that the controller state is correct.

One simple implementation is illustrated below for a PI controller:
Input yr, y, umin, umax
Output u
Parameters k, ti
e = yr - y
v = k*e + i
u = if v < umin then umin
    else if v < umax then v
    else umax
i = i + (u - v) + k*e*h/ti
end
This algorithm is the same as the one used to eliminate integral windup due to actuator saturation in Chapter 5. The only difference is that the actuator saturation limits umin and umax are considered as inputs to the controller. The controller is switched to the tracking mode by setting these limits equal to the measured signal in the secondary loop when the secondary loop saturates.
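The algorithm above can be rendered in Python as a sketch; the sampling period h is passed in explicitly, and the integral update follows the pseudocode.

```python
# PI controller with the saturation limits umin, umax as inputs, so that an
# outer supervisor can put it in tracking mode.  Gains are illustrative.
def make_pi(k, ti, h):
    state = {"i": 0.0}
    def pi(yr, y, umin, umax):
        e = yr - y
        v = k * e + state["i"]
        u = min(max(v, umin), umax)
        # the (u - v) term winds the integral state back when the output
        # saturates, exactly as in the pseudocode above
        state["i"] += (u - v) + k * e * h / ti
        return u
    return pi

pi = make_pi(k=1.0, ti=1.0, h=1.0)
print(pi(1.0, 0.0, -10.0, 10.0))  # -> 1.0
```

Setting umin = umax equal to the measured secondary signal forces u to that value and thereby switches the controller to tracking mode.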
Figure 6.5 Block diagram for two implementations of ratio PI control. The desired ratio of y/yr is r. The symbol × designates multiplication and ÷ designates division.
b) In scheme b) the denominator variable yr is multiplied by the desired ratio, thus forming the desired value for y. The deviation is formed as

eb = r yr - y   (6.7)
In both cases the controller output is generated from the error signal with a conventional PI (or PID) controller function. It follows from the two error equations that if the error is zero then

r = y/yr   (6.8)

Thus both schemes fulfill their purpose of keeping the ratio constant at steady state. The main difference between the two schemes is seen by finding the gain of the two error signals as yr varies. These gains may be determined by finding the partial derivative of the error with respect to the varying variable yr:

ka = ∂ea/∂yr = -y/yr²   (6.9)

kb = ∂eb/∂yr = r   (6.10)

Thus the gain is variable in the a) scheme but constant in the b) scheme. Both schemes are commonly used in practice, but the b) scheme is applied most often due to the constant gain. A ratio controller can be built from a conventional PI or PID controller combined with a summation and a multiplier. These are often supplied as one unit, since this control function is so common, and the controller can then be switched from a ratio controller to a normal controller.

Example 6.1 (Air/fuel ratio to a boiler or a furnace)
When controlling a boiler or a furnace it is desirable to maintain a constant ratio between air and fuel. Air is supplied in excess to ensure complete combustion of the fuel. The greater the air excess, the larger is the energy loss in the stack gases. Therefore maintaining an optimal air excess is important for economical and environmental reasons. A configuration to achieve this goal using ratio control is shown in Figure 6.6. The fuel flow is controlled by a PI controller. The air flow is controlled by a ratio controller where the fuel flow is the ratio variable. This configuration also includes a bias b, which for safety reasons ensures an air flow even when no fuel flow is available. Evaluation of the control error gives

e = r Ff + b - Fair   (6.11)
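The two error computations and the biased air/fuel error (6.11) are one-liners. The form of the scheme a) error, y/yr - r, is a reconstruction consistent with the variable gain in (6.9), and the flow values in the checks are illustrative.

```python
# Error signals for ratio control; scheme a) is reconstructed as y/yr - r.
def error_a(r, y, yr):
    return y / yr - r            # gain with respect to yr varies as -y/yr**2

def error_b(r, y, yr):
    return r * yr - y            # (6.7): constant gain r with respect to yr

def air_fuel_error(r, F_fuel, F_air, b):
    return r * F_fuel + b - F_air   # (6.11), with safety bias b

print(error_b(2.0, 4.0, 2.0))                # -> 0.0 when the ratio is satisfied
print(air_fuel_error(10.0, 1.0, 10.5, 0.5))  # -> 0.0
```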
Figure 6.6 Configuration for air/fuel ratio control to a boiler or a furnace.
Figure 6.7 Block diagram for feedforward control from a measurable disturbance v, with feedforward transfer function Gf, process transfer function Gp, and disturbance transfer function Gv.
6.4 Feedforward
The previous chapters showed some of the advantages of feedback. One obvious limitation of feedback is that corrective actions cannot be taken before the influence of a disturbance is seen at the output of the system. In many process control applications it is possible to measure the disturbances coming into the system. Typical examples are concentration or temperature changes in the feed to a chemical reactor or distillation column. Measurements of these disturbances can be used to make control actions before anything is noticed in the output of the process. It is then possible to make the system respond more quickly than if only feedback is used. Consider the process in Figure 6.7. Assume that the disturbance can be measured. The objective is to determine the transfer function Gf such that it is possible to reduce the influence of the measurable disturbance v. This is called feedforward control. With U = Gf V the system is described by

Y = Gp U + Gv V = (Gp Gf + Gv) V   (6.12)

The influence of the disturbance is thus eliminated if the feedforward controller can be chosen as Gf = -Gv/Gp.
Feedforward is in many cases very effective. The great advantage is that rapid disturbances can be eliminated by making corrections before their influence is seen in the process output. Feedforward can be used on linear as well as nonlinear systems. Feedforward can be regarded as a built-in process model, which will increase the performance of the system. The main drawback with feedforward is that it is necessary to have good process models. The feedforward controller is an open loop compensation and there is no feedback to compensate for errors in the process model. It is therefore common to combine feedforward and feedback. This usually gives a very effective way to solve a control problem. Rapid disturbances are eliminated by the feedforward part. The feedback takes care of unmeasurable disturbances and possible errors in the feedforward term. To make an effective feedforward it is necessary that the control signal can be sufficiently large, i.e. that the controller has sufficiently large control authority. If the ratio of the time constants in Gp and Gv is too large then the control signals become large. To illustrate this, assume that the transfer functions in Figure 6.7 are

Gp = kp/(1 + Tp s)    Gv = kv/(1 + Tv s)

which gives the feedforward controller

Gf = -(kv/kp) (1 + Tp s)/(1 + Tv s)
The high frequency gain is kv Tp/(kp Tv). If this ratio is large then the feedforward controller has high gain at high frequencies. This will lead to difficulties if the measurement is corrupted by high frequency noise. Also, if the pole excess (i.e. the difference between the number of poles and zeros in the transfer function) is larger in Gp than in Gv, the feedforward controller will involve derivatives of the measured signal. One way to avoid the high frequency problem with feedforward is to use only a static feedforward controller, i.e. to use Gf(0). This will also reduce the need for accurate models.

Example 6.2 (Feedforward)
Consider the tank system in Figure 6.8. The level is controlled using the input flow. The output flow is a measurable disturbance. It is assumed that there is a calibration error (bias) in the measurement of the output flow. The tank is described by an integrator, i.e. Gp = 1/s, and the valve as a first order system

Gvalve(s) = 1/(s + 1)

The feedforward controller is in this case given by

Gf(s) = s + 1
The controller will thus contain a derivative. The derivative can be approximated by the high pass filter

s/(1 + s T_d)

The feedforward is implemented as

G_f1(s) = (s + 1)/(1 + s T_d)    (6.13)

or as the static feedforward controller
G_f1(0) = 1    (6.14)
Figure 6.9 shows the response to a step disturbance when T_d = 0.1 and when (6.13) and (6.14) are used. The reference value is zero. Also the response without feedforward is shown. The response is greatly improved by including the dynamics in the feedforward controller. A bias in the measurement will cause a drift in the output, since the controller does not have any feedback from the output level. Assume that the controller is extended with a feedback part. Figure 6.10 shows the output and input when both feedforward and feedback are used. The example shows the advantage of combining feedforward with feedback. The reference signal can also be regarded as a measurable disturbance, which can be used for feedforward. Figure 6.11 shows one way to do this. A reference model is included in the block diagram. The purpose of the model is to give
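The behavior in Example 6.2 can be illustrated with a short simulation. The sketch below (forward Euler, plain Python) implements the tank, the valve, and the feedforward controllers (6.13) and (6.14); the step size and horizon are illustration choices only, and the disturbance is a unit step in the measured output flow with no bias.

```python
def simulate(ff="dynamic", Td=0.1, dt=1e-3, t_end=5.0):
    """Level response of the tank in Figure 6.8 under pure feedforward
    (no feedback): dh/dt = q_in - q_out with valve G_valve = 1/(s+1).
    The measured outflow (unit step, no bias) is fed forward through the
    dynamic controller (6.13) or the static controller (6.14)."""
    h, q_in, w = 0.0, 0.0, 0.0      # level, valve flow, filter state
    q_out = 1.0                     # step disturbance at t = 0
    for _ in range(int(t_end / dt)):
        if ff == "dynamic":         # G_f1(s) = (s+1)/(1+s*Td), eq. (6.13)
            u = q_out / Td + w      # realization: 1/Td + (1-1/Td)/(1+s*Td)
            w += dt * (-w + (1.0 - 1.0 / Td) * q_out) / Td
        elif ff == "static":        # G_f1(0) = 1, eq. (6.14)
            u = q_out
        else:                       # no feedforward at all
            u = 0.0
        q_in += dt * (u - q_in)     # valve dynamics
        h += dt * (q_in - q_out)    # tank balance
    return h

for mode in ("dynamic", "static", "none"):
    print(mode, round(simulate(ff=mode), 3))
```

With the dynamic feedforward the level error settles near −T_d, with the static feedforward near −1, and without feedforward it drifts without bound, in line with the curves of Figure 6.9.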
Figure 6.9 Feedforward control of the process in Figure 6.8 using (6.13) and (6.14) with T_d = 0.1, when the disturbance is a unit step. The response without feedforward, i.e. the open loop response, is also shown. (a) No bias. (b) Bias = −0.05.

Figure 6.10 Feedforward and feedback control of the process in Figure 6.8 using (6.13) with T_d = 0.1 and bias = −0.05. (a) Output with and without feedforward. (b) Input with feedforward. The reference signal is a unit step at t = 0 and the load disturbance is a unit step at t = 25.
a reference trajectory for the system to follow. The feedforward part of the controller will speed up the response to changes in the reference signal. The feedback controller corrects errors between the reference model and the true output from the system. The feedback then accounts for unmeasurable disturbances.
Local feedback
Gain scheduling
a set of controllers may be designed to cover the expected range of operating conditions. The controller parameters are then tabulated as a function of the operating conditions. The applied controller parameters are found through interpolation in the tables or by development of a functional relationship. These smoothing features are applied to facilitate the transfer between controller parameters as the operating conditions vary, a consideration which is quite important in practical applications. Development of functional relationships between controller parameters and indicators of operating conditions can be facilitated by using models in normalized form as derived in Chapter 2. Gain scheduling is discussed in more detail in the next section.
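The table-based scheduling described above can be sketched as follows. The schedule entries are hypothetical numbers chosen only to illustrate the interpolation; in practice they come from controller designs at the tabulated operating conditions.

```python
from bisect import bisect_right

# Hypothetical schedule: controller gain and reset time tabulated at a few
# production rates (the operating condition); numbers are illustration only.
rates = [10.0, 20.0, 40.0, 80.0]
K_tab = [0.8, 0.6, 0.4, 0.25]
Ti_tab = [4.0, 2.0, 1.0, 0.5]

def interp(x, xs, ys):
    """Piecewise-linear interpolation, clamped at the table ends."""
    if x <= xs[0]:
        return ys[0]
    if x >= xs[-1]:
        return ys[-1]
    i = bisect_right(xs, x) - 1
    t = (x - xs[i]) / (xs[i + 1] - xs[i])
    return ys[i] + t * (ys[i + 1] - ys[i])

def scheduled_parameters(rate):
    """Look up the controller parameters for the current operating point.
    Linear interpolation smooths the transfer between table entries."""
    return interp(rate, rates, K_tab), interp(rate, rates, Ti_tab)

print(scheduled_parameters(30.0))  # halfway between the 20 and 40 entries
```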
Nonlinear feedback
In some cases it may be possible to apply a feedback controller with a nonlinear function to compensate for parameter variations. In its simplest form this nonlinearity may compensate for the static nonlinearity only. An example would be the application of a specific valve characteristic to compensate for a process nonlinearity. This can be regarded as a special case of gain scheduling. Examples of nonlinear compensation are found in Section 6.6.

Adaptive feedback

If the process dynamics cannot be related in a simple way to the operating characteristics, then adaptive feedback or adaptive control may be applied. In an adaptive controller the parameters of the controller are modified to take changes in the process or the disturbances into account. A typical block diagram for an adaptive controller is shown in Figure 6.12. In the block `Estimation' the parameters of a model of the process are estimated recursively. The estimates are updated each time new measurements are available. The estimated parameters are then used in the block `Design' to determine the parameters of the controller. To do the design it is also necessary to give specifications for the desired response of the closed loop system. The process controlled by an adaptive controller will have two feedback loops: one fast loop, which consists of the conventional controller, and a second loop which is, in general, slower and performs the estimation and the design of the controller. The scheme in Figure 6.12 is sometimes called indirect adaptive control, since the controller parameters are obtained indirectly via the estimated process model. It is also possible to design an adaptive controller where the controller parameters are estimated directly. This is called direct adaptive control. The block `Design' is
then eliminated. The direct adaptive controller is obtained by reparameterizing the process model such that the controller parameters are introduced. To do so the specifications of the closed loop system are used. The whole area of adaptive control has led to a significant theoretical development, especially of single loop adaptive control, as described in Åström and Wittenmark (1990). Adaptive single loop controllers are now commercially available and in use in tens of thousands of industrial loops.
It is sometimes possible to find auxiliary variables that correlate well with the changes in process dynamics. It is then possible to reduce the effects of parameter variations simply by changing the parameters of the controller as functions of the auxiliary variables, see Figure 6.13. Gain scheduling can thus be viewed as a feedback control system in which the feedback gains are adjusted using feedforward compensation. The concept of gain scheduling originated in connection with the development of flight control systems. In this application the Mach number and the dynamic pressure are measured by air data sensors and used as scheduling variables. A main problem in the design of systems with gain scheduling is to find suitable scheduling variables. This is normally done based on knowledge of the physics of the system. In process control the production rate can often be chosen as a scheduling variable, since time constants and time delays are often inversely proportional to the production rate. When scheduling variables have been determined, the controller parameters are calculated at a number of operating conditions, using some suitable design method. The controller is thus tuned or calibrated for each operating condition. The stability and performance of the system are typically evaluated by simulation; particular attention is given to the transition between different operating conditions. The number of entries in the scheduling tables is increased if necessary. Notice, however, that there is no feedback from the performance of the closed-loop system to the controller parameters. It is sometimes possible to obtain gain schedules by introducing nonlinear transformations in such a way that the transformed system does not depend on the operating conditions. The auxiliary measurements are used together with the process measurements to calculate the transformed variables. The transformed control variable is then calculated and retransformed before it is applied to the process.
The controller thus obtained can be regarded as composed of two nonlinear transformations with a linear controller in between. Sometimes the transformation is based on variables obtained indirectly through state estimation. One drawback of gain scheduling is that it is an open-loop compensation. There is no feedback to compensate for an incorrect schedule. Another drawback of gain scheduling is that the design may be time-consuming. The controller parameters must be determined for many operating conditions, and the performance must be checked by extensive simulations. This difficulty is partly avoided if scheduling is based on nonlinear transformations. Gain scheduling has the advantage that the controller parameters can be changed very quickly in response to process changes. Since no estimation of parameters occurs, the limiting factors depend on how quickly the auxiliary measurements respond to process changes. It is difficult to give general rules for designing gain scheduling controllers. The key question is to determine the variables that can be used as scheduling variables. It is clear that these auxiliary signals must reflect the operating conditions of the plant. Ideally there should be simple expressions for how the controller parameters relate to the scheduling variables. It is thus necessary to have good insight into the dynamics of the process if gain scheduling is to be used. The following general ideas can be useful:
Figure 6.15 Step responses for PI control of the simple flow loop in Figure 6.14 at different operating conditions. The parameters of the PI controller are K = 0.15, Ti = 1. Further, f(u) = u^4 and G0(s) = 1/(s + 1)^3.
Linearization of nonlinear actuators
Gain scheduling based on measurements of auxiliary variables
Time scaling based on production rate
Nonlinear transformations

The ideas are best illustrated by some examples.

Example 6.3|Nonlinear actuator
A simple feedback loop with a nonlinear valve is shown in Figure 6.14. Let the static valve characteristic be

v = f(u) = u^4,   u >= 0

Linearizing the system around a steady-state operating point shows that the loop gain is proportional to f'(u). It then follows that the system can perform well at one operating point and poorly at another. This is illustrated by the step responses in Figure 6.15. One way to handle this type of problem is to feed the control signal u through an inverse of the nonlinearity. It is often sufficient to use a fairly crude approximation.
Figure 6.17 The valve characteristic f(u) = u^4 approximated by two straight lines, giving the approximation f̂.
Let f̂^{-1} be an approximation of the inverse of the valve characteristic. To compensate for the nonlinearity, the output of the controller is fed through this function before it is applied to the valve, see Figure 6.16. This gives the relation

v = f(u) = f(f̂^{-1}(c))

where c is the output of the PI controller. The function f(f̂^{-1}(c)) should have less variation in gain than f. If f̂^{-1} is the exact inverse, then v = c. Assume that f(u) = u^4 is approximated by two straight lines as shown in Figure 6.17. Figure 6.18 shows step changes in the reference signal at three different operating conditions when the approximation of the inverse of the valve characteristic is used between the controller and the valve. Compare with the uncompensated system in Figure 6.15. There is a considerable improvement in the performance of the closed-loop system. By improving the inverse it is possible to make the process even more insensitive to the nonlinearity of the valve. The example above shows a simple and very useful idea to compensate for known static nonlinearities. In practice it is often sufficient to approximate the nonlinearity by a few line segments. There are several commercial single-loop controllers that can make this kind of compensation. DDC packages usually include functions that can be used to implement nonlinearities. The resulting controller is nonlinear and should (in its basic form) not be regarded as gain scheduling. In the example there is no measurement of any operating condition apart from the controller output. In other situations the nonlinearity is determined from measurements of several variables. Gain scheduling based on an auxiliary signal is illustrated in the following example.
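A sketch of this compensation in Python. The breakpoint at u = 1 for the two-line approximation is an assumption (the exact breakpoints of Figure 6.17 are not given here); the point is that the gain of the compensated valve f(f̂^{-1}(·)) varies far less with the operating point than the gain of f itself.

```python
def f(u):
    """Valve characteristic v = u**4, u >= 0."""
    return u ** 4

def f_hat_inv(c):
    """Approximate inverse of f built from two straight lines through
    (0,0)-(1,1) and (1,1)-(2,16); the breakpoint at u = 1 is an assumed
    value, in the spirit of the crude approximation in Figure 6.17."""
    return c if c <= 1.0 else 1.0 + (c - 1.0) / 15.0

def slope(g, x, eps=1e-4):
    """Numerical forward-difference slope of g at x."""
    return (g(x + eps) - g(x)) / eps

pts = [0.3, 1.0, 3.0, 8.0]                       # operating points
raw = [slope(f, c) for c in pts]                 # gain of f alone
comp = [slope(lambda c: f(f_hat_inv(c)), c) for c in pts]
print("slope of f:           ", [round(v, 3) for v in raw])
print("slope of f(f_hat_inv):", [round(v, 3) for v in comp])
```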
Figure 6.18 Simulation of the system in Example 6.3 with nonlinear valve and compensation using an approximation of the valve characteristic.
Consider a tank where the cross section A varies with height h. The model is
A(h) dh/dt = q_i − a√(2gh)

where q_i is the input flow and a is the cross section of the outlet pipe. Let q_i be the input and h the output of the system. The linearized model at an operating point, q_in^0 and h_0, is given by the transfer function

G(s) = β/(s + α)

where

β = 1/A(h_0),   α = a√(2g h_0)/(2A(h_0)h_0) = q_in^0/(2A(h_0)h_0)

A good PI control of the tank is given by

u(t) = K ( e(t) + (1/T_i) ∫ e(τ) dτ )

where

K = (2ζω − α)/β

and

T_i = (2ζω − α)/ω²
This gives a closed-loop system with natural frequency ω and relative damping ζ. Introducing the expressions for α and β gives the following gain schedule:

K = 2ζω A(h_0) − q_in^0/(2h_0)

T_i = 2ζ/ω − q_in^0/(2A(h_0)h_0 ω²)
The numerical values are often such that α ≪ 2ζω. The schedule can then be simplified to

K = 2ζω A(h_0),   T_i = 2ζ/ω

In this case it is thus sufficient to make the gain proportional to the cross section of the tank. The example above illustrates that it can sometimes be sufficient to measure one or two variables in the process and use them as inputs to the gain schedule. Often it is not as easy as in this example to determine the controller parameters as functions of the measured variables. The design of the controller must then be redone for different working points of the process. Some care must also be exercised if the measured signals are noisy. They may have to be filtered properly before they are used as scheduling variables.
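The schedule can be coded directly from the formulas above. The tank geometry, the design parameters ζ and ω, and the operating data below are hypothetical illustration values only.

```python
import math

def tank_pi_schedule(h0, A, q0, zeta=0.7, omega=0.5):
    """Gain schedule for the level controller of the varying-area tank.
    Returns both the full expressions and the simplified schedule that is
    valid when alpha << 2*zeta*omega. zeta and omega are design choices."""
    A0 = A(h0)                                   # cross section at level h0
    K_full = 2 * zeta * omega * A0 - q0 / (2 * h0)
    Ti_full = 2 * zeta / omega - q0 / (2 * A0 * h0 * omega ** 2)
    K_simple = 2 * zeta * omega * A0             # gain ~ cross section
    Ti_simple = 2 * zeta / omega                 # constant reset time
    return (K_full, Ti_full), (K_simple, Ti_simple)

# Hypothetical conical tank: A(h) = pi*(h/2)**2, operating at h0 = 2, q0 = 0.1
A = lambda h: math.pi * (h / 2) ** 2
print(tank_pi_schedule(h0=2.0, A=A, q0=0.1))
```

For these numbers the full and simplified schedules are close, illustrating why the simplified form is often sufficient.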
6.7 Conclusions
Linear and nonlinear couplings of simple controllers have been treated in this chapter. By expanding the controller structure to include several measurements and loops it is possible to improve the performance of the closed loop system. Cascade, ratio, and feedforward control are very useful in process control. Gain scheduling and nonlinear compensation are good ways to compensate for known nonlinearities. With such schemes the controller adapts quickly to changing conditions. One drawback of the method is that the design may be time-consuming. Another drawback is that the controller parameters are changed in open loop, without feedback from the performance of the closed-loop system. This makes the method impossible to use if the dynamics of the process or the disturbances are not known accurately enough.
7.1 Introduction
In the previous chapters we have discussed how to design simple controllers, such as PID-controllers. We have also used the simple controllers in different combinations: cascade, ratio, etc. In this chapter we will see how the performance of the control can be improved by including more process knowledge in the controllers. The main idea is to base the calculation of the control signal on a model of the process. Controllers of this type are called model based controllers. Dead times are common in many process control applications. In Section 7.1 we show how the effect of dead times can be reduced by predicting the output of the process using a process model. General model based controllers are considered in Section 7.2. When using cascade controllers we found that it can be advantageous to use several measurement signals. This concept is generalized in Section 7.3, where we introduce state feedback. The idea is to use all the states of the process for feedback. This makes it possible to place the poles of the closed loop system arbitrarily. However, in most practical situations it is not possible to measure all the states. Using a model of the process we can instead estimate them from the available measurements. This leads to the concept of estimators or observers. The feedback is then based on the estimated states.
process that is being controlled. One should try to minimize the time delays in the process by making a good physical process design. There are, however, many situations where time delays cannot be avoided. From a dynamical point of view there is little difference between a time delay and a series connection of many first order subprocesses. We have the relation

lim_{n→∞} ( 1/(1 + sT/n) )^n = e^{-sT}

It is therefore possible to approximate many industrial processes by a transfer function of the form

G(s) = K e^{-sT} / ((1 + sT_1)(1 + sT_2))    (7.1)
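The limit above can be checked numerically. The sketch below evaluates the chain of n identical first order lags against the pure delay e^{-sT} at the arbitrary point s = i with T = 1; the chosen values of n are illustration choices.

```python
import cmath

# Check of lim_{n->inf} (1/(1 + sT/n))^n = e^{-sT}: a series connection of
# n identical first order lags approaches a pure time delay.
s, T = 1j, 1.0
exact = cmath.exp(-s * T)
for n in (1, 10, 100, 1000):
    approx = (1.0 / (1.0 + s * T / n)) ** n
    print(n, abs(approx - exact))   # the error shrinks as n grows
```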
This is incidentally the approximation that Ziegler and Nichols used when deriving their rules of thumb for tuning of PID-controllers. A process of the form (7.1) is difficult to control using a PID-controller when T is of about the same magnitude as or longer than T_1 + T_2. We will in this section derive controllers that make it possible to improve the control of processes with time delays. The main idea was introduced by O. J. M. Smith in 1957. These controllers are often called Smith predictors or Smith controllers. The main limitation of Smith predictors is that they need a model of the process including the time delay. This model is incorporated in the controller. Using analog equipment it is difficult to implement long time delays. This implied that Smith predictors had a very limited impact until the controllers could be implemented in digital form. Using a computer it is easy to implement time delays. One limitation is, however, that the Smith predictor cannot be used for unstable processes, as will be discussed later in this section. The following example shows the difficulties of using PID-controllers for control of processes with time delays.

Example 7.1|PI-control of a time delay process
Consider the process
dy(t)/dt = −0.5 y(t) + u(t − 4)

This is a first order process with the transfer function

G(s) = 2 e^{-4s} / (1 + 2s)    (7.2)
Equation (7.2) can, for instance, be a model of a paper machine, where the time delay is caused by the transport of the paper through the machine. Figure 7.1 shows the performance when a PI-controller is used. The controller has a gain of 0.2 and a reset time of 2.6 time units. It is necessary to have a low gain to keep down the oscillations in the response. Both the step response and the response to an input load disturbance are shown. If the process to be controlled has a dead time there will always be a delay in the response to the input signal. The behavior can, however, be improved compared with a conventional PI-controller.
Figure 7.1 Output and control signal when the process (7.2) is controlled by a PI-controller. The reference signal is changed at t = 0 and an input load disturbance occurs at t = 30.
Figure 7.2 The closed loop system when there is no time delay in the process.
Smith predictor
In connection with the on-off control in Chapter 4 it was found that prediction of the error signal could be very useful. The same idea is used in the Smith predictor. Assume that the controlled process has the transfer function
G_p(s) = e^{-sT} G_1(s)

First we design a controller for the process under the assumption that there is no time delay, i.e. T = 0. Denote the obtained controller G_r(s). The closed loop system without time delay is then given by

G_c(s) = G_1(s)G_r(s) / (1 + G_1(s)G_r(s))    (7.3)

See Figure 7.2. Now let the process with time delay be controlled using the controller G_rd(s). This gives the closed loop system

G_cd(s) = G_p(s)G_rd(s) / (1 + G_p(s)G_rd(s)) = e^{-sT} G_1(s)G_rd(s) / (1 + e^{-sT} G_1(s)G_rd(s))    (7.4)
Figure 7.3 The controller G_rd interpreted as a feedback around G_r through the block (1 − e^{-sT})G_1.
Due to the time delay we cannot choose G_rd(s) such that G_cd(s) = G_c(s). It is, however, possible to choose the controller such that

e^{-sT} G_1(s)G_rd(s) / (1 + e^{-sT} G_1(s)G_rd(s)) = e^{-sT} G_1(s)G_r(s) / (1 + G_1(s)G_r(s))

Solving for G_rd gives

G_rd(s) = G_r(s) / (1 + (1 − e^{-sT}) G_1(s)G_r(s))    (7.5)
Notice that the controller given by (7.5) includes a time delay and a model of the process. The controller G_rd can be interpreted as a feedback around the controller G_r, see Figure 7.3. We may write the control signal as

U(s) = G_r(s)(Y_r(s) − Y(s)) − G_r(s)G_1(s)U(s) + G_r(s) e^{-sT} G_1(s)U(s)    (7.6)
The first part of the controller (7.6) is the same as if the process did not have any time delay. The two last terms are corrections that compensate for the time delay in the process. The term e^{sT} Y_m(s) = G_1(s)U(s) can be interpreted as a prediction of the process output. Notice that this signal is realizable. The controller must store the input signals used during the delay T, i.e. the time signal corresponding to e^{-sT} U(s). This makes it difficult to make an analog implementation of the Smith predictor for processes with long time delays. Using computers, where the control signal is constant over the sampling intervals, it is only necessary to store some sampled past values of the control signal. The Smith predictor has thus become very useful after the advent of computers for process control.
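The predictor structure can be sketched in simulation form. The code below (forward Euler, with the delay implemented as a circular buffer of stored past inputs, exactly as described above) controls the process (7.2) with the PI settings of the delay-free design used in the text (gain 1, reset time 1). The step size and horizon are illustration choices, and no load disturbance or model error is included.

```python
# Smith predictor for the process (7.2): dy/dt = -0.5*y + u(t-4),
# i.e. G(s) = 2 e^{-4s}/(1+2s). The PI controller is designed for the
# delay-free part G1(s) = 2/(1+2s).
dt, t_end, T_delay = 0.01, 30.0, 4.0
n, n_d = int(t_end / dt), int(T_delay / dt)
K, Ti = 1.0, 1.0
yr = 1.0                      # reference step at t = 0

y = 0.0                       # process output
yhat = 0.0                    # delay-free model output, the prediction G1*U
integ = 0.0                   # integral state of the PI controller
u_buf = [0.0] * n_d           # circular buffer: inputs over the delay T
yhat_buf = [0.0] * n_d        # circular buffer: delayed model output ym

ys = []
for k in range(n):
    u_del = u_buf[k % n_d]    # u(t - T)
    ym = yhat_buf[k % n_d]    # ym(t), the model output delayed by T
    # Predictor error: the prediction yhat replaces the delayed output;
    # the term (y - ym) is nonzero only for model errors and disturbances.
    e = yr - yhat - (y - ym)
    u = K * (e + integ / Ti)
    integ += dt * e
    u_buf[k % n_d], yhat_buf[k % n_d] = u, yhat   # store before overwriting
    y += dt * (-0.5 * y + u_del)     # process, forward Euler
    yhat += dt * (-0.5 * yhat + u)   # delay-free model
    ys.append(y)

print(round(ys[-1], 2))
```

The output stays at zero during the first delay interval and then follows the fast delay-free design, settling at the reference, in line with Figure 7.4.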
Figure 7.4 Output and control signal when the process (7.2) is controlled using a Smith predictor. The reference signal is changed at t = 0 and an input load disturbance occurs at t = 30.
Consider the process in Example 7.1. A PI-controller is designed for the process without any time delay. The parameters are chosen such that the step response gives the same overshoot as in Figure 7.1. This gives a gain of 1 and a reset time of 1. The controller gain is five times larger than for direct PI-control of the process with time delay. Figure 7.4 shows the performance when the Smith predictor is used. The closed loop system is much faster at changes of the reference value. Also the recovery after the input load disturbance is faster. Compare with Figure 7.1.
We will now make interpretations of the Smith predictor that give a better understanding of the behavior of the closed loop system. Assume that the design is based on an approximate model of the process, e^{-sT̂} Ĝ_1(s). Introducing input and output disturbances we may redraw the block diagram in Figure 7.3 as Figure 7.5. The part of the diagram denoted `Model' is used to generate the model output y_m(t). There would be no problems with the time delay if the signal at point A were available for measurement. The model is instead used to generate the signal at point B, i.e. we make an indirect measurement of the desired signal. This signal is a prediction of the output T time units ahead. If the model and the process coincide and if there are no disturbances then y(t) = y_m(t) and the feedback is eliminated. The feedback signal is thus only used to compensate for model errors and disturbances. For a perfect model and no disturbances we get the system in Figure 7.6, which is an open loop system.
Figure 7.5 Block diagram of the Smith predictor based on an approximate process model.
Figure 7.6 The system when the outer loop in Figure 7.5 is removed.
Drawbacks
The Smith predictor has some serious drawbacks. Consider the block diagram in Figure 7.5 and assume that the model and the process are identical. The closed loop system is then described by

Y = e^{-sT} G_1G_r/(1 + G_1G_r) · Y_r + [1 − e^{-sT} G_1G_r/(1 + G_1G_r)] (−N + e^{-sT} G_1 L)    (7.7)

Y_m = e^{-sT} G_1G_r/(1 + G_1G_r) · Y_r − e^{-sT} G_1G_r/(1 + G_1G_r) · (−N + e^{-sT} G_1 L)
The response to the reference value is as desired.

Influence of disturbances
From (7.7) it follows that the transfer function from the measurement noise N to the output is given by

G_N = 1 − e^{-sT} G_1G_r/(1 + G_1G_r)
For a well designed servo we expect that low frequency disturbances can be eliminated, i.e.

G_N(0) = 0

The transfer function G_N is thus small for low frequencies. Let ω_B be the bandwidth of the closed loop system. For s = iω, where 0 ≤ ω ≤ ω_B, we have approximately

G_1G_r/(1 + G_1G_r) ≈ 1
Further,

e^{-iωT} = −1 when ωT = π

If the time delay is such that ω_B T > π, then there are frequencies for which |G_N| ≈ 2. The feedback will thus amplify the disturbances at certain frequencies. Some of the problems with measurement disturbances can be avoided by replacing the block with −1 in Figure 7.5 by a low pass filter with a low frequency gain of 1.

Unstable processes
The transfer function from the load disturbance L to the output is given by

G_L = [1 − e^{-sT} G_1G_r/(1 + G_1G_r)] e^{-sT} G_1

We can approximate this transfer function by

G_L ≈ (1 − e^{-sT}) e^{-sT} G_1

at least for frequencies up to the bandwidth of the closed loop system. We see that the system essentially behaves like the step response of the open loop system. Compare Figure 7.6. If the system G_1 is unstable then the output will grow to infinity and so will y_m. The Smith predictor can thus not be used for unstable systems. In the case of a single unstable pole of G_1 at the origin there will be a bias in the output and the signal y_m will grow.

Example 7.3|Smith predictor control of an unstable system
Let the process be an integrator with a time delay, i.e.
G_p(s) = e^{-2s}/s
Let the controller G_r be a PI-controller with gain 1 and reset time 1. Figure 7.7 shows the influence of an input load disturbance of magnitude 0.1. Both the model output y_m and the predicted signal at point B in Figure 7.5 grow in magnitude. The unlimited growth of y_m makes the controller impossible to use in practice when the process is unstable. The limitation that the open loop system must be stable is important to observe. One way to circumvent the problem is to first stabilize the open loop system using a simple feedback and then design the dead time compensation. Using predictive control based on the Smith predictor makes it possible to substantially improve the performance when the process has a time delay. A controller is first designed for the process assuming that the time delay is zero. The obtained controller is then easily modified as shown in Figure 7.3. It is, however, important to know that the Smith predictor can amplify certain frequencies of the disturbances. Further, it is required that the open loop system is stable.
Summary
Figure 7.7 Output, model output, and reference (upper plot) and input (lower plot) for Smith predictor control of the unstable process in Example 7.3.
Figure 7.8 Open loop control. (a) Feedforward compensation. (b) Modification of (a), but with the same transfer function from y_r to y.
U(s) = Ĝ_p(s)^{-1} Y_r(s)    (7.8)
This will give perfect following of the reference signal yr for all frequencies. One way to modify the block diagram in Figure 7.8 a) is shown in Figure
Figure 7.9 Feedback control. (a) Simple feedback system. (b) Modification of (a), but with the same transfer function from y_r to y.
7.8 b). Provided Ĝ_p = G_p, the two block diagrams have the same transfer functions from y_r to y. There are many practical limitations that make it impossible to use (7.8). First, the controller will be unstable if the process has right half plane zeros. Secondly, a time delay in the process will give the unrealizable term e^{sT} in the controller. Thirdly, the system will not be robust against uncertainties in the process model. We will therefore later modify (7.8) to make a more practical and robust controller. This will give "perfect" control only in the low frequency range. We may also start with the closed loop structure in Figure 7.9 a). The controller G_rf can be designed to give the desired closed loop system. This design is more complicated than for the feedforward compensation discussed above. For instance, the stability of the closed loop system must be tested. One way to rearrange the block diagram is shown in Figure 7.9 b). By choosing
G_rf = G_r / (1 − G_r Ĝ_p)
we can reduce the block diagram to Figure 7.10. Notice that the feedback signal can be interpreted as an estimate of the influence of the disturbance. This implies that both the open loop and the feedback structures can be transformed into this structure, which is called the internal model structure. We will now analyze some of its properties. First it is seen that if there are no disturbances, d = 0, and if Ĝ_p = G_p, then there is no feedback signal and the system is reduced to the block diagram in Figure 7.8 a). However, when there is a mismatch between the model and the true process, or if there are disturbances, then the feedback loop is active and we can reduce the sensitivity compared with the feedforward structure. Special design methods have been developed for internal model control. Special emphasis has been given to stability and robustness of the closed loop system. In these lecture notes we will only discuss some simple cases of IMC.
Figure 7.10 The internal model control (IMC) structure.
Properties of IMC
The block diagram of the IMC is given in Figure 7.10. It follows that

Y_r(s) − Y(s) = (1 − Ĝ_p G_r)/(1 + (G_p − Ĝ_p)G_r) · (Y_r(s) − D(s))    (7.9)

U(s) = G_r/(1 + (G_p − Ĝ_p)G_r) · (Y_r(s) − D(s))    (7.10)

Y(s) = 1/(1 + (G_p − Ĝ_p)G_r) · [G_p G_r Y_r(s) + (1 − Ĝ_p G_r)D(s)]    (7.11)

where E(s) = Y_r(s) − Y(s). The stability of the IMC is given by the following theorem.

Theorem 7.1|Stability of IMC
Assume that Ĝ_p = G_p in the internal model structure. The total system is stable if and only if G_r and G_p are stable.

Proof: A necessary and sufficient condition for stability is that the characteristic equations of (7.10) and (7.11) have roots with negative real parts. With Ĝ_p = G_p this gives the conditions

1/G_r = 0

1/(G_p G_r) = 0

having roots with negative real parts only. This implies that all poles of both the process and the controller must have negative real parts.

Remark. If the process is unstable then it can be stabilized with a conventional feedback controller before the IMC is used.

We will first assume that the process is stable and factorize the process model as

Ĝ_p(s) = Ĝ_pa(s) Ĝ_pm(s)    (7.12)

where

|Ĝ_pa(iω)| = 1 for all ω
Ĝ_pa(s) is called an allpass transfer function. In general it has the form

Ĝ_pa(s) = e^{-sT} ∏_i (−s + z_i)/(s + z̄_i)

where z_i are the right half plane zeros of Ĝ_p and z̄_i is the complex conjugate of z_i. The allpass function Ĝ_pa contains the nonminimum phase parts of Ĝ_p. The transfer function Ĝ_pm(s) is thus minimum phase and stable (since Ĝ_p(s) is assumed to be stable). Further we factorize V(s) as
V(s) = V_a(s)V_m(s)    (7.13)
in the same way as Ĝ_p in (7.12), where V(s) = Y_r(s) or V(s) = −D(s). I.e. we can look at either the servo problem or the regulator problem. Without proof we give the following theorem:

Theorem 7.2|IMC for stable processes (Morari and Zafiriou (1989))
Assume that Ĝ_p = G_p and that G_p is stable. Factor Ĝ_p and V according to (7.12) and (7.13) respectively. The controller that minimizes the integral square error (ISE)

∫_0^∞ e²(t) dt    (7.14)

for both the servo and the regulator problems is given by
G_r(s) = Ĝ_pm(s)^{-1} V_m^{-1} ( Ĝ_pa^{-1} V_m )_*    (7.15)
where the operator ( )_* denotes that after a partial fraction expansion of the operand all terms involving the poles of Ĝ_pa^{-1} are omitted.
Remark. When the input v is a step we get the controller

G_r(s) = Ĝ_pm(s)^{-1}    (7.16)

and when v is a ramp

G_r(s) = Ĝ_pm(s)^{-1} ( 1 − s dĜ_pa(s)/ds |_{s=0} )

Let the process be
Ĝ_p(s) = (1 − s) e^{-s} / ((s + 2)(s + 3)) = e^{-s} (1 − s)/(1 + s) · (1 + s)/((s + 2)(s + 3)) = Ĝ_pa(s) Ĝ_pm(s)

The controller minimizing the ISE criterion (7.14) is given by

G_r(s) = (s + 2)(s + 3)/(s + 1)
which is stable but contains derivations, since there are more zeros than poles. Notice that the zero in the right half plane of the process is mirrored in the imaginary axis to obtain the controller. Using the controller (7.16) gives the closed loop system

Y(s) = Ĝ_pa Y_r(s) + (1 − Ĝ_pa) D(s)

This implies that Ĝ_pa contains the process dynamics that limit the performance of the closed loop system. The IMC (7.16) is in many practical situations not a feasible controller due to uncertainties. The controller usually has too high gain at high frequencies. Further, the controller may contain derivations, i.e. it is not proper. This is the case if the controller has more zeros than poles, compare Example 7.4. One way to modify the "optimal" controller is to introduce a low pass filter G_m and use the controller

G_r = G_m / Ĝ_pm    (7.17)

where G_m is determined by the designer. To get the correct steady state value it is necessary to choose G_m(0) = 1. The selection of G_m should be based on the model error G_p − Ĝ_p. In practice one may use quite simple filters. For step reference signals or disturbances we may use

G_m(s) = 1 / (T_m s + 1)^n    (7.18)

where n is selected to make G_r proper. The time constant T_m determines the speed of the closed loop system. When the reference signal or the disturbance is a ramp we can use

G_m(s) = (nT_m s + 1) / (T_m s + 1)^n    (7.19)
It is interesting to interpret the IMC in a conventional feedback structure as in Figure 7.9 a). Using (7.10) and (7.11) to eliminate Yr we get in the disturbance free case

    U(s) = [ Gr / (1 - Ĝp Gr) ] E(s) = Grf E(s)    (7.20)

Using the compensation (7.8) gives a controller Grf with infinite gain. The controller (7.17) gives instead

    Grf = Gm / ( Ĝpm (1 - Gm Ĝpa) )    (7.21)
So far we have assumed that the process is stable. Many processes in chemical plants contain integrators. The controller (7.15) can be shown to be "optimal" also when the process contains one integrator. For other cases we refer to the literature given at the end of the chapter.
Summary of IMC

The following steps can be used to design an internal model controller for steps in the reference value:
1. Factor the process model as Ĝp = Ĝpa Ĝpm.
2. Use the controller Gr = Gm / Ĝpm.
3. Choose the structure of Gm according to (7.18) or (7.19).
4. Tune the parameter Tm.

We have here only given the basic ideas behind the IMC. In the literature one can find much more analysis and design rules for internal model controllers.

Example 7.5|IMC design of first order system with time delay
We can now use the IMC approach to design a controller for the process in Example 7.1. The transfer function of the process is

    Gp(s) = 2 e^{-4s} / (1 + 2s)    (7.22)

In this case

    Gpa(s) = e^{-4s}
    Gpm(s) = 2 / (1 + 2s)

Assuming

    Gm(s) = 1 / (1 + Tm s)

gives the controller

    Gr(s) = (1 + 2s) / ( 2 (1 + Tm s) )    (7.23)

Figure 7.11 shows the response to a reference value change and to an input load disturbance when Tm = 1.

The IMC can be considered as a special way of doing the design such that it is easy to predict the influence of the design parameters on the closed loop system. We have introduced the model Gm and the parameter Tm to specify the closed loop performance. After the design we can interpret the resulting controller Grf given by (7.21) as a conventional feedback controller. For simple process models Grf can be translated into a conventional PID-controller.

Example 7.6|PI-interpretation of IMC
Let the process model be

    Ĝp = K̂ / (1 + T̂1 s)

and choose

    Gm = 1 / (1 + Tm s)

This gives the IMC controller

    Gr = (1 + T̂1 s) / ( K̂ (1 + Tm s) )
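The design in Example 7.5 can be tried out in simulation. The sketch below simulates the IMC loop with an Euler discretization, assuming a perfect internal model; the step size, disturbance size, and disturbance time are our own illustrative choices, not values from the text.

```python
import numpy as np

# Euler simulation of the IMC loop for Gp(s) = 2 e^{-4s}/(1 + 2s), eq. (7.22),
# with the controller (7.23).  The internal model is assumed perfect.
dt, t_end, Tm = 0.01, 60.0, 1.0
n = int(t_end / dt)
delay = int(4.0 / dt)              # the 4 s dead time as a sample buffer
u_buf = np.zeros(delay)            # delayed inputs to the process
um_buf = np.zeros(delay)           # delayed inputs to the internal model

x = xm = z = 0.0                   # process, model, and controller states
y_log = np.zeros(n)
for k in range(n):
    yr = 1.0                                   # step in the reference value
    d = 0.2 if k * dt >= 30.0 else 0.0         # input load disturbance
    y, ym = x, xm
    e = yr - (y - ym)              # IMC feeds back only the model mismatch
    # Gr(s) = (1+2s)/(2(1+Tm s)) realized as feedthrough + first order state
    u = e / Tm + (Tm - 2.0) / (2.0 * Tm**2) * z
    z += dt * (e - z / Tm)
    u_old, um_old = u_buf[0], um_buf[0]
    u_buf = np.roll(u_buf, -1); u_buf[-1] = u + d
    um_buf = np.roll(um_buf, -1); um_buf[-1] = u
    x += dt * (-x + 2.0 * u_old) / 2.0         # process dx/dt = (-x + 2u(t-4))/2
    xm += dt * (-xm + 2.0 * um_old) / 2.0      # internal model, same dynamics
    y_log[k] = y

print(round(float(y_log[-1]), 2))  # close to 1.0: no stationary error
```

Since Gm(0) = 1 and the model is perfect, the output settles at the reference both after the setpoint step and after the load disturbance, as predicted by the closed loop relation Y = Ĝpa Gm Yr + (1 - Gm Ĝpa) Gp D.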
Figure 7.11 The output of the process (7.22) when using the IMC controller (7.23) for a reference value change and an input load disturbance when Tm = 1.
Using (7.21) we get

    Grf = (1 + T̂1 s) / ( K̂ Tm s ) = ( T̂1 / (K̂ Tm) ) ( 1 + 1/(T̂1 s) )

which can be interpreted as a PI-controller with the parameters

    Kc = T̂1 / (K̂ Tm)
    Ti = T̂1

In this case the IMC can be regarded as a special way of tuning a PI-controller. This is another way to see that there will be no stationary error when yr or d are steps. The example above can be generalized to other simple forms of the process. Table 7.1, which is adapted from Rivera et al. (1986), gives the connection between the parameters of the IMC and the parameters in the PID-controller

    u(t) = Kc ( e(t) + (1/Ti) ∫₀^t e(τ) dτ + Td de(t)/dt )    (7.24)

The internal model control can be viewed as a generalization of PID-controllers, where we are using a model of the process to improve the performance of the system. The structure of the IMC makes it easier to tune the controller, since it is more transparent how the design parameters are influencing the closed loop system.
Table 7.1 The connection between the IMC parameters and the parameters in the PID-controller (7.24), when n = 1 is used in (7.18) and v is a step.
    Process Gp                    Kc·K            Ti          Td
    K/(1 + sT)                    T/Tm            T           -
    K/((1 + sT1)(1 + sT2))        (T1 + T2)/Tm    T1 + T2     T1·T2/(T1 + T2)
    K/(1 + sT)^2                  2T/Tm           2T          T/2
    K/s                           1/Tm            -           -
    K/(s(1 + sT))                 1/Tm            -           T
Example 7.7|Pole-placement
Let the system (7.25) have the matrices

    A = [ -1  -2       B = [ 1
           1   0 ],          1 ]

Assume that

    L = [ l1  l2 ]    (7.28)

and that the desired closed loop poles are -1 and -2. The desired closed loop characteristic equation is thus

    s^2 + 3s + 2 = 0

We first test the condition (7.27):

    det [ B  AB ] = det [ 1  -3    = 4
                          1   1 ]

The condition in Theorem 7.3 is thus satisfied. Using the L vector (7.28) we get

    A - BL = [ -1 - l1   -2 - l2
                1 - l1    -l2     ]

which has the characteristic equation

    det(sI - A + BL) = s^2 + (1 + l1 + l2)s + 2 - 2l1 + 2l2 = 0

This gives the system of equations

    1 + l1 + l2 = 3
    2 - 2l1 + 2l2 = 2

which can be solved for l1 and l2. The solution

    L = [ 1  1 ]

gives the desired closed loop performance.

Theorem 7.3 implies that the designer may assign any closed loop poles as long as Wc has full rank. The choice of the closed loop poles will, however, influence robustness, magnitudes of control signals etc. Another way to approach the design problem is based on optimization theory. The designer then defines a loss function. One example is a quadratic loss function of the form

    ∫₀^∞ ( x^T(t) Q1 x(t) + u^T(t) Q2 u(t) ) dt    (7.29)
The controller that minimizes (7.29) is called the Linear Quadratic controller, since it minimizes the quadratic loss function for linear systems. The resulting controller is a state space feedback of the form (7.26).
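The calculations in Example 7.7, and a Linear Quadratic design minimizing a loss of the form (7.29), are easy to reproduce numerically. The weights Q1 and Q2 below are our own illustrative choices:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[-1.0, -2.0], [1.0, 0.0]])
B = np.array([[1.0], [1.0]])

# Controllability matrix Wc = [B  AB]: full rank, so poles can be assigned.
Wc = np.hstack([B, A @ B])
print(np.linalg.matrix_rank(Wc))                          # 2

# State feedback from Example 7.7: L = [1 1] gives the poles -1 and -2.
L = np.array([[1.0, 1.0]])
print(np.sort(np.linalg.eigvals(A - B @ L).real))         # [-2. -1.]

# LQ design minimizing (7.29); the weights Q1 = I, Q2 = 1 are our choice.
Q1, Q2 = np.eye(2), np.array([[1.0]])
P = solve_continuous_are(A, B, Q1, Q2)
Llq = np.linalg.solve(Q2, B.T @ P)                        # L = Q2^{-1} B^T P
print(all(np.linalg.eigvals(A - B @ Llq).real < 0))       # closed loop stable
```

The LQ gain is obtained from the algebraic Riccati equation; unlike pole placement, the closed loop poles then follow from the chosen weights instead of being specified directly.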
Observers
The state feedback controller requires that all states are available for measurement. In most situations this is not possible. For instance, when controlling a distillation column this would require that the concentrations at each tray were known. We can instead use the process knowledge to indirectly obtain the unknown states of the system. Assume that the system is described by

    dx/dt = Ax + Bu
    y = Cx    (7.30)

We could try to mimic the performance of the system by creating a model of the process and using the same inputs to the model as to the true process. We could use the model

    dx̂/dt = Ax̂ + Bu
    ŷ = Cx̂

This solution is only feasible if the process is stable, i.e. if the eigenvalues of A have negative real part. In this case the influence of the initial values will decrease after some time and the state x̂ will approach x. To increase the speed of the convergence and to be able to handle unstable systems we introduce a feedback in the model in the following way:

    dx̂/dt = Ax̂ + Bu + K(y - ŷ)
    ŷ = Cx̂    (7.31)

The correction K(y - ŷ) will compensate for the mismatch between the state in the system and the state in the model. The system (7.31) is called a state observer or state estimator. The estimator has the structure in Figure 7.12. Using the definition of ŷ we get

    dx̂/dt = (A - KC)x̂ + Bu + KCx
Introducing the error x̃(t) = x(t) - x̂(t) we get the relation

    dx̃/dt = (A - KC) x̃    (7.32)

If the system (7.32) is stable it follows that the error x̃ will decay with time. The free design parameter is the gain K that we can use to influence the eigenvalues of the matrix A - KC. We have the following theorem:

Theorem 7.4|State observer
Assume that the matrix

    Wo = [ C
           CA
           CA^2
           ...
           CA^{n-1} ]    (7.33)

has rank n. It is then possible to find a matrix K such that the observer in (7.31) has an arbitrary characteristic polynomial.

Remark 1. The matrix Wo is called the observability matrix of the system (7.30).
Remark 2. The theorem is valid also for MIMO systems.

Using a model it is thus possible to reconstruct all the internal states of the system from input-output measurements. The observer or estimator is an indirect way of making measurements. The estimated states can be used as such to get information about the system, but they can also be used for feedback. The controller (7.26) can be replaced by
    u(t) = lr yr(t) - L x̂(t)

The structure of the closed loop system based on observer and state feedback is shown in Figure 7.13.
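The observer (7.31) can be tried numerically. In the sketch below the output matrix C and the observer poles (-4 and -5) are our own illustrative choices for the system of Example 7.7; the gain K was computed by hand from det(sI - A + KC) = s^2 + 9s + 20.

```python
import numpy as np

# State observer (7.31) for the system of Example 7.7, assuming only x1
# is measured.  K places the eigenvalues of A - KC at -4 and -5.
A = np.array([[-1.0, -2.0], [1.0, 0.0]])
B = np.array([[1.0], [1.0]])
C = np.array([[1.0, 0.0]])         # assumed output matrix: y = x1
K = np.array([[8.0], [-9.0]])      # hand-computed observer gain

print(np.sort(np.linalg.eigvals(A - K @ C).real))   # [-5. -4.]

dt = 1e-3
x = np.array([[1.0], [-1.0]])      # true state, unknown to the observer
xh = np.zeros((2, 1))              # estimate, started at the origin
for k in range(4000):              # simulate 4 s with Euler steps
    u = np.sin(0.5 * k * dt)       # any known input signal will do
    y, yh = C @ x, C @ xh
    x = x + dt * (A @ x + B * u)
    xh = xh + dt * (A @ xh + B * u + K @ (y - yh))   # correction K(y - yh)

print(float(np.linalg.norm(x - xh)))   # tiny: the estimation error has decayed
```

The error obeys (7.32) regardless of the input, so x̂ converges to x at a rate set by the chosen eigenvalues of A - KC.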
The rank conditions on Wc and Wo in Theorems 7.3 and 7.4 first appeared as algebraic conditions to be able to solve the pole-placement and the observer problems. The rank conditions have, however, much deeper system theoretical interpretations. We will here only briefly discuss the concepts of controllability and observability.

Controllability
Let the system be described by the nth order state space model

    dx/dt = Ax + Bu
    y = Cx    (7.34)

Is it possible to determine the control signal such that the system can reach any desired state? It will, however, not be possible to halt the system at an arbitrary state. The states may be positions and velocities, and it is not possible to stop at a position and have a non-zero velocity. We thus have to modify our question and ask if it is possible to determine the control signal such that we will pass through any state of the system. If this is possible we say that the system is controllable. A test for controllability is whether the controllability matrix Wc defined in (7.27) has rank n. The controllability depends on the internal coupling between the states and on how the control signal influences the states, i.e. on the A and B matrices.

Example 7.8|Parallel coupling of identical systems
Consider the system

    dx1/dt = a x1 + u
    dx2/dt = a x2 + u

We can regard the system as a parallel coupling of two identical first order systems. This system is not controllable since

    Wc = [ 1  a
           1  a ]

has rank 1. We can also realize this by making a transformation of the states. Introduce the new states

    z1 = x1
    z2 = x1 - x2

The equation for the state z2 is given by

    dz2/dt = d(x1 - x2)/dt = 0

This implies that it is not possible to control the difference between the two states.

Observability
Given the system (7.34) we may ask if it is possible to reconstruct the states of the system given only the input and output signals. If this is the case we say that the system is observable. The system is observable if the observability matrix Wo given by (7.33) has rank n. There may be states that can't be obtained from the measurements. These states are unobservable.

Kalman's decomposition
R. E. Kalman developed the concepts of controllability and observability in the beginning of the 1960's. He also showed that a general system can be decomposed into four different subsystems. One part is both controllable and observable, one is controllable but not observable, one is not controllable but observable, and the fourth subsystem is neither controllable nor observable.
The connection between the four subsystems is shown in Figure 7.14 for the case when it is possible to diagonalize the system. From the figure we also see that it is only the part that is both controllable and observable that determines the input-output relation, i.e. the transfer function of the system. This implies that the system is both controllable and observable only if the transfer function

    G(s) = C (sI - A)^{-1} B + D

is of order n. If controllability or observability is lost there will be a pole-zero cancellation in the transfer function.
Figure 7.14 Illustration of Kalman's decomposition of a system that is possible to diagonalize. Index c implies that the subsystem is controllable and index c̄ that the subsystem is uncontrollable. The index o is used in the same way for observability.

Example 7.9|Kalman's decomposition
Consider the diagonal system

    dx/dt = [ -1   0   0   0        [ 1
               0  -2   0   0          1
               0   0  -3   0   x +    0   u
               0   0   0  -4 ]        0 ]

    y = [ 2  0  1  0 ] x

We see directly that the states x3 and x4 can't be influenced by the control signal, neither directly nor through some other states. Also we see that it is not possible to get information about the states x2 or x4 through the output. It is only the state x1 that is both controllable and observable. The controllability and observability matrices are

    Wc = [ 1  -1  1  -1        Wo = [  2  0    1  0
           1  -2  4  -8               -2  0   -3  0
           0   0  0   0                2  0    9  0
           0   0  0   0 ]             -2  0  -27  0 ]

Both matrices have rank 2. The transfer function of the system is

    G(s) = 2 / (s + 1)

Three poles and zeros are cancelled in G(s) and the input-output behavior is determined by a first order system even if the total system is of order 4. The concepts of controllability and observability are very important and can be used at the design stage of the process to test for possible control difficulties.
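The rank tests of Example 7.9 are easy to reproduce numerically; a minimal sketch:

```python
import numpy as np

# The diagonal system of Example 7.9.
A = np.diag([-1.0, -2.0, -3.0, -4.0])
B = np.array([[1.0], [1.0], [0.0], [0.0]])
C = np.array([[2.0, 0.0, 1.0, 0.0]])

n = 4
# Wc = [B  AB  A^2B  A^3B],  Wo = [C; CA; CA^2; CA^3]
Wc = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])
Wo = np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(n)])
print(np.linalg.matrix_rank(Wc), np.linalg.matrix_rank(Wo))   # 2 2
```

Both ranks are 2, not 4, confirming that only part of the fourth order system shows up in the first order transfer function G(s) = 2/(s + 1).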
7.5 Conclusions
We have in this chapter introduced more complex controllers that all are based on the inclusion of a model in the controller. Using the model is one way of
predicting unmeasurable signals in the process. "Measurements" are instead done on the model and used to improve the performance of the system. The dead time compensation requires realization of a time delay. This is best done using a computer and a sampled-data controller. We have shown that it is possible to obtain indirect or "soft" measurements of the states by using an observer. The estimated states can be used for feedback to place the poles of the closed loop system. There are other model based controllers that are based on sampled data theory. Model Predictive Control, Dynamic Matrix Control, and Model Algorithmic Control are typical methods of this type. Based on a sampled data model of the process the output is predicted and the control signal is chosen such that a loss function is minimized. These types of controllers have been very useful when there are time delays and also when there are constraints on the states or the control signals.
Sequencing Control
GOAL: To give a brief overview of sequencing control and to introduce GRAFCET.
8.1 Introduction
The history of logical nets and sequential processes starts with batch processing, which is very important in many process industries. Batch processing implies that control actions are done in a sequence where the next step depends on some conditions. Simple examples are recipes for cooking and instructions or manuals for equipment. Washing machines, dish washers, and batch manufacturing of chemicals are other examples. The recipes or instructions can be divided into a sequence of steps. The transition from one step to the next can be determined by a logical expression. In this chapter logical and sequencing controllers are presented. Sequencing control is an integrated part of many control systems, and can be used for batch control, start-up and shut-down procedures, interlocks, and alarm supervision. When implementing such systems it is useful to be aware of the notations and traditions in sequence control as it developed before it started to be integrated with automatic control. This chapter starts by giving a brief background. Logic or Boolean algebra plays an important role and is presented in Section 8.2. The axioms and basic rules for evaluation of logical expressions are given. Logical expressions are sometimes also called combinational networks. Using combinational networks and a memory function it is possible to build so called sequential nets. Section 8.3 is devoted to sequencing controllers. One way to present function procedures, which is becoming a popular industrial notation, is to use GRAFCET, which is described in Section 8.4.
Figure 8.1 Different ways to represent the logical expressions a·b and a + b: (a) Boolean algebra (b) Relay symbols (c) Ladder symbols (d) Logical circuit symbols (American standard) (e) Logical circuit symbols (Swedish standard).
computer implementations are often called PLC (Programmable Logical Controller) systems. An earlier notation was PC (Programmable Controller) systems. This was used before PC became synonymous with personal computer.

Logical systems are built up by variables that can be true or false, i.e. the variables can only take one of two values. For instance, a relay may be drawn or not, an alarm may be set or not, a temperature may be over a limit or not. The output of a logical net is also a two-valued variable: a motor may be started or not, a lamp is turned on or off, a contactor is activated or not. The mathematical basis for handling this type of systems is Boolean algebra. This algebra was developed by the English mathematician George Boole in the 1850's. It was, however, not until about a century later that it became a widely used tool to analyze and simplify logical circuits.

Logical variables can take the values true or false. These values are often also denoted by 1 (true) or 0 (false). Boolean algebra contains three operations: or, and, and not. We have the following notations:

    and:  a·b      a ∧ b     a and b
    or:   a + b    a ∨ b     a or b
    not:  ¬a       not a

The expression a·b is true only if both a and b are true at the same time. The expression a + b is true if either a or b or both a and b are true. Finally, the expression ¬a is true only if a is false. In the logical circuit symbols a ring on the input denotes negation of the signal and a ring on the output denotes negation of the computed expression. The and and or expressions can also be interpreted using relay symbols as in Figures 8.1 and 8.2. The and operator is the same as a series connection of two relays: there is only a connection if both relays are drawn. The or operator is the same as a parallel connection of two relays: there is a connection whenever at least one of the relays is drawn. The relay or ladder representation of logical nets is often used for documentation and programming of PLCs.

We will use the more programming or computer oriented approach with · and +. This usually gives
Figure 8.2 Different ways to represent the logical expression ¬(a·b): (a) Boolean algebra (b) Relay symbols (c) Ladder symbols (d) Logical circuit symbols (American standard) (e) Logical circuit symbols (Swedish standard).
more compact expressions and is also more suited for algebraic manipulations. Exactly as in ordinary algebra we may simplify the writing by omitting the and-operator, i.e. write ab instead of a·b. To make the algebra complete we have to define the unit and zero elements. These are denoted by 1 and 0 respectively. We have the following axioms for the Boolean algebra:

    ¬0 = 1        ¬1 = 0
    1 + a = 1     0 + a = a
    1·a = a       0·a = 0
    a + a = a     a + ¬a = 1
    a·¬a = 0      a·a = a
    ¬(¬a) = a

We further have the following rules for calculation:

    a + b = b + a                  Commutative law
    a·b = b·a                      Commutative law
    a·(b + c) = a·b + a·c          Distributive law
    a·(b·c) = (a·b)·c              Associative law
    a + (b + c) = (a + b) + c      Associative law
    ¬(a + b) = ¬a·¬b               de Morgan's law
    ¬(a·b) = ¬a + ¬b               de Morgan's law

A logical net can be regarded as a static system. For each combination of the input signals there is only one output value that can be obtained. In many applications it can be easy to write down the logical expressions for the system. In other applications the expressions can be quite complicated
and it can be desirable to simplify the expressions. One reason for making the simplification is that the simplified expressions give a clearer understanding of the operation of the network. The rules above can be used to simplify logical expressions. One very useful rule is the following:

    a + a·b = a·1 + a·b = a·(1 + b) = a·1 = a    (8.1)
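Since each variable can only take two values, identities such as (8.1) and de Morgan's laws can be verified by brute force over all input combinations; a minimal sketch:

```python
from itertools import product

# Exhaustive verification of rule (8.1) and de Morgan's laws: with n inputs
# there are only 2^n combinations, so all of them can be checked.
for a, b in product([False, True], repeat=2):
    assert (a or (a and b)) == a                        # (8.1): a + a*b = a
    assert (not (a or b)) == ((not a) and (not b))      # de Morgan
    assert (not (a and b)) == ((not a) or (not b))      # de Morgan
print("all identities hold")
```

This is exactly the truth table method described below, carried out by a program instead of by hand.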
One way to test equality between two logical expressions is a truth table. The truth table consists of all combinations of the input variables and the evaluation of the two expressions. Since the inputs can only take two values there will be 2^n combinations, where n is the number of inputs. The expressions are equal if they have the same value for all combinations.

Example 8.1|Truth table
For instance, (8.1) is proved by using the table:

    a  b  |  a + ab  |  a
    0  0  |    0     |  0
    0  1  |    0     |  0
    1  0  |    1     |  1
    1  1  |    1     |  1

To the left we write all possible combinations of the input variables. To the right we write the values of the left and right hand sides of (8.1). The last two columns are the same for all possible combinations of a and b, which proves the equality.

There are systematic methods to make an automatic reduction of a logical expression. The methodology will only be illustrated by an example.

Example 8.2|Systematic simplification of a logical network
Consider a logical network that has three inputs a, b, and c and one output y. The network is defined by the following truth table:
         a  b  c  |  y
    v0   0  0  0  |  0
    v1   0  0  1  |  0
    v2   0  1  0  |  0
    v3   0  1  1  |  1
    v4   1  0  0  |  1
    v5   1  0  1  |  1
    v6   1  1  0  |  1
    v7   1  1  1  |  1

The different combinations of the inputs are denoted vi, where the index i corresponds to the evaluation of the binary number abc. I.e., the combination abc = 101 corresponds to the number 1·2^2 + 0·2^1 + 1·2^0 = 5. The expression for the output y can be expressed in two ways: either as the logical or of the combinations where the output is true, or as the negation of the logical or of the combinations where the output is false. Using the first representation we can write
[Figure 8.3 Block diagram of a PLC system: a CPU and memory connected over a bus to an I/O unit, where input modules read sensors and output modules drive actuators; a programming module attaches to the system.]
    y = v3 + v4 + v5 + v6 + v7
      = ¬a·b·c + a·¬b·¬c + a·¬b·c + a·b·¬c + a·b·c
      = b·c·(¬a + a) + a·¬b·(¬c + c) + a·b·(¬c + c)
      = b·c + a·¬b + a·b
      = b·c + a·(¬b + b)
      = a + b·c

The first equality is obtained from the truth table. The second equality is obtained by combining the terms v3 with v7, v4 with v5, and v6 with v7. It is possible to use v7 two times since v7 + v7 = v7. The simplifications are then given by the computational rules listed above. The second way to do the simplification is to write

    ¬y = v0 + v1 + v2
       = ¬a·¬b·¬c + ¬a·¬b·c + ¬a·b·¬c
       = ¬a·¬b·(¬c + c) + ¬a·¬c·(¬b + b)
       = ¬a·¬b + ¬a·¬c
       = ¬a·(¬b + ¬c)

This gives

    y = ¬(¬y) = ¬( ¬a·(¬b + ¬c) ) = a + b·c

which is the same as before. Using the methodology described in the example above it is possible to reduce a complicated logical expression into its simplest form. A more formal presentation of the methods is outside the scope of this book.
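The simplification in Example 8.2 can be double-checked by evaluating the original truth table and the reduced expression for all eight input combinations:

```python
from itertools import product

# Truth table of Example 8.2: y is true for the combinations v3..v7.
truth = {(0, 0, 0): 0, (0, 0, 1): 0, (0, 1, 0): 0, (0, 1, 1): 1,
         (1, 0, 0): 1, (1, 0, 1): 1, (1, 1, 0): 1, (1, 1, 1): 1}

for a, b, c in product([0, 1], repeat=3):
    y = int(bool(a or (b and c)))          # the reduced expression a + b*c
    assert y == truth[(a, b, c)]
print("y = a + bc reproduces the truth table")
```
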
PLC implementation
Most PLC units are implemented using microprocessors. This implies that the logical inputs must be scanned periodically. A typical block diagram is shown in Figure 8.3. The execution of the PLC program can be done in the following way:
1. Input copying. Read all logical input variables and store them in the I/O memory.
2. Scan through the program for the logical net and store the computed values of the outputs in the I/O memory.
3. Output copying. Send the values of the output signals from the I/O memory to the process.
4. Repeat from 1.

The code for the logical net is executed as fast as possible. The time for execution will, however, depend on the length of the code. The I/O copying in Step 1 is done to prevent the logical signals from changing value during the execution of the code. Finally all outputs are changed at the same time. The programming of the PLC unit can be done from a small programming module or by using a larger computer with a more effective editor. The programming is done based on operations such as logical and, logical or, and logical not.
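The scan cycle in steps 1-4 can be sketched as follows. The process image and the start/stop interlock "program" are invented for illustration; a real PLC is programmed in ladder logic or an instruction list, not Python:

```python
# Sketch of the PLC scan cycle (input copying, program scan, output copying).

def plc_scan(program, input_snapshots):
    outputs = []
    for snapshot in input_snapshots:
        image = dict(snapshot)      # 1. input copying: freeze the inputs
        result = program(image)     # 2. scan the logic on the frozen image
        outputs.append(result)      # 3. output copying: update all at once
    return outputs

def motor_logic(io):
    # Run the motor only while start is pressed and stop is not.
    return {"motor": io["start"] and not io["stop"]}

scans = [{"start": False, "stop": False},
         {"start": True, "stop": False},
         {"start": True, "stop": True}]
print(plc_scan(motor_logic, scans))
# [{'motor': False}, {'motor': True}, {'motor': False}]
```

Freezing the inputs before the scan is what guarantees that every rung of the program sees the same input values during one cycle.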
Also there are typically combinations such as nand and nor, which are defined as

    a nand b = ¬(a·b) = ¬a + ¬b
    a nor b = ¬(a + b) = ¬a·¬b

Further there are operations to set timers, make conditional jumps, increase and decrease counters, etc. The specific details differ from manufacturer to manufacturer.
[Figure 8.4 State graph for a simple sequential net with the two states Lesson and Break. The bell input triggers the transitions; the outputs include "play" (input: no bell) and "leave class room" (input: bell).]
Figure 8.5 Synchronous sequential net as a combination of a logical net and delay or memory elements.
A sequential net can be described by a state graph such as in Figure 8.4. The state graph shows the transitions and the outputs for different input signals. The sequential net can also be described by a truth table, which must include the states and also the conditions for transitions. The sequential net is thus described by the maps

    new state = f(state, inputs)
    output = g(state, inputs)

Notice the similarity with the state equations for continuous time and discrete time systems. The difference is that the states, inputs, and outputs can only take a finite number of values.

Sequential nets can be divided into synchronous and asynchronous nets. In synchronous nets a transition from one state to another is synchronized by a clock pulse, which is the case when the nets are implemented in a computer. A synchronous sequential net can be implemented as shown in Figure 8.5. In asynchronous nets the system goes from one state to the other as soon as the conditions for transition are satisfied. The asynchronous nets are more sensitive to the timing of input changes. In the sequel we will only discuss the synchronous nets.

There are many ways to describe sequences and sequential nets. A standard is now developing based on GRAFCET, developed in France. GRAFCET stands for "Graphe de Commande Etape-Transition" (Graph for Step-Transition Control). GRAFCET, with minor modifications, has been passed as an IEC (International Electrotechnical Commission) standard, IEC 848. The way to describe sequential nets is called function charts. GRAFCET is a formalized way of describing sequences and functional specifications. This can be done without any consideration of how to make the hardware implementation. The functional specifications are easy to interpret and understand. Computerized aids to program and present sequences have also been developed.

Another way to describe sequences and parallel actions is Petri nets. Petri nets are directed graphs that can handle sequential as well as parallel sequences. Sometimes the formalism for Petri nets makes it possible to investigate, for instance, reachability. It is then possible to find out which states can be reached by legitimate transitions. This knowledge can be used to test the logic and to implement alarms.
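The maps f and g can be written out explicitly for the school bell net of Figure 8.4. The outputs "play" and "leave class room" come from the figure; "teach" and "enter class room" are our own additions to complete the table:

```python
# A synchronous sequential net as two lookup tables:
# new state = f(state, input), output = g(state, input).

f = {("lesson", "bell"): "break",  ("lesson", "no bell"): "lesson",
     ("break",  "bell"): "lesson", ("break",  "no bell"): "break"}
g = {("lesson", "bell"): "leave class room", ("lesson", "no bell"): "teach",
     ("break",  "bell"): "enter class room", ("break",  "no bell"): "play"}

state = "lesson"
for inp in ["no bell", "bell", "no bell", "bell"]:
    print(state, "+", inp, "->", g[(state, inp)])   # output = g(state, inputs)
    state = f[(state, inp)]                         # new state = f(state, inputs)
print(state)   # back in "lesson" after two bells
```

Each pass through the loop corresponds to one clock pulse of a synchronous net: the output is computed from the current state and input, and only then is the state updated.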
8.4 GRAFCET
The objective of GRAFCET is to give a tool for modeling and specification of sequences. The main functions and properties of GRAFCET will be described in this section. A simple example is used to illustrate the concepts.

[Figure 8.6 Tank with level sensors L0 and L1, temperature sensor T, heater Q, and the two valves V1 and V2.]

Example 8.4|Heating of water
Consider the process in Figure 8.6. It consists of a water tank with two level indicators, a heater, and two valves. Assume that we want to perform the following sequence:
0. Start the sequence by pressing the button B. (Not shown in Figure 8.6.)
1. Fill water by opening the valve V1 until the upper level L1 is reached.
2. Heat the water until the temperature is greater than T. The heating can start as soon as the water is above the level L0.
3. Empty the water by opening the valve V2 until the lower level L0 is reached.
4. Close the valves, go to Step 0, and wait for a new sequence to start.

From the natural language description we find that there is a sequence of waiting, filling, heating, and emptying. Also notice that the filling and heating must be done in parallel and then synchronized, since we don't know which will be finished first.
Figure 8.7 GRAFCET symbols. (a) Step (inactive) (b) Step (active) (c) Initial step (d) Step with action (e) Transition (f) Branching with mutually exclusive alternatives (g) Branching into parallel paths (h) Synchronization.
A function chart in GRAFCET consists of steps and transitions. A step corresponds to a state and can be inactive, active, or initial, see Figure 8.7(a)-(c). Actions associated with a step can also be indicated, see Figure 8.7(d). A transition is denoted by a bar together with a condition for when the transition can take place, see Figure 8.7(e). A step is followed by a transition, branching with mutually exclusive alternatives, or branching into parallel sequences. Parallel sequences can be synchronized, see Figure 8.7(h). The synchronization takes place when all the preceding steps are active and when the transition condition is fulfilled.

The function chart in GRAFCET for the process in Example 8.4 is shown in Figure 8.8. The sequence starts in the step Initial. When B = 1 we get a transition to Fill 1, where the valve V1 is opened until the level L0 is reached. Now two parallel sequences start. First the heating starts, and we get a transition to Temp when the correct temperature is reached. At this stage the other branch may or may not be finished, and we must wait for synchronization before the sequence can continue. In the other branch the filling continues until level L1 is reached. After the synchronization the tank is emptied until level L0 is reached; thereafter we go back to the initial state and wait for a new sequence to start.

The example can be elaborated in different ways. For instance, it may happen that the temperature is reached before the upper level is reached.
Figure 8.8 GRAFCET for the process and the sequences in Example 8.4: the steps Init, Fill 1 (action V1), the parallel branches Heat (action Q) leading to Temp and Fill 2 (action V1) leading to Full, a synchronization with receptivity 1, and Empty (action V2) leading back to Init.
The left branch is then in step Temp. The water may, however, become too cool before the tank is full. This situation can be taken into account by making it possible to jump back to the step Heat if the temperature is low. In many applications we need to separate between the normal situation and emergency situations. In emergency situations the sequence should be stopped at a hazardous situation and started again when the hazard is removed. In simple sequential nets it may be possible to combine all these situations into a single function chart. To maintain simplicity and readability it is usually better to divide the system into several function charts.

GRAFCET rules
To formalize the behavior of a function chart we need a set of rules for how steps are activated and deactivated. We have the following rules:

Rule 1: The initialization defines the active step at the start.
Rule 2: A transition is firable if:
  i. All steps preceding the transition are active (enabled).
  ii. The corresponding receptivity (transition condition) is true.
  A firable transition must be fired.
Rule 3: All the steps preceding the transition are deactivated and all the steps following the transition are activated.
Rule 4: All firable transitions are fired simultaneously.
Rule 5: When a step must be both deactivated and activated it remains active without interrupt.

Rule 2, for instance, is illustrated in Figure 8.9. One way to facilitate the understanding of a functional specification is to introduce macro steps. A macro step can represent a new functional specification, see Figure 8.10. The
[Figure 8.9 Illustration of Rule 2: (a) not enabled (a = 1 or 0), (b) enabled but not firable (a = 0), (c) firable (a = 1). Figure 8.10 A macro step.]
macro steps make it natural to use a top-down approach in the construction of a sequential procedure. The overall description is first broken down into macro steps, and each macro step can then be expanded. This gives well structured programs and a clearer illustration of the function of a complex process.
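The rules above can be illustrated with a small interpreter for the function chart of Example 8.4. The crude tank model and all numbers are invented for the sketch, and firable transitions are fired one after another within a synchronous scan, which is a simplification of Rule 4:

```python
# A toy interpreter for the GRAFCET of Example 8.4 (Figure 8.8).

def run_chart():
    level, temp = 0.0, 20.0
    active = {"Init"}
    button = True                 # the operator presses B once
    trace = []
    # (steps that must be active, receptivity, steps to activate)
    transitions = [
        ({"Init"},         "B",   {"Fill1"}),
        ({"Fill1"},        "L0",  {"Heat", "Fill2"}),  # parallel branches
        ({"Heat"},         "T",   {"Temp"}),
        ({"Fill2"},        "L1",  {"Full"}),
        ({"Temp", "Full"}, None,  {"Empty"}),          # synchronization
        ({"Empty"},        "low", {"Init"}),
    ]
    for _ in range(2000):         # synchronous scans
        cond = {"B": button, "L0": level >= 0.2, "L1": level >= 1.0,
                "T": temp >= 60.0, "low": level <= 0.2}
        for pre, c, post in transitions:
            if pre <= active and (c is None or cond[c]):   # Rule 2
                active = (active - pre) | post             # Rule 3
        button = False
        # crude plant model: actions of the active steps
        if active & {"Fill1", "Fill2"}:    # V1 open: filling
            level += 0.01
        if "Heat" in active:               # Q on: heating
            temp += 0.5
        if "Empty" in active:              # V2 open: emptying
            level -= 0.02
        trace.append(set(active))
        if active == {"Init"} and len(trace) > 1:
            break
    return trace

trace = run_chart()
print(trace[-1])   # the chart has returned to the initial step
```

Running the interpreter, the active-step trace passes through Fill 1, then the parallel pair {Heat, Fill2}, the synchronization into Empty, and finally back to Init, which is exactly the walkthrough of Figure 8.8 in the text.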
8.5 Summary
To implement alarms, supervision, and sequencing we need logical nets and sequential nets. Ways to define and simplify logical nets have been discussed. The implementation can be done using relays or special purpose computers, PLCs. GRAFCET or function charts is a systematic way of representing sequential nets that is becoming a popular industrial notation and is thus important to be aware of. The basic properties of function charts are easy to understand and have been illustrated in this chapter. The analysis of sequential nets has not been discussed and we refer to the literature.

A characteristic of sequencing control is that the systems may become very large. Therefore, clarity and understandability of the description are major concerns. Boolean algebra and other classical methods are well suited for small and medium size problems. However, even though they look very simple, they have an upper size limit where the programs become very difficult to penetrate. There is currently a wish to extend this upper limit further by using even simpler and clearer notations. GRAFCET is an attempt in this direction.
9.1 Introduction
So far we have mainly discussed how to analyze simple control loops, i.e. control loops with one input and one output (SISO systems). Most chemical processes contain, however, many control loops. Several hundred control loops are common in larger processes. Fortunately, most of the loops can be designed from a single-input single-output point of view. This is because there is no or only weak interaction between the different parts of the process. Multiloop controllers, such as cascade control, were discussed in Chapter 6. In cascade control there are several measurements, but there is still only one input signal to the process. In many practical cases it is necessary to consider several control loops and actuators at the same time. This is the case when there is an interaction between the different control loops in a process. A change in one input may influence several outputs in a complex way. If the couplings or interactions are strong it may be necessary to design several loops at the same time. This leads to the concept of multi-input multi-output systems (MIMO systems). We will in this chapter generalize multiloop control into multivariable control. The distinction is not well defined, but by multivariable control we will mean systems with two or more actuators where all the control loops are designed at the same time. A fundamental problem in multivariable control is how the different measurements should be used by the different actuators. There are many possible combinations. We have so far only discussed simple systems where the use of the measurements has been "obvious". The controllers have typically been PID controllers with simple modifications. Multivariable systems will be introduced using a couple of examples.
Figure 9.1 Heated tank with feed flow Fi at temperature Ti, a level controller LC (setpoint Lr) acting on the outlet flow F, and a temperature controller TC (setpoint Tr) acting on the steam flow; the outlet has temperature T.
Example 9.1|Shower
A typical example of a coupled system is a shower. The system has two inputs, the flows of hot and cold water, and two outputs, total flow and temperature. Changing either of the flows will change the total flow as well as the temperature. In this case the coupling between the two input signals and the two output signals is "strong". From daily life we also know that the control of flow and temperature in a shower can be simplified considerably by using a thermostat mixer. This reduces the coupling and makes the control easier.

Example 9.2|Level and temperature control in a tank
Consider the heated tank in Figure 9.1. The flow and temperature of the feed are the disturbances of the process. The level is controlled by the outlet valve. The temperature in the tank is controlled by the steam flow through the heating coil. A change in the feed temperature Ti or the temperature setpoint Tr will change the steam flow, but this will not influence the level in the tank. A change in the feed flow Fi or the level setpoint Lr will change the output flow F and thus the content in the tank. This will also influence the temperature controller, which has to adjust the steam flow. There is thus a coupling from the level control to the temperature control, but there is no coupling from the temperature loop to the level loop.

Example 9.3|Level and temperature control in an evaporator
Consider the evaporator in Figure 9.2. In this process there is an interaction between the two loops. The temperature control loop will change the steam flow to the coil. This will influence both the produced steam and the level. In the same way a change in the output flow will change both the level and the temperature. In the evaporator there is thus an interaction in both directions. One-way interaction has in previous chapters been handled by using feedforward. In this chapter we will discuss how more complex interactions can be handled.

Design of MIMO systems can be quite complex and is outside the main theme of this course. It is, however, of great importance to be able to judge if there is a strong coupling between different parts of the process. It is also important to have a method to pair input and output signals. In this chapter we will discuss three aspects of coupled systems: How to judge if the coupling in the process will cause problems in the control of the process?
Figure 9.3 Coupled system with two inputs and two outputs.
How to determine the pairing of the inputs and outputs in order to avoid the coupling? How to eliminate the coupling in the process?

An example of a coupled system is given in Figure 9.3. The system has two input signals and two output signals. Let Gij be the transfer function from input j to output i and introduce the vector notations for the Laplace transforms of the outputs and inputs

Y(s) = [ Y1(s)  Y2(s) ]^T    U(s) = [ U1(s)  U2(s) ]^T

then
Y(s) = [ G11(s)  G12(s)
         G21(s)  G22(s) ] U(s) = G(s)U(s)

G(s) is called the transfer function matrix of the system. This representation can be generalized to more inputs and outputs. Another way of describing a multivariable system is by using the state space model
dx/dt = Ax + Bu
y = Cx + Du
where the input u and output y now are vectors.
Figure 9.4 Block diagram of a system with two inputs and outputs. The system is controlled by two simple controllers Gc1 and Gc2.
Figure 9.5 The system in Figure 9.4 when only the first loop is closed.
The system in Figure 9.4 is described by

Y1(s) = G11U1(s) + G12U2(s)
Y2(s) = G21U1(s) + G22U2(s)    (9.1)

and the controllers by

U1(s) = Gc1 (Yr1(s) - Y1(s))    (Loop 1)
U2(s) = Gc2 (Yr2(s) - Y2(s))    (Loop 2)    (9.2)
The situation with only Loop 1 closed is shown in Figure 9.5. The closed loop system is now described by

Y1 = G11Gc1/(1 + G11Gc1) Yr1
Y2 = G21U1 = G21Gc1/(1 + G11Gc1) Yr1
Figure 9.6 The system in Figure 9.4 with both loops closed. The bold lines indicate the influence of u1 on y1 through the second loop.
With obvious changes we can also write down the expressions when Loop 2 is closed and Loop 1 open. The characteristic equations for the two cases are

1 + G11Gc1 = 0    (9.3)

and

1 + G22Gc2 = 0    (9.4)

Now consider the case in Figure 9.4 with both loops closed. Using (9.2) to eliminate U1 and U2 from (9.1) gives

(1 + G11Gc1)Y1 + G12Gc2Y2 = G11Gc1Yr1 + G12Gc2Yr2
G21Gc1Y1 + (1 + G22Gc2)Y2 = G21Gc1Yr1 + G22Gc2Yr2

Using Cramer's rule to solve for Y1 and Y2 gives
Y1 = [G11Gc1 + Gc1Gc2(G11G22 - G12G21)]/A Yr1 + [G12Gc2]/A Yr2
Y2 = [G21Gc1]/A Yr1 + [G22Gc2 + Gc1Gc2(G11G22 - G12G21)]/A Yr2

where the denominator is

A(s) = (1 + G11Gc1)(1 + G22Gc2) - G12G21Gc1Gc2    (9.5)

If G12 = G21 = 0 then there is no interaction and the closed loop system is described by

Y1 = G11Gc1/(1 + G11Gc1) Yr1
Y2 = G22Gc2/(1 + G22Gc2) Yr2

The closed loop system is in this case stable if each loop is stable, i.e. if all the roots of (9.3) and (9.4) are in the left half plane. With interaction the stability of the closed loop system is determined by the polynomial A(s) in (9.5). Notice that the stability of the closed loop system depends on both the controllers and the four blocks in G. The interaction through the second loop is illustrated in Figure 9.6. The bold lines indicate the influence of u1 on y1 through the second loop. The controllers must be tuned such that the total system is stable. Since any of the loops may be switched into manual control, i.e. the loop is open, it
is also necessary that each loop separately is stable. The closed loop system may be unstable due to the interaction even if each loop separately is stable. The interaction may thus destabilize the closed loop system, which makes the tuning more difficult. The following example points out some of the difficulties of tuning multivariable controllers.

Example 9.4|Stability of multivariable system
Assume that the process in Figure 9.4 has the transfer functions

G11 = 1/(0.1s + 1)    G12 = 5/(s + 1)
G21 = 1/(0.5s + 1)    G22 = 2/(0.4s + 1)

and the proportional controllers Gc1 = Kc1 and Gc2 = Kc2. When each loop is tuned separately, it is stable for any positive gain. The characteristic equation for the total system is

0.02s^4 + 0.1(3.1 + 2Kc1 + Kc2)s^3 + (1.29 + 1.1Kc1 + 1.3Kc2 + 0.8Kc1Kc2)s^2 + (2 + 1.9Kc1 + 3.2Kc2 + 0.5Kc1Kc2)s + (1 + Kc1 + 2Kc2 - 3Kc1Kc2) = 0

A necessary condition for stability is that all coefficients are positive. The only one that may become negative is the last one. This implies that a necessary condition for stability of the closed loop system is

1 + Kc1 + 2Kc2 - 3Kc1Kc2 > 0

This condition is violated when the gains are sufficiently high.
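The necessary condition in Example 9.4 can be checked numerically. The sketch below (not part of the original text; it assumes NumPy) forms the closed loop characteristic polynomial from the coefficients derived above and inspects its roots:

```python
import numpy as np

def char_poly(kc1, kc2):
    """Characteristic polynomial coefficients of Example 9.4."""
    return [0.02,
            0.1 * (3.1 + 2 * kc1 + kc2),
            1.29 + 1.1 * kc1 + 1.3 * kc2 + 0.8 * kc1 * kc2,
            2 + 1.9 * kc1 + 3.2 * kc2 + 0.5 * kc1 * kc2,
            1 + kc1 + 2 * kc2 - 3 * kc1 * kc2]

def stable(kc1, kc2):
    """True if all roots lie in the left half plane."""
    return all(r.real < 0 for r in np.roots(char_poly(kc1, kc2)))

# Moderate gains: each loop alone is stable, and so is the total system.
print(stable(0.5, 0.5))   # True
# High gains violate 1 + Kc1 + 2Kc2 - 3*Kc1*Kc2 > 0: unstable.
print(stable(5.0, 5.0))   # False
```

Trying a few gain pairs this way quickly maps out the stability region of the coupled system.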
Generalization
The computations above were done for a 2 × 2 system with simple proportional controllers. We will now consider a system with a general transfer function matrix Go(s) and a general controller matrix Gc(s). Let the system be described by

Y(s) = Go(s)U(s)    U(s) = Gc(s)(Yr(s) - Y(s))

Solving for Y(s) gives

Y(s) = (I + Go(s)Gc(s))^{-1} Go(s)Gc(s) Yr(s) = Gcl(s) Yr(s)    (9.6)

where I is the unit matrix. The closed loop transfer function matrix is Gcl(s). When evaluating the inverse in (9.6) we get the denominator
det(I + Go(s)Gc(s)) = 0    (9.7)

This is in analogy with the SISO case, compare (3.20). The closed loop system is thus stable if the roots of (9.7) are in the left half plane. The simple examples above point out some of the difficulties with MIMO systems. How difficult it is to control a MIMO system depends on the system itself, but also on how we choose and pair the measurements and input signals.

Controllability
The concept of controllability was introduced in Section 7.4. This is one way to measure how easy it is to control the system. Let Wcr be a square matrix consisting of n independent columns of the controllability matrix Wc. One measure of the controllability of the system is det Wcr. This quantity will, however, depend on the chosen units and can only be used as a relative measure when comparing different choices of input signals.

Resiliency
One way to measure how easy it is to control a process is the concept of resiliency introduced in Morari (1983). The Resiliency Index (RI) depends on the choice of inputs and measurements, but is independent of the chosen controller. It is thus a measure based on the open loop system that is independent of the pairing of the control signals and measurements. The RI is defined as the minimum singular value of the transfer function matrix of the process Go(iω), i.e.

RI = min over ω of σ_min(Go(iω))

The singular values σ_i of a matrix A are defined as

σ_i = sqrt(λ_i)

where λ_i are the eigenvalues of the matrix A^T A. The smallest σ_i in magnitude is called the minimum singular value. A large value of RI indicates that the system is easy to control. The RI can be defined for different frequencies or be based only on the steady state gain matrix of the process. The RI depends on the scaling of the process variables and should be used to compare different choices of inputs and outputs. All signals should be scaled into compatible units.
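As an illustration (not from the text), the RI based on the steady state gain matrix is one call to NumPy's singular value decomposition; the gain matrix below is G(0) for the process in Example 9.4:

```python
import numpy as np

def resiliency_index(K):
    """Morari resiliency index: the minimum singular value of the
    (steady state) gain matrix. A larger RI suggests a process that is
    easier to control. Signals are assumed scaled to compatible units."""
    return np.linalg.svd(K, compute_uv=False).min()

# Steady state gain matrix G(0) of the process in Example 9.4.
K = np.array([[1.0, 5.0],
              [1.0, 2.0]])
print(round(resiliency_index(K), 3))   # 0.541
```

Repeating the computation for alternative sets of manipulated variables and comparing the resulting RI values implements step 2 of the design procedure discussed below.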
We will now consider the situation when a pairing of variables has been obtained. Information about the stability of the proposed controller structure can be obtained using the Niederlinski index. The method was proposed in Niederlinski (1971) and a refined version in Grosdidier et al (1985). The theorem gives a sufficient condition for instability and is based on the steady state gain matrix of the process.

Theorem 9.1|Sufficient condition for instability, Grosdidier et al (1985)
Suppose that a multivariable system is controlled with the pairing u1-y1, u2-y2, ..., un-yn. Assume that the closed loop system satisfies the following assumptions:
1. Let Goij(s) denote the (i, j):th element of the open loop transfer function matrix Go(s). Each Goij(s) must be stable, rational, and proper. Further define the steady state gain matrix K = Go(0).
2. Each of the n feedback controllers contains integral action.
3. Each individual control loop is stable when any of the other n - 1 loops are opened.
If the closed loop system satisfies Assumptions 1-3, then the closed loop system is unstable if

det K / (k11 k22 · · · knn) < 0

where kii are the diagonal elements of K.

Remark 1. Notice that the test is based on the steady state gain of the open loop process. Further, the pairing is such that ui is paired with yi. This can be obtained by a suitable reordering of the open loop transfer function matrix.
Remark 2. The theorem gives a sufficient, but not necessary, condition for instability, which can be used to eliminate unsuitable pairings of variables.

A transfer function is proper if the degree of the denominator is the same as or larger than the degree of the numerator, i.e. the number of poles is the same as or larger than the number of zeros.

A design procedure
Luyben (1990) suggests a design procedure for MIMO systems consisting of the following steps:
1. Select controlled variables using engineering judgment.
2. Select manipulated variables that give the largest resiliency index, RI.
3. Eliminate impossible variable pairings using the Niederlinski index.
4. Find the best pairing from the remaining set.
5. Tune the controllers.

The selection of controlled variables must be based on good knowledge of the process to be controlled. There is usually a difference between the quantities that one primarily wants to control and the variables that are easily available for measurement without any time delay due, for instance, to laboratory tests. A typical example is composition in distillation columns, which can be difficult and expensive to measure. Instead temperatures in the column are measured and controlled. When several possibilities to manipulate the process are available one can use the Morari resiliency index to select the input that is most effective. This
can be done based on the open loop model without having determined the structure of the controller. Only the outputs and inputs are selected, not how they should be used for control. The Niederlinski index can then be used to eliminate bad pairings of variables. The pairing can be based on the relative gain array described in the next section. The pairing of variables now gives the structure of the controller. A simple way to approach the design is to consider one loop at a time. Simple PID controllers can be used in many situations. It can, however, be necessary to "detune" the controller settings obtained from the one-loop-at-a-time tuning. This is due to the stability problems from the interaction discussed earlier. One way to detune a PID controller is to change the parameters to

Kc,detuned = Kc/f    Ti,detuned = f Ti    Td,detuned = f Td

where Kc, Ti, and Td are the PID parameters obtained, for instance, through Ziegler-Nichols tuning. The detuning factor f should be greater than 1. Typical values, according to Luyben (1990), are between 1.5 and 4. An increase of f will make the system more stable, but also more sluggish. The factor f can be determined through trial-and-error, but also by analyzing the closed loop characteristic equation. After the tuning there may still be too much interaction between the different loops. To avoid this a decoupler may be determined for the process. This is discussed in Section 9.5.
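Both the Niederlinski test and the detuning rule are easy to mechanize. A sketch (assuming NumPy; the function names are ours, and applying the detuning factor to Td as well as Ti follows the rule as stated above):

```python
import numpy as np

def niederlinski_unstable(K):
    """Sufficient condition for instability (Theorem 9.1): with the
    diagonal pairing u_i - y_i, the closed loop is unstable if
    det K / prod(k_ii) < 0, where K is the steady state gain matrix."""
    return np.linalg.det(K) / np.prod(np.diag(K)) < 0

def detune(Kc, Ti, Td, f):
    """Detune PID parameters with a factor f > 1 (typically 1.5-4)."""
    return Kc / f, f * Ti, f * Td

# Steady state gain of the system in Example 9.8 below:
K = np.array([[1.0, 2.0 / 3.0],
              [1.0, 1.0]])
print(niederlinski_unstable(K))   # False: the diagonal pairing is not ruled out
```

Here det K = 1/3 and k11 k22 = 1, so the test does not exclude the pairing; a negative determinant would.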
9.4 Relative Gain Array

The relative gain array is a way to measure the interaction in a multivariable system and to decide how the inputs and outputs should be paired. First consider the open loop system. Assume that the input uj is changed while all other inputs are kept constant, and introduce

kij = Δyi/Δuj    Δuk = 0, k ≠ j

where Δ denotes the change in the variable. The steady state gain K = G(0) is a matrix with the elements kij. To obtain a normalization we also study a second situation. Consider the situation in Figure 9.7. Assume that the control of the system is so good that only yi is changed when uj is changed. Introduce

lij = Δyi/Δuj    when Δyk = 0, k ≠ i

Figure 9.7 Closed loop system with controller and process, used in the definition of lij.
The coupling through the static gain G(0) is easy to obtain when the transfer function matrix is known. The normalization through lij is more difficult to determine. We will derive the normalization for a system with two inputs and outputs and then give the general expression.

Example 9.5|The normalization for a system with two inputs and outputs
Assume that the steady state behavior of the system is described by

y1 = k11 u1 + k12 u2
y2 = k21 u1 + k22 u2

Assume that u1 is changed while y1 is kept constant, i.e. Δy1 = 0. The first equation then gives Δu2 = -(k11/k12)Δu1, and inserting this into the second equation gives

l21 = Δy2/Δu1 = -(k11k22 - k12k21)/k12 = -det K/k12

where det K is the determinant of the matrix with the elements kij. In the same way we can determine

l11 = det K/k22    l12 = -det K/k21    l22 = det K/k11
The matrix with the elements 1/lij is thus given by

(1/det K) [ k22   -k21
            -k12   k11 ] = (K^{-1})^T

The relative gain array (RGA) is now defined as the matrix Λ with the elements

λij = kij/lij

The example above can be generalized. The relative gain array is defined as

Λ = K ⊗ (K^{-1})^T    (9.8)

where ⊗ denotes the element-by-element (Schur) product of the elements in K and (K^{-1})^T. Notice that it is not a conventional matrix multiplication that is performed. From (9.8) it follows that it is sufficient to determine the stationary gain K = G(0) of the open loop system. The relative gain array is then given through (9.8) by an element-by-element multiplication of two matrices. The RGA has the properties

sum over i of λij = sum over j of λij = 1
I.e., the row and the column sums are equal to unity. This implies that for a 2 × 2 system only one element has to be computed; the rest of the elements are then uniquely determined. For a 3 × 3 system four elements are needed to determine the full matrix. A system is easy to control using single-loop controllers if Λ is a unit matrix, or at least diagonally dominant, after possible permutations of rows and/or columns.

Example 9.6|Non-interacting system
Assume that

K = [ 0  a
      b  c ]

then

(K^{-1})^T = -(1/ab) [ c   -b
                       -a   0 ]

and

Λ = [ 0  1
      1  0 ]

By interchanging the rows or the columns we get a unit matrix. The system is thus easy to control using two non-interacting controllers. A system has difficult couplings if Λ has elements that are larger than 1. This implies that some elements must be negative, since the row and column sums must be unity.
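The computation in (9.8) is one line of NumPy. A sketch (the function name `rga` is ours):

```python
import numpy as np

def rga(K):
    """Relative gain array: elementwise (Schur) product of K and the
    transpose of its inverse -- not a conventional matrix product."""
    K = np.asarray(K, dtype=float)
    return K * np.linalg.inv(K).T

# Example 9.6 with a = 2, b = 3, c = 4: the anti-diagonal unit matrix.
print(rga([[0.0, 2.0], [3.0, 4.0]]))
```

Running the same function on the gain matrix of Example 9.8 below reproduces the RGA with elements 3 and -2, confirming the row and column sums of unity.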
Pairing
Table 9.1 The RGA and the recommended pairing (u1-y1 with u2-y2, or u1-y2 with u2-y1) for different parameter values.
By determining the relative gain array it is possible to solve the first problem stated in Section 9.1. The RGA matrix can also be used to solve the second problem, i.e. it can be used to pair inputs and outputs. The inputs and outputs should be paired so that the corresponding relative gains are positive and as close to one as possible.
Example 9.7|Pairing 1
The RGA and the pairing for different values of the parameter are shown in Table 9.1. The interaction when the parameter equals 2 is severe and the system will be difficult to control with two single loops.

Example 9.8|Pairing 2
Assume that a system has the transfer function matrix

G(s) = [ 1/(s+1)  2/(s+3)
         1/(s+1)  1/(s+1) ]

The static gain is given by

K = [ 1    2/3
      1    1   ]

and we get

(K^{-1})^T = [ 3   -3
               -2   3 ]

Λ = [ 3   -2
      -2   3 ]

Since Λ has elements that are larger than one we can expect difficulties when controlling the system using single-input single-output controllers.
Figure 9.8 Mixing process, where total flow and mixture should be controlled.
Example 9.9|Pairing 3
Assume that a system has the transfer function matrix

G(s) = 1/((s+1)(s+2)) [ s-1   s
                        -6    s-2 ]

The static gain is given by

K = [ -0.5   0
      -3    -1 ]

and we get

(K^{-1})^T = [ -2   6
               0   -1 ]

Λ = [ 1  0
      0  1 ]

This system should be possible to control using two simple controllers, since Λ is a unit matrix.

Example 9.10|Mixing process
Consider the mixing process in Figure 9.8. The purpose is to control the total flow F and the composition x at the outlet. The inputs are the flows F1 and F2. The desired equilibrium point is F = 200 and x = 0.6. The input compositions are x1 = 0.8 and x2 = 0.2. Which inputs should be used to control F and x respectively? The mass balances give

F = F1 + F2
Fx = F1x1 + F2x2

Solving for the unknown flows gives F1 = 133.33 and F2 = 66.67. The system is nonlinear and we can't directly determine the gain matrix at the desired equilibrium point. One way is to calculate the RGA using perturbations. Assume that F1 is changed one unit while F2 is kept constant. This gives F = 201 and x = 0.6009, and

ΔF/ΔF1 = 1/1 = 1    (F2 constant)

In the same way we change F1 by one unit but keep x constant. This gives

ΔF/ΔF1 = 1.50/1 = 1.50    (x constant)

Using the definition of the RGA we get

λ11 = 1/1.50 = 2/3 ≈ 0.67
Figure 9.9 Decoupling of the process with the matrix D(s); the decoupled system has the transfer function matrix T(s).
and the full relative gain matrix becomes

Λ = [ 0.67  0.33
      0.33  0.67 ]

where we have assumed that the inputs are in the order F1, F2 and the outputs F, x. The interaction will be minimized if we choose the pairing F-F1 and x-F2. There will, however, be a noticeable coupling since the elements are close in size.

The relative gain array method only considers the static coupling in a system. There may very well be difficult dynamic couplings in the process that are not detected through the RGA method. Different ways to circumvent this problem have been discussed in the literature. The Relative Dynamic Array (RDA) method is one way to also consider the dynamic coupling in some frequency ranges.
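For a nonlinear process such as the mixing tank, the RGA element can be obtained by perturbation exactly as in Example 9.10. A sketch of that computation (variable names are ours):

```python
x1, x2 = 0.8, 0.2            # feed compositions

def outputs(F1, F2):
    """Total flow F and composition x of the mix."""
    F = F1 + F2
    x = (F1 * x1 + F2 * x2) / F
    return F, x

F1, F2 = 133.33, 66.67       # flows at the desired point F = 200, x = 0.6
d = 1.0                      # perturbation of F1

# Open loop: F2 kept constant.
dF_open = outputs(F1 + d, F2)[0] - outputs(F1, F2)[0]

# Perfect control of x: keeping x = 0.6 requires F2 = F1/2.
dF_closed = outputs(F1 + d, (F1 + d) / 2)[0] - outputs(F1, F1 / 2)[0]

lam11 = dF_open / dF_closed
print(round(lam11, 2))       # 0.67, as in Example 9.10
```

The condition F2 = F1/2 follows from the mass balances with x = 0.6, so the "closed loop" change in F is 1.5 units and λ11 = 1/1.5 = 2/3.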
9.5 Decoupling
In this section we will give a short discussion of how to improve the control of a multivariable system with coupling. One way is to use theory for design of multivariable systems. This is, however, outside the scope of this course. A second way that can be effective is to introduce decoupling in a MIMO system. This is done by introducing new signals that are static or dynamic combinations of the original control signals. After the introduction of a decoupling matrix it can be possible to design the controllers from a single-input single-output point of view. Consider the system in Figure 9.9. The matrix D(s) is a transfer function matrix that will be used to give decoupling by introducing new control signals m(t). We have

Y(s) = G(s)D(s)M(s) = T(s)M(s)    (9.9)

and the decoupling is obtained by choosing D(s) such that T(s) becomes diagonal.
Figure 9.10 The process in Example 9.8 controlled by two PI controllers. (a) y1, y2 and the control signals u1 and u2 when the reference value of y1 is changed. (b) Same as (a) when the reference value of y2 is changed.
Equation (9.9) can give quite complicated expressions for D. One choice is obtained by considering the diagonal elements D11 and D22 as parts of the controller and interpreting D12 and D21 as feedforward terms in the controller. We may then choose

D = [ 1          -G12/G11
      -G21/G22   1        ]

This gives

T = GD = [ G11 - G12G21/G22   0
           0                  G22 - G12G21/G11 ]
With this special choice we have obtained a complete decoupling of the system. It may, however, be difficult to realize the decoupling, since D may contain pure derivatives. Another solution to the decoupling problem is obtained by the choice
is changed. Figure 9.11 shows the same experiment as in the previous figure, but when a stationary decoupling is done using

D = [ 1    -2/3
      -1   1    ]

This gives

T(s) = [ (1-s)/((s+1)(s+3))   4s/(3(s+1)(s+3))
         0                    1/(3(s+1))       ]

The RGA matrix of the decoupled system is

Λ = [ 1  0
      0  1 ]

The system should now be possible to control using two separate controllers. This is also verified in the simulation shown in Figure 9.11. There is, however, still a dynamic coupling between the two loops. A complete dynamic decoupling is shown in Figure 9.12. In this case we use

D = [ 1    -2(s+1)/(s+3)
      -1   1             ]

which gives

T(s) = [ (1-s)/((s+1)(s+3))   0
         0                    (1-s)/((s+1)(s+3)) ]
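The decouplers can be checked numerically by evaluating T(s) = G(s)D(s) along the imaginary axis. The sketch below (assuming NumPy; function names are ours) confirms that the dynamic decoupler cancels both off-diagonal elements at every frequency:

```python
import numpy as np

def G(s):
    """Transfer function matrix of the process in Example 9.8."""
    return np.array([[1 / (s + 1), 2 / (s + 3)],
                     [1 / (s + 1), 1 / (s + 1)]])

def D_dynamic(s):
    """Dynamic decoupler from the text."""
    return np.array([[1, -2 * (s + 1) / (s + 3)],
                     [-1, 1]])

for w in (0.1, 1.0, 10.0):
    s = 1j * w
    T = G(s) @ D_dynamic(s)
    # Both off-diagonal elements of T(s) vanish at every frequency.
    print(abs(T[0, 1]) < 1e-12, abs(T[1, 0]) < 1e-12)
```

Replacing `D_dynamic` with the constant stationary decoupler instead zeroes only T21, leaving the frequency-dependent coupling in T12 that is visible in Figure 9.11.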
9.6 Summary
Multivariable systems can be analyzed with the relative gain array method. It is only necessary to compute the static gain of the open loop system to determine the relative gain array (RGA) Λ. If Λ through permutations of rows and columns can be made a unit matrix, or a diagonally dominant matrix, then the system can be controlled by simple SISO controllers. The RGA method can also be used to pair inputs and outputs. We have also shown how it is possible to create new input signals such that the system becomes decoupled, either in a static sense or in a dynamic sense.
10
Sampled-data Control
GOAL: To give an introduction to how computers can be used to implement controllers.
10.1 Introduction
Computers are used more and more to control processes of different kinds. In Chapter 5 we have seen how a PID controller can be implemented in a computer. In this chapter we will give an overview of sampled-data systems and how computers can be used for implementation of controllers. Standard controllers such as PID controllers have over the years been implemented using mechanical, pneumatic, hydraulic, electronic, and digital equipment. Most manufactured PID controllers are today based on microprocessors. This is done because of the cost benefits. If the sampling frequency is high the controllers can be regarded as continuous time controllers. The user can do the tuning and installation without bothering about how the control algorithm is implemented. This has the advantage that the operators do not have to be re-educated when equipment based on new technology is installed. The disadvantage is that the full capacity of sampled-data systems is not used. Some advantages of using computers to control a process are: increased production, better quality of the product, improved security, better documentation and statistics, and improved flexibility. One of the most important points above is the increased flexibility. It is, in principle, very simple to change a computer program and introduce a new controller scheme. This is important in industrial processes, where revisions are always being made: new equipment is installed, new piping is done, etc. In this chapter we will give some insights into sampled-data systems and how they can be used to improve the control of processes. Section 10.2 is an introduction to sampling. Some effects of the sampling procedure, such as aliasing, are also covered in Section 10.2. Sampled-data systems are conveniently
described by difference equations. Some useful ideas are introduced in Section 10.3. Approximation of continuous time controllers is discussed in Section 10.4. Computer implementation of PID controllers is treated in Section 10.5. General aspects of real-time programming are discussed in Section 10.6, where commercial process control systems are also discussed.

The development of process control using computers can be divided into five phases:
Pioneering period 1955
Direct-digital-control period 1962
Minicomputer period 1967
Microcomputer period 1972
General use of digital control 1980

The years above give the approximate time when the different ideas appeared. The first computer application in the process industry was in March 1959 at the Port Arthur, Texas, refinery. The project was a cooperation between Texaco and the computer company Thompson Ramo Wooldridge (TRW). The controlled process was a polymerization unit. The system controlled 26 flows, 72 temperatures, 3 pressures, and 3 compositions. The essential function of the computer was to make an optimization of the feeds and recirculations of the process. During the pioneering period the hardware reliability of the computers was very poor. The Mean Time Between Failures (MTBF) for the central processing unit could be in the range 50-100 hours. The task of the computer was instead to compute and suggest the set-point values to the conventional analog controllers, see Figure 10.1 a). The operator then changed the set-points manually. This is called operator guide. With increased reliability it became feasible to let the computers change the set points directly, see Figure 10.1 b). This is called set-point control. The major tasks of the computers were optimization, reporting, and scheduling. The basic theory for sampled-data systems was developed during this period. With increasing reliability of the computers it became possible to replace the conventional controllers with algorithms in the computers.
This is called Direct Digital Control (DDC), see Figure 10.1 c). The first installation using DDC was made at Imperial Chemical Industries (ICI) in England in 1962. A complete analog instrumentation was replaced by one computer, a Ferranti Argus. The computer measured 224 variables and controlled 129 valves directly. At this time a typical central processing unit had an MTBF of about 1000 hours. Using DDC it also became more important to consider the operator interfaces. In conventional control rooms the measured values and the controllers are spread out over a large panel, usually covering a whole wall of a room. Using video displays it became possible to improve the information flow to the process operators. Integrated circuits and the development of electronics and computers during the end of the sixties led to the concept of minicomputers. As the name indicates, the computers became smaller, faster, cheaper, and more reliable. Now it became cost-effective to use computers in many applications. A typical computer from this time is the CDC 1700, with an addition time of 2 μs and a multiplication time of 7 μs. The primary memory was in the range 8-124k words. The MTBF for the central processing unit was about 20 000 hours. The computers were now equipped with hardware that made it easy
Figure 10.1 Different ways to use computers. a) Operator guide. b) Set-point control. c) Direct Digital Control (DDC).
to connect them to the processes. Special real-time operating systems were developed, which made it easier to program the computers for process control applications. The development of very large scale integration (VLSI) electronics made it possible to build a computer on one integrated circuit. Intel developed the first microprocessors in the beginning of the seventies. The microprocessors made it possible to have computing power in almost any equipment, such as sensors, analyzers, and controllers. A typical standard single loop controller based on a microprocessor is shown in Figure 10.2. This type of standard controller is very common in industry. Using a microprocessor makes it possible to incorporate more features in the controller. For instance, autotuning and adaptivity can be introduced. From the beginning of the eighties computers have come into more general use, and the development has continued both in hardware and in software. Today most control systems are built up as distributed systems. Such a system can easily be expanded, and the reliability has increased with the distributed computation and communication. A typical architecture of a distributed control system is shown in Figure 10.3. Today a typical workstation has a computing capacity of 10 or more MIPS (Million Instructions Per Second) and a memory of 4-16 Mbyte.
Figure 10.3 Architecture of a distributed control system.
that the sampled signal is quantized both in time and amplitude. The time between the sampling instants is called the sampling period. The sampling is usually periodic, i.e. the times between the samples are equal. The sampling frequency ωs and the sampling period h are related through

ωs = 2π/h

The controller is an algorithm or program in the computer that determines the sequence of control signals to the process. The control signals are converted into an analog signal by using a digital-to-analog (D-A) converter and a hold circuit. The hold circuit converts the sequence of numbers to a time-continuous signal. The most common hold circuit is the zero-order hold, which keeps the signal constant between the D-A conversions. The end result is that the process is controlled by a piecewise constant signal. This implies that the output of the system can be regarded as a sequence of step responses for the open loop system. Due to the periodicity of the control signal there will be a periodicity also in the closed loop system. This makes it difficult to analyze sampled-data systems. The analysis will, however, be considerably simplified if we only consider the behavior of the system at the sampling points. We then only investigate how the output or the states of the system change from sampling time to sampling time. The system will then be described by recursive or difference equations.

Aliasing
When sampling the output of the process we must choose the sampling period. Intuitively it is clear that we may lose information if the sampling is too sparse. This is illustrated in Figure 10.5, where two sinusoidal signals are sampled. The sinusoidal signals have the frequencies 0.1 and 0.9 Hz respectively. When sampling with a frequency of 1 Hz the signals have the same value at each sampling instant. This implies that we can't distinguish between the signals after the sampling, and information is lost. The high frequency signal can be interpreted as a low frequency signal. This is called aliasing or frequency folding. The aliasing problem may become serious in control systems when
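The folding in Figure 10.5 is easy to reproduce. The sketch below (assuming NumPy; cosine components are used, for which the folding is exact) samples a 0.1 Hz and a 0.9 Hz signal at 1 Hz:

```python
import numpy as np

t = np.arange(0.0, 10.0, 1.0)        # sampling instants, 1 Hz
low = np.cos(2 * np.pi * 0.1 * t)    # 0.1 Hz component
high = np.cos(2 * np.pi * 0.9 * t)   # 0.9 Hz component

# The two signals agree at every sampling instant:
# the 0.9 Hz component folds down to 0.1 Hz.
print(np.allclose(low, high))        # True
```

After sampling, the two sequences are indistinguishable, exactly the information loss described above.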
Figure 10.5 Illustration of aliasing. The sinusoidal signals have the frequencies 0.1 and 0.9 Hz respectively. The sampling frequency is 1 Hz.
Figure 10.6 Feedwater heating in a ship boiler. The temperature is recorded with a sampling period of 2 min and shows an apparent 38 min oscillation; the pressure is recorded continuously.
the measured signal has a high frequency component. The sampled signal will then contain a low frequency alias signal, which the controller may try to compensate for.

Example 10.1|Aliasing
Figure 10.6 is a process diagram of the feedwater heating in a boiler of a ship. A valve controls the flow of water. Unintentionally there is a backlash in the valve positioner due to wear. This causes the temperature to oscillate. Figure 10.6 shows a sampled recording of the temperature and a continuous time recording of the pressure. The sampling period of the temperature is 2 min. From the temperature recording one might believe that there is an oscillation with a period of 38 min. The pressure recording, however, shows an oscillation with a period of 2.11 min. Due to the physical coupling between the temperature and the pressure we can conclude that the temperature must also have an oscillation with a period of 2.11 min. The 38 min oscillation is the alias frequency.
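The 38 min alias in Example 10.1 can be checked by folding the true frequency down below the Nyquist frequency (a sketch; the function name is ours):

```python
def alias_frequency(f, fs):
    """Frequency to which a component at f is folded when sampled at fs."""
    f = f % fs
    return min(f, fs - f)

fs = 1 / 2.0      # sampling frequency in 1/min (sampling period 2 min)
f = 1 / 2.11      # true oscillation: period 2.11 min
fa = alias_frequency(f, fs)
print(round(1 / fa, 1))   # alias period in minutes: 38.4
```

The true frequency lies just below the sampling frequency, so it folds down to a very low frequency, and the sampled temperature appears to oscillate with a period of about 38 min.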
Figure 10.7 Usefulness of a prefilter. a) Signal plus sinusoidal disturbance. b) The signal filtered through a sixth-order Bessel filter. c) Sample and hold of the signal in a). d) Sample and hold of the signal in b).
Can aliasing be avoided? The sampling theorem by Shannon answers this question. To be able to reconstruct the continuous time signal from the sampled signal it is necessary that the sampling frequency is at least twice as high as the highest frequency in the signal. This implies that there must be at least two samples per period. To avoid the aliasing problem it is necessary to filter all signals before the sampling. All frequencies above the Nyquist frequency

ω_N = ω_s/2 = π/h

should ideally be removed.
Example 10.2 – Prefiltering
Figure 10.7 a) shows a signal (the square wave) disturbed by high frequency sinusoidal measurement noise. Figure 10.7 c) shows the sample and hold of the signal. The aliasing is clearly visible. The signal is then prefiltered before the sampling, giving the signal in Figure 10.7 b). The measurement noise is eliminated, but some of the high frequency components of the desired signal are also removed. Sample and hold of the filtered signal is shown in Figure 10.7 d). The example shows that prefiltering is necessary and that we must compromise between the elimination of the noise and how high frequencies are left after the filtering. It is important that the bandwidth of the prefilter is adjusted to the sampling frequency.
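The compromise of Example 10.2 can be illustrated numerically. The sketch below is ours and uses a simple first-order lowpass in place of the sixth-order Bessel filter of the figure, so it only mimics the idea: the unfiltered samples show a slow alias oscillation, while the prefiltered samples are much smoother.

```python
import math

def lowpass(x, alpha):
    """First-order lowpass y[k] = (1 - alpha) * y[k-1] + alpha * x[k]."""
    y, out = 0.0, []
    for v in x:
        y = (1 - alpha) * y + alpha * v
        out.append(y)
    return out

# Square wave plus a high-frequency sinusoidal disturbance, as in Figure 10.7 a)
n = 400
square = [1.0 if (k // 100) % 2 == 0 else 0.0 for k in range(n)]
noisy = [s + 0.3 * math.sin(2 * math.pi * 0.9 * k / 10)
         for k, s in enumerate(square)]

filtered = lowpass(noisy, alpha=0.1)

# Keep every 10th point, i.e. the noise sits above the Nyquist frequency
raw = noisy[::10]
filt = filtered[::10]
print(max(raw[3:10]) - min(raw[3:10]),    # large alias ripple
      max(filt[3:10]) - min(filt[3:10]))  # much smaller after prefiltering
```

The filter also rounds off the edges of the square wave, which is exactly the compromise the example describes.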
Assume that the initial value is given and that the input signal is constant over the sampling interval. The solution of (10.1) is then given by (B.9). Let t and t + h be two consecutive sampling times. Then

x(t + h) = e^{Ah} x(t) + ∫_t^{t+h} e^{A(t+h−τ)} B u(τ) dτ
         = e^{Ah} x(t) + ( ∫_t^{t+h} e^{A(t+h−τ)} dτ ) B u(t)
         = e^{Ah} x(t) + ( ∫_0^h e^{Aτ'} dτ' ) B u(t)
         = Φ x(t) + Γ u(t)    (10.2)

where

Φ = e^{Ah},   Γ = ( ∫_0^h e^{Aτ} dτ ) B

In the second equality we have used the fact that the input is constant over the interval [t, t + h]. The third equality is obtained by the change of integration variable τ' = t + h − τ. A difference equation (10.2) is thus obtained, which determines how the state changes from time t to time t + h.
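The matrices Φ and Γ can be computed numerically from truncated series expansions. The following is a minimal sketch of ours for the scalar case (using a = −3, b = 2 from Example 10.4 below), where the exact answers Φ = e^{ah} and Γ = b(e^{ah} − 1)/a are available for comparison:

```python
import math

def zoh_discretize(a, b, h, terms=30):
    """Zero-order-hold sampling of dx/dt = a*x + b*u (scalar case):
    Phi = exp(a*h) and Gamma = (integral_0^h exp(a*s) ds) * b,
    both evaluated with truncated power series."""
    phi, gamma_int = 0.0, 0.0
    term = 1.0                              # (a*h)^k / k!
    for k in range(terms):
        phi += term
        gamma_int += term * h / (k + 1)     # a^k h^{k+1} / (k+1)!
        term *= a * h / (k + 1)
    return phi, gamma_int * b

phi, gamma = zoh_discretize(-3.0, 2.0, h=0.1)
print(phi, gamma)   # close to exp(-0.3) and 2*(1 - exp(-0.3))/3
```

The same series idea extends to matrices; see the computation of the matrix exponential in Appendix B.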
The sampled-data system is thus described by a difference equation. The solution of the difference equation is obtained recursively through iteration. In the same way as for continuous time systems it is possible to eliminate the states and derive an input-output model of the system. Compare Section 3.4. Transform theory plays the same role for sampled-data systems as the Laplace transform for continuous time systems. For sampled-data systems the z-transform is used. We can also introduce the shift operator q, which is defined as

qy(t) = y(t + h)

i.e. multiplication by the operator q implies a shift of the time argument one sampling period ahead. Multiplication by q^{-1} is the same as a backward shift. Using the shift operator on (10.2) gives
x(t) = (qI − Φ)^{-1} Γ u(t)

Compare (3.18). The input-output model is thus

y(t) = C(qI − Φ)^{-1} Γ u(t) = H(q) u(t)    (10.3)

where H(q) is called the pulse transfer function. Equation (10.3) corresponds to a higher order difference equation of the form

y(t) + a_1 y(t − h) + · · · + a_n y(t − nh) = b_1 u(t − h) + · · · + b_n u(t − nh)    (10.4)
The output of the difference equation (10.4) is thus a weighted sum of previous inputs and outputs. See Figure 10.8.
Figure 10.8 Illustration of the difference equation y(t) + a_1 y(t − h) + a_2 y(t − 2h) = b_1 u(t − h) + b_2 u(t − 2h).
Consider, for example, the first order difference equation

y(t + 1) = 0.6 y(t) + 0.5 u(t)    (10.5)

with initial value y(0) = 0 and a unit step input u(t) = 1. Iterating (10.5) gives
t     u(t)   y(t)
0     1      0
1     1      0.5
2     1      0.8
3     1      0.98
4     1      1.088
5     1      1.153
6     1      1.192
7     1      1.215
8     1      1.229
9     1      1.237
10    1      1.242
...   ...    ...
∞     1      1.25

The stationary value can be obtained by assuming that y(t + 1) = y(t) = y(∞) in (10.5). This gives
y(∞) = 0.5/(1 − 0.6) = 1.25

Given a continuous time system, and assuming that the control signal is constant over the sampling period, it is thus easy to get a representation that describes how the system behaves at the sampling instants. Stability, controllability, observability, etc. can then be investigated in a way very similar to continuous time systems. There are also design methods that are the same as for continuous time systems. This is, however, outside the scope of this book.
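The recursion is immediate to program. The short sketch below (the function name is ours) reproduces the table above:

```python
def simulate(y0, u, a=0.6, b=0.5):
    """Iterate the difference equation y(t+1) = a*y(t) + b*u(t)
    over the input sequence u, starting from y(0) = y0."""
    y = [y0]
    for uk in u:
        y.append(a * y[-1] + b * uk)
    return y

y = simulate(0.0, [1.0] * 10)
print([round(v, 3) for v in y])
# 0, 0.5, 0.8, 0.98, 1.088, ... approaching the stationary value 1.25
```

Running the recursion longer confirms the stationary value 0.5/(1 − 0.6) = 1.25 computed above.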
Figure 10.9 Approximating a continuous time transfer function, G(s), using a computer.
10.4 Approximations
It is sometimes of interest to make a transformation of a continuous time controller or a transfer function into a discrete time implementation. This can be done using approximations of the same kind as when making numerical integration. This is of interest when a good continuous time controller is available and we only want to replace the continuous time implementation by a sampled-data implementation. See Figure 10.9. The simplest way is to approximate the continuous time derivative using simple difference approximations. We can use the forward difference (Euler's approximation)

dy/dt ≈ (y(t + h) − y(t))/h    (10.6)

or the backward difference

dy/dt ≈ (y(t) − y(t − h))/h    (10.7)

Higher order derivatives are obtained by taking the difference several times. These types of approximations are good only if the sampling period is short. The forward difference approximation corresponds to replacing each s in the transfer function by (q − 1)/h. This gives a discrete time pulse transfer function H(q) that can be implemented as a computer program. Using the backward difference we instead replace each s by (1 − q^{-1})/h.
Example 10.4 – Difference approximation
Assume that we want to make a difference approximation of the transfer function

G(s) = 2/(s + 3)

The system is thus described by the differential equation

dy(t)/dt = −3y(t) + 2u(t)

Using the forward difference approximation we get

(y(t + h) − y(t))/h = −3y(t) + 2u(t)

Rearrangement of the terms gives the difference equation

y(t + h) = (1 − 3h)y(t) + 2h u(t)
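The two approximations can be compared with the exact step response of the system. A small sketch of ours (h = 0.05 is chosen only for illustration):

```python
import math

def euler_step(y, u, h):
    # forward difference: y(t+h) = (1 - 3h) y(t) + 2h u(t)
    return (1 - 3 * h) * y + 2 * h * u

def backward_step(y_prev, u, h):
    # backward difference: y(t) = (y(t-h) + 2h u(t)) / (1 + 3h)
    return (y_prev + 2 * h * u) / (1 + 3 * h)

h, T = 0.05, 2.0
ye = yb = 0.0
for _ in range(int(T / h)):
    ye = euler_step(ye, 1.0, h)
    yb = backward_step(yb, 1.0, h)

exact = (2 / 3) * (1 - math.exp(-3 * T))   # step response of G(s) = 2/(s+3)
print(round(ye, 3), round(yb, 3), round(exact, 3))   # all close to 0.665
```

For this stable system both approximations land close to the exact value; the difference between them shrinks as h is made smaller.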
Using the backward difference approximation we instead get

(y(t) − y(t − h))/h = −3y(t) + 2u(t)

or

y(t) = (y(t − h) + 2h u(t))/(1 + 3h)

The choice of the sampling interval depends on many factors. For the approximations above one rule of thumb is to choose

h ω_c ≈ 0.15 – 0.5

where ω_c is the crossover frequency (in rad/s) of the continuous time system. This rule gives quite short sampling periods. The Nyquist frequency will be about 5–20 times larger than the crossover frequency.
Example 10.5 – Approximation of a continuous time controller
Consider the process in Figure 10.10. There are two tanks in the process and the outlet of the second tank is kept constant by a pump. The control signal is the inflow to the first tank. The process can be assumed to have the transfer function

Gp(s) = 1/(s(s + 1))    (10.8)

i.e. there is an integrator in the process. Assume that the system is satisfactorily controlled by the continuous time controller

Gc(s) = 4 (s + 0.8)/(s + 3.2)    (10.9)

We now want to implement the controller using a computer. Using the Euler approximation gives

H(q) = 4 ((q − 1)/h + 0.8)/((q − 1)/h + 3.2) = 4 (q − 1 + 0.8h)/(q − 1 + 3.2h)    (10.10)

The crossover frequency of the continuous time system (10.8) in cascade with (10.9) is ω_c = 1.7 rad/s. The rule of thumb above gives a sampling period of about 0.1 – 0.3.
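Cross-multiplying (10.10) gives the difference equation actually executed by the computer. A small sketch of ours (variable names are illustrative):

```python
def controller_step(u_prev, e_new, e_prev, h):
    """One update of the sampled-data controller (10.10).
    Cross-multiplying H(q) = 4(q - 1 + 0.8h)/(q - 1 + 3.2h) gives
    u(t+h) = (1 - 3.2h) u(t) + 4 e(t+h) - 4(1 - 0.8h) e(t)."""
    return (1 - 3.2 * h) * u_prev + 4 * e_new - 4 * (1 - 0.8 * h) * e_prev

# Rule of thumb: h * wc in the range 0.15 - 0.5, with wc = 1.7 rad/s
wc = 1.7
print(round(0.15 / wc, 2), round(0.5 / wc, 2))   # 0.09 0.29

# Sanity check: for a constant error the output settles at the
# controller's zero-frequency gain 4 * 0.8 / 3.2 = 1
h, u = 0.2, 0.0
for _ in range(200):
    u = controller_step(u, 1.0, 1.0, h)
print(round(u, 3))   # 1.0
```

Note that the new control signal depends on the latest error e(t+h), which is why the update must be split into a fast part and a precomputed part, as discussed below.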
Figure 10.11 Process output and input when the process (10.8) is controlled by (10.9) and by the sampled-data controller (10.10) for a) h = 0.1, b) h = 0.2, c) h = 0.5. The dashed signals are the output and input when the continuous time controller is used.
Figure 10.11 shows the output and the input of the system when it is controlled by the continuous time controller (10.9) and by the sampled-data controller (10.10) for different sampling times.
It is important to stress the division of the computations into two parts. Everything that can be precomputed for the next sampling instant is done after the D-A conversion. When the most recent measurements are obtained after
the A-D conversion, only the computations necessary to calculate the next control signal are done. The rest of the calculations, such as the update of the states of the controller, are done during the time to the next sample. Compare the discussion of the digital implementation of the PID controller in Section 5.6. To implement one or two controllers in a computer is not a big problem. In general, we need many more controllers and also a good operator communication. The different controllers must be scheduled in time and the computer must be able to handle interrupts from the process and the operator. It is natural to think about control loops as concurrent activities that are running in parallel. However, the digital computer operates sequentially in time. Ordinary programming languages can only represent sequential activities. Real-time operating systems are used to convert the parallel activities into sequential operations. To do so it is important that the operating system supports time-scheduling, priorities, and sharing of resources. The problem of shared variables and resources is one of the key problems in real-time programming. Shared variables may be controller coefficients that are used by the controller, but they can also be changed by the operator. It is thus necessary to make sure that the system does not try to use data that is being modified by another process. If two processes want to use the same resource, such as input-output units or memory, it is necessary to make sure that the system does not deadlock in a situation where both processes are waiting for each other. The programming of the control algorithms has to be done using a real-time programming language. Such languages contain explicit notations for real-time behavior. Examples of real-time languages are Ada and Modula-2.
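The shared-variable problem can be illustrated in any language with threads. The sketch below is ours and uses Python purely as an illustration (the book's real-time examples are Ada and Modula-2): the controller parameters are protected by a mutual-exclusion lock, so the control loop can never read gains that are halfway through being changed by the operator.

```python
import threading

class SharedGains:
    """Controller parameters shared between the control loop and the
    operator interface, protected by a mutual-exclusion lock."""
    def __init__(self, kp, ki):
        self._lock = threading.Lock()
        self._kp, self._ki = kp, ki

    def set(self, kp, ki):
        # called from the operator task
        with self._lock:
            self._kp, self._ki = kp, ki

    def get(self):
        # called from the control task; returns a consistent pair
        with self._lock:
            return self._kp, self._ki

gains = SharedGains(kp=4.0, ki=0.8)
gains.set(2.0, 0.5)       # operator retunes the controller
print(gains.get())        # (2.0, 0.5)
```

Without the lock, the control task could read the new kp together with the old ki, which is exactly the inconsistency the text warns about.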
10.7 DDC-packages
Most manufacturers of control equipment have dedicated systems that are tailored for different applications. DDC-packages are typical products that contain the necessary tools for control and supervision. They contain aids for good operator interaction and it is possible to build complex control schemes using high level building blocks. The systems are usually centered around a common database that is used by the different parts of the system. A typical DDC-package contains functions for

- Arithmetic operations
- Logical operations
- Linearizing
- Selectors and limiters
- Controllers
- Input/output operations

There may be many different controllers available. Typical variants are

- PID controllers in absolute or velocity form
- PID controllers with pulse train output
- Cascade controllers
- Ratio controllers
- Filters
- Auto-tuning of controllers
- Adaptive controllers

Today it is also more and more common that the user can include algorithms that are tailored to the application. The controllers may then contain complex computations. The operator communication is usually built up around video displays, where the operator can select between overviews or detailed information about different parts of the process. The operator can also study trend curves and statistics about the process. The controllers can be switched between automatic, manual, and possibly also auto-tuning. The use of DDC-packages has considerably increased the information available and the possibilities for the operators to run the processes more efficiently.
10.8 Summary
In this chapter we have given a short overview of some aspects of sampled-data systems. It is important to consider the danger of aliasing. All signals must thus be filtered before they are sampled. We may look at computers as only a new way of making the implementation and not take full advantage of the capacity of the computer. This leads to simple difference approximations of continuous time transfer functions. It is, however, important to point out that using a computer it is also possible to make more complex and better controllers. We can use information from different parts of the process. We can also make more extensive calculations in order to improve the quality of the control.
This appendix gives the basic properties of the Laplace transform in Table A.1. The Laplace transforms of different time functions are given in Table A.2. The Laplace transform of the function f(t) is denoted F(s) and is obtained through

F(s) = L{f(t)} = ∫_{0−}^{∞} e^{−st} f(t) dt

provided the integral exists. The function F(s) is called the transform of f(t). The inverse Laplace transform is defined by

f(t) = L^{−1}{F(s)} = (1/(2πi)) ∫_{σ−i∞}^{σ+i∞} e^{st} F(s) ds
Appendix A
Table A.1 Properties of the Laplace transform (a > 0 where it appears).

 1. f_1(t) + f_2(t)                     F_1(s) + F_2(s)                      linearity
 2. e^{−at} f(t)                        F(s + a)                             damping
 3. f(t − a) for t − a > 0, else 0      e^{−as} F(s)                         time delay
 4. f(at)                               (1/a) F(s/a)                         stretching
 5. (1/a) f(t/a)                        F(as)
 6. (−t)^n f(t)                         d^n F(s)/ds^n
 7. f(t)/t                              ∫_s^∞ F(σ) dσ
 8. f_1(t) f_2(t)                       (1/(2πi)) ∫_{c−i∞}^{c+i∞} F_1(σ) F_2(s − σ) dσ
 9. ∫_0^t f_1(τ) f_2(t − τ) dτ          F_1(s) F_2(s)                        convolution
10. f'(t)                               sF(s) − f(0)                         derivation in t-plane
11. f''(t)                              s²F(s) − sf(0) − f'(0)
12. f^{(n)}(t)                          s^n F(s) − s^{n−1} f(0) − · · · − f^{(n−1)}(0)
13. ∫_0^t f(τ) dτ                       (1/s) F(s)                           integration in t-plane
14. lim_{t→∞} f(t) = lim_{s→0} sF(s)                                         final value theorem
15. lim_{t→+0} f(t) = lim_{s→∞} sF(s)                                        initial value theorem
Table A.2 Laplace transform pairs, F(s) and f(t).

 1. 1/s                  1
 2. 1/s²                 t
 3. 1/s³                 t²/2
 4. 1/s^{n+1}            t^n/n!
 5. 1/(s + a)            e^{−at}
 6. 1/(s + a)²           t e^{−at}
 7. s/(s + a)²           (1 − at) e^{−at}
 8. a/(s² + a²)          sin at
 9. a/(s² − a²)          sinh at
10. s/(s² + a²)          cos at
11. s/(s² − a²)          cosh at
12. a/(s(s + a))         1 − e^{−at}
13. 1/(1 + as)           (1/a) e^{−t/a}
14. 1/(s(1 + as))        1 − e^{−t/a}
15. 1/(s + a)^{n+1}      (t^n/n!) e^{−at}

For the second order system ω²/(s² + 2ζωs + ω²):

  ζ = 0:   ω sin ωt
  ζ < 1:   (ω/√(1 − ζ²)) e^{−ζωt} sin(ω√(1 − ζ²) t)
  ζ = 1:   ω² t e^{−ωt}
  ζ > 1:   (ω/√(ζ² − 1)) e^{−ζωt} sinh(ω√(ζ² − 1) t)

For the step response ω²/(s(s² + 2ζωs + ω²)):

  ζ < 1:   1 − (1/√(1 − ζ²)) e^{−ζωt} sin(ω√(1 − ζ²) t + φ),  φ = arctan(√(1 − ζ²)/ζ)
  ζ = 1:   1 − (1 + ωt) e^{−ωt}
  ζ > 1:   1 − (1/√(ζ² − 1)) e^{−ζωt} sinh(ω√(ζ² − 1) t + φ),  φ = artanh(√(ζ² − 1)/ζ)

Further pairs:

  a/((s² + a²)(s + b))        (1/√(a² + b²)) [sin(at − φ) + e^{−bt} sin φ],  φ = arctan(a/b)
  s/((s² + a²)(s + b))        (1/√(a² + b²)) [cos(at − φ) − e^{−bt} cos φ],  φ = arctan(a/b)
  1/(s(s + a)(s + b))         1/(ab) + (a e^{−bt} − b e^{−at})/(ab(b − a))
  1/((s + a)(s + b)(s + c))   ((b − c)e^{−at} + (c − a)e^{−bt} + (a − b)e^{−ct})/((b − a)(c − a)(b − c))
  2as/(s² + a²)²              t sin at
  1/√s                        1/√(πt)
  e^{−a√s}                    (a/(2√(πt³))) e^{−a²/(4t)}
Matrices
GOAL: To summarize the basic properties of matrices.
B.1 Introduction
Vectors and matrices are compact notations for describing many mathematical and physical relations. Matrices are today used in most books on automatic control. This appendix gives some of the most fundamental properties of matrices that are relevant for a basic course in process control. For more material on matrices we refer to courses in linear algebra and to books on matrices and linear algebra. A selection of references is given at the end of the appendix. Many authors prefer special notations, such as boldface or bars over the symbols, to indicate matrices. From the context there is usually no difficulty in distinguishing between scalars and matrices. For this reason we will not use any special notation to separate scalars and matrices. The only convention used is that lower case letters usually denote vectors while upper case letters usually denote general matrices. In the presentation of the material in the book and in this appendix we assume that the reader has knowledge about the basic properties of vectors and matrices, such as: vector, matrix, rules of computation for vectors and matrices, linear dependence, unit matrix, diagonal matrix, symmetric matrix, scalar product, transpose, inverse, basis, coordinate transformations, and determinants. We will introduce: eigenvalue, eigenvector, matrix function, and matrix exponential. These are concepts that are very useful in the analysis of dynamical systems. To make matrix calculations the program package Matlab can be recommended. This is a command driven package for matrix manipulations. Matlab contains commands for matrix inversion, eigenvalue and eigenvector calculations, etc.
Appendix B
Computational Rules

Matrix multiplication is associative and distributive:

A(BC) = (AB)C
(A + B)C = AC + BC

It has been assumed that the matrices have appropriate dimensions, such that the operations are defined. Notice that in general AB ≠ BA. If AB = BA then we say that A and B commute.
Trace

The trace of an n × n matrix A is denoted tr A and is defined as the sum of the diagonal elements, i.e.

tr A = Σ_{i=1}^{n} a_ii
Transpose

The transpose of a matrix A is denoted A^T. The transpose is obtained by letting the rows of A become the columns of A^T.

Example B.1 – Transpose
Let

A = [ 1  2
      3  4
      5  6 ]

Then

A^T = [ 1  3  5
        2  4  6 ]

We have the following computational rules:

(A^T)^T = A
(A + B)^T = A^T + B^T
(αA)^T = α A^T   (α scalar)
(AB)^T = B^T A^T

If A = A^T then A is said to be symmetric.
Linear Independence

The vectors x_1, ..., x_p of dimension n × 1 are linearly independent if

α_1 x_1 + α_2 x_2 + · · · + α_p x_p = 0    (B.1)

implies that α_1 = α_2 = · · · = α_p = 0. If (B.1) is satisfied for a nontrivial set of α_i:s then the vectors are linearly dependent.

Example B.2 – Linear dependence
Determine if the vectors

x_1 = [3 2 2 1]^T,   x_2 = [1 0 1 0]^T,   x_3 = [1 2 0 1]^T

are linearly dependent. To test the linear dependence we form

α_1 x_1 + α_2 x_2 + α_3 x_3 = [ 3α_1 + α_2 + α_3
                                2α_1 + 2α_3
                                2α_1 + α_2
                                α_1 + α_3 ] = 0

Rows two and four give the same relation. Row one is the same as row three plus row four. The system of equations is satisfied, for instance, when α_1 = 1, α_2 = −2 and α_3 = −1. This implies that x_1, x_2 and x_3 are linearly dependent. If n vectors of dimension n × 1 are linearly independent then they constitute a basis. All vectors in the n-dimensional space may then be written as linear combinations of the basis vectors.
Rank
The rank of a matrix A, rank A, is the number of linearly independent columns or rows in A. Introduce the row (column) operations:

- Exchange two rows (columns)
- Multiply a row (column) by a scalar α ≠ 0
- Add one row (column) to another row (column)
We then have the property that row or column operations don't change the rank.
Determinant
The determinant is a scalar that can be defined for quadratic matrices. The determinant of A is denoted det A or |A|. The determinant is calculated by expanding along, for instance, the ith row:

det A = Σ_{j=1}^{n} (−1)^{i+j} a_ij det A_ij

where det A_ij is the determinant of the matrix obtained from A by deleting the ith row and the jth column. The expansion can also be done along one of the columns.

Example B.3 – Determinants of low order matrices
For a scalar (1 × 1 matrix) A = [a] we have det A = a. For the 2 × 2 matrix

A = [ a11  a12
      a21  a22 ]

we get

det A = a11 a22 − a21 a12

The determinant of the 3 × 3 matrix

A = [ a11  a12  a13
      a21  a22  a23
      a31  a32  a33 ]

can be calculated using Sarrus' rule, or by expanding along the first row:

det A = a11 det A11 − a12 det A12 + a13 det A13
      = a11(a22 a33 − a23 a32) − a12(a21 a33 − a23 a31) + a13(a21 a32 − a22 a31)
      = a11 a22 a33 + a12 a23 a31 + a13 a21 a32 − a11 a23 a32 − a12 a21 a33 − a13 a22 a31
Inverse
A quadratic n × n matrix A is invertible or non-singular if there exists an n × n matrix B such that

AB = BA = I

where I is the unit matrix. B is called the inverse of A and is denoted A^{-1}. A matrix has an inverse if and only if rank A = n. If A does not have an inverse it is singular. The inverse can, for instance, be used to solve the linear system of equations
Ax = b
(B.2)
x = A^{-1} b
if A is invertible. If b = 0 in (B.2) we get the homogeneous system of equations
Ax = 0

If A is non-singular we have only one solution x = 0, called the trivial solution. However, if det A = 0 there may exist non-trivial solutions x ≠ 0.

Example B.4 – Inverse of a 2 × 2 matrix
The 2 × 2 matrix

A = [ a11  a12
      a21  a22 ]

has the inverse

A^{-1} = (1/det A) [  a22  −a12
                     −a21   a11 ]

where det A = a11 a22 − a21 a12.
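The formula of Example B.4 is easy to turn into code. A small sketch of ours (for illustration it reuses the matrix that appears in Example B.5 below):

```python
def inv2(a11, a12, a21, a22):
    """Inverse of a 2x2 matrix via the adjugate formula of Example B.4."""
    det = a11 * a22 - a21 * a12
    if det == 0:
        raise ValueError("matrix is singular")
    return [[a22 / det, -a12 / det],
            [-a21 / det, a11 / det]]

# Solve A x = b for A = [[4, 1], [-2, 1]], b = [1, 0]
inv = inv2(4, 1, -2, 1)
b = [1.0, 0.0]
x = [inv[0][0] * b[0] + inv[0][1] * b[1],
     inv[1][0] * b[0] + inv[1][1] * b[1]]
print(x)   # the solution of A x = b
```

For larger matrices this cofactor approach is numerically poor; Gaussian elimination or a library routine should be used instead.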
Transformation of a System
Consider the linear system

dx/dt = Ax + Bu
y = Cx + Du

We can introduce new states z = Tx to obtain a transformed system. It is assumed that T is nonsingular, such that x = T^{-1} z. In the new state variables

dz/dt = T dx/dt = TAx + TBu = TAT^{-1} z + TBu
y = Cx + Du = CT^{-1} z + Du

This defines the new system

dz/dt = Ã z + B̃ u
y = C̃ z + D̃ u

where

Ã = TAT^{-1},   B̃ = TB,   C̃ = CT^{-1},   D̃ = D    (B.3)

For certain choices of T the transformed system will get simple forms.
Eigenvalues and Eigenvectors

Consider the equation

Ax = λx    (B.4)

where A is an n × n matrix, x a column vector and λ a scalar. Equation (B.4) should be solved for x and λ. The vector x is an eigenvector and can be interpreted as a vector that after a transformation by A is scaled by the scalar λ. Equation (B.4) can be written as the homogeneous system of equations

(λI − A)x = 0

There exist non-trivial solutions, i.e. x ≠ 0, if

det(λI − A) = 0    (B.5)

This gives rise to an nth order equation in λ, and (B.5) thus has n solutions, called the eigenvalues of A. Equation (B.5) is called the characteristic equation of the matrix A. The eigenvectors x are found from (B.4) with the eigenvalues obtained from (B.5).

Example B.5 – Eigenvalues of a 2 × 2 matrix
For a 2 × 2 matrix we get the characteristic equation

det(λI − A) = det [ λ − a11   −a12
                     −a21     λ − a22 ]
            = λ² − (a11 + a22)λ + (a11 a22 − a12 a21)
            = λ² − (tr A)λ + det A = 0

There are two solutions

λ_{1,2} = tr A / 2 ± √( (tr A)²/4 − det A )

for which the homogeneous equation has solutions x ≠ 0. Consider in particular the case

A = [  4  1
      −2  1 ]

This matrix has the eigenvalues λ = 2 and λ = 3. The eigenvectors are obtained from the systems of equations

(λ = 2):  2x_1 + x_2 = 0,   −2x_1 − x_2 = 0
(λ = 3):  x_1 + x_2 = 0,    2x_1 + 2x_2 = 0

This gives the eigenvectors

x_1 = [ 1 −2 ]^T,   x_2 = [ 1 −1 ]^T

Only the directions of the eigenvectors are determined, not their sizes. There are several interesting connections between the properties of a quadratic matrix A and its eigenvalues. We have for instance:
- If rank A < n then at least one eigenvalue is equal to zero.
- If rank A = n then no eigenvalue is equal to zero.
- The sum of the diagonal elements is equal to the sum of the eigenvalues, i.e.

  tr A = Σ_{i=1}^{n} λ_i

The characteristic equation (B.5) can be written

λ^n + a_1 λ^{n−1} + · · · + a_n = 0

Cayley-Hamilton's theorem shows that every quadratic matrix satisfies its own characteristic equation, i.e.

A^n + a_1 A^{n−1} + · · · + a_n I = 0    (B.6)
The connections between the rank, determinant and inverse of a matrix are summarized in the following theorem.

Theorem B.1
Let A be a quadratic n × n matrix. Then the following statements are equivalent:
1. The columns of A constitute a basis.
2. The rows of A constitute a basis.
3. Ax = 0 has only the trivial solution x = 0.
4. Ax = b is solvable for all b.
5. A is nonsingular.
6. det A ≠ 0.
7. rank A = n.

The eigenvectors and eigenvalues can be used to find transformations such that the transformed system gets a simple form. Assume that the eigenvalues of A are distinct, i.e. λ_i ≠ λ_j for i ≠ j. Let X be the matrix with the eigenvectors as columns. We may then write (B.4) as

AX = XΛ    (B.7)

where Λ is a diagonal matrix with the eigenvalues λ_1, ..., λ_n on the diagonal. From (B.7),

X^{-1} A X = Λ

which we can interpret as a transformation with T = X^{-1}. Compare (B.3). It is always possible to diagonalize a matrix if it has distinct eigenvalues. If the matrix has several eigenvalues that are equal then the matrix may be transformed to a diagonal form or a block diagonal form. The block diagonal form is called the Jordan form.
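For small matrices the characteristic-equation route of Example B.5 is easy to check numerically. A sketch of ours (assuming real eigenvalues, i.e. (tr A)²/4 ≥ det A):

```python
import math

def eig2(a11, a12, a21, a22):
    """Eigenvalues of a 2x2 matrix from the characteristic equation
    lambda^2 - (tr A) lambda + det A = 0 (Example B.5).
    Assumes real eigenvalues."""
    tr = a11 + a22
    det = a11 * a22 - a12 * a21
    d = math.sqrt(tr * tr / 4 - det)
    return tr / 2 - d, tr / 2 + d

print(eig2(4, 1, -2, 1))   # (2.0, 3.0)
```

Here tr A = 5 and det A = 6, so the formula gives (5 ± 1)/2, reproducing the eigenvalues 2 and 3 of the example.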
Singular values

The singular values of a matrix A are defined as

σ_i = √λ_i

where λ_i are the eigenvalues of the matrix A^T A. The smallest one in magnitude is called the minimum singular value.
Matrix Functions

A matrix polynomial is defined as

f(A) = c_0 A^p + c_1 A^{p−1} + · · · + c_{p−1} A + c_p I

where the c_i are scalar numbers.

Example B.6 – Matrix polynomial
An example of a third order matrix polynomial is

f(A) = A³ + 3A² + A + I
A very important matrix function in connection with linear differential equations is the matrix exponential, e^A. This function is defined through a series expansion in the same way as for the scalar function e^x. If A is a quadratic matrix then

e^A = I + (1/1!)A + (1/2!)A² + (1/3!)A³ + · · ·

The matrix exponential has the following properties:
1. e^{At} e^{As} = e^{A(t+s)} for scalar t and s.
2. e^A e^B = e^{A+B} only if A and B commute, i.e. if AB = BA.
3. If λ_i is an eigenvalue of A then e^{λ_i t} is an eigenvalue of e^{At}.
Consider the linear system

dx/dt = Ax + Bu
y = Cx + Du    (B.8)

The solution of the state equation is

x(t) = e^{At} x(0) + ∫_0^t e^{A(t−τ)} B u(τ) dτ    (B.9)

This is verified by taking the derivative of x(t) with respect to t. This gives

dx(t)/dt = A e^{At} x(0) + A e^{At} ∫_0^t e^{−Aτ} B u(τ) dτ + e^{At} e^{−At} B u(t)
         = A ( e^{At} x(0) + ∫_0^t e^{A(t−τ)} B u(τ) dτ ) + B u(t)
         = A x(t) + B u(t)

The solution (B.9) thus satisfies (B.8). The solution consists of two parts. One depends on the initial value. The other part depends on u over the interval from 0 to t. The matrix exponential plays a fundamental role in the solution. The character of the solution depends on the matrix A and its eigenvalues. The matrix e^{At} is called the fundamental matrix of the linear system (B.8).
There are several ways to compute the matrix exponential

e^{At} = I + At + (1/2!)(At)² + (1/3!)(At)³ + · · ·    (B.10)

Since we usually use the matrix exponential in connection with the solution of the system equation, we have multiplied the matrix A by t in (B.10).

Series expansion. The most straightforward way to compute the matrix exponential is to use the definition (B.10). This method is well suited for computer computations. Different tricks must be used to speed up the convergence. See the references. For hand calculations this is usually not a good way, since it is necessary to identify the series expansions of e^{λt}, sin(ωt), etc.

Use of the Laplace transform. It can be shown that the Laplace transform of (B.10) is given by

L{e^{At}} = [sI − A]^{-1}    (B.11)

The right hand side of (B.11) can be calculated, and the elements of the matrix exponential are obtained by taking the inverse Laplace transform of the elements of [sI − A]^{-1}.

Use of diagonalization. Assume that A can be diagonalized, i.e. we can write A = T^{-1} Λ T where Λ is a diagonal matrix. Then

e^{At} = I + T^{-1} Λt T + (1/2!)(T^{-1} Λt T)(T^{-1} Λt T) + · · ·
       = T^{-1} ( I + Λt + (1/2!)(Λt)² + · · · ) T
       = T^{-1} e^{Λt} T
The matrix exponential of a diagonal matrix is

e^{Λt} = [ e^{λ_1 t}        0
                    ⋱
           0         e^{λ_n t} ]

where λ_1, ..., λ_n are the diagonal elements of Λ.

Use of Cayley-Hamilton's theorem. From (B.6) it follows that (At)^n can be written as a linear combination of (At)^{n−1}, (At)^{n−2}, ..., I. It then follows that all terms in (B.10) with exponents higher than or equal to n can be expressed as linear combinations of lower order terms. It is thus possible to write

e^{At} = α_0 I + α_1 (At) + · · · + α_{n−1} (At)^{n−1} = f(At)    (B.12)

where f is a polynomial of degree n − 1.
How do we get the coefficients α_0, ..., α_{n−1}? A necessary condition for (B.12) to be satisfied is that

e^{λ_i t} = f(λ_i t),   i = 1, ..., n    (B.13)

where the λ_i:s are the eigenvalues of A. If the eigenvalues of A are distinct then (B.13) is also a sufficient condition. If A has an eigenvalue λ_i with multiplicity k_i then (B.13) is replaced by

(e^{λ_i t})^{(j)} = f^{(j)}(λ_i t),   0 ≤ j ≤ k_i − 1    (B.14)

where the superscript (j) denotes the jth derivative of the function. This gives n equations for the n unknown α_i:s.

Example B.7 – Distinct eigenvalues
Let

A = [  0  1
      −1  0 ]

For this A we get

A² = −I,  A³ = −A,  A⁴ = I,  ...

and

e^{At} = I + At − (1/2!)It² − (1/3!)At³ + (1/4!)It⁴ + · · ·
       = I (1 − t²/2! + t⁴/4! − · · ·) + A (t − t³/3! + · · ·)
       = I cos t + A sin t
       = [  cos t  sin t
           −sin t  cos t ]

Using the Laplace transform method we get

[sI − A]^{-1} = [ s  −1 ]^{-1}  = [  s/(s²+1)   1/(s²+1)
                [ 1   s ]          −1/(s²+1)   s/(s²+1) ]
Using the table in Appendix A we can find the corresponding time function for each element:

e^{At} = L^{-1}{[sI − A]^{-1}} = [  cos t  sin t
                                   −sin t  cos t ]

Using Cayley-Hamilton's theorem we write

e^{At} = α_0 I + α_1 A

The eigenvalues of A are λ_1 = i and λ_2 = −i, which give the equations

e^{it} = α_0 + iα_1
e^{−it} = α_0 − iα_1

with the solution

α_0 = (e^{it} + e^{−it})/2 = cos t
α_1 = (e^{it} − e^{−it})/(2i) = sin t

This gives

e^{At} = cos t · I + sin t · A = [  cos t  sin t
                                   −sin t  cos t ]
The same technique works for repeated eigenvalues. Let

A = [ −1   1
       0  −1 ]

which has the eigenvalue λ = −1 with multiplicity 2. Assume that

e^{At} = α_0 I + α_1 At

This gives the condition

e^{−t} = α_0 − α_1 t

Differentiating with respect to λ, (B.14) gives the additional condition

t e^{−t} = α_1 t

and we get

α_1 = e^{−t},   α_0 = e^{−t} + t e^{−t}

Finally

e^{At} = α_0 I + α_1 At = [ e^{−t}  t e^{−t}
                            0       e^{−t}  ]
B.5 References
Fundamental books on matrices are, for instance:

Gantmacher, F. R. (1960): The Theory of Matrices, Vols. I and II, Chelsea, New York.
Bellman, R. (1970): Introduction to Matrix Analysis, McGraw-Hill, New York.

Computational aspects of matrices are found in:

Golub, G. H., and C. F. van Loan (1989): Matrix Computations, 2nd ed., Johns Hopkins University Press, Baltimore, MD.

Different ways to numerically compute the matrix exponential are reviewed in:

Moler, C. B., and C. F. van Loan (1978): "Nineteen dubious ways to compute the exponential of a matrix," SIAM Review, 20, 801-836.
Appendix C. English-Swedish dictionary

GOAL: To give proper Swedish translations.
A actuator st lldon A-D converter A-D omvandlare adaptive control adaptiv reglering aliasing vikning amplitude function amplitudfunktion amplitude margin amplitudmarginal analog-digital converter analog-digital omvandlare argument function argumentfunktion asymptotic stability asymptotisk stabilitet asynchronous net asynkronn t B backward di erence bak tdi erens backward method bakl ngesmetoden bandwidth bandbredd basis bas batch process satsvis process bias avvikelse, nollpunktsf rskjutning block diagram blockdiagram block diagram algebra blockdiagramalgebra Bode diagram Bodediagram Boolean algebra Boolesk algebra bounded input bounded output stability begr nsad insignal
break frequency brytfrekvens bumpless transfer st tfri verg ng C canonical form kanonisk form cascade control kaskadreglering causality kausalitet characteristic equation karakteristisk ekvation characteristic polynomial karateristiskt polynom chattering knatter combinatory network kombinatoriskt n t command signal b rv rde, referenssignal complementary sensitivity function komplement r k nslighetsfunktion computer control datorstyrning computer-controlled system datorstyrt system control styrning, reglering control error reglerfel control signal styrsignal controllability styrbarhet controller regulator controller gain regulatorf rst rkning controller structure
regulatorstruktur
257
258
Appendix C
English-Swedish dictionary
coupled systems kopplade system cross-over frequency D D-A converter D-A omvandlare damped frequency d mpad frekvens damping d mpning dead time d dtid, transporttid, tidsf rdr jning decade dekad decoupling s rkoppling delay time d dtid, transporttid, tidsf rdr jning derivative term derivataterm derivative time derivationstid determinant determinant di erence equation di erensekvation di erential equation di erentialekvation di erential operator di erentialoperator digital-analog converter digital-analog omvandalare direct digital control direkt digital styrning discrete time system tidsdiskret system disturbance st rning dynamic relation dynamiskt samband dynamic system dynamiskt system E eigenvalue egenv rde eigenvector egenvektor error fel external model extern modell F feedback terkoppling feedback control terkopplad reglering, sluten reglering feedback system terkopplat system feedforward framkoppling ltering ltrering nal value theorem slutv rdessatsen oating control ytande reglering forward di erence fram tdi erens
sk rningsfrekvens
free system fritt system frequency analysis frekvensanalys frequency domain approach frekvensdom n metod frequency domain model frekvensdom nmodell frequency function frekvensfunktion function chart funktionsschema fundamental matrix G gain f rst rkning gain function f rst rkningsfunktion gain margin amplitudmarginal gain scheduling parameterstyrH high frequency asymptote h gfrekvensasymptot hold circuit h llkrets homogeneous system homogent system hysterises hysteres I identi cation identi ering implementation implementering, f rverkligande impulse impuls impulse response impulssvar initial value initialv rde, begynnelsev rde initial value theorem begynnelsev rdesteoremet input insignal input-output model insignal-utsignalmodell input-output stability insignal-utsignalstabilitet instability instabilitet integral control integrerande reglering integral term integralterm integral time integraltid integrating controller integrerande regulator integrator integrator integrator saturation
integratorm ttning ning fundamentalmatrix
Appendix C
English-Swedish dictionary

integrator windup – integratoruppvridning
interaction – växelverkan
interlock – förregling
internal model – intern modell
inverse – invers
inverse response system – icke minimumfassystem

L
lag compensation – fasretarderande kompensering
Laplace transform – Laplacetransform
lead compensation – fasavancerande kompensering
limiter – begränsare, mättningsfunktion
linear dependent – linjärt beroende
linear independent – linjärt oberoende
linear quadratic controller – linjärkvadratisk regulator
linear system – linjärt system
linearizing – linjärisering
load disturbance – belastningsstörning
logical control – logikstyrning
logical expression – logiskt uttryck
loop gain – kretsförstärkning
low frequency asymptote – lågfrekvensasymptot

M
manual control – manuell styrning
mathematical model – matematisk modell
matrix – matris
measurement noise – mätbrus
measurement signal – mätsignal
minimum phase system – minimumfassystem
mode – mod
model based control – modellbaserad reglering
modeling – modellering, modellbygge
multiplicity – multiplicitet
multivariable system – flervariabelt system

N
natural frequency – naturlig frekvens
neper – neper
noise – brus, störning
nonlinear coupling – olinjär koppling
nonlinear system – olinjärt system
nonminimum phase system – icke minimumfassystem
Nyquist diagram – Nyquistdiagram
Nyquist's theorem – Nyquistteoremet

O
observability – observerbarhet
observer – observerare
octave – oktav
on-off control – till–från reglering, tvålägesreglering
open loop control – öppen styrning
open loop response – öppna systemets respons
operational amplifier – operationsförstärkare
operator guide – operatörshjälp
ordinary differential equation – ordinär differentialekvation
oscillation – svängning
output – utsignal, ärvärde
overshoot – översläng

P
parallel connection – parallellkoppling
Petri net – Petrinät
phase function – fasfunktion
phase margin – fasmarginal
PI-control – PI-reglering
PID-control – PID-reglering
PLC – programmerbart logiksystem
pole – pol
pole-placement – polplacering
position algorithm – positionsalgoritm
practical stability – praktisk stabilitet
prediction – prediktion
prediction horizon – prediktionshorisont
prediction time – prediktionstid
prefiltering – förfiltrering
process – process
process input – processinsignal
process mode – processmod
process output – processutsignal
process pole – processpol
process zero – processnollställe
programmable logical controllers – programmerbart logiksystem
proportional band – proportionalband
proportional control – proportionell reglering
proportional gain – proportionell förstärkning
pulse transfer function – pulsöverföringsfunktion

Q
quantization – kvantisering

R
ramp – ramp
ramp response – rampsvar
rank – rang
ratio control – kvotreglering
reachability – uppnåelighet
real-time programming – realtidsprogrammering
recursive equation – rekursiv ekvation
reference value – referensvärde, börvärde
relative damping – relativ dämpning
relative gain array – relativa förstärkningsmatrisen
relay – relä
reset time – integraltid
reset windup – integratoruppvridning
return difference – återföringsdifferens
return ratio – återföringskvot
rise-time – stigtid
robustness – robusthet
root-locus – rotort
Routh's algorithm – Rouths algoritm

S
sampled data system – tidsdiskret system
sampler – samplare
sampling – sampling
sampling frequency – samplingsfrekvens
sampling interval – samplingsintervall
sampling period – samplingsperiod
selector – väljare
sensitivity – känslighet
sensitivity function – känslighetsfunktion
sensor – mätgivare, sensor
sequential control – sekvensreglering
sequential net – sekvensnät
series connection – seriekoppling
set point – börvärde, referensvärde
set point control – börvärdesreglering
settling time – lösningstid
shift operator – skiftoperator
simplified Nyquist's theorem – förenklade Nyquistteoremet
single-loop controller – enloopsregulator
singular value – singulärt värde
singularity diagram – singularitetsdiagram
sinusoidal – sinusformad
smoothing – utjämning
solution time – lösningstid
split range control – uppdelat utstyrningsområde
stability – stabilitet
stability criteria – stabilitetskriterium
state – tillstånd
state graph – tillståndsgraf
state space controller – tillståndsregulator
state space model – tillståndsmodell
state transition matrix – dynamikmatris
state variable – tillståndsvariabel
static system – statiskt system
stationary error – stationärt fel
steady state error – stationärt fel
steady state gain – stationär förstärkning
steady state value – stationärvärde
step – steg
step response – stegsvar
superposition principle – superpositionsprincipen
synchronous net – synkronnät
synthesis – syntes, dimensionering

T
time constant – tidskonstant
time delay – tidsfördröjning
time domain approach – tidsdomänmetod
time-invariant system – tidsinvariant system
trace – spår
tracking – följning
transfer function – överföringsfunktion
transient – transient
transient analysis – transientanalys
translation principle – translationsprincip
transpose – transponat
truth table – sanningstabell
tuning – inställning

U
ultimate sensitivity method – självsvängningsmetoden
undamped frequency – naturlig frekvens
unit element – enhetselement
unit step – enhetssteg
unmodeled dynamics – omodellerad dynamik
unstable system – instabilt system

V
vector – vektor
velocity algorithm – hastighetsalgoritm

W
weighting function – viktfunktion
windup – uppvridning

Z
zero – nollställe
zero element – nollelement
zero order hold – nollte ordningens hållkrets