T7 - State Feedback Analysis and Design - 2021
State Feedback
Analysis and Design
Page 1
What is state feedback
• Recall that a SISO state-variable system is described in matrix form as below (assuming no direct input term in the output equation):

$$\dot{\mathbf{x}}(t) = \mathbf{A}\mathbf{x}(t) + \mathbf{B}u(t), \qquad y(t) = \mathbf{C}\mathbf{x}(t)$$
Page 2
What is state feedback (2)
• The idea of state feedback is to measure the state variables and multiply them by some gains to create the input $u(t)$, so that the locations of the closed-loop eigenvalues can be adjusted.
• Let's begin by assuming that we wish to create a closed-loop system such that the state $\mathbf{x}(t)$ is driven to zero from any initial condition in a specified fashion (as determined by the design specifications). For an $n$th-order system, with $u(t) = -\mathbf{K}\mathbf{x}(t)$:

$$\Rightarrow\ \dot{\mathbf{x}}(t) = \mathbf{A}\mathbf{x}(t) + \mathbf{B}\big(-\mathbf{K}\mathbf{x}(t)\big) = (\mathbf{A} - \mathbf{B}\mathbf{K})\,\mathbf{x}(t)$$
$$\mathbf{C}_M = \begin{bmatrix} \mathbf{B} & \mathbf{A}\mathbf{B} & \mathbf{A}^2\mathbf{B} \end{bmatrix}$$

For the phase-variable form:

$$\Rightarrow\ \mathbf{A} = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ -a_0 & -a_1 & -a_2 \end{bmatrix},\quad \mathbf{B} = \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix},\quad \mathbf{A}\mathbf{B} = \begin{bmatrix} 0 \\ 1 \\ -a_2 \end{bmatrix},\quad \mathbf{A}^2\mathbf{B} = \begin{bmatrix} 1 \\ -a_2 \\ a_2^2 - a_1 \end{bmatrix}$$

$$\Rightarrow\ \mathbf{C}_M = \begin{bmatrix} 0 & 0 & 1 \\ 0 & 1 & -a_2 \\ 1 & -a_2 & a_2^2 - a_1 \end{bmatrix}$$

$$\Rightarrow\ \det \mathbf{C}_M = 0 - 0 + 1 \times \det\begin{bmatrix} 0 & 1 \\ 1 & -a_2 \end{bmatrix} = -1 \neq 0$$

The determinant is nonzero for any $a_0, a_1, a_2$: phase-variable form is one of the canonical controllable forms in the state-space representation.
Page 5
Example 1
• For the system shown below, determine the condition on parameter 𝑑 such that the
system is fully controllable.
• (Worked solutions of examples will be given either in lecture or uploaded later)
$$\dot{\mathbf{x}}(t) = \begin{bmatrix} -2 & 0 \\ d & -3 \end{bmatrix}\mathbf{x}(t) + \begin{bmatrix} 1 \\ 0 \end{bmatrix}u(t), \qquad y(t) = \begin{bmatrix} 0 & 1 \end{bmatrix}\mathbf{x}(t)$$

Answer: $d \neq 0$.
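One way to see this condition, as a MATLAB sketch (assuming the Symbolic Math Toolbox is available):

syms d
A = [-2 0; d -3];  B = [1; 0];
Cm = [B, A*B]          % controllability matrix [B AB] = [1 -2; 0 d]
det(Cm)                % = d, so the system is fully controllable iff d ~= 0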
Page 6
Practice Problem 1
• Determine whether the system shown below is controllable.
$$\dot{\mathbf{x}}(t) = \begin{bmatrix} -1 & 1 & 2 \\ 0 & -1 & 5 \\ 0 & 3 & -4 \end{bmatrix}\mathbf{x}(t) + \begin{bmatrix} 2 \\ 1 \\ 1 \end{bmatrix}u(t)$$

Answer: Controllable.

A = [-1 1 2; 0 -1 5; 0 3 -4];
B = [2; 1; 1];
Cm = ctrb(A, B);     % controllability matrix
Rank = rank(Cm)      % = 3 (full rank), so the system is controllable
Page 7
Observability
• In practice, it is not always possible to measure all the states; only some of the state variables (or a linear combination of them, as the output) are available.
• So we need to somehow estimate all of the state variables (or at least those which are not measurable) using the available information, namely the measured output $y(t)$ and the control input $u(t)$.
• We can design and construct a so-called observer to estimate the states if the system is observable.
• Observability is the ability of the system to provide estimation of the state variables.
• Definition: a system is fully observable if there exists a finite time $0 \le t \le T$ such that the initial state $\mathbf{x}(0)$ can be determined from the observation of the output $y(t)$, given the control input $u(t)$.
• For an $n$th-order system to be completely observable, the observability matrix $\mathbf{O}_M$ must be nonsingular. For SISO systems, $\mathbf{O}_M$ is an $n \times n$ square matrix, so if its determinant is nonzero the system is observable. For $n = 3$:

$$\mathbf{O}_M = \begin{bmatrix} \mathbf{C} \\ \mathbf{C}\mathbf{A} \\ \mathbf{C}\mathbf{A}^2 \end{bmatrix}$$

• For the phase-variable form:

$$\Rightarrow\ \mathbf{A} = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ -a_0 & -a_1 & -a_2 \end{bmatrix}, \qquad \mathbf{C} = \begin{bmatrix} 1 & 0 & 0 \end{bmatrix}$$

$$\Rightarrow\ \mathbf{C}\mathbf{A} = \begin{bmatrix} 0 & 1 & 0 \end{bmatrix}, \qquad \mathbf{C}\mathbf{A}^2 = \begin{bmatrix} 0 & 0 & 1 \end{bmatrix}$$

$$\Rightarrow\ \mathbf{O}_M = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}, \qquad \det \mathbf{O}_M = 1 \times 1 \times 1 = 1 \neq 0$$

Phase-variable form (without any zeros) is a fully observable state-space representation.
Page 9
Example 2
• Determine the controllability and observability of the system shown below.
• (Worked solutions of examples will be given either in lecture or uploaded later)
$$\dot{\mathbf{x}}(t) = \begin{bmatrix} 2 & 0 \\ -1 & 1 \end{bmatrix}\mathbf{x}(t) + \begin{bmatrix} 1 \\ -1 \end{bmatrix}u(t), \qquad y(t) = \begin{bmatrix} 1 & 1 \end{bmatrix}\mathbf{x}(t)$$

Answer: Uncontrollable and unobservable.
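A quick numerical check of both properties (a sketch using Control System Toolbox functions):

A = [2 0; -1 1];  B = [1; -1];  C = [1 1];
rank(ctrb(A, B))   % = 1 < 2, so the system is not controllable
rank(obsv(A, C))   % = 1 < 2, so the system is not observable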
Page 10
Full-state feedback control design
• If the system is controllable and observable, we can accomplish the design objective of placing the poles/eigenvalues precisely at the desired locations to meet the performance requirements.
• This is possible since, for any number of eigenvalues, we can introduce the same number of adjustable gains, which provides total control over pole placement. For an $n$th-order system, $\mathbf{K} = \begin{bmatrix} k_1 & k_2 & \cdots & k_n \end{bmatrix}$.
• If we wish the state feedback control system to provide reference tracking as well as state-variable control, we can add a reference signal (the desired output) to the feedback control law:

$$u(t) = -\mathbf{K}\mathbf{x}(t) + r(t)\ \Rightarrow\ \dot{\mathbf{x}}(t) = \mathbf{A}\mathbf{x}(t) - \mathbf{B}\mathbf{K}\mathbf{x}(t) + \mathbf{B}r(t)$$

• The closed-loop system is described by the following state equations:

$$\dot{\mathbf{x}}(t) = (\mathbf{A} - \mathbf{B}\mathbf{K})\,\mathbf{x}(t) + \mathbf{B}r(t), \qquad y(t) = \mathbf{C}\mathbf{x}(t)$$
Page 11
Pole placement technique
• Here are the steps for designing the state feedback gain matrix $\mathbf{K}$ (similar to other analytical design techniques you have learned so far):

$$\mathbf{K} = \begin{bmatrix} 0 & 0 & \cdots & 0 & 1 \end{bmatrix}\mathbf{C}_M^{-1}\left(\mathbf{A}^n + \alpha_{n-1}\mathbf{A}^{n-1} + \cdots + \alpha_1\mathbf{A} + \alpha_0\mathbf{I}\right)$$

where $s^n + \alpha_{n-1}s^{n-1} + \cdots + \alpha_1 s + \alpha_0 = 0$ is the desired characteristic equation and we substitute $\mathbf{A}$ for $s$ in the formula.
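This is Ackermann's formula, and it can be coded directly. A minimal MATLAB sketch (the function name is illustrative; in practice the built-in acker or place functions do the same job):

function K = ackermann_gain(A, B, alpha)
% alpha = [1 alpha_{n-1} ... alpha_1 alpha_0], the monic desired characteristic polynomial
n  = size(A, 1);
Cm = B;
for i = 1:n-1
    Cm = [Cm, A^i*B];          % controllability matrix [B AB ... A^(n-1)B]
end
K = [zeros(1, n-1), 1] / Cm * polyvalm(alpha, A);   % [0 ... 0 1]*inv(Cm)*phi_d(A)
end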
Page 12
Example 3
• For the plant shown below, design a state feedback control law to yield 15% overshoot
with a settling time of 0.5 seconds.
• (Worked solutions of examples will be given either in lecture or uploaded)
$$G(s) = \frac{10}{s^2 + 3s + 2}$$

Answer: $\mathbf{K} = \begin{bmatrix} k_1 & k_2 \end{bmatrix} = \begin{bmatrix} 237 & 13 \end{bmatrix}$.
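A sketch of how these numbers arise in MATLAB, assuming a phase-variable realization of the plant (the realization and the rounding are assumptions on my part; the slide only quotes the final gains):

zeta = -log(0.15)/sqrt(pi^2 + log(0.15)^2);   % damping ratio for 15% overshoot
wn   = 4/(zeta*0.5);                          % natural frequency from Ts = 4/(zeta*wn) = 0.5 s
A = [0 1; -2 -3];  B = [0; 1];                % phase-variable form of 10/(s^2 + 3s + 2)
K = acker(A, B, roots([1 2*zeta*wn wn^2]))    % approx [237.5 13.0], i.e. [237 13] as quoted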
Page 13
Practice Problem 2
• For the plant shown below, design a state feedback control law to yield a 9.5% overshoot
with a settling time of 0.74 seconds.
• Hint: choose the third desired pole/eigenvalue either close to the system zero (i.e., $p_3 = -5.1$) or 10 times farther to the left than the dominant desired closed-loop poles/eigenvalues.

$$G(s) = \frac{20(s+5)}{s(s+1)(s+4)}$$

Answer: $\mathbf{K} = \begin{bmatrix} k_1 & k_2 & k_3 \end{bmatrix} = \begin{bmatrix} 413.1 & 132.08 & 10.9 \end{bmatrix}$, with $p_3 = -5.1$.
Page 14
Observer design
• To be able to employ full-state feedback control, there must be enough sensors to measure all the states.
• Aside from the cost and complexity of having so many sensors (particularly in high-order systems), it is often not practically feasible to have access to all the state variables in a real system.
• So, if the system is observable, using the state-space equations we derived for the process, we can estimate the state variables to be used in the state feedback control law.
• The full-state observer is then given by the following equation:

$$\dot{\hat{\mathbf{x}}}(t) = \mathbf{A}\hat{\mathbf{x}}(t) + \mathbf{B}u(t) + \mathbf{L}\big(y(t) - \mathbf{C}\hat{\mathbf{x}}(t)\big)$$

• $\hat{\mathbf{x}}(t)$ is the estimate of the state variables $\mathbf{x}(t)$.
• $\mathbf{L}_{n\times 1} = \begin{bmatrix} l_1 & l_2 & \cdots & l_n \end{bmatrix}^T$ is the observer gain matrix (a column vector).
• $\hat{y}(t) = \mathbf{C}\hat{\mathbf{x}}(t)$ is the estimated output.

(Block diagram: the process, driven by $u(t)$, produces $y(t)$; a model copy $\mathbf{B}$, $1/s$, $\mathbf{C}$, $\mathbf{A}$ produces $\hat{y}(t)$, and the output error $y(t) - \hat{y}(t)$ is fed back through the observer gain $\mathbf{L}$.)
Page 15
Observer design (2)
• The goal of the observer design is to provide an estimate $\hat{\mathbf{x}}(t)$ so that $\hat{\mathbf{x}}(t) \to \mathbf{x}(t)$ as $t \to \infty$.
• Also, note that we don't know the precise initial condition $\mathbf{x}(0)$, so we need to provide an initial estimate $\hat{\mathbf{x}}(0)$ to the observer.
• To see how we can design the observer gain matrix $\mathbf{L}$, define the estimation error as below:

$$\mathbf{e}_{\mathbf{x}}(t) = \mathbf{x}(t) - \hat{\mathbf{x}}(t)$$

$$\Rightarrow\ \dot{\mathbf{e}}_{\mathbf{x}}(t) = \mathbf{A}\mathbf{x}(t) + \mathbf{B}u(t) - \Big[\mathbf{A}\hat{\mathbf{x}}(t) + \mathbf{B}u(t) + \mathbf{L}\big(\mathbf{C}\mathbf{x}(t) - \mathbf{C}\hat{\mathbf{x}}(t)\big)\Big]$$

$$\Rightarrow\ \dot{\mathbf{e}}_{\mathbf{x}}(t) = (\mathbf{A} - \mathbf{L}\mathbf{C})\big(\mathbf{x}(t) - \hat{\mathbf{x}}(t)\big) = (\mathbf{A} - \mathbf{L}\mathbf{C})\,\mathbf{e}_{\mathbf{x}}(t)$$

• To guarantee that $\mathbf{e}_{\mathbf{x}}(t) \to \mathbf{0}$ as $t \to \infty$ for any initial estimation error $\mathbf{e}_{\mathbf{x}}(0)$, $\mathbf{L}$ must be chosen such that the eigenvalues of $\mathbf{A} - \mathbf{L}\mathbf{C}$ are all in the left half-plane.
Page 16
Observer design (3)
• We notice that the estimation-error dynamics, $\dot{\mathbf{e}}_{\mathbf{x}}(t) = (\mathbf{A} - \mathbf{L}\mathbf{C})\,\mathbf{e}_{\mathbf{x}}(t)$ with $\mathbf{e}_{\mathbf{x}}(t) = \mathbf{x}(t) - \hat{\mathbf{x}}(t)$, are themselves a state-space model.
• We use the same pole placement technique to design the observer gain matrix $\mathbf{L}$ as we use for the state feedback gain matrix $\mathbf{K}$.
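In MATLAB the observer gain can be computed with the same pole-placement functions through duality, since $\mathbf{A} - \mathbf{L}\mathbf{C}$ and $\mathbf{A}^T - \mathbf{C}^T\mathbf{L}^T$ share the same eigenvalues. A minimal sketch (the plant and the desired poles below are placeholders, not taken from the slides):

A = [0 1; -2 -3];  C = [1 0];        % placeholder observable pair
p_obs = [-20, -22];                  % desired error-dynamics eigenvalues
L = place(A', C', p_obs)'            % design L' for the dual pair (A', C'), then transpose
eig(A - L*C)                         % returns -20 and -22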
Page 17
Full-state feedback system with observer
• Using Kalman’s separation principle, we can design the full-state
feedback controller and observer separately. This will guarantee stability
(for LTI systems). This can be done using the following three steps:
• Step 1: Assume all the states are available, and design the feedback gain matrix $\mathbf{K}$ for $u(t) = -\mathbf{K}\mathbf{x}(t)$ to achieve both stability and the desired performance.
  • Place the eig$(\mathbf{A} - \mathbf{B}\mathbf{K})$ at the desired locations determined by the performance specifications.
• Step 2: Construct the observer and design the observer gain matrix $\mathbf{L}$ independently to place the observer poles/eigenvalues at their desired locations.
  • It is preferred that the eig$(\mathbf{A} - \mathbf{L}\mathbf{C})$ are much faster than the eig$(\mathbf{A} - \mathbf{B}\mathbf{K})$, if possible!
• Step 3: Connect the observer to the state feedback gains to use the estimated state variables in the state feedback.
  • The stability of the system will be guaranteed so long as the eigenvalues of both the observer and the state feedback system are stable (in the left half-plane).
• The state feedback control law is then given by:

$$u(t) = -\mathbf{K}\hat{\mathbf{x}}(t), \quad \text{because } \hat{\mathbf{x}}(t) \to \mathbf{x}(t) \text{ as } t \to \infty$$
Page 18
Full-state feedback system with observer (2)
• The overall configuration of full-state feedback with observer is shown below:
(Block diagram: the plant is driven by $u(t) = -\mathbf{K}\hat{\mathbf{x}}(t)$ and produces $y(t)$; the observer, a model copy $\mathbf{B}$, $1/s$, $\mathbf{C}$, $\mathbf{A}$ corrected through the gain $\mathbf{L}$, produces the estimate $\hat{\mathbf{x}}(t)$, which feeds the state feedback gain $\mathbf{K}$.)
Page 19
Example 4
• Given the plant shown below, design a state observer to estimate the state variables.
The error dynamics should have a damping ratio of 0.8 and a natural frequency of 10
rad/s.
• (Worked solutions of examples will be given either in lecture or uploaded)
$$\dot{\mathbf{x}}(t) = \begin{bmatrix} 2 & 3 \\ -1 & 4 \end{bmatrix}\mathbf{x}(t) + \begin{bmatrix} 0 \\ 1 \end{bmatrix}u(t), \qquad y(t) = \begin{bmatrix} 1 & 0 \end{bmatrix}\mathbf{x}(t)$$

Answer:

$$\mathbf{L} = \begin{bmatrix} L_1 \\ L_2 \end{bmatrix} = \begin{bmatrix} 22 \\ 59 \end{bmatrix}$$

$$\dot{\hat{\mathbf{x}}}(t) = \begin{bmatrix} 2 & 3 \\ -1 & 4 \end{bmatrix}\hat{\mathbf{x}}(t) + \begin{bmatrix} 0 \\ 1 \end{bmatrix}u(t) + \begin{bmatrix} 22 \\ 59 \end{bmatrix}\big(y(t) - \hat{x}_1(t)\big)$$
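A quick numerical check of this result (a sketch):

A = [2 3; -1 4];  C = [1 0];  L = [22; 59];
eig(A - L*C)                % = -8 +/- 6i
roots([1 2*0.8*10 10^2])    % desired error-dynamics poles for zeta = 0.8, wn = 10 rad/s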
Page 20
Practice Problem 3
• Design an observer for the phase variables with a transient response described by 𝜁 =
0.7 and 𝜔𝑛 = 100 rad/s.
• This is a time-scaled model for the body’s blood glucose level where the output is the deviation
in glucose concentration from its mean value in mg/100 mL, and the input is the intravenous
glucose injection rate in g/kg/hr (Milhorn, 1966)
$$G(s) = \frac{407(s + 0.916)}{(s + 1.27)(s + 2.69)}$$

Answer:

$$\mathbf{L} = \begin{bmatrix} L_1 \\ L_2 \end{bmatrix} = \begin{bmatrix} -38.397 \\ 35.506 \end{bmatrix}$$

$$\dot{\hat{\mathbf{x}}}(t) = \begin{bmatrix} 0 & 1 \\ -3.42 & -3.96 \end{bmatrix}\hat{\mathbf{x}}(t) + \begin{bmatrix} 0 \\ 1 \end{bmatrix}u(t) + \begin{bmatrix} -38.397 \\ 35.506 \end{bmatrix}\Big(y(t) - \begin{bmatrix} 372.81 & 407 \end{bmatrix}\hat{\mathbf{x}}(t)\Big)$$
Page 21
DC gain compensation for steady-state error
• Let's consider full-state feedback assuming all states are measurable. Find the closed-loop transfer function $T(s)$:

$$s\mathbf{X}(s) = (\mathbf{A} - \mathbf{B}\mathbf{K})\mathbf{X}(s) + \mathbf{B}R(s)\ \Rightarrow\ \mathbf{X}(s) = \big(s\mathbf{I} - (\mathbf{A} - \mathbf{B}\mathbf{K})\big)^{-1}\mathbf{B}R(s), \qquad Y(s) = \mathbf{C}\mathbf{X}(s)$$

$$\Rightarrow\ Y(s) = \mathbf{C}\big(s\mathbf{I} - (\mathbf{A} - \mathbf{B}\mathbf{K})\big)^{-1}\mathbf{B}R(s)\ \Rightarrow\ T(s) = \frac{Y(s)}{R(s)} = \mathbf{C}\big(s\mathbf{I} - (\mathbf{A} - \mathbf{B}\mathbf{K})\big)^{-1}\mathbf{B}$$
• One major issue with this control configuration is that parameter or model uncertainty would make the prefilter gain $N$ (intended to be the reciprocal of the DC gain of the closed-loop system without the prefilter) differ from the reciprocal of the actual DC gain, thus causing some steady-state error.
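A minimal sketch of computing such a prefilter gain in MATLAB, reusing the Example 3 design (the phase-variable realization is my assumption; the slide does not give one):

A = [0 1; -2 -3];  B = [0; 1];  C = [10 0];  K = [237 13];
sys_cl = ss(A - B*K, B, C, 0);   % closed-loop system from r to y, without a prefilter
N = 1/dcgain(sys_cl)             % u = -K*x + N*r then gives unity DC gain from r to y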
Page 24
Integral control
• To ensure a state feedback control system has both zero steady-state error (for a step input) and input disturbance rejection, integral action on the error signal can be introduced in the forward path as below.
• Define a new state variable $x_N$ to be the integral of the error signal:

$$x_N = \int e\, dt\ \Rightarrow\ \dot{x}_N = e = r - y = r - \mathbf{C}\mathbf{x}$$

• The system dynamics are:

$$\dot{\mathbf{x}}(t) = \mathbf{A}\mathbf{x}(t) + \mathbf{B}u(t), \qquad y(t) = \mathbf{C}\mathbf{x}(t)$$
Page 25
Integral control (2)
• We can augment the new state equation $\dot{x}_N = r - \mathbf{C}\mathbf{x}$ onto the main state-space model:

$$\begin{bmatrix} \dot{\mathbf{x}}(t) \\ \dot{x}_N(t) \end{bmatrix} = \begin{bmatrix} \mathbf{A} & \mathbf{0} \\ -\mathbf{C} & 0 \end{bmatrix}\begin{bmatrix} \mathbf{x}(t) \\ x_N(t) \end{bmatrix} + \begin{bmatrix} \mathbf{B} \\ 0 \end{bmatrix}u(t) + \begin{bmatrix} \mathbf{0} \\ 1 \end{bmatrix}r(t)$$

$$y(t) = \begin{bmatrix} \mathbf{C} & 0 \end{bmatrix}\begin{bmatrix} \mathbf{x}(t) \\ x_N(t) \end{bmatrix}$$

(Here $\mathbf{0}$ refers to a zero vector of the appropriate dimension.)

• As shown in the control system configuration (block diagram), we can introduce an integral gain $K_e$ to control the augmented state vector via the state feedback control law:

$$u(t) = -\mathbf{K}\mathbf{x}(t) + K_e x_N(t) = -\begin{bmatrix} \mathbf{K} & -K_e \end{bmatrix}\begin{bmatrix} \mathbf{x}(t) \\ x_N(t) \end{bmatrix} = \begin{bmatrix} -\mathbf{K} & K_e \end{bmatrix}\begin{bmatrix} \mathbf{x}(t) \\ x_N(t) \end{bmatrix}$$
Page 26
Integral control (3)
• So the closed-loop system is given by:

$$\begin{bmatrix} \dot{\mathbf{x}}(t) \\ \dot{x}_N(t) \end{bmatrix} = \begin{bmatrix} \mathbf{A} & \mathbf{0} \\ -\mathbf{C} & 0 \end{bmatrix}\begin{bmatrix} \mathbf{x}(t) \\ x_N(t) \end{bmatrix} + \begin{bmatrix} \mathbf{B} \\ 0 \end{bmatrix}u(t) + \begin{bmatrix} \mathbf{0} \\ 1 \end{bmatrix}r(t), \qquad y(t) = \begin{bmatrix} \mathbf{C} & 0 \end{bmatrix}\begin{bmatrix} \mathbf{x}(t) \\ x_N(t) \end{bmatrix}$$

• Substituting the control law $u(t) = \begin{bmatrix} -\mathbf{K} & K_e \end{bmatrix}\begin{bmatrix} \mathbf{x}(t) \\ x_N(t) \end{bmatrix}$:

$$\begin{bmatrix} \dot{\mathbf{x}}(t) \\ \dot{x}_N(t) \end{bmatrix} = \left(\begin{bmatrix} \mathbf{A} & \mathbf{0} \\ -\mathbf{C} & 0 \end{bmatrix} + \begin{bmatrix} -\mathbf{B}\mathbf{K} & \mathbf{B}K_e \\ \mathbf{0} & 0 \end{bmatrix}\right)\begin{bmatrix} \mathbf{x}(t) \\ x_N(t) \end{bmatrix} + \begin{bmatrix} \mathbf{0} \\ 1 \end{bmatrix}r(t)$$

$$\Rightarrow\ \begin{bmatrix} \dot{\mathbf{x}}(t) \\ \dot{x}_N(t) \end{bmatrix} = \begin{bmatrix} \mathbf{A} - \mathbf{B}\mathbf{K} & \mathbf{B}K_e \\ -\mathbf{C} & 0 \end{bmatrix}\begin{bmatrix} \mathbf{x}(t) \\ x_N(t) \end{bmatrix} + \begin{bmatrix} \mathbf{0} \\ 1 \end{bmatrix}r(t)$$

$$y(t) = \begin{bmatrix} \mathbf{C} & 0 \end{bmatrix}\begin{bmatrix} \mathbf{x}(t) \\ x_N(t) \end{bmatrix}$$
Page 27
Example 5
• Given the plant shown below, design a state feedback controller with integral control to
yield a 10% overshoot and a settling time of 0.5 seconds. Evaluate the steady-state
error for a unit step.
• (Worked solutions of examples will be given either in lecture or uploaded)

$$\dot{\mathbf{x}}(t) = \begin{bmatrix} 0 & 1 \\ -3 & -5 \end{bmatrix}\mathbf{x}(t) + \begin{bmatrix} 0 \\ 1 \end{bmatrix}u(t), \qquad y(t) = \begin{bmatrix} 1 & 0 \end{bmatrix}\mathbf{x}(t)$$

Answer: $\mathbf{K} = \begin{bmatrix} k_1 & k_2 \end{bmatrix} = \begin{bmatrix} 1780.1 & 111 \end{bmatrix}$, $K_e = 18310$.

A = [0 1; -3 -5];  B = [0; 1];  C = [1 0];        % plant from above
A_integ = [A zeros(2,1); -C 0];                   % augmented system matrix
B_integ = [B; 0];
Pcl = roots([1 16 183.1]);                        % dominant poles for %OS = 10, Ts = 0.5 s
K_integ = acker(A_integ, B_integ, [-100 Pcl(1) Pcl(2)]);
K  = K_integ(1:2);
Ke = -K_integ(3);   % acker returns the gain for u = -[K -Ke]*x_aug, so Ke needs a sign flip
% In Simulink, you can already use -Ke so that you don't have to reverse the sign in MATLAB.
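Continuing the script above, a quick check (a sketch) that the design gives zero steady-state error for a unit step:

Acl = [A - B*K, B*Ke; -C, 0];          % closed-loop augmented system matrix
step(ss(Acl, [0; 0; 1], [C, 0], 0))    % from r to y; the response settles at 1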
Page 28
Practice Problem 4
• Given the plant shown below, design a state feedback controller with integral control to
yield a 10% overshoot and a peak time of 2 seconds. Evaluate the steady-state error
for a unit step (note that there is a zero in the system).
$$\dot{\mathbf{x}}(t) = \begin{bmatrix} 0 & 1 \\ -7 & -9 \end{bmatrix}\mathbf{x}(t) + \begin{bmatrix} 0 \\ 1 \end{bmatrix}u(t), \qquad y(t) = \begin{bmatrix} 4 & 1 \end{bmatrix}\mathbf{x}(t)$$

Answer: $\mathbf{K} = \begin{bmatrix} k_1 & k_2 \end{bmatrix} = \begin{bmatrix} 2.21 & -2.7 \end{bmatrix}$, $K_e = 3.79$.
Page 29
Optimal control systems
• Optimality is a characteristic of a control system where the controller is designed to deliver the best performance based on some optimal control objectives.
• For example, you might want to optimise the performance of the control system in terms of minimising control operation time, minimising the tracking error, maximising the efficiency, or, in the majority of cases, minimising the energy consumption in the system.
• This is generally known as an optimal control problem, where the controller is designed by minimising or maximising a particular cost function (objective function or loss function), written mostly in integral form:

$$J = \int_0^\infty g(\mathbf{x}, u, t)\, dt$$

• where $\mathbf{x}$ is the state vector and $u$ is the control input, which should be found in such a way that it minimises the cost function $J$:

$$u_{opt} = \arg\min_{u \in \mathbb{R}} J, \qquad \text{subject to the system dynamics } \dot{\mathbf{x}}(t) = \mathbf{A}\mathbf{x}(t) + \mathbf{B}u(t),\ \ y(t) = \mathbf{C}\mathbf{x}(t)$$
Page 30
Linear quadratic regulator – LQR
• In the context of full-state feedback control, if we wish to find the feedback gain matrix $\mathbf{K}$ such that it minimises the total energy consumption in the control system, we can consider the following quadratic cost function for a SISO system (other forms are not considered here):

$$J = \int_0^\infty \big(\mathbf{x}^T\mathbf{Q}\mathbf{x} + Ru^2\big)\, dt$$

• $\int_0^\infty \mathbf{x}^T\mathbf{Q}\mathbf{x}\, dt$ corresponds to the total energy stored in the state variables.
• $\int_0^\infty Ru^2\, dt$ corresponds to the total energy in the control input.
• $\mathbf{Q}_{n\times n}$ (symmetric and positive definite) and $R_{1\times 1}$ are constant weighting matrices which penalise the energy consumption. The higher the penalising weight, the more the controller tries to reduce the energy in the corresponding variable.
• Conceptually, if we wish to reduce the energy in each of the state variables, they need to reach their steady-state values in a short period of time. Initial conditions cannot be controlled, so to reduce the area beneath the graph for each state, we need to shorten the time.
• This means the system would require a much higher control input (higher input energy) to steer the state variables quickly.
• These two contradictory objectives are balanced with a proper choice of penalising weights in the positive definite matrices $\mathbf{Q}$ and $R$.
Page 31
Linear quadratic regulator – LQR (2)
• Therefore, considering the quadratic cost function $J = \int_0^\infty (\mathbf{x}^T\mathbf{Q}\mathbf{x} + Ru^2)\, dt$, to find an optimal state feedback gain matrix $\mathbf{K}$ all we need to do is choose the penalising weights $\mathbf{Q}$ and $R$ and solve the so-called Algebraic Riccati Equation:

$$\mathbf{A}^T\mathbf{P} + \mathbf{P}\mathbf{A} + \mathbf{Q} - \mathbf{P}\mathbf{B}R^{-1}\mathbf{B}^T\mathbf{P} = \mathbf{0}$$

$$\mathbf{K} = R^{-1}\mathbf{B}^T\mathbf{P}, \qquad u(t) = -\mathbf{K}\mathbf{x}(t)$$
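In MATLAB this is handled by the lqr function, which solves the Riccati equation internally. A minimal sketch (the plant and weights below are placeholders):

A = [0 1; -2 -3];  B = [0; 1];        % placeholder plant
Q = eye(2);  R = 1;                   % placeholder weights
[K, P, cl_poles] = lqr(A, B, Q, R);   % P solves the Riccati equation above, K = inv(R)*B'*P
% cl_poles are the resulting closed-loop eigenvalues eig(A - B*K)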
Page 32
Linear quadratic regulator – LQR (3)
• Bryson's rule provides a simple choice for selecting the penalising weights $\mathbf{Q}$ and $R$: select $\mathbf{Q}$ as a diagonal matrix and choose each element on the main diagonal of $\mathbf{Q}$, and the scalar $R$, as below:

$$Q_{ii} = \frac{1}{\text{maximum acceptable value of } x_i^2}, \qquad R = \frac{1}{\text{maximum acceptable value of } u^2}$$

• The maximum acceptable value for the square of a signal is the maximum acceptable energy (or amplitude squared) of that signal allowed in the system.
• For instance, for a $\pm 12\,$V DC motor, $R$ can be chosen as $R = \frac{1}{12^2}$, and for a robot-arm position limited to $\pm\frac{\pi}{4}$ we can have $Q_{ii} = \frac{1}{(\pi/4)^2}$.
• LQR is not used everywhere:
  • The diagonal configuration of $\mathbf{Q}$ and $R$ limits the possible positions of the closed-loop eigenvalues.
  • Pole placement is more flexible in this regard, but it could come at the cost of a large control input.
  • Non-diagonal $\mathbf{Q}$ and $R$ provide more freedom in pole placement, but they will be much harder to design.
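A sketch of Bryson's rule for the two limits quoted above (the weight for the second state is an arbitrary placeholder, since no limit is given for it):

u_max  = 12;                     % +/- 12 V input limit
x1_max = pi/4;                   % +/- pi/4 rad arm-position limit
Q = diag([1/x1_max^2, 1]);       % 1 for the unconstrained state is a placeholder choice
R = 1/u_max^2;
% K = lqr(A, B, Q, R);           % with the plant's A and B matrices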
Page 33
Questions?
Page 34