Mod1Lesson1_Introduction_to_feedback235

This document introduces feedback control systems, highlighting their significance in various engineering and scientific applications, as well as in natural systems. It defines control systems, explains their advantages, and provides a historical overview of their development, including key inventions and theories. Additionally, it distinguishes between open-loop and closed-loop systems, emphasizing the importance of feedback in compensating for disturbances.

Lesson 1.1: Introduction to Feedback Control Systems

1.1.1 Introduction

Control systems are an integral part of modern society. Numerous applications are all around us: The rockets fire, and the space shuttle lifts off to Earth orbit; in splashing cooling water, a metallic part is automatically machined; a self-guided vehicle delivering material to workstations in an aerospace assembly plant glides along the floor, seeking its destination. These are just a few examples of the automatically controlled systems that we can create.

Automatic control is essential in any field of engineering and science. It is an important and integral part of space-vehicle systems, robotic systems, modern manufacturing systems, and any industrial operation involving control of temperature, pressure, humidity, flow, and the like. It is desirable that most engineers and scientists be familiar with the theory and practice of automatic control.

We are not the only creators of automatically controlled systems; these systems
also exist in nature. Within our own bodies are numerous control systems, such
as the pancreas, which regulates our blood sugar. In time of “fight or flight,” our
adrenaline increases along with our heart rate, causing more oxygen to be
delivered to our cells. Our eyes follow a moving object to keep it in view; our
hands grasp the object and place it precisely at a predetermined location. Even
the nonphysical world appears to be automatically regulated. Models have been
suggested showing automatic control of student performance. The input to the
model is the student’s available study time, and the output is the grade. The
model can be used to predict the time required for the grade to rise if a sudden
increase in study time is available. Using this model, you can determine whether
increased study is worth the effort during the last week of the term.

1.1.2 Control System Definition

A control system consists of subsystems and processes (or plants) assembled for the purpose of obtaining a desired output with desired performance, given a specified input. Figure 1.1 shows a control system in its simplest form, where the input represents a desired output.
Figure 1.1 Simplified description of a control system

For example, consider an elevator. When the fourth-floor button is pressed on the first floor, the elevator rises to the fourth floor with a speed and floor-leveling accuracy designed for passenger comfort. The push of the fourth-floor button is an input that represents our desired output, shown as a step function in Figure 1.2. The performance of the elevator can be seen from the elevator response curve in the figure.

Figure 1.2 Elevator response

Two major measures of performance are apparent: (1) the transient response and
(2) the steady-state error. In our example, passenger comfort and passenger
patience are dependent upon the transient response. If this response is too fast,
passenger comfort is sacrificed; if too slow, passenger patience is sacrificed. The
steady-state error is another important performance specification since
passenger safety and convenience would be sacrificed if the elevator did not level
properly.
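To make these two performance measures concrete, here is a minimal numerical sketch (our own illustration; the model, gain, and numbers are assumptions, not taken from the text). It treats the elevator as a first-order lag whose effective gain is slightly below one, producing both a visible transient and a small steady-state leveling error.

```python
# Hypothetical first-order model of the elevator rising toward floor r:
#   dy/dt = (K * r - y) / tau
# r   : commanded floor (step input)
# tau : time constant governing the transient response
# K   : effective gain; K < 1 leaves a steady-state leveling error
def elevator_response(r=4.0, tau=2.0, K=0.95, dt=0.01, t_end=20.0):
    """Forward-Euler simulation; returns the final position."""
    y, t = 0.0, 0.0
    while t < t_end:
        y += dt * (K * r - y) / tau
        t += dt
    return y

final = elevator_response()
steady_state_error = 4.0 - final  # about 0.2 floors for K = 0.95
```

A smaller tau speeds up the transient (possibly at the cost of passenger comfort), while pushing K toward one shrinks the steady-state error.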

1.1.3 Advantages of Control Systems

With control systems we can move large equipment with precision that would
otherwise be impossible. We can point huge antennas toward the farthest
reaches of the universe to pick up faint radio signals; controlling these antennas
by hand would be impossible. Because of control systems, elevators carry us
quickly to our destination, automatically stopping at the right floor (Figure 1.3).
We alone could not provide the power required for the load and the speed; motors
provide the power, and control systems regulate the
position and speed.
Figure 1.3 a. Early elevators were controlled by hand ropes or an elevator
operator. Here a rope is cut to demonstrate the safety brake, an innovation in
early elevators; b. One of two modern Duo-lift elevators makes its way up the
Grande Arche in Paris. Two elevators are driven by one motor, with each car
acting as a counterbalance to the other. Today, elevators are fully automatic,
using control systems to regulate position and velocity.

We build control systems for four primary reasons:

1. Power amplification - For example, a radar antenna, positioned by the low-
power rotation of a knob at the input, requires a large amount of power for
its output rotation. A control system can produce the needed power
amplification, or power gain.
2. Remote control - Control systems are also useful in remote or dangerous
locations. For example, a remote-controlled robot arm can be used to pick
up material in a radioactive environment.
3. Convenience of input form - Control systems can also be used to provide
convenience by changing the form of the input. For example, in a
temperature control system, the input is a position on a thermostat. The
output is heat. Thus, a convenient position input yields a desired thermal
output.
4. Compensation for disturbances - Another advantage of a control system is
the ability to compensate for disturbances. Typically, we control such
variables as temperature in thermal systems, position and velocity in
mechanical systems, and voltage, current, or frequency in electrical
systems. The system must be able to yield the correct output even with a
disturbance.

1.1.4 A History of Control Systems

Liquid-Level Control

The Greeks began engineering feedback systems around 300 B.C. A water clock
invented by Ktesibios operated by having water trickle into a measuring
container at a constant rate. The level of water in the measuring container could
be used to tell time. For water to trickle at a constant rate, the supply tank had
to be kept at a constant level. This was accomplished using a float valve similar
to the water-level control in today’s flush toilets.

Soon after Ktesibios, the idea of liquid-level control was applied to an oil lamp by
Philon of Byzantium. The lamp consisted of two oil containers configured
vertically. The lower pan was open at the top and was the fuel supply for the
flame. The closed upper bowl was the fuel reservoir for the pan below. The
containers were interconnected by two capillary tubes and another tube, called
a vertical riser, which was inserted into the oil in the lower pan just below the
surface. As the oil burned, the base of the vertical riser was exposed to air, which
forced oil in the reservoir above to flow through the capillary tubes and into the
pan. The transfer of fuel from the upper reservoir to the pan stopped when the
previous oil level in the pan was reestablished, thus blocking the air from
entering the vertical riser. Hence, the system kept the liquid level in the lower
container constant.

Steam Pressure and Temperature Controls

Regulation of steam pressure began around 1681 with Denis Papin’s invention
of the safety valve. The concept was further elaborated on by weighting the valve
top. If the upward pressure from the boiler exceeded the weight, steam was
released, and the pressure decreased. If it did not exceed the weight, the valve
did not open, and the pressure inside the boiler increased. Thus, the weight on
the valve top set the internal pressure of the boiler. Also, in the seventeenth
century, Cornelis Drebbel in Holland invented a purely mechanical temperature
control system for hatching eggs. The device used a vial of alcohol and mercury
with a floater inserted in it. The floater was connected to a damper that controlled
a flame. A portion of the vial was inserted into the incubator to sense the heat
generated by the fire. As the heat increased, the alcohol and mercury expanded,
raising the floater, closing the damper, and reducing the flame. Lower
temperature caused the float to descend, opening the damper and increasing the
flame.

Speed Control

In 1745, speed control was applied to a windmill by Edmund Lee. Increasing winds pitched the blades farther back, so that less area was available. As the wind decreased, more blade area was available. William Cubitt improved on the idea in 1809 by dividing the windmill sail into movable louvers.

Also, in the eighteenth century, James Watt invented the flyball speed governor
to control the speed of steam engines. In this device, two spinning flyballs rise
as rotational speed increases. A steam valve connected to the flyball mechanism
closes with the ascending flyballs and opens with the descending flyballs, thus
regulating the speed.

Stability, Stabilization, and Steering

Control systems theory as we know it today began to crystallize in the latter half
of the nineteenth century. In 1868, James Clerk Maxwell published the stability
criterion for a third-order system based on the coefficients of the differential
equation. In 1874, Edward John Routh, using a suggestion from William
Kingdon Clifford that was ignored earlier by Maxwell, was able to extend the
stability criterion to fifth-order systems. In 1877, the topic for the Adams Prize
was “The Criterion of Dynamical Stability.” In response, Routh submitted a paper
entitled A Treatise on the Stability of a Given State of Motion and won the
prize. This paper contains what is now known as the Routh-Hurwitz criterion for
stability, which we will study in Chapter 6. Alexandr Michailovich Lyapunov also
contributed to the development and formulation of today’s theories and practice
of control system stability. A student of P. L. Chebyshev at the University of St.
Petersburg in Russia, Lyapunov extended the work of Routh to nonlinear
systems in his 1892 doctoral thesis, entitled The General Problem of Stability of
Motion.

During the second half of the 1800s, the development of control systems focused
on the steering and stabilizing of ships. In 1874, Henry Bessemer, using a gyro
to sense a ship’s motion and applying power generated by the ship’s hydraulic
system, moved the ship’s saloon to keep it stable (whether this made a difference
to the patrons is doubtful). Other efforts were made to stabilize platforms for
guns as well as to stabilize entire ships, using pendulums to sense the motion.

Twentieth-Century Developments
It was not until the early 1900s that automatic steering of ships was achieved.
In 1922, the Sperry Gyroscope Company installed an automatic steering system
that used the elements of compensation and adaptive control to improve
performance. However, much of the general theory used today to improve the
performance of automatic control systems is attributed to Nicholas Minorsky, a
Russian born in 1885. It was his theoretical development applied to the
automatic steering of ships that led to what we call today proportional-plus-
integral-plus-derivative (PID), or three-mode, controllers, which we will study in
Chapters 9 and 11.

In the late 1920s and early 1930s, H. W. Bode and H. Nyquist at Bell Telephone Laboratories developed the analysis of feedback amplifiers. These contributions evolved into the sinusoidal frequency analysis and design techniques currently used for feedback control systems.

In 1948, Walter R. Evans, working in the aircraft industry, developed a graphical technique to plot the roots of a characteristic equation of a feedback system whose parameter changed over a particular range of values. This technique, now known as the root locus, takes its place with the work of Bode and Nyquist in forming the foundation of linear control systems analysis and design theory.

Contemporary Applications

Today, control systems find widespread application in the guidance, navigation, and control of missiles and spacecraft, as well as planes and ships at sea. For example, modern ships use a combination of electrical, mechanical, and hydraulic components to develop rudder commands in response to desired heading commands. The rudder commands, in turn, result in a rudder angle that steers the ship.

We find control systems throughout the process control industry, regulating liquid levels in tanks, chemical concentrations in vats, as well as the thickness of fabricated material.

Modern developments have seen widespread use of the digital computer as part of control systems. For example, computers in control systems are used for industrial robots, spacecraft, and the process control industry. It is hard to visualize a modern control system that does not use a digital computer.

Control systems are not limited to science and industry. For example, a home
heating system is a simple control system consisting of a thermostat containing
a bimetallic material that expands or contracts with changing temperature. This
expansion or contraction moves a vial of mercury that acts as a switch, turning
the heater on or off. The amount of expansion or contraction required to move
the mercury switch is determined by the temperature setting.
Home entertainment systems also have built-in control systems. For example, in
an optical disk recording system microscopic pits representing the information
are burned into the disc by a laser during the recording process. During
playback, a reflected laser beam focused on the pits changes intensity. The light
intensity changes are converted to an electrical signal and processed as sound
or picture. A control system keeps the laser beam positioned on the pits, which
are cut as concentric circles. There are countless other examples of control
systems, from the everyday to the extraordinary.

1.1.5 System Configurations

Open-Loop Systems

A generic open-loop system is shown in Figure 1.4(a). It starts with a subsystem called an input transducer, which converts the form of the input to that used by the controller. The controller drives a process or a plant. The input is sometimes called the reference, while the output can be called the controlled variable. Other signals, such as disturbances, are shown added to the controller and process outputs via summing junctions, which yield the algebraic sum of their input signals using associated signs. For example, the plant can be a furnace or air-conditioning system, where the output variable is temperature. The controller in a heating system consists of fuel valves and the electrical system that operates the valves.

The distinguishing characteristic of an open-loop system is that it cannot compensate for any disturbances that add to the controller's driving signal (Disturbance 1). For example, if the controller is an electronic amplifier and Disturbance 1 is noise, then any additive amplifier noise at the first summing junction will also drive the process, corrupting the output with the effect of the noise. The output of an open-loop system is corrupted not only by signals that add to the controller's commands but also by disturbances at the output (Disturbance 2). The system cannot correct for these disturbances, either.

Open-loop systems, then, do not correct for disturbances and are simply
commanded by the input. For example, toasters are open-loop systems, as
anyone with burnt toast can attest. The controlled variable (output) of a toaster
is the color of the toast. The device is designed with the assumption that the
toast will be darker the longer it is subjected to heat. The toaster does not
measure the color of the toast; it does not correct for the fact that the toast is
rye, white, or sourdough, nor does it correct for the fact that toast comes in
different thicknesses.

As another example, assume that you calculate the amount of time you need to
study for an examination that covers three chapters to get an A. If the professor
adds a fourth chapter—a disturbance—you are an open-loop system if you do
not detect the disturbance and add study time to that previously calculated. The
result of this oversight would be a lower grade than you expected.

Figure 1.4 Block diagrams of control systems: a. open-loop system; b. closed-loop system

Closed-Loop (Feedback Control) Systems

The disadvantages of open-loop systems, namely sensitivity to disturbances and inability to correct for these disturbances, may be overcome in closed-loop systems. The generic architecture of a closed-loop system is shown in Figure 1.4(b).

The input transducer converts the form of the input to the form used by the
controller. An output transducer, or sensor, measures the output response and
converts it into the form used by the controller. For example, if the controller
uses electrical signals to operate the valves of a temperature control system, the
input position and the output temperature are converted to electrical signals.
The input position can be converted to a voltage by a potentiometer, a variable
resistor, and the output temperature can be converted to a voltage by a
thermistor, a device whose electrical resistance changes with temperature.
The first summing junction algebraically adds the signal from the input to the
signal from the output, which arrives via the feedback path, the return path from
the output to the summing junction. In Figure 1.4(b), the output signal is
subtracted from the input signal. The result is generally called the actuating
signal. However, in systems where both the input and output transducers have
unity gain (that is, the transducer amplifies its input by 1), the actuating signal’s
value is equal to the actual difference between the input and the output. Under
this condition, the actuating signal is called the error.

The closed-loop system compensates for disturbances by measuring the output response, feeding that measurement back through a feedback path, and comparing that response to the input at the summing junction. If there is any difference between the two responses, the system drives the plant, via the actuating signal, to make a correction. If there is no difference, the system does not drive the plant since the plant's response is already the desired response.
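The compensation mechanism described above can be sketched numerically (an illustration of ours, with an assumed plant model and gain, not a model from the text): the same constant disturbance corrupts an open-loop system's output in full, while a proportional feedback loop drives the error down.

```python
# Hedged sketch: the plant output tracks the drive u plus a disturbance d.
def open_loop(r, d):
    u = r              # command computed from the input alone
    return u + d       # the disturbance corrupts the output in full

def closed_loop(r, d, kp=50.0, dt=0.01, steps=200):
    y = 0.0
    for _ in range(steps):
        e = r - y                # actuating (error) signal at the summing junction
        u = kp * e               # proportional controller drives the plant
        y += dt * (u + d - y)    # simple first-order plant update
    return y

r, d = 1.0, 0.5
y_open = open_loop(r, d)      # off by the entire disturbance
y_closed = closed_loop(r, d)  # disturbance largely rejected
```

Raising the loop gain kp shrinks the residual error further, which previews the gain-adjustment idea discussed below.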

Closed-loop systems, then, have the obvious advantage of greater accuracy than
open-loop systems. They are less sensitive to noise, disturbances, and changes
in the environment. Transient response and steady-state error can be controlled
more conveniently and with greater flexibility in closed-loop systems, often by a
simple adjustment of gain (amplification) in the loop and sometimes by
redesigning the controller. We refer to the redesign as compensating the system
and to the resulting hardware as a compensator. On the other hand, closed-loop
systems are more complex and expensive than open-loop systems. A standard,
open-loop toaster serves as an example: It is simple and inexpensive. A closed-
loop toaster oven is more complex and more expensive since it has to measure
both color (through light reflectivity) and humidity inside the toaster oven. Thus,
the control systems engineer must consider the trade-off between the simplicity
and low cost of an open-loop system and the accuracy and higher cost of a
closed-loop system.

In many modern systems, the controller (or compensator) is a digital computer. The advantage of using a computer is that many loops can be controlled or compensated by the same computer through time sharing. Furthermore, any adjustments of the compensator parameters required to yield a desired response can be made by changes in software rather than hardware.

1.1.6 Analysis and Design Objectives

Analysis is the process by which a system's performance is determined. For example, we evaluate its transient response and steady-state error to determine whether they meet the desired specifications. Design is the process by which a system's performance is created or changed. For example, if a system's transient response and steady-state error are analyzed and found not to meet the specifications, then we change parameters or add components to meet the specifications.

A control system is dynamic: It responds to an input by undergoing a transient response before reaching a steady-state response that generally resembles the input. We have already identified these two responses and cited a position control system (an elevator) as an example. In this section, we discuss three major objectives of systems analysis and design: producing the desired transient response, reducing steady-state error, and achieving stability. We also address some other design concerns, such as cost and the sensitivity of system performance to changes in parameters.

Transient Response

Transient response is important. In the case of an elevator, a slow transient response makes passengers impatient, whereas an excessively rapid response makes them uncomfortable. If the elevator oscillates about the arrival floor for more than a second, a disconcerting feeling can result. Transient response is also important for structural reasons: Too fast a transient response could cause permanent physical damage. In a computer, transient response contributes to the time required to read from or write to the computer's disk storage. Since reading and writing cannot take place until the head stops, the speed of the read/write head's movement from one track on the disk to another influences the overall speed of the computer.

In this course, we establish quantitative definitions for transient response. We then analyze the system for its existing transient response. Finally, we adjust parameters or design components to yield a desired transient response—our first analysis and design objective.

Steady-State Response

Another analysis and design goal focuses on the steady-state response. As we have seen, this response resembles the input and is usually what remains after the transients have decayed to zero. For example, this response may be an elevator stopped near the fourth floor or the head of a disk drive finally stopped at the correct track. We are concerned about the accuracy of the steady-state response. An elevator must be level enough with the floor for the passengers to exit, and a read/write head not positioned over the commanded track results in computer errors. An antenna tracking a satellite must keep the satellite well within its beamwidth in order not to lose track. In this text we define steady-state errors quantitatively, analyze a system's steady-state error, and then design corrective action to reduce the steady-state error—our second analysis and design objective.
Stability

Discussion of transient response and steady-state error is moot if the system does not have stability. To explain stability, we start from the fact that the total response of a system is the sum of the natural response and the forced response. When you studied linear differential equations, you probably referred to these responses as the homogeneous and the particular solutions, respectively. The natural response describes the way the system dissipates or acquires energy. The form or nature of this response is dependent only on the system, not the input. On the other hand, the form or nature of the forced response is dependent on the input. Thus, for a linear system, we can write

Total response = Natural response + Forced response

For a control system to be useful, the natural response must

1. eventually approach zero, thus leaving only the forced response, or
2. oscillate.
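As a small worked illustration of this decomposition (our own example, not one from the text), consider a stable first-order system driven by a unit step:

```latex
% First-order system with a unit-step input:
%   dy/dt + a*y = 1,  y(0) = 0,  a > 0
% Solving gives y(t) = (1/a)(1 - e^{-at}), which splits into the
% natural (homogeneous) and forced (particular) parts:
y(t) \;=\; \underbrace{-\tfrac{1}{a}\,e^{-at}}_{\text{natural response}}
\;+\; \underbrace{\tfrac{1}{a}}_{\text{forced response}}
```

Since a > 0, the natural response decays to zero, leaving only the forced response 1/a; the system satisfies condition 1. If a were negative, the exponential would grow without bound, which is precisely the instability described next.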

In some systems, however, the natural response grows without bound rather than diminishing to zero or oscillating. Eventually, the natural response is so much
greater than the forced response that the system is no longer controlled. This
condition, called instability, could lead to self-destruction of the physical device
if limit stops are not part of the design. For example, the elevator would crash
through the floor or exit through the ceiling; an aircraft would go into an
uncontrollable roll; or an antenna commanded to point to a target would rotate,
line up with the target, but then begin to oscillate about the target with growing
oscillations and increasing velocity until the motor or amplifiers reached their
output limits or until the antenna was damaged structurally. A time plot of an
unstable system would show a transient response that grows without bound and
without any evidence of a steady-state response.

Control systems must be designed to be stable. That is, their natural response
must decay to zero as time approaches infinity or oscillate. In many systems the
transient response you see on a time response plot can be directly related to the
natural response. Thus, if the natural response decays to zero as time
approaches infinity, the transient response will also die out, leaving only the
forced response. If the system is stable, the proper transient response and
steady-state error characteristics can be designed. Stability is our third analysis
and design objective.
Other Considerations

The three main objectives of control system analysis and design have already been enumerated. However, other important factors must also be considered.
For example, factors affecting hardware selection, such as motor sizing to fulfill
power requirements and choice of sensors for accuracy, must be considered early
in the design. Finances are another consideration. Control system designers
cannot create designs without considering their economic impact. Such
considerations as budget allocations and competitive pricing must guide the
engineer. For example, if your product is one of a kind, you may be able to create
a design that uses more expensive components without appreciably increasing
total cost. However, if your design will be used for many copies, slight increases
in cost per copy can translate into many more dollars for your company to
propose during contract bidding and to outlay before sales.

Another consideration is robust design. System parameters considered constant during the design for transient response, steady-state errors, and stability change over time when the actual system is built. Thus, the performance of the system also changes over time and will not be consistent with your design. Unfortunately, the relationship between parameter changes and their effect on performance is not linear. In some cases, even in the same system, changes in parameter values can lead to small or large changes in performance, depending on the system's nominal operating point and the type of design used. Thus, the engineer wants to create a robust design so that the system will not be sensitive to parameter changes.

The control system design process

System Models

• Linear vs. non-linear
• Deterministic vs. stochastic
• Time-invariant vs. time-varying
– Are the coefficients functions of time?

A time-varying system is a system whose dynamics change over time. Consider the following three examples: a bicycle, a car, and a rocket.
 The model of the bicycle doesn't change much over time (almost no change during a ride). The mass of the bicycle, for instance, remains constant; the bicycle is thus time-invariant.
 The rocket is at the other extreme. As it burns a tremendous amount of fuel, its mass decreases quickly over short periods of time. This is an example of a time-varying system.
 The car is somewhere in between. Its mass changes as it burns fuel, but the change can be neglected, and the system can be approximated as a time-invariant system. Such an approximation would not affect the model's predictions by much.

• Continuous-time vs. Discrete-time
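The time-invariant versus time-varying distinction can be sketched in a few lines (an illustration of ours; the coefficients and numbers are assumed): the same first-order model is integrated once with a constant coefficient and once with a coefficient that drifts over time, as a rocket's effective dynamics do while it sheds mass.

```python
# Hedged sketch: integrate x' = -a(t) * x by forward Euler, where a(t)
# is either constant (time-invariant) or a function of time (time-varying).
def simulate(a_of_t, x0=1.0, dt=0.01, t_end=1.0):
    x, t = x0, 0.0
    while t < t_end:
        x += dt * (-a_of_t(t) * x)  # the coefficient may depend on time
        t += dt
    return x

x_invariant = simulate(lambda t: 1.0)            # constant coefficient
x_varying = simulate(lambda t: 1.0 + 0.5 * t)    # coefficient grows with time
# the time-varying system decays faster because a(t) keeps increasing
```

If the drift term 0.5 * t is small relative to the constant part, the two results nearly coincide, which is the sense in which the car above can be approximated as time-invariant.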

Test waveforms used in control systems
