Control System
To gain some insight into how an automatic control system operates we shall briefly examine the speed control
mechanism in a car. It is perhaps instructive to consider first how a typical driver may control the car speed
over uneven terrain. The driver, by carefully observing the speedometer, and appropriately increasing or
decreasing the fuel flow to the engine, using the gas pedal, can maintain the speed quite accurately. Higher
accuracy can perhaps be achieved by looking ahead to anticipate road inclines. An automatic speed control
system, also called cruise control, works by using the difference, or error, between the actual and desired
speeds and knowledge of the car’s response to fuel increases and decreases to calculate via some algorithm
an appropriate gas pedal position, so as to drive the speed error to zero. This decision process is called a control
law, and it is implemented in the controller. The system configuration is shown in Fig. 19.1.1. The car dynamics
of interest are captured in the plant. Information about the actual speed is fed back to the controller by sensors,
and the control decisions are implemented via a device, the actuator, that changes the position of the gas pedal.
The knowledge of the car’s response to fuel increases and decreases is most often captured in a mathematical
model. Certainly, in an automobile today there are many more automatic control systems such as the antilock
brake system (ABS), emission control, and tracking control. The use of feedback control preceded control
theory, outlined in the following sections, by over 2000 years. The first feedback device on record is the famous
Water Clock of Ktesibios in Alexandria, Egypt, from the third century BC.

PID Control
The proportional-integral-derivative (PID) controller, defined by

u = KP e + KI ∫e + KD de/dt    (1)

is a particularly useful control approach that was invented over 80 years ago. Here
KP, KI, and KD are controller parameters to be selected, often by trial and error or using a lookup table in
industry practice. The goal, as in the cruise control example, is to drive the error to zero in a desirable manner.
All three terms in Eq. (1) have explicit physical meanings: e is the current error, ∫e is the accumulated error,
and de/dt represents the trend. This, together with a basic understanding of the causal relationship between the
control signal (u) and the output (y), forms the basis for engineers to “tune,” or adjust, the controller
parameters to meet the design specifications. This intuitive design, as it turns out, is sufficient for many control
applications. To this day, PID control is still the predominant method in industry and is found in over 95
percent of industrial applications. Its success can be attributed to the simplicity, efficiency, and effectiveness
of this method.
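As a rough illustration of how a control law like Eq. (1) can be realized, the sketch below implements a discrete-time PID controller in Python and applies it to a toy first-order "speed" plant. The gains, sampling interval, and plant time constant are hypothetical values chosen only for illustration.

# Minimal discrete-time PID sketch; gains and plant are hypothetical.
# Implements u = KP*e + KI*(accumulated e) + KD*(trend of e), cf. Eq. (1).

KP, KI, KD = 2.0, 0.5, 0.1   # controller parameters (illustrative)
dt = 0.01                    # sampling interval in seconds

def make_pid(kp, ki, kd, dt):
    state = {"integral": 0.0, "prev_err": 0.0}
    def pid(error):
        state["integral"] += error * dt           # accumulated error (approximates the integral)
        trend = (error - state["prev_err"]) / dt  # rate of change of the error (approximates de/dt)
        state["prev_err"] = error
        return kp * error + ki * state["integral"] + kd * trend
    return pid

# Toy plant: dv/dt = (u - v) / tau, a first-order lag standing in for the car.
tau, v, setpoint = 1.0, 0.0, 30.0
controller = make_pid(KP, KI, KD, dt)
for _ in range(int(5.0 / dt)):
    u = controller(setpoint - v)   # the control law acts on the speed error
    v += dt * (u - v) / tau        # Euler step of the plant dynamics
print(f"speed after 5 s: {v:.2f} (set point {setpoint})")

Tuning in practice amounts to adjusting KP, KI, and KD until the simulated (or measured) response meets the design specifications.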
To design a controller that makes a system behave in a desirable manner, we need a way to predict the
behavior of the quantities of interest over time, specifically how they change in response to different inputs.
Mathematical models are most often used to predict future behavior, and control system design methodologies
are based on such models. Understanding control theory requires engineers to be well versed in basic
mathematical concepts and skills, such as solving differential equations and using the Laplace transform. The
role of control theory is to help us gain insight into how and why feedback control systems work and how to
systematically deal with various design and analysis issues. Specifically, the following issues are of both
practical importance and theoretical interest:
1. Stability and stability margins of the closed-loop system.
2. How fast and smoothly the error between the output and the set point is driven to zero.
3. How well the control system handles unexpected external disturbances, sensor noise, and internal
dynamic changes.
In the following, modeling and analysis are first introduced, followed by an overview of the classical design
methods for single-input single-output plants, design evaluation methods, and implementation issues.
Alternative design methods are then briefly presented. For the sake of simplicity and brevity, the discussion is
restricted to linear, time-invariant systems. Results may be found in the literature for linear, time-varying
systems, as well as for nonlinear systems, systems with delays, systems described by partial differential
equations, and so on; these results, however, tend to be more restricted and case dependent.
MATHEMATICAL DESCRIPTIONS
Mathematical models of physical processes are the foundations of control theory. The existing analysis and
synthesis tools are all based on certain types of mathematical descriptions of the systems to be controlled,
also called plants. Most require that the plants be linear, causal, and time-invariant. Three different
mathematical models for such plants, namely, the linear ordinary differential equation, the state-variable or
state-space description, and the transfer function, are introduced below.
In control system design the most common mathematical models of the behavior of interest are, in the time
domain, linear ordinary differential equations with constant coefficients, and in the frequency or transform
domain, transfer functions obtained from time domain descriptions via Laplace transforms. Mathematical
models of dynamic processes are often derived using physical laws such as Newton’s and Kirchhoff’s. As an
example, consider first a simple mechanical system, a spring/mass/damper. It consists of a weight m on a
spring with spring constant k, its motion damped by friction with coefficient b. If y(t) denotes the displacement
of the weight and f(t) the applied force, Newton’s second law gives

m d²y/dt² + b dy/dt + k y = f(t)
State Variable Descriptions
Instead of working with many different types of higher-order differential equations that describe the behavior
of the system, it is possible to work with an equivalent set of standardized first-order vector differential
equations that can be derived in a systematic way. To illustrate, consider the spring/mass/damper example.
Let x1(t) = y(t), x2(t) = dy(t)/dt be new variables, called state variables. Then the system is equivalently
described by the equations

dx1(t)/dt = x2(t)
dx2(t)/dt = -(k/m) x1(t) - (b/m) x2(t) + (1/m) f(t)

with output y(t) = x1(t).
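To make the state-variable form concrete, the following Python sketch integrates these two first-order equations with simple Euler steps; the parameter values m = 1, b = 1, k = 2 and the constant unit force are hypothetical illustration values.

# Euler simulation of the spring/mass/damper in state-variable form.
# m, b, k and the force f are illustrative values, not from any real system.

m, b, k = 1.0, 1.0, 2.0      # mass, friction coefficient, spring constant
dt, f = 0.001, 1.0           # time step [s] and constant applied force

x1, x2 = 0.0, 0.0            # x1 = y (position), x2 = dy/dt (velocity)
for _ in range(int(10.0 / dt)):
    dx1 = x2
    dx2 = (-k * x1 - b * x2 + f) / m
    x1 += dt * dx1
    x2 += dt * dx2

print(f"position after 10 s: {x1:.4f}; expected steady state f/k = {f/k:.4f}")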
Linearization
The linear models studied here are very useful not only because they describe linear dynamical processes, but
also because they can serve as approximations of nonlinear dynamical processes in the neighborhood of an
operating point. The idea in linear approximations of nonlinear dynamics is analogous to using Taylor series
approximations of functions to extract a linear approximation. A simple example is the simple pendulum,
described by

dx1/dt = x2
dx2/dt = -(g/l) sin x1

where g is the acceleration due to gravity and l is the length of the pendulum. For small excursions from the
equilibrium at zero, sin x1 is approximately equal to x1 and the equations become linear, namely,

dx1/dt = x2
dx2/dt = -(g/l) x1
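A small numerical experiment shows how close the linearized model stays to the nonlinear one near the equilibrium. This sketch simulates both from the same small initial angle; the values of g, l, the time step, and the initial conditions are assumptions made for illustration.

# Nonlinear pendulum dx2/dt = -(g/l) sin x1 versus its linearization
# dx2/dt = -(g/l) x1, both integrated with Euler steps from a small angle.
import math

g, l, dt = 9.81, 1.0, 0.001
nl = [0.1, 0.0]    # nonlinear state [angle, angular velocity]
lin = [0.1, 0.0]   # linearized state

for _ in range(int(2.0 / dt)):
    nl = [nl[0] + dt * nl[1], nl[1] - dt * (g / l) * math.sin(nl[0])]
    lin = [lin[0] + dt * lin[1], lin[1] - dt * (g / l) * lin[0]]

print(f"angle after 2 s: nonlinear {nl[0]:.5f}, linearized {lin[0]:.5f}")

For larger initial angles the two trajectories drift apart, which is exactly why the linear model is trusted only in a neighborhood of the operating point.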
Transfer Functions
The transfer function of a linear, time-invariant system is the ratio of the Laplace transform of the output Y(s)
to the Laplace transform of the corresponding input U(s), with all initial conditions assumed to be zero:

G(s) = Y(s)/U(s)

where G(s) is the transfer function of the system defined above. We are concerned with transfer functions G(s)
that are rational functions, that is, ratios of polynomials in s, G(s) = n(s)/d(s). We are interested in proper G(s),
for which lim G(s) as s → ∞ is finite; proper G(s) have degree n(s) ≤ degree d(s). In most cases degree n(s) <
degree d(s), in which case G(s) is called strictly proper. Consider, for example, the spring/mass/damper system:
taking Laplace transforms of its differential equation with zero initial conditions gives the transfer function

G(s) = Y(s)/F(s) = 1/(m s² + b s + k)

which is strictly proper.
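As a sketch of how a transfer function can be handled numerically, the following uses SciPy (assuming it is available) to build G(s) for the spring/mass/damper with the same illustrative m, b, k as above and to compute its unit-step response. By the final value theorem the response should approach G(0) = 1/k.

# Transfer function G(s) = 1/(m s^2 + b s + k) of the spring/mass/damper,
# built and simulated with scipy.signal (illustrative parameter values).
from scipy import signal

m, b, k = 1.0, 1.0, 2.0
G = signal.TransferFunction([1.0], [m, b, k])   # n(s) = 1, d(s) = m s^2 + b s + k

t, y = signal.step(G)                           # unit-step response
print(f"final value ≈ {y[-1]:.3f}; theory predicts G(0) = 1/k = {1/k:.3f}")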