
25
Analysis in the Time Domain

Robert W. Newcomb
University of Maryland

25.1 Signal Types
    Introduction • Step, Impulse, and Ramp • Sinusoids • Periodic and Aperiodic Waveforms
25.2 First-Order Circuits
    Introduction • Zero-Input and Zero-State Response • Transient and Steady-State Responses • Network Time Constant
25.3 Second-Order Circuits
    Introduction • Zero-Input and Zero-State Response • Transient and Steady-State Responses • Network Characterization

25.1 Signal Types

Introduction
Because information into and out of a circuit is carried via time-domain signals, we look first at some of
the basic signals used in continuous-time circuits. All signals are taken to depend on continuous time t
over the full range –∞ < t < ∞. It is important to realize that not all signals of interest are functions in
the strict mathematical sense; we must go beyond them to generalized functions (e.g., the impulse),
which play a very important part in the signal processing theory of circuits.

Step, Impulse, and Ramp


The unit step function, denoted 1(·), characterizes sudden jumps, such as when a signal is turned on or
a switch is thrown; it can be used to form pulses, to select portions of other functions, and to define the
ramp and impulse as its integral and derivative. The unit step function is discontinuous and jumps
between two values, 0 and 1, with the time of jump between the two taken as t = 0. Precisely,

$$1(t) = \begin{cases} 1 & \text{if } t > 0 \\ 0 & \text{if } t < 0 \end{cases} \qquad (25.1)$$

which is illustrated in Fig. 25.1 along with some of the functions to follow.
Here, the value at the jump point, t = 0, purposely has been left free because normally it is immaterial
and specifying it can lead to paradoxical results. Physical step functions used in the laboratory are actually
continuous functions that have a continuous rise between 0 and 1, which occurs over a very short time.
Nevertheless, instances occur in which one may wish to set 1(0) equal to 0 or to 1 or to 1/2 (the latter,
for example, when calculating the values of a Fourier series at a discontinuity). By shifting the time


[FIGURE 25.1 Step, ramp, and impulse functions: the unit step 1(t), the unit ramp r(t) with unit slope, and the unit impulse, a generalized function of unit area at t = 0.]

argument the jump can be made to occur at any time, and by multiplying by a factor the height can be
changed. For example, 1(t – t0) has a jump at time t0 and a[1(t) – 1(t – t0)] is a pulse of width t0 and
height a, going up to a at t = 0 and down to 0 at time t0. If a = a(t) is a function of time, then multiplying
by this pulse selects that portion of a(t) between 0 and t0. The unit ramp, r(·), is the continuous function
which ramps up linearly (with unit slope) from zero starting at t = 0; the ramp results from the unit step by integration

$$r(t) = \int_{-\infty}^{t} 1(\tau)\,d\tau = t\,1(t) = \begin{cases} t & \text{if } t > 0 \\ 0 & \text{if } t < 0 \end{cases} \qquad (25.2)$$

As a consequence, the unit step is the derivative of the unit ramp, while differentiating the unit step yields
the unit impulse generalized function, δ(·); that is,

$$\delta(t) = \frac{d1(t)}{dt} = \frac{d^2 r(t)}{dt^2} \qquad (25.3)$$

In other words, the unit impulse is such that its integral is the unit step; that is, its area at the origin,
t = 0, is 1. The impulse acts to sample continuous functions which multiply it, i.e.,

$$a(t)\,\delta(t - t_0) = a(t_0)\,\delta(t - t_0) \qquad (25.4)$$

This sampling property yields an important integral representation of a signal x(·)


$$x(t) = \int_{-\infty}^{\infty} x(\tau)\,\delta(t-\tau)\,d\tau = \int_{-\infty}^{\infty} x(t)\,\delta(t-\tau)\,d\tau = x(t)\int_{-\infty}^{\infty} \delta(t-\tau)\,d\tau \qquad (25.5)$$

where the validity of the first equality is seen from the two that follow it, together with the fact that the integral
of the impulse through its jump point is unity. Equation (25.5) is actually valid even when x(·) is discontinuous and,
consequently, is a fundamental equation for linear circuit theory. Differentiating δ(t) yields an even more
discontinuous object, the doublet δ′(·). Strictly speaking, the impulse, all its derivatives, and signals of


that class are not functions in the classical sense, but rather they are operators [1] or functionals [2],
called generalized functions or, often, distributions. Their evaluations take place via test functions, just
as voltages are evaluated on test meters.
The importance of the impulse lies in the fact that if a linear time-invariant system is excited by the
unit impulse, then the response, naturally called the impulse response, is the inverse Laplace transform
of the network function. In fact, if h(t) is the impulse response of a linear time-invariant (continuous
and continuous time) circuit, the forced response y(t) to any input u(t) can be obtained without leaving
the time domain by use of the convolution integral, with the operation of convolution denoted by ∗,


$$y(t) = h \ast u = \int_{-\infty}^{\infty} h(t-\tau)\,u(\tau)\,d\tau \qquad (25.6)$$

Equation (25.6) is mathematically rigorous, but justified on physical grounds through (25.5) as follows.
If we let h(t) be the output when δ(t) is the input, then, by time invariance, h(t – τ) is the output when
the input is shifted to δ(t – τ). Scaling the latter by u(τ) and summing via the integral, as designated in
(25.5), we obtain a representation of the input u(t). This must result in the output representation being
in the form of (25.6) by linearity of the system through similar scaling and summing of h(t – τ), as was
performed on the input.
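Numerically, the convolution integral (25.6) can be approximated by a discrete sum. The following sketch is our own illustration, not from the text: it assumes an RC low-pass impulse response h(t) = (1/tc)e^{−t/tc}·1(t) with tc = 1 and a unit step input, and checks the discretized convolution against the known closed form 1 − e^{−t/tc}.

```python
# A minimal numerical sketch of the convolution integral (25.6); the RC
# impulse response and the unit step input are assumed for illustration.
import numpy as np

dt = 1e-3                          # integration step, approximating d(tau)
t = np.arange(0.0, 10.0, dt)       # time axis for t >= 0

tc = 1.0                           # assumed time constant
h = (1.0 / tc) * np.exp(-t / tc)   # causal impulse response samples
u = np.ones_like(t)                # unit step input 1(t)

# Discrete convolution approximates y(t) = integral of h(t - tau) u(tau) d(tau)
y = np.convolve(h, u)[: len(t)] * dt

# For this h and u the exact output is 1 - exp(-t/tc); check the agreement.
assert np.max(np.abs(y - (1.0 - np.exp(-t / tc)))) < 1e-2
```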

Sinusoids
Sinusoidal signals are important because they are self-reproducing functions (i.e., eigenfunctions) of
linear time-invariant circuits. This is true basically because the derivatives of sinusoids are sinusoidal. As
such, sinusoids are also the natural outputs of oscillators and are delivered in power sources, including
laboratory signal generators and electricity for the home derived from the power company.
Eternal
Eternal signals are defined as being of the same nature for all time, –∞ < t < ∞, in which case an eternal
cosine repeats itself eternally in both directions of time, with an origin of time, t = 0, being arbitrarily
fixed. Because eternal sinusoids have been turned on forever, they are useful in describing the steady
operation of circuits. In particular, the signal A cos(ωt + θ) over –∞ < t < ∞ defines an eternal cosine
of amplitude A, radian frequency ω = 2π f (with f being real frequency, in Hertz, which are cycles per
second), at phase angle θ (in radians and with respect to the origin of time), with A, ω, and θ real
numbers. When θ = π/2 this cosine also represents a sine, so that all eternal sinusoidal signals are
contained in the expression A cos (ωt + θ).
At times, it is important to work with sinusoids that have an exponential envelope, with the possibility
that the envelope increases or decreases with time, that is, with positively or negatively damped sinusoids.
These are described by Ae^{σt} cos(ωt + θ), where the real number −σ is the damping factor, giving signals that
damp out in time when the damping factor is positive and signals that increase with time when the
damping factor is negative. Of most importance when working with this class of signals is the identity

$$e^{\sigma t + j\omega t} = e^{st} = e^{\sigma t}\left[\cos(\omega t) + j\,\sin(\omega t)\right] \qquad (25.7)$$

where s = σ + jω with j = √−1. Here, s is called the complex frequency, with its imaginary part being
the real (radian) frequency, ω. When no damping is present, s = jω, in which case the exponential form
of (25.7) represents pure sinusoids. In fact, we see in this expression that the cosine is the real part of an
exponential and the sine is its imaginary part. Because exponentials are usually easier than sinusoids to
treat analytically, the consequence for real linear networks is that we can do most of the calculations with
exponentials and convert back to sinusoids at the end. In other words, if a real linear system has a cosine
or a damped cosine as a true input, it can be analyzed by using instead the exponential of which it is the
real part as its (fictitious) input, finding the resulting (fictitious) exponential output, and then taking


the real part at the end of the calculations to obtain the true output for the true input. Because expo-
nentials are probably the easiest signals to work with in theory, the use of exponentials rather than
sinusoids usually greatly simplifies the theory and calculations for circuits operating under steady-state
conditions.
Causal
Because practical circuits have not existed since t = –∞ they usually begin to be considered at a suitable
starting time, taken to be t = 0, in which case the associated signals can be considered to be zero for
t < 0. Mathematically, these functions are said to have support bounded on the left. The support of a
signal is (the closure of) the set of times for which the signal is nonzero; therefore, the support of these
signals is bounded on the left by zero. Discontinuous signals of this kind have the useful property that
they can be represented as differentiable signals of unbounded support multiplied by unit step functions.
For example, g(t) = e^{st} · 1(t) has a jump at t = 0 and support on the half line from 0 to ∞, while e^{st}
itself is infinitely differentiable with “eternal” support.
A causal circuit is one for which the response is only nonzero after the input becomes nonzero. Thus,
if the inputs are zero for t < 0, the outputs of causal circuits are also zero for t < 0. In such cases the
impulse response, h(t), that is, the response to an input impulse applied at t = 0, satisfies h(t) = 0
for t < 0, and the convolution form of the output, (25.6), takes the form

 
$$y(t) = \left[\int_{0}^{t} h(t-\tau)\,u(\tau)\,d\tau\right] 1(t) \qquad (25.8)$$

Periodic and Aperiodic Waveforms


The pure sinusoids, although not the sinusoids with nonzero damping, are special cases of periodic
signals, that is, signals which repeat themselves in time every T seconds, where T is the period.
Precisely, a time-domain signal g(·) is periodic of period T if g(t) = g(t + T), where normally T is taken
to be the smallest nonzero T for which this is true. In the case of the sinusoids, A cos(ωt + θ) with ω =
2πf, the period is given by T = 1/f because {2π[ f (t + T)] + θ} = {2π ft + 2π( fT) + θ} = {2π ft + (2π + θ)},
and sinusoids are unchanged by a change of 2π in the phase angle. Periodic signals need to be specified
over only one period of time, e.g., 0 ≤ t < T, and then can be extended periodically for all time by using
t = t mod(T) where mod(·) is the modulus function; in other words, periodic signals can be looked upon
as being defined on a circle, if we imagine the circle as being a clock face.
Periodic signals represent rhythms of a system and, as such, contain recurring information. As many
physical systems, especially biomedical systems, either possess such rhythms directly or approximate them
very closely, periodic signals are of considerable importance. Even though countless periodic signals
are available besides the sinusoids, it is important to note that almost all can be represented by a Fourier
series. Because exponentials are eigenfunctions for linear circuits, the Fourier series is most conveniently
expressed for circuit considerations in the exponential form. If g(t) = g(t + T), then


$$g(t) \cong \sum_{n=-\infty}^{\infty} c_n\, e^{\,j(2\pi n t/T)} \qquad (25.9)$$

where the coefficients are complex and are given by

$$c_n = \frac{1}{T}\int_{0}^{T} g(t)\, e^{-j(2\pi n t/T)}\,dt = a_n + j\,b_n \qquad (25.10)$$

Strictly speaking, the integral is over the half-open interval [0,T ) as seen by considering g(·) defined on
the circle. In (25.9), the symbol ≅ is used to designate the expression on the right as a representation


that may not exactly agree numerically with the left side at every point when g(·) is a function; for
example, at discontinuities the average of the left and right limits is obtained on the right side. If g(·) is real,
that is, g(t) = g(t)*, where the superscript * denotes complex conjugate, then the complex coefficients satisfy
cn = c−n*. In this case the real coefficients an and bn in (25.10) are, respectively, even and odd in the index n;
the an combine to give a series in terms of cosines, and the bn to give a series in terms of sines.
As an example the square wave, sqw(t), can be defined by

$$\mathrm{sqw}(t) = 1(t) - 1\!\left(t - \frac{T}{2}\right), \qquad 0 \le t < T \qquad (25.11)$$

and then extended periodically to −∞ < t < ∞ by taking t = t mod(T). The exponential Fourier series
coefficients are readily found from (25.10) to be

$$c_n = \begin{cases} \dfrac{1}{2} & \text{if } n = 0 \\[1ex] 0 & \text{if } n = 2k \neq 0 \ (\text{even} \neq 0) \\[1ex] \dfrac{1}{j\pi n} & \text{if } n = 2k+1 \ (\text{odd}) \end{cases} \qquad (25.12)$$

for which the Fourier series is


$$\mathrm{sqw}(t) \cong \frac{1}{2} + \sum_{k=-\infty}^{\infty} \frac{1}{j\pi[2k+1]}\, e^{\,j2\pi[2k+1]t/T} \qquad (25.13)$$

The derivative of sqw(t) is a periodic set of impulses

$$\frac{d\left[\mathrm{sqw}(t)\right]}{dt} = \delta(t) - \delta\!\left(t - \frac{T}{2}\right), \qquad 0 \le t < T \qquad (25.14)$$

for which the exponential Fourier series is easily found by differentiating (25.13), or by direct calculation
from (25.10), to be
$$\sum_{i=-\infty}^{\infty} \left(\delta(t - iT) - \delta\!\left(t - iT - \frac{T}{2}\right)\right) \cong \sum_{k=-\infty}^{\infty} \frac{2}{T}\, e^{\,j(2\pi[2k+1]t/T)} \qquad (25.15)$$

Combining the exponentials allows for a sine representation of the periodic generalized function signal.
Further differentiation can take place, and by integrating (25.15) we get the Fourier series for the square
wave if the appropriate constant of integration is added to give the DC value of the signal. Likewise, a
further integration will yield the Fourier series for the sawtooth periodic signal, and so on.
The importance of these Fourier series representations is that a circuit having periodic signals can
always be considered to be processing these signals as exponential signals, which are usually self-reproducing
signals for the system, making the design or analysis easy. The Fourier series also allows visual-
ization of which radian frequencies, 2πn/T, may be important to filter out or emphasize. In many common
cases, especially for periodically pulsed circuits, the series may be expressed in terms of impulses. Thus,
the impulse response of the circuit can be used in conjunction with the Fourier series.
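As a numerical illustration of (25.10) and (25.12), the exponential Fourier coefficients of the square wave can be approximated by discretizing the integral over one period. This sketch is our own check; the period T = 2 and the sample count are arbitrary choices.

```python
# Numerical check of (25.10) against the square-wave coefficients (25.12).
import numpy as np

T = 2.0
t = np.linspace(0.0, T, 20000, endpoint=False)   # one period, [0, T)
sqw = np.where(t < T / 2, 1.0, 0.0)              # (25.11) on 0 <= t < T

def c(n):
    # c_n = (1/T) * integral over one period of g(t) exp(-j 2 pi n t / T) dt
    return np.mean(sqw * np.exp(-2j * np.pi * n * t / T))

print(c(0))   # ~ 1/2            (n = 0)
print(c(2))   # ~ 0              (even n != 0)
print(c(3))   # ~ 1/(j*pi*3)     (odd n)
```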

References
[1] J. Mikusinski, Operational Calculus, 2nd ed., New York: Pergamon Press, 1983.
[2] A. Zemanian, Distribution Theory and Transform Analysis, New York: McGraw-Hill, 1965.


25.2 First-Order Circuits

Introduction
First-order circuits are fundamental to the design of circuits because higher order circuits can be con-
sidered to be constructed of them. Here, we limit ourselves to single-input-output linear time-invariant
circuits for which we take the definition of a first-order circuit to be one described by the differential
equation
$$d_1 \cdot \frac{dy}{dt} + d_0 \cdot y = n_1 \cdot \frac{du}{dt} + n_0 \cdot u \qquad (25.16)$$

where d0 and d1 are “denominator” constants and n0 and n1 are “numerator” constants, y = y (·) is the
output and u = u (·) is the input, and both u and y are generalized functions of time t. So that the circuit
truly will be first order, we require that d1 · n0 − d0 · n1 ≠ 0; this guarantees that at least one of the
derivatives is actually present and, when both derivatives occur, that the expressions in y and in u are not
proportional, since proportionality would lead to cancellation, forcing y and u to be constant multiples of each other.
Because a factorization of real higher-order systems may lead to complex first-order systems, we will
allow the numerator and denominator constants to be complex numbers; thus, y and u may be complex-
valued functions.
If the derivative is treated as an operator, p = d[·]/dt, then (25.16) can be conveniently written as

$$y = \frac{n_1 p + n_0}{d_1 p + d_0}\, u = \begin{cases} \left(\dfrac{n_1}{d_0}\,p + \dfrac{n_0}{d_0}\right) u & \text{if } d_1 = 0 \\[2ex] \left(\dfrac{n_1}{d_1} + \dfrac{\left(d_1 n_0 - d_0 n_1\right)/d_1^2}{p + \left(d_0/d_1\right)}\right) u & \text{if } d_1 \neq 0 \end{cases} \qquad (25.17)$$

where the two cases in terms of d1 are of interest because they provide different forms of responses, each
of which frequently occurs in first-order circuits. As indicated by (25.17), the transfer function

$$H(p) = \frac{n_1 p + n_0}{d_1 p + d_0} \qquad (25.18)$$

is an operator (as a function of the derivative operator p), which characterizes the circuit. Table 25.1 lists
some of the more important types of different first-order circuits along with their transfer functions and
causal impulse responses.
The following treatment somewhat follows that given in [1], although with a slightly different orientation
in order to handle all linear time-invariant continuous-time circuits.

Zero-Input and Zero-State Response


The response of a linear circuit is, via the linearity, the sum of two responses, one due to the input when
the circuit is initially in the zero state, called the zero-state response, and the other due to the initial
state when no input is present, the zero-input response. By the linearity the total response is the sum
of the two separate responses, and thus we may proceed to find each separately. In order to investigate
these two types of responses, we introduce the state vector x(·) and the state-space representation (as
previously p = d[·]/dt)
$$px = Ax + Bu, \qquad y = Cx + Du + E\,pu \qquad (25.19)$$


TABLE 25.1 Typical Transfer Functions of First-Order Circuits

Transfer Function H(p)               Description                           Causal Impulse Response h(t)
(n1/d0)·p                            Differentiator                        (n1/d0)·δ′(t)
n0/(d1·p)                            Integrator                            (n0/d1)·1(t)
(n1·p + n0)/d1                       Leaky differentiator                  (n0/d1)·δ(t) + (n1/d1)·δ′(t)
n0/(d1·p + d0)                       Low-pass filter; lossy integrator     (n0/d1)·e^{−(d0/d1)t}·1(t)
(n1·p)/(d1·p + d0)                   High-pass filter                      (n1/d1)·δ(t) − (n1·d0/d1²)·e^{−(d0/d1)t}·1(t)
(n1/d1)·(p − d0/d1)/(p + d0/d1)      All-pass filter                       (n1/d1)·[δ(t) − 2(d0/d1)·e^{−(d0/d1)t}·1(t)]

where A, B, C, D, E are constant matrices. For our first-order circuit two cases are exhibited, depending
upon d1 being zero or not. In the case of d1 = 0,

$$y = \left(n_0/d_0\right) u + \left(n_1/d_0\right) pu, \qquad d_1 = 0 \qquad (25.20a)$$

Here, C = 0, and A and B can be chosen arbitrarily, including empty. When d1 ≠ 0, our first-order circuit
has the following set of (minimal size) state-variable equations

$$px = -\left(\frac{d_0}{d_1}\right) x + \frac{d_1 n_0 - d_0 n_1}{d_1^2}\, u, \qquad y = x + \left(\frac{n_1}{d_1}\right) u, \qquad d_1 \neq 0 \qquad (25.20b)$$

By choosing u = 0 in (25.20), we obtain the equations that yield the zero-input response. Specifically,
the zero-input response is

$$y(t) = \begin{cases} 0 & \text{if } d_1 = 0 \\[1ex] e^{-\left(d_0/d_1\right)t}\, y(0) & \text{if } d_1 \neq 0 \end{cases} \qquad (25.21)$$

which is also true by direct substitution into (25.16). Here, we have set, in the d1 ≠ 0 case, the initial
value of the state, x(0), equal to the initial value of the output, y(0), which is valid by our choice of state-
space equations. Note that (25.21) is valid for all time and y at t = 0 assumes the assigned initial value
y(0), which must be zero when the input is zero and no derivative occurs on the output.
The zero-state response is the solution of (25.20) when x(0) = 0. In the case that d1 = 0,
the zero-state response is

n0 n n n 
y= u + 1 pu =  0 δ(t ) + 1 δ ′(t ) ∗ u d1 = 0 (25.22a)
d0 d0  d0 d0 


where ∗ denotes convolution, δ(·) is the unit impulse, and 1(·) is the unit step function. While in the
case that d1 ≠ 0

$$y = \left[\frac{n_1}{d_1}\delta(t) + \frac{d_1 n_0 - d_0 n_1}{d_1^2}\, e^{-\left(d_0/d_1\right)t}\, 1(t)\right] \ast u, \qquad d_1 \neq 0 \qquad (25.22b)$$

which is found by eliminating x from (25.20b) and can be checked by direct substitution into (25.16).
The terms in the braces are the causal impulse responses, h(t), which are checked by letting u = δ with
otherwise zero initial conditions, that is, with the circuit initially in the zero state. Actually, infinitely
many noncausal impulse responses could be used in (25.22b); one such response is found by replacing
1(t) by −1(−t). However, physically the causal responses are of most interest.
If d1 ≠ 0, the form of the responses is determined by the constant d0 /d1, the reciprocal of which (when
d0 ≠ 0) is called the time constant, tc, of the circuit, because the circuit impulse response decays to 1/e of
its initial value at time tc = d1/d0. If the time constant is positive, the zero-input and the impulse responses asymptotically
decay to zero as time approaches positive infinity, and the circuit is said to be asymptotically stable. On
the other hand, if the time constant is negative, then these two responses grow without bounds as time
approaches plus infinity, and the circuit is called unstable. It should be noted that as time goes in the
reverse direction to minus infinity, the unstable zero-input response decays to zero. If d0 /d1 = 0 the zero-
input and impulse responses are still stable, but neither decay nor grow as time increases beyond zero.
By linearity of the circuit and its state-space equations, the total response is the sum of the zero-state
response and the zero-input response; thus, even when d0 = 0 or d1 = 0

$$y(t) = e^{-\left(d_0/d_1\right)t}\, y_0 + h(t) \ast u(t) \qquad (25.23)$$

Assuming that u and h are zero for t < 0 their convolution is also zero for t < 0, although not necessarily
at t = 0, where it may even take on impulsive behavior. In such a case, we see that y0 is the value of the
output instantaneously before t = 0. If we are interested in the circuit for t > 0 only then, surprisingly,
the zero-input response can be produced by an input: an equivalent input u0 exists which yields the zero-input
response for t > 0, this being u0(t) = d1y0 exp(−td0/d1)1(t). Thus, y = h ∗ (u + u0) gives the same
result as (25.23).
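The decomposition (25.23) is easy to check numerically. The sketch below is our own illustration: the coefficient values and the initial output y(0) = 0.5 are arbitrary, and the state-variable form (25.20b) is simulated with scipy so that the total response can be compared with the sum of the zero-input response (25.21) and the zero-state response.

```python
# A sketch of the total response (25.23) = zero-input + zero-state, built from
# the state-variable form (25.20b); all coefficient values are assumptions.
import numpy as np
from scipy.signal import lti, lsim

d1, d0, n1, n0 = 1.0, 2.0, 0.0, 3.0            # low-pass: H(s) = 3/(s + 2)
A = [[-d0 / d1]]
B = [[(d1 * n0 - d0 * n1) / d1**2]]
C = [[1.0]]
D = [[n1 / d1]]
sys = lti(A, B, C, D)

t = np.linspace(0.0, 5.0, 1000)
u = np.ones_like(t)                            # unit step input
y0 = 0.5                                       # initial output y(0) = x(0)

_, y_total, _ = lsim(sys, u, t, X0=[y0])       # total response
_, y_zs, _ = lsim(sys, u, t)                   # zero-state response
y_zi = np.exp(-(d0 / d1) * t) * y0             # zero-input response (25.21)

assert np.max(np.abs(y_total - (y_zi + y_zs))) < 1e-6
```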
When d1 = 0, the circuit acts as a differentiator and within the state-space framework it is treated as
a special case. However, in practice it is not a special case because the current, i, versus voltage, v, for a
capacitor of capacitance C, in parallel with a resistor of conductance G is described by i = Cpv + Gv.
Consequently, it is worth noting that all cases can be handled identically in the semistate description

$$\begin{bmatrix} d_1 & d_1 - 1 \\ 0 & 0 \end{bmatrix} px = \begin{bmatrix} -d_0 & -d_0 \\ 0 & -1 \end{bmatrix} x + \begin{bmatrix} n_0 \\ n_1 \end{bmatrix} u, \qquad y = \begin{bmatrix} 1 & 1 \end{bmatrix} x \qquad (25.24)$$

where x(·) is the semistate instead of the state, although the first components of the two vectors agree in
many cases. In other words, the semistate description is more general than the state description, and
handles all circuits in a more convenient fashion [2].
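The semistate form can be checked numerically as well: for the descriptor system E·px = Ax + Bu, y = Cx, the transfer function is C(sE − A)⁻¹B, which should reduce to (25.18). The following sketch is our own sanity check of the matrices as reconstructed in (25.24), with arbitrary coefficient values.

```python
# Check that the semistate (descriptor) matrices of (25.24), as reconstructed
# here, reproduce H(s) = (n1*s + n0)/(d1*s + d0); coefficients are arbitrary.
import numpy as np

d1, d0, n1, n0 = 2.0, 3.0, 5.0, 7.0
E = np.array([[d1, d1 - 1.0], [0.0, 0.0]])
A = np.array([[-d0, -d0], [0.0, -1.0]])
B = np.array([[n0], [n1]])
C = np.array([[1.0, 1.0]])

s = 1.0 + 2.0j                                 # any test frequency
H_semistate = (C @ np.linalg.solve(s * E - A, B))[0, 0]
H_direct = (n1 * s + n0) / (d1 * s + d0)
assert abs(H_semistate - H_direct) < 1e-12
```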

Transient and Steady-State Responses


This section considers stable circuits, although the techniques are developed so that they apply to other
situations. In the asymptotically stable case, the zero input response decays eventually to zero; that is,
transient responses due to initial conditions eventually will not be felt and concentration can be placed


upon the zero-state response. Considering first eternal exponential inputs, u(t) = U exp(st) for –∞ < t
< ∞ at the complex frequency s = σ + jω, where s is chosen as different from the natural frequency sn =
–d0 /d1 = –1/tc and U is a constant, we note that the response is y(t) = Y(s) exp(st), as is observed by
direct substitution into (25.16); this substitution yields directly

$$Y(s) = \frac{n_1 s + n_0}{d_1 s + d_0} \cdot U \qquad (25.25)$$

where y(t) = Y(s) exp(st) for u(t) = U exp(st) over –∞ < t < ∞. That is, an exponential excitation yields
an exponential response at the same (complex) frequency s = σ + jω as that for the input. When σ = 0,
the excitation and response are both sinusoidal and the resulting response is called the sinusoidal steady
state (SSS). Equation (25.25) shows that the SSS response is found by substituting the complex frequency
s = jω into the transfer function, now evaluated on complex numbers instead of differential operators
as in (25.18),

$$H(s) = \frac{n_1 s + n_0}{d_1 s + d_0} \qquad (25.26)$$

This transfer function represents the impulse response, h(t), of which it is actually the Laplace transform,
and as we found earlier, the causal impulse response is

$$h(t) = \begin{cases} \dfrac{n_0}{d_0}\delta(t) + \dfrac{n_1}{d_0}\delta'(t) & \text{if } d_1 = 0 \\[2ex] \dfrac{n_1}{d_1}\delta(t) + \dfrac{d_1 n_0 - d_0 n_1}{d_1^2}\, e^{-\left(d_0/d_1\right)t}\, 1(t) & \text{if } d_1 \neq 0 \end{cases} \qquad (25.27)$$

However, practical signals are started at some finite time, normalized here to t = 0, instead of at t =
–∞, as used for the preceding exponentials. Thus, consider an input of the same type but applied only
for t > 0; i.e., let u(t) = U exp(st)1(t). The output is found by using the convolution y = h ∗ u, which,
after a slight amount of calculation, evaluates to

$$y(t) = h(t) \ast U e^{st}\,1(t) = \begin{cases} H(s)\,U e^{st}\,1(t) + \dfrac{n_1}{d_0}\,U\delta(t) & \text{for } d_1 = 0 \\[2ex] H(s)\,U e^{st}\,1(t) - \dfrac{d_1 n_0 - d_0 n_1}{d_1\left(d_1 s + d_0\right)}\, U e^{-\left(d_0/d_1\right)t}\, 1(t) & \text{for } d_1 \neq 0 \end{cases} \qquad (25.28)$$

For t > 0, the SSS remains present, while there is another term of importance when d1 ≠ 0. This is a
transient term, which disappears after a sufficient waiting time in the case of an asymptotically stable
circuit. That is, the SSS is truly a steady state, although one may have to wait for it to dominate. If a
nonzero zero-input response exists, it must be added to the right side of (25.28), but for t > 0 this is of
the same form as the transient already present, therefore, the conclusion is identical (the SSS eventually
predominates over the transient terms for an asymptotically stable circuit).
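The split of (25.28) into a steady-state term and a transient term can be verified numerically. In the sketch below (our own illustration, with arbitrary coefficients and an arbitrary real input frequency s0), the simulated response of H(s) = (n1s + n0)/(d1s + d0) to U exp(s0t)1(t) is compared with the d1 ≠ 0 formula.

```python
# Numerical check of (25.28) for d1 != 0; all numeric values are assumptions.
import numpy as np
from scipy.signal import lti, lsim

d1, d0, n1, n0 = 1.0, 2.0, 1.0, 3.0
sys = lti([n1, n0], [d1, d0])

U, s0 = 1.0, -0.5                              # input amplitude and frequency
t = np.linspace(0.0, 8.0, 4000)
u = U * np.exp(s0 * t)

_, y_sim, _ = lsim(sys, u, t)

H = (n1 * s0 + n0) / (d1 * s0 + d0)            # steady-state factor
transient = -((d1 * n0 - d0 * n1) / (d1 * (d1 * s0 + d0))) * U * np.exp(-(d0 / d1) * t)
y_formula = H * U * np.exp(s0 * t) + transient

assert np.max(np.abs(y_sim - y_formula)) < 1e-3
```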
Because a cosine is the real part of a complex exponential and the real part is obtained as the sum of
two terms, we can use linearity of the circuit to quickly obtain the output to a cosine input when we
know the output due to an exponential. We merely write the input as the sum of two complex conjugate
exponentials and then take the complex conjugates of the outputs that are summed. In the case of real
coefficients in the transfer function, this is equivalent to taking the real part of the output when we take
the real part of the input; that is, y = ℜ(h ∗ ue) = h ∗ ℜ(ue) = h ∗ u when u = ℜ(ue), if y is real for all real u.


Network Time Constant


The time constant, tc, was defined earlier as the time for which a transient decays to 1/e of the initial
value. As such, the time constant shows up in signals throughout the circuit and is a very useful parameter
when identifying a circuit from its responses. In an RC circuit, the time constant physically results from
the interaction of the equivalent capacitor (of which only one exists in a first-order circuit) of capacitance
Ceq, and the Thévenin’s equivalent resistor, of resistance Req, that it sees. Thus, tc = ReqCeq.
Closely related to the time constant is the rise time. Considering the low-pass case, the rise time, tr is
defined as the time for the unit step response to go between 10% and 90% of its final value from its
initial value. This is easily calculated because the unit step response is given by

$$y_{1(\cdot)}(t) = h(t) \ast 1(t) = \frac{n_0}{d_0}\left[1 - e^{-\left(d_0/d_1\right)t}\right] \cdot 1(t) \qquad (25.29)$$

Assuming a stable circuit and setting this equal to 0.1 and 0.9 times the final value, n0/d0, it is readily
found that

$$t_r = \frac{d_1}{d_0}\cdot \ln(9) = \ln(9)\cdot t_c \approx 2.2\, t_c \qquad (25.30)$$
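A quick numeric check of (25.30), assuming tc = 1 and unity DC gain (n0/d0 = 1) for simplicity:

```python
# Measure the 10%-90% rise time of the step response (25.29) and compare it
# with ln(9)*tc from (25.30); tc = 1 is an assumed normalization.
import numpy as np

tc = 1.0
t = np.linspace(0.0, 10.0 * tc, 100001)
y = 1.0 - np.exp(-t / tc)                  # unit step response, final value 1

t10 = t[np.searchsorted(y, 0.1)]           # first time y reaches 10%
t90 = t[np.searchsorted(y, 0.9)]           # first time y reaches 90%
print(t90 - t10, np.log(9.0) * tc)         # both ~ 2.197*tc
```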

At this point, it is worth noting that for theoretical studies the time constant can be normalized to 1 by
normalizing the time scale. Thus, assuming d1 and d0 ≠ 0 the differential equation can be written as

$$d_1 \cdot \frac{dy}{dt} + d_0 \cdot y = d_0 \left(\frac{d_1}{d_0}\cdot\frac{dy}{d\left(\left(d_1/d_0\right)t_n\right)} + y\right) = d_0\left(\frac{dy}{dt_n} + y\right) \qquad (25.31)$$

where tn = (d0/d1)t is the normalized time.

References
[1] L. P. Huelsman, Basic Circuit Theory with Digital Computations, Englewood Cliffs, NJ: Prentice
Hall, 1972.
[2] R. W. Newcomb and B. Dziurla, “Some circuits and systems applications of semistate theory,”
Circuits, Systems, and Signal Processing, vol. 8, no. 3, pp. 235–260, 1989.

25.3 Second-Order Circuits

Introduction
Because real transfer functions can be factored into real second-order transfer functions, second-order
circuits are probably the most important circuits available; most designs are based upon them. As with
first-order circuits, this chapter is limited to single-input-single-output linear time-invariant circuits, and
unless otherwise stated, here real-valued quantities are assumed. By definition a second-order circuit is
described by the differential equation

$$d_2 \cdot \frac{d^2 y}{dt^2} + d_1 \cdot \frac{dy}{dt} + d_0 \cdot y = n_2 \cdot \frac{d^2 u}{dt^2} + n_1 \cdot \frac{du}{dt} + n_0 \cdot u \qquad (25.32)$$

where di and ni are “denominator” and “numerator” constants, i = 0, 1, 2, which, unless mentioned to
the contrary, are taken to be real. Continuing the notation used for first-order circuits, y = y(·) is the
output and u = u(·) is the input; both u and y are generalized functions of time t. Assume that d2 ≠ 0,


which is the normal case because any of the other special cases can be considered as cascades of real
degree one circuits.
Again, treating the derivative as an operator, p = d[·]/dt, (25.32) is written as

$$y = \frac{n_2 p^2 + n_1 p + n_0}{d_2 p^2 + d_1 p + d_0}\, u \qquad (25.33)$$

with the transfer function

$$H(p) = \frac{1}{d_2}\left[\frac{n_2 p^2 + n_1 p + n_0}{p^2 + \left(d_1/d_2\right)p + \left(d_0/d_2\right)}\right] = \frac{1}{d_2}\left[n_2 + \frac{\left(n_1 - \left(d_1/d_2\right)n_2\right)p + \left(n_0 - \left(d_0/d_2\right)n_2\right)}{p^2 + \left(d_1/d_2\right)p + \left(d_0/d_2\right)}\right] \qquad (25.34)$$
where the second form results by long division of the denominator into the numerator. Because they
occur most frequently when second-order circuits are discussed, we rewrite the denominator in two
equivalent customarily used forms:

$$p^2 + \frac{d_1}{d_2}\,p + \frac{d_0}{d_2} = p^2 + \frac{\omega_n}{Q}\,p + \omega_n^2 = p^2 + 2\zeta\omega_n\,p + \omega_n^2 \qquad (25.35)$$

where ωn ≥ 0 is the undamped natural frequency, Q is the quality factor, and ζ = 1/(2Q) is the damping
factor. The transfer function is accordingly

$$H(p) = \frac{1}{d_2}\left[\frac{n_2 p^2 + n_1 p + n_0}{p^2 + \left(\omega_n/Q\right)p + \omega_n^2}\right] = \frac{1}{d_2}\left[\frac{n_2 p^2 + n_1 p + n_0}{p^2 + 2\zeta\omega_n p + \omega_n^2}\right] \qquad (25.36)$$

Table 25.2 lists several of the more important transfer functions, which, as in the first-order case, are
operators as functions of the derivative operator p.

Zero-Input and Zero-State Response


Again, as in the first-order case, a convenient tool for investigating the time-domain behavior of a second-
order circuit is the state variable description. Letting the state vector be x(·), the state-space represen-
tation is
$$px = Ax + Bu, \qquad y = Cx + Du \qquad (25.37)$$

where, as above, p = d[·]/dt, and A, B, C, D are constant matrices. In the present case, these matrices are
real and one convenient choice, among many, is

$$px = \begin{bmatrix} 0 & 1 \\[0.5ex] -\dfrac{d_0}{d_2} & -\dfrac{d_1}{d_2} \end{bmatrix} x + \begin{bmatrix} n_1 - \dfrac{d_1}{d_2}n_2 \\[1.5ex] \left(n_0 - \dfrac{d_0}{d_2}n_2\right) - \dfrac{d_1}{d_2}\left(n_1 - \dfrac{d_1}{d_2}n_2\right) \end{bmatrix} u, \qquad y = \begin{bmatrix} \dfrac{1}{d_2} & 0 \end{bmatrix} x + \frac{n_2}{d_2}\, u \qquad (25.38)$$


TABLE 25.2 Typical Second-Order Circuit Transfer Functions

Low-pass:   H(p) = (n0/d2)/(p² + 2ζωn·p + ωn²)
            hlp(t) = (n0/d2)·[e^{−ζωn·t}/(√(1−ζ²)·ωn)]·sin(√(1−ζ²)·ωn·t)·1(t)

High-pass:  H(p) = (n2/d2)·p²/(p² + 2ζωn·p + ωn²)
            hhp(t) = (n2/d2)·[δ(t) − (ωn·e^{−ζωn·t}/√(1−ζ²))·sin(√(1−ζ²)·ωn·t + 2θ)·1(t)], θ = arctan2(ζ/√(1−ζ²))

Bandpass:   H(p) = (n1/d2)·p/(p² + 2ζωn·p + ωn²)
            hbp(t) = (n1/d2)·[e^{−ζωn·t}/√(1−ζ²)]·cos(√(1−ζ²)·ωn·t + θ)·1(t), θ = arctan2(ζ/√(1−ζ²))

Band-stop:  H(p) = (n2/d2)·(p² + ω0²)/(p² + 2ζωn·p + ωn²)
            hbs(t) = hhp(t) + (n2·ω0²/n0)·hlp(t)

All-pass:   H(p) = (n2/d2)·(p² − 2ζωn·p + ωn²)/(p² + 2ζωn·p + ωn²)
            hap(t) = (n2/d2)·[δ(t) − (4ζωn·e^{−ζωn·t}/√(1−ζ²))·cos(√(1−ζ²)·ωn·t + θ)·1(t)]

Oscillator: H(p) = (n0/d2)/(p² + ωn²)
            hosc(t) = [n0/(d2·ωn)]·sin(ωn·t)·1(t); when u = 0, y(t) = y(0)·cos(ωn·t) + [y′(0)/ωn]·sin(ωn·t)

[FIGURE 25.2 Generic, second-order op-amp RC circuit; element values are expressed in terms of the coefficients di and ni of (25.38).]

Here, the state is the 2-vector x = [x1 x2]T, with the superscript T denoting transpose. Normally, the state
would consist of capacitor voltages and/or inductor currents, although at times one may wish to use
linear combinations of these. From these state variable equations, a generic operational-amplifier (op-amp)
RC circuit to realize any of this class of second-order circuits is readily designed and given in
Fig. 25.2. In the figure, all voltages are referenced to ground and normalized capacitor and resistor values
are listed. Alternate designs in terms of only CMOS differential pairs and capacitors can also be given
[3], while a number of alternate circuits exist in the catalog of Sallen and Key [4].


Because (25.38) represents a set of linear constant coefficient differential equations, superposition
applies and its solution can again be broken into two parts, the part due to initial conditions, x(0), called
the zero-input response, and the part due solely to the input u, the zero-state response.
The zero-input response is readily found by solving the state equations with u = 0 and initial conditions
x(0). The result is y(t) = C exp(At) x(0), which can be evaluated by several means, including the following.
Using a prime to designate the time derivative, first note that when u = 0, x1(t) = d2 y(t) and (from the
first row of A) x1(t)′ = x2 (t) = d2 y(t)′. Thus, x1 (0) = d2 y(0) and x2 (0) = d2 y′(0), which allow the initial
conditions to be expressed in terms of the measurable output quantities. To evaluate exp(At), note that
its terms are linear combinations of terms with complex frequencies that are zeroes of the characteristic
polynomial

$$\det\left(s\mathbf{1}_2 - A\right) = \det\begin{bmatrix} s & -1 \\ \omega_n^2 & s + 2\zeta\omega_n \end{bmatrix} = s^2 + 2\zeta\omega_n s + \omega_n^2 = \left(s - s_-\right)\left(s - s_+\right) \qquad (25.39)$$

The roots of this polynomial, called the natural frequencies, are

$$s_{\pm} = \left(-\zeta \pm \sqrt{\zeta^2 - 1}\right)\omega_n = \left(-1 \pm \sqrt{1 - 4Q^2}\right)\frac{\omega_n}{2Q} \qquad (25.40)$$

The case of equal roots will only occur when ζ2 = 1, which is the same as Q 2 = 1/4, for which the roots
are real. Indeed, if the damping factor, ζ, is > 1 in magnitude, or equivalently, if the quality factor, Q, is
<1/2 in magnitude, the roots are real and the circuit can be considered a cascade of two first-order
circuits. Thus, assume here and in the following that unless otherwise stated, Q 2 > 0.25, which is the
same as ζ2 < 1, in which case the roots are complex conjugates, s– = s+∗

$$s_{\pm} = \left(-\zeta \pm j\sqrt{1 - \zeta^2}\right)\omega_n = \left(-1 \pm j\sqrt{4Q^2 - 1}\right)\frac{\omega_n}{2Q}, \qquad j = \sqrt{-1} \qquad (25.41)$$
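The natural frequencies (25.41) are just the roots of the characteristic polynomial in (25.39). A small sketch, with ζ and ωn arbitrary values of our choosing (ζ² < 1), confirms that the two computations agree:

```python
# Natural frequencies from (25.41) versus the roots of (25.39).
import numpy as np

zeta, wn = 0.3, 2.0
s_plus = (-zeta + 1j * np.sqrt(1.0 - zeta**2)) * wn    # (25.41)
s_minus = np.conj(s_plus)

roots = np.sort_complex(np.roots([1.0, 2.0 * zeta * wn, wn**2]))
assert np.allclose(roots, np.sort_complex(np.array([s_plus, s_minus])))
```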

By writing y(t) = a · exp(s+ t) + b · exp(s–t), for unknown constants a and b, differentiating and setting
t = 0 we can solve for a and b, and after some algebra and trigonometry obtain the zero-input response

$$y(t) = \frac{e^{-\zeta\omega_n t}}{\sqrt{1-\zeta^2}}\left[\,y(0)\cdot\cos\!\left(\sqrt{1-\zeta^2}\,\omega_n t - \theta\right) + \frac{y'(0)}{\omega_n}\cdot\sin\!\left(\sqrt{1-\zeta^2}\,\omega_n t\right)\right] \qquad (25.42)$$

where θ = arctan2(ζ/√(1−ζ²)), with arctan2(·) being the arc tangent function that incorporates the sign
of its argument.
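The closed form (25.42) can be compared with y(t) = C exp(At)x(0) computed directly from the state equations (25.38), using the relations x1(0) = d2y(0) and x2(0) = d2y′(0) noted below. The sketch is our own illustration; the coefficients are arbitrary values giving ζ² < 1.

```python
# Compare the zero-input response (25.42) with C*expm(A*t)*x(0) from (25.38).
import numpy as np
from scipy.linalg import expm

d2, d1, d0 = 1.0, 0.8, 4.0                   # assumed values: wn = 2, zeta = 0.2
wn = np.sqrt(d0 / d2)
zeta = d1 / (2.0 * np.sqrt(d0 * d2))
A = np.array([[0.0, 1.0], [-d0 / d2, -d1 / d2]])
C = np.array([[1.0 / d2, 0.0]])

y0, yp0 = 1.0, 0.5                           # y(0) and y'(0)
x0 = np.array([d2 * y0, d2 * yp0])           # x1(0) = d2*y(0), x2(0) = d2*y'(0)

wd = np.sqrt(1.0 - zeta**2) * wn
theta = np.arctan2(zeta, np.sqrt(1.0 - zeta**2))
for t in (0.0, 0.7, 2.3):
    y_state = (C @ expm(A * t) @ x0)[0]
    y_closed = (np.exp(-zeta * wn * t) / np.sqrt(1.0 - zeta**2)) * (
        y0 * np.cos(wd * t - theta) + (yp0 / wn) * np.sin(wd * t)
    )
    assert abs(y_state - y_closed) < 1e-9
```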
The form given in (25.42) allows for some useful observations. Remembering that this assumes ζ2 < 1,
first note that if no damping occurs, that is, ζ = 0, then the natural frequencies are purely imaginary,
s+ = jωn and s– = –s+, and the response is purely oscillatory, taking the form shown in the last line of
Table 25.2. If the damping is positive, as it would be for a passive circuit having some loss, usually via
positive resistors, then the natural frequencies lie in the left half s-plane, and y decays to zero at positive
infinite time so that any transients in the circuit die out after a sufficient wait. The circuit is then called
asymptotically stable. However, if the damping is negative, as it could be for some positive feedback
circuits or those with negative resistance, then the response to nonzero initial conditions increases in
amplitude without bound, although in an oscillatory manner, as time increases, and the circuit is said
to be unstable. In the unstable case, as time decreases through negative time the amplitude also damps
out to zero, but usually the responses backward in time are not of as much interest as those forward in
time.


For the zero-state response, the impulse response, h(t), is convolved with the input, that is, y = h ∗ u,
for which we can use the fact that h(t) is the inverse Laplace transform of H(s) = C[s1₂ − A]⁻¹B + D. The
denominator of H(s) is det(s1₂ − A) = s² + 2ζωns + ωn², for which the causal inverse Laplace transform is

$$e^{s_+t}1(t) \ast e^{s_-t}1(t) = \begin{cases} \dfrac{e^{s_+t} - e^{s_-t}}{s_+ - s_-}\,1(t) & \text{if } s_- \neq s_+ \\[2ex] t\,e^{s_+t}\,1(t) & \text{if } s_- = s_+ \end{cases} \qquad (25.43)$$

Here, the bottom case is ruled out when only complex natural frequencies are considered, following the
assumption of handling real natural frequencies in first-order circuits, made previously. Consequently,

$$e^{s_+t}1(t) \ast e^{s_-t}1(t) = \frac{e^{s_+t} - e^{s_-t}}{s_+ - s_-}\,1(t) = \frac{e^{-\zeta\omega_n t}}{\sqrt{1-\zeta^2}\,\omega_n}\,\sin\!\left(\sqrt{1-\zeta^2}\,\omega_n t\right)\cdot 1(t) \qquad (25.44)$$

Again assuming ζ² < 1, the preceding calculations give the zero-state response as

$$\begin{aligned} y(t) &= \frac{1}{d_2}\left\{\frac{e^{-\zeta\omega_n t}}{\sqrt{1-\zeta^2}\,\omega_n}\,\sin\!\left(\sqrt{1-\zeta^2}\,\omega_n t\right)1(t) \ast \left[\left(n_1 - \frac{d_1}{d_2}n_2\right)\delta'(t) + \left(n_0 - \frac{d_0}{d_2}n_2\right)\delta(t)\right] + n_2\,\delta(t)\right\} \ast u(t) \\ &= \frac{1}{d_2}\left\{\frac{e^{-\zeta\omega_n t}}{\sqrt{1-\zeta^2}\,\omega_n}\,\sin\!\left(\sqrt{1-\zeta^2}\,\omega_n t\right)1(t) \ast \left[n_2\,\delta''(t) + n_1\,\delta'(t) + n_0\,\delta(t)\right]\right\} \ast u(t) \end{aligned} \qquad (25.45)$$

The bottom equivalent form is easily seen to result from writing the transfer function H(p) as the product
of the two terms 1/[d2(p² + 2ζωnp + ωn²)] and [n2p² + n1p + n0], and convolving the causal impulse response (the
inverse of the left half-plane converging Laplace transform) of each term. From (25.45), we directly read
the impulse response to be

$$h(t) = \frac{1}{d_2}\left\{\frac{e^{-\zeta\omega_n t}}{\sqrt{1-\zeta^2}\,\omega_n}\,\sin\!\left(\sqrt{1-\zeta^2}\,\omega_n t\right)1(t) \ast \left[n_2\,\delta''(t) + n_1\,\delta'(t) + n_0\,\delta(t)\right]\right\} \qquad (25.46)$$
Equations (25.45) and (25.46) are readily evaluated further by noting that the convolution of a function
with the second derivative of the impulse, the first derivative of the impulse, and the impulse itself is the
second derivative of the function, the first derivative of the function, and the function itself, respectively.
For example, in the low-pass case we find the impulse response to be, using (25.46),

$$h_{lp}(t) = \frac{n_0}{d_2}\,\frac{e^{-\zeta\omega_n t}}{\sqrt{1-\zeta^2}\,\omega_n}\,\sin\!\left(\sqrt{1-\zeta^2}\,\omega_n t\right)1(t) \qquad (25.47)$$

By differentiating we find the bandpass and then high-pass impulse responses to be, respectively,


$$h_{bp}(t) = \frac{n_1}{d_2}\,\frac{e^{-\zeta\omega_n t}}{\sqrt{1-\zeta^2}}\,\cos\!\left(\sqrt{1-\zeta^2}\,\omega_n t + \theta\right)1(t) \qquad (25.48)$$

$$h_{hp}(t) = \frac{n_2}{d_2}\left[\delta(t) - \frac{\omega_n\, e^{-\zeta\omega_n t}}{\sqrt{1-\zeta^2}}\,\sin\!\left(\sqrt{1-\zeta^2}\,\omega_n t + 2\theta\right)1(t)\right] \qquad (25.49)$$

In both cases, the added phase angle is given, as in the zero-input response, via θ = arctan2(ζ/√(1−ζ²)).
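As a check on (25.47), the analytic low-pass impulse response can be compared with the impulse response scipy computes from the transfer function. The parameter values in this sketch are arbitrary choices of ours.

```python
# Analytic low-pass impulse response (25.47) versus scipy's computation.
import numpy as np
from scipy.signal import lti, impulse

d2, n0 = 1.0, 4.0
wn, zeta = 2.0, 0.25                         # assumed, with zeta^2 < 1
sys = lti([n0 / d2], [1.0, 2.0 * zeta * wn, wn**2])

t = np.linspace(0.0, 10.0, 2000)
_, h_scipy = impulse(sys, T=t)

wd = np.sqrt(1.0 - zeta**2) * wn
h_formula = (n0 / d2) * np.exp(-zeta * wn * t) / wd * np.sin(wd * t)
assert np.max(np.abs(h_scipy - h_formula)) < 1e-6
```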
By adding these last three impulse responses suitably scaled the impulse responses of the more general
second-order circuits are obtained.
Some comments on normalizations are worth mentioning in passing. Because d2 ≠ 0, one could
assume d2 to be 1 by absorbing its actual value into the transfer function numerator coefficients. If ωn ≠
0, time could also be scaled so that ωn = 1 could be taken, in which case a normalized time, tn = ωnt, is
introduced. Along with normalized time comes a normalized differential operator
pn = d[·]/dtn = d[·]/d(ωnt) = p/ωn. This, in turn, leads to a normalized transfer function by substituting
p = ωnpn into H(p). Thus, much of the treatment could be carried out on the normalized transfer
function

$$H_n(p_n) = H(p)\Big|_{p = \omega_n p_n} = \frac{n_{2n}\,p_n^2 + n_{1n}\,p_n + n_{0n}}{p_n^2 + 2\zeta p_n + 1} \qquad (25.50)$$

In this normalized form, it appears that the most important parameter in fixing the form of the response
is the damping factor ζ = 1/(2Q).

Transient and Steady-State Responses


Let us now excite the circuit with an eternal exponential input, u(t) = U exp(st) for –∞ < t < ∞ at the
complex frequency s = σ + jω, where s is chosen as different from either of the natural frequencies, s± ,
and U is a constant. As with the first-order and, indeed, any higher-order, case the response is y(t) = Y(s)
exp(st), as is observed by direct substitution into (25.32). This substitution yields directly

$$Y(s) = \frac{1}{d_2}\left[\frac{n_2 s^2 + n_1 s + n_0}{s^2 + 2\zeta\omega_n s + \omega_n^2}\right] \cdot U \qquad (25.51)$$
where y(t) = Y(s) exp(st) for u(t) = U exp(st) over –∞ < t < ∞. That is, an exponential excitation yields
an exponential response at the same (complex) frequency s = σ + jω as that for the input, as long as s
is not one of the two natural frequencies. (s may have positive as well as negative real parts and is best
considered as a frequency and not as the Laplace transform variable because the latter is limited to regions
of convergence.) Because the denominator polynomial of Y(s) has roots which are the natural frequencies,
the magnitude of Y becomes infinite as the frequency of the excitation approaches s+ or s– . Thus, the
natural frequencies s+ and s– are also called poles of the transfer function.
When σ = 0 the excitation and response are both sinusoidal and the resulting response is called the
sinusoidal steady state (SSS). From (25.51), the SSS response is found by substituting the complex
frequency s = jω into the transfer function, now evaluated on complex numbers rather than differential
operators as above,

$$H(s) = \frac{1}{d_2}\left[\frac{n_2 s^2 + n_1 s + n_0}{s^2 + 2\zeta\omega_n s + \omega_n^2}\right] \qquad (25.52)$$

Next, an exponential input is applied, which starts at t = 0 instead of at t = –∞; i.e., u(t) = U exp(st)1(t).
Then, the output is found by using the convolution y = h ∗ u, which, from the discussion at (25.45), is
expressed as


$$\begin{aligned} y(t) = h \ast u &= \frac{1}{d_2}\left[e^{s_+t}1(t) \ast e^{s_-t}1(t)\right] \ast \left[n_2\delta''(t) + n_1\delta'(t) + n_0\delta(t)\right] \ast U e^{st}1(t) \\ &= H(s)\,U e^{st}\,1(t) + \frac{U}{d_2\left(s_+ - s_-\right)}\left\{\left[\frac{N(s)}{s_+ - s} + n_2\left(s + s_+\right) + n_1\right]e^{s_+t} - \left[\frac{N(s)}{s_- - s} + n_2\left(s + s_-\right) + n_1\right]e^{s_-t}\right\}1(t) \end{aligned} \qquad (25.53)$$

in which N(s) is the numerator of the transfer function and we have assumed that s is not equal to a
natural frequency. The second term on the right within the braces varies at the natural frequencies and
as such is called the transient response, while the first term is the term resulting directly from an eternal
exponential, but now with the negative time portion of the response removed. If the system is stable, the
transient response decays to zero as time increases and, thus, if we wait long enough the transient response
of a stable system can be ignored if the complex frequency of the input exponential has a real part that
is greater than that of the natural frequencies. Such is the case for exponentials that yield sinusoids; in
that case σ = 0, or s = jω. In other words, for an asymptotically stable circuit the output approaches that
of the SSS when the input frequency is purely imaginary. If we were to excite at a natural frequency then
the first part of (25.53) still could be evaluated using the time-multiplied exponential of (25.43); however,
the transient and the steady state are now mixed, both being at the same “frequency.”
Because actual sinusoidal signals are real, we use superposition and the fact that the real part of a
complex signal is given by adding complex conjugate terms:

$$\cos(\omega t) = \Re\left[e^{j\omega t}\right] = \frac{e^{j\omega t} + e^{-j\omega t}}{2} \qquad (25.54)$$

This leads to the SSS response for an asymptotically stable circuit excited by u(t) = U cos (ωt)1(t) to be

$$y(t) = \frac{H(j\omega)\,U e^{j\omega t} + H(-j\omega)\,U^{*} e^{-j\omega t}}{2} = \left|H(j\omega)\right|\left|U\right|\cos\!\left(\omega t + \angle H(j\omega) + \angle U\right) \qquad (25.55)$$
Here, we assumed that the circuit has real-valued components, so that H(−jω) is the complex conjugate
of H(jω); in that case, the second term in the middle expression is the complex conjugate of the first.
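The SSS relation (25.55) is easy to verify numerically: drive a stable second-order section with a cosine, discard an initial interval several time constants long, and compare the remainder with |H(jω)||U|cos(ωt + ∠H(jω)). The sketch below does exactly that for a unity-gain low-pass with ζ, ωn, and the drive frequency all arbitrary choices of ours.

```python
# Sinusoidal steady state (25.55) for a second-order low-pass section.
import numpy as np
from scipy.signal import lti, lsim

wn, zeta = 2.0, 0.5
sys = lti([wn**2], [1.0, 2.0 * zeta * wn, wn**2])     # unity-gain low-pass

w, U = 3.0, 1.0
t = np.linspace(0.0, 40.0, 20000)
_, y, _ = lsim(sys, U * np.cos(w * t), t)

Hjw = wn**2 / ((1j * w)**2 + 2.0 * zeta * wn * (1j * w) + wn**2)
y_sss = abs(Hjw) * U * np.cos(w * t + np.angle(Hjw))

tail = t > 20.0                 # transient ~ exp(-zeta*wn*t) has long decayed
assert np.max(np.abs(y[tail] - y_sss[tail])) < 1e-3
```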

Network Characterization
Although the impulse response is useful for theoretical studies, it is difficult to observe it experimentally
due to the impossibility of creating an impulse. However, the unit step response is readily measured, and
from it the impulse response actually can be obtained by numerical differentiation if needed. It is nevertheless
more convenient to work directly with the unit step response and, consequently, practical characterizations
can be based upon it. The treatment most conveniently proceeds from the normalized low-pass transfer function
$$H(p) = \frac{1}{p^2 + 2\zeta p + 1}, \qquad 0 < \zeta < 1 \qquad (25.56)$$

The unit step response follows by applying the input u(t) = 1(t) and noting that the unit step is the
special case of an exponentially multiplied unit step where the frequency of the exponential is zero.
Conveniently, (25.43) can be used to obtain


[FIGURE 25.3 Unit step response yus(t) versus normalized time for damping factors ζ = 0.4, 0.7, and 0.9.]

$$y_{us}(t) = 1(t) - \frac{e^{-\zeta t}}{\sqrt{1-\zeta^2}}\,\cos\!\left(\sqrt{1-\zeta^2}\,t - \theta\right)\cdot 1(t), \qquad \theta = \arctan\!\left(\frac{\zeta}{\sqrt{1-\zeta^2}}\right) \qquad (25.57)$$

Typical unit step responses are plotted in Fig. 25.3. For a small damping factor, overshoot can be
considerable, with oscillations around the final value and, in addition, a long settling time before reaching
rise to the final value is long. A compromise for obtaining a quick rise to the final value with no oscillations
is given by choosing a damping factor of 0.7, this being called the critical value; i.e., critical damping
is ζcrit = 0.7.
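The overshoot visible in Fig. 25.3 can be quantified directly from (25.57). The sketch below, our own illustration, computes the peak overshoot for the three plotted damping factors and compares it with the standard second-order overshoot formula e^{−πζ/√(1−ζ²)}, which is quoted here for comparison rather than derived in the text.

```python
# Peak overshoot of the normalized unit step response (25.57).
import numpy as np

t = np.linspace(0.0, 20.0, 200001)
for zeta in (0.4, 0.7, 0.9):
    wd = np.sqrt(1.0 - zeta**2)
    theta = np.arctan(zeta / wd)
    y = 1.0 - np.exp(-zeta * t) / wd * np.cos(wd * t - theta)   # (25.57), t > 0
    # compare with the standard peak-overshoot formula exp(-pi*zeta/sqrt(1-zeta^2))
    print(zeta, y.max() - 1.0, np.exp(-np.pi * zeta / wd))
```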

References
[1] L. P. Huelsman, Basic Circuit Theory with Digital Computations, Englewood Cliffs, NJ: Prentice
Hall, 1972.
[2] V. I. Arnold, Ordinary Differential Equations, Cambridge, MA: MIT Press, 1983.
[3] J. E. Kardontchik, Introduction to the Design of Transconductor-Capacitor Filters, Boston: Kluwer
Academic Publishers, 1992.
[4] R. P. Sallen and E. L. Key, “A practical method of designing RC active filters,” IRE Trans. Circuit
Theory, vol. CT-2, no. 1, pp. 74–85, March 1955.
