Wideband Amplifiers
by
PETER STARIČ
Jožef Stefan Institute,
Ljubljana, Slovenia
and
ERIK MARGAN
Jožef Stefan Institute,
Ljubljana, Slovenia
A C.I.P. Catalogue record for this book is available from the Library of Congress.
Published by Springer,
P.O. Box 17, 3300 AA Dordrecht, The Netherlands.
www.springer.com
Acknowledgments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix
Foreword . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi
Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 625
P.Starič, E.Margan Wideband Amplifiers
Acknowledgments
The authors are grateful to John Addis, Carl Battjes, Dennis Feucht, Bruce
Hoffer and Bob Ross, all former employees of Tektronix, Inc., for allowing us to use
their class notes, ideas, and publications, and for their help when we ran into
problems concerning some specific circuits.
We are also thankful to Prof. Ivan Vidav of the Faculty of Mathematics and
Physics in Ljubljana for his help in reviewing Part 1, and to Csaba Szathmary, a
former employee of EMG, Budapest, for allowing us to use some of his measurement
results in Part 5.
However, if, in spite of meticulously reviewing the text, we have overlooked
some errors, this, of course, is our own responsibility alone; we shall be grateful to
everyone for bringing such errors to our attention, so that they can be corrected in the
next edition. To report the errors please use one of the e-mail addresses below.
peter.staric@guest.arnes.si
erik.margan@ijs.si
Foreword
With the exception of the tragedy of September 11, the year 2001 was
relatively normal and uneventful: remember, this should have been the year of
Clarke's and Kubrick's Space Odyssey, the mission to Jupiter; it should have been the
year of the HAL-9000 computer.
Today, the Personal Computer is as ubiquitous as HAL was on the Discovery
spaceship. And the rate of technology development and market growth in the
electronics industry still follows the famous 'Moore's Law', almost four decades
after it was first formulated: in 1965, Gordon Moore of Intel Corporation
predicted the doubling of the number of transistors on a chip every 2 years, corrected
to 18 months in 1967; at that time, the landing on the Moon was in full preparation.
Curiously enough, today no one cares to go to the Moon again, let alone
Jupiter. And, in spite of all the effort in digital engineering, we still do not have
anything close to 0.1% of HAL's capacity (fortunately?!). Whilst there are many
research labs striving to put artificial intelligence into a computer, there are also
rumors that this has already happened (with Windows-95, of course!).
In the early 1990s it was felt that digital electronics would eventually render
analog systems obsolete. This never happened. Not only is the analog sector as vital
as ever, but job market demand is expanding in all fields, from high-speed
measurement instrumentation and data acquisition, telecommunications and radio
frequency engineering, high-quality audio and video, to grounding and shielding,
electromagnetic interference suppression and low-noise printed circuit board design,
to name a few. And it looks like this demand will continue for decades to come.
But whilst the proliferation of digital systems attracted a relatively high
number of hardware and software engineers, analog engineers are still rare birds. So,
for creative young people who want to push the envelope, there are lots of
opportunities in the analog field.
However, analog electronics did not earn its "Black-Magic Art" attribute in
vain. If you have ever experienced the problems and frustrations from circuits found
in too many 'cook-books' and 'sure-working schemes' in electronics magazines, and
if you have become tired of performing exorcism on every circuit you build, then it is
probably time to try a different way: in our own experience, the 'hard' way of
doing the correct math first often turns out to be the 'easy' way!
Here is the book "Wideband Amplifiers". The book is intended to serve both
as a design manual for more experienced engineers and as a good learning guide for
beginners. It should help you to improve your analog designs, making better and faster
amplifier circuits, especially if time domain performance is of major concern. We
have striven to provide the complete math for every design stage. And, to make
learning a joyful experience, we explain the derivation of important math relations
from a design engineer's point of view, in an intuitive and self-evident manner (rigorous
mathematicians might not like our approach). We have included many practical
applications, schematics, performance plots, and a number of computer routines.
However, as it is with any interesting subject, the greatest problem was never
what to include, but rather what to leave out!
In the foreword of his popular book "A Brief History of Time", Stephen
Hawking wrote that his publisher warned him not to include any math, since the
number of readers would be halved by each formula. So he included just E = mc²
and bravely cut out one half of the world population.
We went further: there are some 220 formulae in Part 1 alone. Estimating
the current world population at some 6×10⁹, of which 0.01% could be electronics
engineers, and assuming an average lifetime interest in the subject of, say, 30 years, if
the publisher's rule holds, there ought to be one reader of our book once every:

2²²⁰ / (6×10⁹ × 10⁻⁴ × 30 × 365 × 24 × 3600) ≈ 3×10⁵¹ seconds

or something like 6.6×10³³ × the total age of the Universe!
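The arithmetic of this joke is easy to reproduce; the following short sketch (ours, not part of the original text) plugs the foreword's own figures into the publisher's halving rule, using 365-day years and a 13.7-billion-year Universe (the foreword's slightly different roundings give its 6.6×10³³):

```python
# Readership after 220 halvings (the publisher's rule), using the foreword's
# figures: 6e9 people, 0.01 % electronics engineers, 30 years of interest each.
population = 6e9
engineer_fraction = 1e-4                   # 0.01 %
lifetime_s = 30 * 365 * 24 * 3600          # seconds of interest per engineer
reader_seconds = population * engineer_fraction * lifetime_s

seconds_per_reader = 2**220 / reader_seconds
print(f"one reader every {seconds_per_reader:.1e} seconds")    # ≈ 3.0e51

universe_age_s = 13.7e9 * 365 * 24 * 3600  # rough age of the Universe
print(f"≈ {seconds_per_reader / universe_age_s:.1e} × the age of the Universe")
```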
Now, whatever you might think of it, this book is not about math! It is about
getting your design to run right the first time! Be warned, though, that it will not be
enough just to read the book. To have any value, a theory must be put into practice.
Although there is no theoretical substitute for hands-on experience, this book should
help you to significantly shorten the trial-and-error phase.
We hope that by studying this book thoroughly you will find yourself at the
beginning of a wonderful journey!
Important Note:
We would like to reassure the Concerned Environmentalists
that during the writing of this book, no animal or plant suffered
any harm whatsoever, either directly or indirectly (excluding the
authors, one computer 'mouse' and countless computation 'bugs'!).
Release Notes
P. Starič, E. Margan
Wideband Amplifiers
Part 1
About Transforms
P. Starič, E.Margan The Laplace Transform
Contents:
1.0 Introduction ............................................................................................................................................. 1.5
1.1 Three Different Ways of Expressing a Sinusoidal Function .................................................................. 1.7
1.2 The Fourier Series ................................................................................................................................. 1.11
1.3 The Fourier Integral .............................................................................................................................. 1.17
1.4 The Laplace Transform ......................................................................................................................... 1.23
1.5 Examples of Direct Laplace Transform ................................................................................................. 1.25
1.5.1 Example 1 ............................................................................................................................ 1.25
1.5.2 Example 2 ............................................................................................................................ 1.25
1.5.3 Example 3 ............................................................................................................................ 1.26
1.5.4 Example 4 ............................................................................................................................ 1.26
1.5.5 Example 5 ............................................................................................................................ 1.27
1.5.6 Example 6 ............................................................................................................................ 1.27
1.5.7 Example 7 ............................................................................................................................ 1.28
1.5.8 Example 8 ............................................................................................................................. 1.28
1.5.9 Example 9 ............................................................................................................................ 1.29
1.5.10 Example 10 ........................................................................................................................ 1.29
1.6 Important Properties of the Laplace Transform ..................................................................................... 1.31
1.6.1 Linearity (1) ......................................................................................................................... 1.31
1.6.2 Linearity (2) ......................................................................................................................... 1.31
1.6.3 Real Differentiation .............................................................................................................. 1.31
1.6.4 Real Integration .................................................................................................................... 1.32
1.6.5 Change of Scale ................................................................................................................... 1.34
1.6.6 Impulse δ(t) .......................................................................................................... 1.35
1.6.7 Initial and Final Value Theorems ......................................................................................... 1.36
1.6.8 Convolution .......................................................................................................................... 1.37
1.7 Application of the ℒ transform in Network Analysis ............................................................ 1.41
1.7.1 Inductance ............................................................................................................................ 1.41
1.7.2 Capacitance .......................................................................................................................... 1.41
1.7.3 Resistance ............................................................................................................................ 1.42
1.7.4 Resistor and capacitor in parallel ......................................................................................... 1.42
1.8 Complex Line Integrals ......................................................................................................................... 1.45
1.8.1 Example 1 ............................................................................................................................ 1.49
1.8.2 Example 2 ............................................................................................................................ 1.49
1.8.3 Example 3 ............................................................................................................................ 1.49
1.8.4 Example 4 ............................................................................................................................ 1.50
1.8.5 Example 5 ............................................................................................................................ 1.50
1.8.6 Example 6 ............................................................................................................................ 1.50
1.9 Contour Integrals ................................................................................................................................... 1.53
1.10 Cauchy’s Way of Expressing Analytic Functions ............................................................................... 1.55
1.10.1 Example 1 .......................................................................................................................... 1.58
1.10.2 Example 2 .......................................................................................................................... 1.58
1.11 Residues of Functions with Multiple Poles, the Laurent Series ........................................................... 1.61
1.11.1 Example 1 .......................................................................................................................... 1.63
1.11.2 Example 2 .......................................................................................................................... 1.63
1.12 Complex Integration Around Many Poles:
The Cauchy–Goursat Theorem ...................................................................................................... 1.65
1.13 Equality of the Integrals ∫_{σ−j∞}^{σ+j∞} F(s)eˢᵗ ds and ∮ F(s)eˢᵗ ds .......................................... 1.67
1.14 Application of the Inverse Laplace Transform .................................................................................... 1.73
1.15 Convolution ......................................................................................................................................... 1.81
Résumé of Part 1 .......................................................................................................................................... 1.85
References .................................................................................................................................................... 1.87
Appendix 1.1: Simple Poles, Complex Spaces ...................................................................................(CD) A1.1
List of Tables:
Table 1.2.1: Square Wave Fourier Components ........................................................................................... 1.15
Table 1.5.1: Ten Laplace Transform Examples ............................................................................................ 1.30
Table 1.6.1: Laplace Transform Properties .................................................................................................. 1.39
Table 1.8.1: Differences Between Real and Complex Line Integrals ........................................................... 1.48
List of Figures:
Fig. 1.1.1: Sine wave in three ways ................................................................................................................. 1.7
Fig. 1.1.2: Amplifier overdrive harmonics ...................................................................................................... 1.9
Fig. 1.1.3: Complex phasors ........................................................................................................................... 1.9
Fig. 1.2.1: Square wave and its phasors ........................................................................................................ 1.11
Fig. 1.2.2: Square wave phasors rotating ...................................................................................................... 1.12
Fig. 1.2.3: Waveform with and without DC component ............................................................................... 1.13
Fig. 1.2.4: Integration of rotating and stationary phasors ............................................................................. 1.14
Fig. 1.2.5: Square wave signal definition ...................................................................................................... 1.14
Fig. 1.2.6: Square wave frequency spectrum ................................................................................................ 1.14
Fig. 1.2.7: Gibbs’ phenomenon .................................................................................................................... 1.16
Fig. 1.2.8: Periodic waveform example ........................................................................................................ 1.16
Fig. 1.3.1: Square wave with extended period .............................................................................................. 1.17
Fig. 1.3.2: Complex spectrum of the time-spaced square wave ..................................................... 1.17
Fig. 1.3.3: Complex spectrum of the square pulse with infinite period ......................................................... 1.20
Fig. 1.3.4: Periodic and aperiodic functions ................................................................................................. 1.21
Fig. 1.4.1: The abscissa of absolute convergence ......................................................................................... 1.24
Fig. 1.5.1: Unit step function ........................................................................................................................ 1.25
Fig. 1.5.2: Unit step delayed ......................................................................................................................... 1.25
Fig. 1.5.3: Exponential function ................................................................................................................... 1.26
Fig. 1.5.4: Sine function ............................................................................................................................... 1.26
Fig. 1.5.5: Cosine function ............................................................................................................................ 1.27
Fig. 1.5.6: Damped oscillations .................................................................................................................... 1.27
Fig. 1.5.7: Linear ramp function ................................................................................................................... 1.28
Fig. 1.5.8: Power function ............................................................................................................................ 1.28
Fig. 1.5.9: Composite linear and exponential function ................................................................................. 1.30
Fig. 1.5.10: Composite power and exponential function .............................................................................. 1.30
Fig. 1.6.1: The Dirac impulse function ......................................................................................................... 1.35
Fig. 1.7.1: Instantaneous voltage on L, C and R ............................................................................ 1.41
Fig. 1.7.2: Step response of an RC network .................................................................................. 1.43
Fig. 1.8.1: Integral of a real inverting function ............................................................................................. 1.45
Fig. 1.8.2: Integral of a complex inverting function ..................................................................................... 1.47
Fig. 1.8.3: Different integration paths of equal result ................................................................................... 1.49
Fig. 1.8.4: Similar integration paths of different result ................................................................................. 1.49
Fig. 1.8.5: Integration paths about a pole ..................................................................................................... 1.51
Fig. 1.8.6: Integration paths near a pole ....................................................................................................... 1.51
Fig. 1.8.7: Arbitrary integration paths .......................................................................................................... 1.51
Fig. 1.8.8: Integration path encircling a pole ................................................................................................ 1.51
Fig. 1.9.1: Contour integration path around a pole ....................................................................................... 1.53
Fig. 1.9.2: Contour integration not including a pole ..................................................................................... 1.53
Fig. 1.10.1: Cauchy’s method of expressing analytical functions ................................................................. 1.55
Fig. 1.12.1: Emmentaler cheese .................................................................................................................... 1.65
Fig. 1.12.2: Integration path encircling many poles ...................................................................................... 1.65
Fig. 1.13.1: Complex line integration of a complex function ....................................................................... 1.67
Fig. 1.13.2: Integration path of Fig.1.13.1 .................................................................................................... 1.67
Fig. 1.13.3: Integral area is smaller than ML ................................................................................. 1.67
Fig. 1.13.4: Cartesian and polar representation of complex numbers ........................................................... 1.68
Fig. 1.13.5: Integration path for proving the Laplace transform .................................................... 1.69
Fig. 1.13.6: Integration path for proving the input functions ......................................................... 1.71
Fig. 1.14.1: RLC circuit driven by a current step ........................................................................... 1.73
Fig. 1.14.2: RLC circuit transfer function magnitude .................................................................... 1.75
Fig. 1.14.3: RLC circuit in time domain ........................................................................................ 1.79
Fig. 1.15.1: Convolution of two functions .................................................................................................... 1.82
Fig. 1.15.2: System response calculus in time and frequency domain .......................................................... 1.83
1.0 Introduction
With the advent of television and radar during the Second World War, the behavior
of wideband amplifiers in the time domain became very important [Ref. 1.1]. In today's
digital world this is even more the case. It is a paradox that designers and troubleshooters of
digital equipment still depend on oscilloscopes, which — at least in their fast, low level
input part — consist of analog wideband amplifiers. So the calculation of the time domain
response of wideband amplifiers has become even more important than that of the frequency,
phase, and time delay response.
The emphasis of this book is on the amplifier's time domain response. Therefore a
thorough knowledge of time-related calculus, explained in Part 1, is a necessary
prerequisite for understanding all other parts of this book, where wideband amplifier
networks are discussed.
The time domain response of an amplifier can be calculated by two main methods:
the first is based on differential equations and the second uses the inverse Laplace
transform (ℒ⁻¹ transform). The differential equation method requires the calculation of
boundary conditions, which — in the case of high-order equations — is an unpleasant and
time consuming job. Another method, which also uses differential equations, is the so
called state variable calculation, in which a differential equation of order n is split into n
differential equations of the first order, in order to simplify the calculations. The state
variable method also allows the calculation of nonlinear differential equations. We will use
neither of these, for the simple reason that the Laplace transform and its inverse are based
on the system poles and zeros, which prove so useful for network calculations in the
frequency domain in the later parts of the book. So most of the data which are calculated
there are used further in the time domain analysis, thus saving a great deal of work. Also,
the use of the ℒ⁻¹ transform does not require the calculation of boundary conditions, giving the
result directly in the time domain.
In using the ℒ⁻¹ transform most engineers depend on tables. Their method consists
firstly of splitting the amplifier transfer function into partial fractions and then looking for
the corresponding time domain functions in the ℒ transform tables. The sum of all these
functions (as derived from the partial fractions) is then the result. The difficulty arises when no
corresponding function can be found in the tables, or even at an earlier stage, if the
mathematical knowledge available is insufficient to transform the partial fractions into such
a form as to correspond to the formulae in the tables.
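To sketch what this pole-based self-sufficiency looks like in practice, here is a small illustration of ours (not an example from the book): for a rational F(s) with distinct poles, each residue is the coefficient of one exponential term in f(t), so the ℒ⁻¹ transform can be assembled directly, without tables. The function names below are our own.

```python
import cmath

def inverse_laplace(poles, num):
    """Build f(t) for F(s) = num(s) / prod(s - p_k), with distinct poles p_k.
    The residue at p_k is r_k = num(p_k) / prod_{i!=k}(p_k - p_i), and each
    pole contributes r_k * exp(p_k * t) to the time domain response."""
    def f(t):
        total = 0j
        for k, p in enumerate(poles):
            denom = 1
            for i, q in enumerate(poles):
                if i != k:
                    denom *= p - q
            total += num(p) / denom * cmath.exp(p * t)
        return total.real          # conjugate pole pairs sum to a real value
    return f

# F(s) = 1 / ((s + 1)(s + 2))  ->  f(t) = e^(-t) - e^(-2t)
step = inverse_laplace([-1, -2], lambda s: 1)
print(step(0.5))                   # ≈ 0.2387

# F(s) = 1 / (s^2 + 1), poles at +/- j  ->  f(t) = sin(t)
sine = inverse_laplace([1j, -1j], lambda s: 1)
print(sine(1.0))                   # ≈ 0.8415
```

Multiple poles need the Laurent-series treatment of Sec. 1.11; this sketch covers only the distinct-pole case.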
In our opinion an amplifier designer should be self-sufficient in calculating the time
domain response of a wideband amplifier. Fortunately, this can almost always be derived
from simple rational functions, and it is relatively easy to learn the ℒ⁻¹ transforms for such
cases. In Part 1 we show how this is done in general, as well as for a few simple examples.
A great deal of effort has been spent on illustrating the less clear relationships by relevant
figures. Since engineers seek to obtain a first-glance insight into their subject of study, we
believe this approach will be helpful.
This part consists of four main sections. In the first, the concept of harmonic (i.e.,
sinusoidal) functions, expressed by pairs of counter-rotating complex conjugate phasors, is
explained. Then the Fourier series of periodic waveforms are discussed to obtain the
discrete spectra of periodic waveforms. This is followed by the Fourier integral to obtain
continuous spectra of non-repetitive waveforms. The convergence problem of the Fourier
integral is solved by introducing the complex frequency variable s = σ + jω, thus arriving
at the direct Laplace transform (ℒ transform).
The second section shows some examples of the ℒ transform. The results are
useful when we seek the inverse transforms of simple functions.
The third section deals with the theory of functions of complex variables, but only
to the extent that is needed for understanding the inverse Laplace transform. Here the line
and contour integrals (Cauchy integrals), the theory of residues, the Laurent series and the
ℒ⁻¹ transform of rational functions are discussed. The existence of the ℒ⁻¹ transform for
rational functions is proved by means of the Cauchy integral.
Finally, the concluding section deals with some aspects of the ℒ⁻¹ transform and
the convolution integral. Only two standard problems of the ℒ⁻¹ transform are shown,
because all the transient response calculations (by means of the contour integration and the
theory of residues) of amplifier networks, presented in Parts 2–5, give enough examples and
help to acquire the necessary know-how.
It is probably impossible to discuss the Laplace transform in a manner which would
satisfy both engineers and mathematicians. Professor Ivan Vidav said: "If we
mathematicians are satisfied, you engineers would not be, and vice versa". Here we have
tried to achieve the best possible compromise: to satisfy electronics engineers and at the
same time not to 'offend' the mathematicians. But, as our late colleague, the physicist Marko
Kogoj, used to say: "Engineers never know enough mathematics; only mathematicians
know their science to the extent which is satisfactory for an engineer, but they hardly ever
know what to do with it!" Thus successful engineers keep improving their general
knowledge of mathematics — far beyond the text presented here.
After studying this part the readers will have enough knowledge to understand all
the time domain calculations in the subsequent parts of the book. In addition, the readers
will acquire the basic knowledge needed to do the time domain calculations by themselves
and so become independent of ℒ transform tables. Of course, in order to save time, they
will undoubtedly still use the tables occasionally, or even make tables of their own. But
they will be using them with much more understanding and self-confidence, compared
with those who can perform the ℒ⁻¹ transform only via the partial fraction expansion and
the tables of basic functions.
Those readers who have already mastered the Laplace transform and its inverse
can skip this part up to Sec. 1.14, where the ℒ⁻¹ transform of a two-pole network is dealt
with. From there on we discuss the basic examples which we use later in many parts of the
book; the content of Sec. 1.14 should be understood thoroughly. However, if the reader
notices any substantial gaps in his/her knowledge, it is better to start at the beginning.
In the last two parts of this book, Parts 6 and 7, we derive a set of computer
algorithms which reduce the circuit's time domain analysis, performance plotting and pole
layout optimization to pure routine. However attractive this may seem, we nevertheless
recommend the study of Part 1: a good engineer must understand the tools he/she is using in
order to use them effectively.
1.1 Three Different Ways of Expressing a Sinusoidal Function

We will first show how a sinusoidal function can be expressed in three different
ways. The most common way is to express the instantaneous value a of a sinusoid of
amplitude A and angular frequency ω₁ = 2πf₁ (f₁ = frequency) by the well known
formula:

a = f(t) = A sin ω₁t     (1.1.1)

The reason that we have appended the index '1' to ω will become apparent very
soon, when we discuss complex signals containing different frequency components.
The amplitude vs. time relation of this function is shown in Fig. 1.1.1a. This is the most
familiar display, seen by using any sine-wave oscillator and an oscilloscope.
[Fig. 1.1.1: graphic panels a)–d) appear here.]

Fig. 1.1.1: Three different presentations of a sine wave: a) amplitude in the time domain; b) a phasor
of length A, rotating with angular frequency ω₁; c) two complex conjugate phasors of length A/2,
rotating in opposite directions with angular frequencies ±ω₁, shown at ω₁t = 0; d) the same as c),
except at ω₁t = π/4.
f(t) = Â = A e^{jω₁t}     (1.1.2)
displayed in a three-dimensional presentation in Fig. 1.1.1c. Here both phasors are shown at
ω₁t = 0 (or ω₁t = 2π, 4π, …). The sum of both phasors has the instantaneous value a,
which is always real. This is ensured because the phasors rotate with the same angular
frequency, one at +ω₁ and the other at −ω₁, starting as shown in Fig. 1.1.1c, and therefore
they are complex conjugate at any instant. We express a by the well-known Euler formula:
a = f(t) = A sin ω₁t = (A/2j) (e^{jω₁t} − e^{−jω₁t})     (1.1.3)
The j in the denominator means that both phasors are imaginary at t = 0. The sum of both
rotating phasors is then zero, because:

f(0) = (A/2j) e^{jω₁·0} − (A/2j) e^{−jω₁·0} = 0     (1.1.4)
Both phasors in Fig. 1.1.1c and 1.1.1d are placed on the frequency axis at such a
distance from the origin as to correspond to the frequency ±ω₁. Since the phasors rotate
with time, Fig. 1.1.1d, which shows them at φ = ω₁t = π/4, helps us to acquire the idea
of a three-dimensional presentation. The understanding of these simple time–frequency
relations, presented in Fig. 1.1.1c and 1.1.1d and expressed by Eq. 1.1.3, is essential for
understanding both the Fourier transform and the Laplace transform.
Eq. 1.1.3 can be changed to the cosine function if the phasor with =" is multiplied
by 4 œ e4 1Î# and the phasor with =" by 4 œ e4 1Î# . The first multiplication means a
counter-clockwise rotation by *!° and the second a clockwise rotation by *!°. This causes
both phasors to become real at time > œ !, their sum again equaling E:
In general, a sinusoidal function with a non-zero phase angle $\varphi$ at $t = 0$ is expressed as:

$$A \sin(\omega t + \varphi) = \frac{A}{2j}\left[e^{\,j(\omega t + \varphi)} - e^{-j(\omega t + \varphi)}\right] \qquad (1.1.6)$$
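The conjugate-phasor decomposition of Eq. 1.1.6 is easy to verify numerically. The following sketch (plain Python; the values of $A$, $\omega_1$ and $\varphi$ are arbitrary choices, not from the text) sums the two counter-rotating phasors and checks that the result is always real and equals $A\sin(\omega_1 t + \varphi)$:

```python
import cmath
import math

A, w1, phi = 1.0, 2 * math.pi * 50.0, math.pi / 6   # assumed sample values

def phasor_sum(t):
    """Sum of the two counter-rotating conjugate phasors of Eq. 1.1.6."""
    p_pos = (A / 2j) * cmath.exp(1j * (w1 * t + phi))    # rotates counter-clockwise
    p_neg = -(A / 2j) * cmath.exp(-1j * (w1 * t + phi))  # rotates clockwise
    return p_pos + p_neg

for t in (0.0, 1e-3, 2.7e-3):
    s = phasor_sum(t)
    assert abs(s.imag) < 1e-12                            # the sum is always real
    assert abs(s.real - A * math.sin(w1 * t + phi)) < 1e-12
```

Each phasor alone is complex; only their conjugate pairing makes the sum real at every instant.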
The need to introduce the frequency axis in Fig. 1.1.1c and 1.1.1d will become apparent in the experiment shown in Fig. 1.1.2. Here we have a unity gain amplifier with a poor loop gain, driven by a sine wave source with frequency $\omega_1$ and amplitude $A_1$, and loaded by the resistor $R_{\rm L}$. If the resistor's value is too low and the amplitude of the input signal is high, the amplifier reaches its maximum output current level and the output signal $f(t)$ becomes distorted (we have purposely kept the same notation $A$ as in the previous figure, rather than introducing the symbol $V$ for voltage). The distorted output signal contains not just the original signal with the same fundamental frequency $\omega_1$, but also a third harmonic component with the amplitude $A_3 < A_1$ and frequency $\omega_3 = 3\omega_1$:

$$f(t) = A_1 \sin \omega_1 t + A_3 \sin 3\omega_1 t = A_1 \sin \omega_1 t + A_3 \sin \omega_3 t \qquad (1.1.7)$$
Fig. 1.1.2: The amplifier is slightly overdriven by a pure sinusoidal signal $V_{\rm i}$ with a frequency $\omega_1$ and amplitude $A_{\rm i}$. The output signal $V_{\rm o}$ is distorted, and it can be represented as a sum of two signals, $V_1 + V_3$. The fundamental frequency of $V_1$ is $\omega_1$ and its amplitude $A_1$ is somewhat lower. The frequency of $V_3$ (the third harmonic component) is $\omega_3 = 3\omega_1$ and its amplitude is $A_3$.
Now let us draw the output signal in the same way as we did in Fig. 1.1.1c,d. Here we have two pairs of harmonic components: the first pair of phasors $A_1/2$ rotating with the fundamental frequency $\pm\omega_1$, and the second pair $A_3/2$ rotating with the third harmonic frequency $\pm\omega_3$, placed three times farther from the origin than $\pm\omega_1$. This is shown in Fig. 1.1.3a, where all four phasors are drawn at time $t = 0$. Fig. 1.1.3b shows the phasors at time $t = \pi/(4\omega_1)$. Because the third harmonic phasor pair rotates with an angular frequency three times higher, it rotates through an angle of $\pm 3\pi/4$ in the same time.
Fig. 1.1.3: The output signal of the amplifier in Fig. 1.1.2, expressed by two pairs of complex conjugate phasors: a) at $\omega_1 t = 0$; b) at $\omega_1 t = \pi/4$.
Mathematically, according to Fig. 1.1.2 and 1.1.3, Eq. 1.1.7 can be expressed as:

$$f(t) = \frac{A_1}{2j}\left(e^{\,j\omega_1 t} - e^{-j\omega_1 t}\right) + \frac{A_3}{2j}\left(e^{\,j\omega_3 t} - e^{-j\omega_3 t}\right) \qquad (1.1.8)$$
The amplifier output obviously cannot exceed either its supply voltage or its maximum output current. So if we keep increasing the input amplitude, the amplifier will clip the upper and lower peaks of the output waveform (some input protection, as well as some internal signal source resistance, must be assumed if we want the amplifier to survive these conditions), thus generating more harmonics. If the input amplitude is very high and the amplifier loop gain is high as well, the output voltage $f(t)$ eventually approaches a square wave shape, such as in Fig. 1.2.1b in the following section. A true mathematical square wave has an infinite number of harmonics; since no amplifier has an infinite bandwidth, the number of harmonics in the output voltage of any practical amplifier will always be finite.
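This mechanism is easy to reproduce numerically. The sketch below (plain Python; the clip level of 0.8 and the 64-sample grid are arbitrary choices, not from the text) clips a unit sine wave symmetrically and computes its Fourier coefficients directly from the defining sum, showing that odd harmonics appear while the even ones stay at zero:

```python
import cmath
import math

N = 64        # samples per period (assumed)
CLIP = 0.8    # arbitrary symmetric clipping level

# one period of a symmetrically clipped sine wave
f = [max(-CLIP, min(CLIP, math.sin(2 * math.pi * k / N))) for k in range(N)]

def coeff(n):
    """Complex Fourier coefficient A_n/2 of the sampled waveform."""
    return sum(f[k] * cmath.exp(-2j * math.pi * n * k / N) for k in range(N)) / N

a1 = abs(coeff(1))   # fundamental
a2 = abs(coeff(2))   # second harmonic: absent for symmetric clipping
a3 = abs(coeff(3))   # third harmonic: created by the clipping

assert a3 > 1e-3     # clipping generated a third harmonic
assert a2 < 1e-9     # even harmonics remain absent
assert a1 > a3       # the fundamental still dominates
```

Harder clipping (a lower `CLIP`) pushes the waveform toward a square wave and raises the higher odd harmonics accordingly.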
In the next section we are going to examine a generalized harmonic analysis.
In the experiment shown in Fig. 1.1.2 we composed the sinusoidal waveforms with the amplitudes $A_1$ and $A_3$ to get the output time function $f(t)$. Now, if we have a square wave, as in Fig. 1.2.1b, we have to deal with many more discrete frequency components. We intend to calculate their amplitudes, assuming that the time function of the square wave is known. This means a decomposition of the time function $f(t)$ into the corresponding harmonic frequency components. To do so we will examine the Fourier series, following the French mathematician Jean Baptiste Joseph de Fourier¹.

The square wave time function is periodic. A function is periodic if it acquires the same value after its characteristic period $T_1 = 2\pi/\omega_1$, at any instant $t$:

$$f(t) = f(t + T_1) \qquad (1.2.1)$$

Consequently, the same is true for $f(t) = f(t + nT_1)$, where $n$ is an integer. According to Fourier, this square wave can be expressed as a sum of harmonic components with frequencies $f_n = \pm n/T_1$. If $n = 1$ we have the fundamental frequency $f_1$ with a phasor $A_1/2$, rotating counter-clockwise. The phasor at $-f_1$, with the same length $|A_{-1}/2| = A_1/2$, rotates clockwise and forms a complex conjugate pair with the first one. A true square wave would have an infinite number of odd-order harmonics (all even-order harmonics are zero).
Fig. 1.2.1: A square wave, as shown in b), has an infinite number of odd-order frequency components, of which the first 4 complex conjugate phasor pairs are drawn in a) at time $t = (0 \pm 2n\pi)/\omega_1$, where $n$ is an integer representing the number of the period.
¹ It is interesting that Fourier developed this method in connection with thermal engineering. As a general in Napoleon's army he was concerned with gun deformation by heat. He supposed that one side of a straight metal bar is heated and the bar is then bent, joining the ends, to form a torus. He then calculated the temperature distribution along the circle so formed, expressing it as a sum of sinusoidal functions, each having a different amplitude and a different angular frequency.
In Fig. 1.2.1 we have drawn the complex conjugate phasor pairs of the first 4 harmonics. Because all the phasor pairs are always complex conjugate, the sum of any pair, as well as their total sum, is always real. The phasors rotate with different speeds and in opposite directions. Fig. 1.2.2a shows them at time $T_1/8$ to help the reader's imagination. Although this figure looks confusing, the phasors shown have an exact inter-relationship. Looking at the positive $\omega$ axis, the phasor with the amplitude $A_1/2$ has rotated in the counter-clockwise direction by an angle of $\pi/4$. During the same interval of $T_1/8$ the remaining phasors have rotated: $A_3/2$ by $3\pi/4$; $A_5/2$ by $5\pi/4$; $A_7/2$ by $7\pi/4$; etc. The corresponding complex conjugate phasors on the negative $\omega$ axis rotate likewise, but in the opposite (clockwise) direction. The sum of all phasors at any instant $t$ is the instantaneous amplitude of the time domain function. In general, the time function with the fundamental frequency $\omega_1$ is expressed as:

$$f(t) = \sum_{n=-\infty}^{\infty} \frac{A_n}{2}\, e^{\,j n \omega_1 t} \qquad (1.2.2)$$
Fig. 1.2.2: As in Fig. 1.2.1, but at an instant $t = (\pi/4 \pm 2n\pi)/\omega_1$; a) the spectrum, expressed by complex conjugate phasor pairs, corresponds to the instant $t = \varphi/\omega_1$ in b).
Note that for the square wave all the even frequency components are missing. For other types of waveforms the even coefficients can be non-zero. In general, $A_i$ may also be complex, thus containing some non-zero initial phase angle $\varphi_i$. In Eq. 1.2.2 we have also introduced $A_0$, the DC component, which does not exist in our special case. The meaning of $A_0$ can be understood by examining Fig. 1.2.3a, where the so-called sawtooth waveform is shown, with no DC component. In Fig. 1.2.3b the same waveform has a DC component of magnitude $A_0$.
Eq. 1.2.2 represents the complex spectrum of the function 0 Ð>Ñ, while Fig. 1.2.1
represents the corresponding most significant part of the complex spectrum of a square
wave. The next step is the calculation of the magnitudes of the rotating phasors.
Fig. 1.2.3: a) A waveform without a DC component; b) with a DC component $A_0$.
Since we have integrated over the whole period $T$ in order to get the average value of that harmonic component, the result of the integration must be divided by $T$, as in Eq. 1.2.5. If there is a DC component (with $\omega = 0$) in the spectrum, its calculation is simply:

$$A_0 = \frac{1}{T}\int_{-T/2}^{T/2} f(t)\, dt \qquad (1.2.6)$$

To return to Eq. 1.2.5, let us explain the meaning of that integration by means of Fig. 1.2.4.
By multiplying the function $f(t)$ by $e^{-j\omega_k t}$ we have stopped the rotating phasor $A_k/2$, while during the time interval of integration all the other phasors have rotated through an angle of $n \cdot 2\pi$ (where $n$ is an integer), including the DC phasor $A_0$, because it is now multiplied by $e^{-j\omega_k t}$. The result of the integration for all these rotating phasors is zero, as indicated in Fig. 1.2.4a, while the phasor $A_k/2$ has stopped, integrating eventually to its full amplitude; the integration for this phasor only is shown in Fig. 1.2.4b.

Understanding the described effect of the multiplication $f(t)\, e^{-j\omega t}$ is essential to understanding the basic principles of the Fourier series, the Fourier integral and the Laplace transform.
Fig. 1.2.4: a) The integral over the full period $T$ of a rotating phasor is zero; b) the integral over a full period $T$ of a non-rotating phasor $dA_k/2$ gives its amplitude, $A_k/2$ (the symbol $d$ stands for $dt/T$; in these figures $dt \to \Delta t$ such that $\Delta t\, \omega_k = \pi/4$). Note that a stationary phasor retains its initial angle $\varphi_k$.
For us the Fourier series represents only a transitional station on the journey towards the Laplace transform. So we will drive through it at a moderate speed "via the Main Street", without investigating some interesting things in the side streets. Nevertheless, it is useful to work through a practical example. Since we have started with a square wave, shown in Fig. 1.2.5, let us calculate its complex spectrum components $A_n/2$, assuming that the square wave amplitude is $A = 1$.
Fig. 1.2.5: A square wave signal. Fig. 1.2.6: The frequency spectrum of a square wave, expressed by real values (magnitudes) only; $A_k = A_1/k$ for odd $k$, at $\omega_k = k\,\omega_1$.
For a single period the corresponding mathematical expression for this function is:

$$f(t) = \begin{cases} \;\;\,1, & 0 < t < T_1/2 \\ -1, & T_1/2 < t < T_1 \end{cases}$$
$$\frac{A_n}{2} = \frac{1}{T}\left[\frac{T}{-j2\pi n}\, e^{-j2\pi n t/T}\,\bigg|_{0}^{T/2} - \frac{T}{-j2\pi n}\, e^{-j2\pi n t/T}\,\bigg|_{T/2}^{T}\right] = \frac{1}{j\pi n}\left(1 - \cos \pi n\right) \qquad (1.2.7)$$

The result is zero for $n = 0$ (the DC component $A_0$) and for any even $n$. For any odd $n$ the value of $\cos \pi n = -1$, and for such cases the result is:

$$\frac{A_n}{2} = \frac{2}{j\pi n} = -\frac{2j}{\pi n} \qquad (1.2.8)$$
The factor $-j$ in the numerator means that for any positive $n$ (and for $\omega_1 t = 0, 2\pi, 4\pi, \ldots$) the phasor is negative and imaginary, whilst for negative $n$ it is positive and imaginary. This is evident from Fig. 1.2.1a.

Let us calculate the first eight phasors by using Eq. 1.2.8. The lengths of the phasors in Fig. 1.2.1a and 1.2.2b correspond to the values reported in Table 1.2.1. All the phasors form complex conjugate pairs and their total sum always gives a real value.
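Eq. 1.2.8 is easy to check numerically: integrating the square wave against $e^{-jn\omega_1 t}$ over one period (here by a simple midpoint sum, a sketch rather than a rigorous quadrature) reproduces $A_n/2 = -2j/(\pi n)$ for odd $n$ and zero for even $n$:

```python
import cmath
import math

T = 1.0     # period (assumed; the value is irrelevant)
M = 20000   # integration steps

def square(t):
    """Square wave of Fig. 1.2.5: +1 in the first half period, -1 in the second."""
    return 1.0 if (t % T) < T / 2 else -1.0

def c_n(n):
    """A_n/2 = (1/T) * integral over one period of f(t) * e^{-j n w1 t} dt."""
    w1 = 2 * math.pi / T
    dt = T / M
    return sum(square((k + 0.5) * dt) * cmath.exp(-1j * n * w1 * (k + 0.5) * dt)
               for k in range(M)) * dt / T

for n in (1, 3, 5):
    assert abs(c_n(n) - (-2j / (math.pi * n))) < 1e-3   # odd n: -2j/(pi*n)
assert abs(c_n(2)) < 1e-6                               # even n: zero
```

The computed coefficients are purely imaginary and negative for positive odd $n$, exactly as Fig. 1.2.1a shows.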
However, a spectrum can also be shown with real values only, e.g., as it appears on the cathode ray tube screen of a spectrum analyzer. To obtain this, we simply sum the corresponding complex conjugate phasor pairs (e.g., $|A_{-n}/2| + |A_n/2| = A_n$) and place them on the abscissa of a two-dimensional coordinate system, as shown in Fig. 1.2.6. Such
a non-rotating spectrum has only the positive frequency axis. Although such a presentation
of spectra is very useful in the analysis of signals containing several (or many) frequency
components, we will continue calculating with the complex spectra, because the phase
information is also important. And, of course, the Laplace transform, which is our main
goal, is based on a complex variable.
Now let us recompose the waveform using only the harmonic frequency components from Table 1.2.1, as shown in Fig. 1.2.7a. The waveform resembles the square wave, but it has an exaggerated overshoot $\delta \approx 18\,\%$ of the nominal amplitude.

The reason for the overshoot $\delta$ is that we have abruptly cut off the higher harmonic components from a certain frequency upwards. Would this overshoot be lower if we took
more harmonics? In Fig. 1.2.7b we have increased the number of harmonic components three times, but the overshoot remained the same. For any finite number of harmonic components used to recompose the waveform, no matter how large, the overshoot stays the same (only its duration becomes shorter as the number of harmonic components is increased, as is evident from Fig. 1.2.7a and 1.2.7b).

This is the Gibbs phenomenon. It tells us that we should not cut off the frequency response of an amplifier abruptly if we do not wish to add an undesirably high overshoot to the amplified pulse. Fortunately, real amplifiers cannot have an infinitely steep high frequency roll-off, so a gradual decay of the high frequency response is always ensured. However, as we will explain in Parts 2 and 4, the overshoot may increase as a result of other effects.
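The effect is easy to reproduce: summing the phasor pairs of Eq. 1.2.8 (equivalently, the sine series $\frac{4}{\pi}\sum \sin(n\omega_1 t)/n$ over odd $n$) for 7 and for 21 harmonics gives nearly the same peak overshoot. A minimal sketch:

```python
import math

def partial_sum(t, n_harm):
    """Square-wave partial sum: (4/pi) * sum of sin(2*pi*n*t)/n over odd n."""
    return sum(4 / (math.pi * n) * math.sin(2 * math.pi * n * t)
               for n in range(1, 2 * n_harm, 2))   # n_harm odd harmonics

def overshoot(n_harm, samples=5000):
    """Peak of the recomposed wave above the nominal amplitude of 1."""
    peak = max(partial_sum(k / samples, n_harm) for k in range(samples))
    return peak - 1.0

d7, d21 = overshoot(7), overshoot(21)
assert 0.16 < d7 < 0.20          # roughly 18 % of the nominal amplitude
assert abs(d21 - d7) < 0.02      # tripling the harmonics does not reduce it
```

Increasing `n_harm` only narrows the overshoot spike; its height converges to the Gibbs value instead of vanishing.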
Fig. 1.2.7: The Gibbs phenomenon; a) a signal composed of the first seven harmonics of the square wave spectrum from Table 1.2.1. The overshoot is $\delta \approx 18\,\%$ of the nominal amplitude; b) even if we take three times more harmonics, the overshoot $\delta$ is nearly equal in both cases.
In a similar way to that for the square wave, any periodic signal of finite amplitude and with a finite number of discontinuities within one period can be decomposed into its frequency components. As an example, the waveform in Fig. 1.2.8 could also be decomposed, but we will not do it here. Instead, in the following section we will analyze another waveform, which will allow us to generalize the method of frequency analysis.
Fig. 1.2.8: An example of a periodic waveform (typical of a flyback switching power supply), having a finite number of discontinuities within one period. Its frequency spectrum can also be calculated, if needed (e.g., to analyze the possibility of electromagnetic interference at various frequencies), in the same way as we did for the square wave.
Suppose we have a function $f(t)$ composed of square wave pulses of duration $\tau$, repeating with a period $T$, as shown in Fig. 1.3.1. For this function we can also calculate the Fourier series (the corresponding spectrum is shown in Fig. 1.3.2) in the same way as for the continuous square wave case in the previous section.
Fig. 1.3.1: A square wave pulse of duration $\tau$, repeating with period $T$.
The difference between the continuous square wave and the spaced square wave in Fig. 1.3.1 is that the integral of this function can be broken into two parts, one comprising the length of the pulse, $\tau$, and the zero-valued part between two pulses, of length $T - \tau$. The reader can do this integration for himself, because it is fairly simple. We will only write the result:

$$\frac{A_n}{2} = -j\,\frac{\tau}{T}\,\frac{\sin^2\!\left[n\,\omega_1 (\tau/4)\right]}{n\,\omega_1 (\tau/4)} \qquad (1.3.1)$$

where $\omega_1 = 2\pi/T$, assuming that the pulse amplitude is 1 (if the amplitude were $A$ it would simply multiply the right hand side of the equation). For the conditions in Fig. 1.3.1, where $T = 5\tau$ and $A = 1$, the spectrum has the form shown in Fig. 1.3.2, with $\omega_\tau = 2\pi/\tau$.
Fig. 1.3.2: The spectrum of the pulse train of Fig. 1.3.1 ($T = 5\tau$), expressed by complex conjugate phasors; $\omega_\tau = 2\pi/\tau$, and the spectral line spacing is $\Delta\omega = 2\pi/T$.
A very interesting question is what would happen to the spectrum if we let the period $T \to \infty$. In general, a function $f(t)$ can be recomposed by adding all its harmonic components:

$$f(t) = \sum_{n=-\infty}^{\infty} \frac{A_n}{2}\, e^{\,j n \omega_1 t} \qquad (1.3.2)$$

where $A_n$ may also be complex, thus containing the initial phase angle $\varphi_n$. Again, as in the previous section, each discrete harmonic component can be calculated with the integral:

$$\frac{A_n}{2} = \frac{1}{T}\int_{-T/2}^{T/2} f(t)\, e^{-j n \omega_1 t}\, dt \qquad (1.3.3)$$
For the case in Fig. 1.3.1 the integration should start at $t = 0$ and the integral has the form:

$$\frac{A_n}{2} = \frac{1}{T}\int_{0}^{T} f(t)\, e^{-j n \omega_1 t}\, dt \qquad (1.3.4)$$

Inserting this into Eq. 1.3.2 gives:

$$f(t) = \sum_{n=-\infty}^{\infty} \left[\frac{1}{T}\int_{0}^{T} f(\tau)\, e^{-j n \omega_1 \tau}\, d\tau\right] e^{\,j n \omega_1 t} \qquad (1.3.5)$$

Here we have introduced a dummy variable $\tau$ in the integral, in order to distinguish it from the variable $t$ outside the brackets. Now we express the integral inside the brackets as:

$$\int_{0}^{T} f(\tau)\, e^{-j n \omega_1 \tau}\, d\tau = \int_{0}^{T} f(\tau)\, e^{-j 2\pi n \tau/T}\, d\tau = F\!\left(\frac{2\pi n}{T}\right) = F(n\,\omega_1) \qquad (1.3.6)$$
Thus:

$$f(t) = \sum_{n=-\infty}^{\infty} \frac{1}{T}\, F(n\omega_1)\, e^{\,j n \omega_1 t} = \frac{1}{2\pi} \sum_{n=-\infty}^{\infty} \frac{2\pi}{T}\, F(n\omega_1)\, e^{\,j n \omega_1 t} = \frac{1}{2\pi} \sum_{n=-\infty}^{\infty} \omega_1\, F(n\omega_1)\, e^{\,j n \omega_1 t} \qquad (1.3.7)$$

where $2\pi/T = \omega_1$. If we let $T \to \infty$ then $\omega_1$ becomes infinitesimal, and we call it $d\omega$. Also $n\,\omega_1$ becomes a continuous variable $\omega$. So in Eq. 1.3.7 the following changes take place:

$$\sum_{n=-\infty}^{\infty} \;\Rightarrow\; \int_{-\infty}^{\infty} \qquad \omega_1 \;\Rightarrow\; d\omega \qquad n\,\omega_1 \;\Rightarrow\; \omega$$

With all these changes Eq. 1.3.7 is transformed into Eq. 1.3.8:

$$f(t) = \frac{1}{2\pi}\int_{-\infty}^{\infty} F(\omega)\, e^{\,j\omega t}\, d\omega \qquad (1.3.8)$$
In Eq. 1.3.9 $F(\omega)$ has no discrete frequency components; it forms a continuous spectrum. Since $T \to \infty$, the DC part vanishes (as it would for any pulse shape, not just symmetrical shapes), according to Eq. 1.2.6:

$$A_0 = \lim_{T \to \infty} \frac{1}{T}\int_{0}^{T} f(t)\, dt = 0 \qquad (1.3.10)$$
Eq. 1.3.8 and 1.3.9 are called Fourier integrals. Under certain (usually rather limited) conditions, which we will discuss later, it is possible to use them for the calculation of transient phenomena. The second integral (Eq. 1.3.9) is called the direct Fourier transform, which we express in a shorter way:

$$F(\omega) = \mathcal{F}\{f(t)\} \qquad (1.3.11)$$

The first integral (Eq. 1.3.8) represents the inverse Fourier transform and it is usually written as:

$$f(t) = \mathcal{F}^{-1}\{F(\omega)\} \qquad (1.3.12)$$

In Eq. 1.3.8, $F(\omega)$ represents a firm spectrum, and the factor $e^{\,j\omega t}$ represents the rotation of each of the infinitely many spectrum components contained in $F(\omega)$ with its angular frequency $\omega$, which is a continuous variable. In Eq. 1.3.9, $f(t)$ is the complete time function, containing an infinite number of rotating phasors, and the factor $e^{-j\omega t}$ provides the rotation "in the opposite direction", stopping the rotation of the corresponding phasor $e^{\,j\omega t}$ contained in $f(t)$ at its particular frequency $\omega$.
Let us now select a suitable time function $f(t)$ and calculate its continuous spectrum. Since we have already calculated the spectrum of a periodic square wave, it would be interesting to display the spectrum of a single square wave pulse, as shown in Fig. 1.3.3b. We use Eq. 1.3.9:

$$F(\omega) = \int_{-\infty}^{\infty} f(t)\, e^{-j\omega t}\, dt = \int_{-\tau/2}^{0} (-1)\, e^{-j\omega t}\, dt + \int_{0}^{\tau/2} (1)\, e^{-j\omega t}\, dt \qquad (1.3.13)$$

Here we have a single square wave with a "period" $T$ from $t = -\tau/2$ to $\infty$. However, we need to integrate only from $t = -\tau/2$ to $t = \tau/2$, because $f(t)$ is zero outside this interval. It is important to note that at the discontinuity at $t = 0$ we have started the second integral. For a function with more discontinuities, a separate integral must be written between each pair of adjacent discontinuities. Thus it is obvious that the function $f(t)$ must have a finite number of discontinuities for it to be possible to calculate its spectrum.
" # e4 = 7 Î# e 4 = 7 Î#
J Ð=Ñ œ Š" e4 = 7 Î# e 4 = 7 Î# "‹ œ "
4 = 4= #
# 4 =7 # 4 =7 % 4 =7
œ Š" cos ‹œ Š# sin# ‹œ sin#
= # = % = %
=7
sin#
œ 4 7 % (1.3.14)
=7
%
Fig. 1.3.3: a) The frequency spectrum of a single square wave pulse, expressed by complex conjugate phasors. Since there are infinitely many phasors, they merge into a continuous planar form. The spectrum also extends to $\omega = \pm\infty$. The corresponding waveform is shown in b). Note that all the even frequency components $2\pi n/\tau$ are missing ($n$ is an integer).
By comparing Fig. 1.2.1a and 1.3.3a we may draw the following conclusions:

1. Both spectra contain no even frequency components, e.g., at $\pm 2\omega_\tau$, $\pm 4\omega_\tau$, etc., where $\omega_\tau = 2\pi/\tau$;
2. In both spectra there is no DC component $A_0$;
3. By comparing Fig. 1.3.2 and 1.3.3 we note that the envelope of both spectra can be expressed by Eq. 1.3.14;
4. By comparing Eq. 1.3.1 and 1.3.14 we note that the discrete frequency $n\,\omega_1$ in the first equation is replaced by the continuous variable $\omega$ in the second, and the factor $1/T$ disappears, in agreement with $A_n/2 = F(n\omega_1)/T$ (Eq. 1.3.4 and 1.3.6). Everything else has remained the same.
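Eq. 1.3.14 can be checked against a direct numerical evaluation of Eq. 1.3.13 (a simple midpoint sum over the two halves of the pulse; $\tau = 1$ is an arbitrary choice):

```python
import cmath
import math

TAU = 1.0   # pulse duration (assumed)
M = 4000    # integration steps per half of the pulse

def F_numeric(w):
    """Numerical evaluation of Eq. 1.3.13 for the bipolar pulse of Fig. 1.3.3b."""
    dt = (TAU / 2) / M
    neg = sum(-cmath.exp(-1j * w * (-TAU / 2 + (k + 0.5) * dt))
              for k in range(M)) * dt           # f = -1 on (-tau/2, 0)
    pos = sum(cmath.exp(-1j * w * ((k + 0.5) * dt))
              for k in range(M)) * dt           # f = +1 on (0, tau/2)
    return neg + pos

def F_closed(w):
    """Closed form of Eq. 1.3.14: -j*tau*sin^2(w*tau/4)/(w*tau/4)."""
    x = w * TAU / 4
    return -1j * TAU * math.sin(x) ** 2 / x

for w in (1.0, 3.7, 10.0):
    assert abs(F_numeric(w) - F_closed(w)) < 1e-5
```

The result is purely imaginary at every frequency, which is why the phasors in Fig. 1.3.3a lie in a single plane.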
Fig. 1.3.4: Examples of waveforms; c) and d) are non-repetitive transients.
The question arises whether it is possible to calculate the spectra of the transients in Fig. 1.3.4c and 1.3.4d by means of the Fourier integral, using Eq. 1.3.9.

The answer is no, because the integral in Eq. 1.3.9 does not converge for either of these two functions. The integral is also non-convergent for the simplest step signal, which we intend to use extensively for the calculation of the step response of amplifier networks.

This inconvenience can be avoided if we multiply the function $f(t)$ by a suitable convergence factor, e.g., $e^{-ct}$, where $c > 0$ and its magnitude is selected so that the integral in Eq. 1.3.9 remains finite when $t \to \infty$. In this way the problem is solved for $t > 0$. In doing so, however, the integral becomes divergent for $t < 0$, because for negative time the factor $e^{-ct}$ has a positive exponent, causing a rapid increase to infinity. But this, too, can be avoided if we assume that the function $f(t)$ is zero for $t < 0$. In electrical engineering and electronics we can always assume that a circuit is dead until we switch the power on or apply a step voltage signal to its input and thus generate a transient. The transform where $f(t)$ must be zero for $t < 0$ is called a unilateral transform.
For functions which are suitable for the unilateral Fourier transform the following relation must hold [Ref. 1.3]:

$$\int_{0}^{\infty} \left| f(t)\, e^{-ct} \right| dt < \infty \qquad (1.3.15)$$

The direct transform, with the convergence factor included, then reads:

$$F(c, \omega) = \int_{0}^{\infty} f(t)\, e^{-ct}\, e^{-j\omega t}\, dt \qquad (1.3.16)$$
If we want this integral to converge to some finite value for $t \to \infty$, the real constant must be $c > \sigma_{\rm a}$, where $\sigma_{\rm a}$ is the abscissa of absolute convergence. The magnitude of $\sigma_{\rm a}$ depends on the nature of the function $f(t)$. E.g., if $f(t) = 1$, then $\sigma_{\rm a} = 0$, and if $f(t) = e^{\alpha t}$ then $\sigma_{\rm a} = \alpha$, where $\alpha > 0$. By applying the convergence factor $e^{-ct}$, the inverse Fourier transform obtains the form:

$$f(t)\, e^{-ct} = \frac{1}{2\pi}\int_{-\infty}^{\infty} F(c, \omega)\, e^{\,j\omega t}\, d\omega \qquad \text{for } t > 0 \qquad (1.3.17)$$
Here we must add all the complex conjugate phasors with frequencies from $\omega = -\infty$ to $+\infty$. Although the direct Fourier transform in our case was unilateral, the inverse transform is always bilateral. Because in Eq. 1.3.16 we have deliberately introduced the convergence factor $e^{-ct}$, we must take the limit $c \to 0$ after the integral is solved, in order to obtain the required $F(\omega)$.

Since our final goal is the Laplace transform, we will stop the discussion of the Fourier transform here. We will, however, return to this topic in Part 6, where we will discuss the solving of system transfer functions and transient responses using numerical methods suitable for machine computation. There we will discuss the application of the very efficient Fast Fourier Transform (FFT) algorithm to both frequency domain and time domain related problems.
By a slight change of Eq. 1.3.16 and 1.3.17 we may arrive at a general complex Fourier transform [Ref. 1.3]. This is done by joining the kernel $e^{-j\omega t}$ and the convergence factor $e^{-ct}$. In this way Eq. 1.3.16 is transformed into:

$$F(c + j\omega) = \int_{0}^{\infty} f(t)\, e^{-(c + j\omega)t}\, dt \qquad (1.4.1)$$

The formula for the inverse transform is derived from Eq. 1.3.17 if both sides of the equation are multiplied by $e^{ct}$. In addition, the simple variable $\omega$ is now replaced by a new one: $c + j\omega$. By doing so we obtain:

$$f(t) = \frac{1}{2\pi j}\int_{c - j\infty}^{c + j\infty} F(c + j\omega)\, e^{(c + j\omega)t}\, d(c + j\omega) \qquad \text{for } t > 0 \text{ and } c > \sigma_{\rm a} \qquad (1.4.2)$$
If in Eq. 1.4.1 and 1.4.2 the constant $c$ becomes a real variable $\sigma$, both equations are transformed into the form called the Laplace transform. The name is fully justified, since the French mathematician Pierre Simon de Laplace had already introduced this transform in 1779, whilst Fourier published his transform 43 years later.

It is customary to denote the complex variable $\sigma + j\omega$ by a single symbol $s$, which we also call the complex frequency (in some, mostly mathematical, literature this variable is also denoted $p$). With this new variable Eq. 1.4.1 can be rewritten:

$$F(s) = \mathcal{L}\{f(t)\} = \int_{0}^{\infty} f(t)\, e^{-st}\, dt \qquad (1.4.3)$$

and this is called the direct Laplace transform, or $\mathcal{L}$ transform. It represents the complex spectrum $F(s)$. The above integral is valid for functions $f(t)$ such that the factor $e^{-st}$ keeps the integral convergent. If we now insert the variable $s$ in Eq. 1.4.2, we have:

$$f(t) = \mathcal{L}^{-1}\{F(s)\} = \frac{1}{2\pi j}\int_{c - j\infty}^{c + j\infty} F(s)\, e^{st}\, ds \qquad (1.4.4)$$
The path of integration is parallel to the imaginary axis, as shown in Fig. 1.4.1. The constant $c$ in the integration limits must be properly chosen in order to ensure the convergence of the integral.
Fig. 1.4.1: The abscissa of absolute convergence — the integration path for Eq. 1.4.4.
The factor $e^{-st}$ in Eq. 1.4.3 is needed to stop the rotation of the corresponding phasor $e^{st}$; there are infinitely many such phasors in the time function $f(t)$. As our variable is now complex, $s = \sigma + j\omega$, the factor $e^{-st}$ does not mean a simple rotation, but a spiral rotation in which the radius decreases exponentially with $t$ because of $\sigma$, the real part of $s$. This is necessary to cancel the corresponding rotation $e^{st}$, contained in $f(t)$, whose radius increases with $t$ in an exactly equal manner [Ref. 1.23].

Since in Eq. 1.4.4 the factor $e^{st}$ diverges with increasing $t$ whenever $\Re\{s\,t\} > 0$, the above conditions for the variable $\sigma$ (and for the constant $c$) must be met to ensure the convergence of the integral. In the analysis of passive networks these conditions can always be met, as we will show in many examples in the subsequent sections.
Now that we have reached our goal, the Laplace transform and its inverse, we may ask ourselves what we have accomplished by doing all this hard work.

For the time being we can claim that we have transformed a function of the real variable $t$ into a function of the complex variable $s$. This allows us to calculate, using the $\mathcal{L}$ transform, the spectrum function $F(s)$ of a finite transient defined by the function $f(t)$. Or, more importantly for us, by means of the $\mathcal{L}^{-1}$ transform we can calculate the time domain function if the frequency domain function $F(s)$ is known.

Later we will show how we can transform linear differential equations in the time domain, by means of the $\mathcal{L}$ transform, into algebraic equations in the $s$ domain. Since algebraic equations are much easier to solve than differential ones, this is a great convenience. Once our calculations in the $s$ domain are completed, by means of the $\mathcal{L}^{-1}$ transform we obtain the corresponding time domain function. In this way we avoid solving the differential equations directly and calculating the boundary conditions.
Now let us put our new tools to use and calculate the $\mathcal{L}$ transform of several simple functions. The results may also be used for the $\mathcal{L}^{-1}$ transform, and the reader is encouraged to learn the most basic of them by heart, because they are used extensively in the other parts of the book and, of course, in the analysis of the most common electronic circuits.

1.5.1 Example 1

Most of our calculations will deal with the step response of a network. To do so, our excitation function will be a simple unit step $h(t)$, or the Heaviside function (after Oliver Heaviside, 1850–1925), as shown in Fig. 1.5.1. This function is defined as:

$$f(t) = h(t) = \begin{cases} 0 & \text{for } t < 0 \\ 1 & \text{for } t \geq 0 \end{cases}$$
Fig. 1.5.1: Unit step function. Fig. 1.5.2: Unit step function starting at $t = a$.
As we agreed in the previous section, $f(t) = 0$ for $t < 0$ for all the following functions, and we will not repeat this statement in further examples. At the same time, let us mention that for our calculation of the $\mathcal{L}$ transform it is not important what the actual value of $f(0)$ is, provided it is finite [Ref. 1.3].

The $\mathcal{L}$ transform of the unit step function $f(t) = h(t)$ is:

$$F(s) = \mathcal{L}\{f(t)\} = \int_{0}^{\infty} [1]\, e^{-st}\, dt = -\frac{1}{s}\, e^{-st}\,\bigg|_{t=0}^{t=\infty} = \frac{1}{s} \qquad (1.5.1)$$
1.5.2 Example 2

The function is the same as in Example 1, except that the step does not start at $t = 0$ but at $t = a > 0$ (Fig. 1.5.2):

$$f(t) = \begin{cases} 0 & \text{for } t < a \\ 1 & \text{for } t \geq a \end{cases}$$

Solution:

$$F(s) = \int_{a}^{\infty} [1]\, e^{-st}\, dt = -\frac{1}{s}\, e^{-st}\,\bigg|_{t=a}^{t=\infty} = \frac{1}{s}\, e^{-as} \qquad (1.5.2)$$
1.5.3 Example 3

The exponential decay function is shown in Fig. 1.5.3; its mathematical expression is $f(t) = e^{-\sigma_1 t}$. Its $\mathcal{L}$ transform is:

$$F(s) = \int_{0}^{\infty} e^{-\sigma_1 t}\, e^{-st}\, dt = -\frac{1}{\sigma_1 + s}\, e^{-(\sigma_1 + s)t}\,\bigg|_{t=0}^{t=\infty} = \frac{1}{\sigma_1 + s} \qquad (1.5.3)$$

Later we shall often meet this and the following function, and also their product.
Fig. 1.5.3: Exponential decay function $e^{-\sigma_1 t}$; at $t = T = 1/\sigma_1$ the value is $1/e$. Fig. 1.5.4: Sine function $\sin(2\pi t/T)$.
1.5.4 Example 4

We have a sinusoidal function as in Fig. 1.5.4; its corresponding mathematical expression is:

$$f(t) = \sin \omega_1 t$$

where the constant $\omega_1 = 2\pi/T$.

Solution: its $\mathcal{L}$ transform is:

$$F(s) = \frac{1}{2j}\left[\,\int_{0}^{\infty} e^{-(s - j\omega_1)t}\, dt - \int_{0}^{\infty} e^{-(s + j\omega_1)t}\, dt\right] \qquad (1.5.6)$$
The solution of this integral is, in a way, similar to that in the previous example:

$$F(s) = \frac{1}{2j}\left(\frac{1}{s - j\omega_1} - \frac{1}{s + j\omega_1}\right) = \frac{\omega_1}{s^2 + \omega_1^2} \qquad (1.5.7)$$
1.5.5 Example 5

Here we have the cosine function as in Fig. 1.5.5, expressed as $f(t) = \cos \omega_1 t$.

Solution: the $\mathcal{L}$ transform of this function is calculated in a similar way as for the sine. According to Euler's formula:

$$\cos \omega_1 t = \frac{1}{2}\left(e^{\,j\omega_1 t} + e^{-j\omega_1 t}\right) \qquad (1.5.8)$$

Thus we obtain:

$$F(s) = \frac{1}{2}\left[\,\int_{0}^{\infty} e^{-(s - j\omega_1)t}\, dt + \int_{0}^{\infty} e^{-(s + j\omega_1)t}\, dt\right] = \frac{1}{2}\left(\frac{1}{s - j\omega_1} + \frac{1}{s + j\omega_1}\right) = \frac{s}{s^2 + \omega_1^2} \qquad (1.5.9)$$
Fig. 1.5.5: Cosine function $\cos(2\pi t/T)$. Fig. 1.5.6: Damped oscillation $e^{-\sigma_1 t} \sin(2\pi t/T)$.
1.5.6 Example 6

In Fig. 1.5.6 we have a damped oscillation, expressed by the formula $f(t) = e^{-\sigma_1 t} \sin \omega_1 t$. Its $\mathcal{L}$ transform is:

$$F(s) = \int_{0}^{\infty} e^{-(s + \sigma_1)t}\,\frac{1}{2j}\left(e^{\,j\omega_1 t} - e^{-j\omega_1 t}\right) dt = \frac{1}{2j}\int_{0}^{\infty} \left[e^{-(s + \sigma_1 - j\omega_1)t} - e^{-(s + \sigma_1 + j\omega_1)t}\right] dt = \frac{\omega_1}{(s + \sigma_1)^2 + \omega_1^2} \qquad (1.5.10)$$
Fig. 1.5.7: Linear ramp $f(t) = t$. Fig. 1.5.8: Power function $f(t) = t^n$.
1.5.7 Example 7

A linear ramp, as shown in Fig. 1.5.7, is expressed as $f(t) = t$. We integrate by parts, using $\int u\, dv = uv - \int v\, du$ with $u = t$ and $dv = e^{-st}\, dt$:

$$F(s) = \int_{0}^{\infty} t\, e^{-st}\, dt = -\frac{t\, e^{-st}}{s}\,\bigg|_{t=0}^{t=\infty} + \frac{1}{s}\int_{0}^{\infty} e^{-st}\, dt = 0 - 0 - \frac{1}{s^2}\, e^{-st}\,\bigg|_{t=0}^{t=\infty} = \frac{1}{s^2} \qquad (1.5.11)$$
1.5.8 Example 8

Fig. 1.5.8 displays a function which has the general analytical form $f(t) = t^n$.
Solution: again we integrate by parts, decomposing the integrand $t^n e^{-st}$ into:

$$u = t^n \qquad du = n\, t^{n-1}\, dt \qquad v = -\frac{1}{s}\, e^{-st} \qquad dv = e^{-st}\, dt$$

With these substitutions we obtain:

$$F(s) = \int_{0}^{\infty} t^n\, e^{-st}\, dt = -\frac{t^n\, e^{-st}}{s}\,\bigg|_{t=0}^{t=\infty} + \frac{n}{s}\int_{0}^{\infty} t^{n-1}\, e^{-st}\, dt = \frac{n}{s}\int_{0}^{\infty} t^{n-1}\, e^{-st}\, dt \qquad (1.5.12)$$

Repeating the procedure reduces the integral further:

$$F(s) = \frac{n\,(n-1)}{s^2}\int_{0}^{\infty} t^{n-2}\, e^{-st}\, dt \qquad (1.5.13)$$

and so on, until the exponent of $t$ reaches zero, giving $F(s) = n!/s^{n+1}$.
1.5.9 Example 9

The function shown in Fig. 1.5.9 corresponds to the expression $f(t) = t\, e^{-\sigma_1 t}$.

1.5.10 Example 10

Similarly to Example 9, except that here we have $t^n$, as in Fig. 1.5.10: $f(t) = t^n\, e^{-\sigma_1 t}$. The solutions follow the same pattern as Examples 7 and 8, with $s$ replaced by $s + \sigma_1$: $F(s) = 1/(s + \sigma_1)^2$ and $F(s) = n!/(s + \sigma_1)^{n+1}$, respectively.
Fig. 1.5.9: Function $f(t) = t\, e^{-\sigma_1 t}$. Fig. 1.5.10: Function $f(t) = t^n\, e^{-\sigma_1 t}$.
These ten examples, which we frequently meet in practice, demonstrate that the calculation of an $\mathcal{L}$ transform is not difficult. Since the derived results are used often, we have collected them in Table 1.5.1.
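The derived results can be spot-checked numerically by truncating the defining integral $\int_0^\infty f(t)\,e^{-st}\,dt$ at a point where the integrand has decayed to negligible size (a sketch; the test values $s = 2$, $\sigma_1 = 1$, $\omega_1 = 3$ and the truncation at $t = 40$ are arbitrary):

```python
import math

def laplace_num(f, s, t_max=40.0, steps=100000):
    """Midpoint-rule approximation of the truncated integral of f(t)*exp(-s*t)."""
    dt = t_max / steps
    return sum(f((k + 0.5) * dt) * math.exp(-s * (k + 0.5) * dt)
               for k in range(steps)) * dt

s, sigma1, w1 = 2.0, 1.0, 3.0   # arbitrary test values

checks = [
    (lambda t: 1.0,                   1 / s),                # Eq. 1.5.1: unit step
    (lambda t: t,                     1 / s**2),             # Eq. 1.5.11: ramp
    (lambda t: math.exp(-sigma1 * t), 1 / (s + sigma1)),     # Eq. 1.5.3: exp decay
    (lambda t: math.sin(w1 * t),      w1 / (s**2 + w1**2)),  # Eq. 1.5.7: sine
    (lambda t: math.cos(w1 * t),      s / (s**2 + w1**2)),   # Eq. 1.5.9: cosine
]
for f, expected in checks:
    assert abs(laplace_num(f, s) - expected) < 1e-4
```

The truncation is legitimate here because every integrand contains the decaying factor $e^{-st}$ with $s > \sigma_{\rm a}$.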
We will integrate by parts, making $f(t) = u$ and $e^{-st}\, dt = dv$. The result is:

$$\int_{0}^{\infty} f(t)\, e^{-st}\, dt = -f(t)\,\frac{e^{-st}}{s}\,\bigg|_{t=0}^{t=\infty} + \frac{1}{s}\int_{0}^{\infty} \frac{d f(t)}{dt}\, e^{-st}\, dt = \frac{f(0_+)}{s} + \frac{1}{s}\int_{0}^{\infty} \frac{d f(t)}{dt}\, e^{-st}\, dt$$

Multiplying by $s$ and rearranging gives $\mathcal{L}\{f'(t)\} = s\, F(s) - f(0_+)$.
Example: for f(t) = e^{−σ₁t} we have F(s) = 1/(s + σ₁) and f(0₊) = 1, so that:
ℒ\left\{\frac{d(e^{-\sigma_1 t})}{dt}\right\} = s\,F(s) - f(0_+) = \frac{s}{s+\sigma_1} - 1 = \frac{-\sigma_1}{s+\sigma_1}   (1.6.9)
We may also check the result by first differentiating the function e^{−σ₁t}:
\frac{d(e^{-\sigma_1 t})}{dt} = -\sigma_1\,e^{-\sigma_1 t}   (1.6.10)
and then applying the ℒ transform:
ℒ\{-\sigma_1\,e^{-\sigma_1 t}\} = -\sigma_1\,ℒ\{e^{-\sigma_1 t}\} = \frac{-\sigma_1}{s+\sigma_1}   (1.6.11)
The result is the same.
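The same cross-check can be repeated mechanically for any transform pair; a sketch (Python with SymPy assumed, not part of the original text):

```python
# Verifying Eq. 1.6.9-1.6.11: the transform of the derivative of
# f(t) = exp(-sigma1*t) equals s*F(s) - f(0+).
import sympy as sp

t, s = sp.symbols('t s', positive=True)
sigma1 = sp.Symbol('sigma1', positive=True)

f = sp.exp(-sigma1*t)
F = sp.laplace_transform(f, t, s, noconds=True)                # 1/(s + sigma1)
lhs = sp.laplace_transform(sp.diff(f, t), t, s, noconds=True)  # transform of f'(t)
rhs = s*F - f.subs(t, 0)                                       # s*F(s) - f(0+)
assert sp.simplify(lhs - rhs) == 0
assert sp.simplify(lhs + sigma1/(s + sigma1)) == 0             # both equal -sigma1/(s+sigma1)
```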
By now the advantage of the ℒ transform over solving differential equations directly should have become obvious. In the s domain the derivative of the function f(t) corresponds to F(s) multiplied by s, minus the value f(0₊). The reason that t must approach zero from the right (+) side is that we have prescribed f(t) to be zero for t < 0; in other words, we have a unilateral transform.
The higher derivatives are obtained by repeating the above procedure. If for the first
derivative we have obtained:
ℒ\{f'(t)\} = s\,F(s) - f(0_+)   (1.6.12)
We will derive the proof from the basic definition of the ℒ transform:
ℒ\left\{\int_0^t f(\tau)\,d\tau\right\} = \int_0^\infty\left[\int_0^t f(\tau)\,d\tau\right]e^{-st}dt   (1.6.17)
We integrate by parts, with:
u = \int_0^t f(\tau)\,d\tau,   du = f(t)\,dt,   v = -\frac{1}{s}\,e^{-st},   dv = e^{-st}dt   (1.6.18)
The term between both limits is zero for t = 0, because ∫₀⁰ ⋯ = 0, and for t = ∞ as well, because the exponential function e^{−∞} = 0. Thus only the last integral remains, from which we can factor out the term 1/s. The result is:
ℒ\left\{\int_0^t f(\tau)\,d\tau\right\} = \frac{1}{s}\int_0^\infty f(t)\,e^{-st}dt = \frac{F(s)}{s}   (1.6.20)
We have already calculated the transform of the damped oscillation f(t) = e^{−σ₁t} sin ω₁t (Eq. 1.5.10) and it is:
F(s) = \frac{\omega_1}{(s+\sigma_1)^2 + \omega_1^2}
Let us now calculate the integral of this function according to Eq. 1.6.20, by introducing a dummy variable τ:
ℒ\left\{\int_0^t e^{-\sigma_1\tau}\sin\omega_1\tau\,d\tau\right\} = \frac{F(s)}{s} = \frac{\omega_1}{s\left[(s+\sigma_1)^2+\omega_1^2\right]}   (1.6.22)
Ú > 7" Þ
J Ð=Ñ
double integral: _Û ( ( 0 Ð7 Ñ . 7 ß œ
Ü! ! à =#
Ú > 7" 7# Þ
J Ð=Ñ
triple integral: _Û ( ( ( 0 Ð 7 Ñ . 7 ß œ
Ü! ! ! à =$
> 78"
J Ð=Ñ
8-th integral: _ ( â( 0 Ð7 Ñ . 7 Ÿ œ (1.6.23)
=8
!
ðóñóò !
8 integrals
The _ transform of the integral of the function 0 Ð>Ñ gives the complex function
J Ð=ÑÎ=. The function J Ð=Ñ must be divided by = as many times as we integrate.
Here again we see a great advantage of the ℒ transform, for we can replace the integration in the time domain (often a rather demanding procedure) by a simple division by s in the (complex) frequency domain.
We introduce a new variable v = at, for which dv = a dt and t = v/a. Thus we obtain:
ℒ\{f(at)\} = \int_{v=0}^\infty f(v)\,e^{-\frac{s}{a}v}\,\frac{1}{a}\,dv = \frac{1}{a}\,F\!\left(\frac{s}{a}\right)   (1.6.25)
We have already calculated the ℒ transform of a similar function by Eq. 1.5.15. For the function above the result is:
F(s) = \frac{1}{(s+3)^2}   (1.6.27)
Now let us change the scale tenfold. The new function is:
In Fig. 1.6.1a we have a square pulse A₁ with amplitude 1 and duration t = 1. The area under the pulse, equal to the time integral of this pulse, is amplitude × time and thus equal to 1. It is obvious that we may obtain the same time integral if the duration of the pulse is halved and its amplitude doubled (A₂). The pulse A₄ has a four times higher amplitude, its duration is only t = 0.25, and it still has the same time integral.
If we keep narrowing the pulse, adjusting the amplitude accordingly to keep the value of the time integral equal to 1, we eventually arrive at a situation where the duration of the pulse becomes infinitely small, t = ε → 0, and its amplitude infinitely large, A = (1/ε) → ∞, as shown in Fig. 1.6.1b.
This impulse is denoted δ(t) and it is called the Dirac² function. Mathematically we express this function as the limit of a pulse of width ε and amplitude 1/ε as ε → 0, whose time integral remains:
\int_{-\infty}^{\infty}\delta(t)\,dt = 1
Fig. 1.6.1: The Dirac function as the limiting case of narrowing the pulse width, whilst keeping the time integral constant: a) if the pulse length is decreased, its amplitude must increase accordingly; b) when the pulse length ε → 0, the amplitude is (1/ε) → ∞.
2 Paul Dirac, 1902–1984, English physicist, Nobel Prize winner in 1933 (together with Erwin Schrödinger).
Now we express the function e^{−sε} in this result by the following series:
\frac{1-e^{-s\varepsilon}}{s\varepsilon} = \frac{1-\left[1 - s\varepsilon + (s\varepsilon)^2/2! - (s\varepsilon)^3/3! + \cdots\right]}{s\varepsilon} = 1 - \frac{s\varepsilon}{2!} + \frac{(s\varepsilon)^2}{3!} - \cdots
and by letting ε → 0 we obtain:
ℒ\{\delta(t)\} = \lim_{\varepsilon\to 0}\frac{1-e^{-s\varepsilon}}{s\varepsilon} = \lim_{\varepsilon\to 0}\left[1 - \frac{s\varepsilon}{2!} + \frac{(s\varepsilon)^2}{3!} - \cdots\right] = 1   (1.6.32)
Therefore the magnitude of the spectrum envelope of this function is one and it is independent of frequency. This means that the Dirac impulse δ(t) contains all frequency components, the amplitude of each component being A = 1.
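The limit in Eq. 1.6.32 is easy to observe numerically as well; a sketch (plain Python; the ε values and the test point s are chosen arbitrarily):

```python
# The transform of a unit-area pulse of width eps is (1 - exp(-s*eps))/(s*eps);
# as eps -> 0 it approaches 1 for any fixed s: the flat spectrum of delta(t).
import math

def pulse_transform(s, eps):
    return (1.0 - math.exp(-s*eps))/(s*eps)

s = 2.0  # an arbitrary fixed test point on the real s axis
vals = [pulse_transform(s, eps) for eps in (1.0, 0.1, 0.01, 0.001)]
assert all(abs(b - 1) < abs(a - 1) for a, b in zip(vals, vals[1:]))  # monotone approach to 1
assert abs(vals[-1] - 1.0) < 5e-3
```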
We have written the notation 0₊ in order to emphasize that t approaches zero from the right side of the coordinate system. From real differentiation we know that:
ℒ\left\{\frac{df(t)}{dt}\right\} = ℒ\{f'(t)\} = \int_0^\infty f'(t)\,e^{-st}dt = s\,F(s) - f(0_+)   (1.6.34)
If we assume that f(t) is continuous at t = 0 we may write the limit of the right hand side of Eq. 1.6.33:
0 = \lim_{s\to\infty}\left[s\,F(s) - f(0_+)\right]   (1.6.36)
Even if f(t) is not continuous at t = 0, this relation is still valid, although the proof is slightly more difficult [Ref. 1.10]. The expression f(0₊) is introduced because we are dealing with a unilateral transform, in which it is assumed that f(t) = 0 for t < 0, so to calculate the actual initial value we must approach it from the positive side of the time axis.
For the functions which we will discuss in the rest of the book we can, in a similar way, prove the final value theorem, which is stated as:
\lim_{t\to\infty} f(t) = \lim_{s\to 0}\,s\,F(s)
(note that for some functions, such as sin ω₁t or cos ω₁t or the squarewave, this limit does not exist, since the function value keeps oscillating for all time!).
= \lim_{q\to\infty}\left[f(q) - f(0_+)\right] = \lim_{t\to\infty}f(t) - f(0_+)   (1.6.40)
Although the lower limit of the integral is a (simple) zero we have nevertheless written 0₊ in the result, to emphasize the unilateral transform. The limit of the right hand side of Eq. 1.6.39, when s → 0, is:
\lim_{s\to 0}\,s\,F(s) - f(0_+)   (1.6.41)
By comparing the results of Eq. 1.6.34, 1.6.39 and 1.6.41 we may write:
\lim_{t\to\infty}f(t) = \lim_{s\to 0}\,s\,F(s)
Eq. 1.6.37 and 1.6.38 are extremely useful for checking the results of complicated calculations by the direct or the inverse Laplace transform, as we will see in the following parts of the book. Should the check by these two equations fail, then we have obviously made a mistake somewhere.
However, this is a necessary but not a sufficient condition: even if the check is passed, we have no guarantee that other ‘sneaky’ mistakes do not exist; these may become obvious only when we plot the resulting function.
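As a sketch of such a check (SymPy assumed, not part of the original text), take the step-like function f(t) = 1 − e^{−t}, whose initial value is 0 and whose final value is 1:

```python
# Initial and final value theorems applied to f(t) = 1 - exp(-t),
# for which F(s) = 1/s - 1/(s+1).
import sympy as sp

t, s = sp.symbols('t s', positive=True)
f = 1 - sp.exp(-t)
F = sp.laplace_transform(f, t, s, noconds=True)
assert sp.limit(s*F, s, sp.oo) == 0   # initial value: f(0+) = 0
assert sp.limit(s*F, s, 0) == 1       # final value: f(t -> infinity) = 1
```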
1.6.8 Convolution
We need a process by which we can calculate the response of two systems connected so that the output of the first one is the input of the second one, when their individual responses are known. We have two functions [Ref. 1.19]: f(t) and g(t), with the corresponding transforms F(s) and G(s).
In order to better distinguish between f(t) and g(t), we assign the letter u to the argument of f and v to the argument of g; thus, f(t) → f(u) and g(t) → g(v). Since both variables are now well separated, we may write the above integral also in the form:
F(s)\cdot G(s) = \int_0^\infty\!\!\int_0^\infty f(u)\,g(v)\,e^{-s(u+v)}\,dv\,du   (1.6.46)
Let us integrate the expression inside the brackets with respect to the variable v. To do so we introduce a new variable τ:
\tau = u + v   so   v = \tau - u   and   dv = d\tau   (1.6.47)
We consider the variable u in the inner integral to be a (constant) parameter. From the above expressions it follows that τ = u when v = 0. By considering all this we may transform Eq. 1.6.46 into:
F(s)\cdot G(s) = \int_0^\infty f(u)\left[\int_u^\infty g(\tau-u)\,e^{-s(u+\tau-u)}\,d\tau\right]du = \int_0^\infty f(u)\left[\int_u^\infty g(\tau-u)\,e^{-s\tau}\,d\tau\right]du   (1.6.48)
We may also change the sequence of integration. Thus we may choose a fixed t₁ and first integrate over u, from u = 0 to u = t₁; in the second integration we integrate over τ, from τ = 0 to τ = ∞. Then the above expression obtains the form:
F(s)\cdot G(s) = \int_0^\infty\left[\int_0^{t_1} f(u)\,g(\tau-u)\,du\right]e^{-s\tau}\,d\tau   (1.6.49)
Now we can return from ? back to the usual time variable >:
_ >"
Ô × =7
J Ð=Ñ † KÐ=Ñ œ ( ( 0 Ð>Ñ gÐ7 >Ñ .> e . 7 (1.6.50)
Õ Ø
! !
ðóóóóóóóóóñóóóóóóóóóò
CÐ>Ñ
The expression inside the brackets is the function y(t) which we are looking for, whilst the outer integral is the usual Laplace transform. Thus we define the convolution process, denoted by g(t) ∗ f(t), as:
y(t) = g(t)*f(t) = \int_0^{t_1} f(t)\,g(\tau-t)\,dt
The operator symbolized by the asterisk (∗) means ‘convolved with’. Convolutio is the Latin word for folding; the German name for convolution, die Faltung, also means folding. Obviously the second function enters the integral with its time argument reversed, as g(τ − t).
This means that the function is ‘folded’ in time around the ordinate, from the right to the left side of the coordinate system. At the end of this part, after we have mastered network analysis in Laplace space, we will work out an example (Fig. 1.15.1) in which this ‘folding’ and the convolution process are shown explicitly, step by step.
In general we convolve whichever of the two functions is simpler. We may do so because the convolution is commutative:
f(t)*g(t) = g(t)*f(t)
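Both the commutativity and the transform property of convolution can be checked symbolically on a simple pair of exponentials; a sketch (SymPy assumed; the functions e^{−t} and e^{−2t} are chosen purely for illustration):

```python
# Convolution of f(t) = exp(-t) and g(t) = exp(-2t): commutativity and
# the property that the transform of f*g equals F(s)*G(s).
import sympy as sp

t, tau, s = sp.symbols('t tau s', positive=True)
f = sp.exp(-t)
g = sp.exp(-2*t)
conv_fg = sp.integrate(f.subs(t, tau)*g.subs(t, t - tau), (tau, 0, t))
conv_gf = sp.integrate(g.subs(t, tau)*f.subs(t, t - tau), (tau, 0, t))
assert sp.simplify(conv_fg - conv_gf) == 0   # f*g = g*f

F = sp.laplace_transform(f, t, s, noconds=True)
G = sp.laplace_transform(g, t, s, noconds=True)
C = sp.laplace_transform(conv_fg, t, s, noconds=True)
assert sp.simplify(C - F*G) == 0             # transform of the convolution
```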
The main properties of the Laplace transform are listed in Table 1.6.1; among those derived above:
Time-scale change:   f(at)   ⇔   (1/a)\,F(s/a)
Impulse function:   δ(t)   ⇔   1
Initial value:   \lim_{t\to 0_+} f(t) = \lim_{s\to\infty} s\,F(s)
v(t) = L\,\frac{di}{dt}   (1.7.1)
By assuming time t > 0 and i(0₊) = 0, then, according to Eq. 1.6.5, the ℒ transform of the above equation is the voltage across the inductance in the s domain:
V(s) = s\,L\,I(s)   (1.7.2)
where I(s) is the current in the s domain. The inductive reactance is then:
\frac{V(s)}{I(s)} = s\,L   (1.7.3)
Here s = σ + jω and thus it is complex; it can lie anywhere in the s plane. In the special case when σ = 0, and considering only the positive jω axis, s degenerates into jω. Then the inductive reactance becomes the familiar jωL, as known from the usual ‘phasor’ analysis of networks.
Fig. 1.7.1: The current i(t) and the voltage across: a) the inductance L; b) the capacitance C; c) the resistance R.
1.7.2 Capacitance
From basic electrical engineering we also know that the instantaneous voltage v(t) across a capacitance, through which a current i(t) flows during a time t > 0, is:
v(t) = \frac{q(t)}{C} = \frac{1}{C}\int_0^t i\,dt   (1.7.4)
as shown in Fig. 1.7.1b. Here q(t) is the instantaneous charge on the capacitor C. By applying Eq. 1.6.20 we may calculate the voltage on the capacitor in the s domain:
V(s) = \frac{1}{sC}\,I(s)   (1.7.5)
Z Ð=Ñ "
œ (1.7.6)
MÐ=Ñ =G
1.7.3 Resistance
For a resistor (Fig. 1.7.1c) the instantaneous voltage is simply:
v(t) = R\,i(t)   (1.7.7)
and, as there are no time derivatives, the same holds in the s domain, with the corresponding values V(s) and I(s):
V(s) = R\,I(s)   (1.7.8)
yielding:
\frac{V(s)}{I(s)} = R   (1.7.9)
where the (real) pole is at s₁ = −σ₁ = −1/RC and G₁ represents the frequency dependence. The pole of a function is that particular value of the argument for which the function's denominator is equal to zero and, consequently, the function's value goes to infinity.
Now let us apply a current step of amplitude 1 V/R to our network; according to Eq. 1.5.1 it is expressed in the s domain as I(s) = 1/(sR). We introduced the factor 1/R in order to get a voltage of 1 V on our RC combination when t → ∞. The corresponding function is then:
F(s) = V(s) = I(s)\cdot Z(s) = \frac{1}{sR}\cdot R\,G_1(s) = \frac{1}{s}\,G_1(s) = \frac{1}{RC}\cdot\frac{1}{s}\cdot\frac{1}{s+\dfrac{1}{RC}}   (1.7.11)
By comparing Eq. 1.7.11 with Eq. 1.7.13 we see that σ₁ = 1/RC:
ℒ^{-1}\left\{\frac{1}{s+(1/RC)}\right\} = e^{-t/RC}   (1.7.14)
From Eq. 1.6.20 we concluded that division by s in the s domain corresponds to integration in the t domain. By considering this together with Eq. 1.5.1, we obtain:
v_o(t) = f(t) = ℒ^{-1}\{F(s)\} = ℒ^{-1}\left\{\frac{1/RC}{s\left[s+(1/RC)\right]}\right\} = \frac{1}{RC}\int_0^t e^{-t/RC}\,dt
= \frac{1}{RC}\left(-RC\,e^{-t/RC}\right)\bigg|_0^t = -e^{-t/RC} - (-1) = 1 - e^{-t/RC}   (1.7.15)
[Fig. 1.7.2 shows, in three rows: a) g₁(t) = e^{−t/RC}, with the pole s₁ = −1/RC in the s plane; b) h(t) = 0 for t < 0 and 1 V/R for t > 0, with the pole s₀ = 0 at the origin; c) g₂(t) = −e^{−t/RC} and f(t) = 1 − e^{−t/RC}, with both poles s₁ and s₀.]
Fig. 1.7.2: The course of mathematical operations for a parallel RC network excited by a unit step current i_i. The t domain functions are on the left, the s domain functions are on the right. a) The self-discharge network function is equal to the impulse response g₁(t). b) The unit step in the t domain, h(t), is represented by a pole at the origin (s₀) in the s domain. The function g₂(t) is the reaction of the network to the unit step excitation. c) The output voltage is the sum of both functions, v_o = f(t) = h(t) + g₂(t).
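The step response of Eq. 1.7.15 is easily evaluated numerically; a sketch (plain Python; the component values are arbitrary, chosen only to give a convenient time constant):

```python
# v_o(t) = 1 - exp(-t/RC) for a parallel RC network driven by a 1 V/R current step.
import math

R, C = 1.0e3, 1.0e-6   # 1 kOhm and 1 uF -> 1 ms time constant (illustrative values)
tc = R*C

def vo(t):
    return 1.0 - math.exp(-t/tc)

assert vo(0.0) == 0.0                          # initial value, cf. the initial value theorem
assert abs(vo(10*tc) - 1.0) < 1e-4             # final value approaches 1 V
assert abs(vo(tc) - (1 - 1/math.e)) < 1e-12    # ~63.2 % after one time constant
```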
From this simple example we get the idea of how to use tables of ℒ transforms to obtain the response in the t domain, which would otherwise have to be calculated by solving differential equations. In addition to this we may state a very important conclusion for the s domain:
"
excitation function L a=b œ
=V
"
VG also named
network function V K" a=b œ V † Œ
" ‘impulse response’
=
VG
"
" VG
output function J a=b œ †
= "
=
VG
In general, the inverse transform is given by the integral:
f(t) = ℒ^{-1}\{F(s)\} = \frac{1}{2\pi j}\int_{c-j\infty}^{c+j\infty} F(s)\,e^{st}\,ds
but it would not be fair to leave the reader to grind through this integral of his or her F(s) with the best of his or her knowledge. In essence the above expression is a contour integral.
Knowledge of contour integration is a necessary prerequisite for calculating the
inverse Laplace transform. We will discuss this in the following section. After studying it
the reader will realize that the calculation of the step-response in the > domain by contour
integration is — although a little more difficult than in the above example of the simple
VG circuit — still a relatively simple procedure.
In order to learn how to calculate contour integrals the first step is the calculation of
complex line integrals. Both require a knowledge of the basics of complex variable theory,
also called the theory of analytic functions. We will discuss only that part of this theory
which is relevant to the inverse Laplace transform of rational functions (which are
important in the calculation of amplifier step and impulse response). The reader who would
like to know more about the complex variable theory, should study at least one of the books
listed at the end of Part 1 [Ref. 1.4, 1.9, 1.11, 1.12, 1.13, 1.14, 1.15], of which [Ref. 1.10
and 1.14] (in English), [Ref. 1.13] (in German), and [Ref. 1.11] (in Slovenian) are
especially recommended.
The definition of an analytic function is:
In a certain domain (which we are interested in) a function f(z) is analytic if it is:
1) continuous;
2) single valued (at each argument value); and
3) has a derivative at any selected point z, independent of the side from which we approach that point.
From calculus we know that a definite integral of a function of a real variable, such as y = f(x), is equal to the area A between the function (curve) and the real axis x, between the limits x₁ and x₂. An example is shown in Fig. 1.8.1, where the integral of a simple function 1/x, integrated from x₁ to x₂, is displayed.
Fig. 1.8.1: The integral of y = 1/x from x₁ to x₂ is the area A = \int_{x_1}^{x_2} y\,dx between the curve and the x axis.
The area above the x axis is counted as positive and the area below the x axis (if any) as negative. The area A in Fig. 1.8.1 represents the difference of the integral values at the upper limit, F(x₂), and the lower limit, F(x₁). As shown in Fig. 1.8.1, the integration path runs from x₁ along the x axis up to x₂.
For comparison let us now calculate a similar integral, but with a complex variable z = x + jy:
W = \int_{z_1}^{z_2}\frac{1}{z}\,dz = F(z_2) - F(z_1) = \ln z_2 - \ln z_1 = \ln\frac{z_2}{z_1}   (1.8.2)
So far we can not see any difference between Eq. 1.8.1 and 1.8.2 (a closer investigation of the result would show that it may be multi-valued in case the path from z₁ to z₂ circles the pole one or more times; but we will not discuss such cases). The whole integration procedure is the same in both cases. The difference in the result of the second equation becomes apparent when we express the complex variable z in the exponential form:
z_1 = |z_1|\,e^{j\theta_1}   and   z_2 = |z_2|\,e^{j\theta_2}   (1.8.3)
then:
\ln\frac{z_2}{z_1} = \ln\frac{|z_2|\,e^{j\theta_2}}{|z_1|\,e^{j\theta_1}} = \ln\frac{|z_2|}{|z_1|}\,e^{j(\theta_2-\theta_1)} = \ln\frac{|z_2|}{|z_1|} + j\,(\theta_2-\theta_1) = u + j\,v   (1.8.4)
where:
u = \ln\frac{|z_2|}{|z_1|}   and   v = \theta_2 - \theta_1   (1.8.5)
and also:
|z_i| = \sqrt{x_i^2 + y_i^2}   and   \theta_i = \arctan\frac{y_i}{x_i}   (1.8.6)
Obviously the result of Eq. 1.8.2, as shown in Eq. 1.8.4, is complex. It can not be plotted as simply as the integral of Fig. 1.8.1, since for displaying a complex function of a complex argument we would need a 4D graph, whilst the present state of technology allows us to plot only a 2D projection of a 3D graph, at best.
We can, however, restrict the z argument's domain, as in Fig. 1.8.2, by making its real part a constant, say, x = c, and then make plots of F(c + jy) = 1/(c + jy) for some selected value of c. In Fig. 1.8.2 we have chosen c = 0 and c = 0.5, whilst the imaginary part was varied from −j3 to j3.
In this way we have plotted two graphs, labeled A and B. The graph A belongs to c = 0 and lies in the ℑ{z} × ℑ{F(z)} plane; it looks just like the one in Fig. 1.8.1, but changed in sign, owing to the following rationalization of the function's denominator:
\frac{1}{j\,y} = \frac{j}{j^2\,y} = \frac{j}{-1\cdot y} = -\frac{j}{y}
The graph B belongs to c = 0.5 and is a 3D curve, twisting in accordance with the phase angle of the function. To aid the 3D view, the three projections of B have also been plotted.
Fig. 1.8.2: By reducing the complex domain x + jy to c + jy, where c is a constant, we can plot the complex function F(c + jy) in a 3D graph. Here we have c = 0 (graph A) and c = 0.5 (graph B). Also shown are the three projections of B. The twisted surface is the integral of F(c + jy) for c = 0.5 and y in the range −j2 ≤ y ≤ j2. See Appendix 1 for more details.
Now that we have some idea of how F(z) looks, let us return to our integral problem. If the integration path is parallel to the imaginary axis, −j2 ≤ y ≤ j2, and displaced by x = ℜ{z} = c = 0.5, the result of the integration would be the surface indicated in Fig. 1.8.2. But for an arbitrary path, with x not constant, we should have to make many such plots and then trace the integration path across the appropriate curves. The area bounded by the integration path and its trace on those curves would be the result we seek.
For a detailed treatment of complex function plotting see Appendix 1.
Returning to the result of Eq. 1.8.4 we may draw an interesting conclusion:
The complex line integral depends only on the initial value z₁ and the final value z₂, which represent the two limits of the integral.
The result of the integration is independent of the actual path between these limits, providing the path stays on the same side of the pole.
All the significant differences between an integral of a real function and the line integral of a complex function are listed in Table 1.8.1.
The x axis is the argument's domain for a real integral, whilst for a complex integral it is the whole z plane. Do not confuse the z plane (the complex plane, x + jy) with the diagram's vertical axis, which here is F(z) = F(x + jy). We recommend the reader to ponder over Fig. 1.8.2 and try to acquire a clear idea of the differences between both types of integral, since this is necessary for understanding the discussion which follows.
Table 1.8.1:
integral:   real: \int_{x_1}^{x_2}\frac{1}{x}\,dx ;   complex: \int_{z_1}^{z_2}\frac{1}{z}\,dz
independent variable:   real: x ;   complex: z = x + jy
result:   real: \ln\frac{x_2}{x_1} (real) ;   complex: \ln\frac{z_2}{z_1} = \ln\frac{|z_2|}{|z_1|} + j\,(\theta_2-\theta_1) (complex)
1.8.1 Example 1
We have a function f(z) = 3z which we shall integrate from 2j to 1 + j:
\int_{2j}^{1+j} 3z\,dz = \frac{3\,z^2}{2}\bigg|_{2j}^{1+j} = \frac{3}{2}\left[(1+j)^2 - (2j)^2\right] = 6 + 3j
1.8.2 Example 2
The integration limits are the same as in the previous example, whilst the function is different, f(z) = 1 + z²:
\int_{2j}^{1+j}\left(1+z^2\right)dz = \left(z + \frac{z^3}{3}\right)\bigg|_{2j}^{1+j} = \frac{1}{3} + \frac{7}{3}\,j
1.8.3 Example 3
The same function as in Example 1, except that both limits are interchanged:
\int_{1+j}^{2j} 3z\,dz = \frac{3\,z^2}{2}\bigg|_{1+j}^{2j} = \frac{3}{2}\left[(2j)^2 - (1+j)^2\right] = -6 - 3j
We see that although the function under the integral is complex, the same rules apply for integration as for a function of a real variable. The last example shows that if the limits of the integral of a complex function are interchanged, the result of the integration changes sign.
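These results can be reproduced numerically; a sketch (plain Python; the endpoints 2j and 1 + j follow the examples above, and the bowed comparison path is our own, chosen only to demonstrate path independence):

```python
# Midpoint-rule approximation of a complex line integral along a parametrized
# path z(u), u in [0, 1]; for f(z) = 3z (an entire function, no poles anywhere)
# the result depends only on the endpoints.
def line_integral(f, path, n=20000):
    total = 0j
    for k in range(n):
        u0, u1 = k/n, (k + 1)/n
        total += f(path((u0 + u1)/2))*(path(u1) - path(u0))
    return total

z1, z2 = 2j, 1 + 1j
straight = lambda u: z1 + u*(z2 - z1)
bowed    = lambda u: z1 + u*(z2 - z1) + 1j*u*(1 - u)   # same endpoints, curved in between

Ia = line_integral(lambda z: 3*z, straight)
Ib = line_integral(lambda z: 3*z, bowed)
assert abs(Ia - (6 + 3j)) < 1e-6    # the result of Example 1
assert abs(Ia - Ib) < 1e-6          # path independence
```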
As already mentioned, the result of the integration of a complex function is independent of the actual path of integration between the limits z₁ and z₂ (see Fig. 1.8.3), provided that no pole lies between the extreme paths L₁ and L₂. Thus for all the paths shown the result of integration is the same. This means that the function is analytic in the area between L₁ and L₂. When at least one pole of the function lies between L₁ and L₂, the integral along the path L₁ is in general no longer equal to the integral along the path L₂. In Fig. 1.8.4 we show such a case, in which the function is non-analytic (or non-regular) inside a small area between L₁ and L₂ (in the remaining area the function is analytic).
Fig. 1.8.3: A line integral from z₁ to z₂ along the line L₁, L₂, or any other line lying between these two, yields the same result, because between L₁ and L₂ the function has no pole. Fig. 1.8.4: Here the function has a non-analytic domain area between L₁ and L₂. Now the integral along the path L₁ is not equal to the integral along the path L₂.
D œ Ð"Ñ † e4 ) and .D œ 4 e4 ) .)
When we integrate along P" the angle ) goes from 1Î# to !. Thus it is:
!
!
4 e4 ) 1
( . ) œ 4 )º œ4
e4 ) 1Î# #
1Î#
1.8.5 Example 5
Here everything is the same as in the previous example, except that we will integrate along the path L₂ of Fig. 1.8.5. In this case the angle θ goes from 3π/2 to 0:
\int_{3\pi/2}^{0}\frac{j\,e^{j\theta}}{e^{j\theta}}\,d\theta = j\,\theta\bigg|_{3\pi/2}^{0} = -j\,\frac{3\pi}{2}
1.8.6 Example 6
We would like to see whether there is any difference in the result of Example 4 if we choose not to integrate along the circle, but instead along a straight line from −j to 1 (L₃ in Fig. 1.8.6):
\int_{-j}^{1}\frac{1}{z}\,dz = \ln 1 - \ln(-j) = -\ln e^{-j\pi/2} = j\,\frac{\pi}{2}
because ln 1 = 0 and −j = e^{−jπ/2}. The result is the same as in Example 4.
In general, considering Fig. 1.8.7, the integral along the path La or Lb, or any path in between, is always equal to jπ/2. Similarly, the integral along the path Lc or Ld, or any path in between, is equal to −j3π/2.
Fig. 1.8.5: The integral along the path L₁ is not equal to the integral along the path L₂, because the function has a pole which lies between both paths. Fig. 1.8.6: The integral along the straight path L₃ is the same as the integral along the circular path L₁.
Fig. 1.8.7: The integrals along the paths La and Lb are equal to jπ/2; however, those along Lc and Ld are equal to −j3π/2. Fig. 1.8.8: The integral along the circular path C around the pole is 2πj. See Sec. 1.9.
Let us take again our familiar function f(z) = 1/z and calculate the integral along the full circle C (Fig. 1.8.8), where |z| = 1. We use the same notation for z and dz as we did in Example 4 and start the integration at θ = 0, going counter-clockwise (positive by definition):
\int_0^{2\pi}\frac{dz}{z} = \int_0^{2\pi}\frac{j\,e^{j\theta}\,d\theta}{e^{j\theta}} = j\int_0^{2\pi}d\theta = 2\pi j = \oint_C\frac{dz}{z}   (1.9.1)
The resulting integral along the circle C is called the contour integral; the arrow in the symbol indicates the direction of encirclement of the pole (at z = 0).
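The same midpoint-rule idea used for line integrals applies to closed contours; a sketch (plain Python; the circle radius and the location of the outside pole are arbitrary choices):

```python
# Numeric contour integration counter-clockwise around a circle:
# 1/z (pole inside) gives 2*pi*j; 1/(z - 3) (pole outside) gives zero.
import cmath

def contour_integral(f, center, radius, n=20000):
    pts  = [center + radius*cmath.exp(2j*cmath.pi*k/n) for k in range(n + 1)]
    mids = [center + radius*cmath.exp(2j*cmath.pi*(k + 0.5)/n) for k in range(n)]
    return sum(f(mids[k])*(pts[k + 1] - pts[k]) for k in range(n))

I_inside  = contour_integral(lambda z: 1/z, 0, 1.0)
I_outside = contour_integral(lambda z: 1/(z - 3), 0, 1.0)
assert abs(I_inside - 2j*cmath.pi) < 1e-6   # pole encircled: 2*pi*j
assert abs(I_outside) < 1e-6                # analytic inside the contour: zero
```

Note that the result does not change when the pole is moved anywhere inside the contour, which is exactly the point of the next section.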
Now let us move the pole from the origin to the point a = x_a + j y_a. The corresponding function is then f(z) = 1/(z − a). The first attempt would be to integrate along the contour C as shown in Fig. 1.9.1. Inside this contour the function is analytic everywhere, except at the point a. Unfortunately C is a random contour and can not be expressed in a convenient mathematical way. Since a is the only pole inside the contour C, we may select another, simpler integration path. As we have already mastered the integration around a circular path, we select a circle C_c with the radius ε that lies inside the contour C. From Fig. 1.9.1 it is evident that:
\varepsilon = |z - a|   or   z - a = \varepsilon\,e^{j\theta}   (1.9.2)
Thus:
z = \varepsilon\,e^{j\theta} + a   (1.9.3)
where the angle θ can have any value in the range 0 … 2π. Furthermore it follows that:
dz = j\,\varepsilon\,e^{j\theta}\,d\theta   (1.9.4)
The result is the same as we have obtained for the function f(z) = 1/z, in which the pole was at the origin of the z plane.
Fig. 1.9.1: Contour integral around the pole at a. Fig. 1.9.2: The integral around the contour C is zero because a, the only pole of the function, lies outside the contour.
We look again at Fig. 1.8.3, where the integral along the path L₁ is equal to the integral along the path L₂ because there is no pole between L₁ and L₂. It would be interesting to take the integral from z₁ to z₂ along the path L₂ and then back again from z₂ to z₁ along the path L₁, making a closed loop (contour) integral:
\underbrace{\int_{z_1}^{z_2} f(z)\,dz}_{\mathrm{along}\ L_2} + \underbrace{\int_{z_2}^{z_1} f(z)\,dz}_{\mathrm{along}\ L_1} = \underbrace{\oint f(z)\,dz}_{\mathrm{along}\ L_2+L_1} = 0   (1.9.6)
Since both integrals have the same magnitude, exchanging the limits of the second integral makes it negative, so their sum is zero. This statement affords us the conclusion that the integral around the contour C in Fig. 1.9.2, which encircles an area where the function is analytic, is zero (the only pole, a, in the vicinity lies outside the contour of integration). This is expressed as:
\oint_C f(z)\,dz = 0   (1.9.7)
The expressions in Eq. 1.9.6 and 1.9.8 were derived by the French mathematician Augustin Louis Cauchy (1788–1857). In all the calculations so far we have integrated in a counter-clockwise sense, having the integration field, including the pole, always on the left side. For the case of a clockwise direction, let us again take Eq. 1.9.1 and integrate clockwise, from 2π to 0:
\int_{2\pi}^{0}\frac{dz}{z} = \int_{2\pi}^{0}\frac{j\,e^{j\theta}\,d\theta}{e^{j\theta}} = j\int_{2\pi}^{0}d\theta = -2\pi j = \oint_{C,\,\mathrm{cw}}\frac{dz}{z}   (1.9.8)
Note that the sign of the result changes if we change the direction of encirclement. So we may write in general:
\oint_{\mathrm{ccw}} f(z)\,dz = -\oint_{\mathrm{cw}} f(z)\,dz   (1.9.9)
Let us take a function f(z) which is analytic inside a contour C. There are no restrictions on the nature of f(z) outside the contour, where f(z) may also have poles. So this function is analytic also at the point a (inside C), where its value is f(a).
Now we form another function:
g(z) = \frac{f(z)}{z-a}   (1.10.1)
This function is also analytic inside the contour C, except at the point a, where it has a pole, as shown in Fig. 1.10.1. Let us take the integral around the closed contour C:
\oint_C\frac{f(z)}{z-a}\,dz   (1.10.2)
which is similar to the integral in Eq. 1.9.6, except that here we have f(z) in the numerator. Because at the point a the function under the integral is not analytic, the path of integration must avoid this point. Therefore we go around it along a circle of radius ε, which can be made as small as required (but not zero).
For the path of integration we shall use the required contour C and the circle C_c. To make the closed contour, the complete integration path will start at point 1 and go counter-clockwise around the contour C to come back to point 1; then from point 1 to point 2 along the dotted line; then clockwise around the circle C_c back to point 2; and finally from point 2 back to point 1 along the dotted line. In this way the contour of integration is closed. The integral from point 1 to 2 and back again is zero. Thus there remain only the integrals around the contour C and around the circle C_c.
Fig. 1.10.1: The contour C, with the pole at a excluded by the small circle C_c of radius ε; points 1 and 2 connect the two paths.
Since around the complete integration path the domain on the left hand side of the contour was always analytic, the resulting integral must be zero. Thus:
Here we have changed the sign of the second integral by reversing the sense of encirclement. Similarly as in Eq. 1.9.2 and 1.9.4 we write:
z - a = \varepsilon\,e^{j\theta},   dz = j\,\varepsilon\,e^{j\theta}\,d\theta   and   \frac{dz}{z-a} = j\,d\theta
Nothing would change if in Eq. 1.10.4 we write f(z) = f(a) + [f(z) − f(a)] under the integral.
The integration must go from 0 to 2π in order to encircle the point a in the required direction. The value of the first integral on the right is:
\int_0^{2\pi} f(a)\,j\,d\theta = 2\pi j\,f(a)
The function f(z) is continuous everywhere inside the field bordered by C and C_c; therefore the point z can be as close to the point a as desired. Consequently |f(z) − f(a)| may also be as small as desired. The radius of the circle C_c inside the contour C is ε = |z − a|, and in Eq. 1.9.5 we have already observed that the value of the integral is independent of ε. If we take the limit ε → 0 we obtain:
f(a) = \frac{1}{2\pi j}\oint_C\frac{f(z)\,dz}{z-a}   (1.10.11)
This is the famous Cauchy integral formula: with it we can calculate the value of an analytic function at any desired point (say, a) if all the values on the contour surrounding this point are known. Thus if:
g(z) = \frac{f(z)}{z-a}
then the value f(a) is called the residue of the function g(z) for the pole a.
To make the term ‘residue’ clear let us work out a practical example. Suppose g(z) is a rational function of two polynomials:
g(z) = \frac{P(z)}{Q(z)} = \frac{b_m z^m + \cdots + b_1 z + b_0}{a_n z^n + \cdots + a_1 z + a_0}   (1.10.12)
where the b_i and a_i are real constants and n > m. Eq. 1.10.12 represents a general form of the frequency response of an amplifier, where z can be replaced by the usual s = σ + jω and b₀/a₀ is the DC amplification (at frequency ω = 0). Instead of the sums, the polynomials P(z) and Q(z), and thus g(z), may also be expressed in the product form:
g(z) = \frac{(z-z_1)(z-z_2)\cdots(z-z_m)}{(z-p_1)(z-p_2)\cdots(z-p_n)}   (1.10.13)
In this equation, z₁, z₂, …, z_m are the roots of the polynomial P(z), so they are also the zeros of g(z). Similarly, p₁, p₂, …, p_n are the roots of the polynomial Q(z) and therefore also the poles of g(z). Both statements are valid if p_i ≠ z_i for any i that can be applied to Eq. 1.10.13 (if z − z₁ were equal to, say, z − p₃, there would be no pole at p₃, because this pole would be canceled by the zero z₁). Now we factor out the term with one pole, i.e., 1/(z − p₂), and write:
g(z) = \frac{1}{z-p_2}\,f(z)   (1.10.14)
where:
f(z) = \frac{(z-z_1)(z-z_2)\cdots(z-z_m)}{(z-p_1)(z-p_3)\cdots(z-p_n)}   (1.10.15)
If we focus only on f(z) and let z → p₂, we obtain the residue of the function g(z) for the pole p₂, and this residue is equal to f(p₂). Since we have taken the second pole we have appended the index ‘2’ to the res(idue). By performing the suggested operation we obtain:
\mathrm{res}_2 = \lim_{z\to p_2}\,(z-p_2)\,g(z) = f(p_2)
The word ‘residue’ is of Latin origin and means the remainder. However, since a remainder may also appear when we divide one polynomial by another, we shall keep using the expression ‘residue’ in order to avoid any confusion. Also, in our further practical calculations we will simply write, say, res₂ instead of the complete expression res₂ F(s).
In the following examples we shall see that the calculation of residues is a relatively simple matter.
From now on we shall replace the variable z by our familiar complex variable s = σ + jω. Also, in order to distinguish more easily the functions of complex frequency from the functions of time, we shall write the former with capitals, like F(s) or G(s), and the latter with small letters, like f(t) or g(t).
To prove that the calculation of residues is indeed a simple task let us calculate two examples.
1.10.1 Example 1
Let us take a function:
F(s) = \frac{(s+2)(s+3)}{(s+4)(s+5)(s+6)}
We need to calculate the three residues of F(s) for the poles at s = −4, s = −5 and s = −6:
\mathrm{res}_1 = \lim_{s\to -4}\,(s+4)\,\frac{(s+2)(s+3)}{(s+4)(s+5)(s+6)} = \frac{(-4+2)(-4+3)}{(-4+5)(-4+6)} = 1
\mathrm{res}_2 = \lim_{s\to -5}\,(s+5)\,\frac{(s+2)(s+3)}{(s+4)(s+5)(s+6)} = \frac{(-5+2)(-5+3)}{(-5+4)(-5+6)} = -6
\mathrm{res}_3 = \lim_{s\to -6}\,(s+6)\,\frac{(s+2)(s+3)}{(s+4)(s+5)(s+6)} = \frac{(-6+2)(-6+3)}{(-6+4)(-6+5)} = 6
An interesting fact here is that since all the poles are real, all the residues are real as well; in other words, a real pole causes the residue of that pole to be real.
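The limits above are exactly what a computer algebra system evaluates; a sketch (SymPy assumed, not part of the original text) which also checks that the residues rebuild F(s) as partial fractions:

```python
# Residues of F(s) = (s+2)(s+3)/((s+4)(s+5)(s+6)) at its three simple poles.
import sympy as sp

s = sp.Symbol('s')
F = (s + 2)*(s + 3)/((s + 4)*(s + 5)*(s + 6))
res = {p: sp.limit((s - p)*F, s, p) for p in (-4, -5, -6)}
assert res[-4] == 1
assert res[-5] == -6
assert res[-6] == 6
# with simple poles, F(s) equals the sum of res_k/(s - p_k)
assert sp.simplify(F - sum(r/(s - p) for p, r in res.items())) == 0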
1.10.2 Example 2
Our function is:
F(s) = \frac{(s+2)\,e^{st}}{3s^2 + 9s + 9}
Here we must consider that the variable of the function F(s) is only s and not t. First we tackle the denominator to find both roots, which are the poles of our function:
3s^2 + 9s + 9 = 3\,(s^2 + 3s + 3) = 0
Thus:
s_{1,2} = -\sigma_1 \pm j\,\omega_1 = -\frac{3}{2} \pm j\sqrt{3 - \left(\frac{3}{2}\right)^2} = -\frac{3}{2} \pm j\,\frac{\sqrt{3}}{2}
so that the function can be written as:
F(s) = \frac{(s+2)\,e^{st}}{3\,(s-s_1)(s-s_2)}
We shall carry out a general calculation of the two residues and then introduce the
numerical values for 5" and =" .
We now set 5" œ $Î# and =" œ È$ Î# to obtain the numerical value of the residue:
È$ >Î#
Š$Î# 4 È$ Î# #‹ e$ >Î# e4
res" œ
' 4 È$
È$ 4 È
œ e$ >Î# e4 $ >Î#
"# È$
Since the two poles are complex conjugates, the two residues are complex conjugates as
well. In the rational functions which will appear in the later sections, all the poles will be
either real, or complex conjugate pairs, or both. Therefore the sum of all the residues of
such a function (that is, the time function) will always be real.
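Since lost minus signs are the most common error in hand calculation, the residues of Example 1 can be cross-checked numerically. The following Python sketch (our own illustration, not part of the original text) evaluates the simple-pole residue formula resₖ = N(pₖ)/∏_{j≠k}(pₖ − pⱼ):

```python
def residues_simple(num, poles):
    """Residues of num(s)/((s-p1)...(s-pn)) at each simple pole:
    evaluate the numerator at the pole and divide by the product of
    the distances to the remaining poles."""
    res = []
    for k, pk in enumerate(poles):
        denom = 1.0
        for j, pj in enumerate(poles):
            if j != k:
                denom *= pk - pj
        res.append(num(pk) / denom)
    return res

# Example 1: F(s) = (s+2)(s+3) / ((s+4)(s+5)(s+6))
r = residues_simple(lambda s: (s + 2) * (s + 3), [-4.0, -5.0, -6.0])
# r -> [1.0, -6.0, 6.0], matching res1, res2, res3 above
```

Note also that the residues sum to 1, the leading-coefficient ratio of the function; such sum checks catch most sign slips.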
When a function contains multiple poles it is not possible to calculate the residues
in the way shown in the previous section. As an example let us take the function:

    G(s) = F(s) / (s + a)ⁿ    (1.11.1)

To calculate the residue we first expand F(s) into a Taylor series around s = −a
[Ref. 1.4, 1.11]:

    F(s) = F(−a) + F′(−a)(s + a) + [F″(−a)/2!](s + a)² + ⋯
           + [F⁽ⁿ⁻¹⁾(−a)/(n − 1)!](s + a)ⁿ⁻¹ + ⋯    (1.11.2)

Now we divide all the terms of this equation by (s + a)ⁿ (considering that
0! = 1 by definition):

    G(s) = F(−a)/(s + a)ⁿ + F′(−a)/(s + a)ⁿ⁻¹ + ⋯ + F⁽ⁿ⁻¹⁾(−a)/[(n − 1)!(s + a)] + ⋯    (1.11.3)

The values F(−a), F′(−a), F″(−a)/2!, …, F⁽ⁿ⁻¹⁾(−a)/(n − 1)!, … are constants and
we write them as A₋ₙ, A₋₍ₙ₋₁₎, A₋₍ₙ₋₂₎, …, A₋₁, A₀, A₁, A₂, …

We may now express the function G(s) as:

    G(s) = A₋ₙ/(s + a)ⁿ + A₋₍ₙ₋₁₎/(s + a)ⁿ⁻¹ + ⋯ + A₋₁/(s + a)
           + A₀ + A₁(s + a) + A₂(s + a)² + ⋯    (1.11.4)

The sum of all the fractions (the terms with negative powers) of the above function we call
the principal part; the rest is the analytic part (also known as the regular part).

Eq. 1.11.4 is named the Laurent series, after the French mathematician Pierre-
Alphonse Laurent, 1813–1854, who in 1843 described “a series with negative powers”.
A general expression for the Laurent series is:

    F(s) = Σ_{n=−m}^{+∞} Aₙ (s + a)ⁿ    (1.11.5)
Each term of the series can be integrated along a small circle C centered on the pole,
by substituting s + a = ε e^{jθ}, so that ds = jε e^{jθ} dθ. The general term gives:

    ∮_C (s + a)ⁿ ds    (1.11.7)

      = jεⁿ⁺¹ ∫₀^{2π} e^{j(n+1)θ} dθ = [εⁿ⁺¹/(n + 1)] (e^{j(n+1)2π} − 1) = 0   for n ≠ −1

because e^{jk2π} = 1, for any positive or negative integer k, including 0. For n = −1, we
derive from Eq. 1.11.8:

    ∮_C ds/(s + a) = ∫₀^{2π} [jε e^{jθ}/(ε e^{jθ})] dθ = 2πj    (1.11.10)

In order that the result corresponds to the Laurent series we must add the constant
factor which multiplies the term with n = −1, and this is A₋₁. Eq. 1.11.8 to 1.11.10 prove that
the contour integration of the complete Laurent series G(s) yields only:

    ∮_C G(s) ds = 2πj A₋₁    (1.11.11)

Thus from the whole series (Eq. 1.11.4) only the part with A₋₁ remains after the
integration. If we return to Eq. 1.11.3 we conclude that:

    A₋₁ = lim_{s→−a} [1/(n − 1)!] · d⁽ⁿ⁻¹⁾/ds⁽ⁿ⁻¹⁾ [(s + a)ⁿ G(s)]    (1.11.12)
is the residue of the function G(s) = F(s)/(s + a)ⁿ for the pole at s = −a. The following
examples will show how we calculate the residues of multiple poles in practice.
1.11.1 Example 1

We take a function:

    G(s) = F(s)/(s + a)³

Our task is to calculate the general expression for the residue of the triple pole
(n = 3) at s = −a. According to Eq. 1.11.12 it is:

    res = lim_{s→−a} [1/(3 − 1)!] · d⁽³⁻¹⁾/ds⁽³⁻¹⁾ [(s + a)³ G(s)]

        = (1/2) [d²/ds² (s + a)³ G(s)]_{s→−a}
1.11.2 Example 2

Here we shall calculate with numerical values. We intend to find the residue of the double
pole at s = −2 and of the single pole at s = −3 of the function:

    F(s) = 5 / [(s + 2)²(s + 3)]

Solution:

    res₁ = lim_{s→−2} d/ds [(s + 2)² F(s)] = [d/ds (5/(s + 3))]_{s→−2}
         = [−5/(s + 3)²]_{s→−2} = −5

    res₂ = lim_{s→−3} (s + 3) F(s) = [5/(s + 2)²]_{s→−3} = 5
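The derivative in Eq. 1.11.12 can also be taken numerically, which gives a quick cross-check of Example 2. The sketch below is our own illustration (a central-difference derivative, not the book's method):

```python
def central_diff(f, s, h=1e-6):
    """Numerical first derivative via a central difference."""
    return (f(s + h) - f(s - h)) / (2.0 * h)

# F(s) = 5/((s+2)^2 (s+3)); removing the factor (s+2)^2 leaves 5/(s+3),
# so the residue at the double pole s = -2 is d/ds [5/(s+3)] there
# (Eq. 1.11.12 with n = 2).
res1 = central_diff(lambda s: 5.0 / (s + 3.0), -2.0)   # analytically -5
res2 = 5.0 / (-3.0 + 2.0) ** 2                         # simple pole at -3: +5
```

For a double pole the two residues of this function must cancel (the function falls off as 1/s³, so the residue sum is zero), which the numbers confirm.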
So far we have calculated the contour integral around one pole (simple or multiple).
Now we will integrate around several poles, either simple or multiple.

Cheese is a regular part of French meals. So we may imagine that the great
mathematician Cauchy observed a slice of Emmentaler cheese, like that in Fig. 1.12.1 (a
characteristic of this cheese is its big holes), on his plate and reflected in the following way:

Suppose all that is cheese is an analytic (regular) domain ℛ of a function F(s). In
the holes are the poles s₁, …, s₅. We are not interested in the domain outside the cheese.
How could we ‘mathematically’ encircle the cheese around the crust and around the rims of
all the holes, so that the cheese is always on the left side of the contour?
[Figure: two sketches of the complex plane (σ, jω) with the poles s₁, …, s₅]

Fig. 1.12.1: The cheese represents a regular (analytic) domain ℛ of a function which has
one simple pole in each hole.

Fig. 1.12.2: Encircling the poles by the contours C, C₁, …, C₅, so that the regular domain
of the function is always on the left side.
Impossible? No! If we take a knife and make a cut from the crust towards each hole,
without removing any cheese, we provide the necessary path for the suggested contour, as
shown in Fig. 1.12.2.

Now we calculate a contour integral starting from the point A in the suggested
(counter-clockwise) direction until we come to the cut towards the first pole, s₁. We follow
the cut towards the contour C₁, follow it around the pole and then go along the cut again, back
to the crust. We continue around the crust up to the cut of the next pole and so on, until we
arrive back at the point A and close the contour. Since we have not removed any cheese in
making the cuts, the paths from the crust to the corresponding hole and back again cancel
out in this integration path. As we have proved by Eq. 1.9.6:

    ∫_a^b F(s) ds + ∫_b^a F(s) ds = 0

Therefore, only the contour C around the crust and the small contours C₁, …, C₅
around the rims of the holes containing the poles are what we must consider in the
integration around the contour in Fig. 1.12.2. The contour C was taken counter-clockwise,
whilst the contours C₁, …, C₅ were taken clockwise.
The result of integration is zero, because along this circuitous contour of integration
we have had the regular domain always on the left side:

    ∮_C F(s) ds + ∮_{C₁} F(s) ds + ⋯ + ∮_{C₅} F(s) ds = 0    (1.12.1)

By changing the sense of encircling of the contours C₁, …, C₅ we may write Eq. 1.12.1
also in the form:

    ∮_C F(s) ds = ∮_{C₁} F(s) ds + ⋯ + ∮_{C₅} F(s) ds    (1.12.2)

When we changed the sense of encircling, we changed the sign of the integrals; this allows
us to put them on the right hand side with a positive sign. Now all the integrals have
positive (counter-clockwise) encircling. Therefore the integral encircling all the poles is
equal to the sum of the integrals encircling each particular pole.

By observing this equation we realize that the right hand side is the sum of the residues
of all five poles, multiplied by 2πj. Thus for the general n-pole case Eq. 1.12.2 may
also be written as:

    ∮_C F(s) ds = 2πj [res₁ + ⋯ + resₙ] = 2πj Σ_{i=1}^{n} resᵢ    (1.12.3)

Eq. 1.12.2 and 1.12.3 are called the Cauchy–Goursat theorem; they are essential
for the inverse Laplace transform.
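Eq. 1.12.3 can be verified numerically: integrating the function of Example 1 (Sec. 1.10.1) around a circle enclosing all three poles must give 2πj times the sum of the residues, 1 − 6 + 6 = 1. A small Python sketch (our own illustration, with an arbitrarily chosen circle):

```python
import cmath

def contour_integral(F, center, radius, n=4000):
    """Midpoint-rule approximation of the integral of F around a circle;
    for a smooth periodic integrand this converges extremely fast."""
    total = 0j
    for k in range(n):
        th = 2.0 * cmath.pi * (k + 0.5) / n
        s = center + radius * cmath.exp(1j * th)
        ds = 1j * radius * cmath.exp(1j * th) * (2.0 * cmath.pi / n)
        total += F(s) * ds
    return total

F = lambda s: (s + 2) * (s + 3) / ((s + 4) * (s + 5) * (s + 6))
I = contour_integral(F, -5.0, 2.0)   # circle enclosing -4, -5 and -6
# I is numerically indistinguishable from 2*pi*j (residue sum = 1)
```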
1.13 Equality of the Integrals ∮_C F(s) e^{st} ds and ∫_{−j∞}^{+j∞} F(s) e^{st} ds
The reader is invited to examine Fig. 1.13.1, where the function |F(s)| = |1/s| is
plotted. The function has one simple pole at the origin of the complex plane. The resulting
surface has been cut between −j and −1 to expose an arbitrarily chosen integration path L
between s₁ = x₁ + jy₁ = 0 − j0.5 and s₂ = x₂ + jy₂ = −0.5 + j0 (see the integration
path in the plot of the s domain in Fig. 1.13.2).
[Figure: 3D plot of the surface |F(s)| over the complex plane, with the path L and the
maximum M marked]

Fig. 1.13.1: The complex function magnitude, |F(s)| = |1/s|. The resulting surface has been cut
between −j and −1 to expose an arbitrarily chosen integration path L, starting at s₁ = 0 − j0.5
and ending at s₂ = −0.5 + j0. On the path of integration the function |F(s)| has a maximum
value M.
[Figure: the s-plane projection of the path L, and the area A under |F(s)| along L, laid flat]

Fig. 1.13.2: The complex domain of Fig. 1.13.1 shows the arbitrarily chosen integration path L,
from s₁ = 0 − j0.5 to s₂ = −0.5 + j0.

Fig. 1.13.3: The area A of Fig. 1.13.1 has been laid flat in order to show that it must be
smaller than or equal to the area of the rectangle M × L.
Let us take a closer look at the area A between s₁, s₂, |F(s₁)| and |F(s₂)|, shown in
Fig. 1.13.3. The area A corresponds to the integral of F(s) from s₁ to s₂, and it can be shown
that it is always smaller than, or at best equal to, the area of the rectangle M × L:
    |∫_{s₁}^{s₂} F(s) ds| = |∫_{s₁}^{s₂} ds/s| ≤ ∫_{s₁}^{s₂} |ds|/|s| ≤ ∫_{s₁}^{s₂} M |ds| = M·L    (1.13.1)

Here M is the greatest value of |F(s)| for this particular path of integration L, as shown in
Fig. 1.13.3, in which the resulting 3D area between s₁, s₂, |F(s₁)| and |F(s₂)| was
stretched flat. So:
    |∫_{s₁}^{s₂} F(s) ds| ≤ ∫_{s₁}^{s₂} |F(s) ds| ≤ M·L    (1.13.2)
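The bound of Eq. 1.13.2 can be illustrated numerically on the very path of Fig. 1.13.2 (F(s) = 1/s, straight path from s₁ = −j0.5 to s₂ = −0.5); the step counts below are our own illustrative choice:

```python
import cmath

s1, s2 = -0.5j, -0.5 + 0j
n = 100000
I = 0j
for k in range(n):
    u = (k + 0.5) / n                  # midpoint rule along the segment
    s = s1 + (s2 - s1) * u
    I += (1.0 / s) * (s2 - s1) / n     # F(s) ds
path_len = abs(s2 - s1)                                            # L
M = max(abs(1.0 / (s1 + (s2 - s1) * (k / 1000.0))) for k in range(1001))
# here |I| = pi/2 (about 1.571), while M*L = 2; Eq. 1.13.2 holds
```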
Eq. 1.13.2 is an essential tool in the proof of the inverse ℒ transform via the
integral around the closed contour.

Let us now move to network analysis, where we have to deal with rational functions
of the complex variable s = σ + jω. These functions have the general form:

    F(s) = (bₘ sᵐ + ⋯ + b₁ s + b₀) / (aₙ sⁿ + ⋯ + a₁ s + a₀)    (1.13.3)

On a circle of radius R, where s = R e^{jθ}, the magnitude of such a function is bounded by:

    |F(s)| = |(bₘ Rᵐ e^{jmθ} + ⋯ + b₀) / (aₙ Rⁿ e^{jnθ} + ⋯ + a₀)| ≤ K/Rⁿ⁻ᵐ = M    (1.13.5)

where K is a real constant and M is the maximum value of |F(s)| within the integration
interval, according to Fig. 1.13.1 and 1.13.3 (in [Ref. 1.10, p. 212] the interested reader can
find the complete derivation of the constant K).
    s₁ = σ₁ + jω₁ = R e^{jθ}

    σ₁ = R cos θ        ω₁ = R sin θ

    R = √[(σ₁ + jω₁)(σ₁ − jω₁)] = √(σ₁² + ω₁²)

    θ = arctan (ω₁/σ₁)

Fig. 1.13.4: Cartesian and polar representations of a complex number (note: tan θ is equal for
the counter-clockwise defined θ from the positive real axis and for the clockwise defined
β = θ − π).
Let us draw the poles of Eq. 1.13.3 in the complex plane to calculate the integral
around an inverted D-shaped contour, as shown in Fig. 1.13.5 (for convenience only three
poles have been drawn there). Since Eq. 1.13.3 is assumed to describe a real passive system,
all the poles must lie either on the left side of the complex plane or at the origin. As we know,
the integral around the closed contour embracing all the poles is equal to the sum of the
residues of the function F(s):

    ∮ F(s) ds = ∫_{σa−jω₁}^{σa+jω₁} F(s) ds + ∫_Γ F(s) ds = 2πj Σ_{i=1}^{n} resᵢ    (1.13.6)

The contour has two parts: the straight line from σa − jω₁ to σa + jω₁, where σa is a
constant (which we will define more exactly later), and the arc Γ = γR, where γ is the arc
angle and R is its radius. According to Eq. 1.13.2, the line integral along any path L is:
    |∫_L F(s) ds| ≤ M·L    (1.13.7)

where M is the maximum magnitude of F(s) on the path under consideration. In our case,
for the arc:

    M = K/Rⁿ⁻ᵐ   and   L = γR = Γ    (1.13.8)

so that M·L = Kγ/Rⁿ⁻ᵐ⁻¹, which vanishes as R → ∞ whenever the number of poles
exceeds the number of zeros by at least two, n − m ≥ 2 (Eq. 1.13.9, 1.13.10).
[Figure: the inverted D-shaped contour — the straight line at σ = σa from −jω₁ to +jω₁,
closed by the arc Γ of radius R and angle γ, enclosing the poles s₁, s₂, s₃]

Fig. 1.13.5: The integral along the inverted D-shaped contour encircling the poles is equal
to the sum of the residues of each pole. This contour is used to prove the inverse Laplace
transform, where the integral along the arc vanishes if R → ∞, provided that the number of
poles exceeds the number of zeros by at least 2 (in this example no zeros are shown).
If the condition of Eq. 1.13.10 holds, only the straight part of the contour counts,
because if R → ∞ then also ω₁ → ∞, thus changing the limits of the integral along the
straight path accordingly. If we make these changes to Eq. 1.13.6, it shrinks to:

    ∫_L F(s) ds = ∫_{σa−j∞}^{σa+j∞} F(s) ds = 2πj Σ_{i=1}^{n} resᵢ    (1.13.11)

The function F(s) may also contain the factor e^{st}, where ℜ(s) ≤ σa and t > 0. In
this case the constant σa, which is called the abscissa of absolute convergence [Ref. 1.3,
1.5, 1.8], must be small enough to ensure the convergence of the integral. The factor e^{st} is
always present in the inverse ℒ transform. Let us write this factor down and let us divide
Eq. 1.13.11 by the factor 2πj. In this way the integral obtains the form:

    f(t) = ℒ⁻¹{F(s)} = (1/2πj) ∫_{σa−j∞}^{σa+j∞} F(s) e^{st} ds = Σ res {F(s) e^{st}}    (1.13.12)

and this is the formula for the inverse ℒ transform [Ref. 1.3, 1.5, 1.8]. The above integral is
convergent for t > 0, which is the usual constraint in passive network analysis. This
constraint will also apply to all the derivations which follow.
In the condition written in Eq. 1.13.10 we see that the order of the denominator's
polynomial must exceed the order of the numerator by at least two, otherwise we could not
prove the inverse ℒ transform by the method derived above. This means that the number of
poles must exceed the number of zeros by at least two. However, in network theory we
often deal with input functions called positive real functions [Ref. 1.16]. The degree of
the denominator in these functions may exceed the degree of the numerator by one only. To
prove the inverse ℒ transform for such a case, we must reach for another method. The proof
is possible by using a rectangular contour [Ref. 1.5, 1.13, 1.17].

When the degree of the denominator exceeds the degree of the numerator by one
only, Eq. 1.13.5 is reduced to:

    |F(s)| ≤ K/R = M    (1.13.13)

The simplest such function is:

    F(s) = 1/(s − s_p) = 1/(s + σ_p)    (1.13.14)
This is a single-pole function, with the pole on the negative real axis (for our
calculation it is not essential that the pole lies on the real axis, but in the theory of real
passive networks a single pole always lies either on the negative σ axis or at the origin of
the complex plane).

The pole and the rectangular contour with the sides Γ₁, Γ₂, Γ₃ and Γ₄ are shown in
Fig. 1.13.6. We will integrate around this rectangular contour. At the same time we let both
σ₁ → −∞ and ω₁ → ∞. Next we will prove, considering these limits, that the line integrals
along the sides Γ₂, Γ₃ and Γ₄ all vanish.
[Figure: rectangular contour with the corners σa ± jω₁ and σ₁ ± jω₁, enclosing the single
pole s_p on the negative real axis; the sides are Γ₁ (right), Γ₂ (top), Γ₃ (left), Γ₄ (bottom)]

Fig. 1.13.6: By using a rectangular contour as shown it is possible to prove the inverse
Laplace transform by means of the contour integral, even if the number of poles
exceeds the number of zeros by only one. In this integral, encircling the single simple
pole, we let σ₁ → −∞ and ω₁ → ∞, so that the integrals along Γ₂, Γ₃ and Γ₄ vanish.
Here we will include the factor e^{st} (which always appears in the inverse
ℒ transform) at the very beginning, because it will help us in making the integral along Γ₃
convergent. Let us start with the integral along the side Γ₂, where ω₁ is constant:

    |∫_{Γ₂} F(s) e^{st} ds| = |∫_{σa}^{σ₁} F(σ + jω₁) e^{(σ + jω₁)t} dσ|

      ≤ ∫_{σ₁}^{σa} (K/ω₁) e^{σt} dσ = [K/(ω₁ t)] (e^{σa t} − e^{σ₁ t}) → 0   for ω₁ → ∞    (1.13.16)
Since we are calculating the absolute value, we can exchange the limits of the last integral.
The integral along Γ₄ is almost identical:

    |∫_{Γ₄} F(s) e^{st} ds| = |∫_{σ₁}^{σa} F(σ − jω₁) e^{(σ − jω₁)t} dσ|

      ≤ ∫_{σ₁}^{σa} (K/ω₁) e^{σt} dσ = [K/(ω₁ t)] (e^{σa t} − e^{σ₁ t}) → 0   for ω₁ → ∞    (1.13.17)
Along the side Γ₃ we have s = σ₁ + jω, so that |F(s)| ≤ K/|σ₁| and |e^{st}| = e^{σ₁ t};
the integral is therefore bounded by:

    |∫_{Γ₃} F(s) e^{st} ds| ≤ (K/|σ₁|) e^{σ₁ t} [ω₁ − (−ω₁)] → 0   for σ₁ → −∞, ω₁ → ∞    (1.13.18)

since for t > 0 the factor e^{σ₁ t} vanishes faster than ω₁ can grow.
Since the integrals along Γ₂, Γ₃ and Γ₄ all vanish when σ₁ → −∞ and ω₁ → ∞,
only the integral along Γ₁ remains, which, in the limit, is equal to the integral along the
complete rectangular contour and, in turn, to the sum of the residues of the poles of F(s):

    lim_{ω₁→∞} ∫_{σa−jω₁}^{σa+jω₁} F(s) e^{st} ds = ∫_{σa−j∞}^{σa+j∞} F(s) e^{st} ds = ∮_Γ F(s) e^{st} ds

If this equation is divided by 2πj, we again obtain Eq. 1.13.12, which is the
inverse Laplace transform of the function F(s).

Although there was only a single pole in our F(s) in Eq. 1.13.14, the result obtained
is valid in the general case, when F(s) has n poles and n − 1 zeros.
Thus we have proved the ℒ⁻¹ transform by means of a contour integral for positive
real functions. As in Eq. 1.13.12, here, too, the abscissa of absolute convergence σa must be
chosen so that ℜ{s} ≤ σa, and also t > 0, in order to ensure the convergence of the integral.
However, we may also integrate along a straight path where σ ≥ σa, provided that all the
poles remain on the left side of the path.

From all the complicated equations above the reader must remember only one
important fact, which we will use very frequently in the following sections: by means of
the ℒ⁻¹ transform of F(s), the complex transfer function of a linear network, we
obtain the real time function, f(t), as the sum of the residues of all the poles of the
complex frequency function F(s) e^{st}.

Let us put this in the symbolic form:

    f(t) = ℒ⁻¹{F(s)} = Σ res {F(s) e^{st}}
In the following parts of the book we will very frequently need the inverse Laplace
transform of two-pole and three-pole systems, in which the third pole, at the origin (1/s), is
the ℒ transform of the unit step function. Therefore it will be useful to perform our first
example of the ℒ⁻¹ transform calculation on such a network function.

A typical network of this sort is shown in Fig. 1.14.1. Our task is to calculate the
voltage on the resistor R as a function of time for t ≥ 0. First we will apply an input current
iᵢ in the form of an impulse δ(t), and next the input current will have the unit step form. Both
results will be used in many cases in the following parts of the book. In the same way as for
F(s) and f(t), we will label the voltages and currents with capitals (V, I) when they are
functions of frequency and with small letters (v, i) when they are functions of time.
[Figure: the input current source iᵢ = 1V/R drives the parallel connection of the capacitor C
(current i_C) and the series combination of the inductor L (current i_L) and the resistor R;
the output voltage v_o is taken across R]

Fig. 1.14.1: The two-pole RLC network analyzed in this section.
The input current is composed of two components: the current through the capacitor,
I_C, and the current through the inductor, I_L (and the resistor R); Vᵢ is the input voltage:

    Iᵢ = I_C + I_L = Vᵢ sC + Vᵢ/(sL + R) = Vᵢ (s² LC + s RC + 1)/(sL + R)    (1.14.1)

Correspondingly:

    Vᵢ/Iᵢ = (sL + R)/(s² LC + s RC + 1)    (1.14.2)

This is a typical input function [Ref. 1.16], in this case in the form of an (input)
impedance, Zᵢ. The characteristic of an input function is that the number of poles exceeds
the number of zeros by one only. The output voltage V_o is:

    V_o = Vᵢ R/(sL + R)    (1.14.3)

and so:

    V_o/Iᵢ = [R/(sL + R)] · [(sL + R)/(s² LC + s RC + 1)] = R/(s² LC + s RC + 1)    (1.14.4)

The result is the transfer function of the network (from input to output, but
expressed as the output to input ratio). Since the dimension of Eq. 1.14.4 is (complex)
ohms, it is also named the transimpedance. In general we will assume the input current to be
1 V/R, in order to obtain a normalized transfer function:

    V_o = 1 [V]/(s² LC + s RC + 1) = G(s)    (1.14.5)
In our later applications of the circuit in Fig. 1.14.1 the denominator of Eq. 1.14.5
must have complex roots (although, in general, the roots can also be real). Now let us
calculate both roots of the denominator from its canonical form:

    s² + s R/L + 1/LC = 0    (1.14.6)

with the roots:

    s₁,₂ = σ₁ ± jω₁ = −R/2L ± √(R²/4L² − 1/LC)    (1.14.7)

In special cases, some of which we shall analyze in the later parts of the book, the roots
may also be double and real.
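Eq. 1.14.7 is easy to evaluate on a computer; the sketch below uses a set of component values of our own illustrative choice (not from the book), and cmath.sqrt covers both the real and the complex case. The checks at the end come from the canonical form, Eq. 1.14.6: the pole product must equal 1/LC and the pole sum −R/L.

```python
import cmath

R, L, C = 40.0, 1e-6, 1e-9               # illustrative: ohm, henry, farad

sigma = -R / (2.0 * L)                    # -R/2L
disc = (R / (2.0 * L)) ** 2 - 1.0 / (L * C)
s1 = sigma + cmath.sqrt(disc)             # s1,2 = -R/2L +/- sqrt(R^2/4L^2 - 1/LC)
s2 = sigma - cmath.sqrt(disc)
# for these values disc < 0, so s1, s2 form a complex conjugate pair
```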
Expressing the transfer function, Eq. 1.14.5, by its roots, we obtain:

    G(s) = (1/LC) · 1/[(s − s₁)(s − s₂)]    (1.14.8)

From the ℒ⁻¹ transform of this function we obtain the system's impulse response in
the time domain, g(t), the response to the unit impulse δ(t). The factor 1/LC is the squared
system resonance, ω₀², which in a different network may take a different form (in the
general normalized second-order case it is equal to the product of the two poles, s₁ s₂).
Thus, we put K = 1/LC:

    g(t) = ℒ⁻¹{G(s)} = ℒ⁻¹{K/[(s − s₁)(s − s₂)]}

         = (K/2πj) ∮_C e^{st}/[(s − s₁)(s − s₂)] ds = K Σ res {e^{st}/[(s − s₁)(s − s₂)]}    (1.14.9)
[Figure: 3D plot of the surface |G(s)| over the complex plane, with the two pole peaks at
s₁, s₂ and the curve |G(jω)| along the imaginary axis]

Fig. 1.14.2: The magnitude of the system transfer function, Eq. 1.14.8, for s₁,₂ = (−1 ± j)/√2
and K = 1. For ℜ(s) = 0, the surface |G(s)| is reduced to the frequency response's magnitude
curve, |G(jω)|. The height at s₁,₂ is ∞, but was limited to 3 in order to see |G(jω)| in detail.
The two residues of Eq. 1.14.9 are evaluated in the same way as in Sec. 1.10; their sum
gives the impulse response g(t) = (K/ω₁) e^{σ₁ t} sin ω₁t (Eq. 1.14.17), which can also be
expressed with θ, the angle between a pole and the positive σ axis, as in Fig. 1.13.4.
In our example, σ₁² + ω₁² = 1 (Butterworth case), so Eq. 1.14.17 can be simplified:

    g(t) = (1/ω₁) e^{σ₁ t} sin ω₁t = (1/sin θ) e^{σ₁ t} sin ω₁t    (1.14.18)
Note that Eq. 1.14.13 and Eq. 1.14.17 are valid for any complex pole pair, not
just for Butterworth poles. This completes the calculation of the impulse response.
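The residue route and the closed form of Eq. 1.14.18 are easy to compare numerically. The sketch below (our own illustration, assuming the Butterworth pair s₁,₂ = (−1 ± j)/√2 and K = 1) sums the two residues of Eq. 1.14.9 in complex arithmetic and checks the result against the real closed form:

```python
import cmath, math

s1 = complex(-1.0, 1.0) / math.sqrt(2.0)    # Butterworth pole pair
s2 = s1.conjugate()
K = (s1 * s2).real                           # = sigma1^2 + omega1^2 = 1

def g_residues(t):
    """Impulse response as the sum of the two residues of K e^{st}/((s-s1)(s-s2));
    the imaginary parts of the two terms cancel."""
    r = K * cmath.exp(s1 * t) / (s1 - s2) + K * cmath.exp(s2 * t) / (s2 - s1)
    return r.real

def g_closed(t):
    """Closed form, Eq. 1.14.18: (1/omega1) e^{sigma1 t} sin(omega1 t)."""
    return math.exp(s1.real * t) * math.sin(s1.imag * t) / s1.imag
```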
The next case, in which we are interested more often, is the step response. In
Example 1, Sec. 1.5, we have calculated that the unit step function in the time domain
corresponds to 1/s in the frequency domain. To obtain the step response in the time
domain, we need only multiply the frequency response by 1/s and calculate the inverse
ℒ transform of the product. So by multiplying G(s) by 1/s we obtain a new function:

    F(s) = (1/s) G(s) = K/[s(s − s₁)(s − s₂)]    (1.14.19)

To calculate the step response in the time domain we use the ℒ⁻¹ transform:

    f(t) = ℒ⁻¹{F(s)} = ℒ⁻¹{K/[s(s − s₁)(s − s₂)]}

         = (K/2πj) ∮_C e^{st}/[s(s − s₁)(s − s₂)] ds = K Σ res {e^{st}/[s(s − s₁)(s − s₂)]}    (1.14.20)
The difference between Eq. 1.14.9 and Eq. 1.14.20 is that here we have an additional
pole, s₀ = 0, because of the factor 1/s. Thus here we have three residues:

    res₀ = lim_{s→0} s · e^{st}/[s(s − s₁)(s − s₂)] = 1/(s₁ s₂)

    res₁ = lim_{s→s₁} (s − s₁) e^{st}/[s(s − s₁)(s − s₂)] = e^{s₁t}/[s₁(s₁ − s₂)]

    res₂ = lim_{s→s₂} (s − s₂) e^{st}/[s(s − s₁)(s − s₂)] = e^{s₂t}/[s₂(s₂ − s₁)]    (1.14.21)

In the double pole case (coincident pole pair, s₁ = s₂) the calculation is different
(remember Eq. 1.11.12) and it will be shown in several examples in Part 2. The time
domain function is the sum of all three residues, multiplied by K (the factor K/(s₁s₂) is
factored out):

    f(t) = [K/(s₁ s₂)] [1 + (s₂/(s₁ − s₂)) e^{s₁t} + (s₁/(s₂ − s₁)) e^{s₂t}]    (1.14.22)
By expressing s₁ = σ₁ + jω₁ and s₂ = σ₁ − jω₁ in each of the terms we obtain:

    K/(s₁ s₂) = K/[(σ₁ + jω₁)(σ₁ − jω₁)] = K/(σ₁² + ω₁²) = 1    (see Eq. 1.14.16)

    [s₂/(s₁ − s₂)] e^{s₁t} = [(σ₁ − jω₁)/(2jω₁)] e^{(σ₁ + jω₁)t}

    [s₁/(s₂ − s₁)] e^{s₂t} = −[(σ₁ + jω₁)/(2jω₁)] e^{(σ₁ − jω₁)t}    (1.14.23)
By factoring out e^{σ₁t}, and with a slight rearrangement, we arrive at:

    f(t) = 1 + e^{σ₁t} [(σ₁/ω₁)·(e^{jω₁t} − e^{−jω₁t})/2j − (e^{jω₁t} + e^{−jω₁t})/2]    (1.14.25)

Since (e^{jω₁t} − e^{−jω₁t})/2j = sin ω₁t and (e^{jω₁t} + e^{−jω₁t})/2 = cos ω₁t, we can
simplify Eq. 1.14.25 into the form:

    f(t) = 1 + e^{σ₁t} [(σ₁/ω₁) sin ω₁t − cos ω₁t]    (1.14.26)
We could now numerically calculate the response, but first we want to show two things:

1) how the formula relates to the physical circuit behavior; and
2) an error which is all too often ignored (even by experienced engineers!).

We can further simplify the sine–cosine term by using the vector sum of the two
phasors (this relation can be found in any mathematics handbook):

    f(t) = 1 + e^{σ₁t} √[1 + (σ₁/ω₁)²] sin(ω₁t + θ)    (1.14.27)

For the Butterworth case the square root is equal to √2, but in the general case it is:

    √[1 + (σ₁/ω₁)²] = √(σ₁² + ω₁²)/ω₁ = |1/sin θ|    (1.14.28)

Note that for any value of σ₁ and ω₁ their squares can never be negative, which is reflected
in the absolute value notation at the end; on the other hand, it is important to preserve the
correct sign of the phase shifting term in sin(ω₁t + θ). By putting Eq. 1.14.28 back into
Eq. 1.14.27 we obtain a relatively simple expression:

    f(t) = 1 + |1/sin θ| e^{σ₁t} sin(ω₁t + θ)    (1.14.29)

If we now insert the numerical values for σ₁, ω₁ and θ and plot the function for t in
the interval from 0 to 10, the resulting graph will be obviously wrong! What happened?
Let us check our result by applying the rules of initial and final value from Sec. 1.6.
We will use Eq. 1.14.29 and Eq. 1.14.8, considering that K = s₁s₂ (Eq. 1.14.16).

1. Check the initial value, first in the frequency domain, s → ∞:

    f(0) = lim_{s→∞} s F(s) = lim_{s→∞} s₁s₂/[(s − s₁)(s − s₂)] = 0    (1.14.30)

and then in the time domain, from Eq. 1.14.29 at t = 0:

    f(0) = 1 + e^{σ₁·0} (1/|sin θ|) sin(ω₁·0 + θ) = 2    (1.14.31)

which is wrong!

2. Check the final value:

    f(∞) = lim_{t→∞} [1 + (1/|sin θ|) e^{σ₁t} sin(ω₁t + θ)] = 1

         = s F(s)|_{s=0} = s₁s₂/[(0 − s₁)(0 − s₂)] = 1    (1.14.32)

and at least this one is correct in both the time and the frequency domain. Note that in both
checks the pole at s = 0 is canceled by the multiplication of F(s) by s.
Considering the error in the initial value in the time domain, many engineers
wrongly assume that they have made a sign error and change the time domain
equation to:

    f(t) = 1 − |1/sin θ| e^{σ₁t} sin(ω₁t + θ)    (wrong!)

Although the step response plot will now look correct, a careful analysis shows that
the negative sign is completely unjustified! Instead we should have used:

    θ = π + arctan(ω₁/σ₁)    (1.14.33)

The reason for the added π lies in the tangent function, which repeats with a period
of π radians (and not 2π, as the sine and cosine do). This results in a lost sign, since
the arctangent cannot distinguish angles in the first quadrant from those in the
third, nor angles in the second quadrant from those in the fourth. See Appendix 2.3 for more
such cases in 3rd- and 4th-order systems.
A graphical presentation of the step response solution, given by Eq. 1.14.29 with
the correct initial phase angle, Eq. 1.14.33, is displayed in Fig. 1.14.3.
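On a computer the quadrant problem is conveniently side-stepped with the two-argument arctangent. The sketch below (our own illustration, for the pole pair σ₁ = −1, ω₁ = 1 of Fig. 1.14.3) places θ in the correct quadrant with atan2, equivalent to the π-correction of Eq. 1.14.33, and checks Eq. 1.14.29 against the phase-ambiguity-free form of Eq. 1.14.26:

```python
import math

sigma1, omega1 = -1.0, 1.0              # the example pole pair of Fig. 1.14.3

def f_direct(t):
    """Step response, Eq. 1.14.26 (no phase ambiguity here)."""
    return 1.0 + math.exp(sigma1 * t) * (
        (sigma1 / omega1) * math.sin(omega1 * t) - math.cos(omega1 * t))

# quadrant-correct phase: sin(theta) and cos(theta) must carry the signs
# of -omega1 and sigma1 respectively
theta = math.atan2(-omega1, sigma1)

def f_phasor(t):
    """Step response, Eq. 1.14.29, with the quadrant-correct theta."""
    return 1.0 + math.exp(sigma1 * t) * math.sin(omega1 * t + theta) / abs(math.sin(theta))
```

With a first-quadrant θ the initial value comes out as 2 instead of 0, exactly the failure discussed above; with the corrected θ both forms agree for all t.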
The physical circuit behavior can be explained as follows:

The system resonance term, sin(ω₁t), is first shifted by θ, the characteristic angle of
the pole, becoming sin(ω₁t + θ) (the time shift is θ/ω₁). At resonance the voltage and
current in the reactive components are each other's derivatives (a sine–cosine relationship, see
Eq. 1.14.26); the initial phase angle θ reflects their impedance ratios.

The amplitude of the shifted function is then corrected by the absolute value of the
function at t = 0, which is |1/sin θ|. Thus the starting value of the scaled sine term is −1,
and in addition its initial slope is precisely identical to the initial slope of the exponential
damping function, e^{σ₁t}, so that their product has zero initial slope.

This product is the system reaction to the unit step excitation, h(t), which sets the
final value for t → ∞ (s₀ = 0). Summing the residue at s₀ (res₀ = 1) with the reaction
function gives the final result, the step response f(t).
[Figure: a) the phase-shifted and scaled resonance term sin(ω₁t + θ)/|sin θ|, the exponential
envelope e^{σ₁t} and their product e^{σ₁t} sin(ω₁t + θ)/|sin θ|, plotted for 0 ≤ t ≤ 10,
with the pole pair s₁,₂ = σ₁ ± jω₁ = −1 ± j shown in the complex plane;
b) the unit step h(t) (0 for t < 0, 1 for t > 0) and the complete step response
f(t) = 1 + e^{σ₁t} sin(ω₁t + θ)/|sin θ|, with the additional pole s₀ = 0 shown]

Fig. 1.14.3: Step by step graphic representation of the procedures used in the calculation of
the step response of a system with two complex conjugate poles.
We have purposely presented the complete calculation of the step response of the
RLC circuit in every detail, for two reasons:

1) to show how the step response is calculated by means of the ℒ⁻¹ transform and
the theory of residues; and
2) because we shall meet such functions very frequently in the following parts.
1.15 Convolution

Suppose the input to a second network, B, is the step response f(t) of the network A
(Eq. 1.14.29), and, according to Eq. 1.14.17, the impulse response of the network B is:

    g(t) = (1/|sin θ₂|) e^{σ₂t} sin ω₂t

Both functions are shown in Fig. 1.15.1a. We will fold (convolve) g(t), because it is easier
to do so. This folding is done by a time reversal about t = 0 and a shift by τ, obtaining
g(τ − t). The interval τ has to be chosen so that g(t ≥ τ) = 0 (or at least very
close to zero), otherwise the convolution integral would not converge to the correct final
value. The output function y(t) is then the convolution integral:

    y(t) = ∫₀^{t_max} [1 + (1/|sin θ₁|) e^{σ₁t} sin(ω₁t + θ₁)] · (1/|sin θ₂|) e^{σ₂(τ−t)} sin ω₂(τ − t) dt    (1.15.1)

where the first bracket is f(t) and the remaining factor is g(τ − t).
To solve this integral requires a formidable effort, and the reader may be assured that
we shall not attempt to solve it here, because — as we will see later — there is a more
elegant method of doing so. We have written out the complete integral merely to give the
reader an example of the convolution based on the functions which we have already
calculated. Nevertheless, it is a challenge for the reader who wants to do it by himself (for
the construction of the diagrams in Fig. 1.15.1, this integral has been solved!).

In Fig. 1.15.1b we first fold the function g(t) and introduce the shift τ
to obtain g(τ − t). Next, in Fig. 1.15.1c, the function g(τ − t) is shifted right along the time
axis to the position t = 1, obtaining g(1 − t). The area A₁ under the product of the two
signals is the value of the convolution integral for the interval 0 ≤ t ≤ 1.

In a similar fashion, in Fig. 1.15.1d the function g(τ − t) is shifted to t = 2. Here
the value of the convolution integral for the interval 0 ≤ t ≤ 2 is equal to the area A₂.
[Figure, panels a–h: a) the unit step h(t), the response f(t) of the network A (poles s₁, s₂),
the impulse response g(t) of the network B (poles s₃, s₄), and the definition
y(t) = f(t) * g(t) = ∫ f(t) g(τ − t) dt; b) g(t) folded (time-reversed) about t = 0;
c)–g) g(τ − t) shifted to τ = 1, 2, 3, 4, 5, with the areas A₁, …, A₅ under the product
f(t)·g(τ − t); h) the output y(t), with the areas A₁, …, A₅ marking its successive values]
In Fig. 1.15.1e, the function g(τ − t) is shifted to t = 3 to obtain the area A₃, and in
Fig. 1.15.1f, the function g(τ − t) is shifted to t = 4, resulting in the area A₄, which is in
part negative, owing to the shape of g(τ − t).

In Fig. 1.15.1g, t = 5 and the area A₅ is obtained. Since f(t) has nearly reached its
final value and g(t) is almost zero for t ≥ 5, any further shifting changes the area only slightly.

Finally, in Fig. 1.15.1h the values of A₁, …, A₅ are inserted to point to the
particular values of the output function y(t). For comparison, the input of network B, f(t),
is also drawn. Although f(t) has almost no overshoot, the Butterworth poles of the network
B cause an undershoot in g(t), which results in an overshoot in the output signal y(t).

Important note: in the last plot of Fig. 1.15.1 the system response y(t) is plotted as
if the network B had unity gain. The impulse response of a unity gain system is
characterized by the whole area under it being equal to 1; consequently, its peak
amplitude would be very small compared to f(t), so there would not be much to
see. Therefore for g(t) we have plotted its ideal impulse response. The
normalization to unity gain is accomplished by dividing the ideal impulse response
by its own time integral (numerically, each instantaneous amplitude sample is
divided by the sum of all the samples). See Part 6 and Part 7 for more details.
From Eq. 1.6.51 it has become evident that convolution in the time domain
corresponds to a simple multiplication in the frequency domain. This is also shown in
Fig. 1.15.2. The upper half of the figure is the s domain, whilst the bottom half is the
t domain. Instead of performing the convolution g(t) * f(t) in the t domain, which is
difficult (see Eq. 1.15.1), we rather perform the simple multiplication G(s) · F(s) in the
s domain. Then, by means of the ℒ⁻¹ transform, we obtain the function y(t) which we are
looking for.
    s domain:   F(s) ──[ multiplication by the system function G(s): algebraic equation (easy) ]──► Y(s)
                  ▲ ℒ                                                                        │ ℒ⁻¹
    t domain:   f(t) ──[ convolution with the system function g(t): integral equation (difficult) ]──► y(t)

Fig. 1.15.2: Equivalence of the system response calculation in the time domain, f(t) * g(t),
and the frequency domain, F(s) · G(s). For analytical work the transform route is the easy
way. For computer use the direct method is preferred.
By taking the transform route we need only calculate the sum of all the residues
(five in the case shown in Fig. 1.15.1), which is far less difficult than the calculation of the
integral in Eq. 1.15.1. The mathematical expression which applies to this case is:

    y(t) = Σ res { [s₁s₂ / (s(s − s₁)(s − s₂))] · [s₃s₄ / ((s − s₃)(s − s₄))] e^{st} }    (1.15.2)

where the first bracket is F(s) and the second is G(s). Here the numerators of both fractions
have been normalized by introducing the products s₁s₂ and s₃s₄ respectively, to replace the
constant K (according to Eq. 1.14.16) in Eq. 1.14.19 and Eq. 1.14.8. A solution of the above
equation can be found in Part 2, Sec. 2.6.
Fig. 1.15.2 also reveals another very important possibility. If the input signal f(t) is
known and a certain output signal y(t) is desired, we can synthesize (not always!) the
intermediate network G(s) by taking the ℒ transform of both time functions and calculating
their quotient:

    G(s) = Y(s)/F(s)    (1.15.3)
Résumé of Part 1
So far we have discussed the Laplace transform and its inverse only to the extent
that the reader needs for understanding the rest of the book.
Since we shall calculate many practical examples of the ℒ⁻¹ transform in the
following chapters, we have discussed extensively only the calculation of the time function
of a simple two-pole network with a complex conjugate pole pair, excited by the unit step
function.
Readers who want to broaden their knowledge of the Laplace transform can
find enough material for further study in the references quoted.
References:
P. Starič, E. Margan
Wideband Amplifiers
Part 2: Inductive Peaking Circuits
P. Stariè, E. Margan Inductive Peaking Circuits
2.10 Comparison of MFA Frequency Responses and of MFED Step Responses ...................................... 2.103
2.11 The Construction of T-coils .............................................................................................................. 2.105
References ................................................................................................................................................. 2.111
Appendix 2.1: General Solutions for 1st-, 2nd-, 3rd- and 4th-order Polynomials ..................................... A2.1.1
Appendix 2.2: Normalization of Complex Frequency Response Functions ............................................ A2.2.1
Appendix 2.3: Solutions for Step Responses of 3rd - and 4th -order Systems ................................... (CD) A2.3.1
Appendix 2.4: Table 2.10 — Summary of all Inductive Peaking Circuits ......................................(CD) A2.4.1
List of Figures:
Fig. 2.1.1: A common base amplifier with RC load ..................................................................................... 2.9
Fig. 2.1.2: A hypothetical ideal rise time circuit ......................................................................................... 2.10
Fig. 2.1.3: A common base amplifier with the series peaking circuit .......................................................... 2.11
Fig. 2.2.1: A two-pole series peaking circuit .............................................................................................. 2.13
Fig. 2.2.2: The poles s1 and s2 in the complex plane .................................................................................. 2.14
Fig. 2.2.3: Frequency response magnitude of the two-pole series peaking circuit ...................................... 2.18
Fig. 2.2.4: Phase response of the series peaking circuit .............................................................................. 2.19
Fig. 2.2.5: Phase and envelope delay definition .......................................................................................... 2.20
Fig. 2.2.6: Phase delay and phase advance ................................................................................................. 2.21
Fig. 2.2.7: Envelope delay of the series peaking circuit .............................................................................. 2.22
Fig. 2.2.8: Step response of the series peaking circuit ................................................................................ 2.24
Fig. 2.2.9: Input impedance of the series peaking circuit ............................................................................ 2.26
Fig. 2.3.1: The three-pole series peaking circuit ......................................................................................... 2.27
Fig. 2.3.2: Frequency response of the third-order series peaking circuit ..................................................... 2.30
Fig. 2.3.3: Phase response of the third-order series peaking circuit ............................................................ 2.31
Fig. 2.3.4: Envelope delay of the third-order series peaking circuit ............................................................ 2.32
Fig. 2.3.5: Step response of the third-order series peaking circuit .............................................................. 2.33
Fig. 2.3.6: Pole patterns of the third-order series peaking circuit ............................................................... 2.34
Fig. 2.4.1: The basic T-coil circuit and its equivalent ................................................................................. 2.35
Fig. 2.4.2: Modeling the coupling factor ..................................................................................................... 2.35
Fig. 2.4.3: The poles and zeros of the all pass transimpedance function .................................................... 2.40
Fig. 2.4.4: The complex conjugate pole pair of the Bessel type ................................................................. 2.40
Fig. 2.4.5: The frequency response magnitude of the T-coil circuit ............................................................ 2.43
Fig. 2.4.6: The phase response of the T-coil circuit .................................................................................... 2.44
Fig. 2.4.7: The envelope delay of the T-coil circuit .................................................................................... 2.44
Fig. 2.4.8: The step response of the T-coil circuit, taken from C ............................................................... 2.45
Fig. 2.4.9: The step response of the T-coil circuit, taken from R ............................................................... 2.48
Fig. 2.4.10: An example of a system with different input impedances ......................................................... 2.49
Fig. 2.4.11: Input impedance compensation by T-coil sections ................................................................... 2.50
Fig. 2.5.1: The three-pole T-coil network ................................................................................................... 2.51
Fig. 2.5.2: The layout of Bessel poles for Fig.2.5.1 .................................................................................... 2.51
Fig. 2.5.3: The basic trigonometric relations of main parameters for one of the poles ............................... 2.52
Fig. 2.5.4: Three-pole T-coil network frequency response ......................................................................... 2.55
Fig. 2.5.5: Three-pole T-coil network phase response ................................................................................ 2.56
Fig. 2.5.6: Three-pole T-coil network envelope delay ................................................................................ 2.56
Fig. 2.5.7: The step response of the three-pole T-coil circuit ..................................................................... 2.58
Fig. 2.5.8: Low coupling factor, Group 1: frequency response .................................................................. 2.60
Fig. 2.5.9: Low coupling factor of Group 1: step response ........................................................................ 2.60
Fig. 2.5.10: Low coupling factor of Group 2: frequency response ............................................................... 2.61
Fig. 2.5.11: Low coupling factor of Group 2: step response ........................................................................ 2.61
Fig. 2.6.1: The four-pole L+T network ....................................................................................................... 2.63
Fig. 2.6.2: The Bessel four-pole pattern of L+T network ........................................................................... 2.63
Fig. 2.6.3: Four-pole L+T peaking circuit frequency response ................................................................... 2.67
Fig. 2.6.4: Additional frequency response plots of the four-pole L+T peaking circuit ............................... 2.67
Fig. 2.6.5: Four-pole L+T peaking circuit phase response .......................................................................... 2.68
Fig. 2.6.6: Four-pole L+T peaking circuit envelope delay .......................................................................... 2.69
Fig. 2.6.7: Four-pole L+T circuit step response .......................................................................................... 2.72
Fig. 2.6.8: Some additional four-pole L+T circuit step responses .............................................................. 2.72
Fig. 2.7.1: A shunt peaking network ........................................................................................................... 2.73
Fig. 2.7.2: Two-pole shunt peaking circuit frequency response .................................................................. 2.76
Fig. 2.7.3: Two-pole shunt peaking circuit phase response ......................................................................... 2.77
Fig. 2.7.4: Two-pole shunt peaking circuit envelope delay ......................................................................... 2.77
Fig. 2.7.5: Two-pole shunt peaking circuit step response ........................................................................... 2.80
Fig. 2.7.6: Layout of poles and zeros for the two-pole shunt peaking circuit .............................................. 2.81
Fig. 2.8.1: Three-pole shunt peaking circuit ............................................................................................... 2.83
Fig. 2.8.2: Three-pole shunt peaking circuit frequency response ................................................................ 2.86
Fig. 2.8.3: Three-pole shunt peaking circuit phase response ....................................................................... 2.87
Fig. 2.8.4: Three-pole shunt peaking circuit envelope delay ....................................................................... 2.87
Fig. 2.8.5: Three-pole shunt peaking circuit step response ......................................................................... 2.89
Fig. 2.9.1: The shunt–series peaking circuit ................................................................................................ 2.91
Fig. 2.9.2: The shunt–series peaking circuit frequency response ................................................................ 2.97
Fig. 2.9.3: The shunt–series peaking circuit phase response ....................................................................... 2.98
Fig. 2.9.4: The shunt–series peaking circuit envelope delay ....................................................................... 2.98
Fig. 2.9.5: The shunt–series peaking circuit step response ....................................................................... 2.100
Fig. 2.9.6: The MFED shunt–series step responses by Shea and Braude .................................................. 2.100
Fig. 2.9.7: The MFED shunt–series pole layouts ...................................................................................... 2.101
Fig. 2.10.1: MFA frequency responses of all peaking circuits .................................................................. 2.104
Fig. 2.10.2: MFED step responses of all peaking circuits ......................................................................... 2.104
Fig. 2.11.1: Four-pole L+T circuit step response dependence on component tolerances .......................... 2.105
Fig. 2.11.2: T-coil coupling factor as a function of the coil length to diameter ratio ................................ 2.106
Fig. 2.11.3: Form factor as a function of the coil length to diameter ratio ................................................ 2.108
Fig. 2.11.4: Examples of planar coil structures ......................................................................................... 2.109
Fig. 2.11.5: Compensation of a bonding inductance by a planar T-coil .................................................... 2.109
Fig. 2.11.6: A high coupling T-coil on a double sided PCB ..................................................................... 2.110
List of Tables:
Appendix 2.4: Table 2.10 — Summary of all Inductive Peaking Circuits ......................................(CD) A2.4.1
2.0 Introduction
In the early days of wideband amplifiers ‘suitable coils’ were added to the load
(consisting of resistors and stray capacitances) in order to extend the bandwidth, causing in
most cases a resonance peak in the frequency response. Hence the term inductive peaking.
Even though later designers of wideband amplifiers did their best
to achieve as flat a frequency response as possible, the word ‘peaking’ remained and
is still in use today.
In some respect the British engineer S. Butterworth might be considered the first to
introduce coils in the (then) anode circuits of electronic tubes to construct an amplifier with
a maximally flat frequency (low pass) response. In his work On the Theory of Filter
Amplifiers, published as early as October 1930 [Ref. 2.1], besides introducing the pole
placement which was later named after him, he also mentioned: “The writer has
constructed filter units in which the resistances and inductances are wound round a
cylinder of length 3in and diameter 1.25 in, whilst the necessary condensers are contained
within the core of the cylinder”. However, it is hard to tell exactly the year when these
‘necessary condensers’ were omitted to leave only the stray and inter-electrode
capacitances of the electronic tubes to form, together with the properly dimensioned coils
and load resistances, a wideband amplifier with maximally flat frequency response. This
was probably done some time in the mid 1930s, when the first electronic voltmeters,
oscilloscopes, and television amplifiers were constructed.
The need for wideband and pulse amplifiers was emphasized with the introduction
of radar during the Second World War. A book of historical value, G. E. Valley & H.
Wallman, Vacuum Tube Amplifiers [Ref. 2.2] was written right after the war and
published in 1948. Apart from details about other types of amplifiers, the most important
knowledge about wideband amplifiers, gained during the war in the Radiation Laboratory
at Massachusetts Institute of Technology, was made public. In this work the amplifier step
response calculation also received the necessary attention.
After the war, people who had worked in the Radiation Laboratory spread across the USA
and the UK, and many of them started working at firms where oscilloscopes were produced.
Many articles were written about wideband amplifiers with inductive peaking, but books
which would thoroughly discuss wideband amplifiers were almost non-existent. The reason
was probably that the emphasis had shifted from the frequency domain to the time
domain, where a gap-free mathematical discussion was considered difficult. Nevertheless,
here and there a book on this subject appeared, and one of the most significant was
published in 1957 in Prague: J. Bednařík & J. Daněk, Obrazové zesilovače pro televisí a
měřicí techniku, (Video Amplifiers for Television and Measuring Techniques) [Ref. 2.3].
There the authors attempted to present a thorough discussion of all inductive peaking
circuits known at that time and also of high frequency resonant amplifiers. Computers were
a rare commodity in those days, with restricted access, and programming knowledge was
equally rare; this prevented the authors from executing some important calculations, which
are too elaborate to be done by pencil and paper.
An important change in wideband amplifier design, using inductive peaking, was
introduced by E.L. Ginzton, W.R. Hewlett, J.H. Jasberg, and J.D. Noe in their revolutionary
article Distributed Amplification, [Ref. 2.4]. This was an amplifier with electronic tubes
connected in parallel, where the grid and anode interconnections were made of lumped
sections of a delay line. In this way the bandwidth of the amplifier was extended beyond
the limits imposed by the mutual conductance (gm) divided by the stray capacitance (Cin) of
electronic tubes. For reasons which we will discuss in Part 3, this type of amplification has
a rather limited application if transistors are used instead of electronic tubes. The necessary
delay in a distributed amplifier was realized using the so-called ‘m-derived’ T-coils, which
did not have a constant input impedance. The correct T-coil circuit was developed in 1964
by C.R. Battjes [Ref. 2.17] and was used for inductive peaking of wideband amplifiers.
Compared with a simple series peaking circuit, a T-coil circuit improves the bandwidth and
rise time exactly twofold. For many years the T-coil peaking circuits were considered a
trade secret, so the first complete mathematical derivations were published by a pupil of
C.R. Battjes only in the early 1980s [Ref. 2.5, 2.6] and in 1995 by C. R. Battjes himself
[Ref. 2.18]. Transistor inter-stage coupling with T-coils represented a special problem,
which was solved by R.I. Ross in the late 1960s. This too was considered a classified matter
and appeared in print some ten to twenty years later [Ref. 2.7, 2.8, 2.9]. Owing to the
superb performance of the T-coil circuit we shall discuss it very thoroughly. The transistor
inter-stage T-coil coupling will be derived in Part 3.
Here in Part 2, we shall first explain the basic idea of inductive peaking, followed
by the discussion of the peaking circuits with poles only: series peaking two-pole, series
peaking three-pole, T-coil two-pole, T-coil three-pole, and L+T four-pole circuits. This will
be followed by peaking circuits with poles and zeros: shunt peaking two-pole and one-zero
circuit, shunt peaking three-pole and two-zero circuit, and shunt–series peaking circuit. For
each of the circuits discussed we shall calculate and plot the frequency, phase, envelope
delay, and the step response. The emphasis will be on T-coil circuits, owing to their superb
performance. All the necessary calculations will be explained as we proceed and, whenever
practical, the complete derivations will be given. The exception is the step response of the
series peaking circuit with one complex conjugate pole pair, which was already derived and
explained in Part 1. Since the complete calculation for the step responses of four-pole L+T
circuits and shunt–series peaking circuits is rather complicated, only the final formulae will
be given. Those readers who want to have the derivations for these circuits as well, will be
able to do so themselves by learning and applying the principles derived in Part 1 and 2
(some assistance can also be found in Appendix 2.1, 2.2 and 2.3).
To beginners we strongly recommend the study of Sec. 2.2 and 2.3: the circuit
examples are simple enough to allow the analysis to be easily followed and learned; the
same methods can then be applied to more sophisticated circuits in other sections, in which
some of the most basic details are omitted and some equations imported from those two
sections.
At the end of Part 2 we shall draw two diagrams, showing the Butterworth (MFA)
frequency responses and the Bessel (MFED) step responses, to offer an easy comparison of
performance. Finally, in Appendix 2.4 we give a summary table containing the essential
design parameters and equations for all the circuits discussed.
A simple common base transistor amplifier is shown in Fig. 2.1.1. A current step
source, is, is connected to the emitter; the time scale has its origin t = 0 at the current step
transition time and is normalized to the system time constant RC. The collector is loaded
by a resistor R; in addition there is the collector–base capacitance Ccb, along with the
unavoidable stray capacitance Cs and the load capacitance CL in parallel. Their sum is
denoted by C.
[Fig. 2.1.1 graph: the normalized step response vo/(ic·R) = 1 − e^(−t/RC) on the t/RC scale, with the 10 % and 90 % levels marking the rise time τr1 = 2.2 RC; circuit labels Q1, is, R, and Ccb + Cs + CL = C.]
Fig. 2.1.1: A common base amplifier with RC load: the basic circuit and its step response.
Because of these capacitances, the output voltage vo does not jump suddenly to the
value ic·R, where ic is the collector current. Instead this voltage rises exponentially
according to the formula (see Part 1, Eq. 1.7.15):

   vo = ic R (1 − e^(−t/RC))     (2.1.1)

The time elapsed between 10 % and 90 % of the final output voltage value (ic R)
we call the rise time, τr1 (the index ‘1’ indicates that it is the rise time of a single-pole
circuit). We calculate it by inserting the 10 % and 90 % levels into Eq. 2.1.1:
   τr1 = t2 − t1 = RC ln 0.9 − RC ln 0.1 = RC ln(0.9/0.1) = 2.2 RC     (2.1.4)

The value 2.2 RC is the reference against which we shall compare the rise times of all
other circuits in the following sections of the book.
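The arithmetic behind Eq. 2.1.4 is easily checked numerically (a sketch of ours, assuming the normalized value RC = 1):

```python
import numpy as np

# 10 % - 90 % rise time of the single-pole response vo = ic·R·(1 - exp(-t/RC))
RC = 1.0
t1 = -RC*np.log(1.0 - 0.1)   # time at which vo reaches 10 % of ic·R
t2 = -RC*np.log(1.0 - 0.9)   # time at which vo reaches 90 % of ic·R
tau_r1 = t2 - t1
print(tau_r1/RC)             # RC·ln(0.9/0.1) = RC·ln 9 ≈ 2.197, i.e. about 2.2·RC
```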
Since in wideband amplifiers we strive to make the output voltage a replica of the
input voltage (except for the amplitude), we want to reduce the rise time of the amplifier as
much as possible. As the output voltage rises, more current flows through R and less
current remains to charge C. Obviously, we would achieve a shorter rise time if we could
disconnect R in some way until C is charged to the desired level. To do so let us introduce
a switch S between the capacitor C and the load resistor R. This switch is open at time
t = 0, when the current step starts, but closes at time t = RC, as in Fig. 2.1.2. In this way
we force all the available current into the capacitor, so it charges linearly towards the voltage ic R.
When the capacitor has reached this voltage, the switch S is closed, routing all the current
to the loading resistor R.
Fig. 2.1.2: A hypothetical ideal rise time circuit. The switch disconnects R from the circuit, so that all
of ic is available to charge C; but after a time t = RC the switch is closed and all of ic flows through R.
The resulting output voltage is shown in b, compared with the exponential response in a.
By comparing Fig. 2.1.1 with Fig. 2.1.2, we note a substantial decrease in rise time
τr0, which we calculate from the output voltage:

   vo = (1/C) ∫₀^τ ic dt = (ic/C) · t |₀^τ = ic R     (2.1.5)

where τ = RC. Since the charging of the capacitor is linear, as shown in Fig. 2.1.2, the rise
time is simply:

   τr0 = 0.8 RC     (2.1.6)
In comparison with Fig. 2.1.1, where there was no switch, the rise time improvement
factor is:

   ηr = τr1/τr0 = 2.20 RC / 0.8 RC = 2.75     (2.1.7)
It is evident that the rise time (Eq. 2.1.6) is independent of the actual value of
the current ic, but the maximum voltage ic R (Eq. 2.1.5) is not. On the other hand, the
smaller the resistor R, the smaller the rise time. Clearly the introduction of the switch S
would mean a great improvement. By using a more powerful transistor and a lower value
resistor R we could (at least in principle) decrease the rise time at will (provided that C
remains unchanged). Unfortunately, it is impossible to make a low on-resistance switch,
functioning as in Fig. 2.1.2, which would also suitably follow the signal and automatically
open and close in nanoseconds or even in microseconds. So it remains only a nice idea.
But instead of a switch we can insert an appropriate inductance L between the
capacitor C and the resistor R and so partially achieve the effect of the switch, as shown in
Fig. 2.1.3. Since the current through an inductor cannot change instantaneously, more
current will be charging C, at least initially. The configuration of the RLC network allows
us to take the output voltage either from the resistor R or from the capacitor C. In the first
case we have a series peaking network, whilst in the second case we speak of a shunt
peaking network. Both types of peaking networks are used in wideband amplifiers.
[Fig. 2.1.3 graph: the normalized step response vo/(ic R) = 1 + e^(σ1·t) sin(ω1·t + θ)/|sin θ|, with σ1 = −R/(2L), ω1 = ±√(1/(LC) − R²/(4L²)) and θ = π + arctan(−ω1/σ1), plotted on the t/RC scale as curve c against curves a and b.]
Fig. 2.1.3: A common base amplifier with the series peaking circuit. The output voltage vo
(curve c) is compared with the exponential response (a, L = 0) and the response using the
ideal switch (b). If we were to take the output voltage from the capacitor C, we would have a
shunt peaking circuit (see Sec. 2.7). We have already seen the complete derivation of the
procedure for calculating the step response in Part 1, Sec. 1.14. However, the response
optimization in accordance with different design criteria is shown in Sec. 2.2 for the series
peaking circuit and in Sec. 2.7 for the shunt peaking circuit.
Fig. 2.1.3 shows the simplest series peaking circuit. Later, when we discuss T-coil
circuits, we shall not just achieve rise time improvements similar to that in Eq. 2.1.7; in
cases in which it is possible (usually it is) to split C into two parts, we shall obtain a
substantially greater improvement.
Besides the series peaking circuit, in this section we shall discuss all the significant
mathematical methods needed to calculate the frequency, phase and
time delay responses, the upper half power frequency, and the rise time. In addition, we shall
derive the most important design parameters of the series peaking circuit, which we shall
also use in the other sections of the book.
[Fig. 2.2.1 schematic: the input current ii splits into iC through the capacitor C and iL through the inductance L in series with the resistor R; the output voltage vo is taken across R.]
In Fig. 2.2.1 we have repeated the collector loading circuit of Fig. 2.1.3. Since the
inductive peaking circuits are used mostly as collector load circuits, from here on we shall
omit the transistor symbol; instead we shall show the input current Ii (formerly Ic) flowing
into the network, with the common ground as its drain. At first we shall discuss the
behavior of the network in the frequency domain, assuming that Ii is the RMS value of a
sinusoidally changing input current. This current is split into two parts: the current IC
through the capacitance and the current IL through the inductance. Thus we have:

   Ii = IC + IL = Vi jωC + Vi/(jωL + R) = Vi (jωC + 1/(jωL + R))     (2.2.1)
where the input voltage Vi is the product of the driving current Ii and the input impedance
Zi (represented by the expression in parentheses). The output voltage is:

   Vo = IL R = Vi R/(jωL + R)     (2.2.2)
From these equations we obtain the transfer function:

   Vo/Ii = [Vi R/(jωL + R)] / [Vi (jωC + 1/(jωL + R))] = R / [jωC (jωL + R) + 1]

         = R / [(jω)² LC + jω RC + 1]     (2.2.3)

Let us set Ii = 1 V/R and L = m R²C, where m is a dimensionless parameter; also let us
substitute jω with s. With these substitutions the output voltage Vo = F(s) becomes:

   F(s) = 1/(s² m R²C² + s RC + 1) = [1/(m R²C²)] · 1/[s² + s/(m RC) + 1/(m R²C²)]     (2.2.4)
The denominator roots, which for an efficient peaking must be complex conjugates,
as in Fig. 2.2.2, are the poles of F(s):

   s1,2 = σ1 ± jω1 = −1/(2m RC) ± √[1/(4m² R²C²) − 1/(m R²C²)]     (2.2.5)
[Fig. 2.2.2 diagrams: the pole loci in the s plane and the four characteristic pole layouts for m = 0, m = 0.25, m = 0.33 and m = 0.5.]
Fig. 2.2.2: The poles s1 and s2 in the complex plane. If the parameter m = 0, the poles are
s1 = −1/RC and s2 = −∞. By increasing m, they travel along the real axis towards each
other and meet at s1 = s2 = −2/RC (for m = 0.25). Increasing m further, the poles split
into a complex conjugate pair travelling along the circle whose radius is r = 1/RC and
whose center is at σ = −r. The figure on the right shows the four characteristic layouts, which are
explained in detail in the text.
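The pole migration described above can be reproduced numerically (a sketch of ours; it simply finds the roots of the denominator of Eq. 2.2.4 with RC normalized to 1):

```python
import numpy as np

# Poles of F(s) = 1/(s²·m·R²C² + s·RC + 1) for several values of m, with RC = 1
def poles(m, RC=1.0):
    return np.roots([m*RC*RC, RC, 1.0])

print(poles(0.25))   # double real pole at -2/RC
print(poles(1/3))    # complex conjugate pair -1.5 ± j0.866
print(poles(0.5))    # complex conjugate pair -1 ± j
```

For m < 0.25 the same call returns two distinct real poles, matching the locus shown in Fig. 2.2.2.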
With these poles we may write Eq. 2.2.4 also in the following form:

   F(s) = [1/(m R²C²)] · 1/[(s − s1)(s − s2)]     (2.2.6)

At DC (s = 0) Eq. 2.2.6 shrinks to:

   F(0) = [1/(m R²C²)] · 1/(s1 s2)     (2.2.7)

By dividing Eq. 2.2.6 by Eq. 2.2.7, we obtain the amplitude normalized transfer function:

   F(s) = s1 s2 / [(s − s1)(s − s2)]     (2.2.8)
We shall need this expression for the calculation of the step response. But for the
frequency response F(jω) we replace both poles by their components from Eq. 2.2.5 and
group the imaginary parts to obtain:

   F(jω) = (σ1² + ω1²) / { [−σ1 + j(ω − ω1)] · [−σ1 + j(ω + ω1)] }     (2.2.9)
The next step is the calculation of the parameter m. Its value depends on the type of
poles we want to have, which in turn depends on the intended application of the amplifier.
As a general rule, for sine wave signal amplification we prefer the Butterworth poles, whilst
for pulse amplification we prefer the Bessel poles. If high bandwidth is not of primary
importance, we can use a ‘critically damped’ system for a zero overshoot step response.
Other types of poles are optimized for use in filters, in which our primary goal is to
selectively amplify only a part of the spectrum. Poles are discussed in Part 4 (derived from
some chosen optimization criteria) and Part 6 (computer algorithms).
We shall calculate the actual values of the poles, as well as the parameter m, by
using Eq. 2.2.5, where we factor out 1/(2m RC). If the square root of Eq. 2.2.11 is
imaginary, which is true for m > 0.25, we can also factor out the imaginary unit:

   s1,2 = [1/(2m RC)] (−1 ± √(1 − 4m)) = [1/(2m RC)] (−1 ± j√(4m − 1))     (2.2.11)
We now compare this relation with the normalized 2nd-order Butterworth poles (the
reader can find them in Part 4, Table 4.3.1, or by running the BUTTAP computer routine
given in Part 6). The values obtained are σ1t = −0.7071 and ω1t = ±0.7071.
Note: From now on we shall append the index ‘t’ to the poles taken from the
tables or calculated by a suitable computer program; these values are
normalized to the frequency of 1 radian per second.
Since both the real and the imaginary axis of the Laplace plane have the
dimension of frequency, the pole dimension is radians per second [rad/s];
however, it has become almost a custom not to write the dimensions.
The sign is also seldom written; instead, most authors leave it to the
reader to keep in mind that the poles of unconditionally stable systems
always have a negative real part, whilst the imaginary part is either zero or
both positive and negative, forming a complex conjugate pair.
To make it easier for the reader, we shall always have the symbols σ
and ω signed as required by the mathematical operation to be performed,
whilst the numerical values within the symbols will always be negative for σ
and positive for ω. For example, we shall express a complex conjugate pole
pair (s1, s2) = (s1, s1*) as:

   s1 = σ1 + jω1 = −0.7071 + j 0.7071
   s2 = σ2 + jω2 = −0.7071 − j 0.7071
   ⇒ s2 = σ1 − jω1 = s1*
In order to have the same response, the poles of Eq. 2.2.11 must be proportional to
those from the tables, so the ratio of their imaginary to real part must be the same:

   ℑ{s1}/ℜ{s1} = ω1t/σ1t  ⇒  √(4m − 1)/(−1) = 0.7071/(−0.7071)     (2.2.12)
and the same is true for s2 (except for the sign). From the square root of Eq. 2.2.12 it follows
that the value of m which satisfies our requirement for the Butterworth poles must be:

   m = 0.5     (2.2.13)

Thus the inductance is:

   L = m R²C = 0.5 R²C     (2.2.14)

Finally, by inserting the value of m back into Eq. 2.2.11, the poles of our system are:

   s1,2 = σ1 ± jω1 = (1/RC)(−1 ± j)     (2.2.15)
The value 1/RC = ωh is equal to the upper half power frequency of the non-peaking
amplifier of Fig. 2.1.1 (at this frequency, since the power is proportional to the voltage squared, the
voltage gain drops to 1/√2 = 0.7071). If we put 1/RC = 1 (or R = 1 Ω and C = 1 F,
or R = 500 kΩ and C = 2 μF, or any other similar combination, provided that it can be
driven by the signal source), we obtain the normalized (denoted by the index ‘n’) poles:

   s1,2n = σ1n ± jω1n = −1 ± j     (2.2.16)

If we use normalized poles, we must also normalize the frequency: jω/ωh instead of jω.
Note: It is important not to confuse our system with normalized poles (Eq. 2.2.16)
with the system having normalized Butterworth poles taken from the table
( ="t , =#t œ !Þ(!( „ 4 !Þ(!( ). Although both are Butterworth-type and both are
normalized, they differ in bandwidth:
This will become evident soon in Sec. 2.2.4, where we shall calculate and plot the
magnitude (absolute value) of the frequency response.
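The pole positions for the characteristic values of m can be checked numerically. The following short sketch is ours (not one of the book's Part 6 routines); it assumes the pole form implied by Eqs. 2.2.13 through 2.2.24, s1,2 = (−1 ± j√(4m − 1))/(2mRC):

```python
import cmath

def series_peaking_poles(m, R=1.0, C=1.0):
    """Poles of the two-pole series peaking circuit for a given m
    (assumed form: s = (-1 +/- j*sqrt(4m - 1)) / (2m*R*C))."""
    root = cmath.sqrt(complex(4 * m - 1, 0))
    s1 = (-1 + 1j * root) / (2 * m * R * C)
    return s1, s1.conjugate()

mfa  = series_peaking_poles(0.5)     # Butterworth: -1 +/- j1
mfed = series_peaking_poles(1 / 3)   # Bessel: -1.5 +/- j0.866
cd   = series_peaking_poles(0.25)    # critical damping: double real pole at -2
```

For m = 0.25 the square root vanishes and both poles coincide on the real axis, exactly as in the CD case below.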
2.2.2 Bessel Poles for Maximally Flat Envelope Delay (MFED) Response
From Table 4.4.3 in Part 4 (or by using the BESTAP routine in Part 6), the poles
for the 2nd-order Bessel system are σ1t = −1.5000 and ω1t = ±0.8660. Then, as for the
Butterworth case above, the ratio of their imaginary to real component is:

   ℑ{s1}/ℜ{s1} = ω1t/σ1t ⇒ √(4m − 1)/1 = 0.8660/1.5000       (2.2.18)

Solving for m gives:

   m = 1/3                                                    (2.2.19)

So the inductance is:

   L = 0.33 R²C                                               (2.2.20)

and the poles are:

   s1,2 = (1/RC) (−1.5 ± j0.866)                              (2.2.21)
In this case both poles are real and equal, so the imaginary part in Eq. 2.2.11 (the
square root) must be zero:

   4m − 1 = 0 ⇒ m = 0.25                                      (2.2.22)

from which the inductance is:

   L = 0.25 R²C                                               (2.2.23)

resulting in a double real pole:

   s1,2 = −2/RC                                               (2.2.24)
In general the parameter m may be calculated with the aid of Fig. 2.2.2, where both
poles and the angle θ are shown. If the poles are expressed by Eq. 2.2.11, the parameter is:

   m = 1/[2 (1 + cos 2θ)]                                     (2.2.26)

which is also equal to 1/(4 cos²θ), as can be found in some of the literature. We prefer Eq. 2.2.26.
Now we have all the data needed for further calculations of the frequency, phase,
time delay, and step responses.
We have already written the magnitude in Eq. 2.2.10. Here we will use the
normalized frequency ω/ω_h:

   |F(ω)| = (σ1n² + ω1n²) / √{[σ1n² + (ω/ω_h − ω1n)²] · [σ1n² + (ω/ω_h + ω1n)²]}   (2.2.27)
An important amplifier parameter is its upper half power frequency, which we shall
name ω_H for the peaking amplifier (in contrast to ω_h in the non-peaking case). This is the
frequency at which the output voltage V_o drops to V_oDC/√2, where V_oDC is the output voltage at DC.
We shall use the term upper half power frequency intentionally, rather than the term
upper 3 dB frequency, which is commonly found in the literature. Whilst it has become a
custom to express the amplifier gain in dB, the dB scale (the log of the output-to-input
power ratio) implies that the driving circuit, which supplies the current I_i to the input, has
the same internal resistance as the loading resistor R. This is not the case in most of the
circuits which we shall discuss.
Fig. 2.2.3: Frequency response magnitude of the two-pole series peaking circuit for some
characteristic values of m: a) m = 0.50 is the maximally flat amplitude (MFA) response; b)
m = 0.33 is the maximally flat envelope delay (MFED) response; c) m = 0.25 is the critical
damping (CD) case; the non-peaking case (m = 0 ⇒ L = 0) is the reference. The bandwidth of all
peaking responses is improved compared to the non-peaking bandwidth ω_h, taken at V_o/(I_i R) = 0.7071.
For a series peaking circuit the calculation of =H is relatively easy. The calculation
becomes progressively more difficult for more sophisticated networks, where more poles
and sometimes even zeros are introduced. In such cases it is better to use a computer and in
Part 6 we have presented the development of routines which the reader can use to calculate
the various response functions.
If we solve Eq. 2.2.28 for ω_H/ω_h we can define [Ref. 2.2, 2.4]:

   η_b = ω_H/ω_h                                              (2.2.29)
The value η_b is the cut-off frequency improvement factor, defined as the ratio of the
system's upper half power frequency to that of the non-peaking amplifier. Since the
lower half power frequency of a wideband amplifier is generally very low, and the response
is usually flat down to DC, we may also call η_b the bandwidth improvement factor.
In Table 2.2.1 at the end of this section the bandwidth improvement factors and other data
for different values of the parameter m are given.
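In place of the closed-form solution, the half-power frequency can also be found numerically from the normalized poles; the sketch below is our own bisection (not the book's FREQW routine) and reproduces the η_b column of Table 2.2.1:

```python
def mag(w, poles):
    """|F(w)| for an all-pole normalized response: prod|s_k| / prod|jw - s_k|."""
    num = den = 1.0
    for p in poles:
        num *= abs(p)
        den *= abs(1j * w - p)
    return num / den

def half_power_freq(poles, lo=0.1, hi=10.0):
    """Bisect for |F| = 1/sqrt(2); the magnitude is assumed monotonic in between."""
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if mag(mid, poles) > 2 ** -0.5:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

eta_mfa  = half_power_freq([complex(-1, 1), complex(-1, -1)])              # ~1.41
eta_mfed = half_power_freq([complex(-1.5, 0.866), complex(-1.5, -0.866)])  # ~1.36
eta_cd   = half_power_freq([-2.0, -2.0])                                   # ~1.29
```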
We calculate the phase angle φ of the output voltage V_o referred to the input current
I_i by finding the phase shift φ_k(ω) of each pole s_k = σ_k ± jω_k and then summing them:

   φ(ω) = Σ_{k=1..n} φ_k(ω) = Σ_{k=1..n} arctan[(ω ∓ ω_k)/σ_k]   (2.2.30)

In Eq. 2.2.30 we have the ratio of the imaginary part to the real part of the pole, so
the pole values may be either exact or normalized. For normalized values we must also
normalize the frequency variable as ω/ω_h. Our frequency response function (Eq. 2.2.8) has
two complex conjugate poles, therefore the phase response is:

   φ(ω) = arctan[(ω/ω_h − ω1n)/σ1n] + arctan[(ω/ω_h + ω1n)/σ1n]   (2.2.31)
In Fig. 2.2.4 the phase plots corresponding to the same values of m as in Fig. 2.2.3
are shown.
Fig. 2.2.4: Phase response of the series peaking circuit for a) the MFA; b) the MFED; c) the CD case,
compared with the non-peaking response (L = 0). The phase angle scale was converted from
radians to degrees by multiplying it by 180/π. For ω → ∞ the non-peaking (single-pole) response
has its asymptote at −90°, whilst the second-order peaking systems have their asymptote at −180°.
For each pole the phase delay (or the phase advance for each zero) is:

   τ_φ = φ/ω                                                  (2.2.32)

If ω is the positive angular frequency with which the input signal phasor rotates,
then the angle φ by which the output signal phasor lags the input is defined in the direction
opposite to ω, meaning that, for a phase delay, φ will be negative, as in Fig. 2.2.4;
consequently τ_φ will also be negative. Note that τ_φ has the dimension of time.
Now, τ_φ is obviously frequency dependent, so in order to evaluate the time domain
performance of a wideband amplifier on a fair basis we are much more interested in the
‘specific’ phase delay, known as the envelope delay (or group delay), which is the
derivative of the phase angle with respect to frequency:

   τ_e = dφ/dω                                                (2.2.33)
Here, too, a negative result means a delay and a positive result an advance against
the input signal. In Fig. 2.2.5 a tentative explanation of the difference between the phase
delay and the envelope delay is displayed both in time domain and as a phasor diagram.
Fig. 2.2.5: Phase delay and envelope delay definitions. The switch S is closed at the instant t0,
applying a sinusoidal voltage with amplitude V_g to the input of the amplifier having a frequency
dependent amplitude response A(ω) and its associated phase response φ(ω). The input signal
envelope is a unit step. The output envelope lags the input by τ_e = dφ/dω, measured from t0 to t1,
where t1 is the instant at which the output envelope reaches 50% of its final value. A number of
periods later (N/ω), the phase delay can be measured as the time between the input and output zero
crossing, indicated by t2 and t3, and is expressed as τ_φ = φ/ω. Note the phase lag being defined in
the opposite direction of the rotation ωt in the corresponding phasor diagram.
In the phase advance case, when zeros dominate over poles, the name suggests that
the output voltage will change before the input, which is impossible, of course. To see what
actually happens we apply a sinewave to two simple RC networks, low pass and high pass,
as shown in Fig. 2.2.6. Compare the phase advance case, v_oHP, with the phase delay case,
v_oLP. The input signal frequency is equal to the network cutoff, 1/RC.
Fig. 2.2.6: Phase delay and phase advance. It is evident that both output signals undergo a phase
modulation during the first half period. The time from t0 to the first ‘zero crossing’ of the output is
shorter for v_oHP (t_1HP) and longer for v_oLP (t_1LP). However, both envelopes lag the input envelope. On
the other hand, the phase, measured after a number of periods, exhibits an advance of +τ_φ for the
high pass network and a delay of −τ_φ for the low pass network.
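This behaviour follows directly from the two first-order transfer functions; a quick sketch (with RC = 1 assumed and the drive at the cutoff frequency):

```python
import math

RC = 1.0
w = 1.0 / RC   # drive frequency equal to the network cutoff

# steady-state phase of each output relative to the input
phi_lp = -math.atan(w * RC)               # low pass:  -pi/4, a phase delay
phi_hp = math.pi / 2 - math.atan(w * RC)  # high pass: +pi/4, a phase advance

# the envelope (group) delay tau_e = d(phi)/d(omega) is the SAME for both,
# and negative: both envelopes lag, although only the LP phase does
tau_lp = -RC / (1 + (w * RC) ** 2)   # -0.5 RC at the cutoff
tau_hp = -RC / (1 + (w * RC) ** 2)
```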
Returning to the envelope delay of the series peaking circuit, in accordance with
Eq. 2.2.33 we must differentiate Eq. 2.2.30. For each pole we have:

   dφ/dω = d/dω {arctan[(ω ∓ ω_i)/σ_i]} = σ_i/[σ_i² + (ω ∓ ω_i)²]   (2.2.34)

and, as for the phase delay, the total envelope delay is the sum of the contributions of each
pole (and zero, if any). Again, if we use normalized poles and the normalized frequency,
we obtain the normalized envelope delay, τ_e ω_h, resulting in a unit delay at DC.
For the 2-pole case we have:

   τ_e ω_h = σ1n/[σ1n² + (ω/ω_h − ω1n)²] + σ1n/[σ1n² + (ω/ω_h + ω1n)²]   (2.2.35)
The plots for the same values of m as before, in accordance with Eq. 2.2.35, are
shown in Fig. 2.2.7.
For pulse amplification the importance of achieving a flat envelope delay cannot be
overstated. A flat delay means that all the important frequencies will reach the output with
unaltered phase, preserving the shape of the input signal as much as possible for the given
bandwidth, thus resulting in minimal overshoot of the step response (see the next section).
Also, since the envelope delay is a phase derivative, a flat delay means that the phase must
be a linear function of frequency up to the cutoff. This is why Bessel systems are often
referred to as ‘linear phase’ systems. This property cannot be seen in the log scale used
here, but it would become apparent if the response were plotted against a linearly scaled
frequency; we leave it to the curious reader to try it.
In contrast, the Butterworth system shows a pronounced delay near the cut-off
frequency. Conceivably, this will reveal the system resonance upon step excitation.
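Eq. 2.2.35 is easy to evaluate directly; the short sketch below confirms the unit delay at DC and the MFED flatness (pole values rounded as above):

```python
def envelope_delay(w, sigma, omega):
    """Normalized envelope delay tau_e*w_h of a conjugate pole pair, Eq. 2.2.35."""
    return (sigma / (sigma ** 2 + (w - omega) ** 2)
            + sigma / (sigma ** 2 + (w + omega) ** 2))

dc_mfa  = envelope_delay(0.0, -1.0, 1.0)     # -1: unit delay at DC
dc_mfed = envelope_delay(0.0, -1.5, 0.866)   # -1 as well
flat    = envelope_delay(0.5, -1.5, 0.866)   # MFED stays near -1 up to ~0.5 w_h
peak    = envelope_delay(1.0, -1.0, 1.0)     # MFA delay peaks near the cutoff
```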
Fig. 2.2.7: Envelope delay of the series peaking circuit for the same characteristic values of m
as before: a) MFA; b) MFED; c) CD. Note the MFED plot being flat up to nearly 0.5 ω_h.
We have already derived the formula for the step response in Part 1, Eq. 1.14.29:

   g(t) = 1 + (1/|sin θ|) e^{σ1 t} sin(ω1 t + θ)              (2.2.36)

where θ is the pole angle in radians, θ = arctan(ω1/σ1) + π (read the following Note!).
Note: We are often forced to calculate some of the circuit parameters from the
trigonometric relations between the real and imaginary components of the pole. The
Cartesian coordinates of the pole s1 in the Laplace plane are σ1 on the real axis and ω1
on the imaginary axis. In polar coordinates the pole is expressed by its modulus (the
distance of the pole from the origin of the complex plane):

   M = √[(σ1 + jω1)(σ1 − jω1)] = √(σ1² + ω1²)

and its argument (angle) θ, defined so that:

   tan θ = ω1/σ1

Now, a mathematically correct definition of the positive-valued angle is counter-
clockwise from the positive real axis; so if σ1 is negative, θ will be greater than π/2.
However, the tangent function is single-valued only within the range ±π/2 and then
repeats with a period of π. Therefore, by taking the arctangent, θ = arctan(ω1/σ1), we
lose the information about which half of the complex plane the pole actually lies in, and
consequently a sign can be wrong. This is bad, because the left (negative) side of the
real axis is associated with energy dissipative, that is, resistive circuit action, while the
right (positive) side is associated with energy generative action. This is why
unconditionally stable circuits always have their poles in the left half of the complex
plane.
To keep our analytical expressions simple we will keep track of the pole layout
and correct the sign and value of the arctan( ) by adding π radians to the angle θ
wherever necessary. But in order to avoid any confusion, our computer algorithm should
use a different form of the equation (see Part 6).
See Appendix 2.3 for more details.
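In a numerical routine the quadrant bookkeeping is exactly what the two-argument arctangent provides; a minimal illustration (Python's math.atan2 standing in for whatever form the Part 6 routines use):

```python
import math

sigma, omega = -1.0, 1.0                 # a stable pole at -1 + j1

theta_naive = math.atan(omega / sigma)   # -pi/4: the half-plane information is lost
theta_true  = math.atan2(omega, sigma)   # +3*pi/4: correct angle from the +Re axis
```

Adding π to the naive result recovers the true angle here, which is precisely the correction described above.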
To use the normalized values of poles in Eq. 2.2.36 we must also enter the
normalized time, t/T, where T is the system time constant, T = RC. Thus we obtain:
a) for Butterworth poles (MFA):

   ga(t) = 1 + √2 e^{−t/T} sin(t/T + 0.785 + π)               (2.2.37)

b) for Bessel poles (MFED):

   gb(t) = 1 + 2 e^{−1.5 t/T} sin(0.866 t/T + 0.524 + π)      (2.2.38)
c) for critical damping (CD) we have a double real pole at s1, so Eq. 2.2.36 is not
valid here, because it was derived for simple poles. To calculate the step response of the
function with a double pole, we start with Eq. 2.2.8, insert the same (real!) value (s1 = s2)
and multiply it by the unit step operator 1/s. The resulting equation:

   G(s) = s1² / [s (s − s1)²]                                 (2.2.39)

Eq. 2.2.39 has a double real pole s1 = σ1 = −2/RC or, normalized, σ1n = −2.
We insert this into Eq. 2.2.41 to obtain the CD step response plot (curve c, m = 0.25):
Fig. 2.2.8: Step response of the series peaking circuit for the four characteristic values of m:
a) MFA; b) MFED; c) CD. The case for m = 0 (L = 0) is the reference. The MFA overshoot
is δ = 4.3 %, whilst for MFED it is only δ = 0.43 %.
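The overshoot figures are easy to verify by sampling the step responses; a sketch using Eq. 2.2.37 and its Bessel counterpart (the time grid is our own choice):

```python
import math

def g_mfa(t):   # Eq. 2.2.37, Butterworth poles; t in units of T = RC
    return 1 + math.sqrt(2) * math.exp(-t) * math.sin(t + 0.7854 + math.pi)

def g_mfed(t):  # Bessel poles: theta = arctan(0.866/1.5) + pi
    return 1 + 2 * math.exp(-1.5 * t) * math.sin(0.866 * t + 0.5236 + math.pi)

ts = [i * 0.001 for i in range(10000)]       # 0 ... 10 T
os_mfa  = max(g_mfa(t) for t in ts) - 1      # ~0.043  (4.3 %)
os_mfed = max(g_mfed(t) for t in ts) - 1     # ~0.0043 (0.43 %)
```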
The values for the bandwidth improvement η_b and for the rise time improvement η_r
are similar, but in general they are not equal. In practice we more often use η_b, the
calculation of which is easier. If the step response overshoot is not too large (δ ≤ 2 %) we
can approximate the rise time starting from the formula for the cut-off frequency:

   ω_h = 2π f_h = 1/RC and furthermore f_h = 1/(2πRC)

where ω_h is the upper half power frequency in [rad/s], whilst f_h is the upper half power
frequency in Hz. We have already calculated the non-peaking risetime τ_r by Eq. 2.2.4 and
found it to be 2.20 RC. From this we obtain τ_r f_h = 2.20/2π ≈ 0.35, and this relation we
meet very frequently in practice:
   τ_r ≈ 0.35/f_h                                             (2.2.44)
By replacing f_h with f_H in this equation, we obtain (an estimate of) the rise time of
the peaking amplifier. But note that by doing so, we neglect that Eq. 2.2.44 is exact only for
the single-pole amplifier, where the load is the parallel RC network. For all other cases, it
can be used as an approximation only if the overshoot δ < 2 %. The overshoot of a
Butterworth two-pole network amounts to 4.3 % and it becomes larger with each additional
pole (or pole pair), thus calculating the rise time by Eq. 2.2.43 will result in an excessive error.
An even greater error will result for networks with Chebyshev and Cauer (elliptic) system
poles. In such cases we must compute the actual system step response and find the risetime
from it. For Bessel poles, the error is tolerable, since the ω_h-normalized Bessel frequency
response closely follows the first-order response up to ω_h. Even so, using a computer to
obtain the rise time from the step response yields more accurate results.
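The 0.35 figure follows from the single-pole step response alone; a short check (RC = 1 assumed):

```python
import math

RC = 1.0
t10 = -RC * math.log(0.9)    # time to reach 10 % of the final value
t90 = -RC * math.log(0.1)    # time to reach 90 %
tr  = t90 - t10              # = RC*ln(9) ~ 2.20*RC, the non-peaking risetime
fh  = 1 / (2 * math.pi * RC)
product = tr * fh            # ~0.35, the constant of Eq. 2.2.44
```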
We shall use the series peaking network also as an addition to T-coil peaking. This
is possible since the T-coil network has a constant input impedance (the T-coil is discussed
in Sec. 2.4, 2.5 and 2.6). Therefore it is useful to know the input impedance of the series
peaking network. From Fig. 2.2.1 it is evident that the input impedance is a capacitor C in
parallel with the serially connected L and R:

   Z_i = 1/[jωC + 1/(jωL + R)] = (jωL + R)/(1 − ω²LC + jωRC)   (2.2.45)

Substituting L = mR²C and RC = 1/ω_h gives:

   Z_i = R · [1 + m (jω/ω_h)] / [1 − m (ω/ω_h)² + j (ω/ω_h)]   (2.2.46)
By making the denominator real and carrying out some further rearrangement we obtain:

   Z_i = R · {1 + j (ω/ω_h) [(m − 1) − m² (ω/ω_h)²]} / [1 + (1 − 2m)(ω/ω_h)² + m² (ω/ω_h)⁴]   (2.2.47)

   φ = arctan (ℑ{Z_i}/ℜ{Z_i}) = arctan {(ω/ω_h) [(m − 1) − m² (ω/ω_h)²]}   (2.2.48)
In Fig. 2.2.9 the plots of Eq. 2.2.49 and Eq. 2.2.48 for the same values of m as
before are shown.
Fig. 2.2.9: Input impedance modulus (normalized to R) and the associated phase angle of the
series peaking circuit for the characteristic values of m: a) MFA; b) MFED; c) CD. Note that
at high frequencies the input impedance approaches that of the capacitance.
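Eq. 2.2.45 can be evaluated directly to confirm the limiting behaviour seen in Fig. 2.2.9; a sketch with R = C = 1 assumed (so ω_h = 1):

```python
def z_in(w, m, R=1.0, C=1.0):
    """Input impedance of the series peaking circuit, Eq. 2.2.45."""
    L = m * R * R * C
    z_rl = R + 1j * w * L                    # R in series with L
    return 1.0 / (1j * w * C + 1.0 / z_rl)   # paralleled by C

z_dc = z_in(1e-6, 0.5)    # -> R at very low frequency
z_hf = z_in(100.0, 0.5)   # -> |Z| ~ 1/(wC): the capacitor dominates
```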
Table 2.2.1 shows the design parameters of the two-pole series peaking circuit:

   Table 2.2.1
   response   m      η_b    η_r    δ [%]
   MFA        0.50   1.41   1.49   4.30
   MFED       0.33   1.36   1.39   0.43
   CD         0.25   1.29   1.33   0.00

Table 2.2.1: 2nd-order series peaking circuit parameters summarized: m is the
inductance proportionality factor; η_b is the bandwidth improvement; η_r is the
risetime improvement; and δ is the step response overshoot.
We shall calculate the network transfer function from the input admittance:

   Y_i = jωC_i + 1/R + 1/[jωL + 1/(jωC)]                      (2.3.1)

The input impedance is then:

   Z_i = 1/Y_i = R (1 − ω²LC) / [(1 + jωC_i R)(1 − ω²LC) + jωCR]   (2.3.2)

The input voltage is:

   V_i = I_i Z_i                                              (2.3.3)
Since I_i R is the voltage at zero frequency, we can obtain the amplitude-normalized transfer
function by dividing Eq. 2.3.6 by I_i R:

   F(ω) = 1 / [1 + jωR(C + C_i) − ω²LC − jω³ C_i C R L]       (2.3.7)

where ω_h is the upper half power frequency of the non-peaking case (L = 0). With these
substitutions we obtain the function which is normalized both in amplitude and in
frequency (to the non-peaking system cut-off):

   F(ω) = 1 / [1 + j (ω/ω_h) − mn (ω/ω_h)² − j mn(1 − n)(ω/ω_h)³]   (2.3.9)
Since the denominator is a 3rd-order polynomial we have three poles, one of which
must be real and the remaining two should be complex conjugate (readers less
experienced in mathematics can find the general solutions for polynomials of 1st-, 2nd-, 3rd-
and 4th-order in Appendix 2.1). Here we shall show how to calculate the required
parameters in an easier way. The magnitude is:

   |F(ω)| = 1 / √[(ℜ{D(ω)})² + (ℑ{D(ω)})²]                    (2.3.10)

where D(ω) is the denominator of Eq. 2.3.9. By rearranging the real and imaginary parts
in Eq. 2.3.9 and inserting them into Eq. 2.3.10, we obtain:

   |F(ω)| = 1 / √{[1 − mn (ω/ω_h)²]² + [(ω/ω_h) − mn(1 − n)(ω/ω_h)³]²}   (2.3.11)
By comparing Eq. 2.3.13 with Eq. 2.3.12 we realize that the factors at (ω/ω_h)² and
at (ω/ω_h)⁴ in Eq. 2.3.12 must be zero if we want the function to correspond to Butterworth
poles:

   1 − 2mn = 0 and mn − 2(1 − n) = 0
   ⇒ m = 2/3 and n = 3/4                                      (2.3.14)

With these data we can calculate the actual values of the Butterworth poles and the
upper half power frequency. By inserting m and n into Eq. 2.3.12 and, considering that
now the coefficients at (ω/ω_h)² and at (ω/ω_h)⁴ are zero, we obtain the frequency response;
its plot is shown in Fig. 2.3.2 as curve a.
To calculate the poles we insert the values for m and n into Eq. 2.3.9 and, by
writing s instead of jω/ω_h, the denominator of Eq. 2.3.9 takes the form:

   D = 0.125 s³ + 0.5 s² + s + 1                              (2.3.15)

To obtain the canonical form we divide this equation by 0.125. Then, to find the roots, we
equate it to zero:

   s³ + 4s² + 8s + 8 = 0                                      (2.3.16)

The roots of this function are the normalized poles of the function F(s):

   s1n,2n = σ1n ± jω1n = −1.0000 ± j1.7321
   s3n = σ3n = −2.0000                                        (2.3.17)
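That s = −2 and s = −1 ± j√3 are indeed the roots of Eq. 2.3.16 is quickly verified by substitution:

```python
def poly(s):
    """Left side of Eq. 2.3.16."""
    return s ** 3 + 4 * s ** 2 + 8 * s + 8

butterworth_poles = (-2.0, complex(-1.0, 3 ** 0.5), complex(-1.0, -(3 ** 0.5)))
residuals = [abs(poly(s)) for s in butterworth_poles]   # all ~0
```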
The ‘classical’ way of calculating the parameters m and n for Bessel poles is first to
derive the formula for the envelope delay, τ_e = dφ/dω. This is a rational function of ω. By
equating the two coefficients in the numerator with the corresponding two in the
denominator polynomial we obtain two equations from which both parameters may be
calculated. However, this is a lengthy and error-prone procedure. A more direct and easier
way is as follows: in the literature [e.g., Ref. 2.10, 2.11], or with an appropriate computer
program (as in Part 6, BESTAP), we look up the Bessel 3rd-order polynomial:

   s³ + 6s² + 15s + 15                                        (2.3.18)

The canonical form of the denominator of Eq. 2.3.9, with s instead of jω/ω_h, is:

   D = s³ + s²/(1 − n) + s/[mn(1 − n)] + 1/[mn(1 − n)]        (2.3.19)
The functions in Eq. 2.3.18 and Eq. 2.3.19 must be the same. This is only possible if the
corresponding coefficients are equal. Thus we may write the following two equations:

   1/(1 − n) = 6 and 1/[mn(1 − n)] = 15                       (2.3.20)

from which n = 5/6 ≈ 0.833 and m = 0.48. The roots of Eq. 2.3.18 (or Eq. 2.3.19, with
the above values for m and n) are the Bessel poles of the function F(s):

   s1n,2n = σ1n ± jω1n = −1.8389 ± j1.7544
   s3n = σ3n = −2.3222                                        (2.3.22)
Note that the same values are obtained from the pole tables (or by running the
BESTAP routine in Part 6); in general, for Bessel poles, s_kn = s_kt.
With these poles the frequency response, calculated according to Eq. 2.3.11, results in
curve b in Fig. 2.3.2. The Bessel poles are derived from the condition that the transfer
function has a unit envelope delay at the origin, so there is no simple way of relating them to
the upper half power frequency ω_H. We need to calculate |F(ω)| numerically for a range,
say, 1 ≤ ω/ω_h ≤ 3, using either Eq. 2.3.11 or the FREQW algorithm in Part 6, and find ω_H
from it. The bandwidth improvement factor for Bessel poles is given in Table 2.3.1 at the
end of this section.
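The numerical search can be sketched as follows (our own bisection, standing in for the FREQW routine; poles as in Eq. 2.3.22):

```python
BESSEL_POLES = (complex(-1.8389, 1.7544), complex(-1.8389, -1.7544), -2.3222)

def mag(w):
    """|F(w)| built directly from the pole set."""
    num = den = 1.0
    for p in BESSEL_POLES:
        num *= abs(p)
        den *= abs(1j * w - p)
    return num / den

lo, hi = 0.5, 3.0
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if mag(mid) > 2 ** -0.5:
        lo = mid
    else:
        hi = mid
eta_b = 0.5 * (lo + hi)   # ~1.76, the MFED entry of Table 2.3.1
```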
Fig. 2.3.2: Frequency response of the third-order series peaking circuit for different values of
m. The correct setting for the required pole pattern is achieved by the input to output
capacitance ratio, C/C_i. A fair circuit performance comparison is obtained by normalizing to the
total capacitance C + C_i. Here we have: a) MFA; b) MFED; c) SPEC, and the non-peaking
(L = 0) case as a reference. Although of highest bandwidth, the SPEC case is non-
optimal, owing to the slight but notable dip in the range 0.5 < ω/ω_h < 1.2.
The corresponding frequency response is the curve c in Fig. 2.3.2. This gives a
bandwidth improvement η_b = 2.28, which would sound very fine were there not a small dip
in the range 0.5 < ω/ω_h < 1.2. So we regrettably realize that the ratio C/C_i cannot be
chosen at random. The aberrations are even greater for the envelope delay and the step
response, as we shall see later.
For the calculation of the phase response we can use Eq. 2.2.31, but we must also add
the influence of the real pole σ3n:

   φ = arctan[(ω/ω_h − ω1n)/σ1n] + arctan[(ω/ω_h + ω1n)/σ1n] + arctan[(ω/ω_h)/σ3n]   (2.3.25)
In Fig. 2.3.3 we have plotted the phase response for different values of the parameters
m and n. Instead of the parameter n, the ratio C/C_i is given.
Fig. 2.3.3: Phase response of the third-order series peaking circuit for different values of m:
a) MFA; b) MFED; c) SPEC; the non-peaking (L = 0) case is the reference.
2.3.5 Envelope Delay
We apply Eq. 2.2.35, to which we add the influence of the real pole σ3n:

   τ_e ω_h = σ1n/[σ1n² + (ω/ω_h − ω1n)²] + σ1n/[σ1n² + (ω/ω_h + ω1n)²] + σ3n/[σ3n² + (ω/ω_h)²]   (2.3.26)

In Fig. 2.3.4 the corresponding plots for different values of the parameters m and n are
shown; instead of n, the ratio C/C_i is given.
Fig. 2.3.4: Envelope delay of the third-order series peaking circuit for some characteristic
values of m: a) MFA; b) MFED; c) SPEC; the non-peaking (L = 0) case is the reference. Note
the MFED flatness extending beyond ω_h.
The calculation is done in a way similar to the case of the two-pole series peaking
circuit. Our starting point is Eq. 2.3.9, where we consider that we have two complex
conjugate poles s1 and s2, and a real pole s3. The resulting equation must be transformed
into a form similar to Eq. 2.2.8. We need a normalized form of the equation, so we must
multiply the numerator by −s1 s2 s3 (see Appendix 2.2). So we obtain a general form:

   F(s) = −s1 s2 s3 / [(s − s1)(s − s2)(s − s3)]              (2.3.27)

Since we apply a unit step to the network input, the above expression must be multiplied
by 1/s to obtain a new, fourth-order function:

   G(s) = −s1 s2 s3 / [s (s − s1)(s − s2)(s − s3)]            (2.3.28)
Since the calculation of a three-pole network step response is lengthy, we give here only
the final result; the curious reader can find the full derivation in Appendix 2.3:

   g(t) = 1 − (σ3/ω1) [√(A² + ω1² B²)/C] e^{σ1 t} sin(ω1 t + ψ1) − [(σ1² + ω1²)/C] e^{σ3 t}   (2.3.30)

where:

   A = σ1(σ1 − σ3) − ω1²   B = σ3 − 2σ1
   C = (σ1 − σ3)² + ω1²    ψ1 = arctan(ω1 B/A) + π            (2.3.31)

Note that we have written ψ1 for the initial phase angle of the resonance function,
instead of the usual θ, in order to emphasize the difference between the response phase and
the angle of the complex conjugate pole pair (in two-pole circuits they have the same
value). We enter the normalized poles from Eq. 2.3.17, 2.3.22, and 2.3.24, and the
normalized time t/[R(C_i + C)] = t/T, obtaining the step responses (plotted in Fig. 2.3.5):

a) for Butterworth poles, where m = 0.667 and n = 0.750 (ψ1 = π rad):

   ga(t) = 1 + 1.155 e^{−t/T} sin(1.732 t/T + π) − e^{−2 t/T}   (2.3.32)

b) for Bessel poles, where m = 0.480 and n = 0.833 (ψ1 = 2.601 rad):

   gb(t) = 1 + 1.849 e^{−1.839 t/T} sin(1.754 t/T + 2.601) − 1.951 e^{−2.322 t/T}   (2.3.33)

c) for our special case (SPEC), where m = n = 0.667 (ψ1 = π rad):

   gc(t) = 1 + 0.756 e^{−0.75 t/T} sin(1.985 t/T + π) − e^{−1.5 t/T}   (2.3.34)
Fig. 2.3.5: Step response of the third-order series peaking circuit for some characteristic
values of m: a) MFA; b) MFED; c) SPEC; the non-peaking (L = 0) case is the reference. The
overshoot of both the MFA and the SPEC case is too large to be suitable for pulse amplification.
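The three-pole step response formula can be coded directly; the sketch below (signs as we have reconstructed them, with A, B, C and ψ1 from the real pole σ3 and the conjugate pair σ1 ± jω1) reproduces the zero initial value and the MFA and SPEC overshoots of Table 2.3.1:

```python
import math

def step3(t, s1, s3):
    """Three-pole normalized step response; s1 = complex pole, s3 = real pole."""
    sg, w1 = s1.real, abs(s1.imag)
    A  = sg * (sg - s3) - w1 ** 2
    B  = s3 - 2 * sg
    Cq = (sg - s3) ** 2 + w1 ** 2
    psi = math.atan(w1 * B / A) + math.pi
    k1 = -(s3 / w1) * math.sqrt(A ** 2 + w1 ** 2 * B ** 2) / Cq
    k3 = (sg ** 2 + w1 ** 2) / Cq
    return 1 + k1 * math.exp(sg * t) * math.sin(w1 * t + psi) - k3 * math.exp(s3 * t)

ts = [i * 0.002 for i in range(5000)]   # 0 ... 10 T
os_mfa  = max(step3(t, complex(-1.0, 3 ** 0.5), -2.0) for t in ts) - 1   # ~8.1 %
os_spec = max(step3(t, complex(-0.75, 1.9848), -1.5) for t in ts) - 1    # ~10.2 %
```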
The pole patterns for the three response types discussed are shown in Fig. 2.3.6.
Note the three different second-order curves fitting each pole pattern: a (large) horizontal
ellipse for MFED, a circle for MFA, and a vertical ellipse for the SPEC case.
[Fig. 2.3.6 plots the following pole values in the s-plane, normalized to T = R(C + C_i):]
MFA:  s1B = −1.0000 + j1.7321,  s2B = −1.0000 − j1.7321,  s3B = −2.0000
MFED: s1T = −1.8389 + j1.7544,  s2T = −1.8389 − j1.7544,  s3T = −2.3222
SPEC: s1S = −0.7500 + j1.9848,  s2S = −0.7500 − j1.9848,  s3S = −1.5000
Fig. 2.3.6: Pole patterns of the 3-pole series peaking circuit for the MFA, the MFED, and the
SPEC case. The curves on which the poles lie are: a circle with the center at the origin for MFA;
an ellipse with both foci on the real axis (the nearer at the origin) for the MFED; and an ellipse
with both foci on the imaginary axis for the SPEC case (which is effectively a Chebyshev-type
pole pattern). Also shown are the characteristic circles of each complex conjugate pole pair.
Table 2.3.1 summarizes the parameters for the three versions of the 3-pole series
peaking circuit. Note the high overshoot values for the MFA and the SPEC case, both
unacceptable for a pulse amplifier.

   Table 2.3.1
   response   m      n      η_b    η_r    δ [%]
   a) MFA     0.667  0.750  2.00   2.27    8.1
   b) MFED    0.480  0.833  1.76   1.79    0.7
   c) SPEC    0.667  0.667  2.28   2.33   10.2
The circuit schematic of a two-pole T-coil peaking network is shown in Fig. 2.4.1a.
The main characteristic of this circuit is the center-tapped coil L, which is bridged by the
capacitance C_b, consisting (ideally) of the coil's self-capacitance [Ref. 2.3, 2.17–2.21].
Since the coils in the equivalent network, Fig. 2.4.1b, form a letter T, we call it a T-coil
network. The coupling factor k between both halves of the coil L and the bridging
capacitance C_b must be in a certain relation, dependent on the network pole layout. In
addition, the relation R = √(L/C) must hold in order to obtain a constant input impedance
Z_i = R at any frequency [Ref. 2.18, 2.21]. This is true if the elements of the network do
not have any losses. Owing to losses in a practical circuit, the input impedance may be
considered constant only up to a certain frequency, which, with careful design, can
be high enough for the application of the T-coil circuit in a wideband amplifier.
Fig. 2.4.1: a) The basic T-coil circuit: the output voltage is taken from the center tap node of the
inductance L and its two parts are magnetically coupled by the factor 0 < k < 1; b) an equivalent
circuit with no magnetic coupling between the coils; the coupling has been replaced by the mutual
inductance L_M; c) a simplified generalized impedance circuit, excited by the current generator I_i,
showing the current loops.
Fig. 2.4.2: Modeling the coupling factor: a) the T-coil coupling factor k between the two halves
L1 and L2 of the total inductance L can be represented by b) an equivalent circuit, having two
separate (non-coupled) inductances, in which the magnetic coupling is modeled by the mutual
inductance L_M (entering the tap branch with a negative value), so that L1 = L_a − L_M and L2 = L_b − L_M.
1 Networks with tapped coils were already being used for amplifier peaking in 1954 by F.A. Muller [2.16],
but since the bridging capacitance C_b is not shown in that article, the networks described do not have a
constant input impedance as do the T-coil networks discussed in this and the following three sections.
If the output is taken from the loading resistor R, the network in Fig. 2.4.1a
behaves as an all-pass network. However, for peaking purposes we take the output
voltage from the capacitor C, and in this application the circuit is a low pass filter.
The equivalent network in Fig. 2.4.1b needs to be explained. We will do this with
the aid of Fig. 2.4.2. The original network has a center-tapped coil whose inductance L can
be calculated by the general equation for two coupled coils [Ref. 2.18, 2.28]:

   L = L1 + L2 + 2 L_M                                        (2.4.1)

where L1 and L2 are the inductances of the respective coil parts (which, in general, need
not be equal) and L_M is their mutual inductance. L_M is taken twice, since the magnetic
induction from L1 to L2 is equal to the induction from L2 to L1 and both contribute to the
total. If k is the factor of magnetic coupling between L1 and L2, the mutual inductance is:

   L_M = k √(L1 L2)                                           (2.4.2)

The inductances in the equivalent circuit of Fig. 2.4.1b are then:

   L_a = L1 + L_M   L_b = L2 + L_M                            (2.4.3)

   L1 = L_a − L_M   L2 = L_b − L_M                            (2.4.4)
Note the negative sign of L_M in the tap branch, which is a consequence of the magnetic
coupling; owing to this, the driving impedance at the center tap as seen by C is lower than
without the coupling. In the symmetrical case, when L1 = L2, we can calculate the value of
L1 and L2 from the required coupling k and the total inductance L:

   L1 = L2 = L / [2 (1 + k)]                                  (2.4.5)
Thus we have proved that the circuits in Fig. 2.4.1a and 2.4.1b are equivalent, even though
no coupling exists between the coils in the circuit of Fig. 2.4.1b.
The corresponding generalized impedance model of the T-coil circuit is shown in
Fig. 2.4.1c, where the input voltage V_i is equal to the product of the input current and the
circuit impedance, I_i Z_i. The input current splits into I1 and I2. The current I3 flows in the
remaining loop. The impedances in the branches are:

   A = 1/(sC_b)
   B = s L_a
   C = s L_b                                                  (2.4.6)
   D = −s L_M + 1/(sC)
   E = R
We form a system of equations in accordance with the current loops in Fig. 2.4.1c:

Vi = I1 (B + D) − I2 D − I3 B
 0 = −I1 D + I2 (C + D + E) − I3 C    (2.4.7)
 0 = −I1 B − I2 C + I3 (A + B + C)

The system determinant is:

F = (B + D) [(C + D + E)(A + B + C) − C²] − D [D (A + B + C) + B C] − B [D C + B (C + D + E)]    (2.4.9)

After multiplication some terms cancel and the determinant simplifies to:

F = ABC + ABD + ABE + BCE + ACD + ADE + BDE + CDE
For further calculation we shall need both cofactors, F11 and F12. The cofactor for I1 is:

        | Vi      −D        −B      |
F11 =   | 0    C + D + E    −C      |
        | 0      −C      A + B + C  |

    = Vi (CA + CB + DA + DB + DC + EA + EB + EC)    (2.4.11)

and the cofactor for I2 is:

F12 = Vi (DA + DB + DC + BC)    (2.4.12)
Let us first find the input admittance, which we would like to be equal to 1/R = 1/E:

Y = I1/Vi = F11/(Vi F)

  = (CA + CB + DA + DB + DC + EA + EB + EC) / (BCA + BDA + BEA + BEC + DCA + DEA + DEB + DEC) = 1/E    (2.4.13)

After eliminating the fractions and canceling some terms, we obtain a simpler expression.
Now we put in the values from Eq. 2.4.6, considering that La = Lb, perform all the multiplications, and arrange the terms in decreasing powers of s. We obtain:

s [ La²/Cb − (L LM)/Cb − R² L ] + (1/s) [ L/(C Cb) − R²/Cb ] = 0    (2.4.15)
This expression tells us that the input admittance can indeed be made equal to 1/R, as we wanted in Eq. 2.4.13. For a constant input admittance circuit, Eq. 2.4.16 must be valid for any s [Ref. 2.21]. This is possible only if both K1 and K2 are zero (Ross' method):
K1 = La²/Cb − (L LM)/Cb − R² L = 0    (2.4.17)

K2 = L/(C Cb) − R²/Cb = 0    (2.4.18)

Solving K2 = 0 gives L, and then K1 = 0 gives LM:

L = R² C
                                                    (2.4.19)
LM = L/4 − R² Cb = R² (C/4 − Cb)
For the symmetrical case, with the tap at the center of the coil, La = Lb = L/2. Since only two parameters, C and R, are known initially, we must obtain another, independent equation in order to calculate the parameters LM and Cb. For this we can use the transimpedance equation, Vo/I1 (Ii = I1, see Fig. 2.4.2b). From Fig. 2.4.1c it is evident that the current difference I1 − I2 flows through branch D. This difference current, multiplied by the impedance 1/(sC), is equal to the output voltage Vo. The transimpedance is then:

Vo/I1 = (1/(sC)) · (I1 − I2)/I1    (2.4.20)

where, by Cramer's rule:

I1 = F11/F    and    I2 = F12/F    (2.4.21)
Again we make use of the common expressions in Eq. 2.4.6. The difference of the two cofactors is:

F11 − F12 = Vi (CA + EA + EB + EC)
Vo/I1 = (1/(sC)) · (CA + EA + EB + EC) / (CA + CB + DA + DB + DC + EA + EB + EC)    (2.4.24)
The voltage Vi is a factor of both the numerator and the denominator, so it cancels out. Now we replace the common expressions with those from Eq. 2.4.6, express LM by Eq. 2.4.19, perform the indicated multiplication, make the long division of the polynomials, and the result is a relatively simple expression:

F(s) = Vo/I1 = R / (s² R² C Cb + s R C/2 + 1)    (2.4.25)
Although the author of this idea, Bob Ross, calculated it 'by hand' [Ref. 2.21], we will not follow his example, because this calculation is formidable work. With modern computer programs (such as Mathematica [Ref. 2.34] or similar [Ref. 2.35, 2.38, 2.39, 2.40]), the calculation takes less time than is needed to type in the data.
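The whole derivation can also be cross-checked numerically in a few lines. The sketch below (ours, not the book's program) solves the mesh equations Eq. 2.4.7 by Cramer's rule for a normalized Bessel T-coil and compares the result with Eq. 2.4.25 and with the constant input impedance:

```python
# Sketch (ours): solve the mesh equations Eq. 2.4.7 numerically for a
# normalized Bessel T-coil (R = C = 1, Cb = C/12) and compare the
# transimpedance with the closed form of Eq. 2.4.25.

def det3(m):
    """Determinant of a 3x3 complex matrix given as nested lists."""
    (a, b, c), (d, e, f), (g, h, i) = m
    return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

R, C, Cb = 1.0, 1.0, 1.0/12.0
L  = R*R*C                       # Eq. 2.4.19
LM = R*R*(C/4.0 - Cb)            # Eq. 2.4.19
La = Lb = L/2.0                  # symmetrical tap

for s in (1j*0.5, 1j*2.0, 0.3 + 1j*1.7):
    ZA = 1.0/(s*Cb); ZB = s*La; ZC = s*Lb
    ZD = -s*LM + 1.0/(s*C); ZE = R
    M = [[ZB+ZD, -ZD,       -ZB      ],
         [-ZD,   ZC+ZD+ZE,  -ZC      ],
         [-ZB,   -ZC,       ZA+ZB+ZC ]]
    Vi = 1.0
    F  = det3(M)
    I1 = det3([[Vi, -ZD, -ZB], [0, ZC+ZD+ZE, -ZC], [0, -ZC, ZA+ZB+ZC]]) / F
    I2 = det3([[ZB+ZD, Vi, -ZB], [-ZD, 0, -ZC], [-ZB, 0, ZA+ZB+ZC]]) / F
    Zt = (I1 - I2)/(s*C) / I1                          # Eq. 2.4.20
    Zt_closed = R/(s*s*R*R*C*Cb + s*R*C/2.0 + 1.0)     # Eq. 2.4.25
    assert abs(Zt - Zt_closed) < 1e-9
    assert abs(Vi/I1 - R) < 1e-9    # constant input impedance Zi = R
```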
For those designers who want to construct a distributed amplifier using electron tubes or FETs (but not transistors, as we will see in Part 3!), where the resistor R is replaced by another T-coil circuit and so forth, it is important to know the transimpedance from the input current to the voltage VR. The result is:

VR/I1 = R (s² R² C Cb − s R C/2 + 1) / (s² R² C Cb + s R C/2 + 1)    (2.4.26)
Besides the two poles in the left half of the s-plane, s1p and s2p, this equation also has two symmetrically placed zeros in the right half of the s-plane, s1z and s2z, as shown in Fig. 2.4.3. Since Eq. 2.4.26 has equal powers of s in both the numerator and the denominator, it is an all pass response. We shall return to this when we calculate the step response.
The poles are the roots of the denominator of Eq. 2.4.25. The canonical form is:

s² + s/(2 R Cb) + 1/(R² C Cb) = 0    (2.4.27)

and the roots are:

s1,2 = −(1/(4 R Cb)) [ 1 ± √(1 − 16 Cb/C) ]    (2.4.29)
[s-plane plot: the poles s1p, s2p and the mirrored zeros s1z, s2z lie on circles through ±2/RC on the real axis; the pole and zero positions travel along these circles as k increases.]
Fig. 2.4.3: The poles (s1p and s2p) and zeros (s1z and s2z) of the all pass transimpedance function corresponding to Eq. 2.4.26 and Fig. 2.4.1a. By changing the bridging capacitance Cb and the mutual inductance LM (through the coupling factor k) according to Eq. 2.4.19, both poles and both zeros travel along the circles shown.
An efficient inductive peaking circuit must have complex poles. By taking the imaginary unit out of the square root, the terms within it exchange signs. Then the pole angle θ can be calculated from the ratio of the imaginary to the real component, as we have done before. From Fig. 2.2.4:

tan θ = Im{s1} / Re{s1} = √(16 Cb/C − 1) / 1    (2.4.30)
This gives a general result:

Cb = C (1 + tan² θ) / 16    (2.4.31)
The Bessel pole placement is shown in Fig. 2.4.4. The characteristic angle θ is measured from the positive real axis.
[s-plane sketch: the conjugate pole pair s1 = σ1 + jω1 and s2 = σ1 − jω1 at angles ±θ from the positive real axis.]
Fig. 2.4.4: The layout of the complex conjugate poles s1 and s2 of a second-order Bessel transfer function. In this case the angle is θ = 150°.
By using the pole angle, which we have calculated previously, and Eq. 2.4.31, the corresponding bridging capacitance can be found.

For Bessel poles:

θ = 150°    tan² θ = 1/3    Cb = C/12    (2.4.32)

For Butterworth poles:

θ = 135°    tan² θ = 1    Cb = C/8    (2.4.33)
The general expression for the coupling factor is [Ref. 2.21, 2.28, 2.33]:

k = LM / √(L1 L2) = LM / √((La − LM)(Lb − LM))    (2.4.36)

If Cb is expressed by Eq. 2.4.31, we may derive a very interesting expression for the coupling factor k:

k = (3 − tan² θ) / (5 + tan² θ)    (2.4.38)
Since θ = 150° for the Bessel pole pair and 135° for the Butterworth pole pair, the corresponding coupling factors are:

for Bessel poles:

k = 0.5    (2.4.39)

for Butterworth poles:

k = 1/3 ≈ 0.33    (2.4.40)
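Both relations are easy to verify numerically; a short sketch (ours):

```python
import math

# Sketch: check Eq. 2.4.31 (bridging capacitance) and Eq. 2.4.38 (coupling
# factor) at the Bessel and Butterworth pole angles.

def bridge_cap_ratio(theta_deg):
    t2 = math.tan(math.radians(theta_deg))**2
    return (1.0 + t2)/16.0            # Cb/C, Eq. 2.4.31

def coupling(theta_deg):
    t2 = math.tan(math.radians(theta_deg))**2
    return (3.0 - t2)/(5.0 + t2)      # k, Eq. 2.4.38

assert abs(bridge_cap_ratio(150.0) - 1.0/12.0) < 1e-9   # Bessel: Cb = C/12
assert abs(coupling(150.0) - 0.5) < 1e-9                # Bessel: k = 0.5
assert abs(bridge_cap_ratio(135.0) - 1.0/8.0) < 1e-9    # Butterworth: Cb = C/8
assert abs(coupling(135.0) - 1.0/3.0) < 1e-9            # Butterworth: k = 1/3
```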
Let us calculate the parameters k, LM and Cb for two additional cases. If we want to avoid any overshoot, both poles must be real and equal. In this case θ = 180° and the damping of the circuit is critical (CD). The expression under the root of Eq. 2.4.29 must be zero and we obtain:

Cb = C/16    LM = 3 R² C/16    k = 0.6    (2.4.41)
We are also interested in the circuit values for the limiting case in which the coupling factor k, and consequently the mutual inductance LM, are zero. Here we calculate Cb from Eq. 2.4.31:

Cb = C/4    (for k = 0, LM = 0)    (2.4.42)
The next task is to calculate the poles for all four cases. We will show only the calculation for Bessel poles; the other calculations are analogous.
For the starting expression we use the denominator of Eq. 2.4.25 in the canonical form, which we equate to zero:

s² + s/(2 R Cb) + 1/(R² C Cb) = 0    (2.4.43)

Now we insert Cb = C/12, which corresponds to Bessel poles; the result is:

s² + s · 6/(R C) + 12/(R C)² = 0    (2.4.44)

By factoring out 1/RC, the roots (poles of Eq. 2.4.25) are:

s1,2 = σ1 ± j ω1 = (1/RC) (−3 ± j √3)    (2.4.45)

In a similar way we calculate the Butterworth poles, where Cb = C/8:

s1,2 = σ1 ± j ω1 = (1/RC) (−2 ± j 2)    (2.4.46)

For critical damping (CD) the imaginary part of the poles is zero, so Cb = C/16, as found before. The poles are:

s1,2 = σ1 = −4/(R C)    (2.4.47)

In the no-coupling case (k = 0) the bridging capacitance is Cb = C/4 and the poles are:

s1,2 = σ1 ± j ω1 = (1/RC) (−1 ± j √3)    (2.4.48)
For all four kinds of poles the input impedance is Zi = R = √(L/C), independent of frequency. Now we have all the necessary data to calculate the frequency, phase, time delay and step responses.
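Eq. 2.4.29 reproduces all four pole sets directly; a short sketch (ours):

```python
import cmath, math

# Sketch: Eq. 2.4.29 evaluated for the four bridging capacitances, R = C = 1.
def tcoil_poles(R, C, Cb):
    root = cmath.sqrt(1.0 - 16.0*Cb/C)
    s1 = -(1.0 + root)/(4.0*R*Cb)
    s2 = -(1.0 - root)/(4.0*R*Cb)
    return s1, s2

R = C = 1.0
s1, s2 = tcoil_poles(R, C, C/12.0)              # Bessel (MFED), Eq. 2.4.45
assert abs(s1.real + 3.0) < 1e-9 and abs(abs(s1.imag) - math.sqrt(3.0)) < 1e-9
s1, s2 = tcoil_poles(R, C, C/8.0)               # Butterworth (MFA), Eq. 2.4.46
assert abs(s1.real + 2.0) < 1e-9 and abs(abs(s1.imag) - 2.0) < 1e-9
s1, s2 = tcoil_poles(R, C, C/16.0)              # critical damping, Eq. 2.4.47
assert abs(s1 - (-4.0)) < 1e-9 and abs(s2 - (-4.0)) < 1e-9
s1, s2 = tcoil_poles(R, C, C/4.0)               # k = 0, Eq. 2.4.48
assert abs(s1.real + 1.0) < 1e-9 and abs(abs(s1.imag) - math.sqrt(3.0)) < 1e-9
```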
By inserting the values for the normalized poles, with RC = 1 and ωh = 1/RC, we can plot the response for each of the four types of poles, as shown in Fig. 2.4.5.
By comparing this diagram with the frequency response plot for the simple series peaking circuit in Fig. 2.2.3, we realize that the upper cut off frequency ωH of the T-coil circuit is exactly twice that of the two-pole series peaking circuit (comparing, of course, the responses for the same kind of poles). E.g., for Butterworth poles we had s1n,2n = −1 ± j (Eq. 2.2.16) for the series peaking circuit, whilst here we have s1n,2n = −2 ± j 2. Thus the bandwidth improvement factor of the two-pole T-coil circuit, compared with the single-pole (RC) circuit, is ηb = 2.83 (the ratio of the absolute values of the poles). Similarly, for the other kinds of poles the bandwidth improvement is greater too, as reported in Table 2.4.1 at the end of this section. Owing to this property it is worth considering the use of a T-coil circuit whenever possible. For the same reason we shall discuss T-coil circuits further in detail.
[Graph: normalized magnitude vs. ω/ωh on log–log axes, with L = R²C and ωh = 1/RC; curve parameters: a) k = 0.33, Cb/C = 1/8; b) k = 0.5, Cb/C = 1/12; c) k = 0.6, Cb/C = 1/16; d) k = 0, Cb/C = 1/4; the non-peaking L = 0 response is also drawn.]
Fig. 2.4.5: The frequency response magnitude of the T-coil circuit, taken from the coil center tap. Curve a) is the MFA (Butterworth) case, b) is the MFED (Bessel) case, c) is the critical damping (CD) case and d) is the no-coupling (k = 0) case. The non-peaking (L = 0) case is the reference. The bandwidth extension is notably larger, not only compared with the two-pole series peaking circuit, but also with the three-pole series peaking circuit.
By inserting the values for the normalized poles into the phase equation, as we did in the calculation of the frequency response, we obtain the plots shown in Fig. 2.4.6; with the same pole values in the envelope delay equation we get the Fig. 2.4.7 responses.
[Graph: phase angle φ from 0° to −180° vs. ω/ωh for the four cases, same parameters as in Fig. 2.4.5.]
Fig. 2.4.6: The transfer function phase angle of the T-coil circuit, for the same values of coupling and capacitance ratio as for the frequency response magnitude. a) is MFA, b) is MFED, c) is CD and d) is the no-coupling case. The non-peaking (L = 0) case is the reference.
[Graph: normalized envelope delay τe ωh from 0 to −1.25 vs. ω/ωh for the four cases, same parameters as in Fig. 2.4.5.]
Fig. 2.4.7: The envelope delay of the T-coil circuit: a) MFA, b) MFED, c) CD, d) k = 0. The T-coil circuit delay at low frequencies is exactly one half of that in the L = 0 case.
We derive the step response from Eq. 2.4.25, as was done in Part 1, Eq. 1.14.29. We take Eq. 2.2.36 for complex poles (MFA, MFED, and the case k = 0) and Eq. 2.2.41 for the double pole (the CD case). To make the expressions simpler we insert the numerical values of the normalized poles and substitute t/RC = t/T:

a) for Butterworth poles (MFA), where k = 0.33 and Cb = C/8:

ga(t) = 1 − √2 e^(−2t/T) sin(2t/T + 0.7854)    (2.4.49)

b) for Bessel poles (MFED), where k = 0.5 and Cb = C/12:

gb(t) = 1 − 2 e^(−3t/T) sin(√3 t/T + 0.5236)    (2.4.50)

c) for critical damping (CD), where k = 0.6 and Cb = C/16:

gc(t) = 1 − e^(−4t/T) (1 + 4t/T)    (2.4.51)

d) for k = 0 and Cb = C/4:

gd(t) = 1 − (2/√3) e^(−t/T) sin(√3 t/T + 1.0472)    (2.4.52)

(the phase constants are in radians: 0.7854 = π/4, 0.5236 = π/6, 1.0472 = π/3)
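These four responses are easy to evaluate numerically; the sketch below (ours) also confirms the overshoot figures quoted in Fig. 2.4.8 and Table 2.4.1:

```python
import math

# Sketch: the four normalized step responses of Eq. 2.4.49-2.4.52 (T = RC = 1).
def ga(t):  # MFA (Butterworth)
    return 1.0 - math.sqrt(2.0)*math.exp(-2.0*t)*math.sin(2.0*t + math.pi/4.0)
def gb(t):  # MFED (Bessel)
    return 1.0 - 2.0*math.exp(-3.0*t)*math.sin(math.sqrt(3.0)*t + math.pi/6.0)
def gc(t):  # critical damping
    return 1.0 - math.exp(-4.0*t)*(1.0 + 4.0*t)
def gd(t):  # k = 0
    return 1.0 - (2.0/math.sqrt(3.0))*math.exp(-t)*math.sin(math.sqrt(3.0)*t + math.pi/3.0)

for g in (ga, gb, gc, gd):
    assert abs(g(0.0)) < 1e-12          # the step starts from zero
    assert abs(g(20.0) - 1.0) < 1e-6    # and settles to one

ts = [i*0.001 for i in range(5001)]
assert abs(max(ga(t) for t in ts) - 1.043) < 0.002   # MFA overshoot ~4.3 %
assert abs(max(gd(t) for t in ts) - 1.163) < 0.002   # k = 0 overshoot ~16.3 %
assert max(gc(t) for t in ts) <= 1.0 + 1e-12         # CD: no overshoot
```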
The plots corresponding to these four equations are shown in Fig. 2.4.8. Also shown
are the corresponding four pole patterns.
[Graph: step responses vs. t/RC for the four cases (same parameters as in Fig. 2.4.5), with an inset showing the four pole patterns on the σ/ωh axis between −4 and 0.]
Fig. 2.4.8: The step response of the T-coil circuit. As before, a) is MFA, b) is MFED, c) is CD and d) is the case k = 0. The non-peaking case (L = 0) is the reference. The no-coupling case has excessive overshoot, 16.3 %, but the MFA overshoot is also high, 4.3 %. Note the pole patterns of the four cases: the closer the poles are to the imaginary axis, the greater is the overshoot. The diameter of the circle on which the poles lie is 4/RC.
By multiplication with 1/s we obtain the corresponding formula for the step response in the frequency domain:

G(s) = (s − s3)(s − s4) / [ s (s − s1)(s − s2) ]    (2.4.55)

The residue of the pole at the origin is:

res0 = lim(s→0) s [ (s − s3)(s − s4) e^(st) / ( s (s − s1)(s − s2) ) ] = s3 s4/(s1 s2) = (σ1² + ω1²)/(σ1² + ω1²) = 1

and the sum of the two residues at the complex conjugate poles yields:

g(t) = 1 + (4 σ1/ω1) e^(σ1 t) · (e^(jω1 t) − e^(−jω1 t))/(2j) = 1 + (4 σ1/ω1) e^(σ1 t) sin ω1 t    (2.4.58)
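Evaluated for the MFA case (σ1 = −2, ω1 = 2, T = RC = 1), Eq. 2.4.58 gives the characteristic all pass behaviour of Fig. 2.4.9; a small sketch (ours):

```python
import math

# Sketch: Eq. 2.4.58 for the MFA case (sigma1 = -2, omega1 = 2, T = RC = 1).
# The all pass output taken from R starts at 1, dips below zero, returns to 1.
def g_allpass(t, sigma1=-2.0, omega1=2.0):
    return 1.0 + (4.0*sigma1/omega1)*math.exp(sigma1*t)*math.sin(omega1*t)

assert abs(g_allpass(0.0) - 1.0) < 1e-12     # instant feedthrough at t = 0
assert abs(g_allpass(20.0) - 1.0) < 1e-9     # final value 1
dip = min(g_allpass(i*0.001) for i in range(5001))
assert dip < 0.0                             # the response swings below zero
```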
For critical damping (CD) both zeros and both poles are real. Then s1 = s2 and s3 = s4 = −s1. There are only two residues, which are calculated in two different ways (because the residue of the double pole must be calculated from the first derivative):

res0 = lim(s→0) s [ (s − s3)² e^(st) / ( s (s − s1)² ) ] = s3²/s1² = 1    (because s3 = −s1)

res1 = lim(s→s1) d/ds [ (s − s1)² (s − s3)² e^(st) / ( s (s − s1)² ) ]

     = lim(s→s1) d/ds [ (s − s3)² e^(st) / s ]

     = lim(s→s1) [ 2 (s − s3) e^(st)/s + (s − s3)² t e^(st)/s − (s − s3)² e^(st)/s² ]

     = 4 s1 t e^(s1 t)    (because s3 = −s1)    (2.4.59)
The sum of both residues is the time response sought. We insert the normalized poles and put t/RC = t/T to obtain:

a) for Butterworth poles (MFA), where k = 0.33 and Cb = C/8:
All four plots are shown in Fig. 2.4.9. Note the initial transition, owing to the bridging capacitance Cb, at high frequencies; the dip where the phase inversion between the high pass and the low pass section occurs; and the transition to the final value.
[Graph: step response taken from the loading resistor R, vs. t/RC, for the four cases a)–d) with the same parameters as in Fig. 2.4.5; each curve starts at 1, dips below zero, and returns to the final value 1.]
Fig. 2.4.9: The step response of the T-coil circuit, but now with the output taken from the loading resistor R (this is interesting for cascading stages, as explained later). As before, a) MFA, b) MFED, c) CD, and d) k = 0. The system has the characteristics of an all pass filter.
All the significant data of the T-coil peaking circuit are collected in Table 2.4.1.

Table 2.4.1: Two-pole T-coil circuit parameters.

  response type    k      Cb/C    ηb      ηr      δ [%]
  a) MFA          0.33    1/8     2.83    2.98    4.30
  b) MFED         0.50    1/12    2.72    2.76    0.43
  c) CD           0.60    1/16    2.57    2.66    0.00
  d) k = 0        0.00    1/4     2.54    3.23    16.3

(k is the coupling factor, ηb the bandwidth improvement factor, ηr the rise time improvement factor, and δ the overshoot.)
capacitances C1, C2, and C3. This will cause a slight decrease in bandwidth, but — as we will see later — the decrease introduced by the T-coils will not harm the operation of the total system in any way.
Fig. 2.4.10: An example of a system with different input impedances (TV studio equipment).
The signal from a color TV camera is controlled on the monitor screen, the RGB color vectors
are measured by the vectorscope, and, finally, the signal is sent to the video modulator for
broadcasting. All interconnections are made by a coaxial cable with the characteristic
impedance of 75 Ω. With long cables, adding considerable delay, the input capacitances can affect the highest frequencies, causing reflections.
On the basis of the data in Fig. 2.4.10 we will calculate the T-coil for each of the
three devices. In addition we will calculate the bandwidth at each input. Since the whole
system must faithfully transmit pulses, we will consider Bessel poles for all three T-coils.
We use the following four relations:
L = R² C    (Eq. 2.4.19)
So we calculate:
a) for the monitor,
L1 = 152 nH, Cb1 = 2.25 pF, fh1 = 78.6 MHz, fH1 = 231 MHz;
The bandwidths are far above the requirement of the system, which is about 6 MHz
for either a color or a black and white signal. Fig. 2.4.11 shows the schematic diagram in
which the calculated component values are implemented.
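As a cross-check of case a), the sketch below (ours) recomputes the monitor values. The input capacitance C = 27 pF is inferred here from L1 = R²C; it is an assumption of this sketch, not a value quoted in the surrounding text:

```python
import math

# Sketch of the monitor-input calculation (case a) in the text).
# The input capacitance C = 27 pF is inferred from L1 = R^2*C and is
# an assumption of this sketch.
R = 75.0          # ohm, coax characteristic impedance
C = 27e-12        # farad (assumed)

L1  = R*R*C                   # Eq. 2.4.19
Cb1 = C/12.0                  # Bessel (MFED), Eq. 2.4.32
fh1 = 1.0/(2.0*math.pi*R*C)   # non-peaking half power frequency

assert abs(L1 - 152e-9) < 1e-9       # ~152 nH
assert abs(Cb1 - 2.25e-12) < 1e-14   # 2.25 pF
assert abs(fh1 - 78.6e6) < 0.2e6     # ~78.6 MHz
```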
[Schematic: the TV camera signal feeds, via 75 Ω coax sections, the video monitor (T-coil L1, Cb1), the vectorscope (L2, Cb2) and the video modulator (L3, Cb3), terminated by R1 at the end of the chain.]
Fig. 2.4.11: Input impedance compensation by T-coil sections prevents signal reflections. Each section of the coaxial cable sees the terminating 75 Ω resistor at the end of the chain. The bandwidth is affected only slightly. The circuit values are given in the text above.
Since a properly designed T-coil circuit has a constant input impedance, it may be used in connection with a series peaking circuit in order to further improve the system bandwidth, as we shall see in Sec. 2.6. But first we shall examine a three-pole T-coil system.
[Fig. 2.5.1 shows the three-pole T-coil network: the current source Ii drives the T-coil (Cb, k, L), loaded by C and R, with the additional input capacitance Ci at the driving node. Fig. 2.5.2 shows the third-order Bessel pole layout: s1 = −1.839 + j 1.754, s2 = −1.839 − j 1.754, s3 = −2.322, with the circle of diameter D1 through s1, s2 and the circle of diameter D2 through s3.]

Fig. 2.5.1: The three-pole T-coil network. Fig. 2.5.2: Bessel pole layout for Fig. 2.5.1.
If properly designed, the basic two-pole T-coil circuit will have a constant input impedance R, independent of frequency. This property allows a great simplification of the three-pole network analysis. To both poles of the T-coil, s1 and s2, we only need to add the third, input pole s3 = −1/(R Ci). In order to design an efficient peaking circuit, the tap of the coil must feed the greater capacitance, so Ci < C, because the T-coil has no influence on the input pole s3. Since the network is reciprocal (the current input and voltage output nodes can be exchanged without affecting the response) we can always fulfil this requirement. Also, because of the constant input impedance, we can obtain the expression for the transfer function from that of a two-pole T-coil circuit (Eq. 2.4.25) by adding to it the influence of the third pole s3 (resulting in a simple multiplication of the first-order and second-order transfer functions):

F(s) = Vo/I1 = R / [ (1 + s R Ci)(s² R² C Cb + s R C/2 + 1) ]    (2.5.1)
From the analysis of the two-pole T-coil circuit we remember that the diameter of the circle on which both poles s1 and s2 lie is D1 = 4/RC (see Fig. 2.4.3 or 2.4.8). The diameter D2 of the circle which goes through the real pole s3 is simply 1/(R Ci) (the reason why we have drawn a circle through this pole too will become obvious later, when we analyze the four-pole L+T circuits). We introduce a new parameter:

n = C/Ci    (2.5.2)

The ratio of the diameters of the circles going through the poles and the origin is then:

D2/D1 = (1/(R Ci)) / (4/(R C)) = n/4    (2.5.3)

From this we obtain:

C/Ci = n = 4 D2/D1    (2.5.4)
[Diagram: one pole s1 = σ1 + jω1 on the circle of diameter D1 = 4/RC through the origin, with θ1 measured from the positive real axis; the relations shown are σ1 = −(4/RC) cos²θ1, ω1 = −(4/RC) sin θ1 cos θ1, M1 = √(σ1² + ω1²), θ1 = arctan(ω1/σ1).]
Fig. 2.5.3: The basic trigonometric relations of the main parameters for one of the poles of the T-coil circuit. Knowing one pair of parameters, it is possible to calculate the rest by these simple relations.
Fig. 2.5.3 illustrates some basic trigonometric relations between the polar and Cartesian expressions of the poles, obtained from the similarity of the two right angle triangles △(0, σ1, s1) and △(0, s1, D1):

Re{s1} = σ1 = −D1 cos² θ1 = −(4/RC) cos² θ1    (2.5.5)

where D1 is the circle diameter, D1 = 4/RC. Likewise:

Im{s1} = ω1 = −D1 cos θ1 sin θ1 = −(4/RC) cos θ1 sin θ1    (2.5.6)
From these equations we can calculate the coupling factor k and the bridging capacitance Cb. Since:

tan θ1 = Im{s1}/Re{s1} = ω1/σ1    (2.5.7)

the corresponding coupling factor, according to Eq. 2.4.36, is:

k = (3 − tan² θ1) / (5 + tan² θ1)

and, as before in Eq. 2.4.31, the bridging capacitance is:

Cb = C (1 + tan² θ1) / 16
Next we must calculate the parameter n from the tables of poles in Part 4; for Butterworth poles the values for order n = 3 are listed in Table 4.3.1. Returning to the equations for k and Cb we find k = 0 (no coupling!) and Cb = 0.25 C. Just as for the two-pole T-coil circuit, here too L = R² C. So we have all the circuit parameters for the Butterworth poles.

We can take the values for the Bessel poles of order n = 3 either from Table 4.4.3 in Part 4, or by running the BESTAP routine (Part 6):
This is important because, if the coil is replaced by a short circuit, both capacitances appear in parallel with the loading resistor R. Since Ci = C/n, we may express both capacitances with the total capacitance Cc = C + Ci and obtain:

C = Cc n/(n + 1)    and    Ci = Cc/(n + 1)    (2.5.15)
So far we have used the pole data from tables, since we needed only the ratios of these poles. But to calculate the frequency, phase, envelope delay, and step responses we shall need the actual values of the poles. We have calculated the poles of the T-coil circuit by Eq. 2.4.29, which we repeat here for convenience:

s1,2 = −(1/(4 R Cb)) [ 1 ± √(1 − 16 Cb/C) ]
We shall use the Butterworth poles to explain the procedure. For these poles we have Cb = C/4 and n = 2. By inserting these values in the above formula we obtain:

s1,2 = (1/(R C)) (−1 ± j √3)    (2.5.16)

     = (1/(R Cc)) (−1.5 ± j 2.5981)    (2.5.17)

The input pole is:

s3 = −1/(R Ci) = −(n + 1)/(R Cc) = −3/(R Cc)    (2.5.18)
In a similar way we also calculate the values for the Bessel poles and obtain:

s1,2 = (1/(R Cc)) (−2.8860 ± j 2.7532)    and    s3 = −3.6447/(R Cc)    (2.5.19)

When calculating the values for the critical damping case (CD) we must consider that the imaginary parts of the poles s1 and s2 must be zero. This gives Cb = C/16. Here we may choose n = 2, which means that Ci = C/2. The corresponding poles, which are all real, are:

s1,2 = −6/(R Cc)    and    s3 = −3/(R Cc)    (2.5.20)
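A short sketch (ours) reproduces these normalized pole values from Eq. 2.4.29 and Eq. 2.5.18:

```python
import cmath

# Sketch: normalized three-pole T-coil poles, in units of 1/(R*Cc) with
# Cc = C + Ci, from Eq. 2.4.29 plus the input pole s3 = -(n+1)/(R*Cc).
def three_pole(Cb_over_C, n):
    root = cmath.sqrt(1.0 - 16.0*Cb_over_C)
    scale = (n + 1.0)/n            # converts units of 1/(R*C) into 1/(R*Cc)
    s1 = -(1.0 + root)/(4.0*Cb_over_C) * scale
    s3 = -(n + 1.0)
    return s1, s3

s1, s3 = three_pole(0.25, 2.0)        # Butterworth (MFA), Eq. 2.5.17-2.5.18
assert abs(s1 - (-1.5 - 1j*2.5981)) < 1e-3 or abs(s1 - (-1.5 + 1j*2.5981)) < 1e-3
assert s3 == -3.0
s1, s3 = three_pole(1.0/16.0, 2.0)    # critical damping, Eq. 2.5.20
assert abs(s1 - (-6.0)) < 1e-9 and s3 == -3.0
```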
To calculate the frequency response we can use Eq. 2.2.27 for a two-pole series peaking circuit and add the effect of the additional real input pole, σ3. We insert the normalized poles (R Cc = 1) and the normalized frequency ω/ωh. Thus we obtain the following expression:

|F(ω)| = (σ1n² + ω1n²) |σ3n| / √{ [σ1n² + (ω/ωh − ω1n)²] [σ1n² + (ω/ωh + ω1n)²] [σ3n² + (ω/ωh)²] }    (2.5.21)
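With the MFA values inserted, Eq. 2.5.21 can be evaluated directly; the sketch below (ours) confirms that the MFA response is a third-order Butterworth magnitude, −3 dB at ω = 3 ωh (ηb = 3.00, see Table 2.5.1):

```python
import math

# Sketch: Eq. 2.5.21 with the normalized MFA poles of Eq. 2.5.17-2.5.18
# (s1n,2n = -1.5 +/- j2.5981, s3n = -3).
s1n, w1n, s3n = -1.5, 2.5981, -3.0

def mag(w):
    num = (s1n*s1n + w1n*w1n) * abs(s3n)
    den = math.sqrt((s1n*s1n + (w - w1n)**2) *
                    (s1n*s1n + (w + w1n)**2) *
                    (s3n*s3n + w*w))
    return num/den

assert abs(mag(0.0) - 1.0) < 1e-4                 # unity at dc
assert abs(mag(3.0) - 1.0/math.sqrt(2.0)) < 1e-3  # -3 dB at omega = 3*omega_h
```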
The plot for all three types of poles is shown in Fig. 2.5.4. By comparing curve a (MFA, where k = 0) with curve a in Fig. 2.4.5 (where also k = 0), we realize that we have achieved a bandwidth extension just by splitting the total circuit capacitance into the input capacitance Ci and the coil loading capacitance C.
[Graph: normalized magnitude vs. ω/ωh, with L = R²C and ωh = 1/R(C + Ci); curve parameters: a) k = 0, Cb/C = 0.25, C/Ci = 2.00; b) k = 0.35, Cb/C = 0.12, C/Ci = 2.63; c) k = 0.6, Cb/C = 0.06, C/Ci = 2.00; the L = 0 response is also drawn.]
Fig. 2.5.4: Three-pole T-coil network frequency response: a) MFA; b) MFED; c) CD case. The non-peaking response (L = 0) is the reference. The MFA bandwidth is larger than that of the two-pole circuit in Fig. 2.4.5; in contrast, the MFED bandwidth is nearly the same, but the circuit can be realized more easily, owing to the lower magnetic coupling factor required. Note also that, owing to the possibility of separating the total capacitance into a driving and a loading part, the reference non-peaking cut off frequency ωh must be defined as 1/[R(C + Ci)].
For the phase response we can use Eq. 2.3.25 again and, by inserting the values of the poles, plot the responses shown in Fig. 2.5.5:

φ = arctan[(ω/ωh − ω1n)/σ1n] + arctan[(ω/ωh + ω1n)/σ1n] + arctan[(ω/ωh)/σ3n]
[Graph: phase angle φ from −30° to −270° vs. ω/ωh for the three cases, same parameters as in Fig. 2.5.4.]
Fig. 2.5.5: Three-pole T-coil network phase response: a) MFA; b) MFED; c) CD case. Note that at high frequencies the three-pole system phase asymptote is −270° (3 × 90°).
We take Eq. 2.3.26 again and, by inserting the values of the poles, plot the envelope delay, as in Fig. 2.5.6:

τe ωh = σ1n/[σ1n² + (ω/ωh − ω1n)²] + σ1n/[σ1n² + (ω/ωh + ω1n)²] + σ3n/[σ3n² + (ω/ωh)²]
[Graph: normalized envelope delay τe ωh from 0 to −1.0 vs. ω/ωh for the three cases, same parameters as in Fig. 2.5.4.]
Fig. 2.5.6: Three-pole T-coil network envelope delay: a) MFA; b) MFED; c) CD case. Note that the MFED flatness now extends to nearly 1.5 ωh.
We shall again use Eq. 2.3.27 and Eq. 2.3.28 (see Appendix 2.3, Sec. A2.3.1 for the complete derivation). By inserting the values of the poles we can calculate the responses and plot them, as shown in Fig. 2.5.7a and 2.5.7b.

For the CD case, where we have a double real pole (s2 = s1 = σ1), the calculation is different (see Eq. 1.11.12 in Part 1). The general expression:

F(s) = −s1² s3 / [ (s − s1)² (s − s3) ]    (2.5.24)

(where s1 = σ1 and s3 = σ3) must be multiplied by the unit step operator 1/s to obtain the form appropriate for the inverse Laplace transform:

G(s) = −s1² s3 / [ s (s − s1)² (s − s3) ]    (2.5.25)

and the step response is the inverse Laplace transform of G(s), which in turn is equal to the sum of its residues:

g(t) = ℒ⁻¹{G(s)} = Σ res { −s1² s3 e^(st) / [ s (s − s1)² (s − s3) ] }    (2.5.26)

     = 1 − s3 [ s1 (s1 − s3) t − (2 s1 − s3) ] e^(s1 t) / (s1 − s3)² − s1² e^(s3 t) / (s3 − s1)²
Finally, we normalize the poles (Eq. 2.5.20), σ1n = −6 and σ3n = −3, and normalize the time as t/T, where T = R(Ci + C), to obtain the formula from which plot c in Fig. 2.5.7 is calculated:

gc(t) = 1 + 3 (1 + 2 t/T) e^(−6 t/T) − 4 e^(−3 t/T)    (2.5.30)
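A quick numeric check of Eq. 2.5.30 (ours):

```python
import math

# Sketch: Eq. 2.5.30 checked at its endpoints (T = R*(C + Ci) = 1).
def gc(t):
    return 1.0 + 3.0*(1.0 + 2.0*t)*math.exp(-6.0*t) - 4.0*math.exp(-3.0*t)

assert abs(gc(0.0)) < 1e-12           # starts at zero: 1 + 3 - 4 = 0
assert abs(gc(10.0) - 1.0) < 1e-9     # settles to one
# no overshoot in the critically damped case:
assert max(gc(i*0.001) for i in range(5001)) <= 1.0 + 1e-12
```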
[Graph: step responses vs. t/R(C + Ci) for the three cases, same parameters as in Fig. 2.5.4; the L = 0 response is the reference.]
Fig. 2.5.7: The step response of the three-pole T-coil circuit: a) MFA; b) MFED; c) CD. The non-peaking case (L = 0) is the reference. Since the total capacitance C + Ci is equal to the C of the two-pole T-coil circuit, the MFED rise time is also nearly identical. However, the three-pole circuit is much easier to realize in practice, owing to the lower k required.
responses, making three plots each with a different ratio C/Ci = n. In the first group we shall put k = 0.33 and in the second group k = 0.2. The corresponding normalized poles are:

Group 1: k = 0.33, Cb = C/8

a) n = 2.5:  s1n,2n = −2.8 ± j 2.8     s3n = −3.5
b) n = 2:    s1n,2n = −3 ± j 3         s3n = −3
c) n = 1.5:  s1n,2n = −3.33 ± j 3.33   s3n = −2.5

The poles are selected so that the sum C + Ci is the same for all three cases. In this way we have the same upper half power frequency ωh for any set of poles. This is necessary in order to have the same scale for all three plots. For the above poles we obtain the frequency responses in Fig. 2.5.8 and the step responses in Fig. 2.5.9.

Group 2: k = 0.2, Cb = 0.167 C

a) n = 2:    s1n,2n = −2.25 ± j 2.908  s3n = −3
b) n = 1.5:  s1n,2n = −2.5 ± j 3.227   s3n = −2.5
c) n = 1:    s1n,2n = −3 ± j 3.873     s3n = −2
The corresponding frequency response plots are displayed in Fig. 2.5.10 and the step responses in Fig. 2.5.11. From Fig. 2.5.11 it is evident that in the second group we have decreased the coupling factor too much. Not a single curve in this figure is suitable for the peaking circuit of a pulse amplifier: in curve a the overshoot is excessive, whilst curve c exhibits too slow a response. Curve b rounds off too soon, reaching the final value with a much slower slope. In a plot with a coarser time scale this curve would clearly show a missing chunk of the step response. Needless to say, it would be very annoying if an oscilloscope amplifier had such a step response.
All the important data for the three-pole T-coil peaking circuits are collected in Table 2.5.1. It is worth noting that we achieve a three-pole MFED response with the coupling factor k = 0.35 (ηr = 2.78), whilst for the two-pole T-coil MFED response k = 0.5 was necessary (for a similar ηr = 2.76). If we are satisfied with a slightly smaller bandwidth, it is possible to use the parameters of Group 1, where the coupling factor is only 0.33. Such a small coupling factor is much easier to achieve than k = 0.5. So for the practical construction of a wideband amplifier we find the three-pole T-coil circuits very convenient.
Table 2.5.1: Three-pole T-coil circuit parameters.

  response type    k      Cb/C     C/Ci     ηb      ηr      δ [%]
  MFA              0      0.25     2        3.00    2.88    8.08
  MFED             0.35   0.125    2.645    2.75    2.78    0.75
  CD               0.60   0.063    2        2.22    2.26    0.00
  Group 1, a       0.33   0.125    2.5      2.75    2.75    0.80
  Group 1, b       0.33   0.125    2        2.59    2.64    0.00
  Group 1, c       0.33   0.125    1.5      2.35    2.60    0.00
  Group 2, a       0.2    0.167    2        2.84    2.80    1.85
  Group 2, b       0.2    0.167    1.5      2.59    2.62    0.00
  Group 2, c       0.2    0.167    1        2.13    2.16    0.00
[Graph: normalized magnitude vs. ω/ωh for Group 1 (k = 1/3, Cb/C = 1/8; C/Ci = 2.5, 2.0, 1.5 for curves a, b, c), with L = R²C and ωh = 1/R(C + Ci).]
Fig. 2.5.8: Low coupling factor, Group 1: frequency response.
[Graph: step responses vs. t/R(C + Ci) for Group 1, with an inset showing the pole positions s1, s2 (common to all three cases) and s3a, s3b, s3c.]
Fig. 2.5.9: Low coupling factor, Group 1: step response. In all three cases the real pole s3 is placed closer to the origin (becoming dominant) than in the MFA and MFED cases, making the responses more similar to the CD case. The characteristic circle of the complex conjugate pole pair has a slightly different diameter in each case.
[Graph: normalized magnitude vs. ω/ωh for Group 2 (k = 0.2, Cb/C = 1/6; C/Ci = 2.0, 1.5, 1.0 for curves a, b, c).]
Fig. 2.5.10: Low coupling factor, Group 2: frequency response.
[Graph: step responses vs. t/R(C + Ci) for Group 2, with an inset showing the pole positions.]
Fig. 2.5.11: Low coupling factor, Group 2: step response. The pole s3a is slightly further away from the imaginary axis than s3b or s3c, therefore causing an overshoot larger than in the MFED case. Both responses b and c are over-damped, reaching the final value much later than in the MFED case.
Instead of leaving the input capacitance Ci without any peaking coil, as in the three-pole T-coil circuit, we can add another coil between Ci and the T-coil. This is the same as adding the series peaking components to the T-coil (it can be done since the input impedance of the T-coil is resistive, if properly designed). In this way the MFA bandwidth can be extended by over 4 times, compared with the simple RC system. This is substantially more than the 2.75 times found in the two-pole T-coil circuit, where there was only one capacitance. In Fig. 2.6.1 we see such a circuit. The price to pay for this improvement is a coupling factor k > 0.5, which is difficult, but possible, to achieve.
[Fig. 2.6.1 shows the L+T network: Ii drives the series coil Li into the T-coil (Cb, k, L), loaded by C and R, with Ci at the input node. Fig. 2.6.2 shows the fourth-order Bessel pole pattern: s1,2 = −2.8962 ± j 0.8672 (on the circle of diameter D1) and s3,4 = −2.1038 ± j 2.6574 (on the circle of diameter D3).]
Fig. 2.6.1: The four-pole L+T network. Fig. 2.6.2: The Bessel four-pole pattern of L+T.
Here we utilize the basic property of the T-coil circuit — its constant and real input impedance, which presents the loading resistance to the input series peaking section. Since the input capacitance Ci and the input inductance Li form a letter 'L' (inverted), we call the network in Fig. 2.6.1 the L+T circuit. This is a four-pole network and its input impedance is not constant; rather, it is similar to that of the series peaking system, which we have already calculated (Eq. 2.2.44 — 2.2.48, plots in Fig. 2.2.9).
The transfer function of the L+T network is simply the product of the transfer
function for a two-pole series peaking circuit (Eq. 2.2.4) and the transfer function for a two-
pole T-coil circuit (Eq. 2.4.25). Explicitly:
" V
7V# Gi# V # GGb
J a=b œ † (2.6.1)
= " = "
=# # # =# #
7VGi 7V Gi VGb V GGb
Both polynomials in the denominator are written in the canonical form. It would be useless
to multiply them, because this would make the analysis very complicated. If we replace R
in the numerator by 1, a normalized equation results.
The T-coil section has two poles, which we can rewrite from Eq. 2.4.28:
whilst the input section L has two poles, rewritten from Eq. 2.2.5:
The T-coil circuit extends the bandwidth twice as much as the series peaking
circuit, so in order to extend the bandwidth of the L+T network as much as possible, the
T-coil tap must be connected to whichever capacitance is greater; thus C > Ci. Therefore,
the circle with the diameter D1 and the angle θ1, corresponding to the T-coil circuit poles
s1,2, is smaller than the circle with the diameter D3 and the angle θ3 corresponding to the
poles s3,4 of the L branch of the circuit.
Our task is to calculate the circuit parameters for the Bessel pole pattern shown in
Fig. 2.6.2, which gives an MFED response. The corresponding values for Bessel poles,
shown in Table 4.4.3 in Part 4, order n = 4, are:
From Fig. 2.2.2 it is evident that the diameter of the circle, on which the poles of the series
peaking circuit lie, is 2/RCi. But from Fig. 2.4.3, in the case of a two-pole T-coil circuit,
the circle diameter is 4/RC. Furthermore it is:
$$\frac{C}{C_i} = n \qquad\text{or}\qquad C_i = \frac{C}{n} \qquad (2.6.6)$$
From this we derive:
$$\frac{D_3}{D_1} = \frac{\dfrac{2}{R C_i}}{\dfrac{4}{R C}} = \frac{2n}{4} \;\Rightarrow\; n = 2\,\frac{D_3}{D_1} = 2 \cdot 1.7304 = 3.4608 \qquad (2.6.7)$$
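The ratio D3/D1 in Eq. 2.6.7 can be cross-checked numerically from the Bessel pole pattern of Fig. 2.6.2 (a complex pole pair lying on a circle through the origin has circle diameter D = |s|²/σ). A minimal sketch, assuming the pole values s1,2 = −2.8962 ± j0.8672 and s3,4 = −2.1038 ± j2.6574 from the figure:

```python
# Cross-check of Eq. 2.6.7 from the Bessel four-pole pattern (Fig. 2.6.2)
s1 = complex(-2.8962, 0.8672)   # T-coil pole pair
s3 = complex(-2.1038, 2.6574)   # series (L-branch) pole pair

# A pole pair on a circle through the origin: diameter D = |s|^2 / sigma
D1 = abs(s1)**2 / -s1.real
D3 = abs(s3)**2 / -s3.real

# Eq. 2.6.7: n = 2 * D3/D1
n = 2 * D3 / D1
print(round(D3 / D1, 3), round(n, 3))   # ≈ 1.730 and ≈ 3.461
```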
As for the three-pole T-coil analysis, here, too, we express the upper half power frequency
of the uncompensated circuit (without coils) as a function of the total capacitance Cc:
$$\omega_h = \frac{1}{R\,(C + C_i)} = \frac{1}{R\,C_c}$$
We can define the capacitors in relation to their sum, as in Eq. 2.5.15:
$$C = \frac{n}{n+1}\,C_c \qquad\text{and}\qquad C_i = \frac{1}{n+1}\,C_c$$
Now we can calculate all other parameters of the L+T circuit and also the actual
values of the poles:
a) Coupling factor (Eq. 2.4.36):
$$k = \frac{3 - \tan^2\theta_1}{5 + \tan^2\theta_1} = \frac{3 - \tan^2 163.33°}{5 + \tan^2 163.33°} = 0.5718$$
b) Bridging capacitance (Eq. 2.4.31):
$$C_b = \frac{C}{4}\,\frac{1-k}{1+k} = \frac{C}{4} \cdot \frac{1 - 0.5718}{1 + 0.5718} = 0.0681\,C$$
e) Real part of the pole s1, ℜ{s1} = σ1 (Eq. 2.5.5 and Fig. 2.5.3):
$$\sigma_1 = \frac{4}{R C}\,\cos^2\theta_1 = \frac{4 \cdot 1.2890}{R C_c}\,\cos^2 163.33° = \frac{4.7317}{R C_c}$$
f) Imaginary part of the pole s1, ℑ{s1} = ω1 (Eq. 2.5.6 and Fig. 2.5.3):
$$\omega_1 = -\frac{4}{R C}\,\cos\theta_1 \sin\theta_1 = -\frac{4 \cdot 1.2890}{R C_c}\,\cos 163.33°\,\sin 163.33° = \frac{1.4167}{R C_c}$$
g) Real part of the pole s3, ℜ{s3} = σ3 (Eq. 2.5.5 and Fig. 2.5.3):
$$\sigma_3 = \frac{2}{R C_i}\,\cos^2\theta_3 = \frac{2 \cdot 4.4608}{R C_c}\,\cos^2 128.37° = \frac{3.4376}{R C_c}$$
h) Imaginary part of the pole s3, ℑ{s3} = ω3 (Eq. 2.5.6 and Fig. 2.5.3):
$$\omega_3 = -\frac{2}{R C_i}\,\cos\theta_3 \sin\theta_3 = -\frac{2 \cdot 4.4608}{R C_c}\,\cos 128.37°\,\sin 128.37° = \frac{4.3419}{R C_c}$$
As above, we can calculate the parameters for the MFA response from the normalized
(RCc = 1) values of the 4th-order Butterworth system (Table 4.3.1, BUTTAP, Part 6):
The L+T network parameters for some other types of poles are given in Table 2.6.1
at the end of this section.
For Butterworth poles (and only for these!) it is very easy to calculate the upper half
power frequency ωH: it is equal to the radius of the circle centered at the origin, on which
all four poles lie, which in turn is equal to the absolute value of any one of the four poles. If
we use the normalized pole values, the circle radius is also equal to the factor of bandwidth
improvement, ηb. By dividing this value by RCc, we obtain ωH. We can use any one of the
four poles, e.g., s1n:
$$\eta_b = \frac{\omega_H}{\omega_h} = |s_{1n}| = \sqrt{\sigma_{1n}^2 + \omega_{1n}^2} = \sqrt{4.1213^2 + 1.7071^2} = 4.4609 \qquad (2.6.9)$$
and this is a really impressive bandwidth improvement.
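Eq. 2.6.9 can be checked directly; a minimal sketch, assuming the normalized MFA pole components σ1n = 4.1213 and ω1n = 1.7071:

```python
import math

# Eq. 2.6.9: bandwidth improvement = magnitude of any normalized Butterworth pole
sigma1n, omega1n = 4.1213, 1.7071
eta_b = math.hypot(sigma1n, omega1n)
print(round(eta_b, 4))   # ≈ 4.4609
```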
Let us compose a general expression for the frequency response of our L+T circuit.
The formula with normalized values is very similar to Eq. 2.5.21, except that here we have
two pairs of poles, −σ1n ± jω1n and −σ3n ± jω3n:
$$|F(\omega)| = \frac{\sigma_{1n}^2 + \omega_{1n}^2}{\sqrt{\left[\sigma_{1n}^2 + \left(\dfrac{\omega}{\omega_h} + \omega_{1n}\right)^2\right]\left[\sigma_{1n}^2 + \left(\dfrac{\omega}{\omega_h} - \omega_{1n}\right)^2\right]}} \cdot \frac{\sigma_{3n}^2 + \omega_{3n}^2}{\sqrt{\left[\sigma_{3n}^2 + \left(\dfrac{\omega}{\omega_h} + \omega_{3n}\right)^2\right]\left[\sigma_{3n}^2 + \left(\dfrac{\omega}{\omega_h} - \omega_{3n}\right)^2\right]}} \qquad (2.6.10)$$
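Eq. 2.6.10 can be evaluated numerically: at ω = 0 the magnitude must be exactly 1, and for the Butterworth (MFA) pole set it must be 1/√2 at ω = ηb·ωh. A sketch, assuming the normalized MFA poles −4.1213 ± j1.7071 and −1.7071 ± j4.1213:

```python
import math

def mag(x, s1, w1, s3, w3):
    """|F| of Eq. 2.6.10 at x = omega/omega_h, for two normalized pole pairs."""
    t1 = (s1*s1 + w1*w1) / math.sqrt((s1*s1 + (x + w1)**2) * (s1*s1 + (x - w1)**2))
    t3 = (s3*s3 + w3*w3) / math.sqrt((s3*s3 + (x + w3)**2) * (s3*s3 + (x - w3)**2))
    return t1 * t3

s1, w1, s3, w3 = 4.1213, 1.7071, 1.7071, 4.1213   # MFA (Butterworth) pole set
print(mag(0.0, s1, w1, s3, w3))      # 1.0 at DC
print(mag(4.4609, s1, w1, s3, w3))   # ≈ 0.7071 (half power at eta_b * omega_h)
```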
Since we have inserted the normalized poles, the frequency, too, had to be
normalized as ω/ωh. Fig. 2.6.3 shows the frequency response for a) MFA and b) MFED,
and also for two other pole placements, reported in [Ref. 2.28], the data of which are given
in Table 2.6.1. Whilst c and d offer an improvement in ηb and ηr, their step response is far
from optimum. In Fig. 2.6.4, the plot e) is the Chebyshev response, Δφ = ±0.05° [Ref. 2.24, 2.30]:
The plot f) is the Gaussian frequency response (to −12 dB) [Ref. 2.24, 2.30]:
s1n,2n = −σ1n ± jω1n = −3.3835 ± j2.0647 and θ1 = 148.83°
s3n,4n = −σ3n ± jω3n = −3.4150 ± j6.2556 and θ3 = 118.60°
The plot g) corresponds to a double pair of Bessel poles, with the following data:
s1n,2n = s3n,4n = −σ1n ± jω1n = −4.5000 ± j2.5981 and θ1 = 150.00°
[Plot: normalized magnitude vs. ω/ωh, curves a–d plus the non-peaking reference
(L = 0, Li = 0). Circuit values: Li = mR²Ci, L = R²C, ωh = 1/[R(C + Ci)]. Curve parameters:
      m     k     C/Ci    Cb/C
a)  1.71  0.55   4.83   0.073
b)  0.65  0.57   3.46   0.068
c)  1.49  0.59   6.00   0.064
d)  1.66  0.53   6.00   0.067 ]
Fig. 2.6.3: Four-pole L+T peaking circuit frequency-response: a) MFA; b) MFED; c) Group C;
d) Group A. In the non-peaking reference case both inductances are zero.
[Plot: normalized magnitude vs. ω/ωh, curves e–g plus the non-peaking reference;
same circuit as in Fig. 2.6.3. Curve parameters:
      m     k     C/Ci    Cb/C
e)  1.13  0.54   4.79   0.076
f)  1.09  0.49   6.43   0.085
g)  0.33  0.50   2.00   0.083 ]
Fig. 2.6.4: Some additional frequency response plots of the four-pole L+T peaking circuit:
e) Chebyshev with 0.05° phase ripple; f) Gaussian to −12 dB; g) double 2nd-order Bessel.
Note: The lower bandwidth (ωH = 2.9 ωh) of the system with repeated poles, plot g,
compared with the staggered pole placement, plot b in Fig. 2.6.3 (ωH = 3.47 ωh), clearly
shows that using repeated poles is not a clever idea! See also the step response plots.
All these groups of poles can be found in the tables [Ref. 2.30]. Here we can see the
extreme adaptability of the calculation method based on the trigonometric relations as
shown in Fig. 2.5.2, 2.5.3, 2.6.2, and the corresponding formulae. We call this method
geometrical synthesis. By this method, the calculation of circuit parameters for the
inductive peaking amplifier with any suitable pole placement is very easy and we will use it
extensively throughout the rest of the book.
[Plot: phase φ[°] from 0 to −300° vs. ω/ωh, curves a–d plus the non-peaking
reference (L = 0, Li = 0); circuit values and curve parameters as in Fig. 2.6.3.]
Fig. 2.6.5: Four-pole L+T peaking circuit phase response: a) MFA; b) MFED; c) Group C;
d) Group A. The non-peaking case, in which both inductors are zero, has a 90° maximum phase
shift; all other cases, being of 4th -order, have a 360° maximum phase shift.
The corresponding plots for the first four groups of poles are displayed in Fig. 2.6.6.
Note that the value of the delay at low frequency is slightly different for each pole group.
This is due to the different normalization of each circuit.
[Plot: normalized envelope delay τe·ωh from 0 to −1.0 vs. ω/ωh, curves a–d plus
the non-peaking reference (L = 0, Li = 0); circuit values and curve parameters as in Fig. 2.6.3.]
Fig. 2.6.6: Four-pole L+T peaking circuit envelope delay: a) MFA; b) MFED; c) Group C;
d) Group A. For the non-peaking case the envelope delay at DC is 1; all other cases have a larger
bandwidth and consequently a lower delay. Note the MFED flatness up to nearly 3.3 ωh.
The ℒ transform of the step response is obtained by multiplying this function by the
unit step operator 1/s, resulting in a new, five-pole function:
$$G(s) = \frac{s_1 s_2 s_3 s_4}{s\,(s - s_1)(s - s_2)(s - s_3)(s - s_4)} \qquad (2.6.14)$$
and to obtain the step response in the time domain, we calculate the ℒ⁻¹ transform:
$$g(t) = \mathcal{L}^{-1}\{G(s)\} = \sum_{i=0}^{4} \operatorname{res}_i \left\{ G(s)\,e^{st} \right\} \qquad (2.6.15)$$
The analytical calculation is pure routine algebra, but it would require some 8
pages to present. Readers who are interested in the details can find it in Appendix 2.3.
Here we will write only the result:
K" 5 " > É #
ga>b œ " e a5" A =#" Bb =#" aA 5" Bb# sina=" > )" b
="
K$ 5 $ > É #
e a5$ C =#$ Bb =#$ aC 5$ Bb# sina=$ > )$ b (2.6.16)
=$
where:
# 5$# =#$
A œ a5" 5$ b# a=#" =$ b K" œ
A# =#" B#
B œ # a5" 5$ b
# 5"# =#"
C œ a5" 5$ b# a=#" =$ b K$ œ
C# =#$ B#
=" aA 5" Bb =$ aC 5$ Bb
)" œ arctan )$ œ arctan (2.6.17)
5" A =#" B 5$ C =#$ B
Note: The angles θ1 and θ3 calculated by the arctangent will not always give a correct
result. Depending on the pole pattern, one or both will require an addition of π radians, as
we show in Appendix 2.3. In the following relations we give the correct values.
By inserting the normalized values for the poles, and the normalized time t/T, where
T = R(Ci + C), we obtain the following step response functions:
a) MFA response (Butterworth poles)
$$g(t) = 1 + 2.4142\,e^{-4.1213\,t/T}\sin(1.7071\,t/T + 0.7854 + \pi) - 0.9968\,e^{-1.7071\,t/T}\sin(4.1213\,t/T + 0.7854 + \pi)$$
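The MFA step response above can be sanity-checked numerically: g(0) must be 0, g(t) must settle to 1, and the overshoot should be near the 10.9 % given in Table 2.6.1. A sketch (T = 1; the term signs are taken here as the assumption that makes g(0) = 0):

```python
import math

def g_mfa(t):
    # MFA (Butterworth) step response of the L+T network, normalized time t/T
    return (1 + 2.4142 * math.exp(-4.1213*t) * math.sin(1.7071*t + 0.7854 + math.pi)
              - 0.9968 * math.exp(-1.7071*t) * math.sin(4.1213*t + 0.7854 + math.pi))

print(round(g_mfa(0.0), 2))   # ≈ 0.0 (starts at zero)
print(round(g_mfa(6.0), 2))   # ≈ 1.0 (settles to the final value)
```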
f) Gaussian response to −12 dB
$$g(t) = 1 + 2.8084\,e^{-3.3835\,t/T}\sin(2.0647\,t/T + 0.5403 + \pi) + 0.5098\,e^{-3.4150\,t/T}\sin(6.2556\,t/T + 1.0598)$$
The last response was calculated by convolution. We will not repeat it here, because
it is a very lengthy procedure and it has already been published [Ref. 2.5]. As with any
function with repeating poles, the resulting step response is slow compared with the
function with the same number of poles but in an optimized pattern. Fig. 2.6.7 shows the
step responses of a), b), c) and d) ; Fig. 2.6.8 shows the step responses of e), f) and g) .
The data for the four-pole L+T peaking circuit are given in Table 2.6.1.
Table 2.6.1
response type              |   k    |   m    |   n    |  Cb/C  |  ηb  |  ηr  | δ [%]
MFA                        | 0.5469 | 1.7071 | 4.8283 | 0.0732 | 4.46 | 4.02 | 10.9
MFED (4th-order Bessel)    | 0.5718 | 0.6488 | 3.4608 | 0.0681 | 3.47 | 3.46 | 0.90
Group A                    | 0.5333 | 1.6647 | 6.0000 | 0.0667 | 4.40 | 4.08 | 1.90
Group C                    | 0.5901 | 1.4877 | 6.0000 | 0.0644 | 4.72 | 4.15 | 6.20
Chebyshev 0.05°            | 0.5358 | 1.1343 | 4.7902 | 0.0756 | 4.09 | 3.52 | 3.56
Gaussian to −12 dB         | 0.4904 | 1.0888 | 6.4357 | 0.0855 | 3.71 | 3.43 | 0.47
Double 2nd-order Bessel    | 0.5000 | 0.3333 | 2.0000 | 0.0833 | 2.92 | 2.96 | 0.44
Table 2.6.1: Four-pole L+T peaking circuit parameters.
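The Cb/C column of Table 2.6.1 is tied to the coupling factor k. Assuming the usual constant-resistance T-coil bridging relation Cb = C(1 − k)/[4(1 + k)] (the form behind Eq. 2.4.31), the tabulated pairs can be cross-checked:

```python
def cb_over_c(k):
    # T-coil bridging capacitance ratio as a function of the coupling factor k
    # (assumed relation, consistent with the k and Cb/C pairs of Table 2.6.1)
    return (1 - k) / (4 * (1 + k))

# (k, Cb/C) pairs from Table 2.6.1
for k, cb in [(0.5469, 0.0732), (0.5718, 0.0681), (0.5901, 0.0644), (0.5000, 0.0833)]:
    print(k, round(cb_over_c(k), 4), cb)
```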
Thus we have concluded the section on four-pole L+T peaking networks. Here we
have discussed the geometrical synthesis in a very elementary way, which can be briefly
explained as follows:
If the main capacitance C loading the T-coil network tap is known, and the
loading resistor R is selected according to the required gain, we can — based on the pole data
and the geometrical relations of their real and imaginary parts — calculate all the
remaining circuit parameters for the complete L+T network.
As we shall see later in the book, the same procedure can be used to calculate the
circuit parameters for a multi-stage amplifier by implementing the peaking networks
described so far.
[Plot: normalized step response vo/(ii R) vs. t/[R(C + Ci)], curves a–d plus the
non-peaking reference (L = 0, Li = 0); circuit values and curve parameters as in Fig. 2.6.3.]
Fig. 2.6.7: Four-pole L+T circuit step response: a) MFA; b) MFED; c) Group C; d) Group A.
[Plot: normalized step response vo/(ii R) vs. t/[R(C + Ci)], curves e–g plus the
non-peaking reference (L = 0, Li = 0); circuit values and curve parameters as in Fig. 2.6.4.]
Fig. 2.6.8: Some additional four-pole L+T circuit step responses: e) Chebyshev 0.05°;
f) Gaussian to −12 dB; g) double 2nd-order Bessel. Again, the step response confirms
that repeating the poles is not optimal; compare the rise times of g) and b) in Fig. 2.6.7.
In some cases, when a single amplifying stage is sufficient, we can use the very
simple and efficient shunt peaking circuit shown in Fig. 2.7.1. This is equivalent to
Fig. 2.1.3, but with the output taken from the capacitor C. Because shunt peaking networks
are very simple to make, and their bandwidth extension and risetime improvement surpass
those of their series peaking counterparts, they have found very broad application in single
stage amplifiers, e.g., in TV receivers.
As the following analysis will show, the two-pole shunt peaking circuit in Fig. 2.7.1
also has one zero. Likewise, the three-pole shunt peaking circuit, which we will discuss in
Sec. 2.8, has two zeros. These zeros prevent us from using the geometrical synthesis
method for shunt peaking circuits.
Fig. 2.7.1: A shunt peaking network. It has two poles and one zero.
We shall first find the value of the parameter m for the MFA response. In this case
the factors at (ω/ωh)² in the numerator and in the denominator must be equal [Ref. 2.4].
We calculate the value of the parameter m for the MFED response from the
envelope delay response, which we derive from the phase angle φ(ω):
$$\varphi(\omega) = \arctan\frac{\Im\{F(\omega)\}}{\Re\{F(\omega)\}}$$
where F(ω) can be derived from Eq. 2.7.3 by making the denominator real. This is done by
multiplying both the numerator and the denominator by the complex conjugate value of the
denominator: 1 − m(ω/ωh)² − j(ω/ωh).
Next we multiply the brackets in the numerator and separate the real and imaginary parts:
$$\mathcal{N} = 1 + j\left[(m - 1)\,\frac{\omega}{\omega_h} - m^2\left(\frac{\omega}{\omega_h}\right)^3\right] \qquad (2.7.8)$$
By dividing the imaginary part of F(ω) by its real part, the (real) common denominator
cancels out from the phase:
$$\varphi = \arctan\frac{\Im\{\mathcal{N}\}}{\Re\{\mathcal{N}\}} = \arctan\left[(m - 1)\,\frac{\omega}{\omega_h} - m^2\left(\frac{\omega}{\omega_h}\right)^3\right] \qquad (2.7.9)$$
By inserting m = 0.4141 (Eq. 2.7.5), we would get the phase response of the MFA
case, as plotted in Fig. 2.7.3, curve a. But for the MFED response, the correct value of m
must be found from the envelope delay, so we must calculate dφ/dω from Eq. 2.7.9:
$$\tau_e = \frac{d\varphi}{d\omega} = \frac{d}{d\omega}\arctan\left[(m - 1)\,\frac{\omega}{\omega_h} - m^2\left(\frac{\omega}{\omega_h}\right)^3\right] = \frac{(m - 1) - 3 m^2 \left(\dfrac{\omega}{\omega_h}\right)^2}{1 + \left[(m - 1)\,\dfrac{\omega}{\omega_h} - m^2\left(\dfrac{\omega}{\omega_h}\right)^3\right]^2} \cdot \frac{1}{\omega_h} \qquad (2.7.10)$$
Let us square the bracket in the denominator, factor out (m − 1), and multiply both sides of
the equation by ωh in order to obtain the normalized envelope delay:
$$\tau_e\,\omega_h = \frac{(m - 1)\left[1 + \dfrac{3 m^2}{m - 1}\left(\dfrac{\omega}{\omega_h}\right)^2\right]}{1 + (m - 1)^2\left(\dfrac{\omega}{\omega_h}\right)^2 - 2 m^2 (m - 1)\left(\dfrac{\omega}{\omega_h}\right)^4 + m^4\left(\dfrac{\omega}{\omega_h}\right)^6} \qquad (2.7.11)$$
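For the MFED case the delay must stay flat at low frequencies, i.e. the first-order terms in (ω/ωh)² of the numerator and denominator of Eq. 2.7.11 must cancel; this reduces to 3m² = (1 − m)³, i.e. m³ + 3m − 1 = 0. A quick numerical solution by bisection (a sketch):

```python
def f(m):
    # flat-delay condition for the two-pole shunt peaking circuit:
    # 3*m^2 = (1 - m)^3  <=>  m^3 + 3*m - 1 = 0
    return m**3 + 3*m - 1

lo, hi = 0.0, 1.0
for _ in range(60):             # bisection on [0, 1]
    mid = (lo + hi) / 2
    if f(lo) * f(mid) <= 0:
        hi = mid
    else:
        lo = mid
m = (lo + hi) / 2
print(round(m, 4))   # ≈ 0.3222
```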
The only real solution of this equation is m = 0.3222. If we put it into Eq. 2.7.4 for
the frequency response, Eq. 2.7.9 for the phase response, and Eq. 2.7.11 for the envelope
delay, we can make the plots b in Fig. 2.7.2, 2.7.3, and 2.7.4.
Now we have enough data to calculate both poles and the zero for the MFA and
MFED case, which we shall also need to calculate the step response. But we still have to
find the value of m for the critical damping (CD) case. We can derive it from the fact that
for CD all the poles are real and equal. To find the poles, we take Eq. 2.7.3, divide it by R,
and replace the normalized frequency jω/ωh with the complex variable s:
$$F(s) = \frac{Z(s)}{R} = \frac{1 + m s}{1 + s + m s^2} = \frac{s + \dfrac{1}{m}}{s^2 + \dfrac{s}{m} + \dfrac{1}{m}} \qquad (2.7.14)$$
We obtain the normalized poles from the denominator of F(s):
$$s_{1n,2n} = \sigma_{1n} \pm j\,\omega_{1n} = -\frac{1}{2m} \pm \frac{1}{2m}\sqrt{1 - 4m} = \frac{1}{2m}\left(-1 \pm j\sqrt{4m - 1}\right) \qquad (2.7.15)$$
and the normalized real zero from the numerator:
$$s_{3n} = \sigma_{3n} = -\frac{1}{m} \qquad (2.7.16)$$
Since the poles are usually complex, we have written the complex form in the
solution of the quadratic equation (Eq. 2.7.15). However, for CD the solution must be real,
so the expression under the square root must be zero, and this gives m = 0.25. The curves
corresponding to CD in Fig. 2.7.2, 2.7.3, and 2.7.4 are marked with the letter c.
Note that, in spite of the higher cut off frequency, all the curves have the same high
frequency asymptote as the first-order response.
[Plot: normalized magnitude Vo/(Ii R) vs. ω/ωh, curves a) m = 0.41, b) m = 0.32,
c) m = 0.25, plus the non-peaking case L = 0; circuit values: L = mR²C, ωh = 1/RC.]
Fig. 2.7.2: Shunt peaking circuit frequency response: a) MFA; b) MFED; c) CD. As usual, the
non-peaking case (L = 0) is the reference. The system zero causes the high-frequency asymptote
to be the same as for the non-peaking system.
[Plot: phase φ[°] from 0 to −90° vs. ω/ωh, curves a) m = 0.41, b) m = 0.32,
c) m = 0.25, plus the non-peaking case L = 0; circuit values: L = mR²C, ωh = 1/RC.]
Fig. 2.7.3: Shunt peaking circuit phase response: a) MFA; b) MFED; c) CD. The non-peaking
case (L = 0) is the reference. The system zero causes the high frequency phase to be 90°, the
same as for the non-peaking system.
[Plot: normalized envelope delay τe·ωh from 0 to −1.0 vs. ω/ωh, curves a) m = 0.41,
b) m = 0.32, c) m = 0.25, plus the non-peaking case L = 0; circuit values: L = mR²C, ωh = 1/RC.]
Fig. 2.7.4: Shunt peaking circuit envelope delay: a) MFA; b) MFED; c) CD. The non-peaking
case (L = 0) is the reference. The peaking systems have a higher bandwidth, and consequently
a lower delay at DC, than the non-peaking system.
For the calculated values of m, the poles (Eq. 2.7.15) and the zero (Eq. 2.7.16) are:
a) for the MFA response:
the poles s1n,2n = −1.2071 ± j0.8105 and the zero s3n = σ3n = −2.4142
b) for the MFED response:
the poles s1n,2n = −1.5518 ± j0.5374 and the zero s3n = σ3n = −3.1037
c) for the CD response:
the double pole s1n,2n = σ1n = −2 and the zero s3n = σ3n = −4
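These values follow directly from Eq. 2.7.15 and 2.7.16; a minimal sketch:

```python
import cmath

def poles_zero(m):
    # Eq. 2.7.15 / 2.7.16 for the two-pole shunt peaking circuit
    s1 = (-1 + cmath.sqrt(1 - 4*m)) / (2*m)
    s2 = (-1 - cmath.sqrt(1 - 4*m)) / (2*m)
    s3 = -1 / m
    return s1, s2, s3

s1, s2, s3 = poles_zero(0.25)      # CD case
print(s1, s2, s3)                  # double pole at -2, zero at -4
print(poles_zero(0.3222)[2])       # MFED zero, ≈ -3.1037
```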
With these data we can calculate the step response. First we calculate the MFA
and MFED responses, where in both cases we have two complex conjugate poles and one
real zero. The general expression for the frequency response is:
$$F(s) = \frac{s_1 s_2\,(s - s_3)}{s_3\,(s - s_1)(s - s_2)} \qquad (2.7.17)$$
We multiply this equation by the unit step operator 1/s and obtain a new function:
$$G(s) = \frac{s_1 s_2\,(s - s_3)}{s\,s_3\,(s - s_1)(s - s_2)} \qquad (2.7.18)$$
To calculate the step response in the time domain we take the ℒ⁻¹ transform:
$$g(t) = \mathcal{L}^{-1}\{G(s)\} = \sum \operatorname{res}\,\frac{s_1 s_2\,(s - s_3)\,e^{st}}{s\,s_3\,(s - s_1)(s - s_2)} \qquad (2.7.19)$$
Since the procedure is the same as for the previous circuits, we shall omit some
intermediate expressions. After inserting all the pole components, the sum of residues is:
$$g(t) = 1 - \frac{A + jB}{2jB}\,e^{-\sigma_1 t}\,e^{j\omega_1 t} + \frac{A - jB}{2jB}\,e^{-\sigma_1 t}\,e^{-j\omega_1 t} \qquad (2.7.21)$$
where A = ω₁² + σ₁(σ₁ − σ₃) and B = ω₁σ₃. After factoring out e^{−σ₁t}/B we obtain:
The expression in parentheses can be simplified by sorting the real and imaginary parts:
$$\frac{A + jB}{2j}\,e^{j\omega_1 t} - \frac{A - jB}{2j}\,e^{-j\omega_1 t} = A\,\frac{e^{j\omega_1 t} - e^{-j\omega_1 t}}{2j} + B\,\frac{e^{j\omega_1 t} + e^{-j\omega_1 t}}{2} = A \sin\omega_1 t + B \cos\omega_1 t$$
where:
$$\theta_1 = \arctan\frac{\omega_1\,\sigma_3}{\omega_1^2 + \sigma_1\,(\sigma_1 - \sigma_3)} + \pi \qquad (2.7.25)$$
We now need the general expression for the step response for the CD case, where
we have a double real pole s1 and a real zero s3. We start from the normalized frequency
response function:
$$F(s) = \frac{s_1^2\,(s - s_3)}{s_3\,(s - s_1)^2} \qquad (2.7.26)$$
which must be multiplied by the unit step operator 1/s, thus obtaining:
$$G(s) = \frac{s_1^2\,(s - s_3)}{s\,s_3\,(s - s_1)^2} \qquad (2.7.27)$$
There are two residues:
$$\operatorname{res}_1 = \lim_{s \to s_1} \frac{d}{ds}\left[(s - s_1)^2\,\frac{s_1^2\,(s - s_3)\,e^{st}}{s\,s_3\,(s - s_1)^2}\right] = \left[s_1 t\left(1 - \frac{s_1}{s_3}\right) + 1\right]e^{s_1 t} \qquad (2.7.28)$$
If we express the poles in the second residue with their real and imaginary parts and
take the sum of both residues, we obtain:
$$g(t) = 1 - e^{-\sigma_1 t}\left[\sigma_1 t\left(1 - \frac{\sigma_1}{\sigma_3}\right) + 1\right] \qquad (2.7.29)$$
Finally we insert the normalized numerical values for the poles, the zero, and the
time variable t/T = t/RC into the step response.
The step response plots are shown in Fig. 2.7.5. By comparing them with those for
the two-pole series peaking circuit in Fig. 2.2.8, we note that the step response derivative at
time t = 0 is not zero, in contrast to that of the series peaking circuit. Instead the
responses look more like the step response of the non-peaking first-order case. The reason
for this is the difference between the number of poles and zeros, which is only 1 in favor
of the poles in the shunt peaking circuit.
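The nonzero initial slope is easiest to see in the CD case, where Eq. 2.7.29 with σ1 = 2 and σ3 = 4 gives g(t) = 1 − e^{−2t}(t + 1) for t in units of RC: the starting slope is σ1²/σ3 = 1, the same as for the non-peaking response 1 − e^{−t}, and there is no overshoot. A sketch:

```python
import math

def g_cd(t):
    # CD step response, Eq. 2.7.29 with sigma1 = 2, sigma3 = 4 (t in units of RC);
    # this simplifies to 1 - exp(-2t)*(t + 1)
    return 1 - math.exp(-2*t) * (2*t*(1 - 2/4) + 1)

h = 1e-6
slope = (g_cd(h) - g_cd(0.0)) / h
print(slope)                                  # ≈ 1.0 (nonzero initial slope)
print(max(g_cd(i*0.01) for i in range(601)))  # stays below 1 (no overshoot)
```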
[Plot: normalized step response vo/(ii R) vs. t/RC, curves a) m = 0.41, b) m = 0.32,
c) m = 0.25, plus the non-peaking case L = 0; circuit values: L = mR²C, ωh = 1/RC.]
Fig. 2.7.5: Shunt peaking circuit step response: a) MFA; b) MFED; c) CD. The non-peaking
case (L = 0) is the reference. The difference between the number of poles and the number of
zeros is only 1 for the shunt peaking systems, therefore the starting slope of the step response
is similar to that of the non-peaking first-order system.
Fig. 2.7.6 shows the pole placements for the three cases. Note the placement of the
zero, which is farther from the origin for those systems which have the poles with lower
imaginary part.
[Pole–zero diagrams, with
$$s_{1,2} = \frac{-1 \pm \sqrt{1 - 4m}}{2mRC}, \qquad s_3 = \frac{-1}{mRC}:$$
a) m = 0.4142: θ = 135°, zero at −2.41/RC;
b) m = 0.3222: θ = 150°, zero at −3.104/RC;
c) m = 0.2500: θ = 180° (double real pole at −2/RC), zero at −4/RC.]
Fig. 2.7.6: Shunt peaking circuit placement of the poles and the zero: a) MFA; b) MFED;
c) CD. Note the position of the zero s3 at the far left of the real axis. Although far from the
poles, its influence on the response is notable in each case.
We conclude the discussion with Table 2.7.1, in which all the important two-pole
shunt peaking circuit parameters are listed.
Table 2.7.1
response type |   m    |  ηb  |  ηr  | δ [%]
a) MFA        | 0.4141 | 1.72 | 1.80 | 3.08
b) MFED       | 0.3222 | 1.58 | 1.57 | 0.41
c) CD         | 0.2500 | 1.44 | 1.41 | 0.00
Table 2.7.1: Two-pole shunt-peaking circuit parameters.
If we consider the self capacitance CL of the coil L, the two-pole shunt peaking
circuit acquires an additional pole and an additional zero. If the value of CL cannot be
neglected, it must be in a well defined proportion to the other circuit components in order
to achieve optimum performance in the MFA or MFED sense. Fig. 2.8.1 shows the
corresponding three-pole, two-zero shunt peaking circuit.
Fig. 2.8.1: The shunt peaking circuit has three poles and two zeros.
$$Z(\omega) = \frac{R\,(1 - \omega^2 L C_L) + j\omega L}{j\omega C R\,(1 - \omega^2 L C_L) - \omega^2 L\,(C + C_L) + 1} \qquad (2.8.1)$$
The system transfer function can be obtained easily from Z(ω). We first replace the
normalized imaginary frequency j(ω/ωh) by the complex frequency variable s. Then, by
realizing that the output voltage is equal to the product of the input current and the system
impedance, Vo = Ii Z(s), we can express the transfer function by normalizing the output
to the final value at DC:
$$F(s) = \frac{V_o}{I_i\,R} = \frac{Z(s)}{R} \qquad (2.8.4)$$
The magnitude |F(ω)| = √(F(ω)·F*(ω)) can be obtained more easily from the
impedance magnitude. We start from Eq. 2.8.3, square the imaginary and real parts in the
numerator and in the denominator, and take the square root of the whole fraction:
$$|Z(\omega)| = R\,\sqrt{\frac{\left[1 - m n\left(\dfrac{\omega}{\omega_h}\right)^2\right]^2 + m^2\left(\dfrac{\omega}{\omega_h}\right)^2}{\left[1 - m\,(1 + n)\left(\dfrac{\omega}{\omega_h}\right)^2\right]^2 + \left(\dfrac{\omega}{\omega_h}\right)^2\left[1 - m n\left(\dfrac{\omega}{\omega_h}\right)^2\right]^2}} \qquad (2.8.6)$$
In Eq. 2.8.7 the normalized frequency ω/ωh is replaced by the symbol q in order to be
able to write the equation on a single line.
For the MFA response the numerator and denominator factors at the same powers
of (ω/ωh) in Eq. 2.8.7 must be equal [Ref. 2.4]. Thus we have two equations:
$$m^2 - 2 m n = 1 - 2 m\,(1 + n)$$
$$m^2 n^2 = m^2 (1 + n)^2 - 2 m n \qquad (2.8.8)$$
from which we calculate the values of m and n for the MFA response:
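Eq. 2.8.8 actually has a closed-form solution: in the first equation the 2mn terms cancel, leaving m² + 2m − 1 = 0, i.e. m = √2 − 1; the second equation then gives n = m/[2(1 − m)]. A sketch verifying both conditions:

```python
import math

m = math.sqrt(2) - 1        # from m^2 - 2mn = 1 - 2m(1+n)  =>  m^2 = 1 - 2m
n = m / (2 * (1 - m))       # from m^2 n^2 = m^2 (1+n)^2 - 2mn

# verify both MFA conditions of Eq. 2.8.8
r1 = m**2 - 2*m*n - (1 - 2*m*(1 + n))
r2 = m**2 * n**2 - (m**2 * (1 + n)**2 - 2*m*n)
print(abs(r1), abs(r2))              # both ≈ 0
print(round(m, 3), round(n, 3))      # 0.414 0.354
```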
For the MFED response the procedure for calculating the parameters m and n could
be similar to that for the two-pole shunt peaking circuit: we would first calculate the
formula for the envelope delay, equate the factors at the same powers of (ω/ωh) in the
numerator and the denominator, etc. But with the increasing number of poles the
calculation becomes more complicated. It is much simpler to compare the coefficients of
the characteristic polynomial of the complex frequency transfer function, Eq. 2.8.5.
The numerical values of the coefficients of the 3rd-order Bessel polynomial, sorted
by the falling powers of s, are: 1, 6, 15 and 15 again. Thus, we have two equations:
$$\frac{1 + n}{n} = 6 \qquad\text{and}\qquad \frac{1}{m n} = 15 \qquad (2.8.10)$$
from which we get:
$$n = \frac{1}{5} \qquad\text{and}\qquad m = \frac{1}{3} \qquad (2.8.11)$$
Compare these values to those from the work of V.L. Krejcer [Ref. 2.4, loc. cit.].
His values for MFED responses are:
Krejcer also calculated the parameters for a "special" case circuit (SPEC):
By inserting the values of the parameters from Eq. 2.8.9 – 2.8.12 into Eq. 2.8.7, we can
calculate the corresponding frequency responses. However, for the phase, envelope delay,
and step response we also need to know the values of the poles and zeros. Since we know all
the values of the parameters m and n, we can use Eq. 2.8.5. We equate the denominator of
F(s) to zero and find the roots, which are the three poles of F(s). Similarly, by equating
the numerator of F(s) to zero we calculate the two zeros (for readers less experienced
in mathematics, we have reported the general solutions for polynomials of first, second and
third order in Appendix 2.1).
By inserting the values of m and n in Eq. 2.8.7 we can calculate the frequency
response magnitude of the three cases. The resulting plots are shown in Fig. 2.8.2. Note the
high frequency asymptote, which is the same as for the non-peaking single-pole case.
[Plot: normalized magnitude Vo/(Ii R) vs. ω/ωh, curves a–c plus the non-peaking
case L = 0; circuit values: L = mR²C, ωh = 1/RC. Curve parameters:
      m      CL/C
a)  0.414   0.354
b)  0.333   0.200
c)  0.450   0.220 ]
Fig. 2.8.2: Three-pole shunt peaking circuit frequency response: a) MFA; b) MFED; c) SPEC.
The non-peaking case (L = 0) is the reference. The difference between the number of poles and
the number of zeros is only 1 for the peaking systems, therefore the final slope of the frequency
response is similar to that of the non-peaking system.
We use Eq. 2.2.30 for each pole and, with the opposite sign, for each zero, and sum
all the contributions:
$$\varphi = -\arctan\frac{\dfrac{\omega}{\omega_h} - \omega_{1n}}{\sigma_{1n}} - \arctan\frac{\dfrac{\omega}{\omega_h} + \omega_{1n}}{\sigma_{1n}} - \arctan\frac{\dfrac{\omega}{\omega_h}}{\sigma_{3n}} + \arctan\frac{\dfrac{\omega}{\omega_h} - \omega_{5n}}{\sigma_{5n}} + \arctan\frac{\dfrac{\omega}{\omega_h} + \omega_{5n}}{\sigma_{5n}} \qquad (2.8.13)$$
By entering the numerical values of poles and zeros we obtain the phase response
equations for each case. In Fig. 2.8.3 the corresponding plots are shown.
We use Eq. 2.2.34, adding a term for each pole and subtracting one for each zero:
$$\tau_e\,\omega_h = -\frac{\sigma_{1n}}{\sigma_{1n}^2 + \left(\dfrac{\omega}{\omega_h} + \omega_{1n}\right)^2} - \frac{\sigma_{1n}}{\sigma_{1n}^2 + \left(\dfrac{\omega}{\omega_h} - \omega_{1n}\right)^2} - \frac{\sigma_{3n}}{\sigma_{3n}^2 + \left(\dfrac{\omega}{\omega_h}\right)^2} + \frac{\sigma_{5n}}{\sigma_{5n}^2 + \left(\dfrac{\omega}{\omega_h} + \omega_{5n}\right)^2} + \frac{\sigma_{5n}}{\sigma_{5n}^2 + \left(\dfrac{\omega}{\omega_h} - \omega_{5n}\right)^2} \qquad (2.8.14)$$
Again we insert the numerical values for the poles and zeros in Eq. 2.8.14 to plot the
envelope delay, as shown in Fig. 2.8.4. As we have explained in Fig. 2.2.6, there is an
envelope advance (due to the system zeros) in the high frequency range.
[Plot: phase φ[°] from 0 to −90° vs. ω/ωh, curves a–c plus the non-peaking case
L = 0; circuit values and curve parameters as in Fig. 2.8.2.]
Fig. 2.8.3: Three-pole shunt peaking circuit phase response: a) MFA; b) MFED; c) SPEC.
The non-peaking case (L = 0) is the reference. The system zeros cause the phase response
to bounce off the 90° boundary and then return.
[Plot: normalized envelope delay τe·ωh from +0.2 to −1.0 vs. ω/ωh, curves a–c plus
the non-peaking case L = 0; circuit values and curve parameters as in Fig. 2.8.2.]
Fig. 2.8.4: Three-pole shunt peaking circuit envelope delay: a) MFA; b) MFED; c) SPEC.
The non-peaking (L = 0) case is shown as the reference. Note the envelope advance in the
high frequency range, due to the system zeros.
For the step response we use the general transfer function for three poles and
two zeros, which we shall reshape to suit a solution in the Laplace transform tables:
$$F(s) = \frac{s_1 s_2 s_3\,(s - s_4)(s - s_5)}{s_4 s_5\,(s - s_1)(s - s_2)(s - s_3)} \qquad (2.8.15)$$
We multiply this function by the unit step operator 1/s and obtain a new equation:
$$G(s) = \frac{1}{s} \cdot \frac{s_1 s_2 s_3\,(s - s_4)(s - s_5)}{s_4 s_5\,(s - s_1)(s - s_2)(s - s_3)} \qquad (2.8.16)$$
The step response g(t) is fully derived in Appendix 2.3. The result is expressed
with the following quantities:
$$A = \left[\sigma_1(\sigma_1 - \sigma_3) + \omega_1^2\right]\left[\sigma_1^2 + \omega_1^2 + \sigma_5^2 + \omega_5^2 - 2\sigma_5\sigma_1\right] - 2\sigma_5\,\omega_1^2\,(2\sigma_1 - \sigma_3)$$
$$B = (2\sigma_1 - \sigma_3)\left[\sigma_1^2 + \omega_1^2 + \sigma_5^2 + \omega_5^2 - 2\sigma_5\sigma_1\right] - 2\sigma_5\left[\sigma_1(\sigma_1 - \sigma_3) - \omega_1^2\right]$$
$$\theta_1 = \arctan\left(\frac{\omega_1 B}{A}\right) + \pi$$
$$K_1 = \frac{\sigma_3}{(\sigma_5^2 + \omega_5^2)\left[(\sigma_1 - \sigma_3)^2 + \omega_1^2\right]}$$
By inserting the numerical values of the poles and zeros we obtain the following relations:
a) MFA response (m = 0.414 and n = 0.354):
$$g(t) = 1 + 0.5573\,e^{-0.850\,t/T}\sin(1.577\,t/T + 0.7741 + \pi) - 0.6104\,e^{-2.125\,t/T}$$
b) MFED response (m = 0.333 and n = 0.200):
$$g(t) = 1 - 0.8054\,e^{-1.839\,t/T}\sin(1.754\,t/T + 0.1772 + \pi) - 1.1420\,e^{-2.322\,t/T}$$
c) SPEC response (m = 0.450 and n = 0.220):
$$g(t) = 1 + 0.8814\,e^{-1.035\,t/T}\sin(1.355\,t/T + 1.0333 + \pi) - 0.2429\,e^{-3.475\,t/T}$$
The plots of these responses can be seen in Fig. 2.8.5. Because the difference
between the number of poles and zeros is only one, the initial slope of the response is the
same as for the non-peaking response.
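These step responses can be sanity-checked numerically: g(0) must be 0 and g(t) must settle to 1. A sketch (T = 1; the term signs are taken here as the assumption that makes g(0) = 0):

```python
import math
PI = math.pi

def g_mfa(t):   # a) MFA, m = 0.414, n = 0.354
    return (1 + 0.5573*math.exp(-0.850*t)*math.sin(1.577*t + 0.7741 + PI)
              - 0.6104*math.exp(-2.125*t))

def g_mfed(t):  # b) MFED, m = 1/3, n = 1/5
    return (1 - 0.8054*math.exp(-1.839*t)*math.sin(1.754*t + 0.1772 + PI)
              - 1.1420*math.exp(-2.322*t))

for g in (g_mfa, g_mfed):
    print(round(g(0.0), 2), round(g(8.0), 2))   # ≈ 0.0 and ≈ 1.0
```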
[Plot: normalized step response vo/(ii R) vs. t/RC, curves a–c plus the non-peaking
case L = 0; circuit values: L = mR²C, ωh = 1/RC; curve parameters as in Fig. 2.8.2.]
Fig. 2.8.5: Three-pole shunt peaking circuit step response: a) MFA; b) MFED; c) SPEC. The
non-peaking case (L = 0) is the reference. The initial slope is similar to the non-peaking
response, since the difference between the number of system poles and zeros is only one.
We conclude the discussion of the three-pole, two-zero shunt peaking circuit with
Table 2.8.1, which gives all the important circuit parameters.
Table 2.8.1
response type |   m   |   n   |  ηb  |  ηr  | δ [%]
a) MFA        | 0.414 | 0.354 | 1.84 | 1.87 | 7.1
b) MFED       | 0.333 | 0.200 | 1.73 | 1.75 | 0.37
c) SPEC       | 0.450 | 0.220 | 1.85 | 1.93 | 7.0
Table 2.8.1: Three-pole shunt-peaking circuit parameters.
In those cases in which the amplifier capacitance may be split into two parts, Ci and
C, we can combine the shunt and the series peaking to form the network shown in Fig. 2.9.1,
named the shunt–series peaking circuit. The bandwidth of the shunt–series circuit is
increased further than can be achieved by either system alone.
[Fig. 2.9.1: The shunt–series peaking circuit: input current ii driving three parallel
branches — the input capacitance Ci, the shunt branch R in series with L1, and the series
branch L2 feeding the output capacitance C.]
Although the improvement of the bandwidth and rise time in a shunt–series peaking
circuit exceeds that of a pure series or pure shunt peaking circuit, the improvement factors
just barely reach the values offered by the three-pole T-coil circuit, which is analytically
and practically much easier to deal with; not to speak of the improvement offered by the
L+T network, which is substantially greater. This circuit has been extensively treated in
literature [Ref. 2.4, 2.25, 2.26]. The calculation of the step response for this circuit can be
found in Appendix 2.3, so we shall give only the essential relations.
We start the analysis by calculating the input impedance:
$$Z_i = \frac{V_i}{I_i} = \frac{V_i}{I_1 + I_2 + I_3} \qquad (2.9.1)$$
where:
$$I_1 = \frac{V_i}{\dfrac{1}{s C_i}} \qquad I_2 = \frac{V_i}{R + s L_1} \qquad I_3 = \frac{V_i}{s L_2 + \dfrac{1}{s C}} \qquad (2.9.2)$$
By introducing this into Eq. 2.9.1 and eliminating the double fractions we get:
$$Z_i = \frac{(R + s L_1)(1 + s^2 L_2 C)}{s C_i\,(R + s L_1)(1 + s^2 L_2 C) + s^2 L_2 C + 1 + s C\,(R + s L_1)} \qquad (2.9.3)$$
(divide by the coefficient at the highest power of s) first in the numerator (because it is
easy) and then in the denominator:
$$\frac{V_o}{I_i\,R} = \frac{\dfrac{L_1}{R}\left(s + \dfrac{R}{L_1}\right)}{s^4 L_1 L_2 C C_i + s^3 L_2 C C_i R + s^2\,(C_i L_1 + L_2 C + L_1 C) + s R\,(C_i + C) + 1}$$
$$= \frac{\dfrac{1}{L_2 C C_i R}\left(s + \dfrac{R}{L_1}\right)}{s^4 + s^3\,\dfrac{R}{L_1} + s^2\,\dfrac{L_2 C + L_1 C_i + L_1 C}{L_1 L_2 C C_i} + s\,\dfrac{R\,(C_i + C)}{L_1 L_2 C C_i} + \dfrac{1}{L_1 L_2 C C_i}} \qquad (2.9.6)$$
Since we would like to know how much we can improve the bandwidth with
respect to the non-peaking circuit (inductances shorted), let us normalize the transfer
function to ωh = 1/[R(Ci + C)] by putting R = 1 and Ci + C = 1. To simplify the
expressions, we introduce the following parameters:
$$n = \frac{C}{C + C_i} \qquad m_1 = \frac{L_1}{R^2\,(C + C_i)} \qquad m_2 = \frac{L_2}{R^2\,(C + C_i)} \qquad (2.9.7)$$
" "
7" Œ =
7" 7# 8 Ð" 8Ñ 7"
J Ð=Ñ œ
" 7 # 8 7" " "
=% = $ =# =
7" 7" 7# 8 Ð" 8Ñ 7" 7# 8 Ð" 8Ñ 7" 7# 8 Ð" 8Ñ
(2.9.9)
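A quick numerical cross-check (ours, not in the original text): with the MFED values m1 = 0.1, m2 = 0.4627 and n = 0.7101 quoted later in this section, the denominator coefficients of Eq. 2.9.9 must reproduce the fourth-order Bessel polynomial s⁴ + 10s³ + 45s² + 105s + 105:

```python
# Denominator coefficients of Eq. 2.9.9, ordered [s^4, s^3, s^2, s^1, s^0]
m1, m2, n = 0.1, 0.4627, 0.7101       # MFED values from Eqs. 2.9.17-2.9.21
d0 = m1 * m2 * n * (1 - n)            # the common factor m1*m2*n*(1 - n)
coeffs = [1.0, 1 / m1, (m2 * n + m1) / d0, 1 / d0, 1 / d0]
# coeffs comes out close to [1, 10, 45, 105, 105] (limited only by the
# four-digit rounding of m2 and n)
```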
Now we compare this with the generalized four-pole, one-zero transfer function:

$$F(s) = (-1)^4\,\frac{s_1s_2s_3s_4}{(s - s_1)(s - s_2)(s - s_3)(s - s_4)}\cdot\frac{s - s_5}{-s_5} \tag{2.9.10}$$
where:

$$a = -(s_1 + s_2 + s_3 + s_4) \qquad b = s_1s_2 + s_1s_3 + s_1s_4 + s_2s_3 + s_2s_4 + s_3s_4$$
$$c = -(s_1s_2s_3 + s_1s_2s_4 + s_1s_3s_4 + s_2s_3s_4) \qquad d = s_1s_2s_3s_4$$

are the coefficients of the denominator polynomial $s^4 + as^3 + bs^2 + cs + d$. For the MFED response the coefficients of the fourth-order Bessel polynomial (which we obtain by running the BESTAP routine in Part 6) have the following numerical values:

$$a = 10 \qquad b = 45 \qquad c = 105 \qquad d = 105$$
So, from $a$:

$$m_1 = \frac{1}{a} = 0.1 \tag{2.9.17}$$

From $b$ and $c$:

$$b = (m_2n + m_1)\,c \;\Longrightarrow\; m_2 = \frac{\dfrac{b}{c} - m_1}{n} \tag{2.9.18}$$

From $c$ or $d$:

$$n(1-n) = \frac{1}{c\,m_1m_2} \;\Longrightarrow\; n - n^2 - \frac{n}{105\cdot 0.1\cdot\left(\dfrac{45}{105} - 0.1\right)} = 0$$
$$\Longrightarrow\; n - n^2 - \frac{n}{3.45} = 0 \;\Longrightarrow\; n\left(1 - \frac{1}{3.45} - n\right) = 0 \tag{2.9.19}$$

And, since $n \neq 0$:

$$n = 1 - \frac{1}{3.45} = 0.7101 \tag{2.9.20}$$

With this we calculate $m_2$:

$$m_2 = \frac{\dfrac{b}{c} - m_1}{n} = \frac{\dfrac{45}{105} - 0.1}{0.7101} = 0.4627 \tag{2.9.21}$$
The component values for the MFED transfer function will be:

$$L_1 = m_1R^2(C_i + C) = 0.1\,R^2(C_i + C)$$
$$L_2 = m_2R^2(C_i + C) = 0.4627\,R^2(C_i + C)$$
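The chain of Eqs. 2.9.17–2.9.21 is easy to automate; the sketch below (ours, not part of the original text) starts from the Bessel coefficients a = 10, b = 45, c = 105 and recovers m1, n and m2:

```python
# MFED shunt-series design from the 4th-order Bessel coefficients
a, b, c = 10.0, 45.0, 105.0
m1 = 1 / a                                  # Eq. 2.9.17
# m2 = (b/c - m1)/n (Eq. 2.9.18); inserting it into n(1-n) = 1/(c m1 m2)
# leaves n*(1 - 1/(c*m1*(b/c - m1)) - n) = 0, and since n != 0:
n = 1 - 1 / (c * m1 * (b / c - m1))         # Eq. 2.9.20
m2 = (b / c - m1) / n                       # Eq. 2.9.21
```

The printed values m1 = 0.1, n = 0.7101, m2 = 0.4627 are recovered directly.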
The MFED poles $s_{1-4}$ (BESTAP routine, Part 6) and the zero $s_5$ (Eq. 2.9.11) are:

$$s_{1n,2n} = s_{1t,2t} = -2.8962 \pm j\,0.8672$$
$$s_{3n,4n} = s_{3t,4t} = -2.1038 \pm j\,2.6574$$
$$s_{5n} = -10.000 \tag{2.9.23}$$
For the MFA we can use the same procedure as in Sec. 2.3.1, but since we have a 4th-order system we would get an 8th-order polynomial and, consequently, a complicated set of equations to solve. Instead we shall use a simpler approach (which, by the way, can be used in any other case). We must first consider that our system will have a bandwidth larger than the normalized Butterworth system. Let $\eta_b$ be the proportionality factor between each normalized Butterworth pole and the corresponding shunt–series peaking system pole:

$$s_k = \eta_b\,s_{kt} \tag{2.9.24}$$

The normalized 4th-order Butterworth system poles (see Part 6, BUTTAP routine) are:

$$s_{1t,2t} = -0.3827 \pm j\,0.9239$$
$$s_{3t,4t} = -0.9239 \pm j\,0.3827 \tag{2.9.25}$$
The polynomial coefficients $a$, $b$, $c$ and $d$ of the shunt–series peaking system are then:

$$a = -\eta_b\,(s_{1t} + s_{2t} + s_{3t} + s_{4t}) = \eta_b\cdot 2.6131$$
$$b = \eta_b^2\,(s_{1t}s_{2t} + s_{1t}s_{3t} + s_{1t}s_{4t} + s_{2t}s_{3t} + s_{2t}s_{4t} + s_{3t}s_{4t}) = \eta_b^2\cdot 3.4142$$
$$c = -\eta_b^3\,(s_{1t}s_{2t}s_{3t} + s_{1t}s_{2t}s_{4t} + s_{1t}s_{3t}s_{4t} + s_{2t}s_{3t}s_{4t}) = \eta_b^3\cdot 2.6131 \tag{2.9.27}$$
$$d = \eta_b^4\,s_{1t}s_{2t}s_{3t}s_{4t} = \eta_b^4$$

Since the coefficients $a$, $b$, $c$ and $d$ are the same as in Eq. 2.9.15, we get four equations from which we will extract the values of the factors $\eta_b$, $m_1$, $m_2$ and $n$:

$$\eta_b\cdot 2.6131 = \frac{1}{m_1} \qquad \eta_b^2\cdot 3.4142 = \frac{m_2n + m_1}{m_1m_2\,n\,(1-n)}$$
$$\eta_b^3\cdot 2.6131 = \frac{1}{m_1m_2\,n\,(1-n)} \qquad \eta_b^4 = \frac{1}{m_1m_2\,n\,(1-n)} \tag{2.9.28}$$
Effectively, the pole multiplication factor is equal to the MFA bandwidth extension. Dividing the fourth equation of 2.9.28 by the third gives

$$\eta_b = 2.6131 \tag{2.9.29}$$

and from the first equation of 2.9.28 we can now calculate $m_1$:

$$m_1 = \frac{1}{2.6131\,\eta_b} = \frac{1}{\eta_b^2} = 0.1464 \tag{2.9.30}$$
From the last equation of 2.9.28 we can establish the relationship between $m_2$ and $n$:

$$\eta_b^4 = \frac{1}{m_1m_2\,n\,(1-n)} \;\Longrightarrow\; m_2 = \frac{1}{\eta_b^2\,n\,(1-n)} \tag{2.9.31}$$

Here we substitute $m_1$ with $1/\eta_b^2$ and $m_2$ with $1/[\eta_b^2\,n\,(1-n)]$ in the second equation of 2.9.28:

$$\eta_b^2\cdot 3.4142 = \eta_b^4\left[\frac{n}{\eta_b^2\,n\,(1-n)} + \frac{1}{\eta_b^2}\right] \;\Longrightarrow\; 3.4142 = \frac{2-n}{1-n}$$

from which $n = 0.5858$ and, by Eq. 2.9.31, $m_2 = 0.6036$. The zero, $s_5 = -1/m_1 = -\eta_b^2$, is:

$$s_{5n} = -6.8283$$
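The MFA numbers are just as easy to reproduce; the sketch below (our own cross-check, not part of the original text) follows Eqs. 2.9.28–2.9.31, using the Butterworth coefficients 2.6131 and 3.4142:

```python
# MFA shunt-series design from the 4th-order Butterworth coefficients
eta_b = 2.6131                        # from eta_b**3 * 2.6131 = eta_b**4
m1 = 1 / eta_b**2                     # Eq. 2.9.30
n = (3.4142 - 2) / (3.4142 - 1)       # from 3.4142 = (2 - n)/(1 - n)
m2 = 1 / (eta_b**2 * n * (1 - n))     # Eq. 2.9.31
```

The results m1 = 0.1464, n = 0.5858 and m2 = 0.6036 match Table 2.9.1.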
Let us insert these data into Eq. 2.9.9 and calculate the poles and the zero; the poles are the Butterworth poles of Eq. 2.9.25 multiplied by $\eta_b = 2.6131$:

$$s_{1n,2n} = -1.0000 \pm j\,2.4142 \qquad s_{3n,4n} = -2.4142 \pm j\,1.0000$$
We shall use the normalized formula which we developed for the 4-pole L+T circuit (Eq. 2.6.10), to which we must add the influence of the zero. The magnitude of the transfer function (to shorten the expression, we omit the index 'n' here) is:

$$|F(\omega)| = \frac{(\sigma_1^2+\omega_1^2)(\sigma_3^2+\omega_3^2)}{\sigma_5}\cdot\frac{\sqrt{\sigma_5^2+q^2}}{\sqrt{\left[\sigma_1^2+(q-\omega_1)^2\right]\left[\sigma_1^2+(q+\omega_1)^2\right]\left[\sigma_3^2+(q-\omega_3)^2\right]\left[\sigma_3^2+(q+\omega_3)^2\right]}} \tag{2.9.37}$$

where again $q = \omega/\omega_h$. In Fig. 2.9.2 we have plotted the responses resulting from this equation by inserting the values of the poles and the zero for our MFA and MFED.
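Eq. 2.9.37 is straightforward to evaluate; the Python sketch below (ours, not from the original text) uses the MFED poles and zero of Eq. 2.9.23, taking σ5 by its magnitude:

```python
import math

# MFED pole/zero data of Eq. 2.9.23 (real parts sigma, imaginary parts omega)
sig1, w1 = -2.8962, 0.8672
sig3, w3 = -2.1038, 2.6574
sig5 = -10.0

def mag(q):
    """|F| of Eq. 2.9.37 at the normalized frequency q = omega/omega_h."""
    num = (sig1**2 + w1**2) * (sig3**2 + w3**2) * math.sqrt(sig5**2 + q**2)
    den = abs(sig5) * math.sqrt(
        (sig1**2 + (q - w1)**2) * (sig1**2 + (q + w1)**2)
        * (sig3**2 + (q - w3)**2) * (sig3**2 + (q + w3)**2))
    return num / den
```

mag(0) equals 1 exactly, and the response falls to 1/√2 near q ≈ 2.18, the MFED bandwidth improvement listed in Table 2.9.1.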
[Plot: normalized magnitude Vo/(IiR) versus ω/ωh (log–log; 0.1 to 10 horizontally, 0.1 to 2.0 vertically), showing the non-peaking response (L1 = L2 = 0) and the two peaked responses; circuit annotations: ωh = 1/[R(C + Ci)], n = C/(C + Ci), L1 = m1R²(C + Ci), L2 = m2R²(C + Ci); a) m1 = 0.146, m2 = 0.604, n = 0.586; b) m1 = 0.100, m2 = 0.463, n = 0.710.]
Fig. 2.9.2: The shunt–series peaking circuit frequency response: a) MFA; b) MFED. Note that the MFA response is not exactly maximally flat, owing to the system zero.
As before, we apply Eq. 2.2.30 for each pole, with a negative contribution for the zero:

$$\varphi(\omega) = \arctan\frac{q-\omega_{1n}}{\sigma_{1n}} + \arctan\frac{q+\omega_{1n}}{\sigma_{1n}} + \arctan\frac{q-\omega_{3n}}{\sigma_{3n}} + \arctan\frac{q+\omega_{3n}}{\sigma_{3n}} - \arctan\frac{q}{\sigma_{5n}} \tag{2.9.38}$$

where $q = \omega/\omega_h$.
By inserting the values for the poles and the zero from the equations above, we obtain the responses shown in Fig. 2.9.3.
By inserting the values for the poles and the zero from the equations above, we obtain the responses shown in Fig. 2.9.4. Again, as in pure shunt peaking, we have a different low-frequency delay for each type of pole set, owing to the different normalization.
[Plot: phase φ in degrees (−30 to −270) versus ω/ωh (0.1 to 10, log scale), for L1 = L2 = 0 and for a) MFA (m1 = 0.146, m2 = 0.604, n = 0.586) and b) MFED (m1 = 0.100, m2 = 0.463, n = 0.710), with ωh = 1/[R(C + Ci)], L1 = m1R²(C + Ci), L2 = m2R²(C + Ci).]
Fig. 2.9.3: The shunt–series peaking circuit phase response: a) MFA; b) MFED.
[Plot: normalized envelope delay τe·ωh (0 to −1.4) versus ω/ωh (0.1 to 10, log scale), for L1 = L2 = 0 and for the same cases a) MFA and b) MFED with the component values given in Fig. 2.9.2.]
Fig. 2.9.4: The shunt–series peaking circuit envelope delay: a) MFA; b) MFED.
The normalized general expression for four poles and one zero in the frequency domain is:

$$F(s) = \frac{s_1s_2s_3s_4\,(s - s_5)}{-s_5\,(s - s_1)(s - s_2)(s - s_3)(s - s_4)} \tag{2.9.40}$$

To get the step response in the $s$ domain, we multiply $F(s)$ by the unit step operator $1/s$:

$$G(s) = \frac{s_1s_2s_3s_4\,(s - s_5)}{-s\,s_5\,(s - s_1)(s - s_2)(s - s_3)(s - s_4)} \tag{2.9.41}$$

The step response in the time domain is obtained by taking the $\mathcal{L}^{-1}$ transform:

$$g(t) = \mathcal{L}^{-1}\left\{G(s)\right\} \tag{2.9.42}$$

This formula requires even more effort than was spent for the L+T network. We shall skip the lengthy procedure (which is presented in Appendix 2.3) and give only the solution, which for all the listed poles and zeros is:
$$g(t) = 1 + \frac{G_1}{\sigma_5\,\omega_1}\,e^{\sigma_1 t/T}\left[M\sin(\omega_1 t/T) + \omega_1N\cos(\omega_1 t/T)\right] + \frac{G_3}{\sigma_5\,\omega_3}\,e^{\sigma_3 t/T}\left[P\sin(\omega_3 t/T) + \omega_3Q\cos(\omega_3 t/T)\right] \tag{2.9.43}$$

where:

$$M = (\sigma_1 + \sigma_5)\left[\sigma_1A + \omega_1^2B\right] + \omega_1^2\,(A + \sigma_1B)$$
$$N = \left[\sigma_1A + \omega_1^2B\right] + (\sigma_1 + \sigma_5)(A + \sigma_1B)$$
$$P = (\sigma_3 + \sigma_5)\left[\sigma_3C + \omega_3^2B\right] + \omega_3^2\,(C + \sigma_3B)$$
$$Q = \left[\sigma_3C + \omega_3^2B\right] + (\sigma_3 + \sigma_5)(C + \sigma_3B) \tag{2.9.44}$$

whilst $A$, $B$, $C$, $G_1$ and $G_3$ are the same as for the L+T network (Eq. 2.6.17).

The plots in Fig. 2.9.5 and Fig. 2.9.6 were calculated and drawn using these formulae.
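Because the constants A, B, C, G1 and G3 are defined back at the L+T network (Eq. 2.6.17), an independent cross-check is useful. The sketch below (ours, not part of the original text) computes g(t) by a direct partial-fraction expansion of G(s), using only the MFED poles and zero of Eq. 2.9.23:

```python
import cmath

poles = [-2.8962 + 0.8672j, -2.8962 - 0.8672j,
         -2.1038 + 2.6574j, -2.1038 - 2.6574j]
zero = -10.0

# Gain K normalizes F(s) = K (s - zero) / prod(s - p) so that F(0) = 1
prod_mp = 1.0 + 0j
for p in poles:
    prod_mp *= -p
K = prod_mp / (-zero)

def g(t):
    """Unit step response: 1 (residue at s = 0) plus the pole residues."""
    acc = 1.0 + 0j
    for i, p in enumerate(poles):
        d = p                               # denominator: p * prod(p - p_j)
        for j, q in enumerate(poles):
            if j != i:
                d *= p - q
        acc += K * (p - zero) / d * cmath.exp(p * t)
    return acc.real

gmax = max(g(0.05 * k) for k in range(201))  # scan 0 <= t/T <= 10
```

The overshoot found this way is about 0.9 %, in agreement with Table 2.9.1.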
Let us now compare the MFED response with those obtained by Braude and Shea.
The step response relation is the same for all three systems (Eq. 2.9.43, 2.9.44), but the pole
and zero values are different. As it appears from the comparison of the characteristic
polynomial coefficients and even more so from the comparison of the poles and zeros, the
three systems were optimized in different ways. This is evident from Fig. 2.9.7.
Although at first glance all three step responses look very similar (Fig. 2.9.6), a
closer look reveals that the Braude case has an excessive overshoot. The Shea case has the
steepest slope (largest bandwidth), but this is paid for by an extra overshoot and ringing.
The Bessel system has the lowest transient slope; however, it has the minimal overshoot and it is the first to settle to < 0.1% of the final amplitude value (at about t/T = 2.7).
[Plot: step responses vo/(iiR) versus t/[R(C + Ci)] (0 to 5), for L1 = L2 = 0 and for a) MFA (m1 = 0.146, m2 = 0.604, n = 0.586) and b) MFED (m1 = 0.100, m2 = 0.463, n = 0.710), with ωh = 1/[R(C + Ci)], L1 = m1R²(C + Ci), L2 = m2R²(C + Ci).]
Fig. 2.9.5: The shunt–series peaking circuit step response: a) MFA; b) MFED.
[Plot: step responses vo/(iiR) versus t/[R(C + Ci)] (0 to 5) together with a ×10 vertically expanded view of the region 0.90 to 1.02 around the final value, for L1 = L2 = 0 and for: a) Shea, m1 = 0.133, m2 = 0.467, n = 0.667; b) Braude, m1 = 0.122, m2 = 0.511, n = 0.656; c) Bessel, m1 = 0.100, m2 = 0.464, n = 0.710.]
Fig. 2.9.6: The MFED shunt–series peaking circuit step response: a) by Shea; b) by Braude;
c) a true Bessel system. The ×10 vertical scale expansion shows the top 10 % of the response.
The overshoot in the Braude case is excessive, whilst the Shea version has a prolonged
ringing. Although slowest, the Bessel system is the first to settle to < 0.1 % of the final value.
The pole layout in Fig. 2.9.7 confirms the statements above. In the Braude case the two poles with the smaller imaginary part are too far from the imaginary axis to compensate the peaking of the two poles closer to it, so the overshoot is inevitable. The Shea case has the widest pole spread and consequently the largest bandwidth, but the two poles with the lower imaginary part are too close to the imaginary axis (this is needed in order to level out the peaks and dips in the frequency response). As a consequence, whilst the overshoot is just acceptable, there is some long-term ringing, impairing the system's settling time. The Bessel system pole layout follows the theoretical requirement. In spite of the presence of the zero (located far from the poles, the farthest of all three systems), the system performance is optimal.
[Pole–zero plot in the s-plane (σ from −10 to +2, jω vertical). Poles: Shea s1,2 = −2.1360 ± j1.0925, s3,4 = −1.6234 ± j3.1556; Braude s1,2 = −2.6032 ± j0.9618, s3,4 = −1.4951 ± j2.6446; Bessel s1,2 = −2.8976 ± j0.8649, s3,4 = −2.1024 ± j2.6573. Zeros (s5): Bessel −10.0, Braude −8.2, Shea −7.5.]
Fig. 2.9.7: The MFED shunt–series peaking circuit pole loci of the three different systems.
The zero of each system is too far from the poles to have much influence. It is interesting how
a similar step response can be obtained using three different optimization strategies. Strictly
speaking, only the Bessel system is optimal.
Let us conclude this section with Table 2.9.1, in which we have collected all the design parameters, in addition to the bandwidth and rise time improvements and the overshoots for the cases discussed.

  response type   author   m1       m2       n        ηb     ηr     δ [%]
  a) MFA          PS/EM    0.1464   0.6036   0.5858   2.61   2.72   12.23
  b) MFED         PS/EM    0.1000   0.4627   0.7101   2.18   2.21    0.90
  c) MFED         Shea     0.133    0.467    0.667    2.44   2.39    1.86
  d) MFED         Braude   0.122    0.511    0.656    2.50   2.36    4.45

Table 2.9.1: Shunt–series peaking circuit parameters.
[Plot: MFA frequency-response magnitudes Vo/(IiR) versus ω/ωh (log–log; 0.1 to 10 horizontally, 0.1 to 2.0 vertically) for: a) no peaking (single-pole); b) series, 2-pole; c) series, 3-pole; d) shunt, 2-pole, 1-zero; e) shunt, 3-pole, 1-zero; f) shunt–series, 4-pole, 1-zero; g) T-coil, 2-pole; h) T-coil, 3-pole; i) L+T-coil, 4-pole.]
Fig. 2.10.1: MFA frequency responses of all the circuit configurations discussed. The 4-pole L+T-coil response i) has by far the largest bandwidth.
[Plot: MFED step responses vo/(iiR) versus t/T (0 to 3) for the same nine configurations, a) no peaking (single-pole) through i) L+T-coil, 4-pole, with the legend as in Fig. 2.10.1.]
Fig. 2.10.2: MFED step responses of all the circuit configurations discussed. Again, the 4-pole L+T-coil step response i) has the steepest slope, but the 3-pole T-coil response h) is close.
[Plot: six panels of step responses vo/(iiR) versus t/T (0 to 3), each showing the nominal response together with +40% and −40% variations of one circuit element (among them ΔLi and ΔCi); circuit annotations: T = R(C + Ci), L = R²C, Li = mR²Ci; nominal parameters m = 0.65, k = 0.57, C/Ci = 3.46, Cb/C = 0.068.]
Fig. 2.11.1: Four-pole L+T peaking circuit step response sensitivity to component tolerances. Such graphs were originally drawn by Carl Battjes for a class lecture at Tektronix, but with a different set of reference parameters. The responses presented here were obtained using the MicroCAP-5 circuit analysis program [Ref. 2.36].
These figures prove that the inductance L, the coupling factor k, and the loading capacitance C must be kept within close tolerances in order to achieve the desired performance, whilst the tolerances of the bridging capacitance Cb of the T-coil, the input coil Li, and the input capacitance Ci are less critical. Therefore the construction of a properly calculated T-coil is not a simple matter. In some respects it resembles a digital AND function: only if all the parameters are set correctly will the result be an efficient peaking circuit. There is not much room for compromise here.
In the serial production of wideband amplifiers there are always some tolerances of the stray capacitances, so the T-coils must be made adjustable. Usually the coils are wound on a simple polystyrene cylindrical coil form, with a threaded ferrite core inside; by adjusting the core the required inductance can be set. However, the coupling factor k depends only on the coil length-to-diameter ratio (l/D) [Ref. 2.33] and is independent of whether the coil has a ferrite core inside or not. The relation between the coupling factor k and the ratio l/D is shown in the diagram in Fig. 2.11.2, which is valid for center-tapped cylindrical coils.
[Plot: coupling factor k (log scale, 0.1 to 1.0) versus l/D (0.01 to 10) for a center-tapped cylindrical coil of N = 2n turns (n turns per half) and diameter D.]
Fig. 2.11.2: T-coil coupling factor k as a function of the coil's length-to-diameter (l/D) ratio.
The coil inductance L depends on the number of turns, on the length-to-diameter ratio set by the coil form on which the coil is wound, and on the ferrite core, if any is used; both the coil form and the core can be obtained from different manufacturers, together with formulae for calculating the required number of turns. However, these formulae are often given as some sort of 'cookbook recipe', with the key parameters given in numerical form for a particular product. As such, they are satisfactory for building general-purpose coils, but they do not offer the understanding needed to perform an optimization within the set of possible solutions.
The reader is therefore forced to look for more thorough theoretical explanations in standard physics and electronics textbooks.
The main problem with Eq. 2.11.1 is the term 'homogeneous'; it implies a uniform magnetic field, entirely contained within the coil, with no appreciable leakage outwards. Toroidal coils are not easy to build and cannot be made adjustable, so in practice cylindrical coils are widely used. For a cylindrical coil the magnetic field has the same form as that of a bar magnet: the field lines close outside the coil, and at both ends the field is non-homogeneous. Because of this, the inductance is reduced by a form factor ζ, which is a function of the ratio D/l (Fig. 2.11.3).
An important note for T-coil production: the form factor, and consequently the inductance, increases with D and decreases with l, in contrast to the coupling factor k. This additionally restricts our choice of D and l.
Also, if the coil is going to be adjustable, the relative permeability of the core material, μr, must be taken into account; however, only a part of the magnetic field will be contained within the core, so we introduce an average permeability, μ̄r, reflecting the fact that only a part of the turns will encircle the core. The relative permeability of air is 1 and that of the ferromagnetic core material can be anything up to several hundred. However, since the field path in air will be much longer than inside the core, the average permeability will be rather low. Note also that the core material is 'slow', i.e., its permeability has an upper frequency limit, often lower than our bandwidth requirement.
Finally, if the bridging capacitance Cb of the T-coil network has to be precisely known, we must take into account the coil's self-capacitance, Cs, which appears in parallel with the coil, with a value equivalent to the series connection of the capacitances between adjacent turns. Owing to Cs the inductance will appear lower when measured, so Cs should also be measured and the actual inductance value calculated from the two measurements. If the turns are tightly wound, the relative permittivity εr of the wire insulation lacquer must be considered; its value is several times larger than that of air. The lacquer thickness also influences Cs. If Cs is too large it can easily be reduced by increasing the distance between the turns by a small amount, δ, but this will also cause additional field leakage and slightly reduce the inductance. To compensate, the number of turns can be increased;
because the inductance increases with N², this will outweigh the slight decrease resulting from the larger length l.
Multi-layer coils are less suitable for use in wideband amplifiers, because of the high capacitance between adjacent layers.
Fortunately, wideband amplification does not require large inductance values. Also, since the inductances are always in series with relatively large resistive loads (almost never less than 50 Ω), the wire resistance and the skin effect can usually be neglected.
With all these considerations the inductance becomes:

$$L = \zeta\,\frac{\bar{\mu}_r\,\mu_0\,\pi D^2\,l}{4\,(d+\delta)^2} \tag{2.11.2}$$

where $d$ is the wire diameter and $N = l/(d+\delta)$ is the number of turns.
Fig. 2.11.3 shows the value of ζ as a function of the ratio D/l. The actual function is found through elliptic integration of the magnetic field flux density, which is too complex to be solved analytically here. But a fair approximation, fitting the experimental data to better than 1%, can be obtained using the following relation:

$$\zeta = \frac{a}{a + (D/l)^b} \tag{2.11.3}$$

where:

$$a = 2 \qquad b = \sin\frac{\pi}{3}$$
[Plot: log–log graph of ζ (10⁻² to 10⁰) versus D/l (10⁻² to 10²), annotated with ζ = a/(a + (D/l)^b), a = 2, b = sin(π/3), L = ζ μ̄r μ0 π D² l / [4(d + δ)²], N = l/(d + δ), 0 ≤ δ ≤ d.]
Fig. 2.11.3: The ζ factor as a function of the coil diameter-to-length ratio, D/l. The equation shown in the upper right corner fits experimental data to better than 1%.
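Eqs. 2.11.2 and 2.11.3 are easily combined into a small design aid. In the Python sketch below (ours, not from the original text) the coil dimensions are hypothetical, chosen only to produce a plausible sub-microhenry example:

```python
import math

MU0 = 4e-7 * math.pi                  # permeability of free space [H/m]

def zeta(D_over_l):
    """Form factor of Eq. 2.11.3 (fits experimental data to ~1 %)."""
    a, b = 2.0, math.sin(math.pi / 3)
    return a / (a + D_over_l**b)

def inductance(D, l, d, delta, mu_r=1.0):
    """Eq. 2.11.2: N = l/(d + delta) turns of wire diameter d, spaced
    delta, wound over length l on a form of diameter D (all in metres)."""
    return zeta(D / l) * mu_r * MU0 * math.pi * D**2 * l / (4 * (d + delta)**2)

L = inductance(D=3e-3, l=6e-3, d=0.2e-3, delta=0.05e-3)   # about 0.67 uH
```

Note that ζ behaves as expected at the limits: for a long thin coil (D/l → 0) it approaches 1, recovering the homogeneous-field formula, while for short coils it drops quickly.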
Inductances are susceptible to external fields, mainly from the power supply, or
other nearby inductances. The influence of a nearby inductance can be minimized by a
perpendicular orientation of coil axes. Otherwise, the circuit should be appropriately
shielded, but the shield will act as a shorted single-turn inductance, lowering the effective
coil inductance if it is too close.
In modern miniaturized, bandwidth-hungry circuits the coil dimensions become critical, and one possible solution is to construct the coil in a planar spiral form on the two sides of a printed circuit board, or even within an integrated circuit. This offers more tightly controlled parameter tolerances, but there is no free lunch: the price to pay is many weeks or even months of computer simulation before a satisfactory solution is found by trial and error, since the exact mathematical relations are extremely complicated (a good example of how this is done can be found in the excellent article by J. Van Hese of Agilent Technologies [Ref. 2.37], where the finite-element numerical analysis method is used).
The following figures show a few examples of planar coils made directly on the
surface of IC chips, ceramic hybrids, or double-sided conventional PCBs.
Fig. 2.11.6: A planar T-coil with a high coupling factor, realized on a conventional
double-sided PCB. Multi-turn spiral structures are also possible, but need at least a
three-layer board for making the inner to outer turn connections.
References:
[2.1] S. Butterworth, On the Theory of Filter Amplifiers,
Experimental Wireless & Wireless Engineer, Vol. 7, October, 1930, pp. 536–541.
[2.2] G.E. Valley & H. Wallman, Vacuum tube amplifiers,
MIT, Radiation Laboratory Series, Vol. 18, McGraw-Hill, New York, 1948.
[2.3] J. Bednařík & J. Daněk, Obrazové zesilovače pro televisi a měřicí techniku
(Video Amplifiers for Television and Measuring Techniques),
Státní nakladatelství technické literatury, Prague, 1957.
[2.4] E.L. Ginzton, W.R. Hewlett, J.H. Jasberg, J.D. Noe, Distributed Amplification,
Proc. I.R.E., Vol. 36, August, 1948, pp. 956–969.
[2.5] P. Starič, An Analysis of the Tapped-Coil Peaking Circuit for Wideband/Pulse Amplifiers,
Elektrotehniški vestnik, 1982, pp. 66–79.
[2.6] P. Starič, Three- and Four-Pole Tapped Coil Circuits for Wideband/Pulse Amplifiers,
Elektrotehniški vestnik, 1983, pp. 129–137.
[2.7] P. Starič, Application of T-coil Interstage Coupling in Wideband/Pulse Amplifiers,
Elektrotehniški vestnik, 1990, pp. 143–152.
[2.8] D.L. Feucht, Handbook of Analog Circuit Design,
Academic Press, Inc. San Diego, 1990.
[2.9] J.L. Addis, Good Engineering and Fast Vertical Amplifiers, Part 4, section 14,
Analog Circuit Design, edited by J. Williams,
Butterworth-Heinemann, Boston, 1991.
[2.10] M.E. Van Valkenburg, Introduction to Modern Network Synthesis,
John Wiley, New York, 1960.
[2.11] G. Daryanani, Principles of Active Network Synthesis and Design,
John Wiley, New York, 1976.
[2.12] T.R. Cuthbert, Circuit Design using Personal Computers,
John Wiley, 1983.
[2.13] G.J.A. Byrd, Design of Continuous and Digital Electronic Systems,
McGraw-Hill (U K), London, 1980.
[2.14] P. Starič, Interpolacija med Butterworthovimi in Thomsonovimi poli
(Interpolation between Butterworth’s and Thomson’s poles),
Elektrotehniški vestnik, 1987, pp. 133–139.
[2.15] G.A. Korn & T.M. Korn, Mathematical Handbook for Scientists and Engineers,
McGraw-Hill, New York, 1961.
[2.16] F.A. Muller, High-Frequency Compensation of RC Amplifiers,
Proceedings of the I.R.E., August, 1954, pp. 1271–1276.
[2.17] C.R. Battjes, Technical Notes on Bridged T-coil Peaking,
(Internal Publication), Tektronix, Inc., Beaverton, Ore., 1969.
[2.18] C.R. Battjes, Who Wakes the Bugler?, Part 2, section 10,
The Art and Science of Analog Circuit Design, edited by J. Williams,
Butterworth-Heinemann, Boston, 1995.
[2.19] N.B. Schrock, A New Amplifier for Milli-Microsecond Pulses,
Hewlett-Packard Journal, Vol. 1, No. 1., September, 1949.
[2.20] W.R. Horton, J.H. Jasberg, J.D. Noe, Distributed Amplifiers: Practical Consideration and
Experimental Results, Proc. I.R.E. Vol. 39, pp. 748–753.
[2.21] R.I. Ross, Wang Algebra Speeds Network Computation of Constant Input Impedance Networks,
(Internal Publication), Tektronix, Inc., Beaverton, Ore. 1968.
[2.22] R.J. Duffin, An Analysis of the Wang Algebra of the Networks,
Trans. Amer. Math. Soc., 1959, pp. 114–131.
P.Starič, E.Margan Appendix 2.1
Appendix 2.1
General Solutions for 1st-, 2nd-, 3rd- and 4th-order Polynomials
Second-order polynomial: $ax^2 + bx + c = 0$

Canonical form: $x^2 + \dfrac{b}{a}x + \dfrac{c}{a} = 0$

Solutions:

$$x_{1,2} = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}$$
Third-order polynomial: $x^3 + ax^2 + bx + c = 0$

Solutions:

By substituting:

$$K = \sqrt{a^2 - 3b}$$
$$M = 4a^3c - a^2b^2 - 18abc + 4b^3 + 27c^2$$
$$N = 2a^3 - 9ab + 27c$$

and writing

$$\varphi = \frac{1}{3}\arctan\frac{jN}{3\sqrt{3M}}$$

the real solution is:

$$x_1 = -\frac{a}{3} + \frac{2}{3}\,K\sin\varphi$$

and the remaining two solutions are:

$$x_2 = -\frac{a}{3} - \frac{1}{3}\,K\sin\varphi + \frac{\sqrt{3}}{3}\,K\cos\varphi$$
$$x_3 = -\frac{a}{3} - \frac{1}{3}\,K\sin\varphi - \frac{\sqrt{3}}{3}\,K\cos\varphi$$
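The trigonometric solution above can be checked numerically; Python's cmath arctangent accepts the imaginary argument directly. The sample cubic x³ − 7x² + 14x − 8 = 0, with known roots 1, 2 and 4, is our own check and not from the original text:

```python
import cmath

a, b, c = -7.0, 14.0, -8.0            # x^3 - 7x^2 + 14x - 8 = 0
K = cmath.sqrt(a**2 - 3 * b)
M = 4 * a**3 * c - a**2 * b**2 - 18 * a * b * c + 4 * b**3 + 27 * c**2
N = 2 * a**3 - 9 * a * b + 27 * c
phi = cmath.atan(1j * N / (3 * cmath.sqrt(3 * M))) / 3

x1 = -a / 3 + (2 / 3) * K * cmath.sin(phi)
x2 = -a / 3 - K * cmath.sin(phi) / 3 + K * cmath.cos(phi) * cmath.sqrt(3) / 3
x3 = -a / 3 - K * cmath.sin(phi) / 3 - K * cmath.cos(phi) * cmath.sqrt(3) / 3
roots = sorted(x.real for x in (x1, x2, x3))   # approximately [1, 2, 4]
```

Here M is negative (three real roots), so √(3M) is imaginary and jN/(3√(3M)) becomes real; the principal branches of cmath handle this automatically.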
The quartic then factors into quadratics of the form:

$$x^2 + \frac{a + A}{2}\,x + \left(y + \frac{ay - c}{A}\right) = 0$$

where:

$$A = \pm\sqrt{8y + a^2 - 4b}$$

and $y$ is a real root of the resolvent cubic.
Appendix 2.2
Normalization of complex frequency response functions
or
Why do some expressions have strange signs?

$$F(s) = \frac{1}{\prod\limits_{i=1}^{N}(s - s_i)} \tag{A2.2.1}$$

$$F(0) = \frac{1}{\prod\limits_{i=1}^{N}(0 - s_i)} = \frac{1}{\prod\limits_{i=1}^{N}(-s_i)} \tag{A2.2.2}$$

$$F_n(s) = \frac{F(s)}{F(0)} = \frac{\dfrac{1}{\prod\limits_{i=1}^{N}(s - s_i)}}{\dfrac{1}{\prod\limits_{i=1}^{N}(-s_i)}} = \frac{\prod\limits_{i=1}^{N}(-s_i)}{\prod\limits_{i=1}^{N}(s - s_i)} \tag{A2.2.3}$$
-A2.2.1-
P.Starič, E.Margan Appendix 2.2
The numerator of the last term in Eq. A2.2.3 can be written so that the signs are collected together in a separate product, defining the sign of the total:

$$F_n(s) = \frac{(-1)^N\prod\limits_{i=1}^{N}s_i}{\prod\limits_{i=1}^{N}(s - s_i)} \tag{A2.2.4}$$

This means that all odd-order functions must be multiplied by −1 in order to have a correctly normalized expression. But please note that the sign-defining expression $(-1)^N$ is not the consequence of all our poles lying in the left half of the complex plane, as is sometimes wrongly explained in the literature!
In Eq. A2.2.4 the poles still retain their actual values, be they positive or negative. The term $(-1)^N$ is just the consequence of the mathematical operation (subtraction) required by the function: $s$ must acquire the exact value of $s_i$, sign included, if the function is to have a pole at $s_i$:

$$s - s_i = 0\;\Big|_{s=s_i} \;\Longrightarrow\; F(s_i) = \pm\infty \tag{A2.2.5}$$
In some literature the sign is usually neglected because we are all too often interested only in the frequency-response magnitude, which is the absolute value of F(s), or |F(s)|. However, as amplifier designers we are interested mainly in the system's time-domain performance. If we calculate it by the inverse Laplace transform we must have the correct sign of the transfer function, and consequently the signs of the residues at each pole.
In addition it is important to note that a system with zeros must have the product of zeros normalized in the same way (even if some of the systems with zeros do not have a DC response, such as high-pass and band-pass systems). If our system has poles $p_i$ and zeros $z_k$, the normalized transfer function is:

$$F_n(s) = \frac{(-1)^N\prod\limits_{i=1}^{N}p_i}{\prod\limits_{i=1}^{N}(s - p_i)}\cdot\frac{\prod\limits_{k=1}^{M}(s - z_k)}{(-1)^M\prod\limits_{k=1}^{M}z_k} \tag{A2.2.6}$$

The combined sign factor is, incidentally, also equal to $(-1)^{N-M}$, but there is nothing mystical about that, really.
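The (−1)^N bookkeeping is easy to demonstrate numerically; a minimal sketch (ours, with an arbitrary odd-order pole set) shows that the normalization of Eq. A2.2.4 indeed gives Fn(0) = 1:

```python
poles = [-1.0, -0.5 + 2.0j, -0.5 - 2.0j]      # N = 3, an odd order

def Fn(s):
    """Normalized all-pole function of Eq. A2.2.4."""
    num = (-1.0)**len(poles)
    for p in poles:
        num *= p
    den = 1.0 + 0j
    for p in poles:
        den *= s - p
    return num / den

F0 = Fn(0)                                    # must be exactly 1
```

Omitting the (−1)^N factor here would give Fn(0) = −1, i.e. an inverted step response after the inverse Laplace transform, which is exactly the pitfall the text warns about.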
P. Starič, E. Margan
Wideband Amplifiers
Part 3:
Back To Basics
P. Starič, E. Margan Wideband Amplifier Stages with Semiconductor Devices
Contents:
3.0 Introduction: A Farewell to Exact Calculations ............................................................................... 3.7
3.1 Common Emitter Amplifier ............................................................................................................... 3.9
3.1.1 Calculation of Voltage Amplification (Based on Fig. 3.1.1d) ........................................ 3.14
3.2 Transistor as an Impedance Converter ............................................................................................ 3.17
3.2.1 Common Base Transistor Small Signal HF Model ........................................................ 3.17
3.2.2 The Conversion of Impedances ...................................................................................... 3.20
3.2.3 Examples of Impedance Transformations ...................................................................... 3.21
3.2.4 Transformation of Combined Impedances ..................................................................... 3.26
3.3 Common Base Amplifier ................................................................................................................. 3.33
3.3.1 Input Impedance ............................................................................................................ 3.34
3.4 Cascode Amplifier ........................................................................................................................... 3.37
3.4.1 Basic Analysis ................................................................................................................ 3.37
3.4.2 Damping of the Emitter Circuit of Q2 ............................................. 3.38
3.4.3 Thermal Compensation of Transistor Q1 ....................................... 3.42
3.5 Emitter Peaking in a Cascode Amplifier ......................................................................................... 3.49
3.5.1 Basic Analysis ................................................................................................................ 3.49
3.5.2 Input Impedance Compensation ..................................................................................... 3.54
3.6 Transistor Interstage T-coil Peaking ............................................................................................... 3.57
3.6.1 Frequency Response ...................................................................................................... 3.61
3.6.2 Phase Response .............................................................................................................. 3.62
3.6.3 Envelope Delay .............................................................................................................. 3.62
3.6.4 Step Response ................................................................................................................ 3.64
3.6.5 Consideration of the Transistor Input Resistance ........................................................... 3.65
3.6.6 Consideration of the Base Lead Stray Inductance .......................................................... 3.66
3.6.7 Consideration of the Collector to Base Spread Capacitance .......................................... 3.67
3.6.8 The ‘Folded’ Cascode .................................................................................................... 3.68
3.7 Differential Amplifiers .................................................................................................................... 3.69
3.7.1 Differential Cascode Amplifier ...................................................................................... 3.70
3.7.2 Current Source in the Emitter Circuit ............................................................................ 3.72
3.8 The fT Doubler ............................................................................... 3.75
3.9 JFET Source Follower .................................................................... 3.79
3.9.1 Frequency Response Magnitude ................................................................................... 3.82
3.9.2 Phase Response .............................................................................................................. 3.84
3.9.3 Envelope Delay .............................................................................................................. 3.84
3.9.4 Step Response ................................................................................................................ 3.85
3.9.5 Input Impedance ............................................................................................................ 3.89
List of Figures:
Fig. 3.9.8: JFET step response including the signal source resistance ................................................. 3.88
Fig. 3.9.9: JFET source follower input impedance model .................................................................... 3.90
Fig. 3.9.10: Normalized negative input conductance ........................................................................... 3.91
Fig. 3.9.11: JFET negative input impedance compensation ................................................................. 3.92
Fig. 3.9.12: JFET input impedance Nyquist diagrams ......................................................................... 3.93
Fig. 3.9.13: Alternative JFET negative input impedance compensation .............................................. 3.94
List of Tables:
extension within a single stage, and we shall discuss it next. This will be followed by an analysis of the JFET source follower, commonly used as the input stage of oscilloscopes and other measuring instruments. Such a stage can have a negative input impedance at high frequencies when the JFET source is loaded by a capacitance (which is always the case), and we shall show how to compensate this very undesirable property.
In Part 2 we have analyzed the T-coil peaking circuit with a purely capacitive tap
to ground impedance. However, if the T-coil circuit is used for a transistor interstage
coupling the tap to ground impedance ceases to be purely capacitive. This fact requires
a new analysis, with which we shall deal in the last section.
The reader will probably ask how accurately we need to model the active
devices in our circuits to obtain a satisfying approximation. In 1954, Ebers and Moll
[Ref. 3.9] had already described a relatively simple non-linear model, which, over the
years, was improved by the same authors, and lastly in 1970 by Gummel and Poon [Ref.
3.10]. Modern computer programs for circuit simulation allow the user to trade
simulation speed for accuracy by selecting models with different levels of complexity
(e.g., an older version of MicroCAP [Ref. 3.30] had 3 EM and 2 GP models for the
BJT, the most complex GP2 using a total of 51 parameters). For simpler circuit analysis
we shall use the linearized high frequency EM2 model, explained in detail in Sec. 3.1.
All these models look so simple and perform so well, that it seems as if anyone
could have created them. Nothing could be farther from the truth. It takes lots of
classical physics (Boltzmann’s transport theory, Gauss’ theorem, Poisson’s equation,
the charge current mean density integral calculus, the complicated p–n junction
boundary conditions, Maxwell’s equations, ... ), as well as quantum physics (Fermi
levels, Schrödinger’s equation, the Pauli principle, charge generation, injection,
recombination and photon–electron and phonon–electron scattering phenomena, to
name just a few important topics) to be judiciously applied in order to find well defined
special cases and clever approximations that would, within limits, provide a model
simple enough for everyday use. Of course, if pushed too far the model fails, and there
is no other way to the solution but to rework the neglected physics. In our analysis we
shall try not to go that far.
It cannot be overstressed that in our analysis we are dealing with models of
semiconductor devices! As Philip Darrington, former Wireless World editor, put it in
one of his editorials, “the map is not the territory”, just as the schematic diagram is not
the circuit. As in any branch of science, we build a (theoretical) model, analyze it, and
then compare it with the real thing; if it fits, our intuition has served us well, or perhaps
we have simply been lucky; if it does not fit, we go back to the drawing board.
In the macroscopic world, from which all our experience arises, most models are
quite simple, since the size ratio of objects, which can still be perceived directly with
our senses, to the atomic size, where some odd phenomena begin to show up, is some 6
orders of magnitude; thus the world appears to us to be smooth and continuous.
However, in the world of ever shrinking semiconductor devices we are getting ever
closer to the quantum phenomena (e.g., the dynamic range of our amplifiers is limited
by thermal noise, which is ultimately a quantum effect). But long before we approach
the quantum level we should stay alert: even if we forget that the oscilloscope probe
loads our circuit with a shunt capacitance of some 10–20 pF and a series inductance of
about 70–150 nH of the ground lead, the circuit will not forget, and sometimes not
forgive, either!
Fig. 3.1.1a shows a common emitter amplifier (the name reflects that the emitter
is the common reference for both input and output signals), whilst Fig. 3.1.1b represents
its small signal equivalent circuit (the EM2 model, see [Ref. 3.4 and 3.9]). If a signal
source amplitude is much smaller than the base bias voltage, resulting in an output
signal small enough compared to the supply voltage, we can assume that all the
equivalent circuit components are linear (not changing with the signal). However, when
the transistor has to handle large signals, the equivalent model becomes rather
complicated and the analysis is usually carried out by a suitable computer program.
Fig. 3.1.1: The common emitter amplifier: a) circuit schematic diagram – the current and voltage
vectors are drawn for the npn type of transistor; b) high frequency small signal equivalent circuit;
the components included in the Q1 model are explained in the text; c) simplified equivalent
circuit; d) an oversimplified circuit where Ct = Cπ + A·Cμ = constant.
During the first steps of circuit design we can usually neglect the non-linear
effects and obtain a satisfactory performance description by using a first order
approximation of the transistor model, Fig. 3.1.1c. Some of the circuit parameters can
even be estimated by an oversimplified model of Fig. 3.1.1d. By assuming a certain
operating point OP, set by the DC bias conditions, we can explain the meaning of the
model components. On the basis of these explanations it will become clear why and
when we may neglect some of them, thus simplifying the model and its analysis.
gm   transistor mutual conductance in siemens [S = 1/Ω]:

         gm = ic/vπ ≈ 1/re

     where ic is the instantaneous collector current;
     vπ is the voltage across rπ (see below);
     and re is the dynamic emitter resistance: re = VT/Ie [Ω] (ohm);
     Ie is the DC emitter current [A] (ampere);
     VT is the p–n junction 'thermal' voltage = kB·T/q [V] (volt);
     where q is the electron charge = 1.602·10⁻¹⁹ [C] (coulomb = [As]);
     T is the absolute temperature [K] (kelvin);
     kB is the Boltzmann constant = 1.38·10⁻²³ [J/K] (joule/kelvin)
     ( = [VAs/K] ).
ro   equivalent collector–emitter output resistance, representing the variation of
     the collector–emitter potential due to collector signal current at a specified
     operating point (its value is nearly always much greater than the load):

         ro = vce/ic = dVce/dIc |OP = (VA + Vce)/Ic

     where VA is the Early voltage (see Fig. 3.4.10);
     in wideband amplifiers: ro >> RL.

rμ   collector to base resistance, representing the variation of the collector to base
     potential due to base signal current at some specified DC operating point
     condition OP(Vcc, Ib); in wideband amplifiers its value is always much
     larger than the source resistance or the base spread resistance:

         rμ = vcb/ib = dVcb/dIb |OP = (VA + Vce)/Ib

         rμ >> Rs + rb

rπ   base to emitter input resistance (forward biased BE junction), representing
     the variation of the base to emitter potential owed to the base signal current
     at a specified operating point:

         rπ = vπ/ib = dVbe/dIb |OP = β0 VT/Ic = β0/gm ≈ β0 re

rb   base spread resistance (resistance between the base terminal and the base–
     emitter junction); value range: 10 Ω < rb < 200 Ω.

rco  presumably constant collector resistance of internal bonding and external
     lead; approximate value 1 Ω.
         re = VT/Ie = 26 mV/Ie ≈ 26 mV/Ic                     (3.1.1)
         Cμ(Vcb) = Cμ0 / (1 + Vcb/Vjc)^mc                     (3.1.2)

This equation is valid under the assumption that there is no charge in the
collector to base depletion layer. The meaning of the symbols is:

     Cμ0 = junction capacitance [F] (farad) (when Vcb = 0 V)
     Vcb = collector to base voltage [V] (volt)
     Vjc = collector to base barrier potential ≈ 0.6–0.8 V for silicon transistors
     mc  = collector voltage potential gradient factor
           (0.5 for abrupt junctions and 0.33 for graded junctions)
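As a quick numerical check of Eq. 3.1.2, the following sketch evaluates the junction capacitance at a few reverse voltages. The part values (Cμ0 = 4 pF, Vjc = 0.7 V, mc = 0.5) are illustrative assumptions, not data for any particular transistor:

```python
# Hedged sketch of Eq. 3.1.2: collector-base junction capacitance vs. reverse
# voltage. Cu0, Vjc and mc below are assumed example values.

def c_mu(v_cb, c_mu0=4e-12, v_jc=0.7, m_c=0.5):
    """Junction capacitance C_mu(V_cb) = C_mu0 / (1 + V_cb/V_jc)**m_c."""
    return c_mu0 / (1.0 + v_cb / v_jc) ** m_c

# Increasing the reverse bias shrinks C_mu:
print(c_mu(0.0))     # equals C_mu0 (4 pF here)
print(c_mu(10.0))    # roughly a quarter of C_mu0 for an abrupt junction
```

This is the reason why, as noted later for the common base stage, a higher collector to base reverse voltage reduces Cμ and helps the bandwidth.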
"
G1 œ Gt G. œ gm 7T G. œ G. (3.1.3)
#10T <e
"
G1 ¸ (3.1.4)
# 10T <e
The product re Cπ is called the transistor time constant τT = 1/ωT, where
ωT = 2π fT. The value s1 = −ωT = −1/(re Cπ) represents the dominant pole of the
amplifier and is thus the main bandwidth limiting factor. In our further discussions we
shall find ways to drastically reduce the influence of Cπ, at the expense of the
amplification factor.
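Eqs. 3.1.1 and 3.1.4 can be chained into a quick estimate. The sketch below assumes the bias and transistor data used in the numerical examples later in the text (Ic = 10 mA, fT = 600 MHz):

```python
# Sketch of Eqs. 3.1.1 and 3.1.4: dynamic emitter resistance, the C_pi
# estimate from the transition frequency f_T, and the resulting time
# constant tau_T = re*C_pi. Example values: Ic = 10 mA, f_T = 600 MHz.
from math import pi

def r_e(i_c, v_t=0.026):           # Eq. 3.1.1: re = VT/Ie ~ 26 mV / Ic
    return v_t / i_c

def c_pi(f_t, r_e_val):            # Eq. 3.1.4: C_pi ~ 1/(2 pi f_T re)
    return 1.0 / (2 * pi * f_t * r_e_val)

re = r_e(10e-3)                    # 2.6 ohm
cp = c_pi(600e6, re)               # ~102 pF
tau_t = re * cp                    # ~265 ps, the transistor time constant
print(re, cp, tau_t)
```

Note how large Cπ is compared with the few picofarads of Cμ; this is why reducing its influence is worth a loss of gain.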
The next problem is to calculate the input impedance. Here we must consider
the Miller effect [Ref. 3.7, 3.12] owed to the capacitance Cμ (in practice, there is also a CB
lead stray capacitance, parallel to the junction capacitance, that has to be taken into
account). Therefore we first calculate the input admittance looking into the
internal base junction (to the right of rb) in Fig. 3.1.1c. The current iμ flowing through Cμ is:
         iμ = (vπ − vo) s Cμ                                  (3.1.5)

         vo = −ic RL = −gm vπ RL                              (3.1.6)

         iμ = (1 + gm RL) s Cμ vπ                             (3.1.7)

From this equation it follows that the junction input impedance, owed to the
capacitance Cμ, is a capacitance with the value:

         CM = (1 + gm RL) Cμ

1) Note the negative sign in Eq. 3.1.6: actually, the output voltage is vo = Vcc − ic RL. However, since we
have agreed to replace the supply voltage with a short circuit, we are left with the negative part only.
When the voltage gain is large the effect of Cμ (and, respectively, CM) becomes
prevalent. However, there are ways of reducing the effect of Av Cμ (by lowering the
voltage gain or by using other circuit configurations); we discuss these in later sections.
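To see how quickly the Miller effect dominates, the sketch below multiplies an assumed Cμ of 2 pF by (1 + gm RL), following Eq. 3.1.7; the 2 pF and the 100 Ω load are illustrative assumptions, while re = 2.6 Ω corresponds to the 10 mA bias used in the text's examples:

```python
# Sketch of the Miller effect (Eqs. 3.1.5-3.1.7): the base junction sees
# C_mu multiplied by (1 + gm*RL). C_mu and RL below are assumed values.
c_mu = 2e-12        # collector-base capacitance, 2 pF (assumed)
g_m = 1 / 2.6       # ~1/re at Ic = 10 mA
r_l = 100.0         # load resistor (assumed)

c_miller = (1 + g_m * r_l) * c_mu
print(c_miller)     # a 2 pF junction reflected as roughly 40x larger
```

Even a modest voltage gain of about 38 turns 2 pF into nearly 80 pF at the input, comparable to Cπ itself.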
The complete input impedance that the signal source would see at the base is:

         Zb = rb + rπ / [1 + s (Cπ + CM) rπ]                  (3.1.10)
On the basis of the analysis made so far, we come to the conclusion that the two
capacitances Cπ and Cμ (effectively CM) are connected in parallel in the base circuit.
We can simply add them together and consider their sum to be Ct. This has been drawn
in Fig. 3.1.1d. This equivalent circuit is appropriate for the calculation of both the input
impedance and the transimpedance. But since we have removed any connection
between the output and input (where Cμ is effective), this circuit is not suitable for the
calculation of the output impedance. Therefore when calculating the voltage gain we must
also consider the pole s2 ≈ −1/(RL Cμ) on the collector side, according to Fig. 3.1.1c (in
general, some collector to ground stray capacitance must be added to Cμ, but for the
time being, we shall write only Cμ).
According to Fig. 3.1.1d, thus neglecting the pole s2, but including the source
impedance Rs, we have:

         (vi − vπ) / (Rs + rb) = vπ (1/rπ + s Ct)             (3.1.11)
We would like to separate the frequency dependent part of Av from the frequency
independent part, in a normalized form, as:

         Av(s) = A0 · (−s1) / (s − s1)                        (3.1.15)
To achieve this separation we first divide both the numerator and the
denominator of Eq. 3.1.14 by Ct rπ (Rs + rb):

         Av = −gm RL · {rπ / [Ct rπ (Rs + rb)]} / {s + (rπ + Rs + rb) / [Ct rπ (Rs + rb)]}      (3.1.16)

To make the two ratios equal we must multiply the numerator by (rπ + Rs + rb)/rπ
and then multiply the whole by the reciprocal:

         A0 = −gm RL rπ / (rπ + Rs + rb)                      (3.1.19)
Since s1 is a simple real pole, its magnitude is equal to the system's upper half power
frequency, and it can be seen that it is inversely proportional to the resistance of Rs + rb
in parallel with rπ, as well as to Cπ, Cμ, gm, and RL.
If Rs + rb << rπ and if RL is very small the approximate value of the pole is:

         |s1| ≈ 1/(rπ Cπ) = (1/β0) · (gm/Cπ) = ωT/β0          (3.1.22)
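Eq. 3.1.22 says that even under the most favorable conditions the simple common emitter stage gives up a factor β0 in bandwidth. A minimal sketch, using the β0 = 80 and fT = 600 MHz values of the examples later in the text:

```python
# Sketch of Eq. 3.1.22: with a low-impedance source and small RL, the
# dominant pole sits near omega_T / beta_0, so the half-power bandwidth of
# the common emitter stage is the transition frequency divided by beta_0.
beta_0 = 80
f_t = 600e6

f_pole = f_t / beta_0      # |s1| / (2*pi) = f_T / beta_0
print(f_pole)              # 7.5 MHz from a 600 MHz transistor
```

A 600 MHz transistor thus yields only a 7.5 MHz dominant pole, which motivates the circuit techniques of the following sections.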
Before more sophisticated circuits were invented, the common emitter amplifier
was used extensively (with many amplifier designers having a hard time and probably
cursing both Cπ and Cμ). In 1956 G. Bruun [Ref. 3.22] thoroughly analyzed this type of
amplifier with the added shunt–series inductive peaking circuit. In view of modern
wideband amplifier circuits, this reference is only of historical value today.
Nevertheless, the common emitter stage represents a good starting point for the
discussion of more efficient wideband amplifier circuits.
As we explained in Sec. 3.1, if the voltage gain is not too high the base emitter
capacitance Cπ is the dominant cause of the frequency response rolling off at high
frequencies. By considering this we can make a simplified small signal high frequency
transistor model, as shown in Fig. 3.2.1, for the common base configuration, where ic,
ie and ib are the collector, emitter, and base current respectively. For this figure the
DC current amplification factor is:

         α0 = Ic / Ie                                         (3.2.1)

         gm = α0/re = β0 / [(1 + β0) re]                      (3.2.2)

where β0 is the common emitter DC current amplification factor. If β0 >> 1 then
α0 ≈ 1, so the collector current Ic is almost equal to the emitter current Ie and
gm ≈ 1/re. This simplification is often used in practice.
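A short numeric check of Eqs. 3.2.1–3.2.2, using β0 = 80 (the example value used later in this section) and the re = 2.6 Ω that corresponds to a 10 mA bias:

```python
# Sketch of Eqs. 3.2.1-3.2.2: alpha-beta bookkeeping for the common base
# stage. beta_0 = 80 and re = 2.6 ohm follow the text's running example.
beta_0 = 80
alpha_0 = beta_0 / (1 + beta_0)   # Ic/Ie, just below unity
r_e = 2.6                         # dynamic emitter resistance at 10 mA
g_m = alpha_0 / r_e               # Eq. 3.2.2; ~1/re for large beta_0
print(alpha_0, g_m)
```

With α0 = 80/81 ≈ 0.988, taking gm ≈ 1/re introduces an error of just over one percent.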
[Fig. 3.2.1: the common base configuration: a) circuit schematic; b) small signal equivalent circuit]
For the moment let us assume that the base resistance rb = 0 and consider the
low frequency relations. The input resistance is:

         rπ = vπ / ib                                         (3.2.3)
where vπ is the base to emitter voltage. Since the emitter current is:

The last simplification is valid if β0 >> 1. To obtain the input impedance at high
frequencies the parallel connection of Cπ must be taken into account:

         Zb = (1 + β0) re / [1 + (1 + β0) s Cπ re]            (3.2.7)
The base current is:

         ib = vπ / Zb = vπ [1 + (1 + β0) s Cπ re] / [(1 + β0) re]      (3.2.8)

Furthermore it is:

         vπ = ib (1 + β0) re / [1 + (1 + β0) s Cπ re]         (3.2.9)

The collector current is:

         ic = gm vπ = [β0 / (1 + β0)] · vπ/re = α0 vπ/re ≈ vπ/re      (3.2.10)
In the very last expression we assumed that β0 >> 1 and τT = re Cπ = 1/ωT,
where ωT = 2π fT is the angular frequency at which the current amplification factor β
decreases to unity. The parameter τT (and consequently ωT) depends on the internal
configuration and structure of the transistor. Fig. 3.2.2 shows the frequency dependence
of β and the equivalent current generator.
Fig. 3.2.2: a) The transistor gain as a function of frequency, modeled by
the Eq. 3.2.11; b) the equivalent HF current generator.
where s1 is the pole at −ωT/β0. This relation will become useful later, when we shall
apply one of the peaking circuits (from Part 2) to the amplifier. At very high
frequencies, or if β0 >> 1, the term s τT prevails. In this case, from Eq. 3.2.11:

         ic/ib = β(s) ≈ 1/(s τT) = 1/(jω re Cπ)               (3.2.13)

This simplified relation represents the −20 dB/decade asymptote in Fig. 3.2.2a.
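The single-pole model of Eq. 3.2.11 and its asymptote of Eq. 3.2.13 can be sketched numerically; β0 = 80 and fT = 600 MHz are the example values used throughout this section:

```python
# Sketch of Eq. 3.2.11 (beta as a function of frequency, single-pole model)
# and its high-frequency asymptote 1/(omega*tau_T) from Eq. 3.2.13.
from math import pi, hypot

beta_0 = 80
f_t = 600e6
tau_t = 1 / (2 * pi * f_t)        # transistor time constant, ~265 ps

def beta_mag(f):
    """|beta(j*2*pi*f)| = beta_0 / |1 + j*2*pi*f*beta_0*tau_T|."""
    w = 2 * pi * f
    return beta_0 / hypot(1.0, w * beta_0 * tau_t)

print(beta_mag(1e3))     # ~80: low frequency, full current gain
print(beta_mag(f_t))     # ~1: current gain drops to about unity at f_T
```

The −3 dB corner of the current gain sits at fT/β0, after which |β| falls at −20 dB per decade until it reaches unity at fT.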
We can use the result of Eq. 3.2.11 to transform the transistor internal (and the
added external) impedances from the emitter to the base circuitry, and vice versa.
Suppose we have the impedance ^e in the emitter circuit, as displayed in Fig. 3.2.3a,
and we are interested in the corresponding base impedance ^b :
[Fig. 3.2.3: an impedance Ze in the emitter as seen from the base: a) general case;
b)–d) equivalent circuits; for s τT << 1 and β0 >> 1/(s τT), Zb ≈ (β0 + 1) Ze]
We know that:

         Zb = β(s) Ze + Ze = [β(s) + 1] Ze                    (3.2.15)

         Ze = Zb / [β(s) + 1]                                 (3.2.18)
[Fig. 3.2.4: an impedance Zb in the base as seen from the emitter: a) general case;
b)–d) equivalent circuits; for s τT << 1, Ze ≈ Zb/(β0 + 1)]
         Zb = [β(s) + 1] · 1/(sC) = [β0/(1 + s β0 τT) + 1] / (sC)

            = [s τT + (1 + 1/β0)] / (sC/β0 + s² τT C)         (3.2.22)

Let us synthesize this expression by a simple continued fraction expansion [Ref. 3.27]:

         (sC/β0 + s² τT C) / [s τT + (1 + 1/β0)]
            = sC − sC / [s τT + (1 + 1/β0)]                   (3.2.24)

The fraction on the right is a negative admittance with the corresponding impedance:

         Zb' = −[s τT + (1 + 1/β0)] / (sC) = −τT/C − (1 + 1/β0)/(sC)      (3.2.25)
         Yb = Re{Yb} + j Im{Yb} = Gb + jωCb

            = −(τT C) / [τT² + 1/(ω² α0²)] + jωC [1 − α0/(1 + ω² α0² τT²)]      (3.2.29)
Fig. 3.2.5: A capacitive emitter load is reflected into the
base (junction) with negative components.
The negative input (base) conductance Kb can cause ringing on steep signals or
even continuous oscillations if the signal source impedance has an emphasized
inductive component. We shall thoroughly discuss this effect and its compensation
later, when we shall analyze the emitter–follower (i.e., common collector) and the JFET
source–follower amplifiers.
Now let us derive the emitter impedance Ze in the case in which the base
impedance is inductive (sL). Here we apply Eq. 3.2.18:

         Ze = sL / [β(s) + 1] = sL / [1 + β0/(1 + s β0 τT)]   (3.2.30)

            = (sL/β0 + s² L τT) / [s τT + (1 + 1/β0)]         (3.2.31)

By continued fraction expansion we obtain:

         (sL/β0 + s² L τT) / [s τT + (1 + 1/β0)]
            = sL − sL / [s τT + (1 + 1/β0)]                   (3.2.32)

The negative part of the result can be inverted to obtain the admittance:

         Ye' = −[s τT + (1 + 1/β0)] / (sL) = −τT/L − (1 + 1/β0)/(sL)      (3.2.33)
This means we have two parallel impedances. The first one is a negative resistance:

         Rx = −L/τT                                           (3.2.34)

and the second one is a negative inductance:

         Lx = −L / (1 + 1/β0) = −[β0/(1 + β0)] L = −α0 L      (3.2.35)

As required by Eq. 3.2.32, we must add the inductance L in series, thus arriving at the
equivalent emitter impedance shown in the figure below:
Fig. 3.2.6: The inductive source is reflected into the emitter with negative components.
[Table 3.2.1: Impedance transformations between base and emitter for Z = R, L and C,
derived as in Eqs. 3.2.15–3.2.35. A resistance transforms with the factor (β0 + 1) and
acquires an additional reactive component; an inductance in the base and a capacitance
in the emitter reflect with negative components (−L/τT, −α0 L and −τT/C, −α0 C
respectively), as shown in Fig. 3.2.5 and Fig. 3.2.6.]
Table 3.2.1 can also help us in transforming impedances consisting of two
or more components.
Example 1:
Suppose we have a parallel Rb Cb combination in the base circuit, as shown in
Fig. 3.2.7a. What is the emitter impedance Ze if the common base transistor has a
current amplification factor β0 and the time constant τT (= 1/ωT)?
[Fig. 3.2.7: a) parallel Rb Cb in the base; b) the network transformed to the emitter;
c) simplification of the middle branches when Rb Cb = β0 τT; d) final equivalent,
Rb/(β0 + 1) in parallel with a capacitance]
From Table 3.2.1 we first transform the resistance Rb from base to emitter
and obtain what is shown on the left half of Fig. 3.2.7b. Then we transform the
capacitance Cb and obtain the right half of Fig. 3.2.7b. If we want the transformed
network to have the smallest possible influence in the emitter circuit, we can apply the
principle of the constant resistance network (L and C cancel each other when RC = L/R,
[Ref. 3.27]). To do so let us focus on both middle branches of the transformed network,
where we select such values of Rb and Cb that:

         √[τT Rb / (β0 Cb)] = Rb/β0                           (3.2.36)
With such values of Rb and Cb the middle two branches obtain the form of a
resistor with the value Rb/β0, as shown in Fig. 3.2.7c, which allows us to further
simplify the complete circuit to that in Fig. 3.2.7d.
To acquire a feeling for practical values, let us make a numerical example. Our
transistor has:

         β0 = 80      fT = 600 MHz      Rb = 47 Ω

         Rq = Rb/(1 + β0) = 47/(1 + 80) = 0.58 Ω              (3.2.40)
We shall consider these results in the design of the common base amplifier and
of the cascode amplifier.
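Example 1 can be reproduced in a few lines; the values β0 = 80, fT = 600 MHz and Rb = 47 Ω are those of the example, and the Cb that satisfies Rb Cb = β0 τT falls out directly:

```python
# Sketch of Example 1 (Eqs. 3.2.36-3.2.40): a parallel Rb-Cb network in the
# base, chosen so that Rb*Cb = beta_0*tau_T, reflects into the emitter as a
# plain resistance Rb/(1 + beta_0).
from math import pi

beta_0 = 80
f_t = 600e6
tau_t = 1 / (2 * pi * f_t)     # ~265 ps

r_b = 47.0
c_b = beta_0 * tau_t / r_b     # the Cb satisfying Rb*Cb = beta_0*tau_T
r_q = r_b / (1 + beta_0)       # Eq. 3.2.40: ~0.58 ohm seen in the emitter
print(c_b, r_q)
```

The required Cb comes out near 450 pF, and the 47 Ω base resistor appears in the emitter as only 0.58 Ω.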
Example 2:
By using the same principles as we have used above, we shall take another
example, which is also very important for the wideband amplifier design. We shall
transform a parallel combination Ve Ge , as shown in Fig. 3.2.8a, from emitter to base.
With the data from Table 3.2.1, we can draw the transformed base network separately
for Ve and Ge and then connect them in parallel. This is shown in Fig. 3.2.8b. Now we
focus only on the middle part of the circuit, which is drawn in Fig. 3.2.8c. If we select
such values of Re Ce that:

         Re Ce = τT                                           (3.2.44)

and if we consider that α0 ≈ 1, then the admittance Y of the middle part of the circuit
becomes zero, because in this case:

         Re = τT / Ce                                         (3.2.45)

and:

         τT / Re = α0 Ce ≈ Ce                                 (3.2.46)

and the parallel connection of a positive and an equal negative admittance gives zero
admittance:

         Y = 1/[Re + 1/(s Ce)] − 1/[τT/Ce + 1/(s Ce)] = 0  if Re Ce = τT      (3.2.47)

A zero admittance is an infinite impedance. So in this case the only components
that remain are the parallel connection of Ce and (β0 + 1) Re, as in Fig. 3.2.8d.
Note that this transformation was carried out by taking the internal base
junction as the viewing point. The actual input impedance at the external base terminal
will be equal to the parallel VG combination of Fig. 3.2.8d to which the series
connected base spread resistance <b must be added.
Fig. 3.2.8: Transformation of the emitter RC network as seen from the base:
a) schematic; b) equivalent circuit; c) if Re Ce = τT and α0 = 1, the sum of
admittances of the components in the middle is zero; d) final equivalent circuit.
The transformation in Fig. 3.2.8 allows us to trade the gain of a common emitter
circuit for a reduced input capacitance. Instead of Cπ (large), as it would be if the emitter
were grounded, the input capacitance is the same as the capacitance Ce (small) which we
have inserted in the emitter circuit. Of course, we still have to add the base to collector
capacitance Cμ or the Miller capacitance CM. As we will see in Sec. 3.4, where we shall
discuss the cascode amplifier, the gain is reduced in proportion to RL/Re. Since in a
wideband amplifier stage we almost never exceed a voltage gain of ten, we can
always apply the above transformation.
For a numerical example let us use the same transistor as before (β0 = 80,
fT = 600 MHz). According to Eq. 3.2.38 the corresponding τT is 265 ps. Let us say that
on the basis of the desired current gain of the common emitter stage we select an
emitter resistor Re = 100 Ω. What is the value of the parallel emitter capacitance Ce
which would give the input impedance according to Fig. 3.2.8d?
Since Re Ce = τT = 265 ps, it follows that:

         Ce = τT/Re = (265 · 10⁻¹²) / 100 = 2.65 pF only!     (3.2.48)

At a collector current Ic = 10 mA, the dynamic emitter resistance is:

         re = 26 mV / 10 mA = 2.6 Ω                           (3.2.49)

Without the Re Ce network in the emitter, the base to emitter capacitance Cπ would
define the bandwidth and its estimated value would be (Eq. 3.1.4):

         Cπ = 1/(2π re fT) = 1/(2π · 2.6 · 600 · 10⁶) = 102 pF      (3.2.50)

Here, too, we have neglected the base resistance rb; it must be added to the value above
to obtain a more accurate figure.
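The same arithmetic, expressed as a sketch of Example 2 (Eqs. 3.2.48–3.2.50), shows the roughly 40-fold reduction of the input capacitance:

```python
# Sketch of Example 2: with Re*Ce = tau_T, the large C_pi disappears from
# the input and only the small Ce (plus C_mu) remains.
from math import pi

f_t = 600e6
tau_t = 1 / (2 * pi * f_t)        # ~265 ps
r_e_deg = 100.0                   # chosen emitter resistor Re
c_e = tau_t / r_e_deg             # Eq. 3.2.48: ~2.65 pF

i_c = 10e-3
r_e = 0.026 / i_c                 # Eq. 3.2.49: 2.6 ohm
c_pi = 1 / (2 * pi * r_e * f_t)   # Eq. 3.2.50: ~102 pF without Re-Ce
print(c_e, c_pi)
```

The input capacitance drops from about 102 pF to 2.65 pF, at the cost of the gain now being set by RL/Re.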
In Fig. 3.2.9a the transistor stage with Re Ce is shown again and in Fig. 3.2.9b is
its small signal equivalent input circuit.
In wideband amplifiers we usually make the emitter network with Ce ≤ 20 pF.
In order to match Re Ce = τT the capacitor Ce is often made adjustable, because τT in
commercially available transistors has rather large tolerances.
[Fig. 3.2.9: a) the stage with the Re Ce emitter network; b) its small signal equivalent
input circuit: rb and Cμ, with the parallel combination of (β0 + 1) Re and Ce, for Re Ce = τT]
With an appropriate RL in the collector (not shown in Fig. 3.2.9) we might now
calculate the (decreased) voltage amplification Av owed to the Re Ce network in the
emitter circuit of the common emitter stage and consider the decreased value of the
Miller capacitance CM. Since we shall not use exactly this amplifier configuration we
leave this as an exercise for the reader.
But for the application in the cascode amplifier, which we are going to discuss
in Sec. 3.4, it is important to know the transconductance io/vi of the amplifier with the
Re Ce network. The corresponding circuit is drawn again in Fig. 3.2.10a and Fig. 3.2.10b
shows the equivalent small signal circuit.
Fig. 3.2.10: The common emitter amplifier with the Re Ce network: a) schematic; b) equivalent small signal circuit.
If we neglect the resistance rb and the capacitance Cμ the following relation is
valid for the remaining circuit:

         vi = ii (zπ + Ze) + io Ze                            (3.2.52)

where:

         io = gm vπ   and   vπ = ii zπ

therefore:

         io = gm ii zπ                                        (3.2.53)
The output current can be obtained by inserting Eq. 3.2.56 back into Eq. 3.2.53:

         io = gm zπ vi / (zπ + Ze + gm zπ Ze)                 (3.2.57)

The transadmittance is:

         io/vi = gm zπ / (zπ + Ze + gm zπ Ze)                 (3.2.58)

         io/vi = (1/Ze) · 1 / [1 + 1/(gm Ze) + 1/(gm zπ)]     (3.2.59)

Now we insert the expressions for zπ and Ze from Eq. 3.2.54 and replace gm by 1/re:

         io/vi = (1/Re) · (1 + s Ce Re) / [1 + (re/Re)(1 + s Ce Re) + (re/rπ)(1 + s Cπ rπ)]      (3.2.60)

and with a slight rearrangement we obtain:

         io/vi = (1/Re) · (1 + s Ce Re) / [1 + re/Re + re/rπ + s (Ce + Cπ) re]      (3.2.61)

Because (re/Re) << 1 and (re/rπ) << 1 we can neglect them, so:

         io/vi = (1/Re) · (1 + s Ce Re) / [1 + s (Ce + Cπ) re]      (3.2.62)

The response becomes frequency independent if the zero cancels the pole:

         Ce Re = (Ce + Cπ) re   ⇒   Ce Re ≈ Cπ re             (3.2.63)
         io/vi = 1/Re                                         (3.2.64)

Here we must not forget that at the beginning of our analysis we neglected
the resistance rb, which, together with the transformed input capacitance Ce and the
collector to base capacitance Cμ, makes a pole at:

         s1 = −1 / [(Ce + Cμ) rb]                             (3.2.65)

The magnitude of s1 is equal to the upper half power frequency: |s1| = ωh. This
pole makes the stage frequency dependent in spite of Eq. 3.2.64. We have also
neglected the input resistance β0 Re, but, since it is much larger than rb, we shall not
consider its influence (with it, the bandwidth would increase slightly). By introducing
the pole s1 back into Eq. 3.2.64, we obtain a more accurate expression for the
transadmittance:

         io/vi = (1/Re) · {1/[(Ce + Cμ) rb]} / {s + 1/[(Ce + Cμ) rb]}      (3.2.66)
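The pole appearing in Eq. 3.2.66 is easy to evaluate. The sketch below uses the running example's Ce = 2.65 pF together with assumed values rb = 50 Ω and Cμ = 2 pF (not given in the text):

```python
# Sketch of the pole in Eq. 3.2.66: rb together with (Ce + C_mu) limits the
# bandwidth of the degenerated common emitter stage. rb and C_mu below are
# illustrative assumptions.
from math import pi

c_e = 2.65e-12     # transformed input capacitance from Example 2
c_mu = 2e-12       # collector-base capacitance (assumed)
r_b = 50.0         # base spread resistance (assumed)

f_h = 1 / (2 * pi * (c_e + c_mu) * r_b)   # upper half-power frequency
print(f_h / 1e6, "MHz")
```

Even with these pessimistic assumptions the stage reaches several hundred megahertz, compared with the 7.5 MHz of the plain common emitter stage.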
In the previous sections we have realized that the collector to base capacitance
Cμ has a very undesirable effect on the stage bandwidth (Miller effect). But in the
common base configuration the base is effectively grounded and any current through Cμ
is fed to the ground, not affecting the base current (actually, owing to the physical
construction of the CB junction, Cμ is spread across the whole base resistance rb, so part
of that current would nevertheless reach the base; we will analyze this later).
The common base circuit is drawn in Fig. 3.3.1a and its small signal equivalent
circuit in Fig. 3.3.1b. In wideband amplifiers the loading resistor RL is much smaller
than the collector to base resistance rμ, so we shall neglect the latter. In order to make
the expressions still simpler, at the beginning of our analysis we shall also not take into
account the base resistance rb. However, we shall have to include rb later, when we
discuss the input impedance.
Fig. 3.3.1: Common base amplifier: a) schematic; b) equivalent small signal model.
The main characteristics of the common base stage are a very low input
impedance, a very high output impedance, a current amplification factor α0 ≈ 1, and,
with the correct value of the loading resistor RL, the possibility of achieving higher
bandwidths. The last property is owed to a near elimination of the Miller effect, since
Cμ is now grounded and is not amplified Av times at the input. Thus Cμ is effectively in
parallel with the loading resistor RL and — because we can make the time constant
RL Cμ relatively small — the bandwidth of the stage may be correspondingly large.
Another very useful property of the common base stage is that the collector to
base breakdown voltage Vcbo is highest when the base is connected to ground, and the
higher reverse voltage reduces Cμ further (Eq. 3.1.2). Owing to all the listed properties
the common base stage is used almost exclusively for wideband amplifier stages where
large output signals are expected.
Following the current directions in Fig. 3.3.1, the input emitter current is:

         ie = vπ/zπ + gm vπ                                   (3.3.1)

where:

         zπ = rπ / (1 + s Cπ rπ)                              (3.3.2)
We shall calculate the input impedance of the common base stage by taking into
account the base resistance rb, which — as we will realize very soon — represents a
very nasty obstacle in achieving a wide bandwidth. We shall make our derivation on the
basis of Table 3.2.1, from which we have drawn Fig. 3.3.2. This figure represents the
equivalent small signal input circuit owed to rb.
The input admittance of the circuit in Fig. 3.3.2a is:

         Ye = 1/rb + 1/(s τT rb + rb/β0)                      (3.3.8)

Within the frequency range of interest the value rb/β0 in the second fraction is
small and can be neglected. The simplified input admittance is thus:

         Ye ≈ 1/rb + 1/(s τT rb)                              (3.3.9)
Fig. 3.3.2: Common base amplifier input impedance: a) rb transformed to Ze;
b) within the frequency range of interest, rb/β0 can be neglected.
Normally, if the amplifier is built with discrete components, there is always some lead
inductance Ls which must be added in series in order to obtain the total impedance.
In the next section, where we shall discuss the cascode amplifier, we shall find
that the inductance Le, together with the capacitance Cμ of the common emitter current
driving stage, forms a parallel resonant circuit which may cause ringing in the
amplification of steep signals. In most cases the resistance Re is too large to damp the
ringing effectively enough by itself, so additional circuitry will be required.
Eq. 3.3.10 and Eq. 3.3.11, respectively, disclose the fact that the annoying
inductance Le and the resistance Re are directly proportional to the base spread
resistance rb. When using this type of amplifier for the output stages, where the
amplitudes are large (e.g., in oscilloscopes), we must use more powerful transistors,
mostly in the TO5 case type. Since in this case the internal transistor connections are
relatively long and the total active area is large, the corresponding rb is large as well. In
order to decrease Re and Le we must select transistors which have a low rb. To decrease
the base spread resistance as much as possible and also to decrease the transition time
(the time needed by the current carriers to cross the base), the firm RCA developed
(already in the late 1960s) the so called overlay transistor. A typical overlay transistor
is the 2N3866. Such transistors are essentially integrated circuits, composed of many
identical small transistors connected in parallel.
[Fig. 3.4.1: the cascode amplifier circuit: a common emitter stage Q1 with the Re1 Ce1
network in the emitter, driving a common base stage Q2 loaded by RL]
Fig. 3.4.2: Equivalent small signal model of a cascode amplifier. The components
belonging to the common emitter circuit bear the index ‘1’ and those of the common
base circuit bear the index ‘2’.
A transistor cascode amplifier is drawn in Fig. 3.4.1 and Fig. 3.4.2 shows its
small signal equivalent circuit. All the components that belong to transistor Q1 bear the
index ‘1’ and all that belong to transistor Q2 bear the index ‘2’.
For the emitter network of Q1 we select the values such that Re1 Ce1 = τT1.
In order to simplify the initial analysis, we shall first neglect Rs, rb2, and Cμ1.
Later we shall reintroduce these elements one by one to get a closer approximation.
We have already derived the equations needed for each part of the combined
circuit: for the common emitter stage we have Eq. 3.2.66 and for the common base we
have Eq. 3.3.7. We only need to multiply these two equations to get the voltage gain of
our cascode amplifier:

Here we have approximated α02 ≈ 1, and therefore i_o2 = i_o1. The first fraction,
multiplied by R_L from the third fraction, is the DC voltage amplification and the
remainder represents the frequency dependence:

    A_v ≈ (R_L/R_e1) · 1/[(1 + s C_e1 r_b1)(1 + s C_μ2 R_L)]        (3.4.2)
Obviously, the frequency dependence is a second-order function. There are two poles:
the pole at the input is −1/(C_e1 r_b1) = −1/τ_T1, whilst the pole −1/(C_μ2 R_L) = −1/τ_T2 is
on the output side. As we shall see later, it is possible to apply the peaking technique on
both sides.
In an ideal case the common base stage input (emitter) impedance is very low.
Because of this low load the first stage voltage gain A_v1 ≪ 1, so C_μ1 would not be
amplified by it. And if we could neglect r_b2, the capacitance C_μ2 would appear in
parallel with the loading resistor R_L, and therefore it would not be multiplied by the
second stage's voltage gain A_v2 either. Both C_μ1 and C_μ2 are relatively small, so it is obvious
that the cascode amplifier has, potentially, a much greater bandwidth in comparison
with a simple common emitter amplifier (for the same total voltage gain). The price we
pay for this improvement is the additional transistor Q2.

Of course, in practice things are not so simple, and in addition we should not
neglect the inevitable stray capacitances. Those should be added to C_μ1 and C_μ2. Also,
(R_s + r_b1) with C_μ1 will affect the behavior of Q1, and r_b2 with C_μ2 will affect the
behavior of Q2, as we shall see in the following analysis.

Owing to the base spread resistance r_b2, the Q2 input (emitter) has an inductive
component with the inductance L_e2 = τ_T2 r_b2 in parallel with r_b2, as already shown in
Table 3.2.1 and also in Fig. 3.3.2. The equivalent input impedance of the transistor Q2
was derived in Eq. 3.3.9 to Eq. 3.3.11.

As shown in the simplified circuit in Fig. 3.4.3, the inductance L_e2 and the
collector to base capacitance C_μ1 of Q1 form a series resonant circuit, damped by r_b2 in
parallel with L_e2 (and a series emitter resistance r_b2/β2, which is very small, so it was
neglected). The other end of C_μ1 is connected to the base of Q1, where we must
consider two effects:

– at very high frequencies the input signal goes directly through r_b1 and C_μ1;
– at somewhat lower frequencies, the signal is inverted and amplified by Q1 and
the internal base junction can then be treated as a virtual ground.
Fig. 3.4.3: Parasitic resonance damping of the cascode amplifier. Two current paths must
be considered: at the highest frequencies, for i_b1, C_μ1 represents a non-inverting cross-talk
path; at lower frequencies, for i_c1, C_μ1 provides a negative feedback loop, thus it can be
viewed as if being connected to a virtual ground (Q1 base junction bJ1). The parasitic
resonance, formed by C_μ1 and L_e2, is only partially damped by r_b2; the required additional
damping is provided by inserting R_d and C_d between the Q1 collector and the Q2 emitter.
So let us put:

    R_d = r_b2 = √(L_e2/C_d)        (3.4.3)

The value of C_d is then:

    C_d = τ_T2/r_b2 = 1/(2π f_T2 r_b2)        (3.4.4)

To get a feeling for actual values let us take two equal transistors with parameters such
as in the examples in Sec. 3.2.4 (f_T = 600 MHz, r_b2 = 47 Ω):

    R_d = r_b2 = 47 Ω        C_d = 1/(2π · 600·10⁶ · 47) = 5.6 pF        (3.4.5)
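The damping values of Eq. 3.4.4 and 3.4.5 are easy to check numerically; a minimal sketch, using the f_T = 600 MHz and r_b2 = 47 Ω example values from Sec. 3.2.4 quoted above:

```python
import math

def damping_network(f_T, r_b2):
    """Damping network for the parasitic L_e2/C_mu1 resonance (Eq. 3.4.3, 3.4.4):
    R_d equals the base spread resistance r_b2, and C_d = tau_T2 / r_b2."""
    R_d = r_b2                                  # R_d = r_b2 = sqrt(L_e2/C_d)
    C_d = 1.0 / (2.0 * math.pi * f_T * r_b2)    # C_d = tau_T2 / r_b2
    return R_d, C_d

R_d, C_d = damping_network(600e6, 47.0)
print(R_d, C_d * 1e12)   # 47 Ohm, ~5.6 pF
```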
The input impedance of the emitter circuit of transistor Q2 now becomes:
circuit of transistor Q2, and as we shall see later, it can be a good choice for providing
the thermal stabilization of the cascode stage.

Since we have introduced Z_d into the collector circuit of Q1 we must now
account for the Q1 Miller capacitance:

    C_M1 ≈ C_μ1 (1 + Z_e2/Z_e1) = C_μ1 (1 + r_b2/R_e1)        (3.4.7)

where Z_e2 is the Q2 compensated emitter input impedance and Z_e1 is the impedance of
the emitter circuit of Q1. With this consideration the gain, Eq. 3.4.2, becomes:

    A_v ≈ (R_L/R_e1) · 1/{[1 + s (C_e1 + C_M1) r_b1] (1 + s C_μ2 R_L)}        (3.4.8)
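A quick numeric check of Eq. 3.4.7; the sketch assumes C_μ1 = 3 pF, r_b2 = 47 Ω and R_e1 = 20 Ω (values borrowed from the examples elsewhere in this section, not stated here):

```python
def miller_capacitance(C_mu1, r_b2, R_e1):
    """Q1 Miller capacitance of the compensated cascode (Eq. 3.4.7):
    C_M1 ~ C_mu1 * (1 + Z_e2/Z_e1) = C_mu1 * (1 + r_b2/R_e1)."""
    return C_mu1 * (1.0 + r_b2 / R_e1)

C_M1 = miller_capacitance(3e-12, 47.0, 20.0)
print(C_M1 * 1e12)   # ~10 pF: the low CE stage gain keeps the Miller term small
```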
[Fig. 3.4.4: The two current paths into the Q2 emitter: the HF current i_e2HF flows
through C_d and R_d, whilst the LF current i_e2LF flows through the inductive branch
L_e2 = τ_T2 r_b2; the compensated impedance is Z'_e2 = r_b2 when R_d C_d = L_e2/r_b2.
The Q1 base junction bJ1 acts as a virtual ground.]
The collector to base capacitance of the transistor Q1 allows very high frequency
signals from the input to bypass this transistor and flow directly into the emitter of
transistor Q2. Transistor Q1 amplifies, inverts, and delays the low frequency signals. In
contrast, all of what comes through C_μ1 is non-delayed, non-amplified, and non-
inverted, causing a pre-shoot [Ref. 3.1] in the step response, as shown in Fig. 3.4.5. The
Q1 collector current, i_c1, is the sum of i_μ1 and v_π1 g_m1. Note that both the pre-shoot
owed to i_μ1 and the overshoot of v_π1 g_m1 are reduced in v_c2 by the Q2 pole due to C_μ2.
Fig. 3.4.5: The step response v_c2 has a pre-shoot owed to the signal cross-talk
through C_μ1 (arbitrary vertical units, but corresponding to A_v = 2).
So far we have excluded C_μ2 from our analysis. When included, its effect on
bandwidth is severe, owing to the Miller effect and r_b2. But it also affects the emitter
input impedance of Q2, since C_M2 = C_μ2 (1 + A_v) appears in parallel with r_b2 and is
consequently transformed into the emitter in accordance with Table 3.2.1, in the same
way as in Fig. 3.2.7. If A_v is relatively high the pronounced resonance owed to C_μ2 can
cause long ringing, even if the bandwidth is lower than the resonant frequency.

Instead of using increased damping in series with the emitter of Q2, which
would reduce the bandwidth further, John Addis [Ref. 3.26] suggested an alternative
approach. The Q2 base, instead of being connected directly to a low impedance bias
voltage, should be connected to it through a resistor R_A of up to 100 Ω and grounded by
a small capacitor C_A ≈ C_μ2. In Fig. 3.4.6 the voltage gain, the phase, and the group
delay as functions of frequency are shown for the two cases, R_A = 0 and R_A = 33 Ω,
respectively. The change of the Q2 input impedance is exposed by the lower drive stage
current i_c1 near the cut off frequency.
Fig. 3.4.6: The compensation method of Q2 as suggested by John Addis. a) With R_A = 0, the
frequency response has a notch at the resonance and a phase-reversed cross-talk, which makes the
group delay τ_e positive. b) With R_A = 33 Ω and C_A = 3 pF the frequency response, the phase and
the group delay are smoothed and, although the bandwidth is reduced slightly, the group delay
linearity is extended and the undesirable positive region is brought back to negative values. The Q2
emitter impedance is increased, as can be deduced from the lower i_c1 peak.
    Z_b2 = 1 / [ 1/(r_b2 + 1/(s C_M2)) + 1/(R_A + 1/(s C_A)) ]        (3.4.10)

    Z_b2 = 2 R_A/(1 + s C_A R_A) = R_s/(1 + s C_s R_s)        (3.4.11)
Fig. 3.4.7: The Q2 emitter input impedance compensation.
We already know (Eq. 3.1.1) that the effective emitter resistance is:

    r_e = k_B T/(q I_e)        (3.4.12)
When we apply the bias and supply voltage to a transistor, the collector current
I_c will cause an increase in the temperature T of the transistor, owing to the power
dissipated in the transistor:

    P_D = I_c V_ce        (3.4.13)

where V_ce is the collector to emitter voltage. If the ambient temperature is T_A and the
total thermal resistance from junction to ambient is Θ_JA [K/W] (kelvin per watt), the
junction temperature T_J will be:

    T_J = T_A + Θ_JA P_D        (3.4.14)
Fig. 3.4.8: Thermal distortion can cause a long term drift after the transient
(exaggerated, but not too much!). In addition to this thermal time constant, there can
also be a much slower one, owed to the change in the transistor case temperature.
If we go back to Eq. 3.2.61 for a moment, we note that the gain also depends on
the ratio r_e/R_e (in the denominator), which we have assumed to be much smaller than 1
and thus neglected. But if R_e is small, the change in temperature and emitter current can
alter the gain by up to a few percent in the worst cases.

Before we look for the remedy for the problem of how to cancel, or at least how
to substantially reduce, the thermal distortion, let us take a look at Fig. 3.4.9, which
shows a simple common emitter stage, and the way in which the power dissipation is
shared between the load and the transistor as a function of the collector current. As
usual, we use capital letters for the applied DC voltages, loading resistor, etc., and small
letters for the instantaneous signal voltages and power dissipation. The instantaneous
transistor power dissipation is:

    p_Q1 = v_ce i_c = v_ce (v_L/R_L) = v_ce (V_cc − v_ce)/R_L = v_ce V_cc/R_L − v_ce²/R_L        (3.4.15)
Since v_ce cannot exceed V_cc if the collector load is purely resistive, the right
vertical axis ends at V_cc. The left vertical axis is normalized to the maximum load
power, which is simply p_Lmax = V_cc²/R_L (corresponding to v_ce = 0 and thus p_Q1 = 0).
The transistor's power dissipation, however, follows an inverse parabolic function with
a maximum at:

    P_Q1max = (V_cc/2)² · (1/R_L) = V_cc²/(4 R_L)        (3.4.16)

where v_ce = V_cc/2. This point is the optimum bias for a transistor stage. If excited by
small signals, the transistor power dissipation moves either left or right, close to the top
of the parabola, and thus it does not change very much. This means that the transistor's
temperature does not change very much either.
Fig. 3.4.9: The optimum bias point is when the voltage across the load is
equal to the voltage across the transistor. This is optimal both for thermal
stability and for the available signal range.
If different design conditions were forcing us to move the bias point far from the
top of the parabola, the bias with V_ce > V_cc/2 is preferred to the range V_ce < V_cc/2,
because the latter situation is unstable. However, in wideband amplifiers we can hardly
avoid it, because we want to have a low R_L, a high I_c and a high V_cb (to reduce C_μ), and
all three conditions are required for a high bandwidth.

The typical temperature coefficient of a base–emitter p–n junction voltage
(≈ 0.6 V) is approximately −2 mV/K for silicon transistors, so we can explain the
instability in the following way:

When the circuit is powered up the transistor conducts a certain collector
current, which heats the transistor, increasing the transistor base–emitter p–n junction
temperature. If the base is biased from a voltage source (low impedance, which in
wideband amplifiers is usually the case), the temperature increase will, owing to the
negative temperature coefficient, decrease the base–emitter voltage. In turn, the base
current increases and, consequently, both the emitter and the collector current increase,
which further increases the dissipation and the junction temperature. The load voltage
drop would also increase with current and therefore reduce the collector voltage, thus
lowering the transistor power dissipation. But with a low R_L, the change in the load
voltage drop will be small, so the transistor power dissipation increase will be reduced
only slightly.

The effect described is cumulative; it may even lead to a thermal runaway and
the consequent destruction of the transistor if the top of the parabola exceeds the
maximum permissible power dissipation of the transistor (which is often the case, since
we want a low R_L and a high V_cc and I_e, as stated above). In a similar way, on the basis of
the −2 mV/K temperature dependence of V_be, the reader can understand why the bias
point for V_ce > V_cc/2 is thermally stable.

According to Eq. 3.4.16, to have the transistor thermally stable means having
resistances R_L or R_L + R_e such that at the bias point the voltage drop across them is
equal to V_cc/2. In general, this principle is successfully applied in differential
amplifiers: when one transistor is excited so that its bias point is pushed to one side of
the parabola, the bias point of the other transistor is moved to exactly the same
dissipation on the opposite side of the parabola. As a result the temperature becomes
lower but equal in both transistors. Thus in the differential amplifier both transistors can
always have the same temperature, independent of the excitation (provided that we
remain within the linear range of excitation), and there is (ideally) no reason for a
thermal drift in the differential output signal (in practice, it is difficult to make two
transistors with similar parameter values, let alone equal values, even within an
integrated circuit).

In our cascode circuit of Fig. 3.4.1 the transistor Q1 already has an emitter
resistor R_e1 as dictated by the required current gain, and we do not want to change it.
However, we can add a resistor, which we label R_θ, in the collector circuit of Q1 to
make V_ce1 ≈ I_c1 (R_e1 + R_θ) ≈ V_cc1/2, where V_cc1 is the voltage at the emitter of the
transistor Q2. Suppose now that the emitter current is I_e1 ≈ I_c1 = 50 mA and the Q2
base voltage V_bb = 15 V. Then the collector voltage of transistor Q1 is:
where V_be2 is the base–emitter voltage (about 0.6 V for a silicon transistor).
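The equation itself is missing from this copy, but the arithmetic it encodes can be sketched; the figures below assume, per the text, I_c1 = 50 mA, V_bb = 15 V, V_be2 ≈ 0.6 V, and the R_e1 = 20 Ω used later in the example:

```python
I_c1  = 50e-3    # Q1 collector (and, approximately, emitter) current
V_bb  = 15.0     # Q2 base bias voltage
V_be2 = 0.6      # Q2 base-emitter voltage (silicon)
R_e1  = 20.0     # Q1 emitter resistor

V_cc1 = V_bb - V_be2              # Q2 emitter node = Q1 effective supply: 14.4 V
# thermal stability condition: I_c1 * (R_e1 + R_theta) ~ V_cc1 / 2
R_theta = (V_cc1 / 2.0) / I_c1 - R_e1
print(V_cc1, R_theta)             # 14.4 V, 124 Ohm
```

The resulting 124 Ω agrees with the R_θ value used in Eq. 3.4.21.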
Fig. 3.4.10: The modified compensation network: R_d C_d provide the
HF damping, whilst R_θ C_θ provide the thermal compensation.
The question is how does one calculate the proper value of C_θ? The obvious
way would be to calculate the thermal capacity of the transistor's die and case mass
(adding an eventual heat sink) and all the thermal resistances (junction to case, case to
heat sink, heat sink to air), as is usually done for high power output stages.

Bruce Hofer [Ref. 3.8] suggests the following, more elegant, procedure,
based on Fig. 3.4.10. The two larger time constants in this figure must be equal:

Here r_c1 is the dynamic collector resistance of transistor Q1, derived from Fig. 3.4.11 as
ΔV_ce/ΔI_c. In this figure V_A is the Early voltage:
Fig. 3.4.11: The dynamic collector resistance r_c1 is derived from
the I_c(V_ce, I_b) characteristic and the Early voltage V_A.
The meaning of the Early voltage can be derived from Fig. 3.4.11, where
several curves of the collector current I_c vs. the collector to emitter voltage V_ce are drawn,
with the base current I_b as the parameter. With increasing collector voltage the
collector current increases even if the base current is kept constant. This is because
the collector to base depleted area widens at the expense of the (active) base width as
the collector voltage increases. This in turn causes the diffusion gradient of the
current carriers in the base to increase, hence the increased collector current.

By extending the lines of the collector current characteristics back, as shown in
Fig. 3.4.11, all the lines intersect the abscissa at the same virtual voltage point V_A
(negative for npn transistors), called the Early voltage (after J.M. Early, [Ref. 3.11]);
from the similarity of triangles we can derive the collector's dynamic resistance:

    r_c1 = ΔV_ce/ΔI_c = (V_c + V_A)/I_c        (3.4.20)
Since the voltage gain of the common emitter stage is low, C_M1 will be only
slightly larger than C_μ1. If we now suppose that transistor Q1 has r_c1 = 0.5·10⁶ Ω
and C_μ1 = 3 pF, the value of C_θ should be:

    C_θ = r_c1 C_M1/(R_θ − R_d) = (0.5·10⁶ · 3·10⁻¹²)/(124 − 47) = 19.5 nF        (3.4.21)

In practice, we can take the closest standard values, e.g., C_θ = 22 nF and
R_θ − R_d = 124 − 47 = 77 Ω ≈ 75 Ω. Since, in general, a wideband amplifier has
several amplifying stages, each one having its own temperature and damping problems,
these values can be varied substantially in order to achieve the desired performance.
Thermal problems tend to be more pronounced towards the output stages where the
signal amplitudes are high. Here experimentation should have the last word.
Nevertheless, the values obtained in this way represent a good starting point.
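A numeric restatement of Eq. 3.4.21, using the example values quoted above:

```python
r_c1 = 0.5e6     # Q1 dynamic collector resistance (Early effect), Ohm
C_M1 = 3e-12     # ~ C_mu1, since the CE stage voltage gain is low
R_th = 124.0     # R_theta
R_d  = 47.0      # damping resistor from Eq. 3.4.5

C_theta = r_c1 * C_M1 / (R_th - R_d)   # Eq. 3.4.21
print(C_theta * 1e9)   # ~19.5 nF; the nearest standard value is 22 nF
```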
[Figure: the complete cascode amplifier with the R_d C_d damping network, the
thermal compensation (R_θ − R_d) and C_θ in the Q1 collector, and the R_A C_A
network at the Q2 base; the output is loaded by R_L and C_L.]
    τ_T1 < τ_R < τ_L        (3.5.6)
[Fig. 3.5.1: a) the emitter peaking circuit (Q1 with R_e1, C_e1 and the series R–C
network in the emitter, loaded by R_L and C_L); b) the magnitude of the input
impedance |Z_i|, with the break frequencies ω_L, ω_R, ω_T1 and the high frequency
asymptote |1/(s C_i)|; c) the equivalent input circuit: C_i and (β_0 + 1) R_e1 in
parallel with the negative elements −R_c and −C_c.]
By introducing Eq. 3.5.7 back into Eq. 3.5.5 the input impedance can be expressed as:

    Z_i = [(1 + s τ_T1)/(s τ_T1)] · R_e1 (1 + s τ_R)/[(1 + s τ_T1)(1 + s τ_L)]
        = R_e1 (1 + s τ_R)/[s τ_T1 (1 + s τ_L)]        (3.5.8)
Now we put Eq. 3.5.2 and Eq. 3.5.8 into Eq. 3.5.1:

    v_o/i_s = R_s R_L / {[R_s + R_e1 (1 + s τ_R)/(s τ_T1 (1 + s τ_L))] · s τ_T1 (1 + s τ_L)}
            = R_s R_L / [s² R_s τ_T1 τ_L + s (R_s τ_T1 + R_e1 τ_R) + R_e1]        (3.5.9)

Next we put the denominator into the canonical form and equate it to zero:

    s² + s (R_s τ_T1 + R_e1 τ_R)/(R_s τ_T1 τ_L) + R_e1/(R_s τ_T1 τ_L) = s² + a s + b = 0        (3.5.10)

where we set the coefficients:

    a = (R_s τ_T1 + R_e1 τ_R)/(R_s τ_T1 τ_L)    and    b = R_e1/(R_s τ_T1 τ_L)        (3.5.11)
An efficient peaking must have complex poles, so the expression under the
square root must be negative, therefore b > a²/4. We can then extract the negative
sign as the imaginary unit and write Eq. 3.5.12 in the form:

    s_1,2 = −a/2 ± j √(b − a²/4)        (3.5.13)

From Eq. 3.5.13 we can calculate the tangent of the pole angle θ:

    tan θ = ℑ{s_1}/ℜ{s_1} = √(b − a²/4)/(a/2) = √(4b/a² − 1)        (3.5.14)

It follows that:

    1 + tan²θ = 4b/a²        (3.5.15)

Now we insert the expressions from Eq. 3.5.11 for a and b and obtain:

    1 + tan²θ = [4 R_e1/(R_s τ_T1 τ_L)] / [(R_s τ_T1 + R_e1 τ_R)/(R_s τ_T1 τ_L)]²
              = 4 R_e1 R_s τ_T1 τ_L/(R_s τ_T1 + R_e1 τ_R)²        (3.5.16)

    τ_R = R C = 2 √[R_s τ_T1 τ_L/(R_e1 (1 + tan²θ))] − R_s τ_T1/R_e1        (3.5.18)

The admittance of the peaked emitter network is:

    Y_e1 = 1/R_e1 + s C_e1 + 1/(R + 1/(s C))        (3.5.19)
The emitter impedance Z_e1 is the inverse of Y_e1 and it must be equal to Eq. 3.5.7:

    Z_e1 = R_e1 (1 + s C R) / [s² C R C_e1 R_e1 + s (C R + C R_e1 + C_e1 R_e1) + 1]
         = R_e1 (1 + s τ_R) / [s² τ_T1 τ_L + s (τ_T1 + τ_L) + 1]        (3.5.20)

The value of R_e1 is constrained by the DC current amplification R_L/R_e1. Thus we need
the expressions for C, C_e1, and R. By using Eq. 3.5.18, 3.5.21, and 3.5.22 we obtain:

    C_e1 = τ_T1 τ_L/(R_e1 τ_R)        (3.5.23)

and:

    C = [τ_T1 + τ_L − τ_R − τ_T1 (τ_L/τ_R)]/R_e1        (3.5.24)
where τ_R should be calculated by Eq. 3.5.18. Once the value of C is known, we can
easily calculate the value of the resistor R = τ_R/C. Of course, τ_R is determined by the
angle θ of the poles selected for the specified type of response.
Fig. 3.5.2a and 3.5.2b show the normalized pole loci in the complex plane. As
already seen in the examples in Part 1 and Part 2, to achieve the maximally flat envelope
delay response (MFED), a single stage 2nd-order function must have the pole angle
θ = ±150°. The original circuit has two real poles, s_T1 and s_L, but when the emitter
peaking zero s_R is brought close to s_L the poles form a complex conjugate pair.

The frequency response is altered as shown in Fig. 3.5.2c and the bandwidth is
extended. The emitter current increase i_e1(s)/I_e1 owing to the introduced RC network
has two break points: the lower is owed to R_e1 (C + C_e1) and the upper is owed to RC.
If the break point at ω_R is brought exactly over ω_L they cancel each other, and the final
response is shaped by the break point ω_T1 of the transistor and the second break point in
the emitter peaking network, ω_C. The peaking can thus be adjusted by R and C.

Let us consider an example with these data: f_T1 = 2000 MHz, R_s = 60 Ω,
R_e = 20 Ω, C_o = 9 pF, R_L = 390 Ω. We want to make the amplifier with such an
emitter peaking network which will suit the Bessel pole loci (MFED), where the pole
angle θ = ±150°. First we calculate both time constants:

    τ_T1 = 1/(2π f_T1) = 1/(2π · 2000·10⁶) = 79.58 ps        (3.5.25)

and:

    τ_L = R_L C_o = 390 · 9·10⁻¹² = 3.51 ns        (3.5.26)
Fig. 3.5.2: Emitter peaking: a) two real poles travel towards each other when the emitter
network zero goes from −∞ towards s_L, eventually forming a complex conjugate pair; b)
the poles of Eq. 3.5.14 for the 2nd-order Bessel (MFED) response; c) frequency response
asymptotes; the bandwidth is extended to ω_T1 if ω_R = ω_L.
    τ_R = 2 √[R_s τ_T1 τ_L/(R_e1 (1 + tan²θ))] − R_s τ_T1/R_e1        (3.5.27)

With tan² 150° = 1/3 this gives:

    τ_R = 2 √[(60 · 79.58·10⁻¹² · 3.51·10⁻⁹)/(20 · 4/3)] − (60 · 79.58·10⁻¹²)/20 = 1.35 ns        (3.5.28)

Next, from Eq. 3.5.24:

    C = [79.58·10⁻¹² + 3.51·10⁻⁹ − 1.35·10⁻⁹ − 79.58·10⁻¹² · (3.51·10⁻⁹/1.35·10⁻⁹)]/20
      = 101.6 pF        (3.5.29)

Finally we calculate the value of the resistor R from the time constant τ_R:

    R = τ_R/C = 1.35·10⁻⁹/(101.6·10⁻¹²) = 13.29 Ω        (3.5.30)
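The design values can be checked numerically; a sketch of the whole chain (exact arithmetic gives 1.347 ns, 101.8 pF and 13.23 Ω, which the text rounds to 1.35 ns, 101.6 pF and 13.29 Ω):

```python
import math

# Example data of Eqs. 3.5.25-3.5.30
f_T1, R_s, R_e1, C_o, R_L = 2000e6, 60.0, 20.0, 9e-12, 390.0
theta = math.radians(150.0)              # MFED (Bessel) pole angle

tau_T1 = 1.0 / (2.0 * math.pi * f_T1)    # ~79.58 ps
tau_L  = R_L * C_o                       # 3.51 ns
t2     = math.tan(theta) ** 2            # tan^2(150 deg) = 1/3

# Eq. 3.5.18 / 3.5.27: the zero time constant for the chosen pole angle
tau_R = (2.0 * math.sqrt(R_s * tau_T1 * tau_L / (R_e1 * (1.0 + t2)))
         - R_s * tau_T1 / R_e1)
C_e1 = tau_T1 * tau_L / (R_e1 * tau_R)                              # Eq. 3.5.23
C    = (tau_T1 + tau_L - tau_R - tau_T1 * tau_L / tau_R) / R_e1     # Eq. 3.5.24
R    = tau_R / C                                                    # Eq. 3.5.30
print(tau_R * 1e9, C_e1 * 1e12, C * 1e12, R)
```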
    Z_i = R_e1 τ_R/(s τ_T1 τ_L)    ⟹    C_i = τ_T1 τ_L/(R_e1 τ_R)        (3.5.32)
Our objective is to keep such an input impedance (at the base–emitter junction of
Q1) at lower frequencies also. In other words, at lower frequencies the plot of the input
impedance should correspond to the |1/(s C_i)| line in Fig. 3.5.1b. All other impedances
that appear in the input circuit should be canceled by an appropriate compensating
network. To find these impedances, we perform a 'continued fraction expansion'
synthesis of the input admittance Y_i as derived from the right side of Eq. 3.5.8. Thus:

    Y_i = 1/Z_i = s τ_T1 (1 + s τ_L)/[R_e1 (1 + s τ_R)]
        = s τ_T1 τ_L/(R_e1 τ_R) + s τ_T1 (1 − τ_L/τ_R)/[R_e1 (1 + s τ_R)]        (3.5.33)

The first fraction we recognize to be the input admittance s C_i. The second fraction can
be inverted and, by canceling out s, we obtain the impedance:

    Z'_i = R_e1 (1 + s τ_R)/[s τ_T1 (1 − τ_L/τ_R)]
         = R_e1 τ_R²/[τ_T1 (τ_R − τ_L)] + R_e1 τ_R/[s τ_T1 (τ_R − τ_L)]
         = R_c + 1/(s C_c)        (3.5.34)
This means a resistor R_c and a capacitor C_c connected in series, and this
combination is in parallel with the input capacitance C_i. The values are negative
because τ_R < τ_L, as was required in Eq. 3.5.6. On the basis of these results we can draw
the equivalent input impedance circuit corresponding to Fig. 3.5.1c. The expression for
the capacitance C_c is:

    C_c = −τ_T1 (τ_L − τ_R)/(R_e1 τ_R)        (3.5.35)

From Eq. 3.5.33, as well as from our previous analysis, we can derive that R_c C_c = τ_R
and obtain a simpler expression for R_c:

    R_c = τ_R/C_c        (3.5.36)
Let us now continue our example of the emitter peaking cascode amplifier with
the data R_e1 = 20 Ω, τ_T1 = 79.58 ps, τ_R = 1.35 ns, and τ_L = 3.51 ns, and calculate the
values of C_i, C_c, and R_c. The input capacitance C_i, without C_M, is:
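The computed values (Eqs. 3.5.32–3.5.36) are not reproduced in this copy; a sketch of the arithmetic, which also subtracts the r_b1 = 25 Ω quoted further on to obtain the external compensating resistor:

```python
# Continuing the example: the elements reflected into the base junction
R_e1   = 20.0
tau_T1 = 79.58e-12
tau_R  = 1.35e-9
tau_L  = 3.51e-9
r_b1   = 25.0

C_i = tau_T1 * tau_L / (R_e1 * tau_R)               # Eq. 3.5.32, ~10.3 pF
C_c = -tau_T1 * (tau_L - tau_R) / (R_e1 * tau_R)    # Eq. 3.5.35, negative
R_c = tau_R / C_c                                   # Eq. 3.5.36, negative
R_comp = abs(R_c) - r_b1    # external positive compensating resistor, ~187 Ohm
print(C_i * 1e12, C_c * 1e12, R_c, R_comp)
```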
The next step is to compensate the series connected C_c and R_c. This can be
done by connecting in parallel an equal combination with positive elements. The
admittance of such a combination is zero and thus the impedance becomes infinite. The
mathematical proof for this operation is:

    Y'_i = 1/(R_c + 1/(s C_c)) + 1/(−(R_c + 1/(s C_c))) = 0

and:

    Z'_i = 1/Y'_i ⟹ ∞        (3.5.40)

By doing so, only the input capacitances C_i + C_M and the input resistance
(1 + β) R_e1 remain effective at the (junction) input.

The impedance Z_i as given by Eq. 3.5.8 is effective between the base–emitter
junction and the ground. Unfortunately, no direct access to the junction is possible,
because from there to the base terminal we have the base spread resistance r_b1. This
means that r_b1 must be subtracted from R_c to get the proper value of the compensating
resistor. Supposing that r_b1 = 25 Ω, the proper compensating resistor is simply:
The complete input circuit is shown in Fig. 3.5.3; the input impedance
components which are reflected from the emitter to the base are within the box.
Fig. 3.5.3: The impedances in the emitter are reflected into the base junction of Q1.
The emitter peaking components R and C are reflected into the negative elements −R_c
and −C_c, which must be compensated by adding externally an equal and positive R_c
and C_c; for proper compensation r_b1 must be subtracted from R_c.
Fig. 3.6.1: a) The cascode amplifier with a T-coil interstage network. b) The
T-coil loaded by the equivalent small signal, high frequency input impedance.
    L = L_a + L_b = R_L² C_o        (3.6.1)
Since the input shunt resistance (1 + β) R_e is usually much higher than r_b, we
will neglect it also, thus arriving at the circuit in Fig. 3.6.2a. Fig. 3.6.2b shows the
equivalent T-coil circuit in which we have replaced the magnetic field coupling factor k
with the negative mutual inductance −L_M and the coil branches by their equivalent
inductances L_a and L_b. Finally, in Fig. 3.6.2c we have replaced the branch impedances
by the symbols A to E to determine the three current loops I_1, I_2, I_3.
-3.57-
P. Starič, E. Margan Wideband Amplifier Stages with Semiconductor Devices
Fig. 3.6.2: a) The T-coil loaded by the simplified input impedance; b) the equivalent T-coil circuit
in which k is substituted by −L_M; c) the equivalent branch impedances and the three current loops.
By comparing Fig. 2.4.1b,c, Sec. 2.4 with Fig. 3.6.2b,c we see that they are
almost equal, except that in the branch D we have the additional series resistance r_b.
Let us list all these impedances again, but now including r_b:

    A = 1/(s C_b)
    B = s L_a
    C = s L_b
    D = −s L_M + r_b + 1/(s C_o)
    E = R_L        (3.6.2)
The general analysis of the branches, Eq. 2.4.6 – 2.4.13, showed that the input
impedance of the T-coil network is equal to its loading impedance, Z_i = E = R_L. As we
shall soon see, r_b between the T-coil tap and C_o spoils this nice property; we shall have
to compensate for it. The analysis here is similar to that in Sec. 2.4, so we do not have to
repeat it. Here we give the final result, Eq. 2.4.14, for convenience:

By entering all the substitutions from Eq. 3.6.2, performing all the required multiplications
and arranging the terms in decreasing powers of s, we obtain:

    s [(L L_M − L_a L_b)/C_b + R_L² L] + [L r_b + R_L (L_a − L_b)]/C_b
        + (1/s) [L/(C_o C_b) − R_L²/C_b] = 0        (3.6.4)

or, more simply:

    s K_1 + K_2 + s⁻¹ K_3 = 0        (3.6.5)
The difference between Eq. 3.6.4, 3.6.5 and Eq. 2.4.15, 2.4.16 is in the middle
term. Again, if we want to have an input impedance independent of the frequency s,
then each of the coefficients K_1, K_2, and K_3 must be zero [Ref. 3.5]:

    K_1 = (L L_M − L_a L_b)/C_b + R_L² L = 0

    K_2 = [L r_b + R_L (L_a − L_b)]/C_b = 0

    K_3 = L/(C_o C_b) − R_L²/C_b = 0        (3.6.6)

So we have three equations from which we can calculate the parameters L_a,
L_b, and L_M. By considering Eq. 3.6.1 we obtain:

    L_a = (L/2)(1 − r_b/R_L) = (R_L² C_o/2)(1 − r_b/R_L)        (3.6.7)

    L_b = (L/2)(1 + r_b/R_L) = (R_L² C_o/2)(1 + r_b/R_L)        (3.6.8)

    L_M = (L/4)(1 − r_b²/R_L²) − R_L² C_b = (R_L² C_o/4)(1 − r_b²/R_L²) − R_L² C_b        (3.6.9)
Two interesting facts become evident from Eq. 3.6.7 and 3.6.8. First, L_a < L_b,
which means that the coil tap is no longer at the coil's center, but is moved
towards the coil's signal input node. Secondly, R_L must always be larger than r_b,
otherwise L_a becomes negative. But we reach the limit of realizability long before that,
since we know from Part 2 that L_1 = L_a − L_M (and also L_2 = L_b − L_M).
In Eq. 3.6.9 we have two unknowns, L_M and C_b; therefore we need a fourth
equation to calculate them. Similarly as we did in Sec. 2.4, we shall use the
transimpedance equation for this purpose. The procedure is well described from
Eq. 2.4.20 to 2.4.24 and we write the last one again:

    V_o/I_1 = [1/(s C_o)] · (CA + EA + EB + EC)/(CA + CB + DA + DB + DC + EA + EB + EC)        (3.6.10)

If we insert the substitutions from Eq. 3.6.2, we obtain the following result:

    F(s) = V_o/I_1 = R_L/[s² R_L² C_o C_b + s C_o (R_L + r_b)/2 + 1]        (3.6.11)

In a similar way, for the transimpedance from the input to R_L we would obtain:

    V_R/I_i = R_L [s² R_L² C_o C_b − s C_o (R_L − r_b)/2 + 1]/[s² R_L² C_o C_b + s C_o (R_L + r_b)/2 + 1]        (3.6.12)
Since we have the factor (R_L − r_b) in the numerator and a different factor
(R_L + r_b) in the denominator, the two zeros are not placed symmetrically in relation
to the two poles in the s-plane. Therefore Eq. 3.6.12 does not describe an all pass
network and the input impedance is not simply R_L as before. This represents the basic
obstacle to using T-coils in a transistor distributed amplifier, because the T-coil load
can not be replaced by another T-coil network (for comparison see
[Ref. 2.18 and 2.19]).
Eq. 3.6.11 has two poles, which we calculate from the canonical form of the
denominator:

    s² + s (R_L + r_b)/(2 R_L² C_b) + 1/(R_L² C_o C_b) = 0        (3.6.13)

and both poles are:

    s_1,2 = −(R_L + r_b)/(4 R_L² C_b) ± √{[(R_L + r_b)/(4 R_L² C_b)]² − 1/(R_L² C_o C_b)}        (3.6.14)
An efficient inductive peaking must have complex poles. For Bessel poles, as
shown in Fig. 3.6.2, the pole angles are θ_1,2 = ±150°, and with this pole arrangement we
obtain the MFED response. If the poles are complex, the tangent of the pole angle is the
ratio of the imaginary to the real component of Eq. 3.6.15:

    tan θ = ℑ{s_1}/ℜ{s_1} = √[16 C_b/(C_o (1 + r_b/R_L)²) − 1]        (3.6.16)

With this we can calculate the coupling factor k between the coils L_1 and L_2 [Ref. 3.23]:

    k = L_M/√(L_1 L_2) = L_M/√[(L_a − L_M)(L_b − L_M)]        (3.6.20)
Now we have all the equations needed for the T-coil transistor interstage coupling.
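The intermediate steps (Eqs. 3.6.17–3.6.19) are missing from this copy, but solving Eq. 3.6.16 for C_b and chaining Eqs. 3.6.1, 3.6.7–3.6.9 and 3.6.20 is straightforward; a sketch, which reproduces the C_b/C_o = 0.083 and k = 0.50 of the symmetrical case (r_b = 0) and k = 0 for r_b = 0.5 R_L, as listed in the Fig. 3.6.3 legend:

```python
import math

def t_coil(R_L, C_o, r_b, theta_deg=150.0):
    """Constant-resistance T-coil elements for a given pole angle.
    Uses Eq. 3.6.1, 3.6.7-3.6.9 and 3.6.20; C_b solves Eq. 3.6.16."""
    L  = R_L**2 * C_o                                   # Eq. 3.6.1
    t2 = math.tan(math.radians(theta_deg))**2
    C_b = C_o * (1.0 + t2) * (1.0 + r_b / R_L)**2 / 16.0   # from Eq. 3.6.16
    L_a = 0.5 * L * (1.0 - r_b / R_L)                   # Eq. 3.6.7
    L_b = 0.5 * L * (1.0 + r_b / R_L)                   # Eq. 3.6.8
    L_M = 0.25 * L * (1.0 - (r_b / R_L)**2) - R_L**2 * C_b   # Eq. 3.6.9
    L_1, L_2 = L_a - L_M, L_b - L_M                     # physical branch values
    k = L_M / math.sqrt(L_1 * L_2)                      # Eq. 3.6.20
    return C_b, L_a, L_b, L_M, k

C_b, L_a, L_b, L_M, k = t_coil(1.0, 1.0, 0.0)   # normalized, symmetrical case
print(C_b, k)   # C_b/C_o = 1/12 ~ 0.083, k = 0.5
```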
Sometimes we prefer the normalized form of the roots, and in this case
R_L C_o = 1/ω_h = 1. To emphasize the normalization, we add the subscript 'n', so
s_1,2n = σ_1n ± j ω_1n. By applying the normalized poles of Eq. 3.6.22 to Eq. 2.2.27, which
is a generalized second-order magnitude function, we obtain:

    |F(ω)| = (σ_1n² + ω_1n²) / √{[σ_1n² + (ω/ω_h − ω_1n)²] [σ_1n² + (ω/ω_h + ω_1n)²]}        (3.6.23)
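A numeric check of the 2.72× bandwidth improvement quoted in the Fig. 3.6.3 caption; a sketch using Eq. 3.6.23 with the symmetrical-case poles σ_1n = −3, ω_1n = √3 (ω_h = 1, so the non-peaking reference bandwidth is 1):

```python
import math

def mag(w, s1n=-3.0, w1n=math.sqrt(3.0)):
    """Normalized second-order magnitude, Eq. 3.6.23 (omega_h = 1)."""
    num = s1n**2 + w1n**2
    den = math.sqrt((s1n**2 + (w - w1n)**2) * (s1n**2 + (w + w1n)**2))
    return num / den

# locate the -3 dB frequency by bisection (the MFED response is monotonic)
lo, hi = 0.0, 10.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if mag(mid) > 1.0 / math.sqrt(2.0):
        lo = mid
    else:
        hi = mid
print(lo)   # ~2.72: the bandwidth improvement over the L = 0 reference
```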
By comparing the Bessel poles for a simple T-coil (Eq. 2.4.42) with Eq. 3.6.22,
we notice that in the denominator we have an additional factor (1 + r_b/R_L). Therefore
it is interesting to compare the frequency responses for different ratios r_b/R_L,
as listed in Table 3.6.1:

Table 3.6.1

    r_b/R_L    σ_1n     ω_1n     Note
    0.00      −3.0     1.732    symmetrical T-coil
    0.25      −2.4     1.386    -
    0.50      −2.0     1.155    -
[Fig. 3.6.3 plot legend: L = R_L² C_o, ω_h = 1/(C_o (R_L + r_b)); curve parameters
(r_b/R_L, C_b/C_o, L_1/L_2, k, M): a) 0, 0.083, 1, 0.50, 0.0265;
b) 0.25, 0.130, 0.52, 0.44, 0.0166; c) 0.50, 0.1875, 0.33, 0, 0;
d), e), f) are the L = 0 references.]
Fig. 3.6.3: MFED frequency response of the T-coil transistor interstage coupling circuit for
three different values of r_b: a) r_b = 0; b) r_b = 0.25 R_L; c) r_b = 0.5 R_L. For comparison the
three reference cases (L = 0): d), e), and f), which correspond to the same three r_b/R_L ratios,
are drawn. The bandwidth improvement factor of the peaking system remains 2.72 times over
the non-peaking reference for each value of r_b.
To calculate the phase response we insert our poles into Eq. 2.2.31:

   φ = -arctan[(ω/ωh + ω1n)/σ1n] - arctan[(ω/ωh - ω1n)/σ1n]          (3.6.25)

In Fig. 3.6.4 the phase plots for the same three ratios rb/RL as in the
frequency response are shown, along with the three references (L = 0).

The envelope delay is the frequency derivative of the phase (Eq. 3.6.25),
and the responses are drawn in Fig. 3.6.5, for the three different ratios rb/RL, in
addition to the three references (L = 0).
[Fig. 3.6.4 plot: phase φ[°] from 0 down to -180° versus ω/ωh, curves a-f, with the circuit inset (L = RL² Co, ωh = 1/(Co (RL + rb))) and the same parameter table: a) rb/RL = 0, Cb/Co = 0.083, L1/L2 = 1, k = 0.50, M = 0.0265; b) 0.25, 0.130, 0.52, 0.44, 0.0166; c) 0.50, 0.1875, 0.33, 0, 0.]
Fig. 3.6.4: MFED phase response of the T-coil transistor interstage coupling circuit compared
with the references (L = 0), for the same three values of rb/RL.
[Fig. 3.6.5 plot: normalized envelope delay τe ωh versus ω/ωh, curves a-f, with the circuit inset (L = RL² Co, ωh = 1/(Co (RL + rb))) and the same parameter table as in Fig. 3.6.3.]
Fig. 3.6.5: MFED envelope delay response of the T-coil transistor interstage coupling
circuit compared with the references (L = 0) for the same three values of rb/RL.
For plotting the step response we can use Eq. 2.4.47 (which was fully derived in
Part 1, Eq. 1.14.18):

   g(t) = 1 - (1/|sin θ|) e^(-σ1 t) sin(ω1 t - θ + π)          (3.6.27)

where θ is the pole angle according to Fig. 3.6.2. By inserting the pole angle θ = 150°,
or 5π/6, as required by the 2nd-order Bessel system, we obtain:

   g(t) = 1 - 2 e^(-3t/T) sin[√3 t/T + (1 - 5/6)π]          (3.6.28)

However, here we must use T = 1/ωh = Co (RL + rb), as in Eq. 3.6.22 and
3.6.24. In Fig. 3.6.6 the step response plots are drawn for the three different ratios rb/RL as
well as the three reference cases with L = 0.
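A quick numeric check of Eq. 3.6.28 (time normalized to T) confirms that the response starts at zero with zero initial slope and settles to unity with the small overshoot (about 0.4 %) characteristic of a 2nd-order MFED system:

```python
import math

def g(t):
    # Eq. 3.6.28, with time t normalized to T = Co*(RL + rb)
    return 1 - 2 * math.exp(-3 * t) * math.sin(math.sqrt(3) * t + (1 - 5/6) * math.pi)

ts = [i * 0.001 for i in range(8001)]      # 0 ... 8 T
peak = max(g(t) for t in ts)
overshoot = (peak - 1) * 100               # in percent

print(round(g(0), 6), round(overshoot, 2), round(g(8.0), 6))
```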
[Fig. 3.6.6 plot: normalized step response vo/(ii RL) versus t/T, curves a-f, with the circuit inset (L = RL² Co, T = Co (RL + rb)) and the same parameter table: a) rb/RL = 0, Cb/Co = 0.083, L1/L2 = 1, k = 0.50, M = 0.0265; b) 0.25, 0.130, 0.52, 0.44, 0.0166; c) 0.50, 0.1875, 0.33, 0, 0.]
Fig. 3.6.6: MFED step response of the T-coil transistor interstage coupling circuit, compared
to the references (L = 0), for the same three values of rb/RL.
Thus we have completed the analysis of the basic case of a transistor T-coil
interstage coupling. The reader who would like more information should study
[Ref. 3.5]. In order to simplify the analysis, we have purposely neglected the transistor
input resistance (effectively (1 + β) Re1, as Fig. 3.6.7 shows) and also the stray
inductance Ls of the tap to transistor base terminal connection. In the next steps we
shall discuss both of them.
Fig. 3.6.7 shows the basic configuration of the transistor input circuit. We have
also drawn the base lead inductance Ls, which will be discussed in the next section.
[Fig. 3.6.7 schematic: a) transistor Q1 with Cμ1, Ls, rb1, Re1 and Ce1; b) equivalent input circuit with Ci = Ce1 shunted by (1 + β) Re1.]
Fig. 3.6.7: The complete transistor input impedance: a) schematic; b) equivalent
circuit, in which we also include the base lead inductance Ls. The presence of
the shunt resistance (1 + β) Re1 requires a modified interstage T-coil circuit.
as we have derived in Eq. 3.2.15. The effect of this resistance may be canceled if we
insert an appropriate resistor Rs from the end of coil L1 to the start of coil L2, as shown
in Fig. 3.6.8. It is essential that the resistor Rs is inserted on the 'left' side of L2 (at the
T-coil tap node), because in this case the bridging capacitance Cb (the self-capacitance of
the coil) and the magnetic field coupling (k) are utilized. With the resistor placed on the
'right' side of L2 (at the RL Cb node), that would not be the case.
[Fig. 3.6.8 schematic: T-coil with L1 < L2, bridged by Cb and coupled by k; Rs is in series between the tap and L2; the tap drives rb, Co and the transistor input (1 + β) Re; the annotation reads: Zi = RL, Rs compensates for (1 + β) Re when [rb + (1 + β) Re] ∥ (Rs + RL) = RL.]
Fig. 3.6.8: The resistance Rs in series with L2 is inserted near the T-coil tap
to compensate the error in the impedance seen by the input current at low
frequencies, owed to the parallel connection RL ∥ [rb + (1 + β) Re].
At very high frequencies we can replace all capacitors by short circuits and all
inductors by open circuits; in this case the input resistance of the T-coil circuit is RL. But
at very low frequencies the capacitors represent an open circuit and the inductors a short
circuit. The transistor input resistance is then effectively in parallel with RL. It is the
task of the series resistor Rs to prevent this reduction of resistance. The idea is that:

   [rb + (1 + β) Re] ∥ (Rs + RL) = RL
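Solving the compensation condition for Rs gives Rs = RL²/[rb + (1 + β)Re - RL]. The sketch below checks this with illustrative values (rb, β, Re and RL here are assumptions for the example, not values from the text):

```python
# illustrative values (assumed): rb = 15 ohm, beta = 100, Re = 10 ohm, RL = 50 ohm
rb, beta, Re, RL = 15.0, 100.0, 10.0, 50.0

Rp = rb + (1 + beta) * Re          # transistor input resistance at DC
Rs = RL**2 / (Rp - RL)             # from [rb + (1+beta)Re] || (Rs + RL) = RL

parallel = Rp * (Rs + RL) / (Rp + Rs + RL)
print(round(Rs, 2), round(parallel, 2))
```

With these values Rs comes out at only a couple of ohms, which agrees with the remark below that in practice this resistor is very small.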
The introduction of this resistor spoils all the expressions from our previous
analysis and, to be exact, everything we derived to determine the basic T-coil
parameters should be calculated anew, considering the additional parameter Rs. Since
in practice the value of this resistor is very small, no substantial changes in the other
circuit parameters are to be expected, and the additional effort which an exact analysis
would require is not worthwhile.

A sneaky method of implementing this compensation, while at the same time
decreasing the stray capacitance (and also making it difficult for the competition to
copy the circuit), is to wind the coil L2 from a suitably resistive wire.
Fig. 3.6.9a shows the T-coil with the base lead stray inductance Ls at the tap.
From Fig. 3.6.9b we realize that the positive inductance of the base lead Ls actually
decreases the negative mutual inductance -LM of the T-coil. To retain the same
conditions as in Fig. 3.6.2c at the beginning of the basic T-coil analysis, the coupling
factor must be increased, thus increasing the mutual inductance to LM + Ls.
[Fig. 3.6.9 schematic: a) T-coil (Cb, k, L1, L2, RL) with Ls in the tap branch to rb and Co; b) equivalent circuit with k = 0, in which the center branch contains -LM and Ls in series.]
Fig. 3.6.9: a) The base lead inductance Ls decreases the value of the mutual
inductance, as indicated by the equivalent circuit in b). This can be
compensated by recalculating the circuit with an increased coupling factor.
We shall mark the new T-coil circuit parameters with a prime (′) to distinguish
them from the original parameters of the transistor interstage T-coil:

   L′M = LM + Ls          (3.6.32)
Because the inductance from rb to either end of the coil L is now increased by
Ls, both inductances La and Lb must be decreased by the value of Ls.
By considering all these changes, the new (larger) value of the coupling factor is:

   k′ = L′M / √[(La + LM - 2Ls)(Lb + LM - 2Ls)]          (3.6.34)
[Fig. 3.6.10 schematic: a) Q1 with the reverse capacitance Cμ distributed across rb; b) approximation with rb split into rb1 and rb2 and Cμ split into Cμ1, Cμ2, Cμ3, plus the external stray capacitance Cs.]
Fig. 3.6.10: a) The base–collector reverse capacitance Cμ is actually spread
across the base resistance rb. b) A good approximation is achieved by splitting rb
in half and Cμ in three parts, adding also the external stray capacitance Cs.
Those readers who are interested in the results and further suggestions, should
study [Ref. 3.20].
While we are still speaking about cascode amplifiers, let us examine the ‘folded’
cascode circuit, Fig. 3.6.11. This circuit is a handy solution in cases of a limited supply
voltage, a situation commonly encountered in modern integrated circuits and battery
supplied equipment.
The first thing to note is that the collector DC currents can be different, since the
bias conditions for Q1 are set by the input base voltage and Re1, while for Q2 the bias
is set by Vcc - Vb2 and Rcc.
Another interesting point is that Rcc (or a current source in its place) must
supply the current for both transistors. Therefore, when a signal is applied at the input,
the currents in Q1 and Q2 will be in anti-phase, i.e., when io1 increases, ii2 decreases
and vice versa. It is thus easier to achieve good thermal balance with such a circuit than
with the original cascode.
[Fig. 3.6.11 schematic: folded cascode; Rcc from Vcc carries Icc and supplies both Q1 (npn, driven by ii1, emitter network Re1 ∥ Ce1, collector current io1) and the pnp Q2 (base at Vb2, input current ii2), whose collector drives RL at the output vo.]
Fig. 3.6.11: The 'folded' cascode is formed by a complementary (npn and pnp) transistor
pair, connected in the otherwise usual cascode configuration. Since thermionic devices are
not produced in complementary pairs, this circuit cannot be realized with electron tubes.
In Sec. 3.4.2 we have explained that the instability of the transistor's DC bias
depends on the ambient temperature and on the heat generated internally as a consequence
of its power dissipation. The current amplification factor β0 also depends on
temperature. These effects multiply in a multi-stage DC amplifier. They can be greatly
reduced by using a symmetrical differential amplifier.

The basic differential amplifier is shown in Fig. 3.7.1. The input voltage of
transistor Q1 is vi1 and that of Q2 is vi2 = -vi1 (we are assuming these symmetrical
driving voltages in order to eliminate any common mode voltages, thus simplifying the
initial analysis). The emitters of both transistors are connected together and fed via the
resistor Ree from the negative supply voltage Vee < 0. If we assume the circuit to be entirely
symmetrical, i.e., Q1 = Q2 and RL1 = RL2, and if both input voltages vi1 and vi2 are
zero, the DC output voltages are also equal, Vo1 = Vo2, independently of the ambient
temperature.
[Fig. 3.7.1 schematic: differential pair Q1, Q2 with loads RL1, RL2 to Vcc, outputs vo1, vo2, inputs vi1, vi2, emitter currents ie1 = ie2 = Iee/2, and the tail resistance drawn as 2Ree ∥ 2Ree to Vee across the symmetry line.]
Fig. 3.7.1: The differential amplifier. We simplify the initial analysis by
assuming vi2 = -vi1, RL1 = RL2 and Q1 = Q2 (all parameters).
In a similar way as the input voltages, the signal output voltages vo1 and vo2 go
up and down by an equal amount; however, we must account for the signal inversion in
the common emitter amplifier. If the voltage amplification of the input voltage
difference is Avd (which we can take directly from Eq. 3.1.14, where we discussed a
simple common-base amplifier), the output signal voltage difference is:

   vo1 - vo2 = Avd (vi1 - vi2)

An attentive reader will note that we have added the subscript 'd' to denote the
differential mode gain.
In the case of both input voltages being equal and of the same polarity, both
output voltages will also be equal and of same polarity; however, the output signal’s
polarity is the inverse of the input signal’s polarity (owing to the 180° phase inversion
of each common emitter amplifier stage). If the symmetry of the circuit were perfect,
the output voltage difference would be zero, provided that the common mode excitation
at the input remains well within the linear range of the amplifier. Such operation is
named common-mode amplification, Evc (here we have added the subscript ‘c’).
For the common mode signal the excursion of both output voltages with respect
to their DC value is:

   vo1 = vo2 = Avc (vi1 + vi2)/2 ≈ -(RL1/2Ree) · (vi1 + vi2)/2          (3.7.3)
A good way to visualize the common mode operation is to 'fold' the circuit
across the symmetry line and consider it as a 'single ended' amplifier with both
transistors and both loading resistors in parallel.
Since we are more interested in the differential mode amplification, we have
pulled the expression for the common mode amplification, so to say, ‘out of the hat’.
The analysis so far is more intuitive than exact. The reason we did not bother to make a
corresponding derivation of exact formulae is that the simple circuit shown in Fig. 3.7.1
is almost never used as a wideband amplifier owing to its large input capacitance. A
basic differential amplifier in cascode configuration, with a constant current generator
instead of the resistor Ree, is drawn in Fig. 3.7.2. The reader who wants to study the full
analysis of the basic low-frequency differential amplifier according to Fig. 3.7.1 should
look in [Ref. 3.7].
The analysis of the circuit in Fig. 3.7.2, a differential cascode amplifier, with the
same rigor and considering all the complex impedances, would quickly get out of control
owing to its complexity (remember Table 3.2.1, which should be applied to each
transistor). However, if we take the emitter node to be a virtual ground, each half of the
differential amplifier can be analyzed separately; actually, owing to the symmetry, only one
analysis is needed, which we have already done in Sec. 3.4. So here we can focus on
other problems which are peculiar to differential amplifiers only.
First of all, no differential amplifier is perfectly symmetrical, even if all of its
transistors are on the same chip. The lack of a perfect symmetry causes the common
mode input signal (vi1 + vi2)/2 to appear partly as a differential output signal. For the
same reason there is still some temperature drift, although greatly reduced in
comparison with the single ended amplifier. The appearance of common mode signals
at the output is especially annoying in electrobiological amplifiers (electrocardiographs
and electroencephalographs). In these amplifiers, very small input signal differences (of
the order of several μV) must be amplified in the presence of large (up to 1 V) common
mode signals from power lines, owing to capacitive pickup. The ability of the
differential amplifier to reject the common mode signal is expressed by the common mode
rejection ratio, CMRR = Avd/Avc, generally given in decibels (dB). Since this is
out of the scope of this book, we will not pursue these effects in detail.
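As a numeric illustration of the definition (the gain values are assumed for the example only):

```python
import math

Avd = 1000.0    # assumed differential mode gain
Avc = 0.05      # assumed common mode gain

CMRR = Avd / Avc                  # dimensionless ratio
CMRR_dB = 20 * math.log10(CMRR)   # expressed in decibels
print(CMRR, round(CMRR_dB, 1))
```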
[Fig. 3.7.2 schematic: differential cascode; input pair Q1, Q2 with inputs vi1, vi2, emitter resistors Re1, Re2 bridged by Cee and the tail current source Iee to Vee; cascode pair Q3, Q4 with common base voltage Vbb, loads RL1, RL2 to Vcc, outputs vo1, vo2.]
Fig. 3.7.2: The basic circuit of the differential cascode amplifier.
The optimum thermal stability of the differential cascode circuit could again be
obtained by adjusting the quiescent currents in both halves of the differential amplifier
to values such that the voltage drop on each loading resistor is equal to the voltage
(Vcc - Vbb + Vbe)/2 (see Eq. 3.4.29 and the corresponding explanation). However, as
has been said for the simple cascode amplifier, the requirements for large bandwidth
will prevent this from being realized: we want a low RL, high Vcc and Vbb,
and a high Iee to maximize the bandwidth. So the thermal stability will have to be
established in a different way.
Differential amplifiers are particularly suitable for the compensation of many
otherwise unsolvable errors. This is achieved by cross-coupling and adding anti-phase
signals, so that the errors cancel out. For example, the pre-shoot of the simple cascode
amplifier, which is owed to capacitive feedthrough, can be effectively eliminated if two
capacitors with the same value as Cμ1,2 are connected from the Q1 emitter to the Q2
collector, and vice versa.

Similarly, by cross-coupling diodes or transistors we can achieve non-linearity
cancellation, leakage current compensation, better gain stability, DC stability, etc. In
integrated circuits, even production process variations can be compensated in this way.
Some such examples are given in Part 5.
If both transistors are identical, then β5 = β6 = β and Ic5 = Ic6 = β Ib. In this case:

   IR = Ic5 (1 + 2/β)          (3.7.5)

and the collector current is:

   Ic5 = IR / (1 + 2/β)          (3.7.6)

If β is very large, then:

   Ic6 ≈ IR = (Vcc - Vbe)/R          (3.7.7)
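A short sketch of Eq. 3.7.6 shows how small the finite-β mirror error is (β and IR are assumed example values):

```python
beta = 100.0          # assumed current gain
IR = 1.0e-3           # assumed reference current, 1 mA

Ic5 = IR / (1 + 2 / beta)       # Eq. 3.7.6
error = (IR - Ic5) / IR * 100   # mirror error in percent

print(round(Ic5 * 1e3, 4), round(error, 2))
```

With β = 100 the mirrored current is low by only about 2 %, which justifies the large-β approximation of Eq. 3.7.7.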
Eq. 3.7.8 can be written simply from the geometric relations taken from
Fig. 3.4.11. For a common silicon transistor the Early voltage is at least 100 V. Suppose
both silicon transistors in Fig. 3.7.3a are identical and subject to the same temperature
variations (on the same chip). The collector–emitter voltage of transistor Q5 is the same
as the base–emitter voltage, Vce5 = Vbe5 ≈ 0.65 V. In contrast, the collector–emitter
voltage of transistor Q6 is higher, say, Vce6 = 15 V. The ratio of the two collector currents
is then:

   Ic6/Ic5 = (1 + Vce6/VA) / (1 + Vce5/VA) = (1 + 15/100) / (1 + 0.65/100) ≈ 1.143          (3.7.9)
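The arithmetic of Eq. 3.7.9 can be verified directly:

```python
VA = 100.0     # Early voltage, V
Vce5 = 0.65    # V, equal to Vbe5
Vce6 = 15.0    # V

ratio = (1 + Vce6 / VA) / (1 + Vce5 / VA)   # Eq. 3.7.9
print(round(ratio, 3))
```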
which is 10 times more. Correspondingly, the power dissipation in the resistor Ree
would also be 10 times greater, or 4.5 W, compared to 0.45 W for Q6.
A high incremental resistance ro is also important for achieving a high CMRR,
because it gives the differential amplifier a higher immunity to power supply voltage
variations (which are also a common mode signal).
A simple way of improving the current generator, thus achieving even greater
CMRR factors, is shown in Fig. 3.7.4, where negative feedback, provided by the Q5
gain, is used to stabilize the collector current of Q6 and increase the incremental
resistance, whilst a low voltage zener diode (named after its inventor, the American
physicist Clarence M. Zener, 1905-1993) reduces the Vbe thermal drift of Q5, owing to
an almost equal, but opposite, thermal coefficient.

In this circuit any increase in the Q6 collector current Ic6 is sensed by its voltage
drop on R2, increasing Vb5, which in turn increases Ie5, thus reducing Ib6 and therefore
also Ic6. The feedback reduction factor is nearly equal to the Q5 current gain β.
Effectively, the output resistance is increased from ro of Eq. 3.7.10 to about β ro. Note
that this circuit does not rely on identical transistor parameters, so it can be used in
discrete circuits.
[Fig. 3.7.4 schematic: the improved current generator Q5, Q6 with zener DZ and R2 to Vee; a) the voltage drops VZ + 2Vbe and VZ + Vbe are marked; b) the same circuit with the currents Ie6, Ib5 and IR2 = (VZ + Vbe)/R2 marked.]
Fig. 3.7.4: Improved current generator: a) voltage drops; b) current analysis.
The circuits shown and only briefly discussed here should give the reader a
starting point in current control design. Many more circuits, either simple or more
elaborate, can be found in the references quoted.
Let us return to the differential cascode amplifier of Fig. 3.7.2, where the basic
analysis was done with the assumption that the inputs were driven by a differential
voltage source. Here we shall analyze a current driven stage, which could suitably be
used as an intermediate amplifier stage with T-coil peaking at the input, such as the
cascode of Fig. 3.6.1. The T-coil peaking has already been analyzed in detail, so we
shall concentrate on the active part of the circuit and see how it can be improved.

In Fig. 3.8.1 we have the differential cascode circuit, driven differentially by the
current source is. This current source is loaded by the resistance Rs, which we have split
in half, grounding the middle point. The output currents from the collectors of Q3 and
Q4 drive the differential inputs of the next amplifying stage, in which we are not
interested now. What we are interested in is the current gain of this differential stage. The
reader who has followed the previous analysis with some attention should by now be
able to understand the following relation intuitively:

   io/is ≈ (Rs/Re) · 1/[1 + s (τT1 Rs/Re + Rs Cμ/2)] · 1/(1 + s τT2)          (3.8.1)
[Fig. 3.8.1 schematic: the current source is loaded by Rs/2 + Rs/2 with a grounded mid-point, driving the pair Q1, Q2 (time constant τT1, emitter network Re/2 + Re/2 with Cee = τT1/Re, tail current Iee to Vee); the cascode pair Q3, Q4 (τT2, base at Vbb, collector capacitances Cμ1, Cμ2) delivers io.]
Fig. 3.8.1: The current driven differential cascode amplifier. We assume that the transistor
pair Q1,2, and the pair Q3,4, respectively, have identical main parameters. The emitters of
Q1,2 see the Re Cee network, set to equal the time constant τT1 of transistors Q1,2.
This means that by reducing the gain Ai we can extend the bandwidth. On this
basis there arises the idea that we can add another differential stage to double the
current (and therefore also the current gain) and then optimize the stage, choosing
between doubled gain at the same bandwidth, doubled bandwidth at the same gain,
or any factor in between.

The basic fT doubler circuit, developed by C. R. Battjes [Ref. 3.1], is presented
in Fig. 3.8.2. Each differential pair amplifies the voltage drop on its own Rs/2 and each
pair sees its own Re between the emitters, thus the current gain is simply:
   io/is ≈ (Rs/Re) · 1/[1 + s Rs (τT1/2Re + Cμ/2)] · 1/(1 + s τT2)

        ≈ Ai · 1/[1 + s Ai (τT1/2 + Re Cμ/2)] · 1/(1 + s τT2)          (3.8.4)
[Fig. 3.8.2 schematic: fT doubler; two differential pairs Q1a, Q2a and Q1b, Q2b (each with τT1, emitter network Re/2 + Re/2, Cee = τT1/Re, tail currents Iee1, Iee2), inputs taken from Rs/2 + Rs/2; the collector currents are summed, cross-coupled, into the cascode pair Q3, Q4 (τT2, base at Vbb, Cμ1, Cμ2) to give io.]
Fig. 3.8.2: The basic fT doubler circuit. We assume equal transistors for Q1a,2a and Q1b,2b.
The low input impedance of the Q3,4 emitters allows summing the collector currents of each
pair, cross-coupled for in-phase signal summing.
   Ci = 2 τT1/Re + Cμ     for the circuit in Fig. 3.8.1          (3.8.5)

   Ci = τT1/Re + Cμ       for the circuit in Fig. 3.8.2          (3.8.6)
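With τT1 = 1/(2π fT), Eq. 3.8.5 and 3.8.6 reproduce the input capacitances quoted further below (6.5 pF and 3.7 pF); Re ≈ 16.5 Ω is an assumed value chosen here to match those figures:

```python
import math

fT  = 3.5e9        # transistor transition frequency, Hz
Cmu = 1e-12        # F
Re  = 16.5         # ohm (assumed, reproduces the quoted capacitances)

tau_T1 = 1 / (2 * math.pi * fT)

Ci_conventional = 2 * tau_T1 / Re + Cmu   # Eq. 3.8.5, Fig. 3.8.1
Ci_doubler      =     tau_T1 / Re + Cmu   # Eq. 3.8.6, Fig. 3.8.2

print(round(Ci_conventional * 1e12, 2), round(Ci_doubler * 1e12, 2))
```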
Another problem is that, although the transfer function is of second order, there
are two real poles, so we cannot 'tune' the system for efficient peaking. By forcing
the system to have complex conjugate poles with emitter peaking, we would increase
the emitter capacitance Cee, which would be reflected into the base as an increased
input capacitance; this would increase exactly that term which we have just halved.

A quick estimate will give us a little more feeling for the achievable
improvement. Let us have a number of transistors with fT = 3.5 GHz, Cμ = 1 pF,
Rs = 2 × 50 Ω, a Q3,4 collector load RL = 2 × 50 Ω, CL = 1 pF and the total current
gain Ai = 3. Assuming that the system's response is governed by a dominant pole, we
can calculate the rise time of the conventional system as:

   tr = 2.2 √{ Ai² [1/(2π fT) + Re Cμ/2]² + [1/(2π fT)]² + [RL (CL + Cμ)]² }          (3.8.7)
Then, for the ordinary differential cascode in Fig. 3.8.1:
and the improvement factor is 1.34, much less than 2. Transistors with a lower fT might
give an apparently greater improvement (about 1.7 could be expected) owing to the
lower contribution of the source's impedance. However, it seems that a better idea
would be to remain with the original bandwidth and use the gain doubling instead,
which could lead to a system with fewer stages, which in turn could be
optimized more easily.
On the other hand, the reduced input capacitance is really beneficial for the
loading of the input T-coils. With the data from the example above, we can calculate
the T-coils for the conventional and the doubler system and the resulting bandwidths.
From Eq. 3.6.21 we can find that:

   ωH = √[ 12/(RL Ci)² · (1 + rb/RL) ]          (3.8.10)

By assuming rb = 15 Ω and a Ci of 6.5 and 3.7 pF, respectively (Eq. 3.8.5 and 3.8.6),
we can calculate an fH of 1.9 and 3.4 GHz, a ratio of nearly 1.8, which is worth
considering.
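Evaluating Eq. 3.8.10 with rb = 15 Ω, the quoted input capacitances and an assumed T-coil termination of RL = 50 Ω reproduces the quoted bandwidths:

```python
import math

rb, RL = 15.0, 50.0        # ohm; RL = 50 ohm is an assumed termination

def fH(Ci):
    # Eq. 3.8.10: wH = sqrt(12/(RL*Ci)^2 * (1 + rb/RL)), converted to Hz
    wH = math.sqrt(12 / (RL * Ci)**2 * (1 + rb / RL))
    return wH / (2 * math.pi)

print(round(fH(6.5e-12) / 1e9, 2), round(fH(3.7e-12) / 1e9, 2))
```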
In principle one could use the same doubler implementation with 4, 6, or more
transistor pairs; however, the input capacitance poses a practical limit. A system with 4
pairs is already slower than the system with two pairs.
Wideband signals usually come from two types of source: low impedance sources
are usually the outputs of other wideband amplifiers, while medium and high
impedance sources are usually sensors or other very low power sources.

In the first case we employ standardized impedances, 50 Ω or 75 Ω, so that both
the source and the load have the same impedance as the characteristic impedance of the
cable that connects them. In this way we preserve the bandwidth and prevent
reflections which would distort the signal, but we pay for this with a 50 % (6 dB)
attenuation of the signal's amplitude.

In the second case we also want a standardized value of the impedance, but this
time the value is 1 MΩ, in parallel with some (inevitable) capacitance, usually 20 pF
(but values from 10 to 25 pF can also be found). The standardized capacitance is
helpful not only in determining the loading of the source at high frequencies, but also
in allowing the use of 'probes', which are actually special HF attenuators (÷10 or ÷100),
so that the source can be loaded by a 10 MΩ or even a 100 MΩ resistance, while
keeping the loading capacitance below some 12 pF.

With the improvement of semiconductor production processes, the so called
'active' probes have been developed, used mostly for extremely wideband signals, such
as those found in modern communications and digital computers. Active probes usually
have a 10 kΩ ∥ 2 pF input impedance, with no reduction in amplitude.
The key component of both high input impedance amplifiers and active probes
is the JFET (junction field effect transistor) source follower [Ref. 3.16].

The basic JFET source follower circuit configuration is shown in Fig. 3.9.1. In
contrast to the BJT emitter follower (with an input resistance of about β Re), the JFET
source follower has a very high input resistance (between 10⁹ and 10¹² Ω), owed to the
specific construction of the JFET. Its gate (a p–n junction with the drain–source
channel) is reverse biased in normal operation, modulating the channel width by the
electric field only, so the input current is mainly owed to the reverse biased p–n
junction leakage and to the input capacitances, Cgd and Cgs.
[Fig. 3.9.1 schematic: a) JFET Q1 between Vdd and the load ZL at the source s; b) the same circuit with an ideal JFET and Cgd, Cgs drawn as external components; c) equivalent circuit with the controlled source gm vGS driving ZL, Cgs between g and s, and Cgd from g to ground.]
Fig. 3.9.1: The JFET source follower: a) circuit schematic; b) the same
circuit, but with an ideal JFET and the inter-electrode capacitances drawn as
external components; c) equivalent circuit.
A MOSFET (metal oxide semiconductor field effect transistor) has an even greater input
resistance (up to ~10¹⁵ Ω); however, it also has a greater input capacitance (between 20
and 200 pF; it is also more noisy and more sensitive to damage by being overdriven), so
it is not suitable for a wideband amplifier input stage.
In Fig. 3.9.1b we have drawn an ideal JFET device with its inter-electrode
capacitances modeled as external components. These capacitances determine the
response at high frequencies [Ref. 3.8, 3.16, 3.20, 3.35]. Fig. 3.9.1c shows the
equivalent circuit.

The source follower is actually the common drain circuit with a voltage gain of
nearly unity, as the name 'follower' implies. The meaning of the circuit components is:

The JFET drain is connected to the power supply, which must be a short circuit
for the drain signal current; therefore we can connect Cgd to ground, in parallel with the
signal source. We assume the signal source impedance to be zero, so we can forget
about Cgd for a while.

From the equivalent circuit in Fig. 3.9.1c we find the currents for the node g:
have done in the differential amplifier. By doing this, we increase the real (resistive)
part of the loading impedance, but we can do little to reduce the ever-present loading
capacitance CL.
[Fig. 3.9.2 schematic: a) JFET source follower biased by the current generator Is between Vdd and Vss, gate driven at g, source loaded by CL; b) the input impedance Zi at the gate modeled as Cgd in parallel with the series combination of Cx = -Cgs CL/(Cgs + CL) and Rx = -(Cgs + CL)²/(gm Cgs CL).]
Fig. 3.9.2: The JFET source follower biased by a current generator and loaded only by the
inevitable stray capacitance CL: a) circuit schematic; b) the input impedance has two
negative components, owed to Cgs and gm (see Sec. 3.9.5).
In Eq. 3.9.4 the term Cgs/gm is obviously the characteristic JFET time constant, τFET:

   Cgs/gm = τFET = 1/ωFET          (3.9.5)

   Av = vs/vG = (1 + jω/ωFET) / [1 + jω (1/ωFET + CL/gm)]          (3.9.6)

             = (1 + jω/ωFET) / [1 + jω/(ωFET Dc)]          (3.9.7)

Here Dc is the input to output capacitive divider, which would set the output voltage if
only the capacitances were in place:

   Dc = Cgs/(Cgs + CL)          (3.9.8)

We would like to express Eq. 3.9.7 by its pole s1 and zero s2, so we need the
normalized canonical form:

   F(s) = A0 · [s1 (s + s2)] / [s2 (s + s1)]          (3.9.9)
   F(s) = Dc (s + ωFET) / (s + ωFET Dc)          (3.9.10)
These simple relations are the basis from which we shall calculate the frequency
response magnitude, the phase, the group delay and the step response of the JFET
source follower (simplified at first and including the neglected components later).
   |F(ω)| = √{ [1 + (ω/ωFET)²] / [1 + (ω/(ωFET Dc))²] }          (3.9.14)
Since we want to examine the influence of loading, we shall plot the transfer
function for three different values of the ratio CL/Cgs: 0.5, 1.0, and 2.0 (the
corresponding values of Dc being 0.67, 0.5, and 0.33, respectively). The plots, shown
in Fig. 3.9.3, have three distinct frequency regions: in the lowest one, the circuit behaves
as a voltage follower, with the JFET playing an active role, so that vs = vG, whilst in
the highest frequency region only the capacitances are important; in between we have a
transition between the two operating modes.
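A numeric check of Eq. 3.9.14 confirms the limiting behavior of the three regions: unity gain at low frequencies and the capacitive divider value Dc at high frequencies:

```python
import math

def mag(w, Dc):
    # Eq. 3.9.14, with w normalized to wFET
    return math.sqrt((1 + w**2) / (1 + (w / Dc)**2))

for CL_Cgs in (0.5, 1.0, 2.0):
    Dc = 1 / (1 + CL_Cgs)            # Eq. 3.9.8 rewritten with the ratio CL/Cgs
    low, high = mag(1e-3, Dc), mag(1e3, Dc)
    print(round(Dc, 2), round(low, 4), round(high, 4))
```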
[Fig. 3.9.3 plot: |vs/vG| versus ω/ωFET, curves a) CL/Cgs = 0.5, b) 1.0, c) 2.0, with the circuit inset (source follower, RG = 0, ωFET = gm/Cgs).]
Fig. 3.9.3: Magnitude of the frequency response of the JFET source follower for three
different capacitance ratios CL/Cgs. The pole s3 = -1/(RG Ci) has not been taken into
account here (see Fig. 3.9.7 and Fig. 3.9.8).
The relation for the upper cutoff frequency is very interesting. If we set |F(ω)|
to be equal to:

   |F(ωh)| = 1/√2          (3.9.15)

it follows that:

   ωh = ωFET Dc / √(1 - 2 Dc²)          (3.9.16)

From Eq. 3.9.16 we can conclude that by putting Dc = 1/√2 the denominator is
reduced to zero, thus ωh = ∞. However attractive the possibility of achieving an
infinite bandwidth may seem, it can never be realized in practice, because any signal
source will have some, although small, internal resistance RG, resulting in an
additional input pole s3 = -1/(RG Ci), where Ci is the total input capacitance of the
JFET. The complete transfer function will then be (see Fig. 3.9.7 and 3.9.8):

   F(s) = (s1 s3/s2) · (s + s2) / [(s + s1)(s + s3)]          (3.9.17)
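Eq. 3.9.16 evaluated for the three divider values used in the plots shows how the bandwidth grows as Dc approaches 1/√2:

```python
import math

def wh(Dc):
    # Eq. 3.9.16, result normalized to wFET; valid for Dc < 1/sqrt(2)
    return Dc / math.sqrt(1 - 2 * Dc**2)

for Dc in (0.33, 0.5, 0.67):
    print(Dc, round(wh(Dc), 3))
```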
3.9.2 Phase

We obtain the phase response from Eq. 3.9.7 by taking the arctangent of the
ratio of the imaginary to the real part of F(jω):

   φ(ω) = arctan [ ℑ{F(jω)} / ℜ{F(jω)} ]          (3.9.18)

Since in Eq. 3.9.7 we have a single real pole and a single real zero, the resulting
phase angle is calculated as:

   φ(ω) = arctan (ω/ωFET) - arctan [ω/(ωFET Dc)]          (3.9.19)

In Fig. 3.9.4 the phase plots for the same three CL/Cgs ratios are shown. Because
of the zero, the phase returns to the initial value at high frequencies.
[Fig. 3.9.4 plot: phase φ[°] from 0 down to -45° and back versus ω/ωFET, curves a) CL/Cgs = 0.5, b) 1.0, c) 2.0, with the circuit inset (RG = 0, ωFET = gm/Cgs).]
Fig. 3.9.4: Phase plots of the JFET source follower for the same three capacitance ratios.
   τe = dφ/dω          (3.9.20)

but we usually prefer the normalized expression τe ωh. In our case, however, the upper
cutoff frequency ωh changes with the capacitance divider Dc. So instead of ωh we
shall, rather, normalize the envelope delay to the characteristic frequency of the JFET
itself, ωFET:

   τe ωFET = 1/[1 + (ω/ωFET)²] - Dc/[Dc² + (ω/ωFET)²]          (3.9.21)
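From Eq. 3.9.21 the sign change can be located analytically: the delay turns into advance at ω = √Dc · ωFET. A short check for the CL/Cgs = 1 case:

```python
import math

def tau(w, Dc):
    # Eq. 3.9.21, normalized: tau_e * wFET as a function of w/wFET
    return 1 / (1 + w**2) - Dc / (Dc**2 + w**2)

Dc = 0.5                       # the CL/Cgs = 1 case
w_cross = math.sqrt(Dc)        # predicted zero crossing of the envelope delay

print(round(tau(0.0, Dc), 3))          # low-frequency delay: 1 - 1/Dc
print(round(tau(w_cross, Dc), 6))      # zero at w = sqrt(Dc)*wFET
print(tau(2.0, Dc) > 0)                # advance region above the crossing
```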
The envelope delay plots for the three capacitance ratios are shown in Fig. 3.9.5.
Note that for all three ratios there is a frequency region in which the envelope delay
becomes positive, implying an output signal advance which, in correlation with the
phase plots, increases with frequency. We have explained the physical background of
this behavior in Part 2, Fig. 2.2.5 and 2.2.6. The positive envelope delay influences the
input impedance in a very unfavorable way, as we shall soon see.
[Fig. 3.9.5 plot: normalized envelope delay τe ωFET versus ω/ωFET; the delay is negative at low frequencies and shows an advance (positive) region at high frequencies; curves a) CL/Cgs = 0.5, b) 1.0, c) 2.0, with the circuit inset (RG = 0, ωFET = gm/Cgs).]
Fig. 3.9.5: The JFET envelope delay for the three capacitance ratios. Note the positive
peak (phase advance region): trouble in sight!
We are going to use Eq. 3.9.9, which we multiply by the unit step operator $1/s$
to obtain the step response in the complex frequency domain; we then obtain the time
domain response by applying the inverse Laplace transform:

$$G(s) = \frac{1}{s}\,F(s) = D_{\rm c}\,\frac{s - s_2}{s\,(s - s_1)} \qquad (3.9.22)$$

$$g(t) = \mathcal{L}^{-1}\{G(s)\} = D_{\rm c}\sum_{s=0,\;s_1}\mathrm{res}\,\frac{(s - s_2)\,e^{st}}{s\,(s - s_1)} \qquad (3.9.23)$$
and, by considering Eq. 3.9.12 and 3.9.13, as well as that $\omega_{\rm FET} = 1/\tau_{\rm FET}$, and using the
normalized time $t/\tau_{\rm FET}$, we end up with:

$$g(t) = 1 - (1 - D_{\rm c})\,e^{-D_{\rm c}\,t/\tau_{\rm FET}} \qquad (3.9.27)$$

The plot of this relation is shown in Fig. 3.9.6, again for the same three
capacitance ratios. The initial output signal jump at $t = 0$ is the input signal cross-talk
(through $C_{\rm gs}$) multiplied by the $D_{\rm c}$ factor:

$$g(0) = D_{\rm c} \qquad (3.9.28)$$

Following the jump is an exponential relaxation towards the normal follower action at
lower frequencies. If the input pole $s_3 = -1/R_{\rm G} C_{\rm i}$ were taken into account, the jump
would be slowed down to an exponential rise with a time constant of $R_{\rm G} C_{\rm i}$.
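Eq. 3.9.27 and 3.9.28 can be verified directly (a sketch; the divider definition $D_{\rm c} = C_{\rm gs}/(C_{\rm gs}+C_{\rm L})$ is assumed, and $t$ is in units of $\tau_{\rm FET}$):

```python
import math

def g_step(t, Dc):
    # Eq. 3.9.27, with t expressed in units of tau_FET
    return 1.0 - (1.0 - Dc) * math.exp(-Dc * t)

for ratio in (0.5, 1.0, 2.0):            # CL/Cgs
    Dc = 1.0 / (1.0 + ratio)             # assumed divider definition
    print(round(g_step(0.0, Dc), 3),     # initial jump equals Dc (Eq. 3.9.28)
          round(g_step(50.0, Dc), 3))    # settles to 1: normal follower action
```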
[Plot: $g(t)$ vs. $t/\tau_{\rm FET}$, 0 to 8, for $C_{\rm L}/C_{\rm gs}$ = a) 0.5, b) 1.0, c) 2.0; each response jumps to $D_{\rm c}$ at $t = 0$ (capacitive divider action), then relaxes exponentially toward 1 (transition to normal follower action); circuit inset: source follower with $R_{\rm G} = 0$, $\tau_{\rm FET} = C_{\rm gs}/g_{\rm m}$.]
Fig. 3.9.6: The JFET source follower step response for the three capacitance ratios.
$$v_{\rm G} = v_{\rm s}\left(1 + \frac{s\,C_{\rm L}}{s\,C_{\rm gs} + g_{\rm m}}\right)\left(1 + s R_{\rm G} C_{\rm gd} + s R_{\rm G} C_{\rm gs}\right) - v_{\rm s}\,s R_{\rm G} C_{\rm gs} \qquad (3.9.32)$$
Now we put this into the normalized canonical form and use Eq. 3.9.5 again to
replace the term $g_{\rm m}/C_{\rm gs}$ with $\omega_{\rm FET}$. Also, we express all the time constants as functions
of $\omega_{\rm FET}$ and the appropriate capacitance ratios. Finally, we want to see how the response
depends on the product $g_{\rm m} R_{\rm G}$, so we multiply all the terms containing $R_{\rm G}$ by $g_{\rm m}$ and
compensate each of them accordingly. The final expression is:
$$\frac{v_{\rm s}}{v_{\rm G}} =
\frac{\dfrac{\omega_{\rm FET}}{g_{\rm m} R_{\rm G}}\,\dfrac{C_{\rm gs}}{C_{\rm gd}}\;\dfrac{s + \omega_{\rm FET}}{N}}
{s^2 + s\,\omega_{\rm FET}\,\dfrac{1 + \dfrac{1}{g_{\rm m} R_{\rm G}}\left(\dfrac{C_{\rm gs}}{C_{\rm gd}} + \dfrac{C_{\rm L}}{C_{\rm gd}}\right)}{N} + \omega_{\rm FET}^2\,\dfrac{C_{\rm gs}}{C_{\rm gd}}\,\dfrac{1}{g_{\rm m} R_{\rm G}\,N}}
\qquad (3.9.34)$$

where we have abbreviated $N = 1 + C_{\rm L}/C_{\rm gs} + C_{\rm L}/C_{\rm gd}$.
To plot the responses we shall set:

$$\frac{C_{\rm L}}{C_{\rm gs}} = 1, \quad \frac{C_{\rm gs}}{C_{\rm gd}} = 5, \quad \omega_{\rm FET} = 1, \quad s = j\omega, \quad g_{\rm m} R_{\rm G} = [0.3;\ 1;\ 3]$$
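With these values Eq. 3.9.34 can be cross-checked numerically (a sketch in our own notation; the function name and sampling grid are ours, the element ratios follow the text's plotting choice):

```python
def H(s, gmRG, CL_Cgs=1.0, Cgs_Cgd=5.0, w_fet=1.0):
    # Eq. 3.9.34 with N = 1 + CL/Cgs + CL/Cgd
    CL_Cgd = CL_Cgs * Cgs_Cgd
    N = 1.0 + CL_Cgs + CL_Cgd
    num = w_fet * Cgs_Cgd * (s + w_fet) / (gmRG * N)
    den = (s * s + s * w_fet * (1.0 + (Cgs_Cgd + CL_Cgd) / gmRG) / N
           + w_fet * w_fet * Cgs_Cgd / (gmRG * N))
    return num / den

for gmRG in (0.3, 1.0, 3.0):
    peak = max(abs(H(1j * 10 ** (k / 50.0 - 2.0), gmRG)) for k in range(300))
    # DC gain is exactly 1 in every case; only gmRG = 3 peaks above it
    print(gmRG, round(abs(H(0j, gmRG)), 6), round(peak, 3))
```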
[Plot: normalized magnitude vs. $\omega/\omega_{\rm FET}$, 0.01 to 100, for $g_{\rm m} R_{\rm G}$ = a) 0.3, b) 1.0, c) 3.0, with $C_{\rm L}/C_{\rm gs} = 1$; circuit inset: source follower driven through $R_{\rm G}$, with $C_{\rm gd}$, $\omega_{\rm FET} = g_{\rm m}/C_{\rm gs}$.]
Fig. 3.9.7: The JFET source follower frequency response for a ratio $C_{\rm L}/C_{\rm gs} = 1$ and a
variable signal source impedance, such that $g_{\rm m} R_{\rm G}$ is 0.3, 1, and 3, respectively. Note the
response peaking for $g_{\rm m} R_{\rm G} = 3$.
[Plot: step response $g(t)$ vs. $t/\tau_{\rm FET}$, 0 to 8, for $g_{\rm m} R_{\rm G}$ = a) 0.3, b) 1.0, c) 3.0; circuit inset as in Fig. 3.9.7.]
Fig. 3.9.8: The JFET source follower step response for the same conditions as in Fig. 3.9.7.
In Fig. 3.9.7 and 3.9.8 we have seen how the JFET source follower response is
affected by its input impedance; this behavior becomes evident when the signal source
has a non-zero resistance. Here, we are going to explore the circuit in more depth to
examine the influence of a complex and, in particular, inductive signal source.
As we have done in the previous analysis, the gate–drain capacitance $C_{\rm gd}$ will
appear in parallel with the input, so we can treat its admittance separately and
concentrate on the remaining input components.
We start from Eq. 3.9.1 by solving it for $v_{\rm s}$:

$$v_{\rm s} = v_{\rm G} - \frac{i_{\rm i}}{s\,C_{\rm gs}} \qquad (3.9.35)$$
This we insert into Eq. 3.9.2:

$$v_{\rm G}\,(s C_{\rm gs} + g_{\rm m}) - \left(v_{\rm G} - \frac{i_{\rm i}}{s C_{\rm gs}}\right)\left(s C_{\rm gs} + g_{\rm m} + \frac{1}{Z_{\rm L}}\right) = 0 \qquad (3.9.36)$$

Because the JFET source is biased by a constant current generator (whose
impedance we assume to be infinite), the loading admittance is $1/Z_{\rm L} = s C_{\rm L}$. Let us put
this back into Eq. 3.9.36 and rearrange it a little:

$$v_{\rm G}\,s C_{\rm L} = i_{\rm i}\left(1 + \frac{g_{\rm m}}{s C_{\rm gs}} + \frac{C_{\rm L}}{C_{\rm gs}}\right) \qquad (3.9.37)$$
Furthermore:

$$v_{\rm G} = i_{\rm i}\left(\frac{1}{s C_{\rm L}} + \frac{g_{\rm m}}{s^2 C_{\rm gs} C_{\rm L}} + \frac{1}{s C_{\rm gs}}\right)
= i_{\rm i}\,\frac{s\,(C_{\rm gs} + C_{\rm L}) + g_{\rm m}}{s^2\,C_{\rm gs} C_{\rm L}} \qquad (3.9.38)$$

$$Z_{\rm i}' = \frac{v_{\rm G}}{i_{\rm i}} = \frac{s\,(C_{\rm gs} + C_{\rm L}) + g_{\rm m}}{s^2\,C_{\rm gs} C_{\rm L}} \qquad (3.9.39)$$
To see more clearly how this impedance is composed, we invert it to find the
admittance and apply a continued fraction expansion in order to identify the
individual components:

$$Y_{\rm i}' = \frac{s^2\,C_{\rm gs} C_{\rm L}}{s\,(C_{\rm gs} + C_{\rm L}) + g_{\rm m}}
= s\,\frac{C_{\rm gs} C_{\rm L}}{C_{\rm gs} + C_{\rm L}} - \frac{g_{\rm m}\,s\,\dfrac{C_{\rm gs} C_{\rm L}}{C_{\rm gs} + C_{\rm L}}}{s\,(C_{\rm gs} + C_{\rm L}) + g_{\rm m}} \qquad (3.9.40)$$

The first fraction is the admittance of the capacitances $C_{\rm gs}$ and $C_{\rm L}$ connected in series.
Let us name this combination $C_{\rm x}$:

$$C_{\rm x} = \frac{C_{\rm gs} C_{\rm L}}{C_{\rm gs} + C_{\rm L}} \qquad (3.9.41)$$
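The continued fraction split of Eq. 3.9.40, and the component values which follow from it, can be confirmed numerically (a sketch; the element values and test frequency are arbitrary normalized choices of ours):

```python
# Numeric check of the continued fraction split in Eq. 3.9.40
Cgs, CL, gm = 1.0, 2.0, 1.0
s = 2.5j                                   # arbitrary test frequency

Yi = s * s * Cgs * CL / (s * (Cgs + CL) + gm)   # left side of Eq. 3.9.40
Cx = Cgs * CL / (Cgs + CL)                      # Eq. 3.9.41
Y2 = -gm * s * Cx / (s * (Cgs + CL) + gm)       # second (negative) branch
print(abs(Yi - (s * Cx + Y2)) < 1e-12)          # True: the split is exact

Rx = (Cgs + CL) ** 2 / (gm * Cgs * CL)          # negative resistance magnitude
print(abs(1.0 / Y2 - (-Rx - 1.0 / (s * Cx))) < 1e-12)  # True: -Rx in series -Cx
```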
The second fraction, which has a negative sign, must be further simplified. We invert it
again, and after some simple rearrangement we obtain the impedance:

$$Z_{\rm x} = -\frac{(C_{\rm gs} + C_{\rm L})^2}{g_{\rm m}\,C_{\rm gs} C_{\rm L}} - \frac{C_{\rm gs} + C_{\rm L}}{s\,C_{\rm gs} C_{\rm L}} \qquad (3.9.42)$$

The first part is interpreted as a negative resistance, which we label $-R_{\rm x}$ in order
to follow the negative sign in the subsequent analysis more clearly:

$$R_{\rm x} = \frac{(C_{\rm gs} + C_{\rm L})^2}{g_{\rm m}\,C_{\rm gs} C_{\rm L}} \qquad (3.9.43)$$

The second part is interpreted as a negative capacitance, which we label $-C_{\rm x}$ because it has the same
absolute value as $C_{\rm x}$ from Eq. 3.9.41:

$$C_{\rm x} = \frac{C_{\rm gs} C_{\rm L}}{C_{\rm gs} + C_{\rm L}} \qquad (3.9.44)$$
Now that we have all the components we can reintroduce the gate–drain capacitance
$C_{\rm gd}$, so that the final equivalent input impedance looks like Fig. 3.9.9. We can write the
complete input admittance:

$$Y_{\rm i} = j\omega\,(C_{\rm gd} + C_{\rm x}) + \frac{1}{-R_{\rm x} + \dfrac{1}{j\omega\,(-C_{\rm x})}} \qquad (3.9.45)$$
[Schematic, three panels: a) the source follower with its equivalent input network, $C_{\rm gd}$ and $C_{\rm x}$ in parallel with the series $-R_{\rm x}$, $-C_{\rm x}$ branch; b) the same follower driven through an inductance $L_{\rm G}$; c) the resulting equivalent Colpitts oscillator.]
Fig. 3.9.9: a) The equivalent input impedance of the capacitively loaded JFET source
follower has negative components, which can be a nuisance if, as in b), the signal source
has an inductive impedance, forming c) a familiar Colpitts oscillator. If $C_{\rm gd}$ is small, the
circuit will oscillate for a broad range of inductance values.
We can separate the real and imaginary part of $Y_{\rm i}$ by putting Eq. 3.9.45 over a
common denominator:

$$Y_{\rm i} = \Re\{Y_{\rm i}\} + j\,\Im\{Y_{\rm i}\}$$
The negative real part can cause some serious trouble [Ref. 24]. Suppose we are
troubleshooting a circuit with a switching power supply which we suspect to be the cause
of strong electromagnetic interference (EMI); we want to use a coil of an
appropriate inductance $L$ (which, of course, has its own real and imaginary impedance
components) to probe the various parts of the circuit for EMI intensity and field direction. If we
connect this coil to the source follower and the coil resistance is low, the negative input
conductance can exceed the coil loss conductance at the resonance set by the coil and the
input capacitance, and the source follower becomes a familiar Colpitts oscillator, Fig. 3.9.9c [Ref. 25].
Indeed, some older oscilloscopes would burst into oscillation if connected to such a
coil with the input attenuator switched to maximum sensitivity (a few highly priced
instruments built by respectable firms back in the early 1970s were no exception).
By taking into account Eq. 3.9.42, 3.9.43 and 3.9.9, and substituting
$\omega_{\rm FET} = g_{\rm m}/C_{\rm gs}$, the real part of the input admittance can be rewritten as:

$$\Re\{Y_{\rm i}\} = G_{\rm i} = -g_{\rm m}\,\frac{C_{\rm L}}{C_{\rm gs}} \cdot \frac{(\omega/\omega_{\rm FET})^2}{1 + \left(\dfrac{\omega}{\omega_{\rm FET}\,D_{\rm c}}\right)^2} \qquad (3.9.48)$$

The last fraction represents the normalized frequency dependence of this admittance:

$$G_{\rm iN} = -\frac{(\omega/\omega_{\rm FET})^2}{1 + \left(\dfrac{\omega}{\omega_{\rm FET}\,D_{\rm c}}\right)^2} \qquad (3.9.49)$$
Fig. 3.9.10 shows the plots of $G_{\rm iN}$ for the same ratios of $C_{\rm L}/C_{\rm gs}$ as before. Note
the quadratic dependence on $D_{\rm c}$ at high frequencies.
[Plot: $G_{\rm iN}$ vs. $\omega/\omega_{\rm FET}$, 0.01 to 100, for $C_{\rm L}/C_{\rm gs}$ = a) 0.5, b) 1.0, c) 2.0; each curve falls from 0 toward its high frequency asymptote.]
Fig. 3.9.10: Normalized negative input conductance $G_{\rm iN}$ vs. frequency.
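As a consistency check, the closed form of Eq. 3.9.48 can be compared with the real part of the negative branch admittance computed directly (a sketch with arbitrary normalized values; $D_{\rm c} = C_{\rm gs}/(C_{\rm gs}+C_{\rm L})$ assumed):

```python
# Cross-check Eq. 3.9.48 against the exact branch admittance
Cgs, CL, gm = 1.0, 2.0, 1.0
w_fet = gm / Cgs
Dc = Cgs / (Cgs + CL)
Cx = Cgs * CL / (Cgs + CL)

for w in (0.2, 1.0, 5.0, 50.0):
    Y2 = -gm * 1j * w * Cx / (1j * w * (Cgs + CL) + gm)  # exact branch admittance
    Gi = -gm * (CL / Cgs) * (w / w_fet) ** 2 / (1.0 + (w / (w_fet * Dc)) ** 2)
    print(abs(Y2.real - Gi) < 1e-12)                     # True at each frequency
```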
One possible compensation is a resistance in series with the JFET gate; since we should have
some series resistance anyway in order to protect the sensitive input from static
discharge or accidental overdrive, this would seem to be the preferred choice. However,
on closer inspection, such a protection resistance is too small to prevent oscillations in
the case of an inductive signal source impedance; the resistance value which would
guarantee stability under all conditions is so high that the bandwidth would be reduced
by nearly an order of magnitude. Thus this method of compensation is used only if we
do not care how much bandwidth we obtain.
A more elegant method of compensation is the one which we have already used
in Fig. 3.5.3. If we connect a series $R_{\rm x} C_{\rm x}$ network in parallel with the
JFET input, as shown in Fig. 3.9.11, we obtain $Y_{\rm ic} = 0$ and $Z_{\rm ic} = \infty$. Note the
corresponding phasor diagram: we first draw the negative components, $-R_{\rm x}$ and $-C_{\rm x}$,
find the impedance vector $-Z_{\rm x}$ and invert it to find the negative admittance $-Y_{\rm x}$. We
then compensate it by a positive admittance $Y_{\rm x}$ such that their sum $Y_{\rm ic} = 0$. We finally
invert $Y_{\rm x}$ to find $Z_{\rm x}$ and decompose it into its real and imaginary parts, $R_{\rm x}$ and $C_{\rm x}$:

$$Y_{\rm ic} = \frac{1}{-R_{\rm x} + \dfrac{1}{j\omega\,(-C_{\rm x})}} + \frac{1}{R_{\rm x} + \dfrac{1}{j\omega\,C_{\rm x}}} = 0 \qquad (3.9.50)$$
[Diagram: a) the input network with $-R_{\rm x}$, $-C_{\rm x}$ paralleled by the compensating series $R_{\rm x} C_{\rm x}$ network, giving $Y_{\rm ic} = 0$; b) phasor diagram showing $-Z_{\rm x}$, $-Y_{\rm x}$, the compensating $Y_{\rm x}$, and $Z_{\rm x}$ in the complex plane.]
Fig. 3.9.11: a) The negative components of the input impedance can be compensated by an
equal but positive network, connected in parallel, so that their admittances sum to zero
(infinite impedance). In b) we see the corresponding phasor diagram.
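Eq. 3.9.50 is easily confirmed numerically (arbitrary illustrative values):

```python
# The compensating network nulls the negative branch admittance, Eq. 3.9.50
Rx, Cx = 4.0, 0.5                        # arbitrary positive values
for w in (0.1, 1.0, 10.0):
    Z_neg = -Rx - 1.0 / (1j * w * Cx)    # the -Rx, -Cx branch
    Z_pos = Rx + 1.0 / (1j * w * Cx)     # the added series RxCx network
    Y_ic = 1.0 / Z_neg + 1.0 / Z_pos
    print(abs(Y_ic) < 1e-12)             # True at every frequency
```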
With this compensation, the total input impedance is that of the parallel connection
of $C_{\rm gd}$ with $C_{\rm x}$ and, assuming a 1 M$\Omega$ gate bias resistor $R_{\rm in}$:

$$Z_{\rm i} = \frac{1}{\dfrac{1}{R_{\rm in}} + j\omega\,(C_{\rm gd} + C_{\rm x})}
= \frac{R_{\rm in}}{1 + j\omega\left(C_{\rm gd} + \dfrac{C_{\rm gs} C_{\rm L}}{C_{\rm gs} + C_{\rm L}}\right) R_{\rm in}} \qquad (3.9.51)$$
The analysis of the input impedance would be incomplete without Fig. 3.9.12,
where Nyquist diagrams of the impedance reveal its frequency dependence, as well as
the influence of different signal source impedances.
[Diagram, four panels a)–d): the source follower input circuit and the corresponding Nyquist plot of $Z_{\rm i}$ for a) a purely capacitive input in parallel with $R_{\rm in}$ = 1 MΩ; b) the same with the negative components $-R_{\rm x}$, $-C_{\rm x}$ added near the origin; c) an inductive source $L_{\rm G}$, with the plot crossing the negative real axis at $f_{\rm osc}$; d) the compensated input.]
Fig. 3.9.12: a) The input impedance of the JFET source follower, assumed to be purely capacitive and in
parallel with a 1 MΩ gate biasing resistor; thus at $f = 0$ we see only the resistor and at $f = \infty$ the
reactance of the input capacitance is zero; b) the negative input impedance components affect the input
impedance near the origin; c) with an inductive signal source, the point at which the impedance crosses
the negative real axis corresponds to the system resonant frequency, provoking oscillations; d) the
compensation removes the negative components.
In Fig. 3.9.12a the JFET gate is tied to ground by a 1 MΩ resistor which, with a
purely capacitive input impedance, gives a phasor diagram in the form of a half circle
as the frequency varies from DC to infinity.
In Fig. 3.9.12b we concentrate on the small area near the complex plane origin
(high frequencies, close to $f_{\rm FET}$), where we draw the influence of the negative input
impedance components, assuming a resistive signal source.
In Fig. 3.9.12c an inductive signal source (with a small resistive component)
causes the impedance to cross the real axis in the negative region; the circuit will
therefore oscillate at the frequency at which this crossing occurs.
Finally, in Fig. 3.9.12d we see the same situation but with the negative
components compensated as in Fig. 3.9.11. Note the small loop in the first quadrant of
the impedance plot; it is caused by the small series resistance of the coil $L_{\rm G}$, the coil
inductance, and the total input capacitance $C_{\rm in}$.
In Fig. 3.9.13 we see yet another way of compensating the negative input
impedance. Here the compensation is achieved by inserting a small resistance $R_{\rm d}$ in the
drain, thus allowing the anti-phase signal at the drain to influence the gate via $C_{\rm gd}$ and
cancel the in-phase signal coupled from the JFET source via $C_{\rm gs}$. This method is sometimes
preferred over the former one, because the PCB pads needed to accommodate the
additional compensation components also create some additional parasitic capacitance
from the gate to ground.
[Fig. 3.9.13: the source follower with a small resistance $R_{\rm d}$ in the drain; the compensating current through $C_{\rm gd}$ cancels the current of the negative input components $-R_{\rm x}$, $-C_{\rm x}$.]
Résumé of Part 3
In this part we have analyzed some basic circuits for wideband amplification,
examined their most important limitations, and explained several ways of improving
their high frequency performance. The property which can cause the most trouble, even for
experienced designers, is the negative input impedance of some of the most useful
wideband circuits, and we have shown a few possible solutions.
The reader must realize, however, that the analytical tools and solutions
presented are by no means the ultimate design examples. For a final design, many other
aspects of circuit performance must also be carefully considered, and, more often than
not, these other factors will compromise the wideband performance severely.
As we have indicated at some points, there are ways of compensating certain
unwanted circuit behavior by implementing the system in a differential configuration,
but, on the negative side, this doubles the number of active components, increasing
cost, power dissipation, circuit size, strays and parasitics and also the production and
testing complexity. From the wideband design point of view, having many active
components usually means many more poles and zeros that must be carefully analyzed
and appropriately ‘tuned’.
In Part 4 and Part 5 we shall explain some theoretical and practical techniques
for an efficient design approach at the system level.
References:
[3.1] C.R. Battjes, Amplifier Frequency Response and Risetime, AFTR Class Notes
(Amplifier Frequency and Transient Response), Tektronix, Inc., Beaverton, 1977
[3.2] P. Starič, Transistor as an Impedance Converter (in Slovenian with an English Abstract),
Elektrotehniški Vestnik, 1989, pp. 17–22.
[3.3] D.L. Feucht, Handbook of Analog Circuit Design,
Academic Press, Inc., San Diego, 1990
[3.4] I.E. Getreu, Modeling the Bipolar Transistor,
Elsevier Scientific Publishing Company, Amsterdam, 1978
[3.5] R.I. Ross, T-coil Transistor Interstage Coupling,
AFTR Class Notes, Tektronix, Inc., Beaverton, 1968
[3.6] P. Starič, Application of T-coil Transistor Interstage Coupling in Wideband Pulse Amplifiers,
Elektrotehniški Vestnik, 1990, pp. 143–152.
[3.7] P.R. Gray & R.G. Meyer, Analysis and Design of Analog Integrated Circuits,
John Wiley, New York, 1969
[3.8] B.E. Hofer, Amplifier Risetime and Frequency Response,
AFTR Class Notes, Tektronix, Inc., Beaverton, Oregon, 1982
[3.9] J. J. Ebers & J.L. Moll, Large-Signal Behavior of Junction Transistors,
Proceedings of IRE, Vol. 42, pp. 1761–1772, December 1954
[3.10] H.K. Gummel & H.C. Poon, An Integral Charge Control Model of Bipolar Transistors,
Bell Systems Technical Journal, Vol. 49, pp. 827–852, May 1970
[3.11] J.M. Early, Effects of Space-Charge Layer Widening in Junction Transistors,
Proceedings of IRE, Vol. 40, pp. 1401–1406, November 1952
[3.12] P.E. Gray & C.L. Searle, Electronic Principles, Physics, Models and Circuits,
John Wiley, New York, 1969
[3.13] L.J. Giacoletto, Electronics Designer's Handbook,
Second Edition, McGraw-Hill, New York, 1961
[3.14] P.M. Chirlian, Electronic Circuits, Physical Principles and Design,
McGraw-Hill, 1971.
[3.15] J.E. Cathey, Theory and Problems of Electronic Devices and Circuits,
Schaum's Outline Series in Engineering, McGraw-Hill, New York, 1989
[3.16] A.D. Evans, Designing with Field-Effect Transistors, (Siliconix Inc.),
McGraw-Hill, New York, 1981
[3.17] J.M. Pettit & M. M. McWhorter, Electronic Amplifier Circuits, Theory and Design,
McGraw-Hill, New York, 1961
[3.18] B. Orwiller, Vertical Amplifier Circuits,
Tektronix Inc., Beaverton, Oregon, 1969
[3.19] C.R. Battjes, Technical Notes on Bridged T-coil Peaking,
Internal Publication, Tektronix, Inc., Beaverton, Oregon, 1969
[3.20] P. Antognetti & G. Massobrio, Semiconductor Device Modelling with SPICE,
McGraw-Hill, New York, 1988
[3.21] P. Horowitz & W. Hill, The Art of Electronics,
Cambridge University Press, Cambridge, 1987
[3.22] G. Bruun, Common-Emitter Transistor Video-Amplifiers,
Proceedings of the IRE, Nov. 1956, pp. 1561–1572
P. Starič, E. Margan:
Wideband Amplifiers
Part 4: Cascading Amplifier Stages, Selection of Poles
P.Starič, E.Margan Cascading Amplifier Stages, Selection of Poles
Contents:
4.0 Introduction ....................................................................................................................................... 4.7
4.1 A Cascade of Identical, DC Coupled, RC Loaded Stages ................................................................ 4.9
4.1.1 Frequency Response and the Upper Half Power Frequency ............................................ 4.9
4.1.2 Phase Response .............................................................................................................. 4.12
4.1.3 Envelope Delay .............................................................................................................. 4.12
4.1.4 Step Response ................................................................................................................ 4.13
4.1.5 Rise time Calculation ..................................................................................................... 4.15
4.1.6 Slew Rate Limit ............................................................................................................. 4.16
4.1.7 Optimum Single Stage Gain and Optimum Number of Stages ...................................... 4.17
4.2 A Multi-stage Amplifier with Identical AC Coupled Stages ........................................................... 4.21
4.2.1 Frequency Response and Lower Half Power Frequency ................................................ 4.22
4.2.2 Phase Response .............................................................................................................. 4.23
4.2.3. Step Response ............................................................................................................... 4.24
4.3 A Multi-stage Amplifier with Butterworth Poles (MFA Response) ................................................ 4.27
4.3.1. Frequency Response ..................................................................................................... 4.31
4.3.2. Phase response .............................................................................................................. 4.32
4.3.3. Envelope Delay ............................................................................................................. 4.33
4.3.4 Step Response ................................................................................................................ 4.33
4.3.5 Ideal MFA Filter, Paley–Wiener Criterion .................................................................... 4.36
4.4 Derivation of Bessel Poles for MFED Response ............................................................................. 4.39
4.4.1 Frequency Response ...................................................................................................... 4.42
4.4.2 Upper Half Power Frequency ........................................................................................ 4.43
4.4.3 Phase Response .............................................................................................................. 4.43
4.4.4. Envelope delay .............................................................................................................. 4.45
4.4.4 Step Response ................................................................................................................ 4.45
4.4.5. Ideal Gaussian Frequency Response ............................................................................. 4.49
4.4.6. Bessel Poles Normalized to Equal Cut Off Frequency ................................................. 4.51
4.5. Pole Interpolation ........................................................................................................................... 4.55
4.5.1. Derivation of Modified Bessel poles ............................................................................ 4.55
4.5.2. Pole Interpolation Procedure ........................................................................................ 4.56
4.5.3. A Practical Example of Pole interpolation .................................................................... 4.59
4.6. Staggered vs. Repeated Bessel Pole Pairs ...................................................................................... 4.63
4.6.1. Assigning the Poles for Maximum Dynamic Range ...................................................... 4.65
Résumé of Part 4 ................................................................................................................................... 4.69
References ............................................................................................................................................. 4.71
List of Figures:
Fig. 4.1.1: A multi-stage amplifier with identical, DC coupled, RC loaded stages ............................... 4.9
Fig. 4.1.2: Frequency response of an n-stage amplifier, n = 1–10 ........................................................ 4.10
Fig. 4.1.3: A slope of −6 dB/octave equals −20 dB/decade .................................................................. 4.11
Fig. 4.1.4: Phase angle of the amplifier in Fig. 4.1.1, n = 1–10 ........................................................... 4.12
Fig. 4.1.5: Envelope delay of the amplifier in Fig. 4.1.1, n = 1–10 ..................................................... 4.13
Fig. 4.1.6: Amplifier with n identical DC coupled stages, excited by the unit step .............................. 4.13
Fig. 4.1.7: Step response of the amplifier in Fig. 4.1.6, n = 1–10 ........................................................ 4.15
Fig. 4.1.8: Slew rate limiting: definition of parameters ........................................................................ 4.17
Fig. 4.1.9: Minimal relative rise time as a function of total gain and number of stages ....................... 4.18
Fig. 4.1.10: Optimal number of stages required for minimal rise time at given gain ............................ 4.20
Fig. 4.2.1: Multi-stage amplifier with AC coupled stages .................................................................... 4.21
Fig. 4.2.2: Frequency response of the amplifier in Fig. 4.2.1, n = 1–10 .............................................. 4.22
Fig. 4.2.3: Phase angle of the amplifier in Fig. 4.2.1, n = 1–10 ........................................................... 4.23
Fig. 4.2.4: Step response of the amplifier in Fig. 4.2.1, n = 1–5 and 10 .............................................. 4.25
Fig. 4.2.5: Pulse response of the amplifier in Fig. 4.2.1, n = 1, 3, and 8 ............................................. 4.25
Fig. 4.3.1: Impulse response of three different complex conjugate pole pairs ...................................... 4.28
Fig. 4.3.2: Butterworth poles for the system order n = 1–5 ................................................................. 4.30
Fig. 4.3.3: Frequency response magnitude of Butterworth systems, n = 1–10 .................................... 4.31
Fig. 4.3.4: Phase response of Butterworth systems, n = 1–10 ............................................................. 4.32
Fig. 4.3.5: Envelope delay of Butterworth systems, n = 1–10 ............................................................. 4.33
Fig. 4.3.6: Step response of Butterworth systems, n = 1–10 ............................................................... 4.34
Fig. 4.3.7: Ideal MFA frequency response ........................................................................................... 4.36
Fig. 4.3.8: Step response of a network having an ideal MFA frequency response ............................... 4.36
Fig. 4.4.1: Bessel poles of order n = 1–10 ......................................................................................... 4.41
Fig. 4.4.2: Frequency response of systems with Bessel poles, n = 1–10 ............................................. 4.42
Fig. 4.4.3: Phase angle of systems with Bessel poles, n = 1–10 .......................................................... 4.44
Fig. 4.4.4: Phase angle as in Fig. 4.4.3, but in linear frequency scale ................................................... 4.44
Fig. 4.4.5: Envelope delay of systems with Bessel poles, n = 1–10 .................................................... 4.45
Fig. 4.4.6: Step response of systems with Bessel poles, n = 1–10 ....................................................... 4.46
Fig. 4.4.7: Ideal Gaussian frequency response (MFED) ....................................................................... 4.49
Fig. 4.4.8: Ideal Gaussian frequency response in loglog scale ............................................................. 4.50
Fig. 4.4.9: Step response of a system with an ideal Gaussian frequency response ............................... 4.50
Fig. 4.4.10: Frequency response of systems with normalized Bessel poles, n = 1–10 ........................ 4.52
Fig. 4.4.11: Phase angle of systems with normalized Bessel poles, n = 1–10 ..................................... 4.52
Fig. 4.4.12: Envelope delay of systems with normalized Bessel poles, n = 1–10 ............................... 4.53
Fig. 4.4.13: Step response of systems with normalized Bessel poles, n = 1–10 .................................. 4.53
Fig. 4.5.1: Pole interpolation procedure ............................................................................................... 4.56
Fig. 4.5.2: Frequency response of a system with interpolated poles ..................................................... 4.60
Fig. 4.5.3: Phase angle of a system with interpolated poles .................................................................. 4.60
Fig. 4.5.4: Envelope delay of a system with interpolated poles ............................................................ 4.61
Fig. 4.5.5: Step response of a system with interpolated poles .............................................................. 4.62
Fig. 4.6.1: Comparison of frequency responses of systems with staggered vs. repeated pole pairs ...... 4.63
Fig. 4.6.2: Step response comparison of systems with staggered vs. repeated pole pairs ..................... 4.64
Fig. 4.6.3: Individual stage step response of a 3-stage, 5-pole system ................................................. 4.66
Fig. 4.6.4: Step response of the complete 3-stage, 5-pole system, reverse pole order .......................... 4.66
Fig. 4.6.5: Step response of the complete 3-stage, 5-pole system, correct pole order .......................... 4.67
List of Tables:
Table 4.1.1: Values of the upper cut off frequency of a multi-stage amplifier for n = 1–10 ................ 4.11
Table 4.1.2: Values of relative rise time of a multi-stage amplifier for n = 1–10 ................................ 4.15
Table 4.2.1: Values of the lower cutoff frequency of an AC coupled amplifier for n = 1–10 ............. 4.23
Table 4.3.1: Butterworth poles of order n = 1–10 ............................................................................... 4.35
Table 4.4.1: Relative bandwidth improvement of systems with Bessel poles ....................................... 4.43
Table 4.4.2: Relative rise time improvement of systems with Bessel poles .......................................... 4.46
Table 4.4.3: Bessel poles (equal envelope delay) of order n = 1–10 ................................................... 4.48
Table 4.4.4: Bessel poles (equal cut off frequency) of order n = 1–10 ................................................ 4.54
Table 4.5.1: Modified Bessel poles (equal asymptote as Butterworth) ................................................. 4.58
4.0 Introduction
These are the main questions which we shall try to answer in this part.
In Sec. 4.1 we discuss a cascade of identical DC coupled amplifier stages, with
loads consisting of a parallel connection of a resistance and a (stray) capacitance. There
we derive the formula for the calculation of an optimum number of amplifying stages to
obtain the required gain with the smallest rise time possible for the complete amplifier.
Next we derive the expression for the optimum gain of an individual amplifying
stage of a multi-stage amplifier in order to achieve the smallest possible rise time. We
also discuss the effect of AC coupling between particular stages by means of a simple
RC network.
Butterworth poles, which are needed to achieve an MFA response, are derived
next. This leads to the discussion of the (im)possibility to design an ideal MFA
amplifier.
Then we derive the Bessel poles which provide the MFED response. Since they
are derived from the condition for a unit envelope delay, the upper cut off frequency
increases with the number of poles. Therefore we also present the derivation of two
different pole normalizations: to equal cut off frequency and to equal stop band
asymptote. We discuss the (im)possibility of designing an amplifier with the frequency
response approaching an ideal Gaussian curve. Further we discuss the interpolation
between the Bessel and the Butterworth poles.
Finally, we explain the merit of using staggered Bessel poles versus repeated
second-order Bessel pole pairs.
Wherever practical, we calculate and plot the frequency, phase, group delay and
step response to allow a quick comparison of different concepts.
[Fig. 4.1.1: a multi-stage amplifier with identical, DC coupled, RC loaded stages, driven by $V_{\rm g}$; each stage is loaded by $R_1 C_1$, $R_2 C_2$, …, $R_n C_n$.]
[Plot: normalized magnitude $|A|/A_0$ vs. $\omega/\omega_{\rm h}$, 0.1 to 10, for n = 1, 2, 3, …, 10; $\omega_{\rm h} = 1/RC$; the −3 dB crossing moves down to $\omega_{\rm H}/\omega_{\rm h} = \sqrt{2^{1/n} - 1}$ as n grows.]
Fig. 4.1.2: Frequency response of an n-stage amplifier (n = 1, 2, …, 10). To compare the
bandwidth, the gain was normalized, i.e., divided by the system DC gain, $(g_{\rm m} R)^n$. For each n,
the bandwidth (the crossing of the 0.707 level) shrinks by $\sqrt{2^{1/n} - 1}$.
The upper half power frequency of the amplifier can be calculated by a simple relation:

$$\left[\frac{1}{\sqrt{1 + (\omega_{\rm H}/\omega_{\rm h})^2}}\right]^n = \frac{1}{\sqrt{2}} \qquad (4.1.5)$$

By squaring we obtain:

$$\left[1 + (\omega_{\rm H}/\omega_{\rm h})^2\right]^n = 2 \quad\Longrightarrow\quad \left(\frac{\omega_{\rm H}}{\omega_{\rm h}}\right)^2 = 2^{1/n} - 1 \qquad (4.1.6)$$

The upper half power frequency of the complete n-stage amplifier is:

$$\omega_{\rm H} = \omega_{\rm h}\,\sqrt{2^{1/n} - 1} \qquad (4.1.7)$$
At high frequencies, the first stage response slope approaches the −6 dB/octave
asymptote (−20 dB/decade). The meaning of this slope is explained in Fig. 4.1.3. For
the second stage the slope is twice as steep, and for the nth stage it is n times steeper.
[Plot: $|A|$ [dB] vs. $\omega/\omega_{\rm h}$, 0.1 to 10, for n = 1; the response follows the 0 dB asymptote below cut off and the −6 dB/octave (−20 dB/decade) asymptote above it, passing −3 dB at $\omega = \omega_{\rm h}$.]
Fig. 4.1.3: The first-order system response and its asymptotes. Below the cut off, the
asymptote is the level equal to the system gain at DC (normalized here to 0 dB). Above the
cut off, the slope is −6 dB/octave (an octave is a frequency span from f to 2f), which is
also equal to −20 dB/decade (a frequency decade is a span from f to 10f).
[Table 4.1.1: values of the upper half power frequency $\omega_{\rm H}/\omega_{\rm h} = \sqrt{2^{1/n} - 1}$ for n = 1–10.]
With ten equal stages connected in cascade the bandwidth is reduced to a poor
$0.269\,\omega_{\rm h}$; such an amplifier is definitely not very efficient for wideband amplification.
Alternatively, in order to preserve the bandwidth an n-stage amplifier would need
all its capacitances reduced by the same factor, $\sqrt{2^{1/n} - 1}$. But in wideband
amplifiers we already strive to work with stray capacitances only, so this approach is
not a solution.
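The bandwidth shrinkage factor $\sqrt{2^{1/n} - 1}$ from Eq. 4.1.6 is quickly tabulated (the function name is ours):

```python
import math

def bw_factor(n):
    # Eq. 4.1.6: omega_H / omega_h = sqrt(2**(1/n) - 1)
    return math.sqrt(2.0 ** (1.0 / n) - 1.0)

for n in (1, 2, 3, 10):
    print(n, round(bw_factor(n), 3))   # 1: 1.0, 2: 0.644, 3: 0.51, 10: 0.268
```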
Nevertheless, the amplifier in Fig. 4.1.1 is the basis for more efficient amplifier
configurations, which we shall discuss later.
Each individual stage of the amplifier in Fig. 4.1.1 has a frequency dependent
phase angle:

   φk = arctan( ℑ{F(jω)} / ℜ{F(jω)} ) = −arctan(ω/ωh)          (4.1.8)

where F(jω) is taken from Eq. 4.1.1. For n equal stages the total phase angle is simply
n times as much:

   φn = −n arctan(ω/ωh)                      (4.1.9)

The phase responses are plotted in Fig. 4.1.4. Note the high frequency
asymptotic phase shift increasing by −π/2 (or −90°) for each n. Also note the shift at
ω = ωh being exactly −n π/4, in spite of a reduced ωH for each n.
[Figure 4.1.4: phase φ(ω) in degrees, 0 down to −900°, versus ω/ωh for n = 1–10; φn = n arctan(−ω/ωh), ωh = 1/RC]
Fig. 4.1.4: Phase angle of the amplifier in Fig. 4.1.1, for n = 1–10 amplifying stages.
For a single amplifying stage (n = 1) the envelope delay is the frequency
derivative of the phase, τen = dφn/dω (where φn is given by Eq. 4.1.9). The
normalized single stage envelope delay is:

   τe ωh = −1 / ( 1 + (ω/ωh)² )                  (4.1.10)
Fig. 4.1.5 shows the frequency dependent envelope delay for n = 1–10. Note the
delay at ω = ωh being exactly 1/2 of the low frequency asymptotic value.
[Figure 4.1.5: normalized envelope delay τen·ωh, 0 down to −10, versus ω/ωh for n = 1–10; ωh = 1/RC]
Fig. 4.1.5: Envelope delay of the amplifier in Fig. 4.1.1, for n = 1–10 amplifying stages. The
delay at ω = ωh is 1/2 of the low frequency asymptotic value. Note that if we were using f/fh
for the abscissa, we would have to divide the τe scale by 2π.
To obtain the step response, the amplifier in Fig. 4.1.1 must be driven by the
unit step function:
[Figure 4.1.6: n-stage cascade with outputs Vo1, Vo2, …, Von, loads R1C1, R2C2, …, RnCn, driven by the step generator Vg]
Fig. 4.1.6: Amplifier with n equal DC coupled stages, excited by the unit step.
We can derive the step response expression from Eq. 4.1.1 and Eq. 4.1.3. In
order to simplify and generalize the expression we shall normalize the magnitude by
dividing the transfer function by the DC gain, gm R, and normalize the frequency by
setting ωh = 1/RC = 1. Since we shall use the ℒ⁻¹ transform, we shall replace the
variable jω by the complex variable s = σ + jω.
The amplifier input is excited by the unit step, therefore we must multiply the
above formula by the unit step operator 1/s:

   G(s) = 1 / [ s (1 + s)^n ]                    (4.1.13)

for n = 2:

   res1 |n=2 = −e^(−t) (1 + t)                   (4.1.17)

for n = 3:

   res1 |n=3 = −e^(−t) (1 + t + t²/2)               (4.1.18)

… etc.
The general expression for the step response for any n is:

   gn(t) = ℒ⁻¹{G(s)} = res0 + res1 = 1 − e^(−t) Σ_{k=1}^{n} t^(k−1)/(k−1)!    (4.1.19)
The step response plots for n = 1–10, calculated by Eq. 4.1.19, are shown in
Fig. 4.1.7. Note that there is no overshoot in any of the curves. Unfortunately, the
efficiency of this kind of amplifier in the sense of 'bandwidth per number of stages'
is poor, since it has no peaking networks which would prevent the decrease of
bandwidth with n.
[Figure 4.1.7: step responses gn(t), 0 to 1, versus t/RC for n = 1, 2, 3, …, 10]
Fig. 4.1.7: Step response of the amplifier in Fig. 4.1.6, for n = 1–10 amplifying stages.
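Eq. 4.1.19 is easy to evaluate directly; here is a minimal sketch (ours, not the book's Part 6 routines) of the normalized step response:

```python
# Step response of n identical DC coupled stages, Eq. 4.1.19 (t in units of RC).
from math import exp, factorial

def step_response(n, t):
    """g_n(t) = 1 - exp(-t) * sum_{k=1..n} t**(k-1)/(k-1)!"""
    if t < 0:
        return 0.0
    return 1.0 - exp(-t) * sum(t**(k - 1) / factorial(k - 1)
                               for k in range(1, n + 1))

# One stage is the familiar RC exponential; adding stages only slows the rise.
print(round(step_response(1, 1.0), 4))   # 0.6321, i.e. 1 - 1/e
print(round(step_response(10, 1.0), 4))  # still practically zero
```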
In the case of a multi-stage amplifier, where each particular stage has its
respective rise time, τr1, τr2, …, τrn, we calculate the system's rise time [Ref. 4.2] as:

   τr = √( τr1² + τr2² + ⋯ + τrn² )

In Part 2, Sec. 2.1.1, Eq. 2.1.1–4, we have calculated the rise time of an
amplifier with a simple RC load to be τr1 = 2.20 RC. Since here we have n equal
stages, the rise time of the complete amplifier is:

   τrn = τr1 √n = 2.20 RC √n

Table 4.1.2 shows the rise time increasing with the number of stages:
Table 4.1.2

   n        1     2     3     4     5     6     7     8     9     10
   τrn/τr1  1.00  1.41  1.73  2.00  2.24  2.45  2.65  2.83  3.00  3.16
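As a quick cross-check of Table 4.1.2, a sketch with the single-stage rise time normalized to 1:

```python
# Root-sum-of-squares rise time of n identical cascaded stages.
from math import sqrt

def cascade_rise_time(n, tau_r1=1.0):
    """Rise time of n identical stages, each with rise time tau_r1."""
    return tau_r1 * sqrt(n)

print([round(cascade_rise_time(n), 2) for n in range(1, 11)])
# [1.0, 1.41, 1.73, 2.0, 2.24, 2.45, 2.65, 2.83, 3.0, 3.16] -- Table 4.1.2
```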
The equations derived so far describe the small signal properties of an amplifier.
If the signal amplitude is increased, the maximum slope of the output signal, dvo/dt,
becomes limited by the maximum current available to charge any capacitance present at
the particular node. The amplifier stage which must handle the largest signal is the one
which runs into slew rate limiting first; usually it is the output stage.
To find the slew rate limit we drive the amplifier by a sinusoidal signal and
increase the input amplitude until the output amplitude just begins to saturate; then we
increase the frequency until we notice that the middle part of the sinusoidal waveform
becomes distorted (changing linearly with time) and then decrease the frequency until
the distortion just disappears. That frequency is equal to the full power bandwidth ωFP
(in radians per second, [rad/s]). Generally, an amplifier need not have the positive and
negative slope equally steep; then it is the less steep slope that sets the limit, which is:

   dvo/dt = Iomax / C                       (4.1.23)

Here Iomax is the maximum output current available to drive the loading capacitance C.
If vo is a sinusoidal signal of angular frequency ωFP and amplitude Vmax, such
that vo = Vmax sin ωFP t, the slope varies with time as:

   d(Vmax sin ωFP t)/dt = Vmax ωFP cos ωFP t             (4.1.24)

and it has a maximum at |cos ωFP t| = 1 (which is at t = 0 ± π/ωFP; see Fig. 4.1.8a).
Therefore:

   SR = max(dvo/dt) = Vmax ωFP                   (4.1.25)
The slew rate is usually expressed in volts per microsecond [V/μs]; for contemporary
amplifiers a more appropriate figure would be volts per nanosecond [V/ns].
If we increase the signal frequency beyond ωFP the waveform will eventually be
distorted into a linear ramp shape, but reduced in amplitude, because the slope of a
sinusoidal signal is reduced to zero at the peak voltage ±Vmax. However, if driven by a
step signal of the same amplitude, the linear ramp will span the full amplitude range. We
need to find the total slewing time between −Vmax and +Vmax; by equating the small
signal derivative dvo/dt with the large signal slewing ΔV/Δt we find:

   SR = dvo/dt = ΔV/Δt = 2 Vmax / tslew              (4.1.26)

The slewing time is then:

   tslew = 2 Vmax / (Vmax ωFP) = 2/ωFP = TFP/π            (4.1.27)

where TFP = 2π/ωFP, i.e., the period of the full power bandwidth sinewave signal.
[Figure 4.1.8: a) the full power sine wave Vmax sin ωFP t with the slew rate tangent at the zero crossings, t = ±π/ωFP, TFP = 2π/ωFP; b) the large signal step response h(t), slewing from −Vmax to +Vmax, with the 10–90 % rise time τrS and the slewing time tslew marked]
Fig. 4.1.8: Definitions of slew rate limiting parameters: a) the slew rate is equal to the highest
dv/dt of the full-power undistorted sine wave; b) the slewing time is defined by the large signal
step response; the equivalent slewing frequency (of the triangle waveform) is higher than ωFP of
the sine wave.
   τrS = 0.8 tslew = 0.8 TFP/π ≈ 0.2546 TFP              (4.1.28)
The n-stage cascade amplifier of Fig. 4.1.6 has the voltage gain A1 = vo1/vi1,
the second A2 = vo2/vi2 = vo2/vo1, and so on, up to An = von/vin = von/vo(n−1).
The total gain is the product of the individual stage gains:

   A = von/vi1 = A1 · A2 ⋯ An                    (4.1.29)

If all the amplifying stages are identical, we denote the individual stage gain as
Ak, the loading resistors R, and the loading capacitances C. Then the total gain is:

   A = Ak^n                          (4.1.30)
The rise time of the complete multi-stage amplifier is equal to the square root of
the sum of the individual rise times squared, but since the amplitude after each gain
stage is different, we must normalize the rise times by multiplying each with its own
gain factor:

   τr = √( Σ_{i=1}^{n} (Ai τri)² )                 (4.1.31)

We have assumed that all stages have identical gain, Ai = Ak, and rise time, τri = τrk.
Therefore:

   τr = √( n (Ak τrk)² ) = √n Ak τrk = √n Ak · 2.2 RC          (4.1.32)

where τrk is the rise time of an individual stage, as calculated in Part 2. By considering
Eq. 4.1.30, we obtain the following relation:

   τr / τrk = √n A^(1/n)                      (4.1.33)
We have plotted this relation in Fig. 4.1.9, using the total system gain A as the
parameter. Note that, in order to see the function τr(n) better, we have assumed a
continuous n; of course, we cannot have, say, 4.63 stages, since n must be an integer.
[Figure 4.1.9: τr/τrk versus n = 0–15 for the total system gain A = 2, 5, 10, 20, 50, 100, 200, 500, 1000, with the minima and the +10 % tolerance line marked]
Fig. 4.1.9: Minimal relative rise time as a function of the number of stages n
and the total system gain A. Close to the minima the curves are relatively flat,
so in practice we can trade off, say, a 10% increase in the system rise time and
reduce the required number of stages accordingly; i.e., to achieve the gain of
100, only 5 stages could be used instead of 9, with a slight rise time increase.
From this diagram we can find the optimum number of the amplifying stages
8opt if the total system gain E is known. These optima lie on the valleys of the curves
and in the following discussion we will derive the necessary formulae.
To find these minima we differentiate Eq. 4.1.33 with respect to n and equate the
derivative to zero:

   dτr/dn = τrk [ A^(1/n)/(2√n) − √n A^(1/n) (ln A)/n² ] = 0       (4.1.35)

Because neither τrk nor A is zero, we equate the expression in parentheses to zero:

   1/(2√n) − √n (ln A)/n² = 0                   (4.1.36)

the solution of which is:

   nopt = 2 ln A                        (4.1.38)

and since we cannot have, say, 3.47 amplifying stages, we round the result to the
nearest integer, the smallest obviously being 1.
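The flatness of the minima is easy to verify numerically; a sketch with the stage rise time normalized to 1:

```python
# Optimum stage count for minimum total rise time, Eqs. 4.1.33 and 4.1.38.
from math import log, sqrt

def rise_time_ratio(n, A):
    """Eq. 4.1.33: tau_r/tau_rk for n stages of equal gain A**(1/n)."""
    return sqrt(n) * A**(1.0 / n)

def n_opt(A):
    """Eq. 4.1.38: the optimum (non-integer) number of stages."""
    return 2.0 * log(A)

A = 100.0
print(round(n_opt(A), 2))                # 9.21, so 9 stages
print(round(rise_time_ratio(9, A), 3),   # near the minimum
      round(rise_time_ratio(5, A), 3))   # only ~12 % slower with 5 stages
```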
On the basis of this simple relation we can draw the line a in Fig. 4.1.10 for a
quick estimation of the number of amplifying stages necessary to obtain the smallest
rise time if the total system gain A is known. Again, the required number of amplifying
stages can be reduced in practice, as indicated in Fig. 4.1.10 by the line b, without
significantly increasing the rise time. For reasons of economy, the simplest
systems are often designed far from optimum, as indicated by the bars and the line c.
Eq. 4.1.33 and 4.1.38 and the corresponding diagrams are also valid for peaking
stages, although peaking stages can be designed much more efficiently, as we shall see.
From Eq. 4.1.38 we can find the optimal gain value of the individual stage,
independent of the actual number of stages in the system. For n equal stages it is:

   Ak opt = A^(1/nopt) = A^(1/(2 ln A)) = e^(1/2) ≈ 1.65         (4.1.41)
This expression gives us the optimal value of the gain of the individual
amplifying stage which minimizes the total rise time of a multi-stage amplifier. In
practice we usually take a higher value, say, between 2 and 4, in order to decrease the
cost and simplify the design. Eq. 4.1.41 can also be used for peaking stages.
[Figure 4.1.10: n = 0–14 versus the total gain A = 1–1000 (log scale); line a is the optimum nopt, line b the +10 % rise time trade off, line c typical low complexity designs]
Fig. 4.1.10: The optimal number of stages, 8, required to achieve the minimal
rise time, given the total system gain E, as calculated by Eq. 4.1.38, is shown
by line a. In practice, owing to economy reasons, we tend to use a lower
number; the line b shows the same +10% rise time trade off as in Fig. 4.1.9. In
low complexity systems we usually make even greater tradeoffs, as in c.
In Fig. 4.1.1 all the amplifying stages were DC coupled. In the times when
amplifiers were designed with electronic tubes, DC coupling was generally avoided
to prevent drift due to tube aging, unstable cathode heater voltages (changing with
temperature), hum, and poor insulation between the heater and the cathode. With the
appearance of bipolar transistors and FETs, only the temperature dependent drift
remained on this nasty list. Because of it, many TV video and broadband RF amplifiers
still use AC coupling between amplifying stages.
However, AC coupling introduces other undesirable properties, which in certain
cases might not be acceptable. It is therefore interesting to investigate a multi-stage
amplifier with equal stages, similar to that in Fig. 4.1.1, except that all the stages are AC
coupled. In Fig. 4.2.1 we see a simplified circuit diagram of such an amplifier; here we
are interested in its low-frequency performance.
[Fig. 4.2.1: Simplified circuit of the AC coupled n-stage amplifier: stages Q1, Q2, …, Qn with loads RL, coupled through C1, C2, …, Cn into the input resistors R1, R2, …, Rn; driven by Vg]
Since we want to focus on the essential problems only, here, too, we use FETs
instead of BJTs, in order to avoid the complicated expression for the base input
impedance of each stage. Moreover, in a wideband amplifier we can assume RL ≪ Rn,
so we shall neglect the effect of Rn on gain. On the other hand, Rn and Cn set the low
frequency limit of each stage, which is ωl = 1/(Rn Cn) = 1/RC, if all stages are
identical. Usually, ωl is many orders of magnitude below ωh, so we can neglect the
stray and input capacitances (both effectively in parallel with the loading resistors RL) as
well. Thus, near ωl, the voltage gain of each stage is:

   An = gm RL (jω/ωl) / (1 + jω/ωl)                (4.2.1)

and the magnitude is:

   |An| = gm RL (ω/ωl) / √(1 + (ω/ωl)²)              (4.2.2)

With all input time constants equal to RC, the total system gain is An raised to the nth
power:

   A = [ gm RL (jω/ωl) / (1 + jω/ωl) ]^n              (4.2.3)

with the magnitude:

   |A| = [ gm RL (ω/ωl) / √(1 + (ω/ωl)²) ]^n            (4.2.4)
In Fig. 4.2.2 we show the frequency response plots according to Eq. 4.2.4,
normalized in amplitude by dividing it by (gm RL)^n.
[Figure 4.2.2: |F(jω)| versus ω/ωl (log–log) for n = 1, 2, 3, 4, …, 10; ωl = 1/RC]
Fig. 4.2.2: Frequency response magnitude of the AC coupled amplifier for n = 1–10. The
frequency scale is normalized to the lower cut off frequency ωl of the single stage.
It is evident that the lower half power frequency ωL of the complete amplifier
increases with the number of stages. We can express ωL as a function of n from:

   [ (ωL/ωl) / √(1 + (ωL/ωl)²) ]^n = 1/√2              (4.2.5)

   ωL = ωl / √(2^(1/n) − 1)                    (4.2.9)
The phase shift is positive, which means a phase advance. For n stages the total
phase advance is simply n times as much:

   φn = n arctan(ωl/ω)                      (4.2.10)

The corresponding plots for n = 1–10 are shown in Fig. 4.2.3. Note the phase
shift at ω = ωl being exactly 1/2 of the low frequency asymptotic value.
[Figure 4.2.3: phase φ(ω) in degrees, 0 up to 900°, versus ω/ωl for n = 1–10; ωl = 1/RC]
Fig. 4.2.3: Phase angle as a function of frequency for the AC coupled n-stage amplifier,
n = 1–10. The frequency scale is normalized to the lower cutoff frequency of the single stage.
We will omit the calculation of envelope delay since in the low frequency region
this aspect of amplifier performance is not very important.
By replacing the sine wave generator in Fig. 4.2.1 with a unit step generator, we
obtain the time domain step response of the AC coupled multi-stage amplifier. We want
the plots to be normalized in amplitude, so we normalize Eq. 4.2.3 by dividing it by
(gm RL)^n, the total midband gain. We will use the ℒ⁻¹ transform, so we replace
the normalized variable jω/ωl by the complex variable s = σ + jω:

   Fn(s) = [ s/(1 + s) ]^n                     (4.2.12)

The system's frequency response must be multiplied by the unit step operator 1/s:

   Gn(s) = (1/s) [ s/(1 + s) ]^n                  (4.2.13)

Now we apply the ℒ⁻¹ transformation and obtain the time domain step response:

   gn(t) = ℒ⁻¹{Gn(s)} = res[ Gn(s) e^(st) ] = res[ s^(n−1) e^(st)/(1 + s)^n ]    (4.2.14)

Since we have here a single pole repeated n times, we have only a single residue,
but, as we will see, it is composed of n summands. A general expression for the
residue for an arbitrary n is:

   gn(t) = lim_{s→−1} 1/(n−1)! · d^(n−1)/ds^(n−1) [ (1 + s)^n s^(n−1) e^(st)/(1 + s)^n ]   (4.2.15)

or, simplified:

   gn(t) = 1/(n−1)! · d^(n−1)/ds^(n−1) [ s^(n−1) e^(st) ] |_{s→−1}       (4.2.16)
A few examples:

   n = 1 ⇒ g1(t) = (1/0!) e^(−t) = e^(−t)

   n = 2 ⇒ g2(t) = (1/1!) ( e^(−t) − t e^(−t) ) = e^(−t) (1 − t)

   n = 3 ⇒ g3(t) = e^(−t) (1 − 2t + 0.5 t²)

   n = 4 ⇒ g4(t) = e^(−t) (1 − 3t + 1.5 t² − 0.1667 t³)

   n = 5 ⇒ g5(t) = e^(−t) (1 − 4t + 3 t² − 0.6667 t³ + 0.0417 t⁴)      (4.2.17)

The coefficients decrease rapidly with an increasing number of stages n; e.g., the
last summand for n = 10 is 2.756 · 10⁻⁶ t⁹.
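Applying the Leibniz rule to Eq. 4.2.16 gives the closed form gn(t) = e^(−t) Σ_{m=0}^{n−1} C(n−1, m) (−t)^m / m!; the following sketch (ours, not from the book) uses it to reproduce the examples above:

```python
# Step response of n AC coupled stages; closed form of Eq. 4.2.16
# obtained via the Leibniz rule (t in units of RC).
from math import comb, exp, factorial

def g_ac(n, t):
    """g_n(t) = exp(-t) * sum_{m=0..n-1} C(n-1, m) * (-t)**m / m!"""
    return exp(-t) * sum(comb(n - 1, m) * (-t)**m / factorial(m)
                         for m in range(n))

# Reproduces Eq. 4.2.17, e.g. n = 3: exp(-t) * (1 - 2t + 0.5 t**2)
t = 0.7
print(g_ac(3, t), exp(-t) * (1 - 2*t + 0.5*t**2))
```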
The corresponding plots are drawn in Fig. 4.2.4. The plots for n = 6–9 are not
shown, since the individual curves would be impossible to distinguish. We note that the
nth-order response intersects the abscissa (n − 1) times.
[Figure 4.2.4: step responses gn(t) versus t/RC for n = 1, 2, 3, 4, 5, and 10, swinging between about −0.4 and 1.0]
Fig. 4.2.4: Step response of the multi-stage AC coupled amplifier for n = 1–5 and n = 10.
For pulse amplification only the short starting portions of the curves come into
consideration. An example for n = 1, 3, and 8 is shown in Fig. 4.2.5 for a pulse width
Δt = 0.1 RC.
[Figure 4.2.5: pulse responses versus t/RC for n = 1, 3, and 8, for an input pulse of width Δt = 0.1 RC]
Fig. 4.2.5: Pulse response of the AC coupled multi-stage amplifier (n = 1, 3, and 8).
Note that the pulse in Fig. 4.2.5 sags, both on the leading and the trailing edge, the
sag increasing with the number of stages. We conclude that the AC coupled amplifier of
Fig. 4.2.1 is not suitable for faithful pulse amplification, except when the pulse
duration is very short in comparison with the time constant RC of a single amplifying
stage (say, Δt ≤ 0.001 RC).
Another undesirable property of the AC coupled amplifier is that the output
voltage makes n − 1 damped oscillations when the pulse ends, no matter how short its
duration is. This is especially annoying because the input voltage is by then already
zero. The undesirable result is that the effective output DC level will depend on the
pulse repetition rate.
Since today the DC amplification technique has reached a very high quality
level, we can consider the AC coupled amplifier an inheritance from the era of
electronic tubes and thus almost obsolete. However, we still use AC coupled amplifiers
to avoid the drift in those cases where the deficiencies described are not important.
In multi-stage amplifiers, like the one in Fig. 4.1.1, we can apply inductive
peaking at each stage. As we have seen in Part 2, Sec. 2.9, where we discussed the
shunt–series peaking circuit, the equations became very complicated because we had to
consider the mutual influence of the shunt and series peaking circuits. If both circuits are
separated by a buffer amplifier, the analysis is simplified. Basically, this was considered
by S. Butterworth in his article On the Theory of Filter Amplifiers, published in the review
Experimental Wireless & the Wireless Engineer in 1930 [Ref. 4.6]. When writing the
article, while serving in the British Navy, Butterworth obviously did not expect that his
technique might also be applied to wideband amplifiers. His article became the basis of
filter design for generations of engineers, up to the present time.
The basic Butterworth equation, which, besides filters, can also be applied to
wideband amplifiers, either with a single or with many stages, is:

   F(ω) = 1 / [ 1 + j (ω/ωH)^n ]                  (4.3.1)

where ωH is the upper half power frequency of the (peaking) amplifier and n is an
integer, representing the number of stages. A network corresponding to this equation
has a maximally flat amplitude response (MFA). The magnitude of F(ω) is:

   |F(ω)| = 1 / √( 1 + (ω/ωH)^(2n) )               (4.3.2)

and not just the first derivative, but all the 2n − 1 derivatives of an nth-order system are
also zero at the origin. This means that the filter is essentially flat at very low frequencies
(ω ≪ ωH). The number of poles in Eq. 4.3.1 is equal to the parameter n and the flatness
of the frequency response in the passband also increases with n. The parameter n is
called the system order. To derive the expression for the poles we start with the
denominator of Eq. 4.3.2, where the expression under the root can be simplified into:
The roots of these equations are the poles of Eq. 4.3.1, and they can be calculated
by the following general expression:

   x = ²ⁿ√(−1)                          (4.3.6)

   sq = ²ⁿ√(−1) = ²ⁿ√( cos(π + 2πq) + j sin(π + 2πq) )
      = cos( π (1 + 2q)/(2n) ) + j sin( π (1 + 2q)/(2n) )         (4.3.8)

If we insert the values 0, 1, 2, …, (2n − 1) for q, we obtain 2n roots. The roots
lie on a circle of radius r = 1, spaced by the angle π/n. With this condition no pole is
repeated. One half (= n) of the poles lie in the left side of the s-plane; these are the
poles of Eq. 4.3.1. None of the poles lies on the imaginary axis. The other half of the
poles lie in the positive half of the s-plane and they can be associated with the complex
conjugate of F(jω); as shown in Fig. 4.3.1, owing to the Hurwitz stability requirement,
they are not useful for our purpose.
[Figure 4.3.1: s-plane with the pole pairs s1a,2a = −0.8 ± j, s1b,2b = ±j, s1c,2c = 0.8 ± j, and the corresponding impulse responses: a decaying sinusoid e^(−0.8t) sin ωt, a steady sinusoid sin ωt, and a growing sinusoid e^(0.8t) sin ωt, all with ω = 1]
Fig. 4.3.1: Impulse responses of three different complex conjugate pole pairs. The real part
determines the system stability: s1a and s2a make the system unconditionally stable, since the
negative exponent forces the response to decrease with time; s1b and s2b make the system
conditionally stable, whilst s1c and s2c make it unstable.
This left and right half pole division is not arbitrary but, as we have explained
in Part 1, it reflects the direction of energy flow. If an unconditionally stable system is
energized and then left alone, it will eventually dissipate all the energy into heat and
RF radiation, so the energy is lost (from the system's point of view) and therefore we agree
to give it a negative sign. This is typical of dominantly resistive systems. On the other
hand, generators produce energy and we agree to give them a positive sign. In effect,
generators can be treated as negative resistors. Inductances and capacitances cannot
dissipate energy; they can only store it in their associated electromagnetic fields (for a
while). We therefore assign the resistive and regenerative action to the real axis and the
inductive and capacitive action to the imaginary axis.
For example, if we take a two-pole system with poles forming a complex
conjugate pair, s1 = σ + jω and s2 = σ − jω, the system impulse response function
has the form:

   f(t) = e^(σt) sin ωt                      (4.3.9)

By referring to Fig. 4.3.1, let us first consider the poles s1a = −0.8 + j and
s2a = −0.8 − j, where ω = 1. Their impulse function is a damped sinusoid:

   f(t) = e^(−0.8 t) sin ωt                    (4.3.10)

This means that for any impulse disturbance the system reacts with a sinusoidal
oscillation (governed by ω), exponentially damped (at the rate set by σ). Such behavior
is typical of an unconditionally stable system. If we move the poles to the imaginary
axis (σ = 0), so that s1b = j and s2b = −j (again, ω = 1), then an impulse excites the
system into a continuous sine wave:

   f(t) = sin ωt                         (4.3.11)

If we push the poles further, to the right side of the s-plane, so that s1c = 0.8 + j
and s2c = 0.8 − j, keeping ω = 1, the slightest impulse disturbance, or even just the
system's own noise, excites an exponentially rising sine wave:

   f(t) = e^(0.8 t) sin ωt                     (4.3.12)
The poles on the imaginary axis are characteristic of a sine wave oscillator, in
which we have the active components (amplifiers) set to make up for (and exactly
match) any energy lost in resistive components. The poles on the right side of the
=-plane also result in oscillations, but there the final amplitude is limited by the system
power supply voltages. Because the active components provide much more energy than
the system is capable of dissipating thermally, the top and bottom part of the waveform
will be saturated, thus limiting the energy produced. Since we are interested in the
design of amplifiers and not of oscillators, we shall not use the last two kinds of poles.
Let us return to the Butterworth poles. We want to find the general expression
for the n poles on the left side of the s-plane. A general expression for a pole sq, derived
from Eq. 4.3.8, is:

   sq = cos θq + j sin θq                     (4.3.13)

where:

   θq = π (1 + 2q)/(2n)                      (4.3.14)
   sk = σk + j ωk = −sin( (2k − 1)π/(2n) ) + j cos( (2k − 1)π/(2n) )     (4.3.17)

where k is an integer from 1 to n. As shown in Fig. 4.3.2 (for n = 1–5), all these poles
lie on a semicircle with the radius r = 1 in the left half of the s-plane:
[Figure 4.3.2: five s-plane plots, for n = 1 to n = 5, with the poles s1 … s5 on the unit semicircle in the left half plane; the angle θ1 and the components σ1, jω1 are marked]
The numerical values of the poles for systems of order n = 1–10, together with
the corresponding angles θ, are listed in Table 4.3.1. Obviously, if n is even, the system
has complex conjugate pole pairs only. If n is odd, one of the poles is real, and in
the normalized presentation its value is s1 = s/ωH = −1. In the non-normalized
form, the value of the real pole is equal to −ωH. Since this is the radius of the circle on
which all the poles lie, we can calculate the upper half power frequency also from any
pole (for Butterworth poles only!):

   ωH = |sk| = √( σk² + ωk² )                   (4.3.19)
[Figure 4.3.3: |F(jω)| versus ω/ωH (log–log) for n = 2, 3, 4, …, 10, all passing through 0.707 at ω/ωH = 1; ωH = 1/RC]
Fig. 4.3.3: Frequency response magnitude of nth-order systems with Butterworth poles, n = 1–10.
with s = jω/ωH and si = σi + jωi (the values of σi and ωi are listed in Table 4.3.1).
By multiplying all the expressions in parentheses, we obtain:

   F5(s) = a0 / ( s⁵ + a4 s⁴ + a3 s³ + a2 s² + a1 s + a0 )         (4.3.21)

where:

   a4 = −(s1 + s2 + s3 + s4 + s5)
   a3 = Σ si sj     (over all pairs i < j)
   a2 = −Σ si sj sk   (over all triples i < j < k)
   a1 = Σ si sj sk sl  (over all quadruples i < j < k < l)
   a0 = −s1 s2 s3 s4 s5                      (4.3.22)
If we use the normalized poles with the numerical values listed in Table 4.3.1 to
calculate the coefficients a0 … a4, we obtain:

   F5(s) = 1 / ( s⁵ + 3.2361 s⁴ + 5.2361 s³ + 5.2361 s² + 3.2361 s + 1 )    (4.3.23)

For the magnitude only, by applying Eq. 4.3.2, we have:

   |F5(ω)| = 1 / √( 1 + (ω/ωH)¹⁰ )                (4.3.24)

The reason why we took a particular interest in the function with the normalized
numerical values of the order n = 5 is that in Sec. 4.5 we will compare it with the
function having Bessel poles of the same order.
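Eq. 4.3.17 and the quoted coefficients can be cross-checked with a few lines of code (a sketch of ours, not the book's Part 6 routines):

```python
# Butterworth poles from Eq. 4.3.17 and the polynomial of Eq. 4.3.23.
from math import sin, cos, pi

def butterworth_poles(n):
    """Normalized (omega_H = 1) Butterworth poles, k = 1 ... n."""
    return [complex(-sin((2*k - 1)*pi/(2*n)), cos((2*k - 1)*pi/(2*n)))
            for k in range(1, n + 1)]

def poly_from_poles(poles):
    """Expand prod(s - s_i) into coefficients, highest power of s first."""
    c = [1 + 0j]
    for p in poles:
        nxt = c + [0j]               # multiply the current polynomial by s
        for i, ci in enumerate(c):
            nxt[i + 1] -= p * ci     # and subtract p times it
        c = nxt
    return [round(x.real, 4) for x in c]

print(poly_from_poles(butterworth_poles(5)))
# [1.0, 3.2361, 5.2361, 5.2361, 3.2361, 1.0] -- the denominator of Eq. 4.3.23
```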
For an odd number of poles the imaginary part of the first pole is ω1 = 0. For the
remaining poles, or in the case of an even n, we enter the complex conjugate pair
components: si,i+1 = σi ± jωi. The phase response plots are drawn in Fig. 4.3.4. By
comparing it with Fig. 4.1.4 we note that Butterworth poles result in a much steeper
phase slope near the system's cut off frequency at ω = ωH (which is even more evident
in the envelope delay).
[Figure 4.3.4: phase φ(ω) in degrees, 0 down to −900°, versus ω/ωH for n = 1–10; ωH = 1/RC]
Fig. 4.3.4: Phase angle of nth-order systems with Butterworth poles, n = 1–10.
The envelope delay plots for n = 1–10 are shown in Fig. 4.3.5. Owing to the
steeper phase shift, the curves for n > 1 dip around the system cut off frequency. Those
frequencies are delayed more than the rest of the spectrum, thus revealing the system
resonance. Therefore we expect that amplifiers with Butterworth poles will exhibit an
increasing amount of ringing in the step response, a property not acceptable in pulse
amplification.
[Figure 4.3.5: normalized envelope delay τen·ωH, 0 down to −14, versus ω/ωH for n = 1–10, with pronounced dips near ω = ωH; ωH = 1/RC]
Fig. 4.3.5: Envelope delay of nth-order systems with Butterworth poles, n = 1–10.
Since we have n non-repeating poles, we start with the frequency response function in
the form which is suitable for the ℒ⁻¹ transform:

   F(s) = (−1)^n s1 s2 ⋯ sn / [ (s − s1)(s − s2) ⋯ (s − sn) ]       (4.3.27)
To obtain the step response in the time domain we use the ℒ⁻¹ transform:

   g(t) = ℒ⁻¹{G(s)} = Σ_{i=1}^{n} res_i [ (−1)^n s1 s2 ⋯ sn e^(st) / ( s (s − s1)(s − s2) ⋯ (s − sn) ) ]   (4.3.29)

It would take too much space to list the complete analytical calculation for
systems with 1 to 10 poles. Some examples can be found in Appendix 2.3. Here we
shall use the computer routines which we develop and discuss in detail in Part 6. The
plots for n = 1–10 are shown in Fig. 4.3.6.
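As a minimal sketch of the residue evaluation (our own implementation, not the Part 6 routines), the step response of the n = 5 Butterworth system can be computed and its ringing observed:

```python
# Step response by residue summation, Eq. 4.3.29, for simple poles only.
from math import sin, cos, pi
from cmath import exp

def step_by_residues(poles, t):
    """g(t) of F(s) = prod(-s_i)/prod(s - s_i), time normalized to T = RC."""
    a0 = 1 + 0j
    for p in poles:
        a0 *= -p                     # a0 = (-1)**n * s1*s2*...*sn
    g = 1 + 0j                       # residue at s = 0 equals a0/prod(-s_i) = 1
    for i, p in enumerate(poles):
        denom = p                    # the 1/s factor evaluated at the pole
        for j, q in enumerate(poles):
            if j != i:
                denom *= p - q
        g += a0 * exp(p * t) / denom
    return g.real

# n = 5 Butterworth poles from Eq. 4.3.17:
poles = [complex(-sin((2*k - 1)*pi/10), cos((2*k - 1)*pi/10))
         for k in range(1, 6)]
peak = max(step_by_residues(poles, 0.01 * m) for m in range(1501))
print(peak)   # the response rings: the peak exceeds the final value 1 by over 10 %
```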
The plots confirm our expectation that amplifiers with Butterworth poles are not
suitable for pulse amplification. The main advantage of Butterworth poles is the flat
frequency response (MFA) in the passband (evident from the plots in Fig. 4.3.3). For
measuring sinusoidal signals over a wide range of frequencies, e.g., in an electronic
voltmeter, Butterworth poles offer the best solution.
[Figure 4.3.6: step responses gn(t) versus t/T for n = 1, 2, 3, …, 10, ringing about the final value 1; T = RC]
Fig. 4.3.6: Step response of nth-order systems with Butterworth poles, n = 1–10.
[Figure 4.3.7: ideal brick wall magnitude, equal to 1 for −1 < ω/ωH < 1 and zero outside]
Fig. 4.3.7: Ideal MFA frequency response.
For the time being we assume that the function A(ω) is real, and consequently it
has no phase shift. At the instant t = 0 we apply a unit step voltage to the input of the
amplifier (multiply A(s) by the unit step operator 1/s). By applying the basic formula
for the ℒ⁻¹ transform (Part 1, Eq. 1.4.4), the output function in the time domain is the
integral of the sin(t)/t function [Ref. 4.2]:

   g(t) = (1/π) ∫_{−∞}^{t} (sin x / x) dx               (4.3.31)
The normalized plot of this integral is shown in Fig. 4.3.8. Here we have 50% of
the signal amplitude at the instant t = 0. Also, there is some response for t < 0, before
we applied any step voltage to the amplifier input, which is impossible. Any physically
realizable amplifier would have some phase shift and an envelope delay, therefore the
step response would be shifted rightwards from the origin. However, an infinite phase
shift and delay would be needed in order to have no response for time t < 0.
[Figure 4.3.8: the sine integral step response versus t, from −2 to 2, reaching 0.5 at t = 0, with ripples before and after the step]
Fig. 4.3.8: Step response of a network having the ideal frequency response of Fig. 4.3.7.
What we would like to know is whether there is any phase response, linear or
not, which the amplifier should have in order to satisfy Eq. 4.3.30 without any response for
time t < 0. The answer is negative, and it was proved by R.E.A.C. Paley and N. Wiener.
Their criterion is given by the amplitude function [Ref. 4.2]:

   ∫_{−∞}^{∞} |log A(ω)| / (1 + ω²) dω < ∞              (4.3.32)

Outside the range −ωH < ω < ωH we have A(ω) = 0, as required by Eq. 4.3.30, but the
magnitude in the numerator is infinite (|log 0| = ∞); therefore the condition expressed
by Eq. 4.3.32 is not met. Thus it is not possible to make an amplifier with a continuous
infinite attenuation in a certain frequency band (it is, nevertheless, possible to have an
infinite attenuation, but at distinct frequencies only). As we can derive from Eq. 4.3.32,
the problem is not the steepness of the frequency response curve at ωH in Fig. 4.3.7, but
the requirement for an infinite attenuation everywhere outside the defined passband
−ωH < ω < ωH.
If we allow that outside the passband A(ω) = ε, no matter how small ε is, such
a frequency response is possible to achieve. In this case the corresponding phase
response must be [Ref. 4.2]:

   φ(ω) = ln|ε| · ln| (1 + ω)/(1 − ω) |               (4.3.33)
However, such an amplifier would still have a step response very similar to that
in Fig. 4.3.8, except that it would be shifted rightwards and there would be no response
for t < 0. This is because we have almost entirely (down to ε) and suddenly cut the
signal spectrum above ωH. The overshoot is approximately 9%. We have met a similar
situation in Part 1, Fig. 1.2.7a,b, in connection with the square wave, when we were
discussing the Gibbs phenomenon [Ref. 4.2].
Some readers may wonder why the step response overshoot of some systems
with Butterworth poles in Fig. 4.3.6 exceeds 9%. The reason is the
corresponding non-linear phase response, resulting in a peak in the envelope delay, as
shown in Fig. 4.3.5. This is characteristic not just of Butterworth poles, but of
any pole pattern, e.g., Chebyshev Type I and elliptic (Cauer) systems, where the
magnitude and phase change more steeply at cutoff.
We shall use Eq. 4.3.32 again when we discuss the possibility of obtaining
an ideal Gaussian response of the amplifier.
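As a numerical illustration (our own minimal sketch, not from the text): for the ideal band limiter with |A(ω)| = 1 inside the passband and |A(ω)| = ε outside, the Paley–Wiener integral of Eq. 4.3.32 can be evaluated in closed form, since the integrand vanishes inside the band and is |ln ε|/(1 + ω²) outside:

```python
import math

def pw_integral(eps, w_h=1.0):
    """Paley-Wiener integral for an ideal band limiter:
    |A(w)| = 1 for |w| < w_h (so |log A| = 0 there) and
    |A(w)| = eps outside the passband. Outside the band the
    integrand |ln eps| / (1 + w^2) integrates via arctan."""
    if eps == 0.0:
        return math.inf                       # |log 0| = inf: criterion violated
    return 2.0 * abs(math.log(eps)) * (math.pi / 2.0 - math.atan(w_h))

print(pw_integral(0.0))     # inf : the ideal MFA response is unrealizable
print(pw_integral(1e-6))    # finite: an epsilon stopband floor is realizable
```

Even a tiny stopband floor (ε = 10⁻⁶, i.e., −120 dB) keeps the integral finite, which is exactly why the response with A(ω) = ε is realizable while the ideal one is not.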
- 4.37 -
P.Starič, E.Margan Cascading Amplifier Stages, Selection of Poles
If we want to preserve the waveform shape, the amplifier must pass all of the
signal's frequency components with the same delay. From the requirement for a constant
delay we can derive the system poles. The frequency response function having a constant
delay T [Ref. 4.8, 4.9] is of the form:

F(s) = e^{-sT}    (4.4.1)

\cosh s = 1 + \frac{s^2}{2!} + \frac{s^4}{4!} + \frac{s^6}{6!} + \frac{s^8}{8!} + \cdots    (4.4.3)

\sinh s = s + \frac{s^3}{3!} + \frac{s^5}{5!} + \frac{s^7}{7!} + \frac{s^9}{9!} + \cdots    (4.4.4)
With these suppositions, and using long division, we obtain the continued fraction:

\frac{\cosh s}{\sinh s} = \frac{1}{s} + \cfrac{1}{\cfrac{3}{s} + \cfrac{1}{\cfrac{5}{s} + \cfrac{1}{\cfrac{7}{s} + \cfrac{1}{\cfrac{9}{s} + \cdots}}}}    (4.4.5)
By successive multiplication we can simplify this continued fraction into a
simple rational function. By truncating the fraction at 9/s we obtain the following
approximation:

\frac{\cosh s}{\sinh s} \approx \frac{15\, s^4 + 420\, s^2 + 945}{s^5 + 105\, s^3 + 945\, s}    (4.4.6)
Now we put this and Eq. 4.4.4 into Eq. 4.4.2 and perform the suggested division
by sinh s. A normalized expression, where F(s) = 1 at s = 0, is obtained by multiplying
the numerator by 945. With these operations we obtain:

F(s) = e^{-s} \approx \frac{945}{s^5 + 15\, s^4 + 105\, s^3 + 420\, s^2 + 945\, s + 945}    (4.4.7)
A critical reader might ask why we have taken such a circuitous route to
this result, instead of deriving it directly from the Maclaurin series as:

e^{-s} = \frac{1}{e^{s}} \approx \cfrac{1}{1 + s + \cfrac{s^2}{2!} + \cfrac{s^3}{3!} + \cfrac{s^4}{4!} + \cfrac{s^5}{5!}}    (4.4.8)
If we compute the roots of this denominator, we find that s₃,₄ lie in the right half of the s-plane. Therefore the denominator of
Eq. 4.4.8 is not a Hurwitz polynomial [Ref. 4.10] (a closer investigation would reveal
that the denominator is not a Hurwitz polynomial whenever its degree exceeds 4; but even for
low order systems it can be shown that the Maclaurin series results in an envelope
delay which is far from maximally flat). Thus Eq. 4.4.8 describes an unstable
system or an unrealizable transfer function.
Let us return to Eq. 4.4.7, which we express in a general form:

F(s) = \frac{a_0}{s^n + a_{n-1} s^{n-1} + \cdots + a_2 s^2 + a_1 s + a_0}    (4.4.9)
where the numerical values of the coefficients can be calculated by the equation:

a_i = \frac{(2n - i)!}{2^{\,n-i}\; i!\,(n - i)!}    (4.4.10)
where J_{1/2}(jω) and N_{1/2}(jω) are the spherical Bessel functions [Ref. 4.10, 4.11].
Therefore we name the polynomials having their coefficients expressed by Eq. 4.4.10
Bessel polynomials. Their roots are the poles of Eq. 4.4.9 and we call them Bessel
poles. We have listed the values of Bessel poles for polynomials of order n = 1–10 in
Table 4.4.1, along with the corresponding pole angles θᵢ.
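Eq. 4.4.10 is easy to check numerically. The following sketch (ours, not the Part 6 code) computes the coefficients and, for n = 5, reproduces the denominator of Eq. 4.4.7:

```python
from math import factorial

def bessel_coeffs(n):
    """Coefficients a_0 ... a_n of the n-th order Bessel polynomial,
    by Eq. 4.4.10: a_i = (2n - i)! / (2^(n-i) * i! * (n-i)!)."""
    return [factorial(2*n - i) // (2**(n - i) * factorial(i) * factorial(n - i))
            for i in range(n + 1)]

print(bessel_coeffs(5))   # [945, 945, 420, 105, 15, 1], cf. Eq. 4.4.7
```

The Bessel poles of Table 4.4.1 are then simply the roots of the polynomial with these coefficients.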
We usually express Eq. 4.4.9 in another normalized form, which is suitable for
the 𝓛⁻¹ transform:

F(s) = \frac{(-1)^n\, s_1 s_2 s_3 \cdots s_n}{(s - s_1)(s - s_2)(s - s_3) \cdots (s - s_n)}    (4.4.12)
Fig. 4.4.1: Bessel poles for order n = 1–10. Bessel poles lie on a family of ellipses with
one focus at the origin of the complex plane and the other focus on the positive real axis.
The first-order pole is the same as for the Butterworth system and lies on the unit circle.
|F(s)| = \left| \frac{s_1 s_2 s_3 \cdots s_n}{(s - s_1)(s - s_2)(s - s_3) \cdots (s - s_n)} \right|_{s = j\omega/\omega_h}    (4.4.13)

where ω_h = 1/RC is the non-peaking stage cut off frequency. If we put in the numerical
values of the poles sᵢ = σᵢ ± jωᵢ and s = jω/ω_h as suggested, this formula obtains a
form similar to Part 2, Eq. 2.6.10 (where we had 4 poles only).
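Eq. 4.4.13 is straightforward to evaluate numerically. A minimal sketch of ours follows; since Table 4.4.1 is not reproduced here, the n = 3 pole values (−2.3222, −1.8389 ± j1.7544, unit envelope delay normalization) are quoted from the usual Bessel tables:

```python
# |F(jw)| by Eq. 4.4.13: product of the pole moduli over the product
# of distances from the point jw to each pole.
poles3 = [-2.3222 + 0j, -1.8389 + 1.7544j, -1.8389 - 1.7544j]

def magnitude(w, poles):
    num, den = 1.0, 1.0
    for p in poles:
        num *= abs(p)
        den *= abs(1j * w - p)
    return num / den

print(magnitude(0.0, poles3))              # 1.0 : normalized DC gain
print(round(magnitude(1.7557, poles3), 3)) # ~0.707 : half power above w = 1
```

Note that the −3 dB point falls well above ω = 1, illustrating how the upper half power frequency of delay normalized Bessel poles grows with order.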
The magnitude plots for n = 1–10 are shown in Fig. 4.4.2. By comparing this
figure with Fig. 4.3.3, where the frequency response curves for Butterworth poles are
displayed, we note an important difference: for Butterworth poles the upper half power
frequency is always 1, regardless of the number of poles. In contrast, for Bessel poles
the upper half power frequency increases with n.
The reason for the difference is that the derivation of the n Butterworth poles was
based on a maximally flat magnitude condition, whilst the Bessel poles were derived from the
condition for a unit envelope delay. This difference prevents any direct comparison of
the bandwidth extension and the rise time improvement between the two kinds of poles.
To be able to compare the two types of systems on a fair basis we must normalize the
Bessel poles to the first-order cut off frequency. We do this by recursively multiplying
the poles by a correction factor and calculating the cut off frequency, until a satisfactory
approximation is reached (see Sec. 4.4.6). Also, a special set of Bessel poles is derived
in Sec. 4.5, allowing us to interpolate between Bessel and Butterworth poles. The
BESTAP algorithm (in Part 6) calculates the Bessel poles in any of the three options.
Fig. 4.4.2: Frequency response magnitude of systems with Bessel poles for order n = 1–10 (ω_h = 1/RC).
To find the cut off frequency and the bandwidth improvement offered by the
Bessel poles, an inversion formula would have to be developed from Eq. 4.4.13. This inversion
formula would be different for each n, so there is no general solution. Instead,
we shall use a computer program (see Part 6) to calculate the complete frequency
response magnitude and find the cut off frequency from it. Calculated in this way, the
bandwidth improvement factors η_b = ω_H/ω_h for Bessel poles of order n = 1–10 are
listed in the following table:
The calculation is similar to the phase response for Butterworth poles; however,
there we had the normalized upper half power frequency ω_H = 1, whilst for Bessel
poles ω_H increases with the order n, so here we must use ω_h = 1/RC as the reference,
where RC is the time constant of the non-peaking amplifier. Then for the phase
response we simply repeat Eq. 4.3.25 and write ω_h (instead of ω_H):

\varphi(\omega) = \sum_{i=1}^{n} \arctan \frac{\dfrac{\omega}{\omega_h} - \omega_i}{\sigma_i}    (4.4.14)
Fig. 4.4.3 shows the phase plots of Eq. 4.4.14 for Bessel poles of order
n = 1–10 (owing to the cut off frequency increasing with order n, the frequency scale
had to be extended to show the asymptotic values at high frequencies).
So far we have used a logarithmic frequency scale for our phase response plots.
However, by using a linear frequency scale, as in Fig. 4.4.4, the plots show that the
phase response for Bessel poles is linear up to a certain frequency [Ref. 4.10], which
increases with the order n.
Fig. 4.4.3: Phase angle of the systems with Bessel poles of order n = 1–10 (ω_h = 1/RC).
Fig. 4.4.4: Phase angle as in Fig. 4.4.3, but on a linear frequency scale. Note the linear
phase frequency dependence extending from the origin to progressively higher frequencies as n → ∞.
4.4.4 Envelope delay
Here, too, we take the corresponding formula from the Butterworth poles and
replace the frequency normalization ω_H by ω_h:

\tau_e\, \omega_h = \sum_{i=1}^{n} \frac{\sigma_i}{\sigma_i^2 + \left( \dfrac{\omega}{\omega_h} - \omega_i \right)^2}    (4.4.15)
The envelope delay plots are shown in Fig. 4.4.5. The delay is flat up to a certain
frequency, which increases with increasing order n. This was our goal when we derived
the Bessel poles, starting with Eq. 4.4.1; the name MFED
(Maximally Flat Envelope Delay) is therefore fully justified by this figure. This property is
essential for pulse amplification. Because pulses contain a broad range of frequency
components, all of them (or, in practice, the most significant ones, i.e., those which are
not attenuated appreciably) should be subject to an equal time delay when passing through
the amplifier, in order to preserve the pulse shape at the output as much as possible.
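A quick numerical check of the MFED property (our own sketch, again quoting the n = 3 unit delay Bessel poles sᵢ = σᵢ + jωᵢ in place of Table 4.4.1): the envelope delay evaluated in the form of Eq. 4.4.15 must come out as one unit at low frequencies:

```python
# Envelope delay in the form of Eq. 4.4.15 (normalized, w_h = 1):
# tau_e = sum_i sigma_i / (sigma_i^2 + (w - omega_i)^2)
poles3 = [-2.3222 + 0j, -1.8389 + 1.7544j, -1.8389 - 1.7544j]

def envelope_delay(w, poles):
    return sum(p.real / (p.real**2 + (w - p.imag)**2) for p in poles)

print(round(envelope_delay(0.0, poles3), 3))   # -1.0 : one unit of delay at DC
print(round(envelope_delay(0.5, poles3), 3))   # -1.0 : still flat at w = 0.5
```

The delay stays within a fraction of a percent of one unit well into the passband, which is precisely the maximal flatness the derivation was aiming at.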
Fig. 4.4.5: Envelope delay of the systems with Bessel poles for order n = 1–10 (ω_h = 1/RC).
Note the flat unit delay extending in frequency with increasing system order. This figure
demonstrates the fulfilment of the criterion from which we started the derivation of the MFED response.
We start with Eq. 4.4.12 and multiply it by the unit step operator 1/s to obtain:

G(s) = \frac{(-1)^n\, s_1 s_2 s_3 \cdots s_n}{s\,(s - s_1)(s - s_2)(s - s_3) \cdots (s - s_n)}    (4.4.16)
By inserting the numerical pole values from Table 4.4.3 for the systems of order
n = 1–10 we could proceed in the same way as in the examples of Appendix 2.3, but it
would take too much space. Instead, we shall use the routines developed in Part 6 to
generate the plots of Fig. 4.4.6. This diagram is notably different from the step response
plots for Butterworth poles in Fig. 4.3.6. Again, the reason is that for normalized
Butterworth poles the upper half power frequency ω_H is always one, regardless of the
order n; consequently the step response always has the same maximum slope, but a
progressively larger delay. The Bessel poles, on the contrary, give a progressively steeper
slope, whilst the delay approaches unity. This is also reflected in the improvement in
rise time, listed in Table 4.4.2. Of course, the improvement in rise time is even greater
for peaking circuits using T-coils.
Fig. 4.4.6: Step response of systems with Bessel poles of order n = 1–10 (T = RC).
Note the 50% amplitude delay approaching unity as the system order increases.
From Fig. 4.4.2 and 4.4.6 one could draw the false conclusion that the upper half
power frequency increases and the rise time decreases if more equal amplifier stages are
cascaded. This is not so, because all the parameters of systems having Bessel poles are
defined with respect to the single stage non-peaking amplifier, where ω_h = 1/RC.
In the case of a system with n Bessel poles this would mean chopping the stray
capacitance of a single amplifying stage into smaller capacitances and separating them
by coils to create n poles.
Unfortunately there is a limit in practice, because each individual amplifier stage
input sees two capacitances: the output capacitance of the previous stage and its own
input capacitance. Therefore, in a single stage we can have at most four poles (either
Bessel, Butterworth, or of any other family).
If we use more than one stage, we can assign a small group of staggered poles
from the n-th group (from either Table 4.3.1 or Table 4.4.3) to each stage, so that the
system as a whole has the poles specified by the chosen n-th group. Then no stage by
itself will be optimized, but the amplifier as a whole will be. More details of this
technique are given in Sec. 4.6 and some examples can be found in Part 5 and Part 7.
Suppose that we have succeeded in making an amplifier with zero phase shift and
an ideal Gaussian frequency response:

G(\omega) = e^{-\omega^2}    (4.4.18)
the plot of which is shown in Fig. 4.4.7 for both positive and negative frequencies (and,
to acquire a feeling for Bessel systems, compared to the magnitude of a 5th -order system
with modified Bessel poles).
Fig. 4.4.7: Ideal Gaussian (MFED) frequency response G (real only, with no phase shift),
compared to the magnitude of a 5th-order modified Bessel system B5a (identical cutoff
asymptote, Table 4.5.1). The frequency scale is two-sided, linear, and normalized to
ω_h = 1/RC of the first-order system, which is shown as the reference R.
By examining Eq. 4.4.9 and Eq. 4.4.18 we come to the conclusion that it is
possible to approximate the Gaussian response with any desired accuracy up to a
certain frequency. At higher frequencies, the Gaussian response falls faster than the
approximating response. This is made evident in Fig. 4.4.8, where the same
responses are plotted in a log–log scale.
By applying a unit step at the instant t = 0 to the input of the hypothetical
amplifier having a Gaussian frequency response, the resulting step response is equal to
the so called error function, which is defined as the time integral of the exponential
function of time squared [Ref. 4.2]:

g_G(t) = \operatorname{erf}(t) = \frac{1}{2\sqrt{\pi}} \int_{-\infty}^{t} e^{-\tau^2/4}\, d\tau    (4.4.19)
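The integral in Eq. 4.4.19 reduces to the standard error function (a sketch of ours; the 1/4 in the exponent makes the substitution τ = 2v natural):

```python
import math

def g_gauss(t):
    """Step response of the zero phase Gaussian system, Eq. 4.4.19:
    (1/(2*sqrt(pi))) * integral_{-inf}^{t} exp(-u^2/4) du,
    which reduces to math.erf after substituting u = 2v."""
    return 0.5 * (1.0 + math.erf(t / 2.0))

print(g_gauss(0.0))                    # 0.5 : half amplitude exactly at t = 0
print(g_gauss(-3.0) + g_gauss(3.0))    # 1.0 : perfectly symmetrical response
```

The nonzero value for negative arguments shows the response existing for t < 0, exactly the feature that makes this system physically unrealizable.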
Fig. 4.4.8: Frequency response in a log–log scale, making evident how the ideal Gaussian
response G decreases much more steeply with frequency than the 5th-order Bessel response
B5a (R is the first-order reference, ω_h = 1/RC). The Bessel system would have to be of
infinitely high order to match the Gaussian response.
Fig. 4.4.9: Step response of a hypothetical system, g_G, having the ideal Gaussian frequency
response with no phase shift, as in Fig. 4.4.7 and 4.4.8. Compare it with the step
response of a 5th-order Bessel system, g_B5a, with modified Bessel poles, Table 4.5.1, and
envelope delay compensated (τ_e = 3.94 T_H, where T_H = 2π/ω_h) for minimal difference in the
half amplitude region.
The plot of Eq. 4.4.21, calculated by the Simpson method, is shown in Fig. 4.4.9.
The step response is symmetrical, without any overshoot. However, here too, we have a
response for t < 0, as with the ideal MFA amplifier.
If our hypothetical amplifier were to have any linear phase delay, the curve g_G in
Fig. 4.4.9 would be shifted rightwards from the origin, but an infinite phase shift would
be required in order to have no response for t < 0 (the same as for Fig. 4.3.8).
By looking back at Eq. 4.4.7, we realize that we would need an infinite number
of terms in the denominator (= an infinite number of poles) in order to justify an '='
sign instead of an approximation ('≈'). This would mean an infinite number of system
components and amplifying stages, and therefore the conclusion is that we cannot
make an amplifier with an ideal Gaussian response (but we can come very close).
A proof based on the Paley–Wiener criterion can be carried out in the
following way: if we compare the step response in Fig. 4.4.9 with the step response of a
non-peaking multi-stage amplifier in Fig. 4.1.7, for n = 10, there is a great similarity.
We can therefore ask whether a Gaussian response could be achieved by increasing
the number of stages to some arbitrarily large number (n → ∞). By doing so, the phase
response diverges as n → ∞ and it becomes infinite for ω → ∞. Therefore, for both
reasons (an infinite number of stages and a divergent phase response) it is not possible to
make an amplifier with an ideal Gaussian response.
Because the Bessel poles are derived from the requirement for an identical
envelope delay, there is no simple way of renormalizing them back to the same cut off
frequency. However, such a renormalization would be very useful, not only for
comparing systems of equal order from different pole families, but also for
comparing systems of different order within the Bessel family itself.
What is difficult to do analytically is often easily done numerically, especially if
the actual number crunching is executed by a machine. The normalization procedure
starts by taking the original Bessel poles and finding the system magnitude by Eq. 4.4.13
at the unit frequency (ω/ω_h = 1). We obtain an attenuation value, say, |F(1)| = a,
whilst we want |F(1)| to be 1/√2. The ratio q = 1/(a√2) is the correction factor by which
we multiply all the poles, and then we insert the new poles again into Eq. 4.4.13. We keep
repeating this procedure until |a − 1/√2| < ε, with ε being an arbitrarily small error;
for practical purposes, a value of ε = 0.001 is adequate. In the algorithm presented in
Part 6, this tolerance is reached in only 6 to 9 iterations, depending on the system order.
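The iteration takes only a few lines of code; a sketch in the spirit of the Part 6 algorithm (not the BESTAP code itself), shown for the n = 3 unit delay Bessel poles quoted from the usual tables:

```python
import math

# n = 3 Bessel poles, unit envelope delay normalization
poles = [-2.3222 + 0j, -1.8389 + 1.7544j, -1.8389 - 1.7544j]

def mag(w, poles):
    """|F(jw)| by Eq. 4.4.13."""
    num, den = 1.0, 1.0
    for p in poles:
        num *= abs(p)
        den *= abs(1j * w - p)
    return num / den

target = 1.0 / math.sqrt(2.0)          # wanted |F(j1)|: the -3 dB level
for _ in range(20):
    a = mag(1.0, poles)
    if abs(a - target) < 0.001:        # epsilon = 0.001, as in the text
        break
    q = 1.0 / (a * math.sqrt(2.0))     # correction factor q = 1/(a*sqrt(2))
    poles = [q * p for p in poles]     # rescale all the poles and repeat

print(round(mag(1.0, poles), 3))       # ~0.707 : cut off is now at w = 1
```

On this example the loop indeed settles within the stated handful of iterations.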
The following graphs were made using the computer algorithms presented in
Part 6 and show the performance of cut off frequency normalized Bessel systems of
order n = 1–10, as in the previous figures.
Fig. 4.4.10 shows the frequency response magnitude; the plots for n = 5–9 are
omitted, since the differences are too small to identify them on such a vertical scale (the
difference in high frequency attenuation becomes significant only with a higher magnitude
resolution, say, down to 0.001 or more). Fig. 4.4.11 shows the phase, Fig. 4.4.12 the
envelope delay and Fig. 4.4.13 the step response. Finally, in Table 4.4.4 we
report the values of the Bessel poles and their respective angles for systems with equal cut
off frequency.
Fig. 4.4.10: Frequency response magnitude of systems with normalized Bessel poles of order
n = 1, 2, 3, 4, and 10 (ω_H = ω_h = 1/RC). Note the nearly identical passband response; this
is the reason why we can approximate the oscilloscope (multi-stage) amplifier rise time from
the cut off frequency, using the relation for the first-order system: τ_r = 0.35/f_h.
Fig. 4.4.11: Phase angle of systems with normalized Bessel poles of order n = 1–10
(ω_H = ω_h = 1/RC).
Fig. 4.4.12: Envelope delay of systems with normalized Bessel poles of order n = 1–10
(ω_H = ω_h = 1/RC). Although the bandwidth is the same, the delay flatness extends
progressively with system order, already reaching beyond the system cut off frequency for n = 5.
Fig. 4.4.13: Step response of systems with normalized Bessel poles of order n = 1–10
(T_H = T_h = RC). Note the half amplitude slope being almost equal for all systems,
indicating an equal system cut off frequency.
In order to be able to interpolate between Butterworth and Bessel poles, the latter
must be modified so that, for both systems of equal order, the frequency response
magnitude has the same asymptotes in both the passband and the stopband. This is
achieved if the product of all the poles is equal to one, as in Butterworth systems:

(-1)^n \prod_{k=1}^{n} s_k = 1    (4.5.1)
The coefficients a_k of the Bessel polynomials are calculated by Eq. 4.4.10.
Then the coefficients b_k are those of the modified Bessel polynomial, from which we
can calculate the modified Bessel poles of F(s) for the orders n = 1–10. These poles are
listed, together with the corresponding pole radii r and pole angles θ, in Table 4.5.1.
Fig. 4.5.1: Pole interpolation procedure. Butterworth (index B) and Bessel poles
(index T) are expressed in polar coordinates, s(r, θ). The trajectory going through
both poles is the interpolation path required to obtain the transitional pole s_I.
We first express the poles in polar coordinates with the well known conversion.
Here the π radians added are required because the arctangent function repeats with a
period of π radians, so it does not distinguish between the poles in quadrant III and
those in quadrant I, and the same is true for quadrants IV and II.
In Eq. 4.5.3 the coefficient a₀ is equal to the product of all the poles. Because we
have divided the polynomial coefficients a_k by a₀ to obtain the coefficients b_k, we have
effectively normalized the product of all the poles to one:

\left| \prod_{k=1}^{n} s_k \right| = 1    (4.5.11)

and this is now true for both the Butterworth and the modified Bessel poles. Therefore we
can assume that there exists a trajectory going through the k-th Butterworth pole s_Bk and
the k-th Bessel pole s_Tk, and each point on this trajectory represents a pole s_Ik which
can be expressed as:
such that the absolute product of all the interpolated poles s_I is kept equal to one. Then:
Let us calculate the frequency, phase, time delay and step response of a network
with three poles. Three poles are just enough to demonstrate the procedure of pole
interpolation completely. Let us select m = 0.5. From Table 4.3.1 we find the following
values for the Butterworth poles of order n = 3:

s_1B:  r_1B = 1        θ_1B = 180°
s_2B:  r_2B = 1        θ_2B = 120°         (4.5.15)
s_3B:  r_3B = 1        θ_3B = −120°

and in Table 4.5.1, order n = 3, we find these values for the modified Bessel poles:

s_1T:  r_1T = 0.9416   θ_1T = 180°
s_2T:  r_2T = 1.0305   θ_2T = 136.35°      (4.5.16)
s_3T:  r_3T = 1.0305   θ_3T = −136.35°
Now we interpolate the radii and the pole angles:

r_1 = r_1T^m = 0.9416^0.5 = 0.9704
r_2,3 = r_2,3T^m = 1.0305^0.5 = 1.0151    (4.5.17)
θ_2,3 = ±[θ_B + m(θ_T − θ_B)] = ±[120° + 0.5 (136.35° − 120°)] = ±128.175°

so that the interpolated poles are s₁ = −0.9704 and s₂,₃ = −0.6275 ± j0.7981 (4.5.18), and
the frequency response magnitude is:

|F(\omega)| = \frac{1}{\sqrt{(0.9704^2 + \omega^2)\,\left[ 0.6275^2 + (\omega - 0.7981)^2 \right]\left[ 0.6275^2 + (\omega + 0.7981)^2 \right]}}    (4.5.19)
The magnitude plot is shown in Fig. 4.5.2 (TBT); for comparison, the magnitude
plots with Butterworth poles and modified Bessel poles are also drawn.
The normalized phase response is calculated as:

\varphi(\omega) = -\arctan\frac{\omega}{0.9704} - \arctan\frac{\omega - 0.7981}{0.6275} - \arctan\frac{\omega + 0.7981}{0.6275}    (4.5.20)
In Fig. 4.5.3 the phase plot of the transitional (TBT) system is drawn, together
with the plots for the Butterworth and modified Bessel systems.
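The interpolation of Eq. 4.5.17 can be checked in a few lines (our own sketch; the polar values are those of Eq. 4.5.15 and 4.5.16):

```python
import math, cmath

m = 0.5                      # interpolation parameter: 0 = Butterworth, 1 = Bessel
rB, thB = 1.0, 120.0         # 3rd-order Butterworth pole s_2B (Table 4.3.1)
rT, thT = 1.0305, 136.35     # modified Bessel pole s_2T (Table 4.5.1)

# radius interpolated geometrically, angle linearly (Eq. 4.5.17)
r = rB ** (1.0 - m) * rT ** m
th = thB + m * (thT - thB)
s2 = cmath.rect(r, math.radians(th))

print(round(th, 3))                          # 128.175 degrees
print(round(s2.real, 3), round(s2.imag, 3))  # ~ -0.627  0.798
```

The Cartesian result reproduces the pole pair s₂,₃ = −0.6275 ± j0.7981 used in the magnitude and phase formulas above.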
Fig. 4.5.2: Frequency response magnitude |F(ω)| of the transitional (TBT) system,
compared with the Butterworth (B) and modified Bessel (T) systems.
Fig. 4.5.3: Phase response φ(ω) of the transitional (TBT) system, compared with the
Butterworth (B) and modified Bessel (T) systems.
The envelope delay plot is shown in Fig. 4.5.4 (TBT), along with the delays of the
Butterworth and the modified Bessel system.
Fig. 4.5.4: Envelope delay τ_e ω_h of the transitional (TBT) system, compared with the
Butterworth (B) and modified Bessel (T) systems.
The starting point for the step response calculation is the general formula for a
three pole function, multiplied by the unit step operator 1/s:

G(s) = \frac{(-1)^3\, s_1 s_2 s_3}{s\,(s - s_1)(s - s_2)(s - s_3)}    (4.5.22)
We calculate the corresponding step response in the time domain by the 𝓛⁻¹ transform:

g(t) = \mathcal{L}^{-1}\{G(s)\} = \sum_i \operatorname{res}_i \left[ G(s)\, e^{st} \right] = \sum_i \operatorname{res}_i \frac{(-1)^3\, s_1 s_2 s_3\, e^{st}}{s\,(s - s_1)(s - s_2)(s - s_3)}    (4.5.23)
After the sum of the residues is calculated, we insert the poles s₁ = σ₁ and
s₂,₃ = σ₂ ± jω₂ to obtain (see Appendix 2.3):

g(t) = 1 + \frac{|\sigma_1| \sqrt{\sigma_2^2 + \omega_2^2}}{\omega_2 \sqrt{(\sigma_2 - \sigma_1)^2 + \omega_2^2}}\; e^{\sigma_2 t} \sin(\omega_2 t + \theta) - \frac{\sigma_2^2 + \omega_2^2}{(\sigma_2 - \sigma_1)^2 + \omega_2^2}\; e^{\sigma_1 t}    (4.5.24)
By inserting the numerical values of the poles from Eq. 4.5.18, we arrive at the
final relation:

g(t) = 1 + 1.4211\, e^{-0.6275\, t} \sin(0.7981\, t + 2.8811) - 1.3660\, e^{-0.9704\, t}    (4.5.26)
The plot based on this formula is shown in Fig. 4.5.5 (TBT). By inserting the
appropriate pole values into Eq. 4.5.24 we obtain the plots of the Butterworth (B) and
modified Bessel (T) step responses.
Fig. 4.5.5: Step response of the transitional Bessel–Butterworth three-pole system
(TBT), along with the modified Bessel (T) and Butterworth (B) responses.
where n is an even integer (2, 4, 6, …). For a fair comparison we must use the poles
from Table 4.4.4, i.e., the Bessel poles normalized to the same cut off frequency.
The plots of these two functions in Fig. 4.6.1 were made by computer, using
the numerical methods described in Part 6. From this figure it is evident that an
amplifier with staggered poles (as reported in Table 4.4.4 for each n) preserves the
intended bandwidth. On the other hand, the amplifier with the same total number of
poles, but with a second-order stage repeated (n/2) times, does not: its bandwidth shrinks
with each additional second-order stage. Obviously, for n = 2 the two systems are identical.
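The bandwidth shrinkage of repeated stages is easy to see numerically. A minimal sketch of our own, using the 2nd-order Bessel stage with poles −1.5 ± j0.866 (from s² + 3s + 3), whose −3 dB frequency in unit delay normalization is about 1.362:

```python
def mag(w, poles):
    """|F(jw)| as the product of pole moduli over distances from jw."""
    num, den = 1.0, 1.0
    for p in poles:
        num *= abs(p)
        den *= abs(1j * w - p)
    return num / den

b2 = [-1.5 + 0.866j, -1.5 - 0.866j]   # 2nd-order Bessel stage
w3 = 1.3617                            # its -3 dB (half power) frequency

print(round(mag(w3, b2), 4))           # ~0.7071 : one stage
print(round(mag(w3, b2) ** 2, 4))      # ~0.5    : two identical stages, -6 dB
```

Cascading a second identical stage puts the response at −6 dB at the former −3 dB point, so the half power bandwidth of the repeated-pair system necessarily moves down, whilst a properly staggered pole set keeps it in place.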
Fig. 4.6.1: Frequency response magnitude |F(ω)| of staggered-pole systems compared with
repeated second-order pole pairs; curve a) is the 2-pole Bessel reference.
Even if the poles were of a different kind, e.g., Butterworth or Chebyshev poles,
the staggered poles would also preserve the bandwidth, but the system with repeated
second-order pole pairs would not. For the same total number of poles the curves tend to
the same cut off asymptote (from Fig. 4.6.1 this is not evident, but it would have been
clear if the graphs had been plotted with an extended vertical scale, say, down to 10⁻⁶).
In the time domain the decrease of rise times is even more evident. To compare
the step responses we take Eq. 4.6.1 and 4.6.2 (without the magnitude sign) and
multiply them by the unit step operator 1/s, obtaining:

G_s(s) = \frac{s_1 s_2 \cdots s_n}{s\,(s - s_1)(s - s_2) \cdots (s - s_n)}    (4.6.3)

and:

G_r(s) = \frac{(s_1 s_2)^{n/2}}{s\,(s - s_1)^{n/2} (s - s_2)^{n/2}}    (4.6.4)

with n again being an even integer.
By using the 𝓛⁻¹ transform, we obtain the step responses in the time domain:

g_s(t) = \mathcal{L}^{-1}\{G_s(s)\} = \sum \operatorname{res} \frac{s_1 s_2 \cdots s_n\, e^{st}}{s\,(s - s_1)(s - s_2) \cdots (s - s_n)}    (4.6.5)

and:

g_r(t) = \mathcal{L}^{-1}\{G_r(s)\} = \sum \operatorname{res} \frac{(s_1 s_2)^{n/2}\, e^{st}}{s\,(s - s_1)^{n/2} (s - s_2)^{n/2}}    (4.6.6)
The analytical calculation of these two equations, for n equal to 2, 4, 6, 8 and
10, should be pure routine by now (at least for readers who have followed the
calculations in Part 2 and Appendix 2.3; those who have skipped them will have
plenty of opportunities to revisit them later, when the need arises!). In any case,
this can be done more easily by computer, employing the numerical methods
described in Part 6. The plots obtained are shown in Fig. 4.6.2. The figure is convincing
enough and needs no further comment.
Fig. 4.6.2: Step response of systems with staggered poles (n = 2, 4, 6, 8, 10), compared with
systems with repeated second-order pole pairs (2×2, 3×2, 4×2, 5×2). The rise times of the
systems with repeated poles increase with each additional stage.
Readers who pay attention to details may have noted that we have listed
the poles in our tables (and also in the figures) in a particular order: we have combined
them in complex conjugate pairs and listed them by increasing absolute value of
their imaginary part. There is a reason for this, beyond pure aesthetics.
In general we can choose any number (m) of poles from the total number (n) of
poles in the system and assign them to any stage of the total number of stages (k).
Sometimes a particular choice is limited by reasons other than gain and
bandwidth; e.g., in oscilloscopes the first stage is a JFET source follower, which
provides the required high input impedance, and since it is usually difficult to design
effective peaking around a unity gain stage, this stage almost universally has one real
pole. In most other cases the choice is governed mainly by the gain × bandwidth
product available for a given number of stages.
However, the main reason which dictates the particular pole ordering is the
dynamic range. Remember that in wideband amplifiers we are, more often than not, at
the limits of a system’s realizability. If we want to extract the maximum performance
from a system we should limit any overshoot at each stage to a minimum.
If we consider a rather extreme example, by putting the pole pair with the
highest imaginary part in the first amplifier stage, the step response of this stage would
exhibit a high overshoot. Consequently, the maximum amplitude which the system
could handle linearly would be reduced by the amount of that overshoot.
To make the argument clearer, let us take a 3-stage, 5-pole system
with Bessel poles (Table 4.4.3, n = 5) and analyze the step response of each stage
separately for two different assignments. In the first case we shall use a reversed pole
assignment: the pair s₄,₅ is assigned to the first stage, the pair s₂,₃ to the second
stage and the real pole s₁ to the last stage. In the second case we shall assign the poles
in the preferred order, the real pole first and the pair with the largest imaginary part last.
Our poles have the following numerical values:

s₁ = −3.6467
s₂,₃ = −3.3520 ± j1.7427
s₄,₅ = −2.3247 ± j3.5710
We can model the actual amplifier by three voltage driven current generators
loaded by appropriate RC or RLC networks. Since the passive components are
isolated by the generators, each stage response can be calculated separately. We thus
have one first-order function and two second-order functions:

g_1(t) = \sum \operatorname{res} \frac{-s_1\, e^{st}}{s\,(s - s_1)} = 1 - e^{s_1 t}

g_2(t) = \sum \operatorname{res} \frac{s_2 s_3\, e^{st}}{s\,(s - s_2)(s - s_3)} = 1 + \frac{1}{|\sin\theta_2|}\, e^{\sigma_2 t} \sin(\omega_2 t - \theta_2)

g_3(t) = \sum \operatorname{res} \frac{s_4 s_5\, e^{st}}{s\,(s - s_4)(s - s_5)} = 1 + \frac{1}{|\sin\theta_4|}\, e^{\sigma_4 t} \sin(\omega_4 t - \theta_4)
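For a single 2-pole stage the overshoot follows directly from the pole damping; a small sketch of ours, using the n = 5 Bessel pole pairs quoted above, anticipates the result of the stage-by-stage analysis:

```python
import math

s23 = complex(-3.3520, 1.7427)   # second stage pole pair
s45 = complex(-2.3247, 3.5710)   # pair with the largest imaginary part

def overshoot(p):
    """Fractional step response overshoot of one 2-pole stage, from the
    damping factor zeta = -Re{p} / |p| of the pole pair."""
    z = -p.real / abs(p)
    if z >= 1.0:
        return 0.0
    return math.exp(-math.pi * z / math.sqrt(1.0 - z * z))

print(round(100.0 * overshoot(s45), 1))   # ~13 % for the s4,5 stage alone
print(round(100.0 * overshoot(s23), 2))   # ~0.2 % for the s2,3 stage
```

The pair with the largest imaginary part, taken alone, already overshoots by about 13%, which is the dynamic range penalty discussed next.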
In the reversed pole order case the first stage, with the pole pair s₄,₅, is excited by
the unit step input signal. We know that the second-order system from Table 4.4.3 has
an optimal step response; since the imaginary to real part ratio of s₄,₅ (their tan θ₄) is
larger than for the poles of the optimal case, we expect the stage with s₄,₅ to
exhibit a pronounced overshoot.
In Fig. 4.6.3 we have drawn the response of each stage when it is driven
individually by the unit step input signal (the responses are gain normalized to allow
comparison). It is evident that the stage with the poles s₄,₅ has a 13% overshoot.
Fig. 4.6.3: Step response of each of the three stages taken individually
(the s₁ stage, the s₂,₃ stage, and the s₄,₅ stage).
Fig. 4.6.4: Step response of the complete amplifier with the reversed pole order at each stage.
Although the second stage has no overshoot of its own, it overshoots by nearly 5% when
processing the v₄,₅ output.
In Fig. 4.6.4 the step response of the complete amplifier is drawn, showing the
signal after each stage. Note that although the second stage exhibits no overshoot when
driven by the unit step (Fig. 4.6.3), it overshoots by nearly 5% when driven by the
output of the first stage, v₄,₅. The overshoot would be even higher if we were to put
the s₁ stage in the middle.
The dynamic range of the input stage will therefore have to be larger by 13%,
and that of the second stage by 5%, in order to handle the signal linearly. Fortunately,
the maximum input signal is equal to the maximum output divided by the total gain. If we
have followed the rule given by Eq. 4.1.33 and Fig. 4.1.9, the input stage will have only
1/3 of the total system gain, so its output amplitude will be only a fraction of the supply
voltage. On the other hand, the optimal stage gain is rather low, as given by Eq. 4.1.38,
so the dynamic range may become a matter of concern after all.
The circuit configuration which is most critical in this respect is the cascode
amplifier, since there are two transistors effectively in series with the power supply, so
the biasing must be carefully chosen. In traditional discrete circuits, with relatively high
supply voltages, the dynamic range was rarely a problem; the major concern was about
poor linearity for large signals, since no feedback was used. In modern ICs, with lots of
feedback and a supply of 5 V or just 3 V, the usable dynamic range can be critical.
We can easily prevent this limitation if we use the correct pole ordering, so that
the first stage has the real pole s_1 and the last stage the pole pair s_4,5. As we can see in
Fig. 4.6.5, the situation improves considerably: in this case the two front stages
exhibit no overshoot, while the output overshoot is only 0.4%.
In a real amplifier, the pole assignment chosen can be affected by other factors,
e.g., the stage with the largest capacitance will require the poles with the lowest
imaginary part; alternatively a lower loading resistor and thus lower gain can be chosen
for that stage.
[Plot: normalized gain vs. t/T after each stage with the ordering s_1, s_2,3, s_4,5]
Fig. 4.6.5: Step response of the complete amplifier, but with the correct pole assignment.
Résumé of Part 4
The study of this part should have given the reader enough knowledge to acquire
an idea of how multi-stage amplifiers could be optimized by applying inductive peaking
circuits, discussed in Part 2, at each stage.
Also, the merit of using DC over AC coupled multi-stage amplifiers should be
clearly understood.
A proper pole pattern selection is of fundamental importance for the amplifier's
performance. In particular, for a smooth, low overshoot transient response, the extended
envelope-delay flatness offered by the Bessel poles provides a nearly ideal solution,
approaching the ideal Gaussian response very quickly: with only 5 poles, the system's
frequency and step responses conform exceptionally well to the ideal, with a difference
of less than 1% throughout most of the transient.
Finally, the advantage of staggered vs. repeated pole pairs should be seriously
considered in the design of multi-stage amplifiers when gain × bandwidth efficiency is
the primary design goal. We hope that the reader has gained awareness of how the
bandwidth, achieved by hard work following the optimizations of each basic
amplifying stage given in Part 3, can easily be lost by a large factor if the stages
are not coupled correctly.
A few examples of how these principles are used in practice are given in Part 5.
References:
P. Starič, E. Margan:
Wideband Amplifiers
Part 5: System Synthesis and Integration
Contents:
5.0 ‘The Product is Greater than the Sum of its Parts’ ....................................................................... 5.7
5.1 Geometrical Synthesis of Inductively Compensated Multi-stage Amplifiers —
A Simple Example ........................................................................................................................ 5.9
5.2 High Input Impedance Selectable Attenuator with a JFET Source Follower .............................. 5.25
5.2.1 Attenuator High Frequency Compensation ................................................................ 5.26
5.2.2 Attenuator Inductance Loops ..................................................................................... 5.37
5.2.3 The ‘Hook-Effect’ ...................................................................................................... 5.41
5.2.4 Improving the JFET Source Follower DC Stability ................................................... 5.43
5.2.5 Overdrive Recovery ................................................................................................... 5.47
5.2.6 Source Follower with MOSFETs ............................................................................... 5.48
5.2.7 Input Protection Network ........................................................................................... 5.51
5.2.8 Driving the Low Impedance Attenuator ..................................................................... 5.53
5.3 High Speed Operational Amplifiers ............................................................................................ 5.57
5.3.1 The Classical Opamp ................................................................................................. 5.57
5.3.2 Slew Rate Limiting .................................................................................................... 5.61
5.3.3 Current Feedback Amplifiers ..................................................................................... 5.62
5.3.4 Influence of a Finite Inverting Input Resistance ........................................................ 5.69
5.3.5 Noise Gain and Amplifier Stability Analysis ............................................................. 5.72
5.3.6 Feedback Controlled Gain Peaking ............................................................................ 5.79
5.3.7 Improved Voltage Feedback Amplifiers .................................................................... 5.80
5.3.8 Compensating Capacitive Loads ................................................................................ 5.83
5.3.9 Fast Overdrive Recovery ............................................................................................ 5.89
5.4 Improving Amplifier Linearity .................................................................................................... 5.93
5.4.1 Feedback and Feedforward Error Correction ............................................................. 5.94
5.4.2 Error Reduction Analysis ........................................................................................... 5.98
5.4.3 Alternative Feedforward Configurations .................................................................. 5.100
5.4.4 Time Delay Compensation ....................................................................................... 5.103
5.4.5 Circuits With Local Error Correction ...................................................................... 5.104
5.4.6 The Tektronix M377 IC ........................................................................................... 5.117
5.4.7 The Gilbert Multiplier .............................................................................................. 5.123
Résumé of Part 5 .............................................................................................................................. 5.129
References ........................................................................................................................................ 5.131
P.Starič, E.Margan System synthesis and integration
List of Figures:
Fig. 5.3.21: Frequency and step response of the amplifier in Fig. 5.3.20 .......................................... 5.79
Fig. 5.3.22: Improved VF amplifier using folded cascode ................................................................ 5.80
Fig. 5.3.23: Improved VF amplifier derived from a CFB amp .......................................................... 5.81
Fig. 5.3.24: The ‘Quad Core’ structure ............................................................................................. 5.82
Fig. 5.3.25: Output buffer stage with improved current handling ...................................................... 5.83
Fig. 5.3.26: Capacitive load adds a pole to the feedback loop ............................................ 5.83
Fig. 5.3.27: Amplifier stability driving a capacitive load .................................................................. 5.86
Fig. 5.3.28: Capacitive load compensation by inductance ................................................................. 5.86
Fig. 5.3.29: Capacitive load compensation by separate feedback paths ............................................ 5.87
Fig. 5.3.30: Adaptive capacitive load compensation ......................................................................... 5.88
Fig. 5.3.31: Amplifier with feedback controlled output level clipping .............................................. 5.89
Fig. 5.3.32: Amplifier with output level clipping using separate supplies ......................................... 5.90
Fig. 5.3.33: CFB amplifier output level clipping at Z_T ..................................................... 5.90
Fig. 5.4.1: Amplifiers with feedback and feedforward error correction ............................................ 5.94
Fig. 5.4.2: Closed loop frequency response of a feedback amplifier ................................................. 5.95
Fig. 5.4.3: Optimized feedforward amplifier frequency response ..................................................... 5.98
Fig. 5.4.4: Grounded load feedforward amplifier ............................................................................ 5.100
Fig. 5.4.5: ‘Error take off’ principle ................................................................................................ 5.102
Fig. 5.4.6: ‘Current dumping’ principle (Quad 405) ....................................................................... 5.103
Fig. 5.4.7: Time delay compensation of the feedforward amplifier ................................................. 5.104
Fig. 5.4.8: Simple differential amplifier .......................................................................................... 5.105
Fig. 5.4.9: DC transfer function of a differential amplifier .............................................................. 5.106
Fig. 5.4.10: Test set up used to compare different amplifier configurations ................................... 5.108
Fig. 5.4.11: Simple differential cascode amplifier used as the reference ......................................... 5.108
Fig. 5.4.12: Frequency response of the reference amplifier ............................................................. 5.109
Fig. 5.4.13: Step response of the reference amplifier ...................................................................... 5.109
Fig. 5.4.14: Improved Darlington .................................................................................................... 5.110
Fig. 5.4.15: Frequency response of the improved Darlington differential amplifier ........................ 5.110
Fig. 5.4.16: Step response of the improved Darlington differential amplifier ................................. 5.110
Fig. 5.4.17: Differential amplifier with feedforward error correction ............................................. 5.111
Fig. 5.4.18: Differential amplifier with double error feedforward ................................................... 5.111
Fig. 5.4.19: Frequency response of the differential amplifier with double feedforward .................. 5.112
Fig. 5.4.20: Step response of the differential amplifier with double feedforward ........................... 5.112
Fig. 5.4.21: Cascomp (compensated cascode) amplifier ................................................................. 5.113
Fig. 5.4.22: Frequency response of the Cascomp amplifier ............................................................. 5.113
Fig. 5.4.23: Step response of the Cascomp amplifier ...................................................................... 5.113
Fig. 5.4.24: Differential cascode amplifier with error feedback ...................................................... 5.114
Fig. 5.4.25: Modified error feedforward with direct error sensing and direct correction ................ 5.114
Fig. 5.4.26: Cascomp evolution with feedback ................................................................................ 5.115
Fig. 5.4.27: Frequency response of the Cascomp evolution ............................................................ 5.115
Fig. 5.4.28: Step response of the Cascomp evolution ...................................................................... 5.115
Fig. 5.4.29: Differential cascode amplifier with output impedance compensator ............................ 5.116
Fig. 5.4.30: Frequency response of the amplifier with output impedance compensator .................. 5.116
Fig. 5.4.31: Step response of the amplifier with output impedance compensator ............................ 5.116
Fig. 5.4.32: Basic wideband amplifier block of the M377 IC ......................................................... 5.118
Fig. 5.4.33: The basic M377 block as a compound transistor ......................................................... 5.119
Fig. 5.4.34: The M377 amplifier with fast overdrive recovery ........................................................ 5.120
Fig. 5.4.35: Simulation of the M377 amplifier frequency response ................................................ 5.121
Fig. 5.4.36: Simulation of the M377 amplifier step response .......................................................... 5.121
Fig. 5.4.37: Gain switching in the M377 amplifier .......................................................................... 5.122
Fig. 5.4.38: Fixed gain with R–2R attenuator switching ................................................ 5.122
Fig. 5.4.39: The Gilbert multiplier development ............................................................................. 5.124
Fig. 5.4.40: DC transfer function of the Gilbert multiplier .............................................................. 5.126
Fig. 5.4.41: The Gilbert multiplier has almost constant bandwidth ................................................. 5.126
Fig. 5.4.42: Another way of developing the Gilbert multiplier ........................................................ 5.127
Fig. 5.4.43: The four quadrant multiplier ........................................................................................ 5.127
Fig. 5.4.44: The two quadrant multiplier used in M377 .................................................................. 5.128
List of Tables:
Table 5.1.1: Comparison of component values for different pole assignments ................................. 5.22
Table 5.3.1: Typical production parameters of the Complementary-Bipolar Process ....................... 5.64
... and that can be true in both the mathematical and technological sense! Well,
in math, at least as long as we are dealing with numbers greater than two; but in
technology the goal might not be so straightforward and neither are the means of
achieving it.
Electronics engineering is a difficult and demanding job. Most electronics
engineers pay attention to the input and output constraints imposed by the real world
as a matter of course. Many will also silently nod their head when the components
logistician tells them to use another part instead of the one originally specified, just
because it's cheaper. Many will also agree to accommodate the marketing manager
who tells them that the customer wants and expects from the product a feature which
was not initially foreseen as one of the design goals. And almost all will
shrug their shoulders when the chief director announces that the financial resources
for their project have been cut low or even that the project has been canceled. But
almost all will go mad when the mechanical engineer or the enclosure designer
casually stops by and asks if a switch or a pot could be moved from the left side of the
printed circuit board to the right. Fortunately for the electronics engineer, he has on
his side the most powerful argument of all: “Yeah, maybe I could do that, but
probably the final performance would suffer!” Electronics engineering is a difficult
and demanding job, indeed.
In the past 50 years electronics engineers have been delivering miracles at an
ever increasing rate. Not just the general public, but also other people involved within
the electronics industry have become accustomed to this. In the mid 1980s no one ever
asked you if something could be done; instead, the question was ‘for how much and
when?’. Today, no one asks ‘how much’ either — it has to be a bargain (between
brothers) and it had better be ready yesterday!
How many of you could give a name or two of engineers who became rich or
just famous in the last 50 or so years? Let us see: William R. Hewlett was famous, but
that was probably due more to his name in the Hewlett–Packard firm’s title and less to
the actual recognition of his work by the general public. Then there is Ray Dolby,
known for his noise reduction system. Gordon Moore of Intel is known for his ‘law’.
Errr ... who else? Oh, yes, Bill Gates is both rich and famous, but he is more famous
because he is so rich and far less for his own work!
Do you see what we mean? No doubt, many engineers have started a very
profitable business based on their own original circuit ideas. True, most commercially
successful products are the result of a team effort; still, the key solution is often born
in the mind of a gifted individual. But, frankly speaking, is there a single engineer or
scientist who could attract 50,000 delirious spectators to a stadium every Sunday?
OK, 5,000? Maybe Einstein could have, but even he was
considered to be 'a bit nuts'. Well, there you have it, the supply–demand economy.
In this book we have tried to pay a humble tribute to a small number of
amplifier designers. But our readers are engineers themselves, so we are preaching to
the already converted. Certainly we are not going to influence, let alone reverse, any
of those trends.
So, if tomorrow your boss tells you that he will be paying you 5% less, shrug
your shoulders and go back to your drawing board. And do not forget to stop by the
front-panel designer to tell him that he is asking too much!
The reader who has patiently followed the discussion presented in previous
chapters is probably eager to see all that theory being put into practice.
Before jumping to some more complex amplifier circuits, we shall give a
relatively simple example of a two-stage differential cascode amplifier, with which we
shall illustrate the actual system optimization procedure in some detail, using the
previously developed principles to their full potential.
Since we want to grasp the 'big picture', we shall have to leave out some less
important topics, such as negative input impedance compensation, cascode damping,
etc.; these are important for the optimization of each particular stage which, once
optimized, can be idealized to some extent. We have covered them extensively enough
in Part 3, so we shall not explicitly draw the associated components in the schematic
diagram. But, at the end of our calculations, we shall briefly discuss the influence of
those components on the final circuit values.
A two-stage amplifier is a 'minimum complexity' system for which the
multi-stage design principles still apply. To this we shall add a 3-pole T-coil and a
4-pole L+T-coil peaking network, discussed in Part 2, as loads of the two stages, making
a total of 7 poles. There is, however, an additional real pole, owed to the Q_1 input
capacitance and the total input and signal source resistance. As we shall see later, this
pole can be neglected if its distance from the complex plane's origin is at least twice
as large as that of the system real pole set by 1/(R_a C_a).
Such an amplifier thus represents an elementary example in which everything
that we have learned so far can be applied. The reader should, however, be aware that
this is by no means the ideal or, worse still, the only possibility. At the end of our
calculations, when we are able to assess the advantages and limitations of
our initial choices at each stage, we shall examine a few possibilities for further
improvement.
We shall start our calculations from the unavoidable stray capacitances and the
desired total voltage gain. Then we shall apply an optimization process, which we like
to refer to as geometrical synthesis, by which we shall calculate all the remaining
circuit components in such a way that the resulting system will conform to the 7-pole
normalized Bessel–Thomson system. The only difference will be that the actual
amplifier poles will be larger by a certain factor, proportional (but not equal) to the
upper half power frequency ω_H. We have already met the geometrical synthesis in its
basic form in Part 2, Fig. 2.5.3, when we were discussing the 3-pole T-coil circuit. The
name springs from the ability to calculate all the peaking network components from
simple geometrical relations which involve the pole real and imaginary components,
given, of course, the desired pole pattern and a few key component values which can
either be chosen independently or set by other design requirements. Here we are going
to see a generalization of those basic relations applied to the whole amplifier.
It must be admitted that the constant and real input impedance of the T-coil
network is the main factor which allows us to assign so many poles to only two stages.
A cascade of passive 2-pole sections could have been used, but those would load each
other and, as a result, the bandwidth extension factor would suffer. Another possibility
would be to use an additional cascode stage to separate the last two peaking sections,
but another active stage, whilst adding gain, also adds its own problems to be taken
care of. It is, nevertheless, a perfectly valid option.
Let us now take a quick tour of the amplifier schematic, Fig. 5.1.1. We have
two differential cascode stages and two current sources, which set both the transistors'
transconductance and the maximum current available to the load resistors, R_a and R_b.
This limits the voltage range available to the CRT. Since the circuit is differential, the
total gain is twice that of each half. The total DC gain is (approximately):

    A_0 = 2 · (R_a R_b)/(R_e1 R_e3)                                  (5.1.1)

The values of R_e1 and R_e3 set the required capacitive bypasses, C_e1/2 and
C_e3/2, to match the transistors' time constants. In turn, this sets the input capacitance
at the base of Q_1 and Q_3, to which we must add the inevitable C_cb and some strays.
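For a feel of the numbers, a minimal sketch of Eq. 5.1.1 (R_a and R_b are taken from the worked example further on; the emitter resistor values R_e1 and R_e3 are purely illustrative placeholders, since they are not specified at this point in the text):

```python
# Total DC gain of the two-stage differential cascode, Eq. 5.1.1:
#   A0 = 2 * (Ra*Rb) / (Re1*Re3)
Ra, Rb = 268.5, 360.0      # ohm, load resistors (from the worked example)
Re1, Re3 = 110.0, 120.0    # ohm, hypothetical emitter resistors for illustration
A0 = 2.0 * (Ra * Rb) / (Re1 * Re3)
print(f"A0 ~ {A0:.1f}")
```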
[Schematic: two differential cascode stages (Q_1/Q_2 and Q_3/Q_4, with mirrored halves Q_1'–Q_4'), source resistance R_s, emitter networks R_e1 with C_e1/2 and R_e3 with C_e3/2, current sources I_e1 and I_e3 returned to −V_ee; the inter-stage T-coil L_d (coupling k_d, bridge C_bd) with R_a, C_a and C_d; the output L-coil L_c and T-coil L_b (coupling k_b, bridge C_bb) with R_b, C_c and C_b, driving the CRT plates X–X', Y–Y']
Fig. 5.1.1: A simple 2-stage differential cascode amplifier with a 7-pole peaking system: the
3-pole T-coil inter-stage peaking (between the Q_2 collector and the Q_3 base) and the 4-pole
L+T-coil output peaking (between the Q_4 collector and the vertical plates of the cathode ray
tube). The schematic was simplified to emphasize the important design aspects — see text.
The capacitance C_d should thus consist of, preferably, only the input
capacitance at the base of Q_3. If required by the coil 'tuning', a small capacitance can
be added in parallel, but that would also reduce the bandwidth. Note that the
associated T-coil L_d will have to be designed as an inter-stage peaking network, as we
have discussed in Part 3, Sec. 3.6, but we can leave the necessary corrections for the end.
The capacitance C_b, owed almost entirely to the CRT vertical plates, is much
larger than C_d, so we expect that R_a and R_b cannot be equal. From this it follows that
it might be difficult to apply equal gain to each stage in accordance with the principle
explained in Part 4, Eq. 4.1.39. Nevertheless, the difference in gain will not be too
high, as we shall see.
Like any other engineering process, geometrical synthesis also starts from
some external boundary conditions which set the main design goal. In this case it is
the CRT's vertical sensitivity and the available input voltage, from which the total
gain is defined. The next condition is the choice of transistors, by which the available
current is defined. Both the CRT and the transistors set the lower limit of the loading
capacitances at various nodes. From these the first circuit component, R_b, is fixed.
With R_b fixed we arrive at the first 'free' parameter, which can be represented
by several circuit components. However, since we would like to maximize the
bandwidth, this parameter should be attributed to one of the capacitances. By
comparing the design equations for the 3-pole T-coil and the 4-pole L+T-coil peaking
networks in Part 2, it can be deduced that C_a, the input capacitance of the 3-pole
section, is the most critical component.
With these boundaries set, let us assume the following component values:

    C_b = 11 pF    (9 pF of the CRT vertical plates, 2 pF stray)
    C_a = 4 pF     (3 pF from the Q_2 C_cb, 1 pF stray)              (5.1.2)
    R_b = 360 Ω    (determined by the desired gain and the available current)
The pole pattern is, in general, another ‘free’ parameter, but for a smooth,
minimum overshoot transient we must apply the Bessel–Thomson arrangement. As
can be seen in Fig. 5.1.2, each pole (pair) defines a circle going through the pole and
the origin, with the center on the negative real axis.
[Plot: the 7 normalized Bessel–Thomson poles s_a, s_1b, s_2b, s_1c, s_2c, s_1d, s_2d in the s-plane (σ, jω axes); each characteristic circle crosses the real axis at −K/(RC) of its section: K = 1 for a single real pole, K = 2 for series peaking, K = 4 for T-coil peaking]
Fig. 5.1.2: The 7 normalized Bessel–Thomson poles. The characteristic circle
of each pole (pair) has a diameter determined by the appropriate RC constant
and the peaking factor K, which depends on the type of network chosen.
The poles in Fig. 5.1.2 bear the index of the associated circuit components and
the reader might wonder why we have chosen precisely that assignment.
In a general case the assignment of a pole (pair) to a particular circuit section
is yet another ‘free’ design parameter. If we were designing a low frequency filter we
could indeed have chosen an arbitrary assignment (as long as each complex conjugate
pole pair is assigned as a pair, a limitation owed to physics, instead of circuit theory).
If, however, the bandwidth is an issue then we must seek those nodes with the
largest capacitances and apply the poles with the lowest imaginary part to those circuit
sections. This is because the capacitor impedance (which is dominantly imaginary) is
inversely proportional both to the capacitor value and the signal frequency.
In this light, the largest capacitance is at the CRT, that is, C_b; thus the pole pair
with the lowest imaginary part is assigned to the output T-coil section, formed by L_b
and R_b, therefore acquiring the index 'b': s_1b and s_2b.
The real pole is the one associated with the 3-pole stage, where it is set by
the loading resistor R_a and the input capacitance C_a, becoming s_a.
The remaining two pole pairs should be assigned so that the pair with the
larger imaginary part is applied to the peaking network which has the larger bandwidth
improvement factor. Here we must consider that K = 4 for a T-coil, whilst K = 2 for
the series peaking L-section (of the 4-pole L+T-section). Clearly the pole pair with the
larger imaginary part should be assigned to the inter-stage T-coil, L_d; thus they are
labeled s_1d and s_2d. The L-section then receives the remaining pair, s_1c and s_2c.
We have thus arrived at a solution which seems logical, but in order to be sure
that we have made the right choice we should check other combinations as well. We
are going to do so at the end of the design process.
The poles of the normalized 7th-order Bessel–Thomson system, as taken
either from Part 4, Table 4.4.3, or by using the BESTAP routine (Part 6), along with
the associated angles, are:

    s_a = σ_a = −4.9718                          θ_a = 180°
    s_b = σ_b ± jω_b = −4.7583 ± j1.7393         θ_b = 180° ∓ 20.0787°
    s_c = σ_c ± jω_c = −4.0701 ± j3.5172         θ_c = 180° ∓ 40.8316°
    s_d = σ_d ± jω_d = −2.6857 ± j5.4207         θ_d = 180° ∓ 63.6439°    (5.1.3)
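As a quick cross-check, the listed angles follow directly from the pole components, θ = 180° − arctan(ω/|σ|); a short sketch (pole magnitudes copied from Eq. 5.1.3; the tolerance is our own choice):

```python
import math

# (|sigma|, omega) of each normalized 7th-order Bessel-Thomson pole (pair)
poles = {"a": (4.9718, 0.0), "b": (4.7583, 1.7393),
         "c": (4.0701, 3.5172), "d": (2.6857, 5.4207)}

# Angle measured from the positive real axis, in degrees
angles = {n: 180.0 - math.degrees(math.atan2(om, sig))
          for n, (sig, om) in poles.items()}
# e.g. angles["b"] ~ 159.9213 deg, i.e. 180 deg - 20.0787 deg
```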
So, let us now express the basic design equations in terms of the assigned poles and the
components of the two peaking networks.
For the real pole s_a we have the following familiar proportionality:

    |s_a| = |σ_a| = D_a = 4.9718 ∝ 1/(R_a C_a)                       (5.1.4)

At the output T-coil section, according to Part 2, Fig. 2.5.3, we have:

    D_b = |σ_b|/cos²θ_b = 4.7583/0.8821 = 5.3941 ∝ 4/(R_b C_b)       (5.1.5)
For the L-section of the L+T output network, because the T-coil input
impedance is equal to the loading resistor, we have:

    D_c = |σ_c|/cos²θ_c = 4.0701/0.5725 = 7.1094 ∝ 2/(R_b C_c)       (5.1.6)

And finally, for the inter-stage T-coil network:

    D_d = |σ_d|/cos²θ_d = 2.6857/0.1970 = 13.6333 ∝ 4/(R_a C_d)      (5.1.7)
From these relations we can calculate the required values of the remaining
capacitances, C_c and C_d. If we divide Eq. 5.1.5 by Eq. 5.1.6, we obtain the ratio:

    D_b/D_c = [4/(R_b C_b)] / [2/(R_b C_c)] = 2 C_c / C_b            (5.1.8)

It follows that the capacitance C_c should be:

    C_c = (C_b/2) · (D_b/D_c) = (11/2) · (5.3941/7.1094) = 4.1730 pF (5.1.9)

Likewise, if we divide Eq. 5.1.4 by Eq. 5.1.7, we obtain:

    D_a/D_d = [1/(R_a C_a)] / [4/(R_a C_d)] = C_d/(4 C_a)            (5.1.10)

Thus C_d will be:

    C_d = 4 C_a · (D_a/D_d) = 4 · 4 · (4.9718/13.6333) = 5.8349 pF   (5.1.11)
We are now ready to calculate the inductances L_b, L_c and L_d. For the two
T-coils we can use Eq. 2.4.19:
For L_c we use Eq. 2.2.26 to obtain the proportionality factor of the RC constant:

    k_b = (3 − tan²θ_b)/(5 + tan²θ_b) = (3 − 0.1336)/(5 + 0.1336) = 0.5584    (5.1.17)

and likewise:

    k_d = (3 − tan²θ_d)/(5 + tan²θ_d) = (3 − 4.0738)/(5 + 4.0738) = −0.1183   (5.1.18)
Note that k_d is negative. This means that, instead of the usual negative
mutual inductance, we need a positive inductance at the T-coil tap. This can be
achieved by simply mounting the two halves of L_d perpendicular to each other, in
order to have zero magnetic coupling, and then introducing an additional coil, L_e (again
perpendicular to both halves of L_d), with the value of the required positive mutual
inductance, as can be seen in Fig. 5.1.3. Another possibility would be to wind the two
halves of L_d in opposite directions, but then the bridge capacitance C_bd might be
difficult to realize correctly.
[Schematic: the inter-stage T-coil L_d with bridge C_bd and load R_a; left, the coupled version (k_d < 0); right, the equivalent with two perpendicular halves L_d/2 (k_d = 0) plus a separate coil L_e providing the positive mutual inductance; poles s_a, s_1d, s_2d]
Fig. 5.1.3: With the assigned poles and the resulting particular component values, the 3-pole
stage magnetic coupling k_d needs to be negative, which forces us to use non-coupled coils
and add a positive mutual inductance L_e. Even with a negative k_d the T-coil reflects its
resistive load to the network input, greatly simplifying the calculation of component values.
    L = L_1 + L_2 + 2 L_M
    L_1 = L_2 = L / [2(1 + k)]                                       (5.1.19)
    L_M = k √(L_1 L_2)

Thus, if k = 0 we have:

    L_1d = L_2d = L_d/2 = 0.4206/2 = 0.2103 µH                       (5.1.20)

and:

    L_e = |k_d| √((L_d/2)·(L_d/2)) = |k_d|·(L_d/2) = 0.1183 · 0.2103 ≈ 0.025 µH   (5.1.21)
If we were to account for the Q_3 base resistance (discussed in Part 3, Sec. 3.6),
we would get k_d even more negative and also L_1d ≠ L_2d.
The coupling factor k_b, although positive, also poses a problem: since it is
greater than 0.5 it might be difficult to realize. As can be noted from the above
equations, the value of k depends only on the pole's angle θ. In fact, the 2nd-order
Bessel system has pole angles of ±150°, resulting in k = 0.5, representing the
limiting case of realizability with conventionally wound coils. Special shapes, coil
overlapping, or other exotic techniques may solve the coupling problem but, more
often than not, they will also impair the bridge capacitance. The other limiting case,
k = 0, is reached when the ratio ℑ{s}/ℜ{s} = √3, a situation occurring when
the pole's angle θ = 120°.
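The dependence of k on the pole angle alone is easy to verify numerically. A small sketch (the helper name `coupling` is ours; the angle argument is the offset from 180°), checking the two example values and both limiting cases mentioned above:

```python
import math

def coupling(theta_deg):
    # k = (3 - tan^2 theta) / (5 + tan^2 theta), theta measured from the
    # negative real axis (i.e. the pole's offset from 180 degrees)
    t2 = math.tan(math.radians(theta_deg)) ** 2
    return (3.0 - t2) / (5.0 + t2)

k_b = coupling(20.0787)   # ~ +0.5584 (Eq. 5.1.17)
k_d = coupling(63.6439)   # ~ -0.1183 (Eq. 5.1.18)
k_30 = coupling(30.0)     # = 0.5, pole angle +-150 deg (2nd-order Bessel case)
k_60 = coupling(60.0)     # = 0.0, Im/Re = sqrt(3), pole angle 120 deg
```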
In accordance with the previous equations we also calculate the value of the two
halves of L_b:

    L_1b = L_2b = L_b / [2(1 + k_b)] = 1.4256 / [2(1 + 0.5584)] = 0.4574 µH    (5.1.22)
[Schematic: the output L+T-coil network: series-peaking coil L_c (poles s_1c, s_2c) feeding the T-coil L_b with coupling k_b and bridge C_bb (poles s_1b, s_2b), loaded by R_b, with C_c at the L-section node and C_b at the output]
Fig. 5.1.4: The 4-pole output L+T-coil stage and its pole assignment.
The last components to be calculated are the bridge capacitances, C_bb and C_bd.
The relation between the T-coil loading capacitance and the bridge capacitance has
already been given in Part 2, Eq. 2.4.31, from which we obtain the following
expressions for C_bb and C_bd:
" "
σaA = 1/(Ra Ca) = 1/(268.5 · 4·10⁻¹²) = 931.1·10⁶ rad/s                    (5.1.25)

σbA = 4 cos²θb/(Rb Cb) = 4·0.8821/(360 · 11·10⁻¹²) = 891.0·10⁶ rad/s

ωbA = ±4 cosθb sinθb/(Rb Cb) = ±4·0.9392·0.3433/(360 · 11·10⁻¹²) = ±325.7·10⁶ rad/s

σcA = 2 cos²θc/(Rb Cc) = 2·0.5725/(360 · 4.1730·10⁻¹²) = 762.2·10⁶ rad/s

ωcA = ±2 cosθc sinθc/(Rb Cc) = ±2·0.7566·0.6538/(360 · 4.1730·10⁻¹²) = ±658.5·10⁶ rad/s

σdA = 4 cos²θd/(Ra Cd) = 4·0.1917/(268.5 · 5.8349·10⁻¹²) = 489.5·10⁶ rad/s

ωdA = ±4 cosθd sinθd/(Ra Cd) = ±4·0.4439·0.8961/(268.5 · 5.8349·10⁻¹²) = ±1015.6·10⁶ rad/s
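The denormalized pole components above all follow the same pattern, σ = n·cos²θ/(RC) and ω = n·cosθ·sinθ/(RC), with n = 4 for a T-coil section and n = 2 for the series peaking section. A minimal sketch (the function name is ours) reproducing the b-stage values of Eq. 5.1.25:

```python
import math

def stage_pole(R, C, theta_rad, n):
    """Magnitudes of the real and imaginary parts of a denormalized
    complex pole pair: sigma = n*cos(theta)^2/(R*C),
    omega = n*cos(theta)*sin(theta)/(R*C)."""
    sigma = n * math.cos(theta_rad) ** 2 / (R * C)
    omega = n * math.cos(theta_rad) * math.sin(theta_rad) / (R * C)
    return sigma, omega

# b-stage of the text: Rb = 360 ohm, Cb = 11 pF, cos(theta_b) = 0.9392
theta_b = math.acos(0.9392)
sigma_b, omega_b = stage_pole(360.0, 11e-12, theta_b, 4)
print(sigma_b / 1e6, omega_b / 1e6)   # ~891.0 and ~325.7 (Mrad/s)
```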
If we divide the real amplifier pole by the real normalized pole, we get:
and the step response is the inverse Laplace transform of the product of FA(s) with
the unit step operator 1/s:

g(t) = ℒ⁻¹{ (1/s) FA(s) } = Σ res[ (1/s) FA(s) e^(st) ]                    (5.1.34)
relatively simple operation and we leave it as an exercise to the reader. Instead we are
going to use the computer routines, the development of which can be found in Part 6.
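In the same spirit as the Part 6 routines, Eq. 5.1.34 can be evaluated numerically for any set of distinct poles; the routine below is our own minimal stand-in (unity DC gain, simple poles only), not the book's code:

```python
import numpy as np

def step_response(poles, t):
    """g(t) = 1 + sum of residues of F(s)*e^(s*t)/s at each pole, for
    F(s) = prod(-p_k)/prod(s - p_k) (unity DC gain, simple poles)."""
    poles = np.asarray(poles, dtype=complex)
    t = np.asarray(t, dtype=float)
    g = np.ones(len(t), dtype=complex)        # residue of the 1/s operator
    for k, pk in enumerate(poles):
        others = np.delete(poles, k)
        res = np.prod(-poles) / (pk * np.prod(pk - others))
        g = g + res * np.exp(pk * t)
    return g.real

# Sanity check with a single real pole at -1/tau:
t = np.linspace(0, 25e-9, 6)
print(step_response([-1e9], t))   # approaches 1 - exp(-t*1e9)
```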
In Fig. 5.1.5 we have made a polar plot of the poles for the inductively
compensated 7-pole system and the non-compensated 2-pole system. As we have
learned in Part 1 and Part 2, the farther a pole is from the origin, the smaller is its
influence on the system response. It is therefore obvious that the 2-pole system's
response will be dominated by the pole closer to the origin, which is the pole of the
output stage, s2N. The bandwidth of the 7-pole system is, obviously, much larger.
Fig. 5.1.5: The polar plot of the 7-pole compensated system (poles
with index 'A') and the 2-pole non-compensated system (index 'N').
The radial scale is ×10⁹ rad/s. The angle is in degrees.
Fig. 5.1.6: The gain normalized magnitude vs. frequency of the 7-pole
compensated system |FA(f)| and the 2-pole non-compensated system |FN(f)|.
The bandwidth of FN is about 25 MHz and the bandwidth of FA is about 88 MHz,
more than 3.5 times larger.
Fig. 5.1.7: The gain normalized step responses of the 7-pole compensated system
gA(t) and the 2-pole non-compensated system gN(t). The rise time is 14 ns for
gN(t), but only 3.8 ns for gA(t). The overshoot of gA(t) is only 0.48 %.
Fig. 5.1.8: The normalized step responses of the four possible combinations of pole
assignments. There are two pairs of responses, here spaced vertically by a small
offset to allow easier identification. One of the two faster responses (labeled ‘abcd’)
is the one for which the detailed analysis has been given in the text.
If the pole pairs sc and sd are mutually exchanged, the result is the same as in our
original analysis. But exchanging sb with either sc or sd gives a sub-optimal result.
A closer look at Table 5.1.1 reveals that both of the two slower responses have
Ra = 354 Ω instead of 268 Ω. The higher value of Ra actually means a higher gain, as
can be seen in Fig. 5.1.9: the original system was set for a gain of A0 ≈ 10, in
contrast with the higher value, A0 ≈ 13. The higher gain results from a different
'tuning' of the 3-pole T-coil stage, in accordance with the different pole assignment.
Fig. 5.1.9: The slower responses of Fig. 5.1.8, when plotted with the actual gain,
turn out to be those with a higher value of Ra and therefore a higher gain.
Since our primary design goal is to maximize the bandwidth at a given gain,
let us recalculate the slower system for a lower value of Rb. With Rb = 316 Ω (from
the E96 series of standard values, 0.5 % tolerance) the gain is restored. Fig. 5.1.10
shows the recalculated responses, labeled 'acbd' and 'adbc', compared to the
responses obtained with the 'abcd' and 'abdc' pole assignments.
Fig. 5.1.10: If the high gain responses are recalculated by reducing Rb from the
original 360 Ω to 316 Ω, the gain is nearly equal in all four cases. However, those
pole assignments which put the poles with the higher imaginary part at the output
stage still result in a slightly slower system.
The difference in rise time between the two pairs is much smaller now;
however, the recalculated pair is still slightly slower. This shows that our initial
assumptions about how to achieve maximum bandwidth (within a given configuration)
were not just lucky guesses.
In Table 5.1.1 we have collected all the design parameters for four of the
six possible pole assignments. The systems in the last two columns have the same
pole assignments as the middle two, but have been recalculated from a lower Rb
value, in order to obtain a total voltage gain nearly equal to that of the first system.
From a practical point of view the first and the last column are the most interesting:
the system represented by the first column is the fastest (as is the second one, but the
latter is difficult to realize, mainly owing to the low Cc value), whilst the last one is
only slightly slower but much easier to realize, mainly owing to a lower magnetic
coupling kb and the non-problematic values of Cc and Cd.
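The count of possible assignments can be checked directly: with three complex pole pairs (b, c, d) to distribute among the two T-coil stages and the L+C stage (the real pole a always staying with the real stage), there are 3! orderings, of which the table lists four. A sketch (the labeling convention is ours):

```python
from itertools import permutations

# Three complex pole pairs 'b', 'c', 'd' assigned to three 2nd-order
# stages; the real pole 'a' always goes to the remaining real stage.
assignments = ["a" + "".join(p) for p in permutations("bcd")]
print(len(assignments), assignments)   # 6 orderings in total
```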
Table 5.1.1

Rb [Ω]        360     360     360     360     316     316
pole order    abcd    abdc    acbd    adbc    acbd    adbc
A0            9.667   9.667   12.74   12.74   9.817   9.817
Ra [Ω]        268.5   286.5   353.9   353.9   310.7   310.7
Cc [pF]       4.173   2.177   2.870   7.249   2.870   7.249
Cd [pF]       5.838   11.19   14.75   5.838   14.75   5.838
Cbb [pF]      0.779   0.779   1.201   1.201   1.201   1.201
Cbd [pF]      1.851   1.222   1.045   1.851   1.045   1.851
Lb [µH]       1.426   1.426   1.426   1.426   1.098   1.098
Lc [µH]       0.153   0.080   0.162   0.410   0.125   0.316
Ld [µH]       0.421   0.807   1.847   0.731   1.423   0.563
kb            0.558   0.558   0.392   0.392   0.392   0.392
kd            0.118   0.392   0.558   0.118   0.558   0.118
ηb            3.57    3.57    2.35    2.35    2.84    2.84
ηr            3.55    3.55    2.33    2.33    2.81    2.81
Table 5.1.1: Circuit components for 4 of the 6 possible pole assignments. The last two
columns represent the same pole assignments as the middle two, but have been recalculated
for Rb = 316 Ω and nearly equal gain. The first column is the example calculated in the text
and its response is one of the two fastest. The other fast system (second column) is probably
non-realizable (in discrete form), because Cc ≈ 2 pF. The last column (adbc) is, on the other
hand, only slightly slower, but probably much easier to realize (T-coil coupling and the
capacitance values). The bandwidth and rise time improvement factors ηb and ηr were
calculated by taking the non-compensated amplifier responses as the reference.
The main problem encountered in the realization of our original 'abcd' system
is the relatively high magnetic coupling factor of the output T-coil, kb. A possible way
of improving this could be to apply a certain amount of emitter peaking to either
the Q1 or Q3 emitter circuit. Then we would have a 9-pole system and we would have
to recalculate everything. However, the use of emitter peaking results in a negative
input impedance which has to be compensated (see Part 3, Sec. 3.5), and the
compensating network adds more stray capacitance.
A 9-pole system might be more easily implemented if, instead of the 3-pole
section, we were to use another L+T-coil 4-pole network. The real pole could then be
provided by the signal source resistance and the Q1 input capacitance, which we have
chosen to neglect so far. With 9 poles, both T-coils can be made to accommodate the
two pole pairs with moderate imaginary part values (because the T-coil coupling
factor depends only on the pole angle θ), so that the system bandwidth could be more
easily maximized. A problem could arise with the low value of some capacitances,
which might become difficult to achieve. But, as is evident from Table 5.1.1, there are
many possible variations (their number increases as the factorial of the number of
poles), so a clever compromise can always be made. Of course, with a known signal
source, additional inductive peaking could be applied at the input, resulting in a
total of 11 or perhaps even 13 poles, but then the component tolerances and the
adjustment precision would set the limits of realizability.
Finally, we would like to verify the initial claim that the input real pole s1,
owed to the signal source resistance, the base spread resistance, and the total input
capacitance, can be neglected if it is larger than the system real pole sa. Since the
input pole is separated from the rest of the system by the first cascode stage, it can be
accounted for by simply multiplying the system transfer function by it. In the
frequency response its influence is barely noticeable. In the step response, Fig. 5.1.11,
it affects mostly the envelope delay and the overshoot, while the rise time (in
accordance with the frequency response) remains nearly the same.
(Plotted in Fig. 5.1.11: step responses for s1 = m·sa, with m = 10, 2, and 1.1,
where s1 = −1/[(Rs + rb1) Cin] and sa = −1/(Ra Ca).)
Fig. 5.1.11: If the real input pole s1 is at least twice as large as the system's real pole
sa, its influence on the step response can be seen merely as an increased envelope
delay and a reduced overshoot, while the rise time remains nearly identical.
Fig. 5.1.12: If the CRT deflection plates are made in a number of sections (usually between 4
and 8), connected by a series of T-coil peaking circuits, the amplifier is effectively loaded
by a much smaller capacitance, allowing the system cutoff frequency to be several times
higher. The T-coils also provide the time delay necessary to keep the deflecting voltage (as seen
by the electrons in the writing beam) almost constant throughout the electron's travel time across
the deflecting field. For simplicity, only the vertical deflection system is shown, but a similar
circuit could be used for the horizontal deflection, too (such an example can be found in the
1 GHz Tektronix 7104 model; see Appendix 5.1 for further details). Note that, owing to the
increasing distance between the plates, their length should also vary accordingly, in order to
compensate for the reduced capacitance. Fortunately, the capacitance is also a function of the
plate's width, not just its length and distance, so a well balanced compromise can always be found.
6) the upper cut off frequency must be at least twice the system's bandwidth;
7) the upper cut off frequency should be independent of any of the above settings;
8) the gain flatness must be kept within 0.5 % from DC to 1/5 of the bandwidth;
9) the protection diodes must survive repeated 1–2 A surge currents with < 1 ns
rise and 50 µs decay; their leakage must be < 100 pA and capacitance < 1 pF.
This is an impressive list indeed, especially considering that for a 500 MHz
system bandwidth the above requirements should be fulfilled over a 1 GHz bandwidth.
A typical input stage block diagram is shown in Fig. 5.2.1. The attenuator and
the unity gain buffer stage will be analyzed in the following sections.
Fig. 5.2.1: A typical conventional oscilloscope input section. All the switches must be high
voltage types, controlled either mechanically or by electromagnetic relays (but other
solutions are also possible, as in [Ref. 5.2]). The spark gap protects against electrostatic
discharge. The Rt 50 Ω resistor is the optional transmission line termination. The 1 MΩ
resistor in the DC–GND–AC selector charges the AC coupling capacitor in the GND
position, reducing the overdrive shock through CAC in the presence of a large DC signal
component. The attenuator is analyzed in detail in Sec. 5.2.1–3. The overdrive protection
limits the input current in case of an accidental connection to the 240 V AC mains with the
attenuator set to the highest sensitivity. The unity gain buffer/impedance transformer is a
> 100 MΩ Rin, 50 Ω Ro JFET or MOSFET source follower, analyzed in Sec. 5.2.4 and 5.2.5.
A simple resistive attenuator, like the one in Fig. 5.2.2a, is too sensitive to any
capacitive loading by the following amplifier stage. For oscilloscopes the standard
input impedance is Ra = R1 + R2 = 1 MΩ, so for a 10:1 attenuation the resistance
values must be R1 = 900 kΩ and R2 = 100 kΩ. With such values the output
impedance, which equals the parallel connection of both resistances, would be about
90 kΩ. Assuming an amplifier input capacitance of only 1 pF, the resulting system
bandwidth would be only 1.77 MHz [fh = (R1 + R2)/(2π Ci R1 R2)].
Therefore high frequency compensation, as shown in Fig. 5.2.2b, is necessary
if we want to obtain higher bandwidth. The frequency compensation, however, lowers
the input impedance at high frequencies.
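The bracketed bandwidth estimate is easy to reproduce (the function name is ours):

```python
import math

def divider_bandwidth(R1, R2, Ci):
    """fh = (R1 + R2)/(2*pi*Ci*R1*R2): the cut-off set by the amplifier
    input capacitance Ci loading the divider output resistance R1||R2."""
    return (R1 + R2) / (2 * math.pi * Ci * R1 * R2)

fh = divider_bandwidth(900e3, 100e3, 1e-12)
print(round(fh / 1e6, 2))   # -> 1.77 (MHz)
```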
Fig. 5.2.2: The 10:1 attenuator; a) resistive: with R = 100 kΩ, the following stage input
capacitance of just 1 pF would limit the bandwidth to only 1.77 MHz; b) compensated: the
capacitive divider takes over at high frequencies, but the input capacitance of the following
stage of 1 pF would spoil the division by 1 %; c) adjustable: in practice, the capacitive
divider is trimmed for a perfect step response.
R1 + R2 = Ra                                                               (5.2.1)
In the latter expression we have taken into account Eq. 5.2.7. This is the same as if we
had a single parallel Ra Ca network:

Za = Ra · 1/(1 + jω Ca Ra)                                                 (5.2.9)

where:

Ra = R1 + R2   and   Ca = 1/(1/C1 + 1/C2)                                  (5.2.10)

By substituting C2 = (A − 1) C1, the input capacitance Ca relates to C1 as:

Ca = 1/[ 1/C1 + 1/((A − 1) C1) ] = C1 (A − 1)/A                            (5.2.11)
Obviously, the frequency dependence vanishes if the condition of Eq. 5.2.7
is met. However, the transfer function will be independent of frequency only if the
signal's source impedance is zero (we are going to see the effects of the signal source
impedance a little later).
The transfer function of an unadjusted attenuator (R1C1 ≠ R2C2) has a simple
pole and a simple zero, as can be deduced from Eq. 5.2.12. If we rewrite the
impedances as:

Z1 = 1/(1/R1 + sC1) = R1 · [1/(R1C1)] / [s + 1/(R1C1)] = R1 · s1/(s + s1)  (5.2.13)

and

Z2 = 1/(1/R2 + sC2) = R2 · [1/(R2C2)] / [s + 1/(R2C2)] = R2 · s2/(s + s2)  (5.2.14)

where s1 and s2 represent the pole frequencies of each impedance arm, explicitly:

s1 = 1/(R1C1)   and   s2 = 1/(R2C2)                                        (5.2.15)

The transfer function is then:

vout/vin = Z2/(Z1 + Z2) = [R2 · s2/(s + s2)] / [R1 · s1/(s + s1) + R2 · s2/(s + s2)]      (5.2.16)
By solving the double divisions, the transfer function can be rewritten as:

vout/vin = s2 R2 (s + s1) / [ s1 R1 (s + s2) + s2 R2 (s + s1) ]            (5.2.17)

which, normalized to the high frequency gain AC = C1/(C1 + C2), takes the pole–zero form:

F(s) = vout/vin = AC · (s + sz)/(s + sp)                                   (5.2.25)

with the zero at −sz = −1/(R1C1) and the pole at −sp = −(R1 + R2)/[R1 R2 (C1 + C2)].
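For the component values used later in Fig. 5.2.4, the zero and pole of Eq. 5.2.25 can be computed explicitly; when the time constants match, they cancel. A sketch under our own naming:

```python
import math

def attenuator_pz(R1, C1, R2, C2):
    """Divider ratios and the zero/pole frequencies of the compensated
    attenuator: sz = 1/(R1*C1), sp = (R1+R2)/(R1*R2*(C1+C2))."""
    A_C = C1 / (C1 + C2)                  # capacitive (HF) divider ratio
    A_R = R2 / (R1 + R2)                  # resistive (DC) divider ratio
    sz = 1.0 / (R1 * C1)
    sp = (R1 + R2) / (R1 * R2 * (C1 + C2))
    return A_C, A_R, sz, sp

A_C, A_R, sz, sp = attenuator_pz(900e3, 10e-12, 100e3, 90e-12)
print(A_C, A_R, math.isclose(sz, sp))   # matched: both ratios 0.1, sz == sp
```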
M(ω) = |F(jω)| = √[F(jω) · F(−jω)] = AC √[ (jω + σz)(−jω + σz) / ((jω + σp)(−jω + σp)) ]      (5.2.26)

M(ω) = AC √[ (ω² + σz²)/(ω² + σp²) ]                                       (5.2.27)

The phase angle is the arctangent of the imaginary to real component ratio of
the frequency response F(jω):

φ(ω) = arctan [ ℑ{F(jω)} / ℜ{F(jω)} ] = arctan [ ℑ{(jω + σz)/(jω + σp)} / ℜ{(jω + σz)/(jω + σp)} ]      (5.2.28)

First we must rationalize F(jω) by multiplying both the numerator and the
denominator by the complex conjugate of the denominator, (−jω + σp):

(jω + σz)/(jω + σp) = (jω + σz)(−jω + σp) / [(jω + σp)(−jω + σp)] = [ω² + σz σp + jω(σp − σz)] / (ω² + σp²)
Fig. 5.2.3: The attenuator magnitude, phase, and envelope delay responses for the correctly
compensated case (flat lines), along with the under- and over-compensated cases (C2 is trimmed
by ±10 pF). Note that these same figures apply also to oscilloscope passive probe compensation,
demonstrating the importance of correct compensation when making single channel pulse
measurements and two channel differential measurements.
The step response is obtained from F(s) by the inverse Laplace transform,
using the theory of residues:

ℒ⁻¹{ (1/s) F(s) } = (1/2πj) ∮ AC (s + sz)/[s (s + sp)] e^(st) ds = AC Σ res{ (s + sz)/[s (s + sp)] e^(st) }

We have two residues. One is owed to the unit step operator, 1/s:

res1 = AC lim[s→0] { s · (s + sz)/[s (s + sp)] · e^(st) } = AC lim[s→0] [ (s + sz)/(s + sp) · e^(st) ]

     = AC · sz/sp = AC · [1/(R1C1)] / { (1/AR) · 1/[R1 (C1 + C2)] }

     = AC AR · (C1 + C2)/C1 = AC AR · (1/AC) = AR = R2/(R1 + R2)           (5.2.33)
As expected, the residue for zero frequency (DC) is set by the resistance ratio.
The other residue is due to the system pole, sp:

res2 = AC lim[s→−sp] { (s + sp) · (s + sz)/[s (s + sp)] · e^(st) } = AC lim[s→−sp] [ (s + sz)/s · e^(st) ]

     = AC · (sp − sz)/sp · e^(−sp t)

     = AC · { (1/AR) · 1/[R1 (C1 + C2)] − 1/(R1C1) } / { (1/AR) · 1/[R1 (C1 + C2)] } · e^(−sp t)

     = (AC − AR) e^(−sp t)
The result is a time decaying exponential, with the time constant set by the
system pole sp, and the amplitude set by the difference between the capacitive and
resistive dividers.
The step response is the sum of both residues:
f(t) = R2/(R1 + R2) + [ C1/(C1 + C2) − R2/(R1 + R2) ] · e^( −(R1 + R2) t / [R1 R2 (C1 + C2)] )      (5.2.36)
We have plotted the step response in Fig. 5.2.4. The plots are made for the
matched and two unmatched cases in order to show the influence of trimming the
attenuator by C2 (±10 pF), as in the frequency domain plots.
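Eq. 5.2.36 is easy to evaluate for the three trimming cases of Fig. 5.2.4 (a sketch; names are ours):

```python
import math

def attn_step(t, R1, C1, R2, C2):
    """Eq. 5.2.36: f(t) = A_R + (A_C - A_R)*exp(-sp*t)."""
    A_R = R2 / (R1 + R2)
    A_C = C1 / (C1 + C2)
    sp = (R1 + R2) / (R1 * R2 * (C1 + C2))
    return A_R + (A_C - A_R) * math.exp(-sp * t)

for C2 in (80e-12, 90e-12, 100e-12):
    f0 = attn_step(0.0, 900e3, 10e-12, 100e3, C2)      # initial value A_C
    finf = attn_step(1e-3, 900e3, 10e-12, 100e3, C2)   # settles to A_R = 0.1
    print(round(f0, 4), round(finf, 4))
```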
Fig. 5.2.4: The attenuator's step response for the correctly compensated case, along with
the under- and over-compensated cases (C2 is trimmed by ±10 pF). Note that by changing
C2 the system pole also changes, but the system zero remains the same.
Now we are going to analyze the influence of a non-zero source impedance
on the transfer function. Since we can re-use some of the results, we shall not need to
recalculate everything.
The capacitive divider presents a relatively high output capacitance to the
following amplifier, and the amplifier input capacitance appears in parallel with C2,
changing the division slightly; but that is compensated by trimming.
However, the attenuator input capacitance Ca (Eq. 5.2.10) is smaller than C1
(for an attenuation of A = 10, the input capacitance is Ca = (9/10) C1). The actual values
of C1 and C2 are dictated mainly by the need to provide a standard value for the
various probes (themselves compensated attenuators, too). Historically, values between
10 and 20 pF have been used for Ca. Although small, this load is still significant once
the signal source internal impedance is considered.
High frequency signal sources are designed to have a standardized impedance
of Rg = 50 Ω (75 Ω for video systems). The cable connecting any two instruments
must then have a characteristic impedance of Z0 = 50 Ω, and it must always be
terminated at its end by an equal impedance in order to prevent signal reflections. As
shown in Fig. 5.2.5a and 5.2.5b, the internal source resistance Rg and the termination
resistance Rt form a ÷2 attenuator (neglecting the 1 MΩ of the 10:1 attenuator):

vo/vg = Rt/(Rg + Rt) = 50 Ω/(50 Ω + 50 Ω) = 1/2                            (5.2.40)

Therefore the effective signal source impedance seen by the attenuator is:

Rge = Rg Rt/(Rg + Rt) = 2500/100 = 25 Ω                                    (5.2.41)
Fig. 5.2.5: a) When working with 50 Ω impedance, the terminating resistance must match
the generator internal resistance, forming a ÷2 attenuator with an effective output
impedance of 25 Ω; b) with a 9 pF attenuator input capacitance, a HF cut off at 707 MHz
results; c) the cut off for the ÷10 attenuator can be compensated by a 25/9 Ω resistor
between the lower end of the attenuator and ground.
attenuator transfer function for a zero signal source impedance (Eq. 5.2.25), the
transfer function for the impedance Rge = 25 Ω is:

F1(s) = (1/2) · { [1/(Rge Ca)] / [s + 1/(Rge Ca)] } · F0(s)                (5.2.42)
Fig. 5.2.6: The direct and the two attenuation paths are switched at both input and output,
in order to reduce the input capacitance. For low cross-talk, the input and output of each
unused section should be grounded (not shown here). The variable capacitors in parallel
with the two attenuation sections are adjusted so that the input capacitance is equal for all
settings. Of course, other values are possible, e.g., ÷1, ÷20, ÷400 (as in the Tek 7A11), with
the highest attenuation achieved by cascading two ÷20 sections. The advantage is that the
parasitic series inductance of the largest capacitance in the highest attenuation section is
avoided; a disadvantage is that it is very difficult to trim correctly.
Fig. 5.2.7: The attenuator with no direct path, in which the 25 Ω effective source impedance
compensation can be used for both settings. A low ground return path impedance is necessary.
On the negative side, by using ÷2 and ÷20 attenuation the amplifier must
provide another factor of two in gain, making the system optimization more difficult.
Fortunately, for modern amplifiers driving an AD converter the gain requirement is
low, since the converter requires only a volt or two for a full range display; in contrast,
a conventional 'scope CRT requires tens of volts on the vertical deflecting plates.
Thus a factor of at least 10 in gain reduction (and a similar bandwidth increase!)
weighs in favor of modern circuits.
Whilst the gain requirements are relaxed, modern sensitive circuits require a
higher attenuation to cover the desired signal range. But obtaining a 200:1 attenuation
can be difficult because of capacitive feedthrough: even 0.1 pF from the input to
the buffer output, together with a non-zero output impedance, can be enough to spoil
the response. If we can tolerate a feedthrough error of one least significant bit of an 8
bit analog to digital converter, the 200:1 attenuator needs an effective isolation of
20 log10(200 × 2⁸) = 94 dB, which is sometimes hard to achieve even at audio
frequencies, let alone at GHz. A cascade of two sections could be the solution.
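The isolation figure follows from referring one LSB of an 8-bit converter through the 200:1 attenuation:

```python
import math

# One LSB of an 8-bit ADC is 1/2^8 of full scale; a 200:1 attenuator must
# keep the feedthrough below that, hence the required isolation in dB:
isolation_dB = 20 * math.log10(200 * 2 ** 8)
print(round(isolation_dB))   # -> 94 (dB)
```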
A designer's life would be easy with only resistances and capacitances to deal
with. But every circuit also has inductance, whether we intentionally put it in or
desperately try to avoid it. As we learned in Part 2, in wideband amplifiers,
instead of trying to avoid the unavoidable, we rather put the inductance to use by
means of fine tuning and adequate damping.
In Fig. 5.2.8 we have indicated the two inductances associated with the
attenuator circuit. Because of the high voltages involved, the attenuator circuit can not
use arbitrarily small components, packed arbitrarily close together. As a consequence,
the circuit will have loop dimensions which can not be neglected and, since the
inductance value is proportional to the loop area, the inductance values can be
relatively large (for wideband amplifiers).
As for stray capacitance, the value of stray inductance can not be readily
predicted, at least not to the precision required. Each component in Fig. 5.2.8 will
have its own stray inductances, one associated with the internal component structure
and the other associated with the component leads, the soldering pads, and PCB
traces. These will be added to the loop inductance.
Fig. 5.2.8: Inductances owed to circuit loops can be modeled as inductors in series with the
signal path. Note that in addition to the two self inductances there is also a mutual inductance
between the two. The actual values depend on the loop’s size, which in turn depends on the size
of the components and the circuit’s layout. Smaller loops have less inductance. Mutual
inductance can be reduced by shielding, although this can increase the stray capacitances.
The current I and the magnetic field strength H are proportional: H = I/(2r)
for a single loop, where r is the loop radius. In a linear non-magnetic environment
(with the relative permeability μr = 1), I and B are also proportional, because
B = μH. Furthermore, μ0 is the free space magnetic permeability, also known as the
'induction constant', the value of which has been fixed by the SI definition of the
ampere: μ0 = 4π·10⁻⁷ [V s A⁻¹ m⁻¹]. The product B·S is the magnetic flux through
the loop, and the flux per unit current is the loop inductance. Because for a
circular loop S = πr², our loop inductance equation reduces to:

L = μ0 S/(2r) = μ0 π r²/(2r) = μ0 π r/2 = k r                              (5.2.44)

where k = 2π²·10⁻⁷ H/m. The inductance of a 1 m² loop (r = 0.5642 m) is then
≈ 1.114·10⁻⁶ H (the unit of inductance is the 'henry', after Joseph Henry, 1791–1878;
[H] = [V s A⁻¹]).
As a more practical figure, a loop of 10 cm² (a circle of ≈ 0.0178 m radius) has an
inductance of ≈ 35 nH. This does not look like much, but remember that in our circuit
the loop inductance L1 is effectively in series with the signal source and is loaded by
the attenuator's input capacitance, forming a 2nd-order low pass filter with a cut off
frequency fh = 1/(2π√(L1 Ca)) ≈ 268 MHz, assuming L1 = 35 nH and Ca = 10 pF.
With such values the step response rings for a long time, since the equivalent signal
source resistance (Rge = 25 Ω) is not high enough to damp the resonance (such
damping would be adequate only for L1 ≤ 5 nH).
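The loop estimates above (Eq. 5.2.44 and the resulting L1–Ca resonance) can be sketched as:

```python
import math

MU0 = 4 * math.pi * 1e-7          # free space permeability [Vs/(Am)]

def loop_inductance(area_m2):
    """Eq. 5.2.44 for a single circular loop: L = (mu0*pi/2)*r = k*r."""
    r = math.sqrt(area_m2 / math.pi)
    return 0.5 * MU0 * math.pi * r

L1 = loop_inductance(10e-4)                        # a 10 cm^2 loop
fh = 1 / (2 * math.pi * math.sqrt(L1 * 10e-12))    # loaded by Ca = 10 pF
print(round(L1 * 1e9, 1), round(fh / 1e6))         # ~35.2 nH, ~268 MHz
```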
The above inductance estimation is based on a circular loop model, whilst our
loops will usually be of a square form (increasing L), with additional stray
inductances owing to the internal geometry of the components (capacitors) and their
leads, or just the PCB traces if surface mounted components are used.
The loop inductances can, of course, be measured. If we replace C1, C2 and R3
(Fig. 5.2.8) by a wire of the same total length, the input resistance and L1 form a high
pass filter, whose cut off frequency can be measured. Next, by removing the wire and
also R3 and replacing Rp, Rd and Co by another wire, we can measure L1 + L2.
Obtaining L2 is then a matter of simple subtraction. Finally, by applying a signal to
the input, shorting R3, and measuring the signal induced in L2, we can calculate the
mutual inductance LM. Note that a thin wire will have a somewhat larger inductance
than a wide PCB trace.
The best way to reduce the loop area (and consequently L) is to use a 3-layer
PCB and make the middle layer a 'ground plane'. In addition, using surface mounted
components reduces the circuit size and also allows us to place them on both sides of
the board. However, this technique also increases the stray capacitances and can
cause reflections if the 'microstrip' trace impedances are not well matched to the
circuit. Therefore a careful PCB design is needed, with wider ground clearance
around sensitive pads and a board material with low εr. The most sensitive node in
this respect is the attenuator output.
Another way of reducing the effect of stray inductance is to employ the same
technique as we did for the low value resistors. This means that the inductance in the
ground path (the signal return path) should not be too small, as it would be in the
ground plane case; rather, the return path inductance should be kept in the same ratio
to L1 as the attenuation ratio. Precision in this respect is difficult, but not
impossible, to achieve. Our inductance expression, Eq. 5.2.44, does not show it, but
inductance is also inversely proportional to trace width. Powerful finite element
numerical simulation routines will be required for the job.
However, the same trick can not be used for L2 (there is no attenuation in this
loop!). Fortunately, as will become clear from the analysis below, the input inductance
L1 is more critical than L2, since the latter is loaded by a much smaller capacitance
(Co) and can be suitably damped with a larger resistance (Rd, which is already in the
circuit because it is required for the FET gate protection).
We shall analyze the attenuator loops by assuming perfectly matched time
constants, R1C1 = R2C2, matched also to the other attenuator paths, so that the
variable capacitor in parallel is not needed. Also, we shall replace the two 50 Ω
resistors by a single 25 Ω one, representing the effective signal source resistance Rs
in series with the input, with vi = vg/2. The loop inductances are represented by
discrete components, L1 and L2, in the forward signal paths, as drawn in Fig. 5.2.8.
In the second loop, the first thing to note is that C2 is many times larger than
Co (10–500×, depending on the attenuation setting), and the same is true for Cp, which
means that their reactance will be comparably low and can thus be neglected.
Likewise, the resistances R2 and Rp in parallel with these capacitances are large in
comparison with their reactances. What remains is the loop inductance L2 in series
with Rd + R3, driving the amplifier input capacitance Co. If the attenuated input
voltage is vi/A, the output voltage will be:

vo = (vi/A) · [1/(sCo)] / [ sL2 + Rd + R3 + 1/(sCo) ]                      (5.2.45)

So we have a 2nd-order transfer function:

F2(s) = vo/vi = (1/A) · [1/(L2 Co)] / [ s² + s (Rd + R3)/L2 + 1/(L2 Co) ]  (5.2.46)
Since R3 is fixed and of quite low value, Rd is used to provide the desired damping.
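From the denominator of Eq. 5.2.46 the damping follows the familiar 2nd-order pattern, Q = √(L2/Co)/(Rd + R3), so Rd for a target pole quality factor Q is Rd = √(L2/Co)/Q − R3. A rough sketch (the 2nd-order Bessel Q = 1/√3 is used only as a first estimate; the text's actual Rd targets a 3rd-order Bessel pattern together with the follower pole):

```python
import math

def damping_resistor(L2, Co, Q, R3=0.0):
    """From Eq. 5.2.46, s^2 + s*(Rd + R3)/L2 + 1/(L2*Co):
    Q = sqrt(L2/Co)/(Rd + R3), hence Rd = sqrt(L2/Co)/Q - R3."""
    return math.sqrt(L2 / Co) / Q - R3

# Illustrative: L2 = 50 nH, Co = 3.3 pF, R3 = 2.74 ohm (Fig. 5.2.9 values)
Rd = damping_resistor(50e-9, 3.3e-12, 1 / math.sqrt(3), R3=2.74)
print(round(Rd, 1))   # first-cut estimate, ~210 ohm
```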
The input loop analysis is similar. Here we have the equivalent source
resistance Rs + R3 in series with L1, driving the equivalent input attenuator
capacitance Ca (Eq. 5.2.9; the attenuator resistance R1 + R2 can be neglected at high
frequencies). At the top of the attenuator we have:

vi = (vg/2) · [ R3 + 1/(sCa) ] / [ sL1 + Rs + R3 + 1/(sCa) ]               (5.2.47)

which results in the following second-order transfer function:

F1(s) = vi/vg = (1/2) · [ s R3/L1 + 1/(L1 Ca) ] / [ s² + s (Rs + R3)/L1 + 1/(L1 Ca) ]      (5.2.48)
#6 A2
P œ !Þ# 6 ”lnŒ !Þ##$&Œ !Þ&• (5.2.50)
A2 6
where the trace length l, width w and thickness h are all in mm, resulting in the inductance in nH (no ground plane in this case!). With surface mounted components, by choosing capacitors with low serial inductance, and by using miniature relay switches in the attenuator, the inductance L_2 can be reduced to less than 10 nH, making the pole (pair) at 1/√(L_2 C_o) high compared to the source follower real pole (set by the damping resistance R_d and the source follower loading capacitance C_L; see the JFET source follower discussion in Part 3, Sec. 3.9). However, by making L_2 somewhat larger, say 30–50 nH, we can achieve a 3rd-order Bessel pole pattern, improving the bandwidth and reducing the rise time. In Fig. 5.2.9 we see the attenuator circuit of the A = 10 section, followed by a JFET source follower.
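Eq. 5.2.50 is easy to put into a small routine for estimating trace inductance; the trace dimensions below are illustrative assumptions, not values from the text:

```python
from math import log

def trace_inductance_nH(l, w, h):
    """Inductance of a straight PCB trace without a ground plane,
    per Eq. 5.2.50; l (length), w (width), h (thickness) in mm,
    result in nH."""
    s = w + h
    return 0.2 * l * (log(2 * l / s) + 0.2235 * s / l + 0.5)

# e.g. an assumed 20 mm long, 1 mm wide trace in 35 um copper:
L = trace_inductance_nH(20.0, 1.0, 0.035)
print(f"L = {L:.1f} nH")
```

This gives roughly 17 nH for the assumed 20 mm trace, which shows why a compact layout is needed to keep L_2 below 10 nH.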
[Fig. 5.2.9 schematic: the A = 10 attenuator (R_1 = 900 kΩ ∥ C_1 = 10 pF, R_2 = 100 kΩ ∥ C_2 = 90 pF, R_3 = 2.74 Ω), loop inductances L_1a = 3.3 nH, L_1b = 0.37 nH and L_2 = 0–50 nH with damping resistor R_d, followed by the 2N5911 JFET follower pair with R_ss = 50 Ω, C_ss = 800 pF, C_L = 3.3 pF, R_L = 12 kΩ.]
Fig. 5.2.9: The attenuator and the source follower JFET_1 (JFET_2 acts as a constant current source bias for JFET_1). The input loop inductance L_1 should be low, but the attenuation can be compensated (by L_1b). The inductance L_2 of the second loop can be 'tuned' and damped by an appropriate value of R_d to provide a Bessel step response, as seen in Fig. 5.2.10.
Note that here we have not drawn the protection components C_p and R_p, but since a 325 V peak (of the 230 V AC mains) at the input results in 32.5 V at the attenuator output, these components are absolutely necessary. Also, C_p should be a high voltage type (500 V), in order to survive the 325 V in the direct path (and still 163 V for a ÷2 attenuator); therefore it will be of larger dimensions, so its internal serial inductance will have to be taken into account.

Note also that for high bandwidth a low value of C_o must be ensured. Since the negative input impedance compensation network (as in Part 3, Sec. 3.9), as well as R_d, D_1, D_2, C_GD, and C_L are present at the v_o node, C_o will tend to be high.

We have analyzed the step response in Fig. 5.2.10 for two values of L_2 (10 and 50 nH; R_d has been chosen for a correct Bessel damping).
Fig. 5.2.10: Step response of the circuit in Fig. 5.2.9. With a low L_1, a correctly damped L_2, and a good JFET, a 350 MHz bandwidth (L_2 rise time ≈ 1 ns) can be easily achieved. The source follower gain is a little less than one. v_o and v_L are drawn for the two L_2 cases.
dielectric between the plates. A pad on the PCB thus has some small stray capacitance towards the ground (large if a ground plane is used). This capacitance changes with frequency proportionally with ε_r. Also, the material is porous and the fibres are long, extending to the edge of the board, allowing moisture in (for water, ε_r = 80), which causes long term changes. The problem is not specific to this material only; it is encountered with all traditional PCB materials (as well as many other insulators).
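The hook mechanism can be reproduced with a few lines of complex arithmetic: with R_1C_1 = R_2C_2 the divider of Fig. 5.2.11 is exactly flat at every frequency, while a small ε_r-induced change of the upper capacitance (the 0.2 pF below is an assumed figure) shifts the ratio:

```python
from math import pi

R1, C1 = 900e3, 10e-12   # upper leg (Fig. 5.2.11 values)
R2, C2 = 100e3, 90e-12   # lower leg; R1*C1 == R2*C2 when compensated

def division_ratio(f, dC1=0.0):
    """|v_o/v_i| of the compensated divider at frequency f [Hz];
    dC1 models a stray (PCB) change of the upper capacitance."""
    s = 2j * pi * f
    Z1 = R1 / (1 + s * R1 * (C1 + dC1))   # upper leg impedance
    Z2 = R2 / (1 + s * R2 * C2)           # lower leg impedance
    return abs(Z2 / (Z1 + Z2))

flat = division_ratio(100e3)                 # exactly 1/10
hooked = division_ratio(100e3, dC1=0.2e-12)  # ~2 % stray change of C1
print(flat, hooked)
```

A 2 % change of the small C_1 moves the division ratio by nearly 2 %, whereas the same absolute change on the large C_2 would be negligible, which is exactly the asymmetry described in the caption of Fig. 5.2.11.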
[Fig. 5.2.11 plot: step response hook for C_2 min and C_2 max over t = 0–180 µs; the inset shows the divider R_1 = 900 kΩ ∥ C_1 = 10 pF over R_2 = 100 kΩ ∥ C_2 = 90 pF, with frequency dependent ε(ω) stray capacitances C_PCB1, C_PCB2 = 1–3 pF across each leg.]
Fig. 5.2.11: The 'hook effect' is most noticeable in the frequency range 10–300 kHz. Because the relative permittivity ε_r of a common PCB material is not exactly constant with frequency, the high impedance attenuator will exhibit a hook in its step response, which can not be trimmed out by the usual adjustment of C_2. The C_PCB stray capacitance can vary by some 10–30 %, depending on the actual topology involved. Since C_1 is small, it is affected by a few percent. The lower attenuator leg is less affected, due to the larger value of C_2.
To solve this problem, a special Teflon® based material is used for the instrument front end, but it is expensive and not readily available. If it can not be obtained, one possible solution could be to implement two large pads on a two-sided PCB, in parallel with C_1 and C_2, with their areas in the same ratio as the attenuation factor required [Ref. 5.67]. Then the effect would be equally present in both legs, canceling out the hook, Fig. 5.2.12. Even some trimming can be done by drilling small holes in the larger pad pair (in contrast to cutting a pad corner, drilling removes the dielectric, thus lowering both A and ε).
Fig. 5.2.12: Canceling the hook effect in the common PCB material is achieved by intentionally adding two capacitances in the form of large PCB pads, with areas in the same ratio as required by the attenuation (since the area is proportional to the square of the linear dimensions, for a 9:1 capacitance ratio a 3:1 dimension ratio is needed). Trimming is possible by drilling small holes in the larger pad.
The main problem with this solution is that the use of external probes will
expose the hook again, although to a lesser extent (owing to the large capacitance of
the probe compensation).
[Fig. 5.2.13 schematic: JFET source follower (R_ss = 50 Ω, C_ss = 800 pF, C_L = 3.3 pF, R_L = 12 kΩ) with the offset trimming voltage V_ofs injected through a 10 nF capacitor and a resistor network (510 Ω, 62 Ω, trimmer R_T = 100 Ω).]
Fig. 5.2.13: Simple offset trimming of a JFET source follower.
Basically, there are three ways of achieving a low DC error, each having its
own advantages and drawbacks. While DC performance is not of primary interest in
this book, it should be implemented so that high frequency performance is preserved.
The first technique is suitable for microprocessor controlled equipment, where
the input can be temporarily switched to ground, the offset measured, and the error
either adjusted by a digital to analog converter or subtracted from the sampled signal
in memory. But this operation should not be repeated too often or take a considerable amount of time; otherwise the equipment would miss valid trigger events or, worse still, introduce errors by loading and unloading the signal source with the instrument's input impedance. This is a rather inelegant solution and it should be taken as a last resort only.
A better way, shown in Fig. 5.2.14, is to use a good differential amplifier to monitor the difference between the Q_1 gate and the output, integrate it and modify the Q_2 current to minimize the offset. Note that this technique works well only while the input is within the linear range of the JFET; when in the non-linear range or when overdriven, the integrator will develop a high error voltage, which will be seen as a long 'tail' after the signal returns within the linear range. Also, owing to the presence of R_1 and R_2, the attenuator lower arm resistors will need to be readjusted.
[Fig. 5.2.14 schematic: source follower Q_1 with current source Q_2 (R_ss = 50 Ω, C_ss = 800 pF, C_L = 3.3 pF, R_L = 12 kΩ); error amplifier A_1 with R_1 = 10 MΩ, R_2 = 1 MΩ, R_3 = 10 MΩ, R_4 = 1 MΩ, C_1 = C_2 = 10 nF, driving the Q_2 source through R_5 = 10 kΩ.]
Fig. 5.2.14: Active DC correction loop. The amplifier A_1 amplifies and integrates the difference between the Q_1 gate and the output, driving through R_5 the source of Q_2 and modifying its current to minimize the offset. The resulting offset is equal to the offset of A_1, multiplied by the loop gain (1 + R_3/R_4). A differential amplifier with a very low offset will usually have its input bias current much larger than the JFET input current; therefore resistors R_2 and R_4 provide a lower impedance to ground. C_2 is the integration capacitor, whilst C_1 provides an equal time constant to the non-inverting input. The feedback divider R_3 and R_4 should be altered to compensate for the system gain being slightly lower than one (this is achieved by adding a suitably low value resistor in series with R_4). For a low error the amplifier A_1 must have a high common mode rejection up to the frequency set by C_2 and R_3∥R_4.
But the most serious problem is owed to the amplifier A_1: in order to minimize the system offset it should have both low offset and low input bias current itself. Although A_1 can be a low bandwidth device, the low input error requirements can easily put us back to where we started from.

The example in Fig. 5.2.14 is relatively simple to implement. However, for a low error we must keep an eye on several key parameters. Ideally we would like to get
rid of the resistor R_2 (and R_4) to avoid the DC path to ground, because it alters the attenuator balance.

Unfortunately, the input common mode range of the error amplifier is limited and, more importantly, amplifiers with a low DC offset are usually made with bipolar transistors at the input, so their input bias current can be in the nA range, much higher than the JFET gate's leakage (< 20 pA). The bias current would then introduce a high DC offset over R_1 (and R_3). Here R_2 and R_4 come to the rescue, by conducting the larger part of the bias current to ground through their lower resistance. On the other hand, the amplifier input offset voltage is then effectively amplified by the DC loop gain, 1 + R_3/R_4. The amplifier is selected so that the total offset error is minimized:

V_ofs = (1 + 4ΔR/R) · [(R_3 + R_4)/R_4 · V_A1ofs + I_A1ofs · R_1R_2/(R_1 + R_2)]    (5.2.52)

where ΔR/R is the nominal resistor tolerance and V_A1ofs and I_A1ofs are the amplifier's voltage and current input offset, respectively.

An industry standard amplifier, the OP-07, has V_ofs = 30 µV and I_ofs = 0.4 nA typical, so by taking the resistor values as in Fig. 5.2.14 (with a 1 % tolerance), we can estimate the typical total system offset to be within ±728 µV, which is slightly larger than we would like. The offset can be reduced by using a chopper stabilized amplifier, such as Intersil's ICL-7650 or the LTC-1052 from Linear Technology, which have a very low voltage offset (< 5 µV) and a low current offset (< 50 pA), but their switching noise must be filtered at the output; also, their input switches are very delicate and must be well protected from over-voltage. Therefore we can not do without R_2 and R_4, and consequently the attenuator must be corrected by increasing the lower resistance appropriately. See [Ref. 5.2] for more examples of such solutions.
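The estimate can be checked directly from Eq. 5.2.52; using the Fig. 5.2.14 resistor values and the typical OP-07 data, the result lands close to the ±728 µV quoted above (the small residue is rounding and the exact accounting of the resistor tolerances):

```python
# Offset budget of the DC correction loop (Eq. 5.2.52), with the
# Fig. 5.2.14 values and typical OP-07 input errors.
R1, R2 = 10e6, 1e6        # gate resistor and its DC return
R3, R4 = 10e6, 1e6        # feedback divider -> loop gain 1 + R3/R4
tol = 0.01                # 1 % resistor tolerance
V_A1_ofs = 30e-6          # OP-07 typical offset voltage [V]
I_A1_ofs = 0.4e-9         # OP-07 typical offset current [A]

loop_gain = (R3 + R4) / R4            # = 11
R_par = R1 * R2 / (R1 + R2)           # ~909 kohm seen by the offset current
V_ofs = (1 + 4 * tol) * (loop_gain * V_A1_ofs + I_A1_ofs * R_par)
print(f"total offset ~ {V_ofs * 1e6:.0f} uV")
```

The current-induced term (I_A1ofs over R_1∥R_2) is slightly larger than the amplified voltage offset, which is why lowering R_2 and R_4 helps but can not be avoided entirely.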
The third technique involves separate low pass and high pass amplifier paths, with their outputs summed.
The example in Fig. 5.2.15 is made on the assumption that the sum of the two outputs restores the original signal in both phase and amplitude. As the readers who have tried to build loudspeaker crossover networks will know from experience, this can be done correctly only for simple, first-order RC filters (with just two paths; for higher order filters a third, band pass path is necessary).
Fig. 5.2.15: The principle of separate low pass and high pass amplifiers.
Here the main problem is with the input of the low pass amplifier A_1, which must have an equally low input bias current as the high pass A_2, but should also have a very low voltage offset. Although in A_1 we don't need to worry about the high
frequency response, we are essentially again at the start, since JFETs and MOSFETs, which have low input current, have a high offset voltage, and vice versa for the BJTs.

But we can combine Fig. 5.2.14 and 5.2.15 and, by putting the RC network in front of the source follower, we can eliminate the amplifier A_2. Fig. 5.2.16 shows a possible implementation.
[Fig. 5.2.16 schematic: source follower with combined DC correction; component values: C_1 = 330 pF, R_1 = 900 kΩ, R_2 = 100 kΩ, R_3 = 900 kΩ, R_4 = 100 kΩ, R_5 = 2.7 kΩ, R_6 = 300 Ω, R_7 = 10 MΩ, 10 nF integration capacitor, R_ss = 50 Ω, C_ss = 800 pF, C_L = 3.3 pF, R_L = 12 kΩ.]
Fig. 5.2.16: With this configuration we can eliminate the need for a separate high pass amplifier. The DC correction is now applied to the Q_1 gate through R_7. The error integrating amplifier A_1 must have a gain of 10 in order to compensate for the 1 + R_1/R_2 and 1 + R_3/R_4 attenuation. Resistors R_1 and R_2 now provide the 1 MΩ input impedance for all attenuation settings, and this requires the compensated attenuators in front to be corrected accordingly.
[Fig. 5.2.17 schematic: the inverting integrator variant; component values: C_1 = 20 pF, C_2 = 10 nF, R_1 = 1 MΩ, R_2 = 1 MΩ, R_3 = 100 kΩ, R_4 = 100 kΩ, R_5 = 10 kΩ, R_7 = 10 MΩ, amplifiers A_1 and A_2, follower loaded by R_ss = 50 Ω, C_ss = 800 pF, C_L = 3.3 pF, R_L = 12 kΩ.]
Fig. 5.2.17: By inverting the output, the error amplifier becomes an inverting integrator and the offset correction is independent of the attenuator settings. The bootstrapping of R_7 produces an effective input resistance of about 2.4 GΩ.
Of course, now the DC error correction path must be returned to the current source Q_2. The input resistor R_1 must be increased to 1 MΩ, since now the input of A_1 is at the virtual ground; likewise, R_2 must be equal to R_1. Note that both A_1 and A_2 offsets add to the final DC error.

Further evolution of this circuit is possible by combining DC gain switching (R_2 or R_4 adjusting) with input attenuation. A very interesting result has been described in [Ref. 5.2], where also all input relays have been eliminated (using 3 source followers with the switching at their supply voltages by PIN diodes).
The integration loop will reduce the DC error only if the output follows the
input. However, under a hard overdrive the JFET will saturate and the integrator will
build up a charge proportional to the input overdrive amplitude and duration. When
the overdrive is removed, the loop will re-establish the original DC conditions, but
with the integration time constant, so the follower will exhibit a very long ‘tail’.
This is one of the most annoying properties of modern instrumentation, because we often want to measure the settling time of an amplifier, and a convenient specification is the time from the start of the transient to the moment the output remains within 0.1 % of the final value.
With a good old analog ’scope we would simply increase the vertical sensitivity and
adjust the vertical position so that the final signal level is within the screen range. But
with modern DC compensated circuits this is not possible, and in order to avoid the
post-overdrive tail we must use a specially built external limiter, [Ref. 5.6], to keep
the input signal within the linear range of the ’scope. The quality and speed of this
limiter will also influence the measurement.
Note that simple follower circuits, like the one in Fig. 5.2.13, would also
exhibit a small but noticeable post-overdrive tail, mainly owed to thermal effects.
Also, the high amplitude step response can be nonlinear, as shown in Fig. 5.2.18, owing to the variation of the JFET gate to channel capacitance with voltage (Eq. 5.2.18–19), but the time constant involved here is relatively small.
Fig. 5.2.18: Large signal step response (but still well below overdrive) is nevertheless
nonlinear, caused by the variation of the JFET gate–drain capacitance with voltage.
For a very long time, ever since semiconductors replaced electronic tubes in instrumentation, JFETs were the only components used for the source follower input section. Even today, JFETs outshine all other components in all performance aspects but one — sheer speed. Unfortunately, the BJT input impedance is much too low for the 1 MΩ required. And MOSFETs, although having a higher DC input resistance than JFETs, can have (depending on their internal geometry) a higher input leakage current, are notoriously noisy, and their gate is easily damaged by overdrive.
If, however, we are ready to accept the design challenge to help the MOSFET
by external circuitry, we might be rewarded with a faster follower. Also MOSFETs
lend themselves nicely to integration, and this is where the experience gained from the
design of high speed digital circuits can help. Circuit area reduction minimizes the
stray capacitance and inductance, and new IC processing and semiconductor materials
(e.g., GaAs, SiGe) increase charge mobility.
Note that for source follower applications a depletion type MOSFET is needed
in order to achieve the required drain–source conductance with zero gate–source
voltage. With appropriate doping, the supply voltage can be reduced to only 2 or 3 V
(in contrast to several tens of volts required by conventional circuits), whilst retaining
good high frequency operation. This also reduces the power dissipation and, more
importantly, with low system supply voltage, the need for voltage gain is lower.
As with BJTs and JFETs, the parasitic capacitances of MOSFETs are also
voltage dependent, but only partially, as will become evident from the following
comparison with JFETs.
Fig. 5.2.19: a) A typical n-channel JFET structure cross-section under the bias condition. The p-type substrate is in contact with the p-type gate. The n-type channel is formed between the source and the drain. The bias voltage depletes the channel. b) The V_gs–I_d characteristic. c) The symbolic circuit. d) The equivalent circuit model.
The built in potential V_bi relates to the JFET pinch off voltage parameter V_P as:

V_P = [q_e N_A a^2/(2 ε_Si)] · (1 + N_A/N_D) − V_bi    (5.2.55)

where:
a is the channel thickness (≈ 2 × 10^−6 m);
ε_Si is the silicon dielectric permittivity (= 1.04 × 10^−10 F/m).
For MOSFETs, the situation is slightly different. Fig. 5.2.20 shows a typical n-
channel MOS transistor cross-section and the equivalent circuit model.
Fig. 5.2.20: a) A typical n-channel MOSFET structure's cross-section under the bias condition. Two heavily doped n+ regions (source and drain) are manufactured on a p-type substrate and a metal gate covers a thin insulation layer, slightly overlapping the n+ regions. The bias voltage depletes a thick region in the substrate, within which an n-type channel is induced between the source and the drain. b) The V_gs–I_d characteristic. c) The symbolic circuit. d) The equivalent circuit model has two current sources, one owed to the usual mutual transconductance g_m and the gate–source voltage V_gs; the other is owed to the so called 'body effect' transconductance g_mb and the associated source–body voltage V_sb. The g_mb is typically an order of magnitude lower than g_m.
From the MOSFET structure cross-section it can be deduced that C_gb is small, owing to the relatively large depleted region. Ordinarily its value is about 0.1 pF and it is relatively constant. Likewise, the depletion region capacitances C_sb and C_db are also small (they are proportional to the gate and source area), but they are voltage dependent:

C_sb = C_sb0 · (1 + V_sb/V_bi)^(−1/2)    (5.2.58)

C_db = C_db0 · (1 + V_db/V_bi)^(−1/2)    (5.2.59)
The capacitances C_gs and C_gd are owed to the SiO_2 insulation layer between the gate and the channel. If S_g is the gate area and C_x is the unit area capacitance of the oxide layer under the gate, then the total capacitance is S_g C_x. Most MOSFETs are built with symmetrical geometry, thus the total zero bias capacitance is simply split in half. But in the saturation region the channel narrows, so
the drain voltage influence is small, resulting in a nearly constant C_gd, whose value is essentially proportional to the small gate–drain overlapping area. Thus typical C_gd values range between 0.002 and 0.020 pF.

C_gs is larger, typically some 2/3 of the S_g C_x value, or about 1–2 pF.

Although the MOSFETs' g_m is typically lower than in JFETs, it is the very small capacitances, in particular C_gd and C_gb, which are responsible for the wider bandwidth of a MOSFET source follower. Cut off frequencies of many GHz are easily achieved.
The input protection network is needed for two distinct real life situations. The first one is the (occasional) electrostatic discharge; the second one is a long term overdrive.
Imagine a technician sitting on a well insulated chair, wearing woollen or synthetic clothes and rubber soled shoes, repairing a circuit on his bench. For a while he rubs his clothes against the chair by reaching for the schematic, the spare parts, some tools, etc., thus quickly charging himself up to an average 500 V. Suddenly, he needs to put a 1:1 'scope probe somewhere on the rear panel, and he stands up, touching the probe to identify its trace by the characteristic capacitive AC mains pickup. By standing up, he has increased his average distance from the chair by a large factor, say 30, but the charge on the chair and his clothes remains unchanged. This is equivalent to charging a parallel plate capacitor and then increasing the plate distance, so that the capacitance drops inversely with the distance (Eq. 5.2.51). Because V = Q/C, his effective voltage would increase 30 times, reaching some 15 kV!
The average capacitance of the human body towards the surroundings of an
average room is about 200 pF. So, when our technician touches the probe tip, he will
discharge the 15 kV of his 200 pF right into the input of the poor ’scope. And such a
barbaric act can be repeated hundreds of times during an average repairing session.
At the instant the probe tip is touched, the effective input voltage falls for the first 5 ns (the propagation delay of the 1 m long probe cable) to a level set by the resulting capacitive divider α = 1/(1 + C_cable/C_body), so if C_cable = 100 pF/m, V = α V_body ≈ 10 kV. Here we assume a signal propagation velocity of 0.2 m/ns (about 2/3 of the speed of light). Also, note that the probe cable is made as a lossy transmission line (the inner conductor is made of a thin resistive wire, about 50 Ω/m).

After 5 ns the cable capacitance is fully charged and the signal reaches the spark gap. The spark gap fires, limiting the input voltage to its own breakdown voltage (1500–2000 V), providing a low impedance path to ground and discharging C_cable + C_body. Some 25 ns later the voltage falls below the spark threshold.
Now the total capacitance C_cable + C_body + C_in is discharged into the remaining input resistance. With the attenuator set to the highest sensitivity (1:1), the input resistance is equal to the 1 MΩ of R_in, in parallel with the series connection of the damping resistor R_d and one of the protection diodes (depending on the voltage polarity). The diode must withstand a peak current I_dpk = V_spark/R_d; if R_d = 150 Ω, then I_dpk = 2000/150 = 13.3 A! Fortunately, the peak current is lowered also by the loop inductance. The spark discharges the capacitance in less than 30 ns; the remaining charge is then fed through R_d and the protection diodes, and finally decays through R_in, as shown in Fig. 5.2.21.
[Fig. 5.2.21 plot and model: 15 kV on C_body = 200 pF, discharged at t = 0 into a 1 m probe cable (Z_0 = 50 Ω, 100 pF/m, τ_d = 5 ns) with a 2 kV spark gap, R_d = 150 Ω, clamp diodes D_1/D_2 and R_in = 1 MΩ, C_in = 20 pF; the waveforms show the body, spark and gate voltages and the diode current i_Rd over 0–250 ns, with C_T = C_body + C_cable + C_in.]
Fig. 5.2.21: A human body model of electrostatic discharge into the oscilloscope input. About 5 ns after touching the probe tip the probe cable is charged and the voltage reaches the spark gap. The spark gap fires and limits the voltage to its firing threshold. The arc provides a low impedance path, discharging the body and cable capacitance until the voltage falls below the firing threshold (~25 ns). The remaining charge is fed through R_d and one of the protection diodes, until the voltage falls below V_cc + V_D1 (~250 ns). Finally, the capacitance is discharged through R_in.
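The numbers in this scenario can be strung together in a few lines (all values are those used in the text and in Fig. 5.2.21):

```python
# Human-body ESD arithmetic from the scenario above.
C_body = 200e-12                 # body capacitance [F]
V_seated = 500.0                 # charge accumulated while seated [V]
V_standing = V_seated * 30       # Q = C*V constant while C drops ~30x

C_cable = 100e-12                # 1 m of probe cable at 100 pF/m
alpha = 1 / (1 + C_cable / C_body)   # capacitive divider at the tip
V_tip = alpha * V_standing       # voltage during the first 5 ns

V_spark = 2000.0                 # spark gap firing voltage [V]
Rd = 150.0                       # damping resistor [ohm]
I_dpk = V_spark / Rd             # peak protection-diode current [A]

print(V_standing, round(V_tip), round(I_dpk, 1))
```

This reproduces the 15 kV body voltage, the ≈10 kV seen at the tip for the first 5 ns, and the 13.3 A peak diode current quoted above.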
A different situation occurs in the case of a long term overdrive. Fig. 5.2.22 shows the protection network.
[Fig. 5.2.22 schematic: the 230 V, 50 Hz source drives the input through C_p = 1.5 nF and R_p = 150 kΩ; R_d = 150 Ω and the diodes D_1, D_2 (clamping to V_cc and V_ee) protect the A_v = +1 buffer.]
Fig. 5.2.22: Input protection network for long term overdrive.
The most severe long term input overdrive occurs when the oscilloscope input is on its highest sensitivity setting (no attenuation) and the user inadvertently connects a 1:1 probe to a high voltage DC or AC power supply. A typical highest sensitivity setting of 2 mV/div, or a ±8 mV range, is brutally exceeded by the 230 V_eff, 650 V_pp AC
mains voltage. Since with a well designed instrument nothing dramatic would happen (no flash, no bang, no smoke), the user might realize his error only after a while (a few seconds at best, and several minutes in the evening at the end of a long working day). The instrument must be designed to withstand such a condition for an indefinitely long time. With the component values as in Fig. 5.2.22, the peak current through R_p is:
keeping the voltage at C_gd nearly constant. The output complementary emitter follower is driven from the Q_1 source and R_3 (R_3 should be equal to R_4 to reduce the DC offset). The Q_3 bootstrap is provided by D_Z2, which, together with D_Z3, also bootstraps the bias circuit (R_9,10 and D_3,4) for the bases of Q_5 and Q_6, lowering in this way the load on the Q_1 source.
Fig. 5.2.23: The FET source follower with a buffer is capable of driving a low impedance load (such as a 50 Ω 1–2–5 attenuator section).
Bootstrapping increases the DC and low frequency impedance seen by Q_1, but note that its use will make sense at high frequencies only if Q_5,6 and Q_3 are substantially faster than the FET itself. Otherwise, the bootstrap circuitry would only increase the parasitic capacitances and thus increase the rise time.
With enough driving capability made available by Q_5 and Q_6, the load resistor R_L can now be realized as the low impedance three step attenuator with a direct path and two attenuated paths, ÷2 and ÷4. Besides the usual maximum input sensitivity of 5 mV/div, this attenuator will provide the next two settings of 10 and 20 mV/div. The following lower sensitivity settings (50 mV/div, etc.) are achieved by switching in the ÷10 and ÷100 sections of the high impedance input attenuator, achieving the lowest sensitivity of 2 V/div. An external ÷10 probe will decrease this to 20 V/div.
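The resulting sensitivity ladder follows directly from combining the two attenuators; a short sketch of the bookkeeping:

```python
# Sensitivity settings: the low impedance taps give 5/10/20 mV/div,
# and the high impedance /10 and /100 sections scale these up.
base = [5, 10, 20]            # mV/div from the 50-ohm attenuator taps
hi_z = [1, 10, 100]           # high impedance attenuator sections
settings = sorted(b * a for b in base for a in hi_z)
print(settings)               # mV/div, from 5 mV/div to 2 V/div

# an external /10 probe extends the top setting, in V/div:
top_with_probe = settings[-1] * 10 / 1000
print(top_with_probe)
```

The nine settings form the familiar 5–10–20 sequence per decade, and the external probe extends the top 2 V/div setting to 20 V/div, as stated above.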
Fig. 5.2.24 shows two possible implementations of the low impedance divider, having 50 Ω impedance at both input and output. The first attenuator design is based on the T-type network and the other on the π-type network. If the input signal is a current, the series 50 Ω in the ÷1 branch can be omitted.
[Fig. 5.2.24 schematics: two 50 Ω attenuator ladders with ÷1, ÷2 and ÷4 taps, built from 12.5 Ω, 25 Ω, 37.5 Ω, 40.625 Ω and 50 Ω resistors; one is a T-type ladder, the other a π-type ladder.]
Fig. 5.2.24: The low impedance attenuator (50 Ω input and output) can be built as a straight T-type ladder or a π-type ladder. If driven by a current source, the series 50 Ω in the ÷1 branch can be omitted.
Here we assume that the input impedance of the following amplifier stage is high enough, and its input capacitance low enough, for the division factor to remain correct at each setting and for the bandwidth not to change. It is also important to keep the switch capacitive cross-talk low and to preserve the nominal impedance by designing the attenuator as a microstrip transmission line.
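As a cross-check of the matched-attenuator idea (using the standard matched π-pad design formulas, not the exact ladder values of Fig. 5.2.24), one can verify numerically that a ÷2 pad presents 50 Ω at its input when its output is terminated in 50 Ω:

```python
# Matched pi-pad sanity check. For a voltage ratio K into a matched
# load: series arm Rs = Z0*(K**2 - 1)/(2*K), shunts Rp = Z0*(K+1)/(K-1).
def pi_pad(Z0, K):
    Rs = Z0 * (K ** 2 - 1) / (2 * K)
    Rp = Z0 * (K + 1) / (K - 1)
    return Rs, Rp

def check(Z0, K):
    Rs, Rp = pi_pad(Z0, K)
    Rout = 1 / (1 / Rp + 1 / Z0)           # output shunt || load
    Rin = 1 / (1 / Rp + 1 / (Rs + Rout))   # seen from the input node
    ratio = (Rs + Rout) / Rout             # input node / load voltage
    return Rin, ratio

Rin, ratio = check(50.0, 2.0)
print(round(Rin, 9), round(ratio, 9))
```

For K = 2 and Z_0 = 50 Ω this yields the series arm 37.5 Ω and shunt arms 150 Ω, a 50 Ω input impedance and an exact ÷2 voltage division, which is the property that lets the taps be switched without disturbing the bandwidth.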
In addition to the discrete step attenuation, oscilloscopes, as well as other high speed instruments, often need a continuously variable attenuation (or gain), although within a restricted range (a range of 0.3 to 1 is often enough). A passive potentiometer is, of course, an obvious solution and it was used extensively in the early days. However, this potentiometer is usually placed somewhere in the middle of the amplifier and its control shaft has to be brought to the instrument front panel, which can often be a mechanical nightmare. Also, its variable impedance causes the bandwidth to vary, and this is a very undesirable property. An electronically controlled amplifier gain with a constant bandwidth would therefore be welcome. We will examine such circuits at the end of Sec. 5.4.
From about 1980 we have been witnessing both the development of a radically
different operational amplifier topology and a major improvement in complementary
semiconductor devices’ technology, resulting in a steep rise in performance. At the
same time, the market’s hunger for higher bandwidth has been met by a massive
production increase, so that the prices have remained fairly low. With the
accompanying development in digital technology, both in terms of switching speed
and circuit complexity, the techniques which have previously been forbiddingly
expensive and too demanding to realize suddenly became feasible and within reach.
At the turn of the century, opamps with a unity gain bandwidth product of about 1 GHz or more (such as the Burr–Brown OPA-640) became available at a price comparable to that of a couple of good discrete high frequency transistors. Add to this a relatively good DC performance and, using surface mounting devices, a circuit area of ~1 cm², a low power consumption, and a noise level comparable to the thermal noise of a 100 Ω resistor, and the advantages are obvious.
Clearly, we have come a long way from the ubiquitous µA741.
However, in order to better evaluate the performance and the design
requirements of the new devices we shall first expose the weak points of the classical
configuration.
The name ‘operational amplifier’ springs from the analog computer era, in
which amplifier blocks could be combined with passive components to perform
various mathematical operations, from simple signal addition and subtraction to
integration and differentiation.
The main performance limitation of early IC opamps was imposed by the
integration technology itself: whilst fairly good NPN transistors could be easily
produced, PNP transistors could be made along with NPN ones only as very slow,
‘lateral’ structures. This has restricted their use to only those parts of the opamp,
where the needed bandwidth, gain, and load could be low. In practical terms, the only
such place in a typical opamp is the so called ‘middle stage level translator’, as shown
in Fig. 5.3.1. Even so, the opamp open loop bandwidth was almost always below 100
Hz, mainly owing to the Miller effect and the need to provide enough phase margin at
low closed loop gain to ensure unconditional system stability.
On the other hand, for general purpose applications, the important parameter
was a high negative feedback factor, used to minimize the circuit performance
variations owed to the transistor parameters (which in the early days were difficult to
control) and instead rely on passive components which could be easily produced with
a relatively tight tolerance. It was this ability to deliver predictable performance by a
simple choice of two feedback resistors which made the opamp a popular and widely
used circuit component. And not only a well defined gain, but also a broad range of
other signal conditioning functions is made possible by combining various passive
and active components in the feedback loop.
The feedback concept is so simple and works so well that too many people
take it for granted; and equally many are surprised to discover that it can cause as
much trouble as the solutions it offers.
[Fig. 5.3.1 schematic: input differential pair Q_1/Q_2 with collector resistor R_c; the slow 'lateral' PNP level translator Q_3 with its collector–base capacitance C_cb; output emitter follower Q_4; current sources I_1–I_3; feedback divider R_f, R_e from the output back to the inverting input.]
Fig. 5.3.1: The classical opamp, simplified. The input differential pair Q_1 and Q_2 subtract the feedback from the input signal, driving with the difference the 'level translator' stage Q_3, which in turn drives the output emitter follower Q_4, which provides a low output impedance. The feedback voltage is derived from the output voltage v_o by dividing it as β = R_e/(R_e + R_f). If the opamp open loop gain A_o is much higher than 1/β, the closed loop gain is A_cl ≈ 1/β (see text).
In the (highly simplified) opamp circuit in Fig. 5.3.1, the open loop gain is equal to the gain of the differential pair Q_1 and Q_2, multiplied by the gain of the level translator Q_3. The output emitter follower Q_4 has a unit voltage gain. However, all three stages have their gain frequency dependent, as was explained in Part 3. Fortunately, the three poles are far apart (all are real) and the poles of the first and the third stage can be (and usually are) easily set high enough for the amplifier open loop frequency response to be dominated by the second stage pole (which was in turn named 'the dominant pole').

The dominant pole of the circuit in Fig. 5.3.1 is set by the Q_1 collector resistor R_c and the Miller capacitance C_M:

ω_0 = 1/(R_c C_M)    (5.3.1)

where we have neglected the input resistance of Q_3 in parallel with R_c, which we can do if R_c is small.

C_M appears effectively in parallel with R_c and its value is equal to the Q_3 collector to base capacitance C_cb, multiplied by the Q_3 gain:
    vi = vo β                                       (5.3.8)

The signal being amplified is the difference between the source voltage and
the voltage provided by feedback:

    Δv = vs − vi                                    (5.3.9)

This voltage is amplified by the amplifier open loop transfer function, A(s), to
give the output voltage:

    vo = A(s) Δv                                    (5.3.10)

    vo [1 + β A0 ω0/(s + ω0)] = A0 [ω0/(s + ω0)] vs      (5.3.13)
From this last expression it is obvious that if the open loop gain A0 is very
high the amplifier gain vo/vs reduces to the familiar 1/β, or (Rf + Re)/Re.
Likewise, for a finite value of A0, the frequency dependent part increases, thus
lowering the closed loop gain at higher frequencies.
Take, for example, the µA741 opamp, Fig. 5.3.2: owing to its dominant pole,
the open loop cut off frequency is at about 10 Hz, whilst the open loop gain at DC is
about 10^5. The unity gain crossover frequency f1 is therefore about 1 MHz.
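These figures can be checked with a short numeric sketch (Python); the Re, Rf divider values below are hypothetical, chosen for a nominal gain of 10:

```python
import math

# Open loop model quoted in the text: DC gain ~1e5, dominant pole at ~10 Hz
A0 = 1e5
f0 = 10.0                 # open loop cutoff frequency, Hz

# Single pole model: the unity gain crossover is close to A0 * f0
f1 = A0 * f0
print(f1)                 # ~1 MHz, as stated in the text

# Closed loop gain with a finite A0: Acl = A0 / (1 + beta * A0)
Re, Rf = 1e3, 9e3         # hypothetical divider, ideal gain 1/beta = 10
beta = Re / (Re + Rf)
Acl = A0 / (1 + beta * A0)
print(Acl)                # slightly below 10, owing to the finite A0
```

The small shortfall of Acl below 10 is the static gain error that the later paragraphs quantify at higher frequencies, where the available loop gain shrinks.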
Fig. 5.3.2: A typical opamp open loop gain and phase compared to the closed loop
gain. The dashed lines show the influence of a secondary pole (usually the input
differential stage pole), which, for stability requirements, must be set at or above the
unity gain transition frequency, f1 = 1 MHz. fh is the closed loop cutoff frequency.
For a closed loop gain of 10, β = 0.1; since the frequency dependence term is
a ratio, the factor 2π can be extracted and canceled, leaving f0/(jf + f0), where f0 is
the open loop cutoff frequency. By putting this into Eq. 5.3.15, we see that the
amplifier will be correcting its own non-linearities by a factor of 10^4 (80 dB)
at 1 Hz, but only by a factor of 10^2 (40 dB) at 1 kHz; and at 100 kHz there would be
only 3 dB of feedback, resulting in a 50% gain error. This means that for a source
signal of 0.1 V there would be a Δv of 0.05 V, resulting in an output voltage of
vo = 0.5 V (instead of the 1 V obtained at low frequencies). Above the closed loop
cutoff frequency the amplifier has practically no feedback at all.
An additional error is owed to the phase shift: at 100 kHz a single pole
amplifier would have the output at 90° phase lag against the input. An amplifier with
an additional input differential stage pole at 1 MHz would shift the phase by 135°, so
there would be only a 45° phase margin at this frequency and the circuit would be
practically at the edge of closed loop stability. If this amplifier were needed to
drive a 2 m long coaxial cable (capacitance 200 pF), then, considering the amplifier
output impedance of about 75 Ω, the additional phase shift of 5° would be enough to
turn the amplifier into a high frequency oscillator.
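That 5° figure is easy to verify; a minimal sketch (Python), assuming the output resistance and the cable capacitance simply form one additional pole:

```python
import math

Ro = 75.0       # amplifier output impedance, ohms (from the text)
Cc = 200e-12    # capacitance of the 2 m coaxial cable, farads
f  = 1e6        # evaluate near the unity gain crossover, Hz

# Extra pole formed by Ro and Cc
fp = 1 / (2 * math.pi * Ro * Cc)
# Additional phase lag this pole contributes at f
phase = math.degrees(math.atan(f / fp))
print(round(fp / 1e6, 1), round(phase, 1))   # pole at ~10.6 MHz, ~5.4 degrees
```

With only a 45° margin already consumed by the secondary pole, those extra degrees push the loop to the verge of oscillation.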
The discussion so far is valid for small signal amplification. For large
signals the bandwidth is much lower than the small signal one. This is owed to
the Miller capacitance causing Q3 to act as an integrator. For a positive input step
larger than 2 kB T/qe (or I1 Re1 if the input differential pair has emitter degeneration
resistors), the transistor Q1 will be fully open, while Q2 will be fully closed.
Therefore, the maximum current available to charge CM will be equal to the tail
current I1. The voltage across CM will increase linearly until the input differential
stage comes out of saturation. Consequently, the slew rate limit is:

    SR = dv/dt = I1/CM                              (5.3.16)

Usually I1 is of the order of 100 µA (or even lower if low noise is the main design
goal). Also, owed to the gain of Q3, the Miller capacitance CM can be large; say, with
Ccb = 4 pF and A3 = 50, CM will be about 200 pF, giving a slew rate SR = 0.5 V/µs.
We know that for a sine wave the maximum slope occurs at zero crossing, where the
derivative is dv/dt = d(Vp sin ωt)/dt = ω Vp cos ωt; at zero crossing t = 0 and
cos(0) = 1, so the slew rate equation can be written as:

    SR = ω Vp = I1/CM                               (5.3.17)

For a supply voltage of ±15 V, the signal amplitude just before clipping would
probably be around 12 V, so the maximal full power sine wave frequency would be
fmax = I1/(2π CM Vp), or approximately 6.5 kHz only!
The frequency at which the sine wave becomes a linear ramp, with a nearly
equal peak amplitude, is slightly higher: fr = 1/(4 tr) = SR/(4 Vp) = 10 kHz (note that
the SR of the circuit in Fig. 5.3.1 is not symmetrical, since CM is charged by I1 and
discharged through Rc; in an actual opamp circuit, such as in Fig. 5.3.3, Rc is
replaced by a current mirror, driven by the collector current of Q2, giving a
symmetrical slew rate).
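The slew rate arithmetic above can be reproduced in a few lines (Python), using the same example values:

```python
import math

I1 = 100e-6     # tail current, A
CM = 200e-12    # Miller capacitance, F (Ccb = 4 pF, A3 = 50)
Vp = 12.0       # peak amplitude just before clipping, V

SR = I1 / CM                      # slew rate limit, V/s (Eq. 5.3.16)
fmax = SR / (2 * math.pi * Vp)    # full power sine wave limit (Eq. 5.3.17)
fr = SR / (4 * Vp)                # frequency at which the sine becomes a ramp
print(SR / 1e6, round(fmax), round(fr))
```

The printout confirms 0.5 V/µs, a full power bandwidth of roughly 6.6 kHz, and a ramp frequency near 10.4 kHz, matching the text.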
Fig. 5.3.3: Simplified conventional opamp circuit with the current mirror as the active
load to Q1. The second stage is modeled as a Miller integrator with large gain. This
circuit exhibits symmetrical slew rate limiting. The dominant pole is ω0 = gm/CM, where
gm is the differential amplifier's transconductance and CM = Ccb(1 + A3).
Fig. 5.3.4: Current feedback opamp, derived from the voltage feedback opamp (Fig. 5.3.1):
we first eliminate Q2 from the input differential amplifier and introduce the feedback into
the Q1 emitter (low impedance!). Next, we load the Q1 collector by a diode connected Q2,
forming a current mirror with Q3. Finally, we use very low values for Rf and Re. The
improvements in terms of speed are two: first, for large signals, the current available for
charging Ccb is almost equal to the feedback current ifb, eliminating slew rate limiting;
second, Ccb is effectively grounded by the low impedance of Q2, thus avoiding the Miller
effect. A disadvantage is that the voltage gain is provided by Q3 alone, so the loop gain is
lower. Nevertheless, high frequency distortion can be lower than in classical opamps,
because, for the equivalent semiconductor technology, the dominant pole is at least two
decades higher, providing more loop gain for error correction at high frequencies.
The amplifier in Fig. 5.3.4 would still run into slew rate limiting for high
amplitude signals, owing to the fixed bias of the first stage current source I1. This is
avoided by using a complementary symmetry configuration, as shown in Fig. 5.3.5. Of
course, the complementary symmetry can be used throughout the amplifier, not just in
the first stage.
Fig. 5.3.5: A fully complementary current feedback amplifier model. It consists of four parts:
transistors Q1–4 form a unity gain buffer, the same as Q9–12, with the four current sources
providing bias; Q5,7 and Q6,8 form two current mirrors. In contrast to the voltage feedback
circuit, both of whose inputs are of high impedance, the inverting input of the current feedback
amplifier is a low impedance output of the first buffer. The current flowing in or out of the
emitters of Q3,4 is (nearly) equal to the current at the Q3,4 collectors. This current is reflected
by the mirrors and converted into a voltage at the Q7,8 collectors, driving the output unity gain
buffer. The circuit stability is ensured by the transimpedance ZT, which can be modeled as a
parallel connection of a capacitor CT and resistor RT. The closed loop bandwidth is set by Rf
and the gain by Re (the analysis is presented later in the text). One of the first amplifiers of
this kind was the Comlinear CLC400.
Process (yr)   VIP1 (1986)    VIP2 (1988)    VIP3 (1994)    VIP10 (2000)    Units
Parameter      NPN    PNP     NPN    PNP     NPN    PNP     NPN     PNP     —
ic/ib          250    150     250     80     150     60     100      55     —
Early VA       200     60     150     40     150     50     120      40     V
fT             0.4    0.2     0.8    0.5       3    2.5       9       8     GHz
Cjs            2.0    2.2     1.5    1.8     0.5    0.8   0.005   0.007     pF
E width            15             11              2               1         µm
Area            20000          18000           2400             300         µm²
Vce max            36             36             32              12         V
Table 5.3.1: Typical production parameters of the Complementary Bipolar process [Ref. 5.33]
Although the same technology is now used also for conventional voltage
feedback amplifiers, the current feedback structure offers further advantages which
result in improved circuit bandwidth, as evidenced by the following discussion.
The stability of the amplifier in Fig. 5.3.5 is ensured by the so called
transimpedance network, ZT, which can be modeled as a parallel RT CT network.
Note that the compensating capacitor, CT (consisting of 4 parts, CT1–CT4), is
effectively grounded, as can be seen in Fig. 5.3.7, since the Ccb of Q9,10 are tied to the
supply voltages directly, whilst the Ccb of Q7,8 are tied to the supply by the low
impedance CE path of Q5,6.
Fig. 5.3.7: The capacitance CT consists of four components (CT1–CT4), all effectively grounded.
Fig. 5.3.8: Current feedback amplifier model used for the analysis.
Imagine for a moment that Rf is taken out of the circuit. Essentially this would
be an open loop configuration, the gain of which can be expressed by the ratio of two
resistors, RT/Re. The gain is provided by the current mirrors M1,2; their output
currents are summed, so the two mirrors behave like a single stage; consequently the
gain value, compared with that of a conventional opamp, is relatively low (in practice,
maximum gains between 60 and 80 dB are common). It is important to note, however,
that the open loop (voltage) gain does not play such an important role in current
feedback amplifiers. As the name implies, what matters more is how the feedback
current is processed.
Let us now examine a different situation: we put back Rf and disconnect Re.
If there were any voltage difference between the outputs of the two buffers, a
current would be forced through Rf, increasing the output current of the first buffer,
A1. The two current mirrors would reflect this onto the input of the second buffer, A2,
in order to null the output voltage difference. In other words, the output of the first
buffer A1 represents an inverting current mode input of the whole system.
If we now reconnect Re it is clear that the A1 output must now deliver an
additional current, ie, flowing to the ground. The current increase is reflected by the
mirrors into a higher vT, so the output voltage vo would increase, forcing the current
if (through Rf) into the A1 output. Looking from the A1 output, ie flows in the
direction opposite to if, so the total current ifb of the A1 output will be equal to their
difference. Thus Re and Rf form a classical feedback divider network, but
the feedback signal is a current. As expected, the output of A2 must now become
(Rf + Re)/Re times higher than the output of A1 to balance the feedback loop.
Deriving the circuit equations is simple. The transimpedance equation
(assuming an ideal unity gain buffer A2, thus vT = vo) is:

    vo = ZT ifb                                     (5.3.18)

The feedback current (assuming an ideal unity gain buffer A1, thus vfb = vs) is:

    ifb = vs/Re + (vs − vo)/Rf = vs (1/Re + 1/Rf) − vo/Rf      (5.3.19)

    vo/vs = (1 + Rf/Re) · 1/(1 + Rf/ZT)             (5.3.20)
We see that the equation for the closed loop gain has two terms, the first one
resulting from the feedback network divider and the second one containing the
transimpedance ZT and Rf, but not Re! This is in contrast to what we are used to in
conventional opamps.
If we now replace ZT by its equivalent network, 1/(s CT + 1/RT), then the
closed loop gain Eq. 5.3.20 can be rewritten as:

    vo/vs = (1 + Rf/Re) · 1/(1 + Rf/RT + s CT Rf)   (5.3.21)
We can rewrite this to reveal the system's pole, in the way we are used to:

    vo/vs = [(1 + Rf/Re)/(1 + Rf/RT)]
            · {(1/(CT Rf))(1 + Rf/RT)} / {s + (1/(CT Rf))(1 + Rf/RT)}      (5.3.22)

By comparing this with the general single pole system transfer function:

    F(s) = A0 s1/(s + s1)                           (5.3.23)

we note that the term:

    s1 = (1/(CT Rf)) (1 + Rf/RT)                    (5.3.24)

is the closed loop pole, which sets the closed loop cutoff frequency: fh = |s1|/2π.
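A quick numeric check of Eq. 5.3.24 (Python), using the component values that appear later in Fig. 5.3.12, shows why the current feedback bandwidth is, ideally, independent of the gain set by Re:

```python
import math

RT = 300e3      # transimpedance resistance, ohms
CT = 1.6e-12    # transcapacitance, F
Rf = 1e3        # feedback resistor, ohms

# Closed loop pole, Eq. 5.3.24 -- note that Re does not appear
s1 = (1 / (CT * Rf)) * (1 + Rf / RT)
fh = s1 / (2 * math.pi)
print(round(fh / 1e6))            # ~100 MHz

# Changing the closed loop gain via Re leaves fh untouched
for Re in (float('inf'), 110.0, 10.1):
    Acl = 1 + Rf / Re
    print(round(Acl), round(fh / 1e6))
```

Since Re never enters Eq. 5.3.24, the gain can be reprogrammed without recomputing the compensation, which is exactly the property exploited later in Fig. 5.3.10.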
It might be interesting to return to Eq. 5.3.19 with the result of Eq. 5.3.21 and
express the current which charges CT as a function of the input voltage.
Therefore we can express iCT (see the transient response in Fig. 5.3.9) as:

    iCT ≈ ifb = vs (1/Re + 1/Rf) · s / [s + (1/(CT Rf))(1 + Rf/RT)]      (5.3.27)
Fig. 5.3.9: 'Current on demand': The step response reveals that the feedback
current is proportional to the difference between the input and output voltage,
essentially a high pass version of the output voltage, as shown by Eq. 5.3.27.
In Fig. 5.3.10 we compare the cut off frequency vs. gain of a voltage feedback
and a current feedback amplifier. The voltage feedback amplifier bandwidth is
inversely proportional to gain; in contrast, the current feedback amplifier bandwidth
is, in principle, independent of gain. This property makes current feedback amplifiers
ideal for wideband programmable gain applications.
Fig. 5.3.10: Comparison of closed loop cut off frequency vs. gain of a conventional and a
current feedback amplifier. The conventional amplifier has a constant GBW product (higher
gain, lower cut off), but the current feedback cut off frequency is (almost) independent of gain.
The last four points are equally well known from conventional amplifiers and
their influence is straightforward and easy to understand, so we shall not discuss them
any further. The first point, however, deserves some attention.
The current feedback amplifier requires a low (ideally zero) impedance at the
inverting input in order to sense the feedback current correctly. This, in addition to the
manufacturing imperfections between the transistors Q1–4 (Fig. 5.3.11), results in a
relatively high input offset, owed to both DC voltage errors and current errors.
The offset is reduced by using the current mirror technique for the biasing
current sources (Qa–d), making the currents of Q1,2 equal. Further reduction is
achieved by adding low value resistors, R1–4 (a value of about 10 re is usually
sufficient), to the Q1–4 emitters. This, however, increases the inverting input resistance.
Fig. 5.3.11: The resistors R1–4 used to balance the inputs are the cause of the non-zero
inverting input resistance, R3||R4, modeled by Ro; this causes an additional voltage drop,
reducing the feedback current (see analysis).
    vo/vs = (1 + Rf/Re) / [1 + (Ro/ZT)(1 + Rf/Re) + Rf/ZT]      (5.3.32)

which we re-order in the usual way to separate the DC gain from the frequency
dependent part:
    vo/vs = {(1 + Rf/Re) / [1 + (1 + Rf/Re)(Ro/RT) + Rf/RT]}
            · {[1 + (1 + Rf/Re)(Ro/RT) + Rf/RT] / (CT [Rf + Ro (1 + Rf/Re)])}
            / {s + [1 + (1 + Rf/Re)(Ro/RT) + Rf/RT] / (CT [Rf + Ro (1 + Rf/Re)])}
                                                    (5.3.35)

Again, a comparison with the general normalized first-order transfer function:

    F(s) = A0 s1/(s + s1)                           (5.3.36)

reveals the DC gain:

    A0 = (1 + Rf/Re) / [1 + (1 + Rf/Re)(Ro/RT) + Rf/RT]      (5.3.37)

and the pole:

    s1 = [1 + (1 + Rf/Re)(Ro/RT) + Rf/RT] / {CT [Rf + Ro (1 + Rf/Re)]}      (5.3.38)
Whilst the gain error ε is small, the bandwidth error can be rather high at high
gain; e.g., if Rf = 1 kΩ and Ro = 10 Ω, with Acl = 100, the time constant would
double and the bandwidth would be halved. In Fig. 5.3.12 we have plotted the
bandwidth reduction as a function of gain for a typical current feedback amplifier.
Although not constant as in theory, the bandwidth is reduced far less than in voltage
feedback opamps (about 50× less for a gain of 100).
Fig. 5.3.12: The bandwidth of an actual current feedback amplifier is gain dependent, owing to
a small but finite inverting input resistance Ro. With CT = 1.6 pF, RT = 300 kΩ, Rf = 1 kΩ,
Ro = 10 Ω and Re = Rf/(Acl − 1), the nominal bandwidth of 100 MHz at unity gain is reduced
to 90 MHz at a gain of 10 and to only 50 MHz at a gain of 100.
Nevertheless, this is still much better than in voltage feedback amplifiers.
From these relations we conclude two things: first, both the actual closed loop
gain and bandwidth are affected by the desired closed loop gain Acl = 1 + Rf/Re;
second and more important, for a given Ro, we can reduce Rf by Ro Acl and
recalculate the required Re to arrive at slightly modified values which preserve both
the desired gain and bandwidth!
Note that in the above analysis we have assumed a purely resistive feedback
path; an additional influence of Ro will show up when we consider the effect of
stray capacitances in the following section.
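The compensation just described can be checked numerically against Eq. 5.3.38; a sketch (Python) using the Fig. 5.3.12 values, for a closed loop gain of 10:

```python
import math

RT, CT = 300e3, 1.6e-12   # transimpedance model (values from Fig. 5.3.12)
Ro = 10.0                 # inverting input resistance, ohms

def fh(Rf, Re):
    """Closed loop cutoff frequency from Eq. 5.3.38."""
    Acl = 1 + Rf / Re
    num = 1 + Acl * (Ro / RT) + Rf / RT
    den = CT * (Rf + Ro * Acl)
    return num / den / (2 * math.pi)

Acl = 10.0
Rf = 1e3
fh_plain = fh(Rf, Rf / (Acl - 1))          # ~90 MHz instead of the nominal 100 MHz

# Compensate: reduce Rf by Ro * Acl, then recalculate Re for the same gain
Rf2 = Rf - Ro * Acl
fh_comp = fh(Rf2, Rf2 / (Acl - 1))         # restored to ~100 MHz
print(round(fh_plain / 1e6), round(fh_comp / 1e6))
```

The reduced Rf makes the time constant CT(Rf2 + Ro Acl) equal to CT Rf again, which is exactly why the nominal bandwidth is recovered at the same gain.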
A classical voltage feedback unity gain compensated amplifier (for which any
secondary pole lies above the open loop unity gain crossover) usually remains stable if
a capacitor Cf is added in parallel with Rf, as in Fig. 5.3.13a. Because Cf lowers the
bandwidth, it is often used to prevent problems at and above the closed loop cutoff. In
contrast, a capacitor Ce in parallel with Re, as shown in Fig. 5.3.13b, would reduce the
feedback at high frequencies, leading to instability.
Fig. 5.3.13: a) A unity gain compensated voltage feedback amplifier remains stable with
capacitive feedback, whilst in b) it is unstable. In contrast, the stability of a current feedback
amplifier is upset by either a) Cf in parallel with Rf, or b) Ce in parallel with Re.
Fig. 5.3.14: Noise gain definition: A noisy amplifier is modeled as a noise
generator vN in series with the input of a noiseless amplifier. The inverting signal
gain is vo/v2 = −Rf/Re; the non-inverting signal gain is vo/v1 = 1 + Rf/Re;
the noise gain is vo/vN = −(1 + Rf/Re). The noise generator polarity is
indicated only as a reference for the noise gain polarity inversion.
The noise gain is inverting in phase, but equal in value to the non-inverting
signal gain:

    AN = vo/vN = −(1 + Rf/Re)                       (5.3.43)
For the voltage feedback amplifier the closed loop bandwidth is equal to the
ratio of the unity gain bandwidth frequency and the noise gain:

    fcl = f1/|AN|                                   (5.3.44)
Fig. 5.3.15: An arbitrary example of the phase angle estimated from the
gain magnitude, its slopes and various breakpoints. If two breakpoints are
relatively close the phase does not actually reach the value predicted from
the slope, but an intermediate value instead.
In the same manner, along the amplifier open loop gain asymptotes, we draw
the noise gain, as in Fig. 5.3.16, and we look at the crossover point of these two
characteristics. Two important parameters can be derived from this plot: the first is
the amount of gain at the crossover frequency fc; the second is the relative slope
difference between the two lines at fc, which also serves as an indication of their
phase difference.
If the available gain at fc is greater than unity the phase difference determines
the amplifier stability. The feedback can be considered 'negative' and the amplifier
operation stable if the loop phase margin is at least 45°; this means that, if a 360°
phase shift is 'positive', the maximum phase shift within the feedback loop must
always be less than 315° (if A(fc) > 1). Since the inverting input provides 180°, the
total phase shift of the remaining amplifier stages (secondary poles) and the feedback
network should never exceed 135°. Note also that a phase margin of 90° or more
results in a smooth transient response; for a phase margin between 90° and 45° an
increasing amount of peaking results.
Fig. 5.3.16: Voltage feedback amplifier noise gain is derived from the equivalent circuit noise
model (a noise generator in series with the input of a noiseless amplifier). The Bode plot shows
the relationships between the most important parameters (|A(f)| is the open loop gain
magnitude, f0 is the dominant pole, and fs is the secondary pole, owed to the slowest internal
stage). The inverse of the feedback attenuation β is the noise gain and it is equivalent to the
amplifier closed loop gain Acl. Note that the noise gain is flat up to and beyond the open loop
crossover fc, owed to the zero at 1/2π(Rf||Re)Cf. The amplifier is stable since the noise gain
crosses the open loop gain at a point where their slope difference is 20 dB/decade. If the
amplifier open loop gain were higher, the gain at the secondary pole (at fs) would be higher
than unity and the slope difference would be 40 dB/decade. Then the increased phase (135° at
fs, approaching 180° above), along with the 180° of the amplifier inverting input, would make
the feedback positive (→360°) and the amplifier would oscillate.
Now, let us find the noise gain of the voltage feedback amplifier in Fig. 5.3.16.
Note that while there is some feedback available the amplifier tries to keep the
difference between the inverting and non-inverting input as small as its open loop
gain allows; so, with a high open loop gain, the input voltage difference tends to be
zero (plus the DC voltage offset).
Note also that, in order to keep track of the phase inversion by the amplifier,
we have added polarity indicators to the noise generator. If the ‘+’ side of the noise
generator @N tries to push the inverting input positive, the output voltage must go
negative to compensate it.
With Cf in parallel with Rf we shall have:

    vo/vN = −[1 + (Rf/Re) · (1/(Cf Rf)) / (s + 1/(Cf Rf))]      (5.3.45)

    vo/vN = −[s + 1/(Cf Rf) + 1/(Cf Re)] / [s + 1/(Cf Rf)]      (5.3.46)
Here we have a pole at 1/(Cf Rf) and a zero at 1/(Cf (Rf||Re)). Eq. 5.3.46 is the noise
gain (and also the closed loop gain Acl; see the two distinct breakpoint frequencies in
Fig. 5.3.16). The inverse of this is the feedback attenuation β.
With the open loop gain as shown in Fig. 5.3.16 the amplifier is stable, since
the noise gain crosses over the open loop gain at fc, where the open loop and closed
loop slope difference is 20 dB/decade, and the associated phase shift is (nearly) 90°.
In addition to the 180° of the amplifier inverting input, the total phase angle is then
270°. The minimum phase margin for a stable amplifier would be 45° and here we
have 90° (360° − 270°), so the feedback can still be considered 'negative'.
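The two noise gain breakpoints of Eq. 5.3.46 can be evaluated directly; a sketch (Python) with hypothetical component values:

```python
import math

# Hypothetical feedback network values
Rf, Re, Cf = 10e3, 1e3, 10e-12

f_pole = 1 / (2 * math.pi * Cf * Rf)        # noise gain pole, 1/(2*pi*Cf*Rf)
Rpar = Rf * Re / (Rf + Re)                  # Rf || Re
f_zero = 1 / (2 * math.pi * Cf * Rpar)      # noise gain zero, 1/(2*pi*Cf*(Rf||Re))
print(round(f_pole / 1e6, 2), round(f_zero / 1e6, 2))
```

Since Rf||Re is always smaller than Rf, the zero always lies above the pole, which is what keeps the noise gain flat through the crossover region in Fig. 5.3.16.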
However, if the open loop gain were higher (and if the poles remained at the same
frequencies) the gain at fs (the frequency of the secondary pole) could be greater than
unity. In this case, at the crossover of the noise gain and the open loop gain, the slope
difference would be 40 dB/decade, with an associated phase of 135° at fs, approaching
180° above. The feedback would become 'positive' and, with the gain at fs greater than
unity, the amplifier would oscillate.
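The slope-difference argument can also be made quantitative; a sketch (Python) of the loop phase contributed by a dominant and a secondary pole, with hypothetical pole frequencies:

```python
import math

def loop_phase(fc, f0, fs):
    """Phase lag (degrees) of a dominant pole f0 plus a secondary pole fs at fc."""
    return math.degrees(math.atan(fc / f0) + math.atan(fc / fs))

# Hypothetical example: crossover at 1 MHz, dominant pole 10 Hz, secondary 4 MHz
fc, f0, fs = 1e6, 10.0, 4e6
shift = loop_phase(fc, f0, fs)
margin = 180 - shift      # margin left before the feedback turns positive
print(round(shift), round(margin))
```

Pushing the secondary pole fs down toward fc drives the shift toward 180° and the margin toward zero, which is the oscillation condition described above.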
In the case of the current feedback amplifier in Fig. 5.3.17 we first note that
instead of gain our Bode plot shows the feedback impedances and the amplifier
transimpedance, all as functions of frequency.
Intuitively speaking, a capacitance Cf in parallel with Rf would reduce the
impedance of the feedback network at high frequencies, thus also reducing the closed
loop gain. However, intuition misleads us here: since the current feedback system
bandwidth is inversely proportional to the feedback impedance in the f-branch (as
demonstrated by Eq. 5.3.24), the addition of Cf increases the bandwidth. By itself this
would be welcome, but note that at the crossover frequency fc the slope difference
between the transimpedance ZT and the 'noise transimpedance' (in analogy with
noise gain) is 40 dB/decade, causing a phase shift of 180°. This means that at fc the
feedback becomes positive and the amplifier will oscillate.
Fig. 5.3.17: For the current feedback amplifier we draw the impedances, not gain. The feedback
network impedance |Zfb|, as seen from vo, is slightly higher than Rf at DC (owing to Ro, the
inverting input resistance) and falls to Ro||Re at high frequencies; its inverse (about Rf) is the
amplifier noise transimpedance, |ZN|. The feedback network pole becomes the zero of the noise
transimpedance: sz = 1/(Cf Rf) (fz = |sz|/2π); likewise, the feedback zero becomes the noise
transimpedance pole sp = 1/(Cf (Ro||Re)) (fp = |sp|/2π). At fc, the crossover with |ZT|, the
slope difference is 40 dB/decade and the relative phase angle is 180°; the amplifier will inevitably
oscillate, even if the secondary pole is far away and its ZT breakpoint is well below Rf. The
dashed line is the transimpedance required for stability, realized by an R in series with Cf.
Fig. 5.3.18: The current feedback amplifier and its 'noise transimpedance' equivalent, vo/ifb.
We can find the noise transimpedance as simply as we found the noise gain
for voltage feedback amplifiers. By assuming that the feedback current is noise
generated, from the equivalent circuit in Fig. 5.3.18 we calculate the ratio of the
output voltage and the feedback current:

    vo/ifb = Ro + Rf (1 + Ro/Re)                    (5.3.47)

By adding a capacitance Cf in parallel with Rf the noise transimpedance becomes:

    vo/ifb = Ro + Rf (1 + Ro/Re) · [1/(Cf Rf)] / [s + 1/(Cf Rf)]      (5.3.48)

and this equation is represented by |ZN| in Fig. 5.3.17.
In most practical cases there will be stray capacitances in parallel to both Vf
and Ve , and in addition between both inputs, as well as from the non-inverting input
to ground, and also from output to ground. A real world situation can be rather
complicated.
As we have shown, current feedback amplifiers are extremely sensitive to any
capacitances within the negative feedback loop. This means that whole families of
circuits (such as integrators, differentiators, some filter topologies, current amplifiers,
I to V converters, logarithmic amplifiers, etc.) can not be realized in the same way as
with conventional amplifiers. Fortunately, there are alternative ways of performing the
same functions and some of the most common ones are shown in Fig. 5.3.19 for a
quick comparison:
• an inverting integrator can be implemented using two current feedback (CFB)
opamps, with the bonus of providing both the inverting and non-inverting
configuration within the same circuit;
• a single pole inverting filter amplifier can be implemented by exploiting the
internal capacitance CT and the external feedback resistor Rf (useful for a high
frequency cut off; for lower frequencies a high value of Rf is impractical, since it
would cause a large voltage offset, owing to the large input bias current);
Fig. 5.3.19: Functionally equivalent circuits with conventional and current feedback amplifiers.
Integrators, filters and current to voltage converters in inverting configurations cannot be achieved
using a single CF amplifier. However, two-amplifier circuits can provide inherent amplifier pole
compensation, which is very important at high frequencies. Filters can be realized in the non-
inverting configuration. And feedback capacitance can be isolated by a resistive divider.
Fig. 5.3.20: This circuit exploits the ability of current feedback amplifiers to
adjust the bandwidth and gain peaking independently of the gain. The price paid
is a lower slew rate limit. See the frequency and step responses in Fig. 5.3.21.
Fig. 5.3.21: a) Frequency response; b) step response of the amplifier in Fig. 5.3.20. The closed loop
gain Acl = 1 + Rf/Re = 4, Rf = 150 Ω, Re = 50 Ω, the source resistance Rs = 50 Ω, the stray
capacitances Cs1,2 = 1 pF, the amplifier transcapacitance CT = 1 pF, while Rb is varied from 150 Ω
for the highest to 750 Ω for the lowest peaking.
Note, however, that in this way we lose the 'current on demand' property of the
CFB amplifier, since Rb will reduce the slew rate.
In a similar manner as was done for passive circuits in Part 2 and in Sec. 5.1,
the resulting gain peaking can be used to improve the step response of a multi-stage
system. As shown in Fig. 5.3.21, the gain peaking reveals the amplifier resonance,
which decreases with increasing Rb, while the DC gain remains almost unchanged.
The lessons learned from the current feedback technology can be used to
improve conventional voltage feedback amplifiers.
Besides improved semiconductor manufacturing technology, there are
basically two approaches: one is to take the voltage feedback amplifier and modify it
using the techniques of current feedback to avoid its weak points. One such example
is shown in Fig. 5.3.22. The other way, like the circuit in Fig. 5.3.23, is to take the
current feedback amplifier and modify it to make it appear to the outside world as a
voltage feedback amplifier.
Fig. 5.3.22: The voltage feedback amplifier, improved. The transistors Q1–4 form a differential
'folded' cascode, which drives the current mirror Q5,6. In this way the input is a conventional high
impedance differential pair, but the dominant pole compensation capacitor Cc is grounded, eliminating
the Miller effect. This circuit still exhibits slew rate limiting, although at much higher frequencies.
Typical examples of this configuration are Analog Devices' AD-817 and Burr–Brown's OPA-640.
The differential folded cascode Q1–4 and the current mirror Q5,6 of Fig. 5.3.22
can be modeled by a transconductance, gm, driven by the input voltage difference,
Δv = vs − vfb. Here vs is the signal source voltage and vfb is the feedback voltage,
derived from the output voltage vo and the feedback network divider, Re/(Rf + Re).
The current i = Δv gm drives the output buffer and the capacitance Cc:
Fig. 5.3.23: The basic current feedback amplifier (A1, A3, M1, M2) is improved by
adding another buffer, A2, which presents a high impedance to the feedback divider, Rf
and Re; an additional resistor, Rb, now takes the role of converting the feedback voltage
into a current and provides the bandwidth setting. Like the original current feedback
amplifier, this circuit is also (almost) free from slew rate limiting. However, the closed
loop bandwidth is, as in voltage feedback amplifiers, gain dependent. A typical
representative of this configuration is Analog Devices' OP-467.
The closed loop gain is the same as in the previous case, whilst the pole is
s1 = Re/[CT·Rb·(Rf + Re)], so the closed loop cutoff frequency is again an inverse
function of the closed loop gain.
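Reading the pole as s1 = Re/[CT·Rb·(Rf + Re)], a short numeric sketch shows the gain-bandwidth tradeoff reintroduced by Rb (all component values below are illustrative assumptions, not data for any actual device):

```python
# Closed loop pole of the circuit of Fig. 5.3.23:
#   s1 = Re / (CT * Rb * (Rf + Re))
# The cutoff falls as the closed loop gain G = (Rf + Re)/Re rises.
# CT, Rb, Re, Rf values are illustrative assumptions only.
from math import pi

CT = 2e-12      # transimpedance node capacitance [F] (assumed)
Rb = 500.0      # bandwidth setting resistor [ohm] (assumed)
Re = 100.0      # grounded leg of the feedback divider [ohm] (assumed)

for Rf in (0.0, 100.0, 400.0, 900.0):
    G  = (Rf + Re) / Re                          # closed loop gain
    fh = Re / (CT * Rb * (Rf + Re)) / (2 * pi)   # cutoff frequency [Hz]
    print(f"G = {G:4.0f}   fh = {fh/1e6:8.2f} MHz   G*fh = {G*fh:.3e}")
```

The printed product G·fh is constant, 1/(2π·CT·Rb): the familiar gain-bandwidth product of a voltage feedback amplifier.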
Fig. 5.3.24: An interesting combination of the circuits in Fig. 5.3.22 and 5.3.23 is the so
called ‘Quad Core’ structure, [Ref. 5.32]. Here both the inverting and the non-inverting
input buffer currents are combined by the current mirrors to drive the Ccb of Q9,10. The
non-labeled transistors provide the Vbe compensation for Q1–Q4. Typical representatives
are Analog Devices’ AD-8047, AD-9631, AD-8041 and others.
The current available to charge the Ccb capacitances of Q9,10 is set by the
input voltage difference and R5. This current is effectively doubled by the input
structure, thus increasing the bandwidth, the loop gain, and the linearity. A further
bandwidth improvement is achieved by the low impedance of Q7,8, which are
practically fully open and so provide a tight control of the Q9,10 base voltages,
reducing the Miller effect considerably. The circuit behaves as a voltage feedback
amplifier with the advantages of low offset and high loop gain, and with a bandwidth
and slew rate limiting close to those of current feedback amplifiers.
The output buffer stage can also be improved for greater current handling
efficiency. An example is shown in Fig. 5.3.25.
Here the collectors of Q2,3 and Q1,4 are summed and mirrored by Q5,7 and
Q6,8, respectively, and finally added to the output load current. With appropriate bias
this scheme allows a reduction of the quiescent power supply current to just one third
of the conventional buffer, whilst not compromising the full power bandwidth. At the
same time, the circuit has a comparable loading capability and offers better linearity.
Fig. 5.3.25: Output buffer stage with improved current handling.
Another very important property of high speed opamps is their ability to drive
capacitive loads. In the amplifier of Fig. 5.3.26, because of its non-zero output
resistance Ro, the capacitive load CL adds a high frequency pole within the feedback
loop. The feedback becomes frequency dependent and the phase margin is lowered,
thus compromising the stability.
Fig. 5.3.26: Owing to the non-zero output impedance a capacitive load adds another pole
within the feedback loop. If the closed loop gain is too low the resulting increase in phase can
make the feedback positive at high frequencies, instead of negative, destabilizing the amplifier.
Here DR is the resistive divider formed by the output resistance Ro, the load
resistance RL and the total resistance of the feedback divider Rf + Re:

    DR = [RL·(Rf + Re)/(RL + Rf + Re)] / [Ro + RL·(Rf + Re)/(RL + Rf + Re)]    (5.3.56)

The pole sL is formed by the load capacitance CL and the equivalent resistance seen
by it, Rq, whilst ωL is the corresponding cut off frequency:

    sL = −1/(Rq·CL);   ωL = |sL|    (5.3.57)

Rq is simply the parallel combination of all the resistances at the output node:

    Rq = 1 / (1/Ro + 1/RL + 1/(Rf + Re))    (5.3.58)

The internal output voltage, vo, is a function of the input voltage difference, Δv, and
the amplifier open loop gain A, which, in turn, is also a function of frequency, A(s):

    vo = A(s)·Δv    (5.3.59)

The input voltage difference is, of course, the difference between the signal source
voltage and the output (load) voltage, attenuated by the feedback resistors:

    Δv = vs − vL·Re/(Rf + Re)    (5.3.60)

The open loop gain A(s) is defined by the DC open loop gain A0 and the frequency
dependent term owed to the amplifier dominant pole at the frequency ω0:

    A(s) = A0·ω0/(s + ω0)    (5.3.61)

With this in mind, we can express the internal output voltage:

    vo = A0·[ω0/(s + ω0)]·[vs − vL·Re/(Rf + Re)]    (5.3.62)

and by inserting this back into Eq. 5.3.53 we have the load voltage:

    vL = A0·[ω0/(s + ω0)]·[vs − vL·Re/(Rf + Re)]·DR·ωL/(s + ωL)    (5.3.63)

    vL·{1 + A0·[Re/(Rf + Re)]·[ω0/(s + ω0)]·[ωL/(s + ωL)]·DR} =
        = vs·A0·[ω0/(s + ω0)]·[ωL/(s + ωL)]·DR    (5.3.64)
The product of the poles, s1·s2, is a function of not just ω0 and ωL, but also of
the open loop gain A0 and the closed loop feedback dividers, DR and Re/(Rf + Re)
(refer to Appendix 2.1 to find the system poles of a 2nd-order function). Since the
output resistance, Ro, is usually much lower than both the load resistance RL and the
feedback resistance Rf + Re, the output divider DR is usually between 0.9 and 1.
The system’s stability is therefore dictated by the amount of loop gain when ω → ωL.
Thus close to ωL the loop gain will be higher than 1 either if A0 is very high, or if ωL
is relatively low and Rf → 0, that is, if the closed loop gain approaches unity!
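This can be checked numerically from Eq. 5.3.56–5.3.58: the sketch below evaluates the loop gain magnitude at the load pole frequency for a unity gain follower and for a closed loop gain of 5. The open loop and load values are illustrative assumptions, not the values behind Fig. 5.3.27.

```python
# Loop gain near the load pole for the amplifier of Fig. 5.3.26,
# built from Eq. 5.3.56-5.3.58 and the single pole open loop model
# A(f) = A0/(1 + j*f/f0). All element values are illustrative
# assumptions.
from math import pi

A0, f0 = 1e5, 100.0              # open loop DC gain and dominant pole (assumed)
Ro, RL, CL = 50.0, 1e3, 10e-9    # output resistance, load resistance, load cap

def loop_gain_at_fL(Rf, Re):
    Rfe  = Rf + Re
    Rpar = RL * Rfe / (RL + Rfe)
    DR   = Rpar / (Ro + Rpar)                 # Eq. 5.3.56
    Rq   = 1 / (1/Ro + 1/RL + 1/Rfe)          # Eq. 5.3.58
    fL   = 1 / (2 * pi * Rq * CL)             # Eq. 5.3.57
    beta = Re / Rfe
    A    = A0 / abs(1 + 1j * fL / f0)         # |A| at fL
    return A * beta * DR / abs(1 + 1j)        # load pole adds 1/sqrt(2) at fL

# Unity gain follower (Rf = 0; Re left open, modeled as a huge value):
print(loop_gain_at_fL(Rf=0.0, Re=1e9))
# Closed loop gain of 5:
print(loop_gain_at_fL(Rf=4e3, Re=1e3))
```

The unity gain case keeps about five times more loop gain at fL, which is exactly why reducing the closed loop gain degrades the stability margin.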
This is often counter-intuitive, not just to beginners, but sometimes even to
experienced engineers. Usually, if we want to enhance the amplifier’s stability, we
increase the feedback at high frequencies by placing a capacitor in parallel with Rf.
However, in the case of a capacitive load the amplifier will be turned into an
oscillator by that very procedure. In contrast, the stability will improve if we increase the
closed loop gain (increase Rf or decrease Re). This is illustrated in Fig. 5.3.27, where
the gain and the phase are plotted for three values of Rf (∞, 4·Re and 0) and the
capacitive load is such that the loop gain at fL ≈ 2·10⁴·f0 is about 3.
Conventional opamp compensation schemes (consisting usually of a resistor
or a series RC network, connected between both inputs, thus increasing the noise
gain, without affecting the signal gain) increase the stability at the expense of either
the gain, or the bandwidth, or both! Conventional compensation should be used as the
last resort only, when the circuit must meet an unknown load capacitance, which can
vary considerably.
In fixed applications, when the capacitive load is known, or is within a narrow
range, it is much better to compensate the load by inductive peaking, as we have seen
in Part 2. The simplest approach is shown in Fig. 5.3.28, where the load impedance
appears real to the amplifier output, so that the feedback is not frequency dependent.
Fig. 5.3.27: An amplifier driving a capacitive load can become unstable if its closed loop gain is
decreased too much: a) with no feedback and b) with a gain of 5, the amplifier is stable, although
in the latter case there is already a pronounced peaking; whilst in c), with the closed loop gain
reduced to 1, the peaking is very high, the phase goes over 360°, and oscillation is excited at
the frequency at which the phase equals 360°.
Fig. 5.3.28: Capacitive load compensation which makes the load appear
real and equal to RL to the opamp. This works for fixed load impedances.
Here the inductance Lc and its parallel damping resistance, Rd, are chosen so
that Rd = RL and Lc = RL²·CL, and the amplifier sees an impedance equal to RL from
DC up to the frequency at which the coil stray capacitance starts to influence the
compensation. With a careful inductance design the frequency at which this happens
can be much higher than the critical amplifier frequency.
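That the amplifier indeed sees a purely resistive RL can be verified numerically: with Rd = RL and Lc = RL²·CL, the impedance of (Lc ∥ Rd) in series with (RL ∥ CL) is exactly RL at every frequency. The component values below are arbitrary illustrations.

```python
# Check of Fig. 5.3.28: with Rd = RL and Lc = RL**2 * CL the network
# (Lc || Rd) in series with the load (RL || CL) is purely resistive,
# equal to RL, at every frequency. Values are arbitrary illustrations.
from math import pi

RL, CL = 50.0, 100e-12
Rd = RL
Lc = RL**2 * CL

for f in (1e3, 1e6, 1e8, 1e9):
    w = 2 * pi * f
    Zcoil = (1j * w * Lc * Rd) / (Rd + 1j * w * Lc)   # Lc parallel Rd
    Zload = RL / (1 + 1j * w * RL * CL)               # RL parallel CL
    Z = Zcoil + Zload
    print(f"f = {f:10.0f} Hz   |Z| = {abs(Z):.4f} ohm   Im(Z) = {Z.imag:.2e}")
```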
However, even with such compensation, the bandwidth can be lower than
desired, since the compensation network forms a low pass filter with the load, and the
value of the inductance is influenced by both the load resistance RL and the load
capacitance CL.
The cut off frequency is ωh = 1/√(Lc·CL) = 1/(RL·CL) and that is much lower
than ωL of Eq. 5.3.57, which would apply if the amplifier could be made stable by
some other means. If RL and CL can be separated, it is possible to build a 2-pole
series peaking or a T-coil peaking circuit, tuned to form a 3-pole system together with the
pole associated with the amplifier closed loop cut off. This procedure is similar to the
one described in Part 2 and also in Sec. 5.1, so we leave it as an exercise for the reader.
Another compensation method, shown in Fig. 5.3.29, is to separate the AC and
DC feedback paths by a small resistance Rc in series with the load and a feedback
bridging capacitance Cc:
    Cc·Rf ≥ Rc·CL
Fig. 5.3.29: The capacitive load is separated from the AC feedback path by a small
resistor Rc in series with the output; owing to the capacitance Cc this type of
compensation can be applied only to voltage feedback unity gain compensated amplifiers.
This type of compensation can be very effective, since very small values of Rc
can be used (5–15 Ω or so), lowering the bandwidth only slightly; however, due to the
feedback bridging capacitance Cc, it can be applied only to voltage feedback unity
gain compensated amplifiers; it can not be used for current feedback amplifiers.
A more serious problem is the fact that, in some applications, the load
capacitance can vary considerably; for example, some types of fast AD converters
have their input capacitance code dependent (thus also signal level dependent!). It is
therefore desirable to design the amplifier output stage with the lowest possible output
resistance and employ a compensation scheme which would work for a range of
capacitance values.
Fig. 5.3.30 shows the implementation found in some CFB opamps, where the
compensation network, formed by Cc and Rc, is in parallel with the output buffer.
With a high impedance load the voltage drop on the output resistance Ro is small and
the current through the compensation network is low; but with a capacitive load or
other low impedance load the output current causes a high voltage drop on Ro, and
consequently the current through Cc increases. Effectively the series combination of
Cc and CL is added in parallel to CT, thus lowering the system open loop bandwidth
in proportion with the load and keeping the loop stable.
Fig. 5.3.30: The output buffer with a finite output resistance Ro would, when driving a capacitive
load CL, present an additional pole within the feedback loop (taken from vo), which could
compromise the amplifier stability. The compensation network, formed by a series connection of Cc
and Rc, draws part of the feedback current to the output node, effectively increasing CT in
proportion to the load, reducing the transimpedance and preserving the closed loop stability.
    vT − vo = io·Ro    (5.3.69)

    ic = (vT − vo)/Zc = io·Ro/Zc    (5.3.70)

The transimpedance ZT is driven by the feedback current ifb; the voltage vT, which in
an ideal case would be equal to ifb·ZT, is now lower, because part of the current is
stolen by the compensation network Zc:

    vo = ifb·ZT − io·Ro·(1 + ZT/Zc)    (5.3.72)
This equation shows that the original transimpedance equation (Eq. 5.3.18) is
modified by the output current and the ZT/Zc ratio. Thus a capacitive load, which
would draw high currents at high frequencies (or at the step transition), will
automatically lower the system open loop cut off frequency. Consequently the gain at
high frequencies is reduced so that the closed loop crossover remains well above the
secondary pole (created by Ro and CL).
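A small numeric sketch of Eq. 5.3.72 illustrates the mechanism; the element values are rough assumptions for a generic CFB opamp at some fixed high frequency, not data for a particular device:

```python
# Numeric reading of Eq. 5.3.72, vo = ifb*ZT - io*Ro*(1 + ZT/Zc):
# the larger the output current demanded by the load, the lower the
# effective transimpedance. All values are rough assumptions.
ZT  = 1e6     # open loop transimpedance [ohm] (assumed)
Ro  = 30.0    # output buffer resistance [ohm] (assumed)
Zc  = 5e3     # |Zc| of the Rc + Cc network [ohm] (assumed)
ifb = 1e-4    # feedback error current [A] (assumed)

for io in (0.0, 1e-3, 5e-3):      # light ... heavy (capacitive) loading
    vo = ifb * ZT - io * Ro * (1 + ZT / Zc)
    print(f"io = {io*1e3:4.1f} mA   vo = {vo:7.2f} V")
```

The drop in vo with rising io is how the compensation automatically throttles the gain when the load draws large high frequency currents.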
Note that the distortion at high frequencies of the compensated amplifier will
be worse than that of a non-compensated one.
Fig. 5.3.31: The output level clipping is more precise if a biased Zener diode in a
Schottky diode bridge controls the feedback. However, this circuit can be
used only with voltage feedback unity gain compensated amplifiers.
Fig. 5.3.32: The output buffer with a separate lower supply voltage can be used for
output signal clipping with current feedback amplifiers. Since feedback is lost during
clipping, the input stage must be protected from overdrive by anti-parallel Schottky
diodes, which, inevitably, increase the input capacitance.
Fig. 5.3.33: Output signal clipping by limiting the internal CT node of a current feedback amplifier.
The transistors Q5–8 are normally reverse biased. For positive voltages Q6,8 (Q5,7 for negative) start
conducting only when the voltage at CT reaches VcH (VcL).
Transistors Q1–Q4 form the two current mirrors, which reflect the feedback
current ifb from the inverting input (the first buffer output) into the transimpedance
node (at CT). Transistors Q5 and Q7 are normally reverse biased and so are the B–E
junctions of Q6 and Q8. The transistors Q5,7 start to conduct when the CT voltage
(and consequently the output voltage vo) falls below VcL. Likewise, the transistors
Q6,8 conduct when the CT voltage exceeds VcH. When either of these transistors
conduct, they divert the mirrored current ifb to one of the supplies. Note that the
voltages which set the clipping levels can be as high as the supply voltage. Also, they
can be adjusted independently, as long as VcH > VcL. Since only two transistors at a
time are required to switch on or off, the limiting action, as well as the recovery from
limiting, can be very fast.
The most common misconception of overdrive recovery, even amongst more
experienced engineers, is the belief that short switching times can be achieved only if
the transistors are prevented from being saturated by artificially keeping them within a
linear signal range. It often comes as a surprise if this does not solve the problem or,
in some cases, can make it even worse.
It is true that adding Schottky diodes to a TTL gate makes it faster than
ordinary TTL, and the inherently non-saturating ECL is even faster. Fast recovery is
ultimately limited by the so called minority carrier storage time within the
semiconductor material, and it depends on the type and concentration of dopants
which determine the mobility of minority charge carriers. However, in analog circuits
the problem is radically different from that in digital circuits, since we are interested not
just in how quickly the output will start to catch up with the input, but rather in how quickly
it will follow the input to within 0.1 %, or even 0.01 %. In current state-of-the-art
circuitry, the best recovery times are < 5 ns for 0.1 % error and < 25 ns for 0.01 %.
In this respect it is the thermal ‘tails’ that are causing trouble. Wideband
circuits need more power than conventional circuits, so good thermal balance is
critical. Careful circuit design is required in order to keep temperature differences
low, both during linear and non-linear modes of operation.
To some extent, we have been dealing with thermal problems in Part 3. There
we were discussing two-transistor circuits, such as the cascode and the differential
amplifier. But the problem in multi-transistor circuits is that, even if the circuit is
differentially or complementarily symmetrical, only one or two transistors will be saturated
during overdrive; the rest will either remain linear or be cut off, which in the latter
case means cooling down. In saturation there is a low voltage across the device
(usually a few tens of mV), so, even if the current through it is large, the power
dissipation is low. Inevitably this results in different thermal histories of different
parts of the circuit.
In integrated circuits the temperature gradients across the die are considerably
lower than those between transistors in discrete circuits, but complex circuits can be
large and heat conduction can be limited, so designers tend to reduce the power of
auxiliary circuitry which is not essential for high speed signal handling. Therefore, hot
spots can still exist and can cause trouble. Another important factor is gain switching
and DC level adjustment, which must not affect the thermal balance, either because of
amplifier working point changes or because of the control circuitry.
Circuits which rely on feedback for error correction can be inherently less
sensitive to thermal effects (except for the input differential transistor pair!).
However, the feedback stability or, more precisely, the stability without feedback during
overdrive, can cause identical or even worse problems. If there is insufficient damping
during the transition from saturation back to the linear range, long ringing can result,
impairing the recovery time. Such problems are usually solved by adding normally
reverse biased diodes, which conduct during the saturation of a nearby transistor,
allowing the remaining part of the circuit some control over critical nodes.
We will review a few possible solutions in the following section.
forgotten, then ‘reinvented’ from time to time, only to be rediscovered by the broader
engineering community in 1975, when the Quad 405 audio power amplifier came onto
the market [Ref. 5.42 and 5.43]. Following the presentation article by the 405’s
designer, Peter J. Walker, the idea was refined and generalized by a number of
authors, among the first by J. Vanderkooy and S.P. Lipshitz [Ref. 5.44].
Later M.J. Hawksford [Ref. 5.46] showed that between the two extremes (pure
feedback on one end and pure feedforward on the other) there is a whole spectrum of
solutions combining both concepts. Moreover, such solutions can be applied both at
system level (like the Quad 405 itself or similar solutions, as in [Ref. 5.48]) or down
to the transistor level (as in [Ref. 5.47]).
It is interesting that while there were several examples of feedforward
application in the field of RF communications (some even before 1975), most of the
theoretical work was done with audio power amplification in mind. Apparently it took
some time before the designers of high speed circuits grasped the full potential and the
inherent advantages of the feedforward error correction techniques. In a way, this
situation has not changed much, for even today we meet feedforward error correction
mostly in RF communications systems and top level instrumentation. At the IC level
we find feedforward only as a method of phase compensation (bypassing a slow inter-
stage, not error correction), mostly in older opamps. From 1985 the advance in
semiconductor processing has been extremely fast, discouraging amplifier designers
from seeking more ‘exotic’ circuit topologies.
Before we examine the benefits of the feedforward technique for wideband
amplification we shall first compare the feedback and feedforward principles from the
point of view of error correction.
Fig. 5.4.1 shows the comparison of amplifiers with feedback and feedforward
error correction. The feedforward amplifier is shown in its simplest form — later we
shall see other possible realizations of the same principle.
Fig. 5.4.1: Comparison of amplifiers with feedback and feedforward error correction. The
feedback amplifier has excess gain, A(s), which is reduced to the required level by taking the
output voltage, suitably attenuated (β), to the inverting input of the differential amplifier (hence the
name ‘negative feedback’). The feedforward case, in its most simple form, has two amplifiers: the
main amplifier A1(s) is assisted by the auxiliary amplifier A2(s), which takes the difference
between the input voltage and the attenuated main amplifier output forward to the load.
The analysis of the feedback amplifier has already been presented in Sec. 5.3,
but we shall repeat some expressions in order to correlate them with the feedforward
amplifier. From Fig. 5.4.1a we can write:
The fraction on the right hand side is the amplifier closed loop gain Gfb; it can be
rewritten in such a form, from which it is easily seen that the gain expression can be
approximated as 1/β if A(s) is large:

    Gfb = A(s)/[1 + β·A(s)] = (1/β)·1/[1 + 1/(β·A(s))] ≈ 1/β = 1 + Rf/Re,  as A(s) → ∞    (5.4.3)
Of course, this final simplification is valid at low frequencies only. Since A(s) is
finite and falls with frequency owing to the amplifier dominant pole ω0:

    A(s) = A0·ω0/(s + ω0)    (5.4.4)

the closed loop transfer function must also have a pole, but at a frequency ωh at which
A(ωh) = 1/β. Since fh = |ωh|/2π and f0 = |ω0|/2π we can write:

    1/β = A0·f0/(fh + f0)    (5.4.5)

and, considering that β·A0 ≫ 1, the closed loop cut off frequency is:

    fh = f0·(β·A0 − 1) ≈ f0·β·A0 = f0·A0/Gfb    (5.4.6)
So the closed loop cut off frequency of a voltage feedback amplifier is inversely
proportional to the closed loop gain.
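Eq. 5.4.6 is easily evaluated; with an assumed A0 = 10⁵ and f0 = 100 Hz (illustrative values only), the product Gfb·fh stays at the gain-bandwidth product A0·f0:

```python
# Gain-bandwidth tradeoff of the voltage feedback amplifier, Eq. 5.4.6:
#   fh = f0*(beta*A0 - 1) ~ f0*A0/Gfb
# A0 and f0 are illustrative assumptions.
A0 = 1e5         # open loop DC gain (assumed)
f0 = 100.0       # dominant pole frequency [Hz] (assumed)

for Gfb in (1.0, 10.0, 100.0):
    beta = 1.0 / Gfb
    fh = f0 * (beta * A0 - 1)
    print(f"Gfb = {Gfb:5.0f}   fh = {fh/1e6:7.4f} MHz   Gfb*fh = {Gfb*fh:.3e}")
```

Each printed product Gfb·fh is close to A0·f0 = 10⁷ Hz, the gain-bandwidth product.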
    A(f) = A0·f0/(jf + f0);   Gfb(f) = (1/β)·fh/(jf + fh);   fh = f0·(β·A0 − 1);   β = Re/(Rf + Re)
Fig. 5.4.2: For a voltage feedback amplifier the closed loop frequency response Gfb(f)
depends on the open loop gain A0, its dominant pole f0 and the feedback attenuation factor β.
The transition frequency fc is equal to the amplifier gain bandwidth product (but only if the
amplifier does not have a secondary pole close to or lower than fc).
For the feedforward amplifier, Fig. 5.4.1b, we must first realize that the load
voltage, vo, is the difference of the output voltages of the individual amplifiers:

    vo = v1 − v2    (5.4.7)

So, whatever the actual value of the auxiliary amplifier gain A2, the system’s
gain Gff will be equal to 1/β if we can make A1 = 1/β. Note that we have not
required either of the two gains to be very high, as we were forced to for feedback
amplifiers; therefore this result is achieved without any approximation! True, if A1 is
frequency dependent and β is not, at high frequencies Eq. 5.4.12 would not hold and,
consequently, Eq. 5.4.13 would not be so simple.
However, when A1 starts to fall with frequency the product β·A1·A2 is also
reduced by the same amount, and the appropriate part of A2 compensates the loss.
This would be so as long as the gain A2 remains constant with frequency (and even
beyond its own cut off, provided that there still is enough gain for correction!).
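Taking the system gain in the form Gff = A1 + A2 − β·A1·A2 (consistent with the derivation leading to Eq. 5.4.18), the compensation can be demonstrated numerically; the single pole model of A1 and all parameter values are illustrative assumptions:

```python
# Feedforward correction with Gff = A1 + A2 - beta*A1*A2: when
# beta*A2 = 1 the A1 terms cancel identically, so the system gain
# stays at 1/beta regardless of the A1 roll-off. Single pole A1
# model; all numbers are illustrative assumptions.
beta = 0.1                       # nominal gain 1/beta = 10
A01, f1 = 1.0 / beta, 1e6        # main amplifier: DC gain 1/beta, pole 1 MHz
A2 = 1.0 / beta                  # auxiliary amplifier, ideally flat

for f in (1e3, 1e6, 3e6, 1e7):
    A1 = A01 / (1 + 1j * f / f1)             # main amplifier rolls off
    Gff = A1 + A2 - beta * A1 * A2
    print(f"f = {f:9.0f} Hz   |A1| = {abs(A1):6.3f}   |Gff| = {abs(Gff):.6f}")
```

With β·A2 = 1 the result is |Gff| = 1/β even far beyond the A1 cutoff; in practice the correction holds only as long as A2 itself remains flat.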
If you now think that there are no more surprises with feedforward amplifiers,
consider the following points:
Eq. 5.4.11 is, in a sense, symmetrical, thus Gff = 1/β (as in Eq. 5.4.13) can be
achieved also if we decide to make:
However, the advantage of making β·A1 = 1 is that the input signal is
canceled at the auxiliary amplifier differential input (remaining as a common mode
signal only). In contrast with the ‘output balance’ condition represented by Eq. 5.4.14,
Eq. 5.4.12 represents the so called ‘input balance’ condition, in which the auxiliary
amplifier needs only a very low level amplitude swing at low frequencies (but rising
to 1/2 amplitude and higher at the A1 cut off and beyond, respectively). It therefore
processes only the errors of the main amplifier, canceling them at the load and leaving
only those of the auxiliary amplifier; as errors in processing the main amplifier error,
those are secondary errors only.
It is also possible to make both gains equal to 1/β:

where A01 and A02 are the main and auxiliary amplifier DC gain, respectively. By
choosing A01 = 1/β we get:

    Gff = (1/β)·ω1/(s + ω1) + A02·[ω2/(s + ω2)]·[s/(s + ω1)]    (5.4.18)
Note that the auxiliary amplifier gain is effectively multiplied by the high pass
version of the main amplifier’s frequency dependence. If we now decide to make
A02 = 1/β also, we obtain:

    Gff = (1/β)·{ω1/(s + ω1) + [ω2/(s + ω2)]·[s/(s + ω1)]}
        = (1/β)·[ω1/(s + ω1)]·[1 + (ω2/ω1)·s/(s + ω2)]    (5.4.19)

The second fraction represents a second-order band pass response, which will add
some gain peaking and extend the bandwidth by a factor of almost 3:
Fig. 5.4.3: The feedforward amplifier bandwidth Gff is highest (and optimized in
the sense of the lowest gain × bandwidth requirement of the auxiliary amplifier) if both
amplifiers have the same bandwidth and gains equal to 1/β. In this figure
1/β = 10, A01 = 10, and A02 = 11 (in order to distinguish A2 from A1 more
easily). If ω1 ≠ ω2 the Gff bandwidth will be lower.
For the feedback amplifier we want to know the influence of variations in the
amplifier open loop gain A on the closed loop gain Gfb, as defined by Eq. 5.4.3:

    S_A^Gfb = (A/Gfb)·(∂Gfb/∂A) = [A·(1 + βA)/A]·∂/∂A·[A/(1 + βA)]
            = 1/(1 + βA) → 0 as A → ∞    (5.4.22)

This means that the gain sensitivity is low only if A is very high. We also want to
know the influence of variations of the feedback attenuation β:

    S_β^Gfb = (β/Gfb)·(∂Gfb/∂β) = [β·(1 + βA)/A]·∂/∂β·[A/(1 + βA)]
            = −βA/(1 + βA) → −1 as A → ∞    (5.4.23)
In the case of the feedforward amplifier, the influence of A1 and A2 on the
system gain, as well as the influence of β, using Eq. 5.4.11 for Gff, is:

It is evident that the second condition in Eq. 5.4.24 and the first in Eq. 5.4.25,
as well as the third condition in Eq. 5.4.26, are the same as for the feedback amplifier.
However, remember that for the feedback amplifier the results belong to the ideal case
for which A → ∞, so in practice they can only be approximated, whilst for the
feedforward amplifier they can be realized without any approximation (but within the
limits of the specified component tolerances).
In a similar way we can determine the error reduction. For the feedback
amplifier we have:

    εfb/εA = 1/(1 + βA) → 0 as A → ∞    (5.4.27)

and again, zero distortion is achievable only in the idealized case of infinite gain.
In contrast, for the feedforward amplifier we have:

    εff/εA1 = 1 − βA2 = 0 when βA2 = 1    (5.4.28)
and this extraordinary result can be realized (not only approximated!) to whatever
degree of precision we are satisfied with (accounting also for the technology cost).
Also, we must not forget that the open loop gain of the feedback amplifier
decreases with frequency, so, for a given A0, the theoretically achievable maximum
error reduction, 1/(1 + βA0), is obtained only from DC up to the frequency of the
dominant pole, f1; beyond f1 the error increases proportionally with frequency.
In contrast, feedforward amplifiers offer the same level of error reduction from
DC up to the full feedforward system bandwidth and even beyond!
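The contrast can be made concrete with the residuals of Eq. 5.4.27 and 5.4.28, using a single pole open loop model; the parameter values are illustrative assumptions:

```python
# Residual error factors vs. frequency: feedback (Eq. 5.4.27) with a
# single pole open loop gain, against feedforward (Eq. 5.4.28) with a
# flat auxiliary amplifier set to beta*A2 = 1. Parameters are
# illustrative assumptions.
beta, A0, f1 = 0.1, 1e4, 1e3     # assumed open loop model

for f in (10.0, 1e3, 1e5, 1e7):
    A = A0 / (1 + 1j * f / f1)               # main amplifier open loop gain
    err_fb = abs(1 / (1 + beta * A))         # feedback residual error
    err_ff = abs(1 - beta * (1 / beta))      # feedforward residual error
    print(f"f = {f:9.0f} Hz   feedback: {err_fb:.5f}   feedforward: {err_ff:.1f}")
```

The feedback residual grows steadily above f1, while the feedforward residual stays at zero for as long as the auxiliary amplifier remains flat.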
The main drawback of the feedforward amplifier in Fig. 5.4.1b is the ‘floating’
load (between the outputs of both amplifiers); in most cases we would prefer a ground
referenced load, instead.
We have already noted that the output impedance of the main amplifier is not
reduced by feedforward action; in fact, it does not need to be low in order to achieve
effective error canceling. This leads to the idea of summing passively the two outputs,
with that of the auxiliary amplifier inverted in phase:
Fig. 5.4.4: Grounded load feedforward amplifier. Note the inverted input polarity of the
auxiliary amplifier, compared with the circuit in Fig. 5.4.1b. This allows passive signal
summing over the output impedances Z3 and Z4.
    Gff = vo/vs = α·[A1 + (Z4/Z3)·(A2 − βA1A2)] = α/β    (5.4.33)
        (when βA1 = 1, or βA2 = Z3/Z4)

Because of the passive summing, the correct balance condition and error
canceling for this circuit is achieved when the two output voltages are in the inverse
ratio of the impedances:

    v2/v1 = Z3/Z4    (5.4.34)
If we assume that the output balance has been achieved, then:
The auxiliary amplifier will, under this condition, draw considerable current,
even without the load. To reduce the current demand, we have to give up the input
balance condition. If we set v2 = v1 then there would be no current if ZL = ∞, and
by choosing Z3||Z4 ≪ ZL the current demand is reduced for the nominal load:

    v2 = A2·(1/A1 − β)·v1 = v1    (5.4.38)

and this means that:

    A2/A1 − βA2 = 1    (5.4.39)

which, considering Eq. 5.4.35, results in:

    A2/A1 = 1 + Z3/Z4    (5.4.40)
and this should be compared with the ‘simple’ balance condition in Eq. 5.4.37.
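A quick numeric check of this chain (assuming, per the derivation above, that Eq. 5.4.35 amounts to β·A2 = Z3/Z4; the ratios used are arbitrary examples):

```python
# Reduced current balance of Eq. 5.4.38-5.4.40: with beta*A2 = Z3/Z4
# (assumed reading of Eq. 5.4.35) and A2/A1 = 1 + Z3/Z4, the auxiliary
# output v2 = A2*(1/A1 - beta)*v1 equals v1, so no current flows into
# an open load. The ratios below are arbitrary examples.
beta = 0.05
Z3_over_Z4 = 4.0

A2 = Z3_over_Z4 / beta          # from beta*A2 = Z3/Z4 (assumption)
A1 = A2 / (1 + Z3_over_Z4)      # Eq. 5.4.40

v1 = 1.0                        # main amplifier output (arbitrary units)
v2 = A2 * (1.0 / A1 - beta) * v1   # Eq. 5.4.38
print(v2)                       # equal to v1 up to rounding
```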
Another configuration, known as the ‘error take off’, by Sandman [Ref. 5.49],
is shown in Fig. 5.4.5. Here both the main and the auxiliary amplifier are of the
negative feedback type; however, the auxiliary amplifier senses both the distortion
and the gain error from the main amplifier feedback input and delivers it to the load in
the same feedforward passive summing manner.
Fig. 5.4.5: ‘Error take off’ principle. The error Δv of the main amplifier, left by feedback,
is taken by the auxiliary amplifier and fed forward to the load, where it is passively
summed with the main output. The impedances Z1 to Z4 form the balancing bridge.
With an ideal main amplifier the voltage at its inverting input would be at a
(virtual) ground potential; any signal Δv present at this point represents an attenuated
version of the main amplifier error (gain error and distortion). If adequately amplified,
it can be added to the main output to cancel the error at the load:

    Δv = [Zi/(Zi + Z1)]·ε    (5.4.41)

To achieve effective error cancellation, we must set:

    (Z2/Re)·[Zi/(Zi + Z1)] = Z3/Z4    (5.4.42)
A variation of this circuit is shown in Fig. 5.4.6, which actually represents the
original Quad 405 ‘current dumping’ amplifier configuration, and we now see how it
follows from both the ‘error take off’ circuit and the pure feedforward circuit. If the
balance condition is achieved, A2 must compensate whatever the amount of error at
the A1 output. It is important to realize that the input signal of A1 can be taken from
any suitable point within the circuit (the only condition is that it should, preferably,
not be out of phase); the A2 output represents just such a convenient point. The
balance condition for the 405 is:

    Z3/Z4 = Z2/Z1    (5.4.43)

and the main amplifier error is canceled. Again, the impedances Zn can be real or
complex, whichever combination satisfies this equation.
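For the component roles used in the Quad 405 (Z1, Z3 resistors, Z2 a capacitor, Z4 an inductor), the balance Z3/Z4 = Z2/Z1 is frequency independent exactly when L4 = R1·R3·C2. The values below are arbitrary illustrations, not the actual Quad 405 parts:

```python
# Frequency independence of the Quad 405 bridge balance (Eq. 5.4.43),
# Z3/Z4 = Z2/Z1, with Z1, Z3 resistors, Z2 a capacitor, Z4 an
# inductor, and L4 chosen as R1*R3*C2. Component values are arbitrary
# assumptions, not the actual Quad 405 parts.
from math import pi

R1, R3, C2 = 500.0, 47.0, 120e-9
L4 = R1 * R3 * C2                # balance condition L4 = R1*R3*C2

for f in (100.0, 1e3, 20e3):
    w  = 2 * pi * f
    Z1, Z2, Z3, Z4 = R1, 1 / (1j * w * C2), R3, 1j * w * L4
    print(f"f = {f:8.0f} Hz   |Z3/Z4 - Z2/Z1| = {abs(Z3/Z4 - Z2/Z1):.2e}")
```

Both bridge ratios reduce to 1/(jω·R1·C2), so the balance holds at every frequency.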
Fig. 5.4.6: Current dumping: by taking the main amplifier input signal (vx) from the
auxiliary amplifier output (v2), we obtain the original Quad 405 amplifier configuration.
Although it is effective in error cancellation, its main disadvantages are the requirement
for a large voltage at the auxiliary amplifier output and a relatively low cut off frequency. In
the Quad 405, Z1 and Z3 are resistors, Z2 is a capacitor and Z4 is an inductor.
Let us return to the basic feedforward amplifier of Fig. 5.4.1b. It is clear that if
both the main and the auxiliary amplifier have limited bandwidth, time delays are
inevitable. In order to work properly the feedforward scheme must include some time
delay compensation, otherwise error cancellation close to and beyond the system cut
off frequency would not occur.
For two sinusoidal signals a small time delay between them is transformed
into a small amplitude error when they are summed, and it might even be possible to
compensate for it by altering the balance condition. However, for a square wave or a step
function, large spikes, equal to the full amplitude difference, would result, and these
cannot be corrected by the balance setting components. Moreover, these spikes can
overdrive the input of the auxiliary amplifier and saturate its output; error correction
in such conditions is inoperative. Thus for high speed amplification some form of
time delay compensation is mandatory.
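The full-amplitude spike is easy to see numerically: summing a step with its delayed, inverted replica leaves a pulse of full signal amplitude whose width equals the uncompensated delay (the 100 ps delay below is an assumed value, for illustration only):

```python
dt = 1e-12                     # 1 ps simulation step
n = 5000                       # 5 ns window
delay_s = 100e-12              # assumed uncompensated path delay
k0 = 1000                      # step edge at 1 ns
kd = k0 + round(delay_s / dt)  # edge of the delayed replica

step    = [1.0 if i >= k0 else 0.0 for i in range(n)]
delayed = [1.0 if i >= kd else 0.0 for i in range(n)]
residue = [a - b for a, b in zip(step, delayed)]  # seen by the summing node

peak  = max(residue)           # full signal amplitude, not a small error
width = sum(residue) * dt      # the spike lasts exactly delay_s
```

No adjustment of the balance components can remove this residue, since its amplitude equals the signal itself; only a matching delay in the other path can.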
In Fig. 5.4.7 we see the general principle of time delay compensation. Since each
amplifier has its own time delay acting on a different summing node, at least two
separate time delay circuits are needed. Here, τ1 compensates the delay of the main
amplifier, allowing the auxiliary amplifier input to form the difference between the
input and the β-attenuated signal with the correct phase. Likewise, τ2 does the same for
the auxiliary amplifier delay, for correct summing at the output.
Fig. 5.4.7: Time delay compensation principle for the basic feedforward amplifier: τ1
compensates the delay introduced by the main amplifier and τ2 compensates the delay
introduced by the auxiliary amplifier.
In this section we shall briefly discuss a few simple circuits in which we shall
employ our knowledge of feedback and feedforward error correction. We shall
demonstrate how the same technique, which was developed at the system level, can
also be applied at the local (subsystem) level. Local error correction often gives better
results, since the linearization of individual amplifier stages lowers the requirements
or indeed completely eliminates the need for system level correction. In many
applications, such as oscilloscopes and adaptable data acquisition instrumentation,
system level correction can be difficult to implement, owing to variable inter-stage
conditions (variable attenuation, gain, DC level setting, trigger pick up delays, etc.).
The most interesting circuits to which the error correction is applied are the
differential amplifier and the differential cascode amplifier. We have discussed them
briefly in Part 3, Sec. 3.7. Here we shall review the analysis with the emphasis on
their non-linearity. The dominant non-linearity mechanism is the familiar exponential
dependence of Ie on Vbe of a single transistor:
$$I_e = I_s\left(e^{\,q_e V_{be}/k_B T} - 1\right) \qquad (5.4.44)$$
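Differentiating this exponential gives the dynamic emitter resistance re = ∂Vbe/∂Ie ≈ VT/Ie, which reappears throughout the analysis below. A quick numerical confirmation (the saturation current and thermal voltage are assumed, typical values):

```python
import math

Is, VT = 1e-14, 25.9e-3       # assumed saturation current [A], thermal voltage [V]

def Ie(Vbe):
    # Eq. 5.4.44, with the exponent written as Vbe/VT
    return Is * (math.exp(Vbe / VT) - 1.0)

Vbe0, h = 0.65, 1e-7          # bias point and differentiation step
re_num = 2.0 * h / (Ie(Vbe0 + h) - Ie(Vbe0 - h))   # numerical dVbe/dIe
re_formula = VT / Ie(Vbe0)                          # small-signal result VT/Ie
```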
For the differential pair of Fig. 5.4.8 the effective resistance seen by their
base–emitter junctions is the sum:
$$r_{ed} = \frac{\partial V_{be1}}{\partial I_{e1}} + \frac{\partial V_{be2}}{\partial I_{e2}} \qquad (5.4.47)$$
which, since one increases and the other decreases with the signal, varies much less
over the much larger input signal range than in the single transistor case.
Fig. 5.4.8: a) A simple differential amplifier, showing the voltages and currents
used in the analysis. b) The same, but with the emitter degeneration resistors Re.
For the amplifier in Fig. 5.4.8a we must first realize that the differential input
voltage is equal to the difference of the two Vbe junction voltages:
Therefore the differential output voltage will follow a hyperbolic tangent function of
the input differential voltage:

$$V_{do} = V_{o1} - V_{o2} = R_c\,(I_{c2} - I_{c1})
= \alpha_F R_c I_{e0}\left(\frac{1}{1+e^{-V_{di}/V_T}} - \frac{1}{1+e^{V_{di}/V_T}}\right)
= \alpha_F R_c I_{e0}\,\tanh\frac{V_{di}}{2V_T} \qquad (5.4.54)$$
Fig. 5.4.9: a) The DC transfer function of the differential amplifier in Fig. 5.4.8a: the
input differential voltage is normalized to VT and the output is normalized to αF Rc Ie0.
b) With the emitter degeneration, as in Fig. 5.4.8b, the transfer function is more linear,
but at the expense of the gain (lower slope).
The system gain is represented by the slope of the plot, which, for Vdi ≈ 0, is:

$$\frac{V_{do}}{V_{di}} = \frac{\alpha_F R_c I_{e0}}{2V_T} = \frac{\alpha_F R_c I_{e0}}{r_e\,I_{e0}} = \alpha_F\,\frac{R_c}{r_e} \qquad (5.4.55)$$

where re = 2VT/Ie0 is the dynamic emitter resistance of each transistor biased at Ie0/2.
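Both the tanh transfer of Eq. 5.4.54 (written here with the tanh(Vdi/(2VT)) argument) and the small-signal slope of Eq. 5.4.55 can be verified numerically; the operating point below is assumed for illustration:

```python
import math

alpha_F, Rc, Ie0, VT = 0.99, 150.0, 10e-3, 25.9e-3   # assumed operating point

def Vdo(Vdi):
    # Eq. 5.4.54: DC transfer function of the plain differential pair
    return alpha_F * Rc * Ie0 * math.tanh(Vdi / (2.0 * VT))

h = 1e-6
slope = (Vdo(h) - Vdo(-h)) / (2.0 * h)     # numerical slope at Vdi = 0
gain  = alpha_F * Rc * Ie0 / (2.0 * VT)    # small-signal gain, Eq. 5.4.55

v_sat = Vdo(10.0 * VT)                     # saturates near alpha_F * Rc * Ie0
```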
We define the loading factor x as the ratio of the signal current to the bias current:

$$x = \frac{i}{I_{e0}} \qquad (5.4.59)$$

So we can express the incremental non-linearity (INL) factor N as the ratio of the
non-linear gain component to the linear one:

$$N(x) = \frac{x}{1+x}\cdot\frac{V_T}{V_T + I_{e0} R_e} \qquad (5.4.60)$$
Generally N can be (and usually is) a function of many variables, not just one.
In a similar way we can derive the INL for the differential pair, where the
differential input voltage is:

$$V_{in} = iR_e + (V_{be1} - V_{be2}) = iR_e + V_T\ln\frac{I_{e0}+i}{I_{e0}-i} \qquad (5.4.61)$$
The linear and non-linear components are:

$$\partial V_{in} = \left(R_e + \frac{2V_T}{I_{e0}}\right)\partial i + \frac{2V_T}{I_{e0}}\cdot\frac{i^2}{I_{e0}^2 - i^2}\,\partial i \qquad (5.4.62)$$
and the INL:

$$N(x) = \frac{x^2}{1-x^2}\cdot\frac{2V_T}{2V_T + I_{e0} R_e} \qquad (5.4.63)$$
This expression can be used to estimate the amount of error for a given signal
and bias current, which an error correction scheme attempts to suppress.
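Comparing the two INL expressions (Eqs. 5.4.60 and 5.4.63) numerically shows why the differential pair is preferred at small drive: its INL grows as x², not x. The operating point is assumed for illustration:

```python
VT, Ie0, Re = 25.9e-3, 10e-3, 20.0     # assumed operating point

def inl_single(x):
    # Eq. 5.4.60: single transistor with emitter degeneration
    return x / (1.0 + x) * VT / (VT + Ie0 * Re)

def inl_diff(x):
    # Eq. 5.4.63: differential pair -- quadratic, not linear, in x
    return x * x / (1.0 - x * x) * 2.0 * VT / (2.0 * VT + Ie0 * Re)

x = 0.1                                 # 10 % loading factor
n_single, n_diff = inl_single(x), inl_diff(x)
```

At x = 0.1 the differential INL is roughly five times lower; as x approaches 1 it grows rapidly again, and that residual error is what the correction schemes below attempt to suppress.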
In the following pages we are going to show a collection of differential
amplifier circuits, employing some form of error correction, either feedback,
feedforward or both. We are also going to present their frequency and time domain
performance to compare how the bandwidth has been affected as a result of increased
circuit complexity (against a simple cascode amplifier).
For a fair comparison all circuits have been arranged to suit the test setup
shown in Fig. 5.4.10; the amplifiers were set for a gain of 2, using the same type of
transistors (BF959) and biased by a 10 mA current source. The input signal was
modeled by a 10 mA step driven current source, loaded by two 50 Ω ‖ 1 pF networks.
An equal network was used as the output load. Finally, all circuits were ‘tuned’ for a
Bessel system response (MFED), using only capacitive emitter peaking (of course,
inductive peaking can be used in the final design). Note that this setup offers only a
relative indication of what can be achieved, not a final optimized design.
Fig. 5.4.10: Test set up used to compare different amplifier configurations.
Fig. 5.4.11: This simple differential cascode amplifier, employing no error correction, is
used in the test set up circuit of Fig. 5.4.10, representing the reference against which all
other amplifiers are compared. The emitter peaking and base impedance are adjusted for a
MFED response.
for a MFED response. Fig. 5.4.12 and 5.4.13 show the frequency domain and time
domain responses, respectively.
[Fig. 5.4.12: frequency domain response of the amplifier in Fig. 5.4.11: magnitude [dB], phase φ [°] and envelope delay τ [ns] vs. frequency.]
[Fig. 5.4.13: step response plot: Δvo and −Δvi [mV] vs. t [ns], 0–4 ns.]
Fig. 5.4.13: Time domain performance of a simple differential cascode amplifier of
Fig. 5.4.11 (no error correction) used in the test set up circuit of Fig. 5.4.10. This will be
used as the reference for all other circuits. The input voltage indicates the input
impedance dynamics in the first 100 ps and up to 1.5 ns. Note also the small undershoot at
the output, owed to the cross-talk via Cbc of Q1,2. The output rise time is less than 1 ns.
The first circuit to be compared with the reference is shown in Fig. 5.4.14. The
circuit is owed to C.R. Battjes [Ref. 5.18] and is functionally a Darlington connection
(Q1, Q2), improved by the addition of Q3. Used as the differential input stage of a
cascode amplifier, it enhances the input characteristics and increases both the output
current handling and the bandwidth. At first glance it may seem that the diode
connected Q3 (shorted collector and base) cannot do much. However, it allows Q1 to
carry a current much larger than the Q2 base current, delivering it to the resistance Re
and lowering the impedance seen by the base of Q2, thus extending the bandwidth. The
compound device has about twice the current gain of a single transistor.
Fig. 5.4.14: a) Improved Darlington. b) Used as the input differential stage of the
cascode amplifier — see the performance in the following figures.
Fig. 5.4.15: Frequency domain performance of the differential cascode amplifier using
the circuit of Fig. 5.4.14b. The bandwidth is about 560 MHz. Note the input voltage
changing slope above 2 GHz.
In Fig. 5.4.17 Q1 and Q2 form the differential amplifier, whose error current i1
is sensed at the resistor R1 and is available at the collector of Q3 for summing with
the output current i2 (error feedforward) further along the circuit.
Fig. 5.4.17: A simple differential amplifier with feedforward error correction. Accurate
matching of transistors is required only for DC error reduction, not for the feedforward
linearization. Here i2 is the differential current, whilst the error current, i1, sensed at R1,
is available at the Q3 collector to be added to the output current further along the circuit.
Two such circuits can form a differential amplifier, employing a double error
feedforward correction, as shown in Fig. 5.4.18. The error currents can now be
summed directly with output currents, without further processing.
However, the main problem with this linearization technique is that R1 must
be relatively high for suitable error sensing, so it can reduce the bandwidth
considerably. In part the bandwidth can be improved by adding precisely matched
capacitors in parallel with both R2 and R3 (emitter peaking), but then the input
impedance can become negative and should be compensated accordingly. This
negative input impedance compensation is easily implemented at the ±vi inputs, but, by
adding it to the inputs connected to R1, the error sensing will be affected, since a part
of i1 would flow into the compensating networks, reducing the error correction at high
frequencies.
Fig. 5.4.18: Two circuits from Fig. 5.4.17 can form a differential amplifier with a double
error feedforward directly summed with the output.
Fig. 5.4.21: The ‘Cascomp’ amplifier employs indirect error sensing and error feedforward.
Fig. 5.4.24 shows a similar circuit, but with a feedback error correction. The
error signal is taken at the same point as before, but its amplified version is applied to
the emitters of the input differential pair Q1,2. Unfortunately, in spite of its attractive
concept this configuration is not suitable for high frequencies, since capacitive emitter
peaking cannot be used (a capacitance in parallel with R1 would short the auxiliary
amplifier outputs, reducing the amount of error correction at high frequencies); thus
the bandwidth is only about 180 MHz. But when bandwidth is not the primary design
requirement this amplifier can be a valid choice. We shall not plot its performance.
Fig. 5.4.24: Differential cascode amplifier with indirect error sensing and
error feedback. Unfortunately this configuration tends to be rather slow.
Fig. 5.4.25: A possible modification of error feedforward, employing direct
error sensing and direct feedforward error correction.
Fig. 5.4.26: Another evolution of the Cascomp is realized by feedback derived error
sensing and feedforward error correction. The junction of Ra and Rb is the ‘virtual
ground’ at which the error of the Q1,2 pair is sensed and amplified by the auxiliary
amplifier Q7,8. The error is subtracted from the output current at the emitters of Q5,6.
Fig. 5.4.27: Frequency domain performance of the Cascomp evolution amplifier. The
bandwidth is a little over 400 MHz.
Fig. 5.4.29: This ‘output impedance compensation’, also patented by Pat Quinn, has
direct error sensing and direct feedforward error correction, performed by Q5–8.
Fig. 5.4.30: Frequency domain performance of the amplifier in Fig. 5.4.29. The
bandwidth is about 400 MHz.
In this section we shall have a brief discussion of the M377 IC, made by
Tektronix for their 11 000 series oscilloscope, later also employed in many other
models. The M377 was described in [Ref. 5.50 and 5.51] by its designer John Addis.
When it was designed, back in the mid-1980s, this integrated circuit started a
revolutionary trend in oscilloscope design which is still evolving today, in the first
decade of the XXI century. One of the important design goals was to eliminate the
need for manual adjustments as much as possible. Classical oscilloscopes, made with
discrete components, required a lot of manual adjustment; for example, the Tektronix
7104 oscilloscope with its two single-channel plug-ins required 32 manual
adjustments for correcting thermal effects alone, many of which needed iterative
corrections in combination with one or even several other settings. This
caused long calibration periods and a lot of bench testing by experienced personnel,
increasing the production cost considerably.
increasing the production cost considerably. In contrast, the M377 needs only one
manual adjustment (for optimizing the transient response), whilst all other calibration
procedures are performed at power–up by a microprocessor, which varies the settings
using several DC voltage control inputs. Of course, the calibration can also be done
upon the user’s request, by pressing a push button on the front panel. Some settings,
such as the high impedance attenuator calibration, are done in production by laser
trimming of the components.
The elimination of almost all electromechanical adjustments and their
replacement by electronic controls resulted in circuit miniaturization, reducing strays
and parasitics, thus improving bandwidth. But it also increased the circuit complexity
(the number of active devices) and density, resulting in higher power dissipation, and
consequently higher temperatures, requiring careful thermal compensation.
The first step in this direction was the ‘Cascomp’, [Ref. 5.53–5.54], which we
have met already in Fig. 5.4.21. The first instrument to use the Cascomp was the
Tektronix 2465 oscilloscope. Although it represented a significant improvement in
precision over a simple cascode differential pair, it also had some limitations. First,
the addition of the error amplifier, Q5,6, helps to reduce both the nonlinearity and, if
the operating points are chosen carefully, also the thermally induced tails. But since
both the main and the error amplifier have nearly equal gain, the system gain cannot
be high, otherwise the error amplifier’s own non-linearity would show up.
Another problem is the stack of three transistor pairs in the signal path, which
results in an increased gain sensitivity to α variations. The gain, as a function of
temperature, increases as α³, about 225 ppm/K for a Cascomp using transistors with
β = 80, compared with some 150 ppm/K for the standard cascode. But a standard
cascode also has a counteracting temperature dependent gain term, ≈ 185 ppm/K
(at Ie = 20 mA, Tj = 60 °C and Re = 40 Ω), owed to the dynamic emitter resistance
re, which in the Cascomp is compensated. To some degree the α effects can be
compensated by adding resistors Rb3,4 to the bases of Q3,4, but these resistors can
never be made large enough, owing to the transient response requirement for a low base
resistance (see Part 3, Fig. 3.4.6 and the associated analysis).
The three transistor pair stack (not forgetting the current source, I1)
further disadvantages the IC against the discrete design, since for a given supply
voltage the linear output voltage range is severely reduced. Also, any level shifting
back down to the negative supply voltage, needed by an eventual subsequent stage,
requires a greater level shift than in a conventional cascode.
Finally, the Cascomp has a limited ability to handle overdrive signals. The
emitters of Q3,4 do not ‘see’ the whole signal during overdrive, thus the error
amplifier signal is clipped at the peaks, and as a result the main and the error amplifier
experience different thermal histories. Additional circuitry is needed to ensure correct
input signal clipping and acceptable thermal behavior.
All these limitations dictated a different approach in the M377 IC. The basic
wideband amplifier block is shown in Fig. 5.4.32 and an improved differential version
in Fig. 5.4.33. This is a feedback amplifier, which can be viewed (oversimplified) as a
compound transistor, with the Q1 base, the Q3 emitter and the Q3 collector
representing the base, emitter and collector of the compound device, respectively.
Compared with a single transistor operating at the same current, such a compound
transistor has a greater gm and β, and also much better linearity.
Fig. 5.4.32: a) The M377 IC main amplifier block, basic scheme. b) Dominant pole
compensation. c) Inductive peaking. d) Inductance created by the Q5 emitter impedance.
Fig. 5.4.33: a) The basic wideband amplifier block of the M377 IC can be viewed as a
compound transistor, in which the Q1 base, the Q3 emitter, and the Q3 collector
represent the base, emitter and collector of the compound device. b) Two such blocks
form a differential amplifier. An improved design results if Q3 is of a Darlington
configuration. To a high precision the output current is io = vi/Re3. The impedance Zc
is the compensation shown in Fig. 5.4.32d.
conducts owing to I2 and I4, allowing the feedback of the lower circuit half to remain
operative, preserving the delicate thermal balance.
Fig. 5.4.34: The M377 amplifier with the components for high speed overdrive recovery.
The frequency domain and time domain responses of the circuit in Fig. 5.4.34
are shown in Fig. 5.4.35 and 5.4.36, respectively. Note that the compensating
impedance Zc was adjusted to the needs of the transistors used for circuit simulation
(BF959, as in all previous circuits, thus allowing comparison); therefore the graphs do
not represent the true M377 performance capabilities.
Although it can be argued that the simulation has been performed using a
simplified basic version of the circuit, there are a few points to note, which are
nevertheless valid. First, there is the potential instability problem (owed to feedback),
indicated by the phase plot turning upward and the envelope delay going positive
above some 4 GHz. If proper care is not taken, especially compensating the parasitics
and strays in an IC environment, the step response might display some initial
waveform aberrations, or even ringing and oscillations.
Another point of special attention is the parasitic capacitance to the substrate
of the Schottky diodes. Being within the feedback loop, these capacitances can be
troublesome. Proper forward bias for low impedance is needed to move those
unwanted poles (transfer function zeros) well above the cutoff frequency. High bias
would result in high temperatures, which, in a densely packed IC such as this one, can
be problematic. Also, since noise increases with temperature, the bias can not be as
high as one would like. The 0.5 mA bias offers a good compromise.
As an advantage, judging by the constant slope of the input voltage plot, the
circuit input impedance is well behaved, so the loading of a previous stage should
not be critical. Likewise, the active inductive peaking, realized by the base resistance
Rb and transformed into an inductance at the Q5 emitter (as shown in Fig. 5.4.32d), offers
a simple way of adjusting the frequency compensation network.
Fig. 5.4.35: Frequency domain performance of the amplifier in Fig. 5.4.34. The
bandwidth of the simulated circuit is about 500 MHz, but this is owed to the transistor
used (BF959) and the frequency compensation network adjusted accordingly;
therefore the graph is not representative of the actual M377 IC performance.
[Fig. 5.4.36: step response of the amplifier in Fig. 5.4.34: Δvo and −Δvi [mV] vs. t.]
Now take a close look at the circuit in Fig. 5.4.34; in particular, the diode pairs
D2,3 and D5,6, the resistors Re3,6 and the current source I3; if another such block is
added in parallel (with different values of the resistors Re3,6), and if the current sources
are switched on one at a time, gain switching in steps can be achieved. The
bandwidth would change only slightly with switching. Fig. 5.4.37 shows such a circuit
with two gain values, but three or more can easily be added.
Another way of changing the vertical sensitivity is to use a fixed gain
amplifier and switch the attenuation at its output, as shown in Fig. 5.4.38. In this way
the bandwidth is preserved, but attenuation switching has its own weak points, such
as a reduced signal range and higher noise at higher attenuation.
As a point of principle, switching the amplifier gain is preferred to fixed gain
with switched attenuation. Although it alters the bandwidth, gain switching preserves
the signal’s dynamic range at all settings, whilst the system with fixed gain and an
attenuator will have a comparable dynamic range only with no attenuation; at any
other attenuation setting the dynamic range would be proportionally reduced. Also,
gain switching systems will preserve the same noise level, whilst the fixed gain
systems will have the lowest signal to noise ratio at maximum attenuation.
Fig. 5.4.37: Gain switching in steps is achieved by adding one or more emitter current
sources (I1, I2, …) with appropriate resistor values and Schottky diode pairs, and switching on
one current source at a time.
Fig. 5.4.38: With an R–2R network the attenuation can be switched in steps by applying a
positive DC voltage Vb2,3,4 to Q2,6, Q3,7, Q4,8, one pair at a time; at each collector the load is
R‖2R = 2R/3. With Q2,6 on the gain is A = 2R/(3Re), with Q3,7 on A = R/(3Re), and with Q4,8
on A = R/(6Re), effectively halved at each step. A similar circuit was used in the Tek 2465.
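The caption's gain bookkeeping can be checked in a few lines (the R and Re values are arbitrary, chosen so that the first step gives the test setup's gain of 2):

```python
R, Re = 300.0, 100.0                  # illustrative values only

load = (R * 2.0 * R) / (R + 2.0 * R)  # collector load R || 2R = 2R/3
gains = [2.0 * R / (3.0 * Re),        # Q2,6 on
         R / (3.0 * Re),              # Q3,7 on
         R / (6.0 * Re)]              # Q4,8 on: halved at each step
```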
For small gain or attenuation changes of, say, 1:4, as commonly found in
oscilloscope amplifiers, these differences can be small. However, in the M377 the gain
switching had a 50:1 range. With such a wide gain range the frequency compensation
needed to be readjusted (for the highest gain no compensation was needed).
By inserting this into the gain equation of the differential amplifier the multiplication
function results:
$$v_o = v_{bb}\,\frac{2R}{2r_e} = v_{bb}\,\frac{R\,I_e}{2V_T} = \frac{R}{2R_e V_T}\,v_{bb}\,v_M \qquad (5.4.65)$$
Unfortunately the bandwidth is also proportional to the emitter current, and with the
usual values of R and stray capacitances the dominant pole at low currents can be
very low. In addition the output common mode level also changes with current.
Almost all wideband multipliers are based on one of the variations of the basic
circuit, now known as the Gilbert multiplier, after its inventor Barrie Gilbert (see
[Ref. 5.56–5.62]). The circuit development can be followed from Fig. 5.4.39a by
noting that if the output is to be a linear function the input has to be nonlinear.
Fig. 5.4.39: The Gilbert multiplier development. a) The gain of an ordinary
differential amplifier is proportional to the emitter current, but so is the bandwidth.
Also, for a linear output a nonlinear input is required. b) Inverting the function by
linearizing the gm stage and loading it by simpler p–n junctions gives a nonlinear
output, such as required by the circuit in a) to give a linear output. c) The simple
combination of b) and a) gives the so called ‘translinear’ stage.
$$I_e \approx I_s\,e^{V_{be}/V_T} \qquad (5.4.67)$$
Since the differential pair was made by the same IC process, we can expect that the
devices will be reasonably well matched, so Is1 ≈ Is2, and their temperature
dependence will also be well matched if the transistors are at the same temperature:

$$\frac{I_{e1}}{I_{e2}} = \frac{I_{s1}}{I_{s2}}\,e^{(V_{be1}-V_{be2})/V_T} \approx e^{(V_{be1}-V_{be2})/V_T} \qquad (5.4.68)$$
In order to achieve a linear output current the input voltage should follow the
logarithmic function:

$$v_{bb} = \Delta V_{be} = V_T \ln\frac{I_{e1}}{I_{e2}} \qquad (5.4.69)$$
If ie is the modulation component superimposed on the DC current Ie0, we can write:
and, returning to Eq. 5.4.69, we obtain the required nonlinear input function:
$$v_{bb} = V_T \ln\frac{I_{e0}(1+x)}{I_{e0}(1-x)} = V_T \ln\frac{1+x}{1-x} \qquad (5.4.73)$$
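That this logarithmic predistortion linearizes the following exponential pair exactly can be checked numerically: applying the vbb of Eq. 5.4.73 to a pair with the tanh(vbb/(2VT)) transfer returns exactly x (the VT value is assumed):

```python
import math

VT = 25.9e-3                           # assumed thermal voltage

def predistort(x):
    # Eq. 5.4.73: logarithmic input needed for a linear output
    return VT * math.log((1.0 + x) / (1.0 - x))

def pair_out(vbb):
    # normalized differential output of the exponential pair
    return math.tanh(vbb / (2.0 * VT))

xs = [i / 10.0 for i in range(-9, 10)]
ys = [pair_out(predistort(x)) for x in xs]   # the log undoes the tanh exactly
```

Algebraically, tanh(½ ln((1+x)/(1−x))) = x, which is the heart of the translinear idea: the cancellation is exact and temperature independent as long as both junction sets share the same VT.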
Fig. 5.4.40: DC transfer function of the Gilbert multiplier of Fig. 5.4.39, for 2Ie1 = 4 mA
and modulation current 2Ie2 = IM = 0.4–4 mA. The signal source (vs) range is ±0.5 V.
Fig. 5.4.41: The Gilbert multiplier bandwidth is almost constant over the 10:1 modulation
current range.
Fig. 5.4.42: Another way of developing the Gilbert multiplier is by interconnecting two
current mirrors into a differential amplifier, whose input nonlinearity is compensated by the
nonlinearity of the two grounded transistors.
Fig. 5.4.43: A four quadrant multiplier developed from the previous circuit. The gain is
controlled by current biasing the compensation transistors. Four quadrant operation is
obtained from the fact that the cross-coupled collector currents cancel out if the two tail
currents are equal, and the third differential amplifier allows the tail currents to be
distributed symmetrically about the mid-bias value. Thus both the input and the control
can be AC signals, and they can also be mutually exchanged without compromising the
bandwidth or the linearity.
Fig. 5.4.44: Two quadrant multiplication is sufficient for the oscilloscope continuously
variable gain control; however, the same differential symmetry of the four quadrant multiplier
has been retained for the M377 because of good thermal balance and DC stability.
1 It might be interesting to note that Barrie Gilbert published an article [Ref. 5.56] describing his
multiplier before Tektronix applied for a patent. Motorola quickly seized the opportunity and started
producing it (as the MC1495). Tektronix claimed priority and Motorola admitted it, but
nevertheless continued the production, since once published the circuit was ‘public domain’. Barrie’s
misfortune gave the opportunity to many generations of electronics enthusiasts (including the authors
of this book) to play with this little jewel and use it in many interesting applications. Thanks, Barrie!
Résumé of Part 5
In this part we have briefly analyzed some of the most important aspects of
system integration and system level performance optimization, with a special
emphasis on system bandwidth.
We have described the transient response optimization by a pole assignment
process called ‘geometrical synthesis’ and showed how it can be applied using
inductive peaking. We have discussed the problems of input signal conditioning, the
linearization and error reduction and correction techniques, employing the feedback
and feedforward topologies at either the system level or at the local, subsystem level.
We have also revealed and compared some aspects of designing wideband amplifiers
using discrete components and modern IC technology.
On the other hand, we have said very little about other important topics in
wideband instrumentation design, such as adequate power supply decoupling,
grounding and shielding, signal and supply path impedance control by strip line and
microstrip transmission line techniques, noise analysis and low noise design, and the
parasitic impedances of passive components. But we believe that those subjects are
extensively covered in the literature, some of it also cited in the references, so we
have tried to concentrate on the bandwidth and transient performance issues.
We have also said nothing about high sampling rate analog to digital
conversion techniques, now already established as the essential ingredient of modern
instrumentation. While there are many books discussing AD conversion, most of
them are limited to descriptions of applying a particular AD converter, or, at most, to
compare the merits of one conversion method against others. Only a few of them
discuss ADC circuit design in detail, and even fewer the problems of achieving top
sampling rates for a given resolution, either by an equivalent time or a real time
sampling process, time interleaving of multiple converters, combining analog and
digital signal processing and other techniques, which today (first decade of the XXI
century) allow the best systems to reach sampling rates of up to 20 GSps (Giga
Samples per second) and bandwidths of up to 6 GHz.
Just like many other books, this one, too, ends just as it has become most
interesting (the reader might ask whether there is really nothing more to say, or whether
the authors have simply run out of ideas; since most of the circuits presented are not of
our origin, and electronics certainly is an art of infinite variations, we the authors can,
one hopes, be spared the blame). As already said in the Foreword, the
most difficult thing when writing about an interesting subject is not what to include,
but what to leave out. Although we discuss the effects of signal sampling in Part 6
and a few aspects of efficient system design combining analog and digital technology
in Part 7, this book is about amplifier design, so we leave the fast ADC circuit design
discussion for another opportunity.
References:
[5.1] D.L. Feucht, Handbook of Analog Circuit Design,
Academic Press, Inc. San Diego, 1990
See also the latest CD version at <http://www.innovatia.com/>
[5.3] P.R. Gray & R.G. Meyer, Analysis and Design of Analog Integrated Circuits,
John Wiley, New York, 1969
[5.6] J. Williams, Measuring 16–bit Settling Times: the Art of Timely Accuracy,
EDN Magazine, Nov. 19, 1998, <http://www.ednmag.com/>
[5.7] W. Kester, High Speed Design Techniques, Analog Devices, 1996, <http://www.analog.com/>
[5.8] A.D. Evans (Editor), Designing with Field Effect Transistors, Siliconix, McGraw-Hill, 1981
[5.9] J.E. Lilienfeld, Method and Apparatus for Controlling Electric Current,
US Patent 1 745 175, Jan. 28, 1930
(the FET patent, apparently predating the BJT patent by about 23 years)
[5.13] S. Franco, Design with Operational Amplifiers and Analog ICs, McGraw-Hill, 1988
[5.15] B. Orwiller, Vertical Amplifier Circuits, Tektronix, Inc., Beaverton, Oregon, 1969.
2 Note: For US patents go to <http://www.uspto.gov/> and type the patent number in the Search pad.
Patent figures are in the TIFF graphics format, so a TIFF viewer is recommended (links
for downloading and installation are provided within the USPTO web pages).
[5.19] J.L. Addis, P.A. Quinn, Broadband DC Level Shift Circuit With Feedback,
US Patent 4 725 790, Feb. 16, 1988
[5.20] J.L. Addis, Precision Differential Amplifier Having Fast Overdrive Recovery,
US Patent 4 714 896, Dec. 22, 1987
[5.31] R.W. Anderson, s-Parameter Techniques for Faster, More Accurate Network Designs,
Hewlett-Packard, Application Note AN-95-1
[5.33] New Complementary Bipolar Process, National Semiconductor, Application Note AN-860
[5.34] J. Bales, A Low Power, High Speed, Current Feedback OpAmp with a Novel Class AB High
Current Output Stage, IEEE Journal of Solid-State Circuits, Vol. 32, No. 9, Sept. 1997
[5.37] T.T. Regan, Designing with a New Super Fast Dual Norton Amplifier,
National Semiconductor Application Note AN-278, Sept., 1981
[5.47] M.J. Hawksford, Low Distortion Programmable Gain Cell Using Current Steering Cascode
Topology, Journal of the Audio Engineering Society, vol. 30, No. 11, pp. 795–799, Nov., 1982
[5.58] B. Gilbert, High accuracy four–quadrant multiplier also capable of four–quadrant division,
US Patent No. 4 586 155, April 29, 1986
[5.63] P. Gamand, C. Caux, Semiconductor device comprising a broadband and high gain
monolithic integrated circuit for a distributed amplifier,
US Patent No. 5 386 130, January 31, 1995
P. Starič, E. Margan:
Wideband Amplifiers
Part 6: Computer Algorithms
Ever since I first heard of ‘electronic brains’, back in the early 1960s, I had been
asking all sorts of people how those things work, but I never received any answer, with
the exception of one, coming from a medical professional, saying that no one understood
the biological brains either, so how could I expect to understand the electronic ones?
Much later I discovered that a lot of people did not like using their brains at all, and
they were doing just fine without them, thank you!
In the autumn of 1974, as a student, I had limited access to an IBM-1130
machine while attending a course on Fortran. It was not exactly a top model of punched
card technology, but it could be used for many purposes other than calculating the
monthly wages of the University personnel. As a beginner, it took me nearly three
months to program and run a simple 1 − e^(−t/RC) response. I had just heard of Moore’s
law, but I guessed he was exaggerating: since I could do the same job with a slide rule
in little less than four hours, I expected that within my professional lifetime I would
never need computing power that sophisticated. How wrong I was!
In the spring of 1975 my father bought me an HP-29C, a programmable pocket
calculator with ‘scientific’ functions, sines, cosines, logs, exps and all that jazz. And in
addition to the four stack registers it had some 96 program registers of ‘continuous’
memory (CMOS, thus the ‘-C’ suffix). Many of my colleagues had similar toys, too, but
while most of them were playing the then very popular ‘Moon landing simulator’ (in
which you typed in the amount of fuel to burn at each step and in response it displayed
your speed and altitude — and a flashing display on crashing), I was busy programming
the 0.5 dB tolerance frequency response of an RIAA phonograph correction network,
optimized to standard E-12 R and C values. The next summer I made a preamp
based on those calculations and inserted it between a 5-transistor power amp and my
new Transcriptors ‘Skeleton’ turntable with the ‘Vestigal’ tonearm and a Sonus ‘Blue
Label’ pickup. It worked perfectly and sounded beautiful.
In the late seventies I was totally devoted to audio; however, I had good
relations with the local Motorola representative, who was supplying me with the latest
data sheets and application notes, so I simply could not have missed the microprocessor
revolution. But I was still using digital chips in the same way as analog ones — by
building the hardware, its function was determined once and for all; I never thought of
it as something programmable or adaptable to different tasks.
It was only in the early 1980s, with the first PCs, that I really began to devote a
substantial part of my working time to programming, and even that was more out of
necessity than desire. I was working on the signal sampling section for a digital
oscilloscope project, so I had to know all the interactions with the microprocessor. I
was also busy drawing printed circuit boards with one of the first CAD programs,
Wintek’s ‘smArtwork’. Some time later I received the first demo version of
Spectrum Software’s ‘MicroCAP’ circuit simulator, and then ‘PCMatlab’ by The
MathWorks, Inc.
Then, one day Peter came to my lab and asked me if I could do a little circuit
simulation for him. This turned out to be the start of a long friendship, and one of its
results is now in front of you.
P. Starič, E. Margan Computer Algorithms
Contents:
List of Figures:
Fig. 6.3.1: 5th-order Butterworth poles in the complex plane ........................................ 6.16
Fig. 6.3.2: Bessel–Thomson poles of systems of 2nd- to 9th-order ................................ 6.20
Fig. 6.4.1: 5th-order Butterworth magnitude over the complex plane ............................ 6.23
Fig. 6.4.2: 5th-order Butterworth complex response vs. imaginary frequency ............... 6.24
Fig. 6.4.3: 5th-order Butterworth Nyquist plot ............................................................... 6.25
Fig. 6.4.4: 5th-order Butterworth magnitude vs. frequency in linear scale ..................... 6.26
Fig. 6.4.5: 5th-order Butterworth magnitude in loglog scale .......................................... 6.27
Fig. 6.4.6: 5th-order Butterworth phase, modulo 2π ....................................................... 6.28
Fig. 6.4.7: 5th-order Butterworth phase, unwrapped ....................................................... 6.29
Fig. 6.4.8: 5th-order Butterworth envelope delay ............................................................ 6.32
Fig. 6.5.1: 5th-order Butterworth impulse- and step-response ........................................ 6.35
Fig. 6.5.2: Impulse response in time- and frequency-domain ......................................... 6.37
Fig. 6.5.3: The ‘negative frequency’ concept explained ................................................. 6.39
Fig. 6.5.4: Using ‘window’ functions to improve calculation accuracy ......................... 6.43
Fig. 6.5.5: 1st-order system impulse response calculation error ..................................... 6.52
Fig. 6.5.6: 1st-order system step response calculation error ........................................... 6.52
Fig. 6.5.7: 2nd-order system impulse response calculation error .................................... 6.53
Fig. 6.5.8: 2nd-order system step response calculation error .......................................... 6.53
Fig. 6.5.9: 3rd-order system impulse response calculation error .................................... 6.54
Fig. 6.5.10: 3rd-order system step response calculation error ........................................ 6.54
Fig. 6.5.11: Step-response of Bessel–Thomson systems of order 2 to 9 ........................ 6.55
Fig. 6.7.1: Pole layout of 5th-order Butterworth and Bessel–Thomson systems ........... 6.61
Fig. 6.7.2: Magnitude of 5th-order Butterworth and Bessel–Thomson systems ............ 6.62
Fig. 6.7.3: Step-response of 5th-order Butterworth and Bessel–Thomson systems ....... 6.62
List of routines:
Assume x(t) to be the signal presented at the input of a linear, time invariant,
causal (LTIC) system which has n dominant energy storing (reactive) components (or,
briefly, an nth-order system). The system output in the time domain, y(t), may be
expressed by a linear differential equation with constant coefficients:

    ∑ b_i y^(i)(t) = ∑ a_j x^(j)(t),  i = 0 … n,  j = 0 … m      (6.1.1)

where the coefficients a_j and b_i are derived from the system’s time constants, whilst
y^(i) and x^(j) are the ith and jth derivatives of the output and input signal, as required by
the system’s order. From the theory of differential equations we know that the solution
of Eq. 6.1.1, given the initial conditions y(0), y′(0), y″(0), …, y^(n−1)(0), is of the form:

    y(t) = y_h(t) + y_f(t)      (6.1.2)
Here y_h(t) is the solution of the homogeneous differential equation (in which x(t) and
all its derivatives are zero), whilst y_f(t) is the particular solution for x(t), which
means that y_f(t) = y_f{x(t)}. From circuit theory we know that y_h(t) represents the
natural (also free, impulse, transient, or relaxation from the initially energized
state) response and y_f(t) represents the forced (also particular, final, steady state)
response.
Knowing that y_f(t) is a description of the output signal at a time very distant from
the initial disturbance, when the system has regained a new state of (static or dynamic)
balance, we can define the system transfer function F(s) from y_f(t) and x(t):

    F(s) = y_f(t) / x(t),  where x(t) = e^(st)      (6.1.3)
Such an input signal has been chosen merely because it still retains its
exponential form when differentiated, i.e.:

    x^(n)(t) = s^n e^(st)      (6.1.4)

Assuming the forced response to be of the same exponential form:

    y_f(t) = A e^(st)      (6.1.5)

and inserting both into Eq. 6.1.1, we obtain:

    (b_n s^n + b_(n−1) s^(n−1) + ⋯ + b_1 s + b_0) A e^(st) = (a_m s^m + a_(m−1) s^(m−1) + ⋯ + a_1 s + a_0) e^(st)      (6.1.6)

Then A must be:

    A = (a_m s^m + a_(m−1) s^(m−1) + ⋯ + a_1 s + a_0) / (b_n s^n + b_(n−1) s^(n−1) + ⋯ + b_1 s + b_0)      (6.1.7)
Returning to Eq. 6.1.3 we can now define the system transfer function as a
rational function of s:

    F(s) = y_f(t) / x(t) = A e^(st) / e^(st) = (a_m s^m + a_(m−1) s^(m−1) + ⋯ + a_1 s + a_0) / (b_n s^n + b_(n−1) s^(n−1) + ⋯ + b_1 s + b_0)      (6.1.8)
Instead of y_f(t), it is much easier in practice to find F(s) first, since the
coefficients a_i and b_i are derived from the system’s time constants. The system’s time
domain response is then found from F(s). From algebra we know that F(s) can also be
expressed as a function of its poles and zeros. An nth-order polynomial can be
expressed as a product of terms containing its roots r_k:

    P_n(s) = ∑ a_i s^i = a_n ∏ (s − r_k),  i = 0 … n,  k = 1 … n      (6.1.9)

The value of this product is zero whenever s assumes the value of a root r_k.
Therefore we can rewrite Eq. 6.1.8 as:

    F(s) = [(s − z_1)(s − z_2) ⋯ (s − z_(m−1))(s − z_m)] / [(s − p_1)(s − p_2) ⋯ (s − p_(n−1))(s − p_n)]      (6.1.10)

Here the roots of the polynomial in the numerator are the system’s zeros, z_j, and the
roots of the polynomial in the denominator are the system’s poles, p_i.
We shall have this form in mind whenever a system is specified, because we
shall always start the design by specifying some optimum pole–zero pattern as the
design goal and then work towards the required system’s time constants.
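The equivalence of the coefficient form (Eq. 6.1.8) and the factored pole–zero form (Eq. 6.1.10) is easy to verify numerically. The book’s own routines are written in Matlab; the sketch below uses plain Python instead, with an arbitrarily chosen 2nd-order example (poles at −1 and −2):

```python
# Coefficient form of Eq. 6.1.8 vs. factored form of Eq. 6.1.10 for an
# illustrative 2nd-order low pass example (values chosen arbitrarily):
#   F(s) = 1 / (s^2 + 3s + 2) = 1 / ((s + 1)(s + 2))
a = [1.0]              # numerator coefficients, a_m ... a_0
b = [1.0, 3.0, 2.0]    # denominator coefficients, b_n ... b_0
poles = [-1.0, -2.0]   # roots of the denominator polynomial

def polyval(c, s):
    """Evaluate a polynomial given by coefficients c (highest power first)."""
    r = 0
    for ck in c:
        r = r * s + ck
    return r

def F_coeff(s):
    return polyval(a, s) / polyval(b, s)

def F_poles(s):
    d = 1
    for p in poles:
        d *= (s - p)
    return 1 / d

# both forms must agree at any complex frequency s = sigma + j*omega:
for s in (1j, 2 + 3j, -0.5 + 1j):
    assert abs(F_coeff(s) - F_poles(s)) < 1e-12
```

Matlab’s ROOTS and POLY perform the conversion between the two forms in general; the point here is only that both describe the same F(s).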
The system’s time domain equivalent of J Ð=Ñ, labeled 0 Ð>Ñ, is the system’s
impulse response:
Ch Ð>Ñ œ 0 Ð>Ñ ¹ (6.1.11)
BÐ>Ñœ$ Ð>Ñ
where $a>b is the Dirac’s function (the infinitesimal time limit of the unit area impulse).
The response to an arbitrary input signal may then be found by convolving the
input signal with the system’s impulse response (for convolution see Part 1, Sec. 1.15;
see also the VCON routine in Part 7, Sec. 7.2).
We owe it to Oliver Heaviside (1850–1925, [Ref. 6.4–6.8]), who pioneered
transform theory, that we now solve differential equations through the use of the Laplace
transform1.
The transform is applied to the time variable t through a single time domain
integration, producing a new variable s, whose dimension is t⁻¹ (frequency). In the
frequency domain the nth-order differential equation is reduced to an nth-order
polynomial, whilst the convolution is reduced to a simple multiplication. Once solved
(using simple algebra), the result is transformed back to the time domain.
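The claim that convolution in the time domain becomes multiplication in the frequency domain has a simple discrete counterpart: multiplying two polynomials is the same as convolving their coefficient sequences. A minimal Python illustration (not from the book):

```python
def conv(x, h):
    # discrete (finite) convolution: y[k] = sum over i of x[i] * h[k - i]
    y = [0.0] * (len(x) + len(h) - 1)
    for i, xi in enumerate(x):
        for j, hj in enumerate(h):
            y[i + j] += xi * hj
    return y

# multiplying two polynomials convolves their coefficient lists:
#   (s + 1)(s + 2) = s^2 + 3s + 2
assert conv([1, 1], [1, 2]) == [1, 3, 2]
```

The same identity is what makes the numerical convolution routine (VCON, Part 7) interchangeable with a frequency domain product.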
1 Apparently, Heaviside developed his ‘operational calculus’ in the 1890s independently of Laplace.
Although useful and giving results in accordance with practice, his method was considered unorthodox
and suspect for quite a while, and only in the mid 1930s was it realized that the theoretical basis for
his work could be traced back to Laplace. Interestingly, amongst many other things, he also developed
the method of compensating dominantly capacitive telegraph lines by inductive peaking.
    F(s) = ℒ{f(t)} = ∫ f(t) e^(−st) dt,  t = −∞ … +∞      (6.1.12)

where ℒ{ } denotes the Laplace operator as defined by the integral (see Part 1, Sec. 1.4).
Actually, the integration is usually made from t = 0 and not from −∞, in order
to preserve the response’s causality (i.e., something happens only after closing the
switch). This limitation is caused by the term e^(−st), which for t < 0 would not integrate
to a finite value unless f(t) = 0 for t ≤ 0. Such a restriction is readily accomplished if
we modulate the input signal by closing a switch at t = 0. Mathematically, this can be
expressed by multiplying f(t) by h(t) — the Heaviside unit step function. In our case
this is not necessary, since for the calculation of the transient response we consider such
input signals as satisfy the convergence condition by definition. Also, we shall
always assume that the system under investigation was powered up for a time long
enough to settle down, so we can safely say that all initial conditions are zero (or an
additive constant at worst).
Physically, by multiplying the time domain function by e^(−st) in Eq. 6.1.12 we
have canceled the rotation of the phasor e^(st) at that particular frequency ω, allowing the
function to integrate to some finite value (see Part 1, Sec. 1.2). At other frequencies the
phasors will continue to rotate, eventually integrating to zero. By doing so for all
frequencies we produce the frequency domain equivalent of f(t). This same process is
going on in a sweeping filter spectrum analyzer; the only difference is that in our case
an infinitely narrow filter bandwidth is considered. Indeed, such a bandwidth takes an
infinitely long energy build up time, thus the integration must also last infinitely long
and be performed in infinitely small steps.
The inverse transform process is defined as:

    f(t) = ℒ^(−1){F(s)} = (1/(2πj)) ∫ F(s) e^(st) ds,  s = σ − j∞ … σ + j∞      (6.1.13)

where σ is an arbitrarily chosen real valued positive constant for which the inversion
solution exists (this restriction is required for functions which do not decay to zero in
some finite time, since the integral would otherwise not converge, e.g., for the unit step).
Eq. 6.1.1 can now be written as:

    Y(s) = F(s) X(s)  ⇒  y(t) = ℒ^(−1){F(s) X(s)}      (6.1.14)

Note that for the transient response calculation, x(t) (the time domain input
function) is either the Dirac function δ(t) (the unity area impulse) or the
Heaviside function h(t) (the unity amplitude step). In these two cases
ℒ{x(t)} = X(s) is either 1 (the transform of the unity area impulse) or 1/s (the
transform of the unity amplitude step), as we have already seen in Part 1.
Eq. 6.1.14 has been used extensively in previous parts to calculate the transient
responses analytically. However, for calculation of the frequency response, we are
interested only in that part of the transformed function which is a function of a purely
imaginary s and therefore a special case of F(s), that is F(jω). It is thus interesting to
examine the possibility of calculating the transient response using the inverse Fourier
transform (a special case of the inverse Laplace transform) of the system frequency
response. We have already seen in Part 1 that the only difference between the Laplace
and Fourier transforms is that s is replaced by jω, which is the same as making σ, the
real part of s, equal to zero.
Eq. 6.1.11 shows that the time domain equivalent of the system’s frequency
response F(s) is f(t) resulting from the excitation by δ(t), or, in words, the system
impulse response. Since the impulse response of any system (except the conditionally
stable systems, as well as the oscillating or regenerating systems) decays to zero after
some finite time, we do not have to take those special precautions (as in the inverse
Laplace transform) to allow the integral to converge; instead we can use the inverse
Fourier transform of F(jω) to calculate the impulse response:

    f(t) = ℱ^(−1){F(jω)} = (1/(2π)) ∫ F(jω) e^(jωt) dω,  ω = −∞ … +∞      (6.1.15)
However, the Fourier transform of the unity amplitude step does not
converge, so we shall have to use an additional procedure to calculate the step
response.
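Since the unit step h(t) is the time integral of δ(t), one straightforward procedure is to integrate the computed impulse response numerically. A Python illustration (not the book’s Matlab code) for a 1st-order system, where the exact step response 1 − e^(−t) is available for comparison:

```python
import math

dt = 0.001
t = [k * dt for k in range(5001)]           # time axis, 0 ... 5 s
f = [math.exp(-tk) for tk in t]             # 1st-order impulse response e^(-t)

# running trapezoidal integral of f(t) gives the step response g(t):
g = [0.0]
for k in range(1, len(f)):
    g.append(g[-1] + 0.5 * (f[k - 1] + f[k]) * dt)

# compare with the exact step response 1 - e^(-t) at t = 1 s:
assert abs(g[1000] - (1.0 - math.exp(-1.0))) < 1e-4
```

The accuracy of exactly this kind of numerical step response is what Fig. 6.5.5 to Fig. 6.5.10 examine for systems of order 1 to 3.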
It is possible to put Eq. 6.1.1 into numerical form [Ref. 6.20, 6.21]. Whilst there
are ways of using Eq. 6.1.14 in numerical form [Ref. 6.22, 6.23], we shall rather
concentrate on Eq. 6.1.15, since the Fast Fourier Transform algorithm (FFT, [Ref. 6.16–
6.19]), which we are going to use, offers some very distinct advantages. In addition, we
shall develop an algorithm based on the residue theory (Part 1, Sec. 1.9); the details are
given in Sec. 6.6.
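Before turning to the FFT, Eq. 6.1.15 can also be approximated by a direct numerical sum; this is far less efficient than the FFT based routines developed later, but it makes the definition tangible. A pure Python sketch (illustrative only; the truncation limit W and the step count N are arbitrary choices):

```python
import cmath, math

def impulse_response(F, t, W=200.0, N=40000):
    # trapezoidal approximation of Eq. 6.1.15 with the integration
    # limits truncated to w = -W ... +W:
    #   f(t) = (1/(2*pi)) * integral of F(jw) e^(jwt) dw
    dw = 2.0 * W / N
    acc = 0.0 + 0.0j
    for k in range(N + 1):
        w = -W + k * dw
        weight = 0.5 if k in (0, N) else 1.0   # trapezoid end weights
        acc += weight * F(1j * w) * cmath.exp(1j * w * t)
    return (acc * dw / (2.0 * math.pi)).real

# 1st-order test case, F(jw) = 1/(1 + jw), whose exact impulse
# response is e^(-t) for t > 0:
f1 = impulse_response(lambda s: 1.0 / (1.0 + s), 1.0)
assert abs(f1 - math.exp(-1.0)) < 1e-2
```

For this example the truncation to ±W leaves a small oscillatory error decaying roughly as 1/(πWt); window functions (Fig. 6.5.4) address exactly this kind of truncation artifact.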
Another point to consider, known from modern filter theory, is that optimized
high order systems are difficult to realize in direct form, because the ratio of the
smallest to the largest time constant quickly falls below component tolerances as the
system’s order is increased. Butterworth [Ref. 6.11] has shown that optimum system
performance is more easily met by a cascade of low order systems (several of second
order and only one of third order, if n is odd) separated by amplifiers. As a bonus such structures will
satisfy the gain–bandwidth product requirement more easily. So in practice we shall
rarely need to solve high order system equations, usually only at the system integration
level.
The formulae presented above will be used as the starting point in algorithm
development. We shall develop the algorithms for calculating the system poles for a
desired system order, the complex frequency response, the magnitude and phase
response, the group (envelope) time delay, the impulse response, the step response, and
the numerical convolution. Those algorithms can, of course, be written to solve only
our particular class of problems. It is wise, however, to write them to be as universally
applicable as possible, in spite of losing some algorithm efficiency, to suit eventual
future needs.
>, >=, <, <=  greater, greater or equal, smaller, smaller or equal;
;  semicolon, the logical end of a command line; for matrices it indicates the end of a row.
It is beyond the scope of this text to cover all the background of system
optimization theory. Let us just mention the most important optimization criteria for
each major system family. For the same bandwidth and system order n:
• the Butterworth family
is optimal in the sense of having a maximally flat pass band magnitude;
• the Bessel–Thomson family
is optimal in the sense of having a maximally flat group (envelope) delay
and, consequently, a maximally steep step response with minimum
overshoot;
• the Chebyshev family
is optimal in the sense of having a maximally steep pass band to stop
band transition, at the expense of some specified pass band ripple;
• the Inverse Chebyshev family
is optimal in the sense of having a maximally steep stop band to pass
band transition, at the expense of some specified stop band ripple;
• the Elliptic (Cauer) family
is optimal in the sense of having a maximally steep pass band to stop
band transition, at the expense of some specified pass band and stop
band ripple.
It must be pointed out, however, that some system families, the Bessel–
Thomson family in particular, can be realized in practice more easily than others, owing
to the lower ratio of the largest to the smallest system time constant for a given system
order, the maximum usable ratio being limited by component tolerances. Also, low
order systems can be realized more easily than high order systems. We shall have to
keep these things in mind when denormalizing the system to the actual upper frequency
limit and deciding the number of stages used to achieve the total amplification factor.
The trouble is that during the design process we select the system poles and
zeros in accordance with certain circuit simplifications, useful for speeding up the
analysis. But the implementation of the poles and zeros in an actual amplifier is more a
matter of practical know–how, instead of a rigorous theory. This is particularly true if
we are pushing the performance to the limits of realizability, since in these conditions
the component stray reactances must be taken into account when specifying the system
time constants. Luckily for the amplifier designer, for the same component and layout
strays the Bessel–Thomson system yields the highest system bandwidth, with the bonus
of an optimal transient response. This is also true for ‘feedback stabilized’ systems,
since the large phase margin offered by this system family aids system stability; also,
feedback induced Q-factor enhancement at high frequencies lowers the required
imaginary versus real part ratio of the response shaping component impedances even
further.
In this text we shall only deal with the Butterworth [Ref. 6.11] and Bessel–
Thomson [Ref. 6.12, 6.13] low pass systems for calculation of poles, since these are
required in wideband amplifier design. If needed, Chebyshev, inverse Chebyshev and
elliptic (Cauer) functions are provided in the Matlab Signal Processing Toolbox, as well
as the low pass to high pass, band pass and band stop transform algorithms. The
toolbox also contains many other useful algorithms, such as RESIDUE, ROOTS, etc.,
which will not be considered here (see [Ref. 6.1, 6.2, 6.19]).
In order to be able to compare the performance of different systems on a fair
basis we must specify some form of system standardization:
a) all systems will have the pole values normalized for an upper half power
angular frequency ω_h of 1 radian per second (equivalent to the cycle
frequency f_h = 1/(2π) [Hz]). This leads to the use of a normalized
frequency vector, implying that whenever we write either f or ω, we shall
actually mean f/f_h or ω/ω_h, respectively.
Please note that this can sometimes cause a bit of confusion, since f/f_h
is the same as ω/ω_h; but ω = 2πf, so we should keep an eye on the factor 2π,
especially when denormalizing the poles to the actual system upper half power
frequency.
The frequency response is calculated as a function of ω, since the poles
and zeros are mapped in terms of s = σ + jω, where both the real and
imaginary parts are measured in [rad/s], but we usually plot it as a function of f
(in [Hz]). If the values of the poles are normalized we can use the same
normalized frequency vector to calculate and plot the frequency domain
functions. Therefore, to plot the magnitude and phase responses vs. frequency
we shall not have to divide the frequency vector (of a length usually between
100 and 1000 elements) by 2π.
Since Matlab will not accept the symbol ω as a valid name for a variable,
we shall replace it by w=2*pi*f in our routines.
b) all systems will have their DC gain (at ω = 0) normalized to A_0 = 1
(throughout this text we shall consider low pass systems only). Nevertheless, we
shall try to provide the correct gain treatment in the general case, in order to
broaden the applicability of our algorithms.
Of course, to extract the actual system component values, as well as to scale the
various frequency and time domain responses to comply with the desired upper
frequency f_h, the poles and zeros will have to be denormalized (multiplied) by 2πf_h.
Also, each response will have to be scaled by the required gain factor.
E.g., for a simple current driven shunt RC system, the normalized pole,
s_1n = −1/(R_1n C_1n) = −1, is first denormalized to the value of the desired bandwidth,
s_1 = s_1n · 2πf_h. From s_1 we get the new component values, R_1 C_1 = 1/(2πf_h). Finally,
we multiply R_1 by the gain factor, R = A R_1, to obtain the desired output voltage from
the available input current, that is v_o = R i_i, and then reduce C_1 by the same amount,
C = C_1/A. If C is a stray capacitance it cannot be reduced below the limit imposed by
the circuit topology. Then we must work backwards, by first finding the R which would
give the desired bandwidth, and then determining the input current which will give the
required output voltage.
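The denormalization steps above can be sketched in a few lines of Python (the numbers are illustrative assumptions, not from the text: f_h = 100 MHz, gain A = 5, and an assumed 2 pF stray capacitance):

```python
import math

# illustrative assumptions: desired upper half power frequency and gain
f_h = 100e6
A = 5.0

# after denormalization the product must satisfy R_1*C_1 = 1/(2*pi*f_h):
RC = 1.0 / (2.0 * math.pi * f_h)

# choose C_1 (e.g., the assumed stray capacitance), then find R_1:
C1 = 2e-12
R1 = RC / C1

# scaling R up and C down by the gain factor A leaves the product,
# and therefore the bandwidth, unchanged:
R = A * R1
C = C1 / A
assert abs(R * C - RC) < 1e-20
```

If C ends up below the stray capacitance imposed by the layout, the backwards procedure described above applies instead.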
Butterworth systems ([Ref. 6.11], see also Part 4, Sec. 4.3) are optimal in the
sense that all the derivatives of the magnitude response are zero at ω = 0,
resulting in a maximally flat magnitude response. The normalized squared
magnitude response of an nth-order Butterworth system is:

    |F(ω)|² = 1 / (1 + ω^(2n))      (6.3.1)

This can be rewritten as:

    F(s) F(−s) = 1 / (1 + (−s²)ⁿ)      (6.3.2)

where the poles s_k are found from the expression of Eq. 6.3.5, written in the
exponential form:

    s_k = e^(jπ(2k + n − 1)/(2n)),  k = 1, 2, 3, …, n      (6.3.7)

and:

    k_0 = ∏ (−s_k),  k = 1 … n      (6.3.8)
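Eq. 6.3.7 and Eq. 6.3.8 can be checked with a few lines of Python (a sketch, not the book’s Matlab code):

```python
import cmath, math

def buttap_poles(n):
    # left half plane Butterworth poles, Eq. 6.3.7 (normalized, w_h = 1):
    #   s_k = exp(j*pi*(2k + n - 1)/(2n)),  k = 1 ... n
    return [cmath.exp(1j * math.pi * (2 * k + n - 1) / (2 * n))
            for k in range(1, n + 1)]

p = buttap_poles(5)

# all poles lie on the unit circle, in the left half plane:
assert all(abs(abs(pk) - 1.0) < 1e-12 for pk in p)
assert all(pk.real < 0 for pk in p)

# k_0 of Eq. 6.3.8 is the product of the negated poles; for the
# normalized system it equals the constant denominator term, 1:
k0 = 1
for pk in p:
    k0 *= -pk
assert abs(k0 - 1) < 1e-12
```

This mirrors what the Toolbox BUTTAP function returns for the normalized prototype.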
Eq.6.3.7 and Eq.6.3.8 are implemented in the Matlab Signal Processing Toolbox
function called BUTTAP (an acronym for BUTTerworth Analog Prototype):
As an example, see the complex plane layout of the poles of a 5th-order
Butterworth system in Fig. 6.3.1.
For a desired attenuation a = 1/A at some chosen ω_a > ω_h we can calculate
the required system order:

    n = log₁₀(A² − 1) / (2 log₁₀(ω_a/ω_h))      (6.3.10)

and round it to the first higher integer.
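Eq. 6.3.10 in Python (an illustrative helper, not from the book):

```python
import math

def butterworth_order(A, wa_over_wh):
    # Eq. 6.3.10, rounded up to the first higher integer
    n = math.log10(A**2 - 1) / (2 * math.log10(wa_over_wh))
    return math.ceil(n)

# 40 dB attenuation (A = 100) one decade above the cutoff:
assert butterworth_order(100.0, 10.0) == 2
# a steeper demand, 60 dB at only 4x the cutoff, needs a higher order:
assert butterworth_order(1000.0, 4.0) == 5
```

The examples show how quickly the required order grows when the stop band is pushed close to the cutoff.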
Fig. 6.3.1: The 5th-order Butterworth system poles in Cartesian
coordinates of the complex plane. The left half of the figure is
the s domain over which the magnitude in Fig. 6.4.1 is plotted.
Bessel–Thomson systems (see [Ref. 6.12, 6.13]; see also Part 4, Sec. 4.4) are
optimal in the sense that all the derivatives of the group (envelope) time delay response
are zero at the origin, which results in a maximally flat group delay. This means that all the
relevant frequencies pass through the system with equal time delay, resulting in a
transient response with a minimal overshoot. In the complex frequency plane a system
with pure time delay is represented by:
    B_0(s) = 1
    B_1(s) = s + 1
    B_n(s) = (2n − 1) B_(n−1)(s) + s² B_(n−2)(s)      (6.3.15)
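The recursion of Eq. 6.3.15 translates directly into code; a Python sketch (the book’s BESTAP routine is in Matlab) that builds the coefficient list of B_n(s), highest power first:

```python
def bessel_poly(n):
    # coefficients of B_n(s) via the recursion of Eq. 6.3.15:
    #   B_n = (2n - 1)*B_(n-1) + s^2 * B_(n-2)
    B0, B1 = [1], [1, 1]
    if n == 0:
        return B0
    if n == 1:
        return B1
    Bm2, Bm1 = B0, B1
    for k in range(2, n + 1):
        term1 = [(2 * k - 1) * c for c in Bm1]   # (2k - 1) * B_(k-1)
        term2 = Bm2 + [0, 0]                     # s^2 * B_(k-2)
        # add the two coefficient lists, right aligned:
        term1 = [0] * (len(term2) - len(term1)) + term1
        Bm2, Bm1 = Bm1, [x + y for x, y in zip(term1, term2)]
    return Bm1

assert bessel_poly(2) == [1, 3, 3]          # s^2 + 3s + 3
assert bessel_poly(3) == [1, 6, 15, 15]     # s^3 + 6s^2 + 15s + 15
```

The coefficients grow quickly (the constant term is already 105 for n = 4), which is the numerical resolution problem discussed below.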
The function which will calculate the Bessel polynomial coefficients using
Eq. 6.3.16 will be called BESTAP (this stands for BESsel–Thomson Analog Prototype,
but the name is also in good agreement with the best time domain response of this
system family). Within this function the system poles are extracted using the ROOTS
function in Matlab. This works well up to n = 24; for higher orders the ratio of c_n to c_0
is so high that the computer numerical resolution (‘double precision’, or 16 significant
digits) is exceeded, but this is not a severe limitation, because in most circuit
configurations the 1% component tolerances will limit system realizability to about
n = 13 (assuming a 6-stage system, for which the highest reactive component value
ratio is about 12:1). But if needed, we can always calculate the frequency response from
the polynomial expression, using the coefficients c_k directly in Eq. 6.1.8, instead of
using Eq. 6.1.10, as in the Matlab POLYVAL and FREQS routines.
Bessel–Thomson system poles are found in the left half of the complex plane on
a family of ellipses, having a nearer focus at the complex plane origin and the other
focus on the positive part of the real axis (see Fig. 6.3.2).
The poles calculated in this way define a family of systems with equal envelope
delay (normalized to 1 s). This results in a progressively larger bandwidth and smaller
rise time for each higher n (see Fig. 6.5.11). In addition, two other normalizations of the
Bessel–Thomson system are possible.
One is to make the asymptote of the magnitude roll off slope the same as it is
for the Butterworth system of equal order (this is useful for calculating transitional
Bessel to Butterworth systems, as we have seen in Part 4, Sec. 4.5.3). If ω_a is to become
the half power cut off frequency of the new system:
    |F(ω_a)| = c_0 / ω_a^n = 1/√2  ⇒  ω_a = (√2 c_0)^(1/n)      (6.3.17)
In this case, with the roots of B_n(s) divided by c_0^(1/n), the envelope delay will be
equal to c_0^(1/n) instead of 1, and the system bandwidth will be smaller for each higher n.
The other is to have equal bandwidth for any n, possibly normalized to 1 rad/s,
as is the Butterworth family; in this way we would be able to compare different systems
on a fair basis. Unfortunately, there is no simple way of matching the Bessel–Thomson
system bandwidth to that of a Butterworth system of the same order. To achieve this we
have to recursively multiply the poles by a correction factor proportional to the
bandwidth ratio, until a satisfactory approximation is reached (the pole values
modified in this way for systems of order 2 to 10 are shown in Part 4, Table 4.5.1).
The while loop at the end of the BESTAP routine has a tolerance of 0.0001, which was
experimentally found to be met in only 8 to 12 loop iterations, depending on n; this
tolerance is satisfactory for most practical purposes, but the reader can easily change it
to suit their needs.
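The same iterative bandwidth correction can be sketched outside Matlab; the following Python/NumPy fragment (an illustrative sketch with our own function names, not the BESTAP code itself) builds the Bessel polynomial by the recursion B_n = (2n−1)·B_{n−1} + s²·B_{n−2}, extracts the poles with the NumPy counterpart of ROOTS, and rescales them until the magnitude at 1 rad/s equals 1/√2:

```python
import numpy as np

def mag_at(poles, w):
    """|F(jw)| for an all-pole system normalized to unit DC gain."""
    num = np.prod(np.abs(poles))
    den = np.prod(np.abs(1j*w - poles))
    return num / den

def bessel_poles_bw1(n, tol=1e-4, max_iter=50):
    """Bessel-Thomson poles iteratively rescaled so that the -3 dB
    cutoff falls at 1 rad/s (a sketch of BESTAP's 'n' option)."""
    # Bessel polynomial by the recursion B0 = 1, B1 = s + 1,
    # Bn = (2n-1)*B(n-1) + s^2 * B(n-2); highest power first.
    b0, b1 = np.array([1.0]), np.array([1.0, 1.0])
    for k in range(2, n + 1):
        s2b = np.concatenate((b0, [0.0, 0.0]))        # s^2 * B(k-2)
        rec = np.concatenate(([0.0], (2*k - 1) * b1)) # (2k-1) * B(k-1)
        b0, b1 = b1, s2b + rec
    p = np.roots(b1)                                  # unit-delay poles
    y3 = 1.0 / np.sqrt(2.0)                           # -3 dB reference
    for _ in range(max_iter):
        y = mag_at(p, 1.0)                            # magnitude at 1 rad/s
        if abs(1.0 - y3 / y) <= tol:
            break
        p = p * (y3 / y)                              # iterative correction
    return p
```

For low orders the loop settles in about the number of iterations quoted above.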
All three normalization options (group delay, asymptote, and bandwidth)
are provided by the BESTAP routine by entering, besides the system order n,
an additional input argument in the form of a single-character string:
'n' for 1 rad/s cutoff frequency normalization (the default),
't' for unit time delay, and
'a' for the same attenuation slope asymptote as the Butterworth system of equal order.
As in the BUTTAP routine, three output variables are returned, but the number
of output arguments of BESTAP can be either 3, 2, or just 1. If all three output
arguments are requested, the zeros are returned in z, the poles in p, and the
non-normalized system gain in the output variable k. Since there are no zeros in
this family of systems, an empty matrix is returned in z.
With just two output arguments, only z and p are returned.
When only one output argument is specified, instead of returning an empty
matrix in z, which would not be very useful, we have decided to return the Bessel
polynomial coefficients c_k. Note that for the nth-order system there are n+1
coefficients, from c_n to c_0. The system gain normalization is achieved by dividing
each coefficient by c_0, that is, by the last one in the vector c, i.e., c=c/c(n+1). The
coefficients are scaled as for the 't' option (equal envelope delay); other options are
then ignored. But, if necessary, we can always calculate the polynomial coefficients for
those cases from the poles, by invoking the POLY routine, i.e., c=poly(p).
function [z,p,k]=bestap(n,x)
%BESTAP BESsel-Thomson Analog Prototype.
% Returns the zeros z, poles p and gain k of the n-th order
% Bessel-Thomson system. This is an all-pole system, so an
% empty matrix is returned in z. The poles are calculated for
% a maximally flat envelope (group) delay.
%
% Call : [z,p,k]=bestap(n,x);
% where :
% n is the system order
% x is a single-character string, making the poles:
% 'n' - normalized to a cutoff of 1 rad/s (default);
% 'a' - normalized to have the same attenuation
% asymptote as a Butterworth system of same n;
% 't' - scaled for a group-delay of 1s.
% k is the non-normalized system DC gain.
% p are the poles (length-n column vector)
% z are the zeros (no zeros, empty matrix returned)
% With only one output argument :
% c=bestap(n);
% the n+1 coefficients of the system polynomial are returned,
% scaled as in the 't' option, ignoring other options.
if x == 'n' | x == 'N'
% Bandwidth normalization to 1 rad/s results in
% progressively greater envelope delay for increasing n
P=p; % copy the poles to P
% Reference (-3 dB point)
y3=1/sqrt(2);
y=abs(freqw(P,1)); % attenuation at 1 rad/s (see FREQW)
while abs( 1 - y3/y ) > 0.0001
P=P*(y3/y); % Make iterative corrections
y=abs(freqw(P,1));
end
p=P; % copy P back to p
end
Fig. 6.3.2: Complex plane pole map for unit group delay Bessel–
Thomson systems of order 2–9 (with the first-order reference).
function P=pats(R,s)
% PATS Polynomial (product form) value AT S.
% P=pats(R,s) returns the values of the product of terms,
% containing n-th order polynomial roots R=[r1, r2,.., rn],
% for each element of s.
% Values are calculated according to the formula :
%
% P(s)=(s-r1)*(s-r2)*...*(s-rn) / (((-1)^n)*(r1*r2*...*rn))
%
% and are normalized so that P(0)=1.
% PATS is used by FREQW. See also POLY and FREQS.
[m,n]=size(s);
P=ones(m,n); % A matrix of all ones, same dimension as s
nr=max(size(R)); % number of elements in R
for k=1:nr
if R(k) == 0
P=P.*s; % Multiply, but prevent from dividing by 0.
else
P=P.*(s-R(k))/(-R(k));
end
end
function F=freqw(z,p,w)
% FREQW returns the complex frequency response F(jw) of the system
% described by the zeros (vector z=[z1,z2,...,zm]) and the
% poles (vector p=[p1,p2,...,pn]).
% Call : F=freqw(z,p,w);
% w is the frequency vector; can be real, imaginary or complex.
% F=freqw(p,w) assumes a system with poles only.
% FREQW uses PATS. See also FREQS and FREQZ.
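For readers without Matlab, the two routines can be sketched in Python/NumPy (an illustrative translation; the names are ours, and only the product-form evaluation shown above is reproduced):

```python
import numpy as np

def pats(roots, s):
    """Normalized product-form polynomial value, P(0) = 1
    (a sketch of the PATS routine above)."""
    s = np.asarray(s, dtype=complex)
    P = np.ones_like(s)
    for r in roots:
        if r == 0:
            P = P * s                    # multiply, but avoid dividing by zero
        else:
            P = P * (s - r) / (-r)       # (s - r)/(-r), so that P(0) stays 1
    return P

def freqw(zs, ps, w):
    """Complex frequency response F(jw) from zeros zs and poles ps
    (a sketch of FREQW: the ratio of two PATS products)."""
    w = np.asarray(w, dtype=float)
    return pats(zs, 1j*w) / pats(ps, 1j*w)
```

A single real pole at −1 gives |F(j1)| = 1/√2, the familiar first-order half power point.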
   |F(ω)| = √( F(ω)·F*(ω) ) = √( (Re{F(ω)} + jIm{F(ω)})·(Re{F(ω)} − jIm{F(ω)}) )

         = √( (Re{F(ω)})² + (Im{F(ω)})² ) = M(ω)        (6.4.1)
Assuming a sinusoidal input signal, the magnitude represents the output to input
ratio of the peak signal value at that particular frequency. In practice, when we talk
about the system’s ‘frequency response’, we usually mean ‘the frequency dependent
magnitude’, M(ω). The magnitude contains no phase information.
We can calculate the magnitude in Matlab either directly from Eq. 6.4.1 or with
the ABS command. We shall use ABS, not just because it is easy to type in, but
because it executes much faster when there is a large amount of data to process.
In order to acquire a better understanding of what we are doing, let us write an
example for a 5th-order Butterworth system, entered in the Matlab command window.
If we now type:
z ↵ answer: []
Since there are no zeros, an empty matrix (shown by the square brackets) is returned
in z. A 5-element column vector with complex conjugate pole values is returned in p.
Plotting these poles in the complex plane in Cartesian coordinates gives the result
shown in Fig. 6.3.1 (for clarity, the distance from the origin and the unit circle are
also shown there, both needing extra ‘plot’ operations). From now on we shall not
write the ENTER character explicitly.
Fig. 6.4.1 has been created using the Matlab WATERFALL function and shows
the 3D magnitude of the 5th-order Butterworth system over a limited s domain in the
complex plane. The s domain here is the same as the left half of Fig. 6.3.1. Over the
poles the magnitude would extend to infinity, so we have had to limit the height of the
plot in order to show the low level features in more detail.
Fig. 6.4.1: The 5th-order Butterworth system magnitude, plotted over the same s-domain
as the shaded left half of Fig. 6.3.1. The surface represents |F(s)|, but limited in height in
order to reveal the low level details. Its shape above the jω axis is |F(jω)| = M(ω).
Now, we have intentionally limited the s domain to just the left half of the
complex plane (where the real part is either zero or negative). This highlights the shape
of the plot along the imaginary axis, which is — guess what? — M(ω).
Looking at the lines parallel to the imaginary axis we can see what would
happen if the poles were moved closer to that axis: the magnitude would exhibit a
progressively more pronounced peak. Such is the consequence of lowering the real part
of the poles. Since the negative real part is associated with energy dissipating (resistive)
components, it is clear that its role is to suppress resonance. But when we design an
oscillator we need to compensate any energy lost in the parasitic resistances of the
reactive components by active regeneration (‘negative resistance’, or a positive real
part) in order to set the system poles (usually just one pair for oscillators) exactly on the
imaginary axis.
What is interesting to note is the mirror-like symmetry about the real axis, owed
to the complex conjugate nature of the Laplace space. Here we see at work the concept
of ‘negative frequency’, which will be discussed later in Sec. 6.5, dealing with the
Fourier transform inversion. This symmetry property will allow us to greatly improve
the inverse transform algorithm efficiency.
It is also instructive to see the complex frequency response F(jω) in 3D:
Fig. 6.4.2: The complex 3D plot of F(jω). The response phasor rotates clockwise, going
from negative to positive frequency. The distance from the frequency axis is the magnitude.
The circle on the real axis marks the DC response point. The Nyquist plot (see Fig. 6.4.3)
usually shows only the ω ≥ 0 part, viewed along the jω axis towards the origin. The three
projections are plotted to help those readers who do not have access to Matlab to visualize
the shape.
Fig. 6.4.2, which has been created using the Matlab PLOT3 function, shows
F(jω) with the phase angle twisting about the jω axis and the magnitude as the
distance from the jω axis. The circle marker denotes the point where F(jω) crosses the
real axis at zero frequency — the DC system gain, normalized to 1.
Whilst the Fig. 6.4.1 waterfall plot shape was relatively easy to interpret and
‘feel’, the 3D curve shape is somewhat less clear. In Matlab one can use the
view(azimuth,elevation) command to see the graph from different viewing angles;
in Fig. 6.4.2, view(65,15) was used. In more recent versions of Matlab the user can
even select the viewing point with the mouse. To help the imagination of readers
without access to Matlab we have also plotted the three ‘shadows’.
Regarding the symmetry, F(−jω) is not a mirror image of F(jω) — unlike
M(ω) — because the phasor preserves its sense of rotation (clockwise, negative by
definition for any system with poles on the left) throughout the jω axis. But if folded
about the real axis the shapes would match.
As a result of such symmetry, F(jω) can be plotted using only the ω ≥ 0 part of
the axis, without any loss of information. The Nyquist plot [Ref. 6.9] shows both the
magnitude and the phase angle on the same graph.
The result should look like Fig. 6.4.3. The view is as if we looked at Fig. 6.4.2
against the direction of the jω axis (from +∞ towards the origin).
Fig. 6.4.3: The Nyquist plot of the 5th-order Butterworth system frequency response. The
frequency axis is reduced to a single point projection at the origin, and the frequency is
parametrically incremented along with the phase angle, from the DC point on the real axis,
through the half power bandwidth point at [−0.5, 0.5j], to ω = ∞ at the origin.
This should look like Fig. 6.4.4. The special point on the graph is the magnitude
at the unit frequency — its value is 1/√2, or 0.707, and since power is proportional to
the magnitude squared, this is the system’s half power cutoff frequency.
Fig. 6.4.4: The magnitude vs. frequency plot in a linear scale. The
characteristic point is the half power bandwidth.
It has also become standard practice to enhance the stop band detail by using
either the log M vs. log ω scale or the semilog M [dB] vs. log ω scale (Fig. 6.4.5):
-6.26-
P. Stariè, E. Margan Computer Algorithms
By using the loglog scale, or a linear dB vs. log frequency scale, we can quickly
estimate the system order, since for all-pole systems the asymptotic slope is simply
n times the first-order system slope (n × 20 dB per frequency decade).
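As a quick numerical check of this rule (a sketch using the closed-form Butterworth magnitude M(ω) = 1/√(1 + ω^(2n)), rather than any routine from this chapter):

```python
import numpy as np

n = 5

def M(w):
    """Closed-form n-th order Butterworth magnitude."""
    return 1.0 / np.sqrt(1.0 + w**(2*n))

# two points one decade apart, well onto the asymptote
w1, w2 = 10.0, 100.0
slope_db = 20.0 * (np.log10(M(w2)) - np.log10(M(w1)))   # dB per decade
order = round(-slope_db / 20.0)                         # estimated system order
```

The measured slope comes out at −100 dB/decade, recovering the order n = 5.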
Fig. 6.4.5: Bode plot of the 5th-order Butterworth system magnitude, as in Fig. 6.4.4, but
in a linear dB vs. log frequency scale. In such a scale, all-pole systems have an
asymptotically linear attenuation slope, proportional to the system order (a factor of 10^n
per frequency decade, or n × 20 dB/decade). The marked −3 dB reference point is the same
half power cutoff frequency point as in Fig. 6.4.4.
The phase response is calculated from the complex frequency response as the
arctangent of the ratio of the imaginary to the real part of F(ω):

   φ(ω) = arctan( Im{F(ω)} / Re{F(ω)} )        (6.4.2)

Note that the Matlab arctangent function is called ATAN. However, Matlab also
has a built-in command named ANGLE, which implements the same Eq. 6.4.2.
Fig. 6.4.6: The phase angle vs. frequency plot of the 5th-order Butterworth system.
The circularity of trigonometric functions, defined within the range ±π radians, is
the cause of the discontinuous phase vs. frequency relationship.
function q=ephd(phi)
% EPHD Eliminate PHase Discontinuities.
% Outperforms UNWRAP and ADDTWOPI for systems with zeros.
% Use : q=ephd(phi);
% where :
% phi --> input phase vector in radians ( range: -pi <= phi <= pi );
% q --> output phase vector, "unwrapped";
% If phi is a matrix, unwrapping is performed down each column.
[r,c]=size(phi);
if min(r,c) == 1
phi=phi(:); % column-wise orientation
c=1;
end
q=diff(phi); % differentiate to detect discontinuities
% compensate for one element lost in diff and round the steps:
q=[zeros(1,c); pi*round(q/pi)];
q=cumsum(q); % integrate back by cumulatively summing
q=phi-q; % subtract the correcting values
if r == 1
q=q.'; % restore orientation
end
The ‘trick’ used in the EPHD routine is to first differentiate the phase, in order
to find where the discontinuities are and determine how large they are, then normalize
them by dividing by π, round this to integers, multiply back by π, integrate back to
obtain the corrections, and subtract the corrections from the original phase vector.
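The same differentiate–round–integrate idea, condensed to a few NumPy lines (an illustrative sketch; the Matlab EPHD listing above is the actual routine):

```python
import numpy as np

def ephd(phi):
    """Eliminate phase discontinuities: round each step to the nearest
    multiple of pi, accumulate the jumps, subtract the accumulated
    correction from the original phase vector."""
    d = np.diff(phi)                               # detect the discontinuities
    steps = np.pi * np.round(d / np.pi)            # keep only the ~2*pi jumps
    corr = np.concatenate(([0.0], np.cumsum(steps)))
    return phi - corr
```

A wrapped, steadily decreasing phase ramp is restored exactly, as long as the genuine phase change between samples stays below π/2.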
Following our 5th -order Butterworth example, we can now write:
alpha=ephd(phi); % unwrapped ;
semilogx(w,180*alpha/pi) % show alpha in degrees vs. log-scaled w;
Fig. 6.4.7: Bode plot of the ‘unwrapped’ phase, in a linear degree vs. log frequency scale.
Plotting the phase in linear degrees vs. log scaled frequency reveals an
interesting fact: the system exhibits a 90° phase shift for each pole, or a 450° total
phase shift for the 5th-order system. Also, the phase shift at the cutoff frequency ω_h
is exactly one half of the total phase shift (at ω ≫ ω_h).
Another important fact is that stable systems (those with poles in the left half of
the complex plane) will always exhibit a negative phase shift, whatever the system
configuration (low pass or high pass, inverting or non-inverting). If you ever see a
phase graph with a positive slope, first inspect the system gain in that frequency
region. If it is 0.1 or higher, that is a cause for major concern (that is, if your intention
was not to build an oscillator!).
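Both observations are easy to check numerically. In the Python/NumPy sketch below (our own helper code, not part of the book’s toolbox), the 5th-order Butterworth poles are placed on the unit circle and the unwrapped phase is sampled at the cutoff and far beyond it:

```python
import numpy as np

n = 5
k = np.arange(n)
p = np.exp(1j*np.pi*(2*k + n + 1)/(2*n))    # Butterworth poles, left half-plane
w = np.linspace(0.0, 50.0, 5001)
# all-pole frequency response, DC gain normalized to 1
F = np.prod(-p) / np.prod(1j*w[:, None] - p[None, :], axis=1)
phi_deg = np.degrees(np.unwrap(np.angle(F)))
```

phi_deg is −225° at ω = 1 (half the total shift) and approaches −450° = −5 × 90° at the high frequency end.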
   τe(ω) = dφ(ω)/dω        (6.4.3)

(note: φ must be in radians!). Now it becomes evident why we have had to ‘unwrap’ the
circular phase function: each 2π discontinuity would, when differentiated, produce a
very high, sharp spike in the envelope delay.
Numerical differentiation can be performed by simply taking the difference of
each pair of adjacent elements for both the phase and the frequency vector:
dphi=phi(2:1:300)-phi(1:1:299);
dw=w(2:1:300)-w(1:1:299);
But Matlab has a built in command called DIFF, so let us use it:
tau=diff(phi)./(diff(w));
w1=sqrt(w(1:1:299).*w(2:1:300));
semilogx(w1,tau) % see the result in Fig.6.4.8.
Note that the values in the variable tau are negative, reflecting the fact that the
system output is delayed in time. Since we call this response a ‘delay’, by definition we
could use the absolute value. However, we prefer to keep the negative sign, because it
also reflects the sense of the phase rotation (see Fig. 6.4.3 and 6.4.7). An upward
rotating phase (or a counter-clockwise rotation in the Bode plot of the complex
frequency response) would imply a positive time delay, or output before input, and,
consequently, an unstable or oscillatory system.
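The same numerical differentiation can be sketched in Python/NumPy for a single pole at s = −1, where the exact phase φ(ω) = −arctan(ω) makes the result easy to verify (illustrative code, not the book’s):

```python
import numpy as np

w = np.linspace(0.0, 3.0, 601)
phi = -np.arctan(w)                  # phase of F(s) = 1/(s + 1) at s = jw
tau = np.diff(phi) / np.diff(w)      # numerical envelope delay (negative)
wm = 0.5*(w[:-1] + w[1:])            # midpoint frequencies of the differences
tau_exact = -1.0 / (1.0 + wm**2)     # analytic d(phi)/dw for this pole
```

The difference quotient agrees with the analytic derivative at the interval midpoints to second order in the frequency step.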
Fig. 6.4.8: The envelope (group) delay vs. frequency of the 5th-order Butterworth
system. The delay is largest for frequencies where the phase has the greatest slope.
So far we have derived the phase and the group delay functions from the complex
response to imaginary frequency. There are times, however, when we would like to
save either processing time or memory (as in embedded instrumentation
applications). It is then advantageous to calculate the phase or the group delay directly
from the system poles (and zeros, if any) and the frequency vector.
The phase influence of a single pole pk can be calculated as:

   φk(ω, pk) = arctan( (ω − Im{pk}) / Re{pk} )        (6.4.4)
The influence of a zero is calculated in the same way, but with a negative sign.
The total system phase shift is equal to the sum of the particular phase shifts of all the
poles and zeros:

   φ(ω) = Σk φk(ω, pk) − Σi φi(ω, zi),   k = 1 … n,  i = 1 … m        (6.4.5)
But owing to the inherent complex conjugate symmetry of poles and zeros, only
half of them need to be calculated and the result is then doubled. If the system order is
odd, the real pole is summed just once; the same is true for any real zero. This, of
course, requires some sorting of the system poles and zeros, but sorting is performed
much more quickly than the multiplication with ω, which is usually a lengthy vector.
If we are interested in data for a single frequency, or just two or three
characteristic points, then it might be faster to skip the sorting and calculate with all
poles and zeros. In Matlab, poles and zeros are already returned sorted. See the PHASE
routine, in which Eq. 6.4.4 and 6.4.5 are implemented.
Note also that the PHASE routine yields the ‘unwrapped’ phase
directly, so we do not have to resort to the EPHD routine.
function phi=phase(z,p,w)
% PHASE returns the phase angle of the system specified by the zeros
% z and poles p for the frequencies in vector w :
%
% Call : phi=phase(z,p,w);
%
% Instead of using angle(freqw(z,p,w)) which returns the phase
% in the range +/-pi, this routine returns the "unwrapped" result.
% See also FREQW, ANGLE, EPHD and GDLY.
if nargin == 2
w = p ;
p = z ;
z = []; % A system with poles only.
end
if any( real( p ) > 0 )
disp('WARNING : This is not a Hurwitz-type system !' )
end
n = max( size( p ) ) ;
m = max( size( z ) ) ;
% find w orientation to return the result in the same form.
[ r, c ] = size( w ) ;
if c == 1
w = w(:).' ; % make it a row vector.
end
% calculate phase angle for each pole and zero and sum it columnwise.
phi(1,:) = atan( ( w - imag( p(1) ) ) / real( p(1) ) ) ;
for k = 2 : n
phi(2,:) = atan( ( w - imag( p(k) ) ) / real( p(k) ) ) ;
phi(1,:) = sum( phi ) ;
end
if m > 0
for k = 1 : m
phi(2,:) = atan( ( imag( z(k) ) - w ) / real( z(k) ) ) ;
phi(1,:) = sum( phi ) ;
end
end
phi( 2, : ) = [] ; % result is in phi(1,:)
if c == 1
phi = phi(:) ; % restore the form same as w.
end
A similar procedure can be applied to the group delay. The influence of a single
pole pk is calculated as:

   τk(ω, pk) = Re{pk} / ( (Re{pk})² + (Im{pk} − ω)² )        (6.4.6)
As for the phase, the total system group delay is the sum of the delays of all the
poles and zeros:

   τe(ω) = Σk τk(ω, pk) − Σi τi(ω, zi),   k = 1 … n,  i = 1 … m        (6.4.7)
Again, owing to the complex conjugate symmetry, only half of the complex
poles and zeros need to be taken into account and the result doubled; the delay of
an eventual real pole or zero is then added to it. The GDLY (Group DeLaY) routine
implements Eq. 6.4.6 and 6.4.7.
function tau=gdly(z,p,w)
% GDLY returns the group (envelope) time delay for a system defined
% by zeros z and poles p, at the chosen frequencies w.
%
% Call : tau=gdly(Z,P,w);
%
% Although the group delay is defined as a positive time lag,
% by which the system response lags the input, this routine
% returns a negative value, since this reflects the sense of
% phase rotation with frequency.
%
% See also FREQW, PATS, ABS, ANGLE, PHASE.
if nargin == 2
w=p;
p=z;
z=[]; % system has poles only.
end
if any( real( p ) > 0 )
disp( 'WARNING : This is not a Hurwitz type system !' )
end
n=max(size(p));
m=max(size(z));
[r,c]=size(w);
if c == 1
w=w(:).' ; % make it a row vector.
end
tau(1,:) = real(p(1)) ./(real(p(1))^2 + (w-imag(p(1))).^2);
for k = 2 : n
tau(2,:) = real(p(k)) ./(real(p(k))^2 + (w-imag(p(k))).^2);
tau(1,:) = sum( tau ) ;
end
if m > 0
for k = 1 : m
tau(2,:)=-real(z(k)) ./(real(z(k))^2 + (w-imag(z(k))).^2);
tau(1,:) = sum( tau ) ;
end
end
tau(2,:) = [] ;
if c == 1
tau = tau(:) ;
end
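Eq. 6.4.6 and 6.4.7 translate almost verbatim; the Python/NumPy sketch below (our own equivalent, poles only, without the half-plus-double optimization) sums the per-pole terms and can be cross-checked against numerical differentiation of the unwrapped phase:

```python
import numpy as np

def gdly(poles, w):
    """Group delay directly from the poles (per-pole terms of Eq. 6.4.6,
    summed as in Eq. 6.4.7); negative values, keeping the book's sign
    convention for stable systems."""
    w = np.asarray(w, dtype=float)
    tau = np.zeros_like(w)
    for p in poles:
        tau += p.real / (p.real**2 + (w - p.imag)**2)
    return tau
```

For the 2nd-order Butterworth pole pair the direct sum matches diff(phi)/diff(w) evaluated at the interval midpoints.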
There are several methods for time domain response calculation. Three of those
that are interesting from the system designer’s point of view, including the FFT method,
were compared for efficiency and accuracy in [Ref. 6.23]. Besides the high execution
speed, the main advantage of the FFT method is that we do not even have to know the
exact mathematical expression for the system frequency response, but only the graph
data (i.e., if we have measured the frequency and phase response of a system). Although
the method was described in detail in [Ref. 6.23], we shall repeat the most
important steps here, to allow the reader to follow the algorithm development.
There are five difficulties associated with the discrete Fourier transform that we
shall have to solve:
a) the inability to transform some interesting functions (e.g., the unit step);
b) the correct treatment of the DC level in low pass systems;
c) preserving accuracy with as little spectral information input as possible;
d) finding to what extent our result is an approximation owed to the finite spectral density;
e) equally important, estimating the error owed to the finite spectral length.
Fig. 6.5.1: The impulse and step response of the 5th -order Butterworth system. The
impulse amplitude has been normalized to represent the response to an ideal,
infinitely narrow, infinite amplitude input impulse. The impulse response reaches the
peak value at the time equal to the envelope delay value at DC; this delay is also the
half amplitude delay of the step response. The step response first crosses the final
value at the time equal to the envelope delay maximum. Also the step response peak
value is reached when the impulse response crosses the zero level for the first time.
If the impulse response is normalized to have the area (the sum of all samples) equal
to the system DC gain, the step response would be simply a time integral of it.
The basic idea behind this method is that the Fourier transform is a special case
of the more general Laplace transform, and the Dirac impulse function is a special type
of signal for which the Fourier transform solution always exists. Comparing Eq. 1.3.8
and Eq. 1.4.3 and taking into account that s = σ + jω, we see that the Fourier transform
is simply the Laplace transform evaluated on the imaginary axis, s = jω.
Since the complex plane variable s is composed of two independent parts (real
and imaginary), F(s) may be treated as a function of two variables, σ and ω. This
can be most easily understood by looking at Fig. 6.4.1, in which the complex frequency
response (magnitude) of a 5-pole Butterworth function is plotted as a 3D function over
the Laplace plane.
In that particular case we had:

   F(s) = −s1·s2·s3·s4·s5 / [ (s − s1)(s − s2)(s − s3)(s − s4)(s − s5) ]        (6.5.2)
where s1 … s5 have the same values as in the example at the beginning of Sec. 6.4.1.
When the value of s in Eq. 6.5.2 comes close to the value of one of the poles,
si, the magnitude |F(s)| increases, becoming infinitely large for s = si.
Let us now restrict s to the imaginary axis by setting σ = 0, i.e., s = jω.
This has the effect of slicing the F(s) surface along the imaginary axis, as we
did in Fig. 6.4.1, revealing the curve on the surface along the cut, which is |F(jω)|, or
in words: the magnitude M(ω) of the complex frequency response. As we have
indicated in Fig. 6.4.5, we usually show it in a loglog scaled plot. However, for transient
response calculation a linear frequency scale is appropriate (as in Fig. 6.4.2), since we
need the result of the inverse transform in linear time scale increments.
Now that we have established the connection between the Laplace transformed
transfer function and its frequency response, we have another point to consider:
conventionally, the Fourier transform is used to calculate waveform spectra, so we need
to establish the relationship between a frequency response and a spectrum. We must
also explore the effect of taking discrete values (sampling) of the time domain and
frequency domain functions, and see to what extent we approximate our results by
taking finite length vectors of finite density sampled data. Those readers who would
like to embed the inverse transform in a microprocessor controlled instrument will have
to pay attention to amplitude quantization (finite word length) as well, but in Matlab
this is not an issue.
We have examined the Dirac function δ(t) and its spectrum in Part 1, Sec. 1.6.6.
Note that the spectral components are separated by Δω = 2π/T, where T is the
impulse repetition period. If we let T → ∞, then Δω → 0. Under these conditions we
can hardly speak of discrete spectral components, because the spectrum has become
very dense; we speak rather of spectral density. Also, instead of the magnitude of
individual components we speak of a spectral envelope, which for δ(t) is essentially
flat.
However, if we do not have an infinitely dense spectrum, then Δω is small but
not 0, and this merely means that the impulse repeats after a finite period T = 2π/Δω
(this is the mathematical equivalent of testing a system with an impulse of a duration
much shorter than the smallest system time constant and of a repetition period much
larger than the largest system time constant).
Now let us take such an impulse and present it to a system having a selective
frequency response. Fig. 6.5.2 shows the results both in the time domain and the
frequency domain (magnitude). The time domain response is obviously the system
impulse response, and its equivalent in the frequency domain is a spectrum, whose
density is equal to the input spectral density, but with the spectral envelope shaped by
the system frequency response. The conclusion is that we only have to sample the
frequency response at some finite number of frequencies and perform a discrete Fourier
transform inversion to obtain the impulse response.
Fig. 6.5.2: Time domain and frequency domain representation of a 5-pole Butterworth system
impulse response. The spectral envelope (only the magnitude is shown here) of the output is
shaped by the system frequency response, whilst the spectral density remains unchanged. From
this fact we conclude that the time domain response can be found from a system frequency
response using inverse Fourier transform. The horizontal scale is the number of samples (128 in
the time domain and 64 in the frequency domain — see the text for the explanation).
If we know the magnitude and phase response of a system at some finite number
of equally spaced frequency points, then each point represents one complex sample
of F(jω).
Eq. 6.5.5 is the discrete Fourier transform, with the exponential part
expressed in trigonometric form. However, if we were to plot the response calculated
by Eq. 6.5.5, we would see that the time axis is reversed; from the theory of
Fourier transform properties (the symmetry property, [Ref. 6.14, 6.15, 6.18]) we know
that the application of two successive Fourier transforms returns the original function,
but with the sign of the independent variable reversed, or, more generally:

   f(t) → F(jω) → f(−t) → F(−jω) → f(t)        (6.5.7)

where each arrow taken forwards is the Fourier transform F and each arrow taken
backwards is the inverse transform F⁻¹.
The main drawback in using Eq. 6.5.5 is the high total number of operations,
because there are three input data vectors of equal length (ω, M, φ) and each contributes
to every time point of the result. Greater efficiency can be obtained by using
the input frequency response data in complex form, with the frequency vector
represented by the index of the F(jω) vector.
Now, F(jω) in its complex form is a two sided spectrum, as was shown in
Fig. 6.4.3, and we are often faced with only a single sided spectrum. It can be shown
that a real valued f(t) will always have F(jω) symmetrical about the real axis σ. Thus:
Fig. 6.5.3: As in Part 1, Fig. 1.1.1, but from a slightly different perspective: a) the real signal
instantaneous amplitude a(t) = A sin φ, where φ = ωt; b) the real part of the instantaneous signal
phasor, A sin φ, can be decomposed into two half amplitude, oppositely rotating,
complex conjugate phasors, (A/2j)e^(jφ) − (A/2j)e^(−jφ). The second term has rotated by
φ = −ωt and, since t is obviously positive (see the a) graph), the negative sign is attributed to ω;
thus, the clockwise rotation is interpreted as a ‘negative frequency’.
hence, using Eq. 6.5.9 and Eq. 6.5.10 and taking into account the cancellation of the
imaginary parts, we obtain Eq. 6.5.11, which means that if FP(jω) is the ω ≥ 0 part of
the Fourier transform of the real valued function f(t), its Fourier transform inversion
fP(t) will be a complex function whose real part is equal to f(t)/2. Summing the
complex conjugate pair results in a doubled real valued f(t). So, by Eq. 6.5.10 and
Eq. 6.5.11, we can calculate the system impulse response from just one half of its
complex frequency response, using the forward Fourier transform (not the inverse!):

   f(t) = 2·Re{ [ F{ FP*(jω) } ]* }        (6.5.12)
Note that the second (outer) complex conjugate is there only to satisfy
mathematical consistency — in the actual algorithm it can be safely omitted, since only
the real part is required.
As the operator F{·} in Eq. 6.5.12 implies integration, we must use the discrete
Fourier transform (DFT) for the computation. The DFT can be defined by decomposing
the Fourier transform integral into a finite sum of N elements:
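Such a finite sum can be evaluated directly; the generic DFT sketch below (Python/NumPy, our own illustration rather than a reproduction of the book’s equation) makes the O(N²) matrix form explicit:

```python
import numpy as np

def dft(x):
    """Direct DFT: X_k = sum over n of x_n * exp(-2j*pi*n*k/N),
    evaluated as a full N-by-N matrix product (O(N^2), unlike the FFT)."""
    x = np.asarray(x, dtype=complex)
    N = len(x)
    nk = np.outer(np.arange(N), np.arange(N))     # all n*k index products
    return np.exp(-2j*np.pi*nk/N) @ x
```

The result coincides with the fast algorithm's output; the FFT merely factorizes this same sum.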
6.5.2 Windowing
If the system order and type could be precisely identified, this error might be
corrected by forcing the first point of the 1st-order impulse response to a value equal to
the sampling time period multiplied by the system DC gain, as has been done in the
TRESP routine.
In Sec. 6.5.6 we give a more detailed error analysis for 1st-, 2nd- and 3rd-order
systems, for both the windowed and the non-windowed spectrum.
[Plot: abs( F(jw) ) and abs( W .* F(jw) ) vs. frequency, w = (0:1:255)/8.]
Fig. 6.5.4: Windowing example. The 1st-order frequency response (only the
magnitude is shown on the plot) is multiplied element by element by the Hamming
type of window function in order to reduce the influence of high frequencies
and improve the impulse response calculation accuracy. Note that the window
function is real only, affecting the system real and imaginary part equally; thus
the phase information is preserved and only the magnitude is corrected.
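The windowing step of Fig. 6.5.4 can be sketched in Python (the book's routines are in Matlab); the exact window coefficients are not listed at this point, so a half-period Hamming taper falling from 1 at DC to 0.08 at the last sample is assumed:

```python
import math

N = 256
w = [n / 8 for n in range(N)]                 # frequency vector, as in Fig. 6.5.4
F = [1.0 / complex(1.0, wn) for wn in w]      # 1st-order frequency response

# assumed half-period Hamming taper, real and positive everywhere:
W = [0.54 + 0.46 * math.cos(math.pi * n / (N - 1)) for n in range(N)]

FW = [Wn * Fn for Wn, Fn in zip(W, F)]        # element by element product
```

Because W is real and positive, FW keeps the phase of F exactly; only the magnitude is tapered.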
The impulse response, as returned by Eq. 6.5.12, has an amplitude such that the
sum of all the N values is equal to N times the system gain. Also, if we are dealing
with a low pass system and the first point of its frequency response (the DC component)
is F(0), then the impulse response will be shifted vertically by F(0) (the DC
component is added). Thus we must first cancel this ‘DC offset’ by subtracting F(0)
and then normalize the amplitude by dividing by N:

f_n(t) = ( f(t) − F(0) ) / N     (6.5.14)
By default, the TRESP routine (see below) returns the impulse amplitude in the
same way, representing a unity gain system’s response. Optionally, we can denormalize
it as if the response were caused by an ideal, infinitely high impulse; then the 1st-order
response starts its exponential decay from a value very close to one, as it should. If the
system’s half power bandwidth, ω_h, is found at the m+1 element of the frequency
response vector, the amplitude denormalization factor will be:

A = N / (2π m)     (6.5.15)
The 2π factor comes as a bit of a surprise here. See Sec. 6.5.5 about time scale
normalization for an explanation.
The term m can be entered explicitly, as a parameter. But it can also be derived
from the frequency vector by finding the index at which it is equal to 1, or it can be
found by examining the magnitude and finding the index of the point nearest to the half
power bandwidth value (in both cases the index must be decremented by 1).
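Both ways of deriving m can be sketched in Python (0-based indexing already includes the decrement mentioned above; the 1st-order response serves as a stand-in):

```python
import math

N, m_true = 256, 8
w = [n / m_true for n in range(N)]             # normalized frequency vector
F = [1.0 / complex(1.0, wn) for wn in w]       # magnitude falls to 1/sqrt(2) at w = 1

# a) from the frequency vector: index at which w equals 1
m_a = min(range(N), key=lambda n: abs(w[n] - 1.0))

# b) from the magnitude: index of the point nearest to |F(0)|/sqrt(2)
half_power = abs(F[0]) / math.sqrt(2.0)
m_b = min(range(N), key=lambda n: abs(abs(F[n]) - half_power))
```

In Matlab's 1-based indexing both searches return index 9 and must be decremented; in 0-based indexing they return m directly.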
Another problem can be encountered with high order systems which exhibit a
high degree of ringing, e.g., Chebyshev systems of order 8 or greater. If m<8, some
additional ringing is introduced into the time domain response. This ringing results
from the time frame implicitly repeating with a period T = 2π/Δω, where Δω
describes the finite spectral density of the input data. If we specify the system cut off
frequency too near to the origin of the frequency vector, it causes a time scale
expansion; the overlapping of adjacent responses will then introduce distortion if the
impulse response has not decayed to zero by the end of the period T. Therefore the
choice of placing the cut off frequency relative to the frequency vector is a compromise
between the pass band and stop band description. In Matlab, the frequency vector of N
linearly spaced frequencies, normalized to 1 at its m+1 element, can be written as:
w=(0:1:N-1)/m;
The variable m specifies the normalized frequency unit. The transient response
of both Butterworth and Bessel systems can be calculated with good accuracy by using
a frequency vector normalized to 1 at its 5th sample (m=4). But by placing the cut off
frequency at the 9th sample (m=8) of a frequency vector of length 256, an acceptably
low error will be achieved even for a 10th-order Chebyshev system. For higher order,
high ringing systems one will probably need to increase the frequency vector to 512 or
1024 elements in order to prevent time window overlapping.
Up to this point we have obtained the impulse response. The step response is
not available directly from the Fourier transform (if the unit step is integrated, the
integral diverges to ∞; this is why we prefer the more general Laplace transform
for analytical work). However, from signal analysis theory we know that the response to
an arbitrary input signal can be found by convolving it with the system’s impulse
response (for convolution, see Part 1, Sec. 1.15 and Fig. 1.15.1; see Part 7, Sec. 7.2 for
the numerical algorithm). With the unit step as the input signal, the convolution
reduces to a simple time domain integration of the system impulse response.
But this integration must give us a final step response value equal, or at least
very close, to the system DC gain. So we must use the impulse response normalized so
that the sum of all its elements equals the system’s gain. Numerical integration can
be done by cumulative summing. This means that the first element is transferred
unchanged, the second element is the sum of the first two, the third of the first three,
and so on, up to the last element, which is the sum of all elements. The CUMSUM
command in Matlab returns in Y the cumulative sum of vector X like this:
Y=cumsum(X);
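The same cumulative summing is easy to sketch in Python; the exactly sampled 1st-order impulse response stands in for the FFT result here, normalized so that its sum equals the unity DC gain:

```python
import math

N, m = 256, 8
dt = 2 * math.pi * m / N
imp = [dt * math.exp(-k * dt) for k in range(N)]   # sampled impulse response
s = sum(imp)
imp = [v / s for v in imp]                         # force sum(imp) = DC gain = 1

step = []
acc = 0.0
for v in imp:              # cumulative sum: step[k] = imp[0] + ... + imp[k]
    acc += v
    step.append(acc)
# step[-1] now equals the system DC gain
```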
dt=2*pi*m/N; % delta-t
t=dt*(0:1:N-1); % normalized length-N time vector
Now, normalizing the time scale means showing it in increments of the system
time constant (RC, 2RC, 3RC, …). Thus we simply set RC = 1. For the starting
sample at t = 0 the response is f(0) = 1, so in order to obtain the response of a unity
gain system excited by a finite amplitude impulse we must denormalize the amplitude
(see Eq. 6.5.15) by 1/A:

f_r(t) = (1/A) e^{−t}     (6.5.18)
z=[]; % no zeros,
p=-1; % just a single real pole
N=256; % total number of samples
m=8; % samples in the frequency unit
w=(0:1:N-1)/m; % the frequency vector
dt=2*pi*m/N; % sampling time interval = 1/A
t=dt*(0:1:N-1); % the time vector
F=freqw(z,p,w); % the frequency response
In=(2*real(fft(conj(F)))-1)/N; % the impulse response
Ir=dt*exp(-t); % 1st-order ref., denormalized
plot( t, Ir, t, In )
title('Ideal vs. windowed response'), xlabel('Time')
plot( t(1:30), Ir(1:30), t(1:30), In(1:30) )
title('Zoom first 30 samples')
In the above example (see the plot in Fig. 6.5.5), we see that the final values of
the normalized impulse response In do not approach zero, and by zooming in on the first
30 points we can also see that the first point is too low and the rest somewhat lower
than the reference. Windowing can correct this:
This plot fits the reference much better. But the first point is still far too low.
From the amplitude denormalization factor, by which the reference was multiplied, we
know that the correct value of the first point should be 1/A = Δt. So we may force the
first point to this value, but, by doing so, we would alter the sum of all values by
N*(dt-I(1)). In order to obtain the correct final value of the step response, the impulse
response requires the correction of all points by 1/(1+(dt-I(1))/N), as in the
following example:
Likewise we can compare the calculated step response. Our reference is then:
But if the first-order impulse response is numerically integrated, the value of the
first sample of the step response will be equal to the value of the first sample of the
impulse response, instead of zero as it should be in the case of a low pass LTIC system.
Also, there is an additional problem resulting from numerical integration, which
manifests itself as a one half sample time delay. Remember what we observed
when we derived the envelope delay from the phase: numerical differentiation
assigned each result point to each difference pair of the original data, so that the
resulting vector was effectively shifted left in (log-scaled) frequency by the (geometrical)
mean of two adjacent frequency points, √(ω_n ω_{n+1}). Because we work with a linear
scale here, the shift is the arithmetic mean, Δω/2. Since numerical integration is the
inverse process of differentiation, the signal is shifted right. However, whilst the
differentiation vector had one sample less, the numerical integration returns the same
number of samples, not one more.
So, in order to see the actual shape of the error, we have to compensate for this
shift of one half sample. We can do this by artificially adding a leading zero to the
impulse response vector, then cumulatively summing the resulting N+1 elements and
finally taking the mean value of this and the version shifted by one sample, as in the
example below, which uses the vectors from above (see the result in Fig. 6.5.6):
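The compensation procedure can be sketched in Python on an arbitrary short impulse response; the point is the leading zero, the N+1 element cumulative sum, and the averaging of adjacent samples:

```python
imp = [0.5, 0.4, 0.3, 0.2, 0.1, 0.05]     # any short impulse response will do

c = [0.0]                                 # leading zero prepended
for v in imp:
    c.append(c[-1] + v)                   # cumulative sum of the N+1 elements

# mean of the cumulative sum and its one-sample-shifted version:
step = [0.5 * (c[k] + c[k + 1]) for k in range(len(imp))]
# step[k] = imp[0] + ... + imp[k-1] + imp[k]/2, i.e. trapezoidal integration
```

The result has the same number of samples as the impulse response, as the text notes.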
Note that the TRESP routine allows us to enter the actual denormalized
frequency vector, in which case all (but the first one) of its elements might be greater
than 1. The normalized frequency unit m is then found from the frequency response, by
checking which sample is closest to abs(F(1))/sqrt(2), and then decremented by
1 to compensate for the frequency vector starting from 0. But in the case of a
denormalized frequency vector we should also denormalize the time scale, by dividing
the sampling interval by the actual upper cut off frequency, which is w(m+1).
To continue with our 5th-order Butterworth example, we can now calculate the
impulse and step response by using the TRESP routine in which we have included all
the above corrections:
The results should look just like Fig. 6.5.1. Here is the TRESP routine:
function [y,t]=tresp(F,w,r,g)
%TRESP Transient RESPonse, using Fast Fourier Transform algorithm.
% Call : [y,t]=tresp(F,w,r,g);
% where:
% F --> complex-frequency response, length-N vector, N=2^B, B=int.
% w --> can be the related frequency vector of F, or it
% can be the normalized frequency unit index, or it
% can be zero and the n.f.u. index is found from F.
% r --> a character, selects the response type returned in y:
% - 'u' is the unity area impulse response (the default)
% - 'i' is the ideal impulse response
% - 's' is the step response
% g --> an optional input argument: plot the response graph.
% y --> the selected system response.
% t --> the normalized time scale vector.
% check magnitude slope between the 2nd and 3rd octave above cutoff
M=abs(diff(20*log10(abs(F(1+4*m*[1,2])))));
x=3; % system is 3rd-order or higher (>=18dB/2f)
if M < 9
x=1; % probably a 1st-order system (6dB/2f)
elseif M < 15
x=2; % probably a 2nd-order system (12dB/2f)
end
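The same slope test, sketched in Python (the magnitudes at w = 4 and w = 8 sit at 0-based indices 4*m and 8*m; note that the thresholds must be checked in ascending order, otherwise the 6 dB branch is never reached):

```python
import math

def order_guess(F, m):
    # attenuation difference between the 2nd and 3rd octave above cutoff
    M = abs(20 * math.log10(abs(F[4 * m])) - 20 * math.log10(abs(F[8 * m])))
    if M < 9:
        return 1          # ~6 dB per octave: probably 1st-order
    elif M < 15:
        return 2          # ~12 dB per octave: probably 2nd-order
    return 3              # >= ~18 dB per octave: 3rd-order or higher

m = 8
w = [n / m for n in range(256)]
F1 = [1.0 / complex(1.0, wn) for wn in w]                           # 1st-order
F2 = [1.0 / complex(1.0 - wn**2, math.sqrt(2.0) * wn) for wn in w]  # 2nd-order Butterworth
F3 = [1.0 / math.sqrt(1.0 + wn**6) for wn in w]                     # 3rd-order Butterworth (magnitude only)
```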
Whilst the amount of error in low order system impulse responses might seem
small, it would integrate to an unacceptably high level in the step responses if the input
data were not windowed. In Fig. 6.5.5 to 6.5.10 we have computed the difference
between the analytically and numerically calculated impulse and step responses of
Butterworth systems, using both the normal and the windowed frequency response for
the numerical method. Fig. 6.5.5 and 6.5.6 show the impulse and the step response of the
1st-order system, Fig. 6.5.7 and 6.5.8 show the 2nd-order system and Fig. 6.5.9 and
6.5.10 the 3rd-order system. The error plots, shown within each response plot, were
magnified 10 or 100 times to reveal the details.
The initial error of the 1st-order system’s impulse response, calculated from a
non-windowed frequency response, is about 0.08% and falls off quickly with time.
Nevertheless, it is about 10× higher than for the impulse response calculated from a
windowed response. If the frequency response is not windowed, the impulse response
error eventually integrates to almost 4% of the final value in the step response.
The error plots of the second and higher order systems are much smaller, but
they exhibit some decaying oscillations, independently of windowing. This oscillating
error is inherent in the FFT method and is due to the Gibbs phenomenon (see
Part 1, Sec. 1.2). It can easily be shown that the frequency response of a rectangular
time domain window follows a (sin x)/x curve, and the equivalent holds for the time
response of the frequency domain rectangular window (remember Eq. 6.5.7). Since we
have deliberately truncated the system’s frequency response at the Nth sample, we can
think of it as being a product of an infinitely long (but finite density) spectrum with a
rectangular frequency window. This results in a convolution of the system impulse
response with the (sin x)/x function in the time domain, hence the form of the error in
Fig. 6.5.7 to 6.5.10.
This error can be lowered by taking the transform of a longer frequency
response vector, but can never be eliminated. For example, if N=2048 and we specify
the frequency vector as w=(0:1:N-1)/8, the error will be 8 times lower than in the
case of a frequency vector with N=256, but the calculation will take more than 23 times
longer (the number of multiplications is proportional to N log₂ N, i.e., 11 times more, in
addition to 8 times the number of sums and other operations).
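The scaling can be checked numerically. A rough Python sketch (with a slow O(N²) DFT instead of the FFT, and smaller N to keep it quick) only verifies that the longer frequency vector gives a lower ripple error away from the t = 0 discontinuity; the Gibbs overshoot right at the jump does not shrink:

```python
import cmath, math

def halfspectrum_impulse(N, m):
    # Eq. 6.5.12/6.5.14 with a plain DFT, for F(jw) = 1/(1 + jw)
    F = [1.0 / complex(1.0, n / m) for n in range(N)]
    G = [sum(Fn.conjugate() * cmath.exp(-2j * math.pi * k * n / N)
             for n, Fn in enumerate(F)) for k in range(N)]
    return [(2.0 * Gk.real - 1.0) / N for Gk in G]

def max_error(N, m, t_min=1.0, t_max=5.0):
    dt = 2 * math.pi * m / N
    f = halfspectrum_impulse(N, m)
    ks = [k for k in range(N) if t_min <= k * dt <= t_max]
    return max(abs(f[k] / dt - math.exp(-k * dt)) for k in ks)

e_short = max_error(128, 8)
e_long = max_error(512, 8)   # 4x longer frequency vector, same spectral density
```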
As we have stated at the beginning, in Sec. 6.0, our aim is to make quick
comparisons of the performance of several systems, and on the basis of those decide
which system suits us best. Since the resolution of the computer VGA type of graphics
is more than adequate for this purpose, and the response error in the case of N=256 can
be seen only if compared with the exact response (and even then only as a one pixel
difference), the extra time and memory requirements do not justify the improvement.
A better way of calculating the response more accurately and also directly from
system poles and zeros is described in the next section.
[Plots for Fig. 6.5.5 and 6.5.6: impulse and step responses, normalized amplitude vs. t/T0. Legend: Ir, Sr = reference responses; In, Sn = responses from the normal frequency response; Iw, Sw = responses from the windowed frequency response; En = abs(In-Ir), Ew = abs(Iw-Ir) error traces.]
Fig. 6.5.5 and 6.5.6: The first 30 points of a 256 sample long 1st-order impulse
and step response vs. the analytically calculated references. The error plots En
and Ew are enlarged 10×. Although the impulse response calculated from the
normal frequency response has a relatively small error, it integrates to an
unacceptably high value (4%) in the step response. In contrast, by windowing the
frequency response both time domain errors are much lower, and the step response
final value is in error by less than 0.2%.
[Plots for Fig. 6.5.7 and 6.5.8: 2nd-order impulse and step responses, normalized amplitude vs. t/T0. Legend: Ir, Sr = reference responses; In, Sn = from the normal frequency response; Iw, Sw = from the windowed frequency response; En, Ew = error traces.]
Fig. 6.5.7 and 6.5.8: As in Fig. 6.5.5 and 6.5.6, but with 40 samples of a 2nd-order
Butterworth system. The impulse response error for the windowing procedure is higher
at the beginning, but falls off more quickly, therefore the step response final value error
is still much lower (note that the step response error plots are enlarged 100×). The
oscillations in the error plots, due to the Gibbs effect, also begin to show.
[Plots for Fig. 6.5.9 and 6.5.10: 3rd-order impulse and step responses, normalized amplitude vs. t/T0. Legend: Ir = reference response; In = from the normal frequency response; Iw = from the windowed frequency response; En = abs(In-Ir), Ew = abs(Iw-Ir).]
Fig. 6.5.9 and 6.5.10: As in Fig. 6.5.5–8, but with 50 samples of a 3rd-order
Butterworth system. Windowing no longer helps and even produces a greater
error. The dominant error is now due to the Gibbs effect.
The TRESP routine executes surprisingly fast. Back in 1987, when these Matlab
routines were developed and the first version of this text was written, I was using a
12 MHz PC with an i286-type processor, an i287 math coprocessor, and EGA type
graphics (640×400 resolution, 16 colors). To produce the 10 responses of Fig. 6.5.11
(starting from the system order, finding the system coefficients, extracting the poles,
calculating the complex frequency response, running the FFT to obtain the impulse
response, integrating for the step response and finally plotting it all on screen), that old
PC needed less than 12 seconds. Today (March 2000), a 500 MHz Pentium-III based
processor does it in a few tens of milliseconds (before you can release the ENTER key,
once you have pressed it; although it takes a lot more time for Matlab working under
Windows to open the graph window). And note that we are talking about floating point,
‘double precision’ arithmetic! Nevertheless, being able to make fast calculations has
become even more important for embedded instrumentation applications, which require
real time data processing and adaptive algorithms.
[Fig. 6.5.11: Normalized Amplitude vs. Normalized Time; the 10 responses referred to in the text above.]
The method of calculating the transient response by FFT, presented in Sec. 6.5,
has several advantages over other algorithms. The most important ones are high
execution speed, the possibility of computing from either a calculated complex
frequency response or from a measured magnitude–phase relationship, and the use of
the same FFT algorithm to work both time–to–frequency and frequency–to–time.
Its main disadvantage is the error resulting from the Gibbs effect, which
distorts the most interesting part of the time domain response. This error, although
small, can sometimes prevent the system designer from resolving or identifying the
cause of possible second-order effects that are spoiling the measured or simulated
system performance to which the desired ideal response is being compared. In such a
case the designer must have a firm reference, which should not be an approximation in
any sense.
The algorithm presented in this section, named ATDR (an acronym of
‘Analytical Time Domain Response’), calculates the impulse and step responses by
following the same analytical method that has been used extensively in the previous
parts of this book. The routine calculates the residues at each system transfer function
pole, and then calculates the final response at the specified time instants. However, the
residues are not calculated by an actual limiting process, so it is not
possible to apply this routine in the most general case (e.g., it fails for systems with
coincident poles), but this restriction is not severe, since all of the optimized system
families are covered properly. Readers who would like to implement a rigorously
universal procedure can obtain the residues calculated by the somewhat longer
RESIDUE routine in Matlab.
In contrast to the FFT method, whose execution time is independent of system
complexity, this method works more slowly for each additional pole or zero.
A nice feature of this method is that the user has direct control over the time
vector: the response is calculated at exactly those time instants which were specified by
the user. This may be important when making comparisons with the measured response
of an actual system prototype.
As we have seen in the numerous examples solved in the previous parts, a general
expression for the residue at a pole p_k of an nth-order system specified by Eq. 6.1.10 can
be written like this:
r_k = lim_{s→p_k} (s − p_k) · [ ∏_{i=1..n} (−p_i) · ∏_{j=1..m} (s − z_j) ] / [ ∏_{i=1..n} (s − p_i) · ∏_{j=1..m} (−z_j) ] · e^{p_k t}     (6.6.1)
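For simple poles and no zeros, Eq. 6.6.1 reduces to r_k = ∏(−p_i) / ∏_{i≠k}(p_k − p_i) · e^{p_k t}. A minimal Python sketch of that reduced case, checked against the known impulse response of the 2nd-order Butterworth system, √2·e^{−t/√2}·sin(t/√2):

```python
import cmath, math

def impulse_response(poles, t):
    # unity DC gain all-pole system: h(t) = sum_k r_k * exp(p_k * t)
    num = complex(1.0)
    for p in poles:
        num *= -p                          # prod(-p_i): DC gain normalization
    h = complex(0.0)
    for k, pk in enumerate(poles):
        den = complex(1.0)
        for i, pi in enumerate(poles):
            if i != k:
                den *= pk - pi             # prod over i != k of (p_k - p_i)
        h += num / den * cmath.exp(pk * t)
    return h.real                          # imaginary parts cancel in the sum

# 2nd-order Butterworth poles, p = (-1 +/- j)/sqrt(2):
poles = [complex(-1.0, 1.0) / math.sqrt(2.0), complex(-1.0, -1.0) / math.sqrt(2.0)]
```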
The resulting plot should look the same as in Fig. 6.5.1 (but with much better
accuracy!).
function y=atdr(z,p,t,q)
%ATDR Analytical Time Domain Response by simplified residue calculus
% (does not work for systems with multiple poles).
% y=atdr(z,p,t) or
% y=atdr(z,p,t,'n') returns the normalized impulse response of a
% unity gain system, specified by zeros z and
% poles p in time t.
% y=atdr(z,p,t,'i') returns the impulse response, denormalized to
% the ideal impulse input.
% y=atdr(z,p,t,'s') returns the step response of the system.
%
% Specify the time as : t=(0:1:N-1)/T, where N is the number of
% desired time domain samples and T is the number of samples in
% the time scale unit, i.e.: t=(0:1:200)/10
if nargin==3
q='n' ; % by default, return the unity gain impulse response
end
n=max(size(p)); % find the number of poles
for k=1:n % test for repeating poles
P=p;
P(k)=[ ]; % exclude the pole currently tested
if any( abs(P-p(k)) == 0 ) % is there another such pole ?
error('ATDR cannot handle systems with repeating poles!')
end
end
dc=1; % set low pass system flag
if isempty(z)
Z=1; % no zeros
else
% zeros
if any( abs(z) <1e-6 )
dc=0; % HP or BP system, clear dc flag
end
if all( abs( real( z ) ) < 1e-6 )
z = j * imag( z ) ; % all zeros on imaginary axis
end
Z=ones(size(p)) ;
if dc
for k=1:n
Z(:,k)=prod(p(k)-z)/prod(-z);
end
else
for k = 1:n
for h = 1:length(z)
if z(h) == 0
Z(k,:) = Z(k,:)*p(k) ;
else
Z(k,:) = Z(k,:)*(p(k)+z(h))/z(h) ;
end
end
end
end
Z=Z(:); % column-wise orientation
end
if n == 1
D=1; % single pole case
else
for k = 1:n
d=p(k)-p; % difference, column orientation
d(k)=[ ]; % k-th element = 0, eliminate it
D(:,k)=d; % k-th column of D
end
if n > 2
D=(prod(D)); % make column-wise product if D is a matrix
end
D=D.'; % column-wise orientation
end
P=prod(-p)*Z./D; % impulse residues
if q == 's'
P=P./p; % if step response is required, divide by p
end
t=t(:).'; % time vector, row orientation
y=P(1)*exp(p(1)*t); % response, first row
for k = 2:n
y=[y; P(k)*exp(p(k)*t)]; % next row
y=sum(y); % sum column-wise, return a single row
end
y=real(y); % result is real only (imaginary parts cancel)
if (q == 's') & ( isempty(z) | dc == 1 )
y=y+1; % if step resp., add 1 for the pole at 0+j0
end
if ( q == 'i' | q == 'n' ) & ( dc == 0 )
y=-diff([0, y]); % impulse response of a high pass system
end
if q == 'n'
y=y/abs(sum(y)); % normalize impulse resp. to unity gain
end
[Plot for Fig. 6.7.1: pole locations in the complex plane, ℜ{s} and ℑ{s} axes scaled ×10³ (range −2 to 2), showing the Butterworth system and Bessel system poles.]
Fig. 6.7.1: The Butterworth poles (on the unit circle) and the
Bessel–Thomson poles (on the fitted ellipse). Note that for the
same bandwidth (1 kHz) the values of the Bessel–Thomson poles are
much larger, but with a lower ratio of the imaginary to the real part.
The frequency response plots are shown in Fig. 6.7.2. Note the equal pass band
(3 dB point) and the equal slope at high frequencies. However, the Butterworth system
attenuation is an order of magnitude (20 dB) better.
[Plot for Fig. 6.7.2: magnitude M [dB] vs. f [kHz] (0.1 to 10), showing the −3 dB point and the 5 pole Butterworth and 5 pole Bessel–Thomson system responses down to −100 dB.]
Fig. 6.7.2: Frequency responses of the Butterworth and Bessel–Thomson systems. For an
equal cut off frequency (f_h = 1 kHz), the Butterworth system stop band attenuation is
about an order of magnitude (10× or 20 dB) better than that of the Bessel–Thomson.
Using the same poles and the ATDR routine, we compare the step responses:
[Plot for Fig. 6.7.3: step responses, amplitude vs. t [ms] (0 to 3), of the 5 pole Butterworth and 5 pole Bessel–Thomson systems.]
Fig. 6.7.3: Step responses of the Butterworth and Bessel–Thomson systems. For the same cut off
frequency (1 kHz) the Bessel–Thomson system’s delay is smaller; the overshoot is only 0.4% and
there is no ringing, so settling to within 0.1% occurs within the first 1 ms. Although the rise times
are nearly equal, the Butterworth system is a poor choice if time domain performance is required,
since it settles to within 0.1% only after some 5 ms (and Chebyshev and elliptic filter systems are
even worse in this respect).
Résumé of Part 6
The algorithms shown are small, simple, easy to use, and fast in execution. They
are ideal for starting a system design from scratch and for specifying the design goals, as
well as for providing a reference with which a realized prototype can be compared.
We have shown how the system performance can easily be evaluated by using
the routines developed for Matlab, in particular the prediction of the system time domain
response. We also hope that the development and application examples of these
routines offer a deeper insight into how the system should be designed as a whole.
Still, the reader, as the future system designer, is left alone with the most
demanding task of finding the circuitry and hardware that will perform as required, and
engineering experience is the only help here. This book should help in understanding how
it might be possible to push the bandwidth up, smooth the transient, and reduce the
settling time. But there are also many other important parameters which must be
carefully considered when designing an amplifier, such as noise, linearity, electrical and
thermal stability, output power, slew rate limiting, the time it takes to recover from
overdrive, etc.
However, these parameters (with the exception of electrical stability) are in
most cases independent of the system pole and zero locations, but are strongly
influenced by the circuit’s topology and by the type of active devices used for the
realization.
Once the design goals have been set and the circuit configuration selected,
performance verification and iterative finalization can then be done using one of the
many CAD/CAE programs available on the market.
To see the numerical convolution routine and calculation examples and an
actual amplifier–filter system design example calculated using the algorithms
developed so far, please turn to Part 7.
References:
P. Starič, E. Margan:
Wideband Amplifiers
Part 7: Algorithm Application Examples
Contents:
- 7.3 -
P.Starič, E.Margan Algorithm Application Examples
List of Figures:
Fig. 7.1.1: Convolution example: Response to a sine wave burst ..................................................... 7.11
Fig. 7.1.2: Checking Convolution: Response to a Unit Step ............................................................. 7.12
Fig. 7.1.3: Convolution example: 2-pole Bessel + 2-pole Butterworth System Response ................ 7.13
Fig. 7.1.4: Input signal example used for spectral domain processing .............................................. 7.14
Fig. 7.1.5: Spectral domain multiplication is equivalent to time domain convolution ...................... 7.14
Fig. 7.1.6: Time domain result of spectral domain multiplication .................................................... 7.15
Fig. 7.2.1: Aliasing (frequency mirroring) in sampling systems ....................................................... 7.18
Fig. 7.2.2: Alias of a signal equal in frequency to the sampling clock .............................................. 7.18
Fig. 7.2.3: Alias of a signal slightly higher in frequency than the sampling clock ............................ 7.19
Fig. 7.2.4: Alias of a signal equal in frequency to the Nyquist frequency ......................................... 7.20
Fig. 7.2.5: Same as in Fig. 7.2.4, but with a 45° phase shift ............................................................. 7.20
Fig. 7.2.6: Spectrum of a sweeping sinusoidal signal follows the (sin ω)/ω function ..................... 7.21
Fig. 7.2.7: Magnitude of Bessel systems of order 5, 7 and 9, with equal attenuation at f_N .............. 7.23
Fig. 7.2.8: Step response of Bessel systems of order 5, 7 and 9 ........................................................ 7.25
Fig. 7.2.9: Alias spectrum of a 7-pole filter with a higher cut off frequency ................................... 7.26
Fig. 7.2.10: The inverse of the alias spectrum is the digital filter attenuation required .................... 7.27
Fig. 7.2.11: Comparing the poles: 13-pole A+D system and the 7-pole analog only system ............ 7.28
Fig. 7.2.12: Bandwidth improvement of the A+D system against the analog only system ................ 7.30
Fig. 7.2.13: Step response comparison of the A+D system and the analog only system ................... 7.31
Fig. 7.2.14: Envelope delay comparison of the A+D System and the analog only system ................ 7.32
Fig. 7.2.15: Convolution as digital filtering — the actual 13-pole A+D step response ..................... 7.33
Fig. 7.2.16: Complex plane plot of a mixed mode filter with zeros .................................................. 7.35
Fig. 7.2.17: Frequency response of a mixed mode filter with zeros .................................................. 7.35
Fig. 7.2.18: Alias spectrum of a mixed mode filter with zeros .......................................................... 7.35
Fig. 7.2.19: Time domain response of a mixed mode filter with zeros ............................................. 7.36
Fig. 7.2.20: Multiple Feedback 3-pole Low Pass Filter Configuration (MFB-3) .............................. 7.37
Fig. 7.2.21: Multiple Feedback 2-pole Low Pass Filter Configuration (MFB-2) .............................. 7.37
Fig. 7.2.22: Realization of the 7-pole Analog Filter for the 13-pole Mixed Mode System ............... 7.43
List of Routines:
7.0 Introduction
where τ is a fixed time constant, its value chosen so that f(τ − t) is time reversed.
Usually, it is sufficient to make τ large enough to allow the system impulse response
f(t) to completely relax and reach the steady state again (not just the first zero-
crossing point!).
If x(t) was applied to the system at t₀, then this can be the lower limit of
integration. Of course, the time scale can always be renormalized so that t₀ = 0. The
upper integration limit, labeled t₁, can be wherever needed, depending on how much
of the input and output signal we are interested in.
Now, in Eq. 7.1.1 dt implicitly approaches zero, so there would be an
infinite number of samples between t₀ and t₁. Since our computers have a limited
amount of memory (and we have a limited amount of time!), we must make a
compromise between the sampling rate and the available memory length and adjust
them so that we cover the signal of interest with enough resolution in both time and
1
Bounded input → bounded output. This property is a consequence of our choice of basic mathematical
assumptions; since our math tools were designed to handle an infinite amount of infinitesimal
quantities, BIBO is the necessary condition for convergence. However, in the real analog world, we
are often faced with UBIBO requirements (unbounded input), i.e., our instrumentation inputs must be
protected from overdrive. Interestingly, the inverse of BIBO is in widespread use in the computer
world; in fact, any digital computer is a GIGO type of device (garbage in → garbage out; unbounded!).
2
Linearity, Time Invariance, Causality. Although some engineers consider oscillators to be ‘acausal’,
there is always a perfectly reasonable cause why an amplifier oscillates, even if we fail to recognise it
at first.
amplitude. So if M is the number of memory bytes reserved for x(t), the required
sampling time interval is:

Δt = (t₁ − t₀) / M     (7.1.2)
Then, if Δt replaces dt, the integral in Eq. 7.1.1 transforms into a sum of M
elements, x(t) and y(t) become vectors x(n) and y(n), where n is the index of a
signal sample location in memory, and f(τ − t) becomes f(m-n), with m=length(f),
resulting in:

y(n) = Σ_{n=1}^{M} f(m-n)·x(n)     (7.1.3)
Here Δt is implicitly set to 1, since the difference between two adjacent
memory locations is a unit integer. Good book-keeping practice, however,
recommends the construction of a separate time scale vector, with values from t₀ to
t₁, in increments of Δt between adjacent values. All other vectors are then plotted
against it, as we have seen done in Part 6.
In Part 1 we have seen that solving the convolution integral analytically can be
a time consuming task, even for a skilled mathematician. Sometimes, even if x(t) and
f(t) are analytic functions, their product need not be elementarily integrable in the
general case. In such cases we prefer to take the Laplace transform route; but this route can
sometimes be equally difficult. Fortunately, numerical computation of the convolution
integral, following Eq. 7.1.3, can be programmed easily:
function y=vcon(f,x)
%VCON Convolution, step-by-step example. See also CONV and FILTER.
%
% Call : y=vcon(f,x);
% where: x(t) --> the input signal
% f(t) --> the system impulse response
% y(t) --> the system response to x(t) by convolving
% f(t) with x(t).
% If length(x)=nx and length(f)=nf, then length(y)=nx+nf-1.
To get a clearer view of what the VCON routine is doing, let us write a short
numerical example, using a 6-sample input signal and a 3-sample system impulse
response, and display every intermediate result of the matrix y in VCON:
0 1 6 13 18 19 12 -6
0 0 0 -1 -3 -5 -6 -6
% after 2 iterations (because f is only 3 samples long)
% the result is the first row of y:
0 1 6 13 18 19 12 -6
% actually, the result is only the first 6 elements:
0 1 6 13 18 19
% since there are only 6 elements in x, the process assumes the rest
% to be zeros. So the remaining two elements of the result represent
% the relaxation from the last value (19) to zero by the integration
% of the system impulse response f.
0 1 3 5 6 6
-1 3 1
( ↓ *) ==> 0 ( → +) ==> 0
0 1 3 5 6 6
-1 3 1
( ↓ *) ==> 0 1 ( → +) ==> 1
0 1 3 5 6 6
-1 3 1
( ↓ *) ==> 0 3 3 ( → +) ==> 6
0 1 3 5 6 6
-1 3 1
( ↓ *) ==> 0 -1 9 5 ( → +) ==> 13
% ......... etc.
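The trace above can be verified with a few lines of plain Python (an illustrative sketch of direct-form convolution, not the book's Matlab code). Note that the kernel vector is [1, 3, -1]; the trace displays it time-reversed as [-1 3 1] because it slides across x:

```python
# Direct-form convolution of the example above: each input sample adds a
# shifted, scaled copy of the impulse response.
x = [0, 1, 3, 5, 6, 6]              # 6-sample input signal
f = [1, 3, -1]                      # 3-sample impulse response
y = [0] * (len(x) + len(f) - 1)
for n, xn in enumerate(x):          # superpose shifted, scaled copies of f
    for m, fm in enumerate(f):
        y[n + m] += xn * fm
# y == [0, 1, 6, 13, 18, 19, 12, -6], matching the trace above
```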
For convolution Matlab has a function named CONV, which uses a built-in
FILTER command to run substantially faster, but then the process remains hidden
from the user; however, the final result is the same as with VCON. Another property
of Matlab is the matrix indexing, which starts with 1 (see the lower limit of the sum
symbol in Eq. 7.1.3), in contrast to most programming languages which use memory
‘pointers’ (base address + offset, the offset of the array’s first element being 0).
Let us now use the VCON routine in a real-life example. Suppose we have a
gated sine wave generator connected to the same 5th-order Butterworth system which
we inspected in detail in Part 6. Also, let the Butterworth system's half power
bandwidth be 1 kHz and the generator frequency 1.5 kHz, and let us turn on the gate at
the instant the signal crosses the zero level. From the frequency response calculations, we
know that the forced response amplitude (long after the transient) will be:
Aout=Ain*abs(freqw(z,p,1500/1000));
where z are the zeros and p are the poles of the normalized 5th-order Butterworth
system; the signal frequency is normalized to the system's cut-off frequency.
But how will the system respond to the signal’s ‘turn on’ transient? We can
simulate this using the algorithms we have developed in Part 6 and VCON:
The convolution result, compared to the input signal and the system impulse
response, is shown in Fig. 7.1.1 .
Note that we have plotted only the first nt samples of the convolution result;
however, the total length of y is length(x)+length(Ir)-1, or one sample less than
the sum of the input signal and the system response lengths. The first length(x)=nt
samples of y represent the system's response to x, whilst the remaining
length(Ir)-1 samples are the consequence of the system relaxation: since there are
no more signal samples in x after the last point x(nt), the convolution assumes that
the input signal is zero and calculates the system relaxation from the last signal value.
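This length book-keeping and the relaxation tail can be checked with a small Python sketch (illustrative; the helper name and sample values are ours):

```python
# A 5-sample unit step convolved with a 3-sample impulse response whose
# samples sum to 1 (values invented for illustration).
def conv(f, x):
    # direct-form convolution; len(y) = len(x) + len(f) - 1
    y = [0.0] * (len(x) + len(f) - 1)
    for n, xn in enumerate(x):
        for m, fm in enumerate(f):
            y[n + m] += xn * fm
    return y

x = [1.0] * 5
f = [0.5, 0.3, 0.2]
y = conv(f, x)
# the first len(x) samples rise to and hold the forced value 1.0;
# the last len(f)-1 samples (0.5, 0.2) are the relaxation back to zero
```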
Fig. 7.1.1: Convolution example: response y(t) to a sine wave x(t) switched on into
a 5th-order Butterworth system, whose impulse response is Ir(t), shown here in its
ideal size (instead of unity gain); both are delayed by the same switch-on time
(0.5 ms). The system responds by phase shifting and amplitude modulating the first
few wave periods, finally reaching the forced ('steady state') response.
The resulting step response, shown in Fig. 7.1.2, should be identical to that of
Fig. 6.1.11, Part 6, neglecting the initial 0.5 ms (25 samples) time delay and the
different time scale:
Fig. 7.1.2: Checking convolution: response y(t) of the 5th-order Butterworth
system to the unit step h(t). The system's impulse response Ir(t) is also shown, but
in its ideal size (not unity gain). Apart from the 0.5 ms (25-sample) time delay and
the time scale, the step response is identical to the one shown in Part 6, Fig. 6.1.11.
We can now revisit the convolution integral example of Part 1, Sec. 1.14,
where we had a unit-step input signal, fed to a two-pole Bessel–Thomson system,
whose output was in turn fed to a two-pole Butterworth system. The commands in the
following window simulate the process and the final result of Fig. 1.14.1. But this
time, let us use the frequency to time domain transform of the TRESP (Part 6) routine.
See the result in Fig. 7.1.3 and compare it to Fig. 1.14.1g.
Fig. 7.1.3: Convolution example of Part 1, Sec. 1.14. A Bessel–Thomson
2-pole system step response S1(t) has been fed to the 2-pole Butterworth
system and convolved with its impulse response I2(t), resulting in the
output step response y(t). Compare it with Fig. 1.14.1g.
a=max(find(t<=5e-6));
b=min(find(t>=20e-6));
plot( t(a:b), g(a:b), '-g', t(a:b), y(a:b), '-b' )
xlabel('Time [\mus]') % see Fig.7.1.6
Fig. 7.1.4: The input signal R(t) used for the spectral-domain convolution
example (first 1200 samples of the 2048 total record length).
Fig. 7.1.5: The spectrum G(f) of the signal in Fig. 7.1.4 is multiplied by the
system's frequency response F(f) to produce the output spectrum Y(f). Along
with the modulated signal centered at 560 kHz, there is a strong 2.8 MHz
interference from another source and a high level of white noise (rising with
frequency), both being substantially reduced by the filter.
Fig. 7.1.6: The output spectrum is returned to the time domain as y(t) and is
compared with the input signal R(t), in an expanded time scale. Note the small
change in amplitude, the reduced noise level and the envelope delay (approx.
1/4 period time shift), with little change in phase. The time shift is equal to 1/2
the number of samples of the filter impulse response.
Fig. 7.1.6 illustrates the dramatic improvement in signal quality that can be
achieved by using Bessel filters.
In MRI systems the test object is put in a strong static magnetic field. This
causes the nucleons of the atoms in the test object to align their magnetic spin to the
external field. Then a short RF burst, having a well defined frequency and duration, is
applied, tilting the nucleon spin orientation perpendicular to the static field (this
happens only to those nucleons whose resonant frequency coincides with that of the
RF burst).
After the RF burst has ceased, the nucleons gradually regain their original spin
orientation in a top-like precessing motion, radiating away the excess electromagnetic
energy. This EM radiation is picked up by the sensing coils and detected by an RF
receiver; the detected signal has the same frequency as the excitation frequency, both
being functions of the static magnetic field and the type of nucleons. Obviously, the
intensity of the detected radiation is proportional to the number of nucleons having
the same resonant frequency3.
In addition, since the frequency is field dependent, a small field gradient can be
added to the static magnetic field in order to split the response into a broad spectrum.
The shape of the response spectral envelope then represents the spatial density of the
specific nucleons in the test object. By rotating the gradient around the object the
recorded spectra would represent the ‘sliced view’ of the object from different angles.
A computer can be used to reconstruct the volumetric distribution of particular atoms
through a process called ‘back-projection’ (in effect, a type of spatial convolution).
From this short description of the MRI technique it is clear that the most vital
parameter of the filter applied to smooth the recorded signal is its group delay
flatness. Only a filter with a group delay that remains flat well into the stop band will
be able to faithfully deliver the filtered signal, preserving its shape in both the time
and the frequency domain; Bessel–Thomson filters are ideal in this sense.
Consequently, a sharper image of the test object is obtained.
3
The 1952 Nobel prize for physics was awarded to Felix Bloch and Edward Mills Purcell for their
work on nuclear magnetic resonance; more info at <http://nobelprize.org/physics/laureates/1952/>.
The purpose of filtering the signal above the Nyquist frequency is to avoid
‘aliasing’. Fig. 7.2.1 shows a typical situation resulting in a signal frequency alias in
relation to the sampling clock frequency.
Fig. 7.2.1: Aliasing (frequency mirroring). A high frequency signal S, sampled by an ADC
at each rising edge of the clock C of a comparably high frequency, cannot be distinguished
from its low frequency alias A, which is equal to the difference between the signal and
clock frequency, fa = fs − fc. In this figure, fs = (9/10) fc, therefore fa = −(1/10) fc.
(Yes, a negative frequency! This can be verified by increasing the clock frequency very
slightly and watching the aliased signal apparently moving backwards in time.)
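The identity fa = fs − fc is easy to confirm numerically; the following plain Python sketch (ours, with normalized frequencies) shows that the signal and its alias produce exactly the same sample values:

```python
import math

# A sine at fs = 0.9*fc, sampled once per clock period (t = n/fc),
# yields exactly the same samples as its alias at fa = fs - fc = -0.1*fc.
fc = 1.0                    # normalized clock frequency
fs = 0.9 * fc               # signal frequency
fa = fs - fc                # alias frequency (negative!)
sig   = [math.sin(2 * math.pi * fs * n / fc) for n in range(10)]
alias = [math.sin(2 * math.pi * fa * n / fc) for n in range(10)]
# sig and alias are indistinguishable sample sets
```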
A wheel, rotating at the cycle frequency fw equal to the picture rate fp (or its
integer multiple or sub-multiple, fw = n·fp/m, where m is the number of wheel
arms), would be perceived as stationary. Likewise, if an ADC's sampling frequency is
equal to the signal frequency (see Fig. 7.2.2), the apparent result is a DC level.
Fig. 7.2.2: The alias of a signal equal in frequency to the sampling clock looks like a DC level.
Fig. 7.2.3: A signal frequency slightly higher than the sampling frequency aliases
into a low frequency, equal to the difference of the two (but now positive).
Here is the ALIAS routine for Matlab, by which we can calculate the aliasing
for any clock and signal frequency desired.
function fa=alias(fs,fc,phi)
% ALIAS calculates the alias frequency of a sampled sinewave signal.
% Call : fa = alias( fs, fc, phi ) ;
% where: fs is the signal frequency
% fc is the sampling clock frequency
% phi is the initial signal phase shift
if nargin < 3
phi = pi/3 ; % signal phase shift re. clk, arbitrary value
end
ofs = 2 ; % clock offset
A = 1/ofs ; % clock amplitude
m = 100 ; % signal reconstruction factor is equal to
% the number of dots within a clock period
N = 1 + 10 * m ; % total number of dots
dt = 1 / ( m * fc ) ; % delta-t for time reconstruction
t = dt * ( 0 : 1 : N ) ; % time vector
Of course, the sampled signal is more often than not a spectrum, either
discrete or continuous, and aliasing applies to a spectrum as well as to discrete
frequency signals. In fact, the superposition theorem applies here, too.
We have noted that a sampled spectrum is symmetrical about the sampling
frequency, because a signal sampled by a clock with exactly the same frequency
aliases as a DC level which depends on the initial signal phase shift relative to the
clock. However, something odd already happens at the Nyquist frequency, as can be
seen in Fig. 7.2.4 and Fig. 7.2.5. In both figures the signal frequency is equal to the
Nyquist frequency (1/2 the sampling frequency), but differs in phase. Although the
correct alias signal is equal in amplitude to the original signal, we perceive an
amplitude which varies with phase.
Fig. 7.2.4: When the signal frequency is equal to the Nyquist frequency,
there are two samples per period and the correct alias signal is of the same
amplitude as the original signal. However, the perceived alias amplitude is a
function of the phase difference between the signal and the clock. A 10°
phase shift results in a low apparent amplitude, as shown by the X waveform.
Fig. 7.2.5: Same as in Fig. 7.2.4, but with a 45° phase shift.
The apparent amplitude of X is now higher.
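The phase dependence can be checked with a short Python sketch (illustrative; the helper name is ours): sampling sin(2π·fN·t + φ) at fs = 2·fN, i.e. at t = n/(2·fN), gives the samples (−1)^n·sin φ, so the apparent amplitude is |sin φ|:

```python
import math

# Apparent amplitude of a Nyquist-frequency sine as a function of its
# initial phase, from the samples sin(pi*n + phi) = (-1)^n * sin(phi).
def apparent_amplitude(phi, N=100):
    return max(abs(math.sin(math.pi * n + phi)) for n in range(N))

a10 = apparent_amplitude(math.radians(10.0))   # low, as in Fig. 7.2.4
a45 = apparent_amplitude(math.radians(45.0))   # higher, as in Fig. 7.2.5
a90 = apparent_amplitude(math.radians(90.0))   # full amplitude
```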
Fig. 7.2.6: The spectrum resulting from sampling a constant amplitude sinusoidal signal
varying in frequency from 0.1 fN to 10 fN follows the sin(ωTs)/(ωTs) function, where
Ts = 1/fs. The function is shown in the linear vertical scale on the left and in the log of the
absolute value on the right. The first zero occurs at the Nyquist frequency, the second at the
sampling frequency and so on, at every Nyquist harmonic. Note that the effective sampling
bandwidth fh is reduced to about 0.43 fN. The asymptote is the same as for a simple RC low
pass filter, −20 dB/10f, with a cut-off at fa = fN/√10.
The aliasing amplitude follows this same sin(ωTs)/(ωTs) function, from the
Nyquist frequency up. An important side effect is that the bandwidth is reduced to
about 0.43 fN. This can be taken into account when designing an input anti-aliasing
filter, partially compensating the function's droop below the Nyquist frequency by
adequate peaking.
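The quoted effective bandwidth can be verified numerically; a Python sketch (ours) bisecting for the −3 dB point of sin(ωTs)/(ωTs) gives about 0.44 fN, consistent with the ≈0.43 fN figure above:

```python
import math

# Bisect for the -3 dB point of sin(w*Ts)/(w*Ts). With Ts = 1/fs and
# fN = fs/2, the argument is x = pi * (f/fN).
def sinc_mag(f_over_fN):
    x = math.pi * f_over_fN
    return math.sin(x) / x if x > 0.0 else 1.0

target = 1.0 / math.sqrt(2.0)       # -3 dB level
lo, hi = 0.1, 1.0                   # sinc_mag is decreasing on (0, 1)
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if sinc_mag(mid) > target:
        lo = mid
    else:
        hi = mid
fh = 0.5 * (lo + hi)                # about 0.44, in units of fN
```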
By the term ‘mixed mode filter’ we mean a combination of analog and digital
filtering which gives the same result as a single filter having the same total number of
poles. The simplest way to understand the design requirements and optimization, as
well as the advantages of such an approach, is by following an example.
Let us imagine a sampling system using an ADC with a 12 bit amplitude
resolution and a 50 ns time resolution (sampling frequency fs = 20 MHz). The
number of discrete levels resolved by 12 bits is A = 2^12 = 4096; the ADC relative
resolution level is simply 1/A, or in dB, a = 20 log10(1/A) = −72 dB. According to
the Shannon sampling theorem, the frequencies above the Nyquist frequency
(fN = fs/2 = 10 MHz) must be attenuated by at least 2^12 to reduce the alias of the
high frequency spectral content (signal or noise) below the ADC resolution.
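These resolution figures can be reproduced directly (a Python sketch, for illustration):

```python
import math

# The ADC resolution figures quoted above, computed directly.
A = 2 ** 12                      # 4096 discrete levels for 12 bits
a_dB = 20 * math.log10(1 / A)    # relative resolution level in dB
# a_dB is about -72.2 dB, rounded to -72 dB in the text
```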
As we have just learned, the sin(ωTs)/(ωTs) shape of the alias spectrum
allows us to relax the filter requirements by some 2 bits (a factor of 4.5, or 13 dB) at
the frequency 0.7 fs; for a while, we are going to neglect this, leaving it for the end of
our analysis.
Let us also assume a 4 V peak to peak ADC input signal range and let the
maximum required vertical amplifier sensitivity be 5 mV/division. Since oscilloscope
displays usually have 8 vertical divisions, this means 40 mV of input for a full scale
display, or a gain of 100. We would like to achieve the required gain–bandwidth
product with either a two- or a three-stage amplifier. We shall assume a 5-pole filter
for the two-stage amplifier (a 3-pole and a 2-pole stage), and a 7-pole filter for the
three-stage amplifier (one 3-pole stage and two 2-pole stages). We shall also inspect
the performance of a 9-pole (four-stage) filter to see if the higher bandwidth (achieved
as a result of a steeper cut off) justifies the cost and circuit complexity of one
additional amplifier stage.
Now, if our input signal were of a square wave or pulse form, our main
requirement would be the shortest possible ADC 'aperture' time and an analogue
bandwidth as high as possible. Then we would be able to recognize the sampled
waveform shape even with only 5 samples per period. But suppose we would like to
record a transient event having the form of an exponentially decaying oscillating
wave, along with lots of broadband noise, something like the signal in Fig. 7.1.4. To
do this properly we require both aliasing suppression of the spectrum beyond the
Nyquist frequency and preservation of the waveform shape; the latter requirement limits
our choice of filter systems to the Bessel–Thomson family.
Finally, we shall investigate the possibility of improving the system bandwidth
by filtering the recorded data digitally.
We start our calculations from the requirement that any anti-alias filter must
have an attenuation at the Nyquist frequency fN equal to the ADC resolution level.
Since we know that the asymptotic attenuation slope depends on the system order n
(number of poles) as n × 20 dB/10f, we can follow those asymptotes from fN back
to the maximum signal level; the crossing point then defines the system cut-off
frequency fhn for each of the three filter systems.
Since we do not have an explicit relation between the Bessel–Thomson filter
cut-off and its asymptote, we shall use Eq. 6.3.10 for Butterworth systems to find the
frequency fa at which the 5-, 7-, and 9-pole Butterworth filter would exhibit the
required A = 2^12 attenuation. By using fh = fN²/fa we can then find the Butterworth
cut-off frequencies. Then, by using the modified Bessel–Thomson poles (those that
have the same asymptote as the Butterworth system of comparable order), we can find
the Bessel–Thomson cut-off frequencies which satisfy the no-aliasing requirement.
fh5=fN/10^(log10(A^2-1)/(2*5));
fh7=fN/10^(log10(A^2-1)/(2*7));
fh9=fN/10^(log10(A^2-1)/(2*9));
disp(['fh5 = ', num2str(fh5/M), ' MHz'])  % M is assumed to be 1e6 (MHz scaling)
disp(['fh7 = ', num2str(fh7/M), ' MHz'])
disp(['fh9 = ', num2str(fh9/M), ' MHz'])
% the disp commands return the following values :
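A Python transcription of the same three expressions (assuming fN = 10 MHz and A = 2^12, as above) evaluates them to approximately 1.89, 3.05 and 3.97 MHz; these are our own computed values, shown here since the original printout is not reproduced:

```python
import math

# Python transcription of the three Matlab expressions above.
fN = 10e6                   # Nyquist frequency, 10 MHz
A = 2 ** 12                 # required attenuation factor

def fh(n):
    # Butterworth cut-off for an n-pole system with attenuation A at fN
    return fN / 10 ** (math.log10(A ** 2 - 1) / (2 * n))

fh5, fh7, fh9 = fh(5), fh(7), fh(9)
# approximately 1.89 MHz, 3.05 MHz and 3.97 MHz, respectively
```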
We now find the poles and the system bandwidth of the 5-, 7-, and 9-pole
Bessel–Thomson systems, which have their responses normalized to the same
asymptotes as the above Butterworth systems of equal order:
[Plot: attenuation (dB) vs. frequency (MHz); −3 dB cut-off frequencies:
f5 = 1.16 MHz, f7 = 1.66 MHz, f9 = 1.97 MHz; ADC resolution line at −72 dB.]
Fig. 7.2.7: Magnitude vs. frequency of Bessel–Thomson 5-, 7-, and 9-pole
systems, normalized to an attenuation of 2^−12 (−72 dB) at fN (10 MHz).
Fig. 7.2.7 shows the frequency responses, calculated to have the same
attenuation, equal to the relative ADC resolution level of −72 dB, at the Nyquist
frequency. We now need their approximate −3 dB cut-off frequencies:
m=abs(M7-3.0103);
x7=find(m==min(m));
m=abs(M9-3.0103);
x9=find(m==min(m));
Note that these values are much lower than the cut-off frequencies of the
asymptotes, owing to the more gradual roll-off of Bessel–Thomson systems. Also,
note that a greater improvement in performance is achieved by increasing the system
order from 5 to 7 than from 7 to 9. We would like to have a confirmation of this fact
from the step responses (later, we shall also see how these step responses would look
when sampled at the actual sampling time intervals).
[Plot: normalized step responses vs. t (µs); rise times:
Tr5 = 302 ns, Tr7 = 213 ns, Tr9 = 178 ns.]
Fig. 7.2.8: Step responses of the 5-, 7-, and 9-pole Bessel–Thomson systems, having equal
attenuation at the Nyquist frequency. The rise times are calculated from the number of
samples between the 10% and 90% of the final value of the normalized amplitude.
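The 10%/90% sample-counting method generalizes to any sampled step response; here is a Python sketch (with an assumed first-order RC response, for which the exact rise time is τ·ln 9, about 2.2 τ):

```python
import math

# Estimate the 10%-90% rise time by counting samples between threshold
# crossings, as done for Fig. 7.2.8. The response here is an assumed
# first-order RC step; the values of tau and dt are example choices.
tau = 1.0e-6                        # assumed time constant, 1 us
dt = 1.0e-9                         # assumed sample interval, 1 ns
g = [1.0 - math.exp(-n * dt / tau) for n in range(10000)]
n10 = next(n for n, v in enumerate(g) if v >= 0.1)
n90 = next(n for n, v in enumerate(g) if v >= 0.9)
tr = (n90 - n10) * dt               # estimated rise time
# tr agrees with the exact tau*ln(9) to within one sample interval
```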
We see that in this case the improvement from order 7 to order 9 is not high
enough to justify the added circuit complexity and cost of one more amplifying stage.
So let us say we are temporarily satisfied with the 7-pole filter system.
However, its 1.66 MHz bandwidth for a 12 bit ADC sampling at 20 MHz is simply
not good enough. Even the 1.97 MHz bandwidth of the 9-pole system is rather low.
The question is whether we can find a way around the limitations imposed by the
anti-aliasing requirements.
Most ADC recording systems do not have to show the sampled signal in real
time. To the human eye a screen refresh rate of 10 to 20 frames per second is fast
enough for most purposes. Also, many systems are intentionally made to record and
accumulate large amounts of data to be reviewed later. So on most occasions there is
plenty of time available to implement some sort of signal post-processing.
We are going to show how a digital filter can be combined with the analog
anti-aliasing filter to expand the system bandwidth beyond the aliasing limit without
increasing the sampling frequency.
Suppose we could implement some form of digital filtering which would
suppress the alias spectrum below the ADC resolution; we then ask ourselves
what would be the minimum required pass band attenuation of such a filter. The
answer is simple: the filter attenuation must follow the inverse of the alias spectrum
envelope. But if we were to allow the spectrum around the sampling frequency to
alias, our digital filter would need to extend its attenuation characteristic back to DC.
Certainly this is neither practical nor desirable. Therefore, since fs = 2 fN, our
bandwidth improvement factor, let us call it B, must be lower than 2.
So let us increase the filter cut-off by B = 1.73; the input spectrum would
then contain frequencies only up to B·fN, which would alias back down to fs − B·fN,
in this case (2 − 1.73) fN = 0.27 fN. This frequency is high enough to allow the realization
of a not too demanding digital filter.

Let us now study the shape of the alias spectrum which would result from
taking our original 7-pole analog filter, denoted by F7o, and pushing it up by the chosen
factor B to F7b, as shown in Fig. 7.2.9. The spectrum SA between fN and B·fN is
going to be aliased below the Nyquist frequency into SB.
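The band-edge arithmetic, spelled out numerically (a Python sketch, normalized to fN = 1):

```python
# Band-edge arithmetic for the shifted filter (normalized to fN = 1).
B = 1.73                    # bandwidth improvement factor
fN = 1.0                    # Nyquist frequency
fs = 2.0 * fN               # sampling frequency
top = B * fN                # highest input frequency after the shift
alias_bottom = fs - top     # the band [fN, B*fN] folds down to here
# alias_bottom == 0.27*fN, as stated in the text
```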
Fig. 7.2.9: Alias spectrum of a 7-pole filter with a higher cut-off frequency. F7o is
our original 7-pole Bessel–Thomson analog filter, which crosses the 12-bit ADC
resolution level of −72 dB at exactly the Nyquist frequency, fN = fs/2 = 10 MHz.
This guarantees freedom from aliasing, but the bandwidth is rather low. If we move
it upwards by a factor B = 1.73 to F7b, the spectrum SA will alias into SB. Note the
alias spectrum inversion: fN remains in its place, whilst B·fN is aliased to fs − B·fN.
Note also that the alias spectral envelope has changed in comparison with the
original: in the log–log scale plot a linearly falling spectral envelope becomes curved
when aliased. This change of the spectral envelope is important, since it will allow
us to use a relatively simple filter response shape to suppress the aliased spectrum
below the ADC resolution.
Note that in the log frequency scale the aliased spectrum envelope is not
linear, even if the original one is (as defined by the attenuation characteristic of the
analog filter).
If we flip the spectrum SB up, as in Fig. 7.2.10, the resulting spectral envelope,
denoted by Frq, represents the minimal attenuation requirement of a digital filter
which would restore freedom from aliasing.
Fig. 7.2.10: If we invert the alias spectrum SB, the envelope of the resulting
spectrum Frq represents the minimum attenuation requirement that a digital filter
should have in order to suppress the aliased spectrum below the ADC resolution.
The following procedure shows how to calculate and plot the various elements
of Fig. 7.2.9 and Fig. 7.2.10, and Frq in particular, starting from the previously
calculated 7-pole Bessel–Thomson system magnitude F7o:
As can be seen in Fig. 7.2.10, the required minimum attenuation Frq is broad
and smooth, so we can assume that it can be approximated by a digital filter of a
relatively low order; e.g., if the analog filter has 7 poles, the digital one could have
only 6 poles. The combined system would then effectively be a 13-pole one. Of course,
the digital filter reduces the combined system's cut-off frequency, but it would still be
higher than in the original, non-shifted, analog-only version. However, the main
problem is that the cascade of two separately optimized filters has a non-optimal
response, and the shape of the transient suffers most. This can be solved by simply
starting from a higher order system, say, a 13-pole Bessel–Thomson. Then we assign
7 of the 13 poles to the analogue filter and 6 poles to the digital one.
The 6 poles of the digital filter must be transformed into appropriate sampling
time delays and amplitude coefficients. This can be done either with 'z-transform'
mapping, or simply by calculating the filter's impulse response and using it for convolution
with the sampled input signal, as we shall do here.
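The impulse-response-as-coefficients idea can be sketched in Python as follows (the coefficient values are invented for illustration and are not the actual 6-pole Bessel–Thomson impulse response):

```python
# Apply a digital filter by convolving the sampled signal with the first
# few samples of its impulse response (truncated to the input length, as
# an ADC post-processing step would do).
def fir_filter(coeffs, samples):
    y = []
    for n in range(len(samples)):
        acc = 0.0
        for m, c in enumerate(coeffs):
            if n - m >= 0:
                acc += c * samples[n - m]
        y.append(acc)
    return y

coeffs = [0.0, 0.1, 0.3, 0.3, 0.2, 0.1]   # hypothetical values, unit sum
step = [1.0] * 12                         # sampled unit step
out = fir_filter(coeffs, step)
# out settles to the coefficient sum (1.0) once all taps overlap the input
```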
But note that since the 7 poles of the analog filter will now be taken from a
13-pole system, they will be different from those of the 7-pole system discussed so far
(see a comparison of the poles in Fig. 7.2.11). Although the frequency response will be
different, the shape of the alias band will be similar, since the final slope is the same
in both cases. Nevertheless, we must repeat the calculations with the new poles.
The question is by which criterion we choose the 7 poles from the 13. From
Frq in Fig. 7.2.10 we can see that the digital filter should not cut sharply, but rather
gradually. Such a response could be achieved if we were to reserve the poles with the
lower imaginary part for the digital filter and assign those with a high imaginary part
to the analog filter. But then the analog filter step response would overshoot and ring,
compromising the dynamic range. Thus, the correct design strategy is to assign the
real pole and every other complex conjugate pole pair to the analog filter, as shown below.
Fig. 7.2.11: The 13-pole mixed mode system, the analog part marked by ‘×’, the
digital by ‘+’; compared with the original 7-pole analog only filter, marked by ‘*’.
Note the difference in pattern size (proportional to bandwidth!).
The values of the mixed mode filter poles for the analog and digital part are:
The result of the above plot operation (semilogx) can be seen in Fig. 7.2.12,
where we have highlighted the spectrum under the analog filter F7a beyond the
Nyquist frequency, its alias, and the inverted alias, which represents the minimum
required attenuation Frq of the digital filter. Note how the 6-pole digital filter's
response F6d just touches the Frq. It is probably just a coincidence that the bandwidth
increase factor B chosen for the analog filter is equal to √3 (we have arrived at this
value by repeating the above calculation several times, adjusting B on each run). This
process can easily be automated by iteratively comparing the samples of Frq and F6d,
and increasing or decreasing the factor B accordingly.
Fig. 7.2.12: The bandwidth improvement achieved with the 13-pole mixed mode filter.
F7o is the original 7-pole analog-only filter response, as in Fig. 7.2.9. The new
analog filter response F7a, which uses 7 of the 13 poles as shown in Fig. 7.2.11, was
first calculated to have the same −72 dB attenuation at the Nyquist frequency fN (as
F7o), and then all the 13 poles were increased by the same factor B = 1.73 as before.
The resulting F7a response generates the alias spectrum Sb from its source Sa. The
envelope of the inverted alias spectrum Frq sets the upper limit for the digital filter's
response F6d, required for effective alias suppression. The response F13 is the total
analog + digital 13-pole system, which crosses the ADC resolution limit at fN and
suppresses the alias band below the ADC resolution level, which was the main goal. In
this way the system's cut-off frequency has increased from 1.66 MHz for F7o to 2.2
MHz for F13. Thus, a bandwidth improvement of about 1.32 is achieved (not very
much, but still significant; note that there are ways of improving this further!).
As expected, the digital filter has reduced the system bandwidth below that of
the analog filter; however, the analog + digital system's response F13 has a cut-off
frequency fh13 well above the fh7 of the original analog-only 7-pole response F7o:
% plot the step responses with the 0.1 and 0.9 reference levels :
plot( t*M, g7o, '-k',...
t*M, g13, '-r',...
t([5, 25])*M, [0.1, 0.1], '-k',...
t([15, 45])*M, [0.9, 0.9], '-k' )
xlabel('t [us]')
[Plot: step responses g7o and g13 vs. t (µs); rise times:
tr7 = 210 ns, tr13 = 155 ns.]
Fig. 7.2.13: Step response comparison of the original 7-pole analog-only filter g7o and
the improved 13-pole mixed mode (7-pole analog + 6-pole digital) Bessel–Thomson
filter g13. Note the shorter rise time and longer time delay of g13. The resulting
rise time is also better than that of the 9-pole analog-only filter (see Fig. 7.2.8).
Fig. 7.2.14: Envelope delay comparison of the original 7-pole analog-only
filter τe7 and the mixed mode 13-pole Bessel–Thomson filter τe13. The 7-pole
analog-only filter is linear to a little over 2 MHz, while the 13-pole mixed
mode filter is linear well into the stop band, up to almost 5 MHz, more than
doubling the useful linear phase bandwidth.
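Envelope delay curves like those of Fig. 7.2.14 are obtained numerically as τe = −dφ/dω; a minimal Python sketch, using an assumed single-pole low-pass for which the exact DC value is 1/ω0, illustrates the finite-difference estimate:

```python
import math

# Finite-difference estimate of the envelope (group) delay
# tau_e = -d(phi)/d(omega) for a single-pole low-pass with cut-off w0
# (assumed example system, not one of the book's filters).
w0 = 2 * math.pi * 1.0e6            # 1 MHz pole

def phase(w):
    # phase of H(jw) = 1 / (1 + jw/w0)
    return -math.atan(w / w0)

dw = 1.0e-6 * w0                    # small frequency step
tau_dc = -(phase(dw) - phase(0.0)) / dw
# tau_dc is close to the exact DC value 1/w0 (about 159 ns here)
```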
The reader is encouraged to repeat all the above calculations also for the 5-
and 9-pole case to examine the dependence of the bandwidth improvement factor on
the analog filter’s slope.
As we have seen, the bandwidth improvement factor is very sensitive to the
steepness of the attenuation slope, so the 9-pole analog filter assisted by an 8-pole
equivalent digital filter may be found to be attractive now, extending the bandwidth
further.
The plot can be seen in Fig. 7.2.15. The dot markers indicate the first 15 time
samples of the analog filter step response, the digital filter impulse response and the
composite mixed mode filter step response.
Fig. 7.2.15: Convolution as digital filtering for a unit step input. The 7-pole analog
filter step response ga7 is sent to the 6-pole equivalent digital filter, which has the
impulse response fd6. Their convolution results in the step response g13 of the
effectively 13-pole mixed-mode composite filter. The marker dots represent the
actual time quantization as would result from the ADC sampling at 20 MHz. The
impulse response of the digital filter is almost zero after only 11 samples, so the
digital filter needs only the first 11 samples as its coefficients for convolution. Note
that even if the value of the first coefficient is zero, it must nevertheless be used in
order to have the correct response.
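The mixed-mode response of Fig. 7.2.15 is produced by a plain discrete convolution. A minimal Python sketch of the operation follows; the waveforms are placeholders, since the actual ga7 samples and the 11 fd6 coefficients would come from the design routine:

```python
import numpy as np

fs = 20e6                            # ADC sampling rate, 20 MHz
t = np.arange(40) / fs

# Placeholder for the sampled 7-pole analog step response ga7:
ga7 = 1.0 - np.exp(-t * 2e7)

# Placeholder for the digital filter impulse response fd6, truncated to
# its first 11 samples; the coefficients must sum to 1 so that the
# composite step settles to the same final value as ga7.
fd6 = np.ones(11) / 11.0

# Composite mixed-mode step response (first len(ga7) samples):
g13 = np.convolve(ga7, fd6)[:len(ga7)]
```

As in the figure, g13 settles to the same final value as ga7 but rises later, reproducing the added delay of the digital section.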
Fig. 7.2.16: An example of a similar mixed mode filter, but with the analog
filter using two pairs of imaginary zeros, one pair at the sampling frequency
and the other pair at the Nyquist frequency.
Fig. 7.2.17: By adding the zeros the analog filter frequency response is
modified from Fa7 to Fa7z. Fd6 is the digital filter response and F13x is the
composite mixed-mode response.
Fig. 7.2.18: The spectrum Sa is aliased into Sb. The difference between the ADC
resolution and Sb (both in dB) gives Sr, the envelope of which determines the
minimum attenuation required by Fd6 to suppress Sb below the ADC resolution.
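The folding of Sa into Sb follows directly from the sampling theorem: any component above the Nyquist frequency fN = fs/2 reappears mirrored below it. A small helper (hypothetical, not from the book) computes the apparent frequency:

```python
def alias_frequency(f, fs):
    """Apparent (baseband) frequency of a component at f after
    sampling at rate fs; spectra fold about the Nyquist frequency fs/2."""
    f = f % fs                 # reduce to the first replica
    return min(f, fs - f)      # fold about fs/2
```

With fs = 20 MHz, a residual component at 12 MHz aliases to 8 MHz and one at 21 MHz to 1 MHz, which is why the analog filter's stop-band attenuation above fN, together with the ADC resolution, sets the required Fd6 attenuation.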
Fig. 7.2.19 shows the time domain responses. Note the influence of zeros on
the analog filter response.
Fig. 7.2.19: The step response of the 7-pole analog filter ga7 is modified by the
4 zeros into ga7z. Because the alias spectrum is narrower, the bandwidth can be
increased. The mixed-mode system step response g13 has a rise time of less than
150 ns (in contrast with 180 ns for the case with no zeros).
strays. The figures below show the schematic diagrams of a 3-pole and a 2-pole stage.
We shall use a 3+2+2 cascade to implement the required 7-pole filter.
Fig. 7.2.20: The ‘Multiple–Feedback’ 3-pole (MFB-3) low pass filter configuration
Fig. 7.2.21: The ‘Multiple–Feedback’ 2-pole (MFB-2) low pass filter configuration
Our task is to find the transfer functions of these two circuits and then relate
the time constants to the poles so that the required component values can be found.
The 3rd-order stage will be the first in the cascade, so let us calculate its
components first. For the complete analysis please refer to Appendix 7.1; here we
write only the resulting transfer function, which in the general case is:
=" =# =$
J a=b œ E!
a= =" ba= =# ba= =$ b
=" =# =$
œ E! (7.2.2)
=$ =# a=" =# =$ b = a=" =# =" =$ =# =$ b =" =# =$
" V% " V$ V$
a=" =# =$ b œ Œ" Œ" (7.2.3)
G$ V% V$ G# V$ V# V"
" " V$
" aV$ V% bŒ
V# V" V#
=" = # = " = $ = # = $ œ (7.2.4)
G# G$ V$ V% G1 G 2 V 1 V 3
V$ V% "
=" =# =$ œ Œ" (7.2.5)
V# V$ G" G # G $ V " V $ V %
By examining the coefficients and the gain we note that we can optimize the
component values by making the resistors R1, R3, and R4 equal:

R1 = R3 = R4 = R   (7.2.7)
The coefficients and the gain equations now take the following form:

−(s1 + s2 + s3) = 2/(C3 R) + [1/(C2 R)]·(2 + R/R2)   (7.2.8)

s1 s2 + s1 s3 + s2 s3 = [1/(C2 C3 R²)]·(3 + 2R/R2) + [1/(C1 C2 R²)]·(R/R2)   (7.2.9)

−s1 s2 s3 = (2R/R2)·1/(C1 C2 C3 R³)   (7.2.10)

A0 = R2/(2R)   (7.2.11)
To simplify our expressions we shall substitute the term R/R2 by:

M = R/R2 = 1/(2 A0)   (7.2.12)
so the coefficients can be written as:

K2 = −(s1 + s2 + s3) = 2/(C3 R) + (2 + M)/(C2 R)   (7.2.13)

K1 = s1 s2 + s1 s3 + s2 s3 = (3 + 2M)/(C2 C3 R²) + M/(C1 C2 R²)   (7.2.14)

K0 = −s1 s2 s3 = 2M/(C1 C2 C3 R³)   (7.2.15)
Next we can normalize the resistance ratios and the RC time constants in
order to eliminate the unnecessary variables. After the equations are solved and the
component ratios are found, we shall denormalize the component values to the actual
cut-off frequency, as required by the poles. We can thus set the normalization factor:

N = 1/R   (7.2.16)

so instead of R we have the normalized resistance RN:

RN = N R = 1   (7.2.17)
K1 = (3 + 2M)/(Cb Cc) + M/(Ca Cb)   (7.2.20)

K0 = 2M/(Ca Cb Cc)   (7.2.21)
We can now express, say, Ca from Eq. 7.2.21:

Ca = 2M/(K0 Cb Cc)   (7.2.22)

which we insert into Eq. 7.2.20:

K1 = (3 + 2M)/(Cb Cc) + K0 Cc/2   (7.2.23)

From this we express Cb:

Cb = 2(3 + 2M) / (2 K1 Cc − K0 Cc²)   (7.2.24)
q = 2(3 + 2M) K2 / [(2 + M) K0]   (7.2.28)

r = 4(3 + 2M) / [(2 + M) K0]   (7.2.29)
The general real solution of the third-order equation has already been written in
Appendix 2.1 (7.2.31).
By inserting the poles s1, s2, and s3 into the expressions for K0, K1, and K2,
and the expression for the gain into M, and then using it all in p, q, and r, we finally
obtain the value of Cc. The explicit expression would be too long to write here and,
anyway, it is only a matter of simple substitution, which in a numerical algorithm
would not be necessary. With the value of Cc known we can go back to Eq. 7.2.24 to
find the value of Cb:

Cb = 2(3 + 1/A0) / [2(s1 s2 + s1 s3 + s2 s3) Cc + s1 s2 s3 Cc²]   (7.2.32)

Ca = −(1/A0) · 1/(s1 s2 s3 Cb Cc)   (7.2.33)
Now that the normalized capacitor values are known, the denormalization
process makes use of the circuit’s cut-off frequency ω3h (which, to remind you, is
different from the 7-pole filter cut-off ω7h, as it is from the total system cut-off ω13h);
ω3h relates to K0 and the poles as:

From ω3h we can denormalize the values of R and the capacitors to acquire
some suitable values, such that the opamp of our choice can easily drive its own
feedback impedance and the input impedance of the following stage. For high system
bandwidth it is advisable to choose R as low as possible, usually in the range
between 150 and 750 Ω. Let us say that we can set R = 220 Ω. This means that:

N = 1/220   (7.2.35)
R2 = 2 R A0   (7.2.40)

By inserting the first three poles from p7a for s1, s2, and s3, we obtain the
following component values:
% The first three poles of p7a are assigned to the MFB-3 circuit:
Ao = 100^(1/3) = 4.642
The derivation for the two second-order stages, which are both of the form
shown in Fig. 7.2.21, is also reported in detail in Appendix 7.1. Again, here we give
only the resulting transfer function:

vo/vs = −[1/(R1 R2 C1 C2)] / [s² + s·(1/C2)·(1/R1 + 1/R2 + 1/R3) + 1/(R2 R3 C1 C2)]   (7.2.41)
" "
a=" =# b œ Œ# (7.2.45)
V G2 E0
a=" =# b "
G1 œ (7.2.47)
=" = # V a#E0 "b
a=" =# b "
Ga œ (7.2.50)
=" =# #E0 "
Let us say that here, too, we use the same values for V and R , as before
(V œ ##! H, R œ "Î#!!; note however that in general we can use a different value if
for whatever reason we find it more suitable — one such reason can be the preferred
values of capacitors). Thus:
G" œ R Ga G# œ R Gb (7.2.51)
From p7a we use the 4th and the 5th pole for the first MFB-2 stage, and the
6th and 7th pole for the second MFB-2 stage. With those we obtain the following
component values:
We can now finally draw the complete circuit schematic diagram with
component values:
Fig. 7.2.22: The realization of the 7-pole analog filter for the ADC discussed in the
aliasing suppression example. The input signal is separated from the filter stages by a
high impedance (1 MΩ, 12 pF) unity gain buffer (UGB). The first amplifier stage A1,
with a gain of 4.64, is combined with the third-order filter; its components are
calculated from the first three poles taken from the 7-pole analog part of the 13-pole
mixed-mode system. The following two second-order filter stages A2 and A3 have the
same gain as the first stage, whilst their components are calculated from the next two
complex conjugate pairs of poles from the same group of seven. All three amplifying stages
are inverting, so the final inversion must be done either at the signal display, in the digital
filter routine, or simply by taking the 2’s complement of the ADC’s digital word.
More often than not, multi-stage filters will have awkward component values,
far from the closest preferred standard E12 or E24 values. In addition, the
greater the number of stages, the larger will be the ratio of maximal to minimal
values, forcing the use of components with very tight tolerances.

Fortunately we are not obliged to use exactly the calculated values; indeed, we
are free to adjust them, paying attention that each stage keeps its time constants as
calculated. That is, the capacitors will probably be difficult to obtain in E24 values, but
resistors are easily available in E48 and even E96 values, so we might select the
closest E12 values for the capacitors and then select the resistors from, say, E48.
Considering for example the first 2-pole stage (A2), we could use 18 pF for C21
(instead of 18.5 pF); then C22 would be 210 pF (say, 180 pF and 30 pF in parallel), the
resistors R21 and R23 can be set to 226 Ω and R22 can be set to 1050 Ω.
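Snapping calculated values to preferred numbers is easy to automate. A small helper (hypothetical, not from the book; only the E12 and E24 tables are included here, though E48 can be added the same way):

```python
import math

E12 = [1.0, 1.2, 1.5, 1.8, 2.2, 2.7, 3.3, 3.9, 4.7, 5.6, 6.8, 8.2]
E24 = sorted(E12 + [1.1, 1.3, 1.6, 2.0, 2.4, 3.0, 3.6, 4.3,
                    5.1, 6.2, 7.5, 9.1])

def nearest_preferred(value, series):
    """Closest preferred value (any decade) from the given E-series."""
    decade = math.floor(math.log10(value))
    # Check the value's own decade and the next one up:
    candidates = [m * 10.0**d for d in (decade, decade + 1) for m in series]
    return min(candidates, key=lambda c: abs(c - value))
```

For example, nearest_preferred(18.5, E12) returns 18.0, the substitution made above for C21; the remaining freedom is then used to restore each stage's time constants with the finer resistor series.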
The other two stages can be changed in a similar way. It is advisable not to
depart from the initial values too much, in order to keep the impedances close to the
driving capability of the amplifiers (remember that each amplifier has to drive both
the input impedance of the following stage and the impedance of its own feedback
network) and also to remain well above the influence of strays (low value capacitances
and the amplifier inverting inputs are the most sensitive in this respect).
We have shown only a small part of the vast possibilities offered by the
application of numerical routines, either for the purpose of system design or for
implementing them within the system’s digital signal processing.
Some additional information and a few examples can be found on the CD
included with the book, in the form of ‘*.M’ files to be run within Matlab. Many of
those files were used to produce the various figures in the book and can be used as a
starting point for further routine development.
One of the problems of writing a book about a fast developing subject is that
by the time the writers have finished the editing, several years might have passed and
the book is no longer at the forefront of the technology’s development.
We have tried to prevent the book from becoming outdated too soon by
including all the necessary theory and covering the fundamental design principles
which are independent of technology, and thus of lasting value. We have also tried to
cover some of the most important steps in the development from a historical point of
view, to show how those theoretical concepts and design principles have been applied
in the past.
Although the importance of staying alert and following the new developments
and ideas is as high as ever, the knowledge of the theory and its past applications
helps us to identify more quickly the correct paths to follow and the aims to pursue.
References:
[7.1] J.N. Little and C.B. Moler, The MathWorks, Inc.:
PC-MATLAB For Students (containing disks with the Matlab program),
Prentice-Hall, 1989;
MATLAB-V For Students (containing a CD with the Matlab program),
Prentice-Hall, 1998.
Web link: <http://www.mathworks.com/>
See also the books on electronics + Matlab at:
<http://www.mathworks.com/matlabcentral/link_exchange/MATLAB/Books/Electronics/>
See also:
<http://en.wikipedia.org/wiki/Nyquist-Shannon_interpolation_formula>
[7.4] K. Azadet, Linear-phase, continuous-time video filters based on mixed A/D structure,
ECCTD 1993 – Circuit Theory and Design, pp. 73–78, Elsevier Scientific Publishing