Wideband Amplifiers

by

PETER STARIČ
Jožef Stefan Institute,
Ljubljana, Slovenia

and

ERIK MARGAN
Jožef Stefan Institute,
Ljubljana, Slovenia
A C.I.P. Catalogue record for this book is available from the Library of Congress.

ISBN-10 0-387-28340-4 (HB)


ISBN-13 978-0-387-28340-1 (HB)
ISBN-10 0-387-28341-2 (e-book)
ISBN-13 978-0-387-28341-8 (e-book)

Published by Springer,
P.O. Box 17, 3300 AA Dordrecht, The Netherlands.

www.springer.com

Printed on acid-free paper

All Rights Reserved


© 2006 Springer
No part of this work may be reproduced, stored in a retrieval system, or transmitted
in any form or by any means, electronic, mechanical, photocopying, microfilming, recording
or otherwise, without written permission from the Publisher, with the exception
of any material supplied specifically for the purpose of being entered
and executed on a computer system, for exclusive use by the purchaser of the work.

Printed in the Netherlands.


We dedicate this book to all our friends and colleagues in the art of electronics.
Table of Contents

Acknowledgments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix

Foreword . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi

Release Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiii

Part 1: The Laplace Transform


Contents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5

Part 2: Inductive Peaking Circuits


Contents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95

Part 3: Wideband Amplifier Stages With Semiconductor Devices

Contents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 209
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 213

Part 4: Cascading Amplifier Stages, Selection of Poles


Contents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 307
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 311

Part 5: System Synthesis and Integration


Contents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 379
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 383

Part 6: Computer Algorithms for Analysis and Synthesis of Amplifier–Filter Systems

Contents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 513

Part 7: Algorithm Application Examples


Contents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 579
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 581

Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 625
P.Starič, E.Margan Wideband Amplifiers

Acknowledgments

The authors are grateful to John Addis, Carl Battjes, Dennis Feucht, Bruce
Hoffer and Bob Ross, all former employees of Tektronix, Inc., for allowing us to use
their class notes, ideas, and publications, and for their help when we had run into
problems concerning some specific circuits.
We are also thankful to Prof. Ivan Vidav of the Faculty of Mathematics and
Physics in Ljubljana for his help in reviewing Part 1, and to Csaba Szathmary, a
former employee of EMG, Budapest, for allowing us to use some of his measurement
results in Part 5.
However, if, in spite of meticulously reviewing the text, we have overlooked
some errors, this, of course, is our own responsibility alone; we shall be grateful to
everyone for bringing such errors to our attention, so that they can be corrected in the
next edition. To report the errors please use one of the e-mail addresses below.

Peter Starič & Erik Margan

 peter.staric@guest.arnes.si
 erik.margan@ijs.si


Foreword

With the exception of the September 11 tragedy, the year 2001 was
relatively normal and uneventful: remember, this should have been the year of
Clarke and Kubrick’s Space Odyssey, the mission to Jupiter; it should have been the year
of the HAL-9000 computer.
Today, the Personal Computer is as ubiquitous and omnipresent as was HAL
on the Discovery spaceship. And the rate of technology development and market
growth in the electronics industry still follows the famous ‘Moore’s Law’, almost four
decades after it was first formulated: in 1965, Gordon Moore of Intel Corporation
predicted the doubling of the number of transistors on a chip every 2 years, corrected
to 18 months in 1967; at that time, the landing on the Moon was in full preparation.
Curiously enough, today no one cares to go to the Moon again, let alone
Jupiter. And, in spite of all the effort in digital engineering, we still do not have
anything close to 0.1% of the HAL capacity (fortunately?!). Whilst there are many
research labs striving to put artificial intelligence into a computer, there are also
rumors that this has already happened (with Windows-95, of course!).
In the early 1990s it was felt that digital electronics would eventually render
analog systems obsolete. This never happened. Not only is the analog sector as vital as
ever, job market demand is expanding in all fields, from high-speed
measurement instrumentation and data acquisition, telecommunications and radio
frequency engineering, high-quality audio and video, to grounding and shielding,
electromagnetic interference suppression and low-noise printed circuit board design,
to name a few. And it looks like this demand will be going on for decades to come.
But whilst the proliferation of digital systems attracted a relatively high
number of hardware and software engineers, analog engineers are still rare birds. So,
for creative young people, who want to push the envelope, there are lots of
opportunities in the analog field.
However, analog electronics did not earn its “Black-Magic Art” attribute in
vain. If you have ever experienced the problems and frustrations from circuits found
in too many ‘cook-books’ and ‘sure-working schemes’ in electronics magazines, and
if you have become tired of performing exorcism on every circuit you build, then it is
probably time to try a different way: in our own experience, the ‘hard’ way of
doing the correct math first often turns out to be the ‘easy’ way!
Here is the book “Wideband Amplifiers”. The book is intended to serve both
as a design manual for more experienced engineers and as a learning guide for
beginners. It should help you to improve your analog designs, making better and faster
amplifier circuits, especially if time domain performance is of major concern. We
have striven to provide the complete math for every design stage. And, to make
learning a joyful experience, we explain the derivation of important math relations
from a design engineer’s point of view, in an intuitive and self-evident manner (rigorous
mathematicians might not like our approach). We have included many practical
applications, schematics, performance plots, and a number of computer routines.


However, as it is with any interesting subject, the greatest problem was never
what to include, but rather what to leave out!
In the foreword of his popular book “A Brief History of Time”, Stephen
Hawking wrote that his publisher warned him not to include any math, since the
number of readers would be halved by each formula. So he included E = mc²
and bravely cut out one half of the world’s population.
We went further: there are some 220 formulae in Part 1 alone. Estimating
the current world population at some 6×10⁹, of which 0.01% could be electronics
engineers, and assuming an average lifetime interest in the subject of, say, 30 years, if
the publisher’s rule holds, there ought to be one reader of our book once every:
2²²⁰ / (6×10⁹ × 10⁻⁴ × 30 × 365 × 24 × 3600) ≈ 3×10⁵¹ seconds
or something like 6.6×10³³ times the total age of the Universe!
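The arithmetic of this tongue-in-cheek estimate is easy to check in a few lines (sketched here in Python for convenience; the book's own computer routines, developed in Part 6, are written for Matlab):

```python
# Rough check of the one-reader-per-formula joke above.
formulae = 220                            # formulae in Part 1
halvings = 2 ** formulae                  # readership halved by each formula
engineers = 6e9 * 1e-4                    # 0.01% of ~6 billion people
career = 30 * 365 * 24 * 3600             # 30 years of interest, in seconds

seconds_per_reader = halvings / (engineers * career)
print(f"one reader every {seconds_per_reader:.1e} s")   # on the order of 3e51 s

universe_age = 13.7e9 * 365 * 24 * 3600   # roughly 4.3e17 s
print(f"= {seconds_per_reader / universe_age:.1e} x the age of the Universe")
```

The exact multiple depends on the assumed age of the Universe, but it lands in the neighborhood of the 6.6×10³³ quoted above.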

Now, whatever you might think of it, this book is not about math! It is about
getting your design to run right the first time! Be warned, though, that it will not be
enough to just read the book. To have any value, a theory must be put into practice.
Although there is no theoretical substitute for hands-on experience, this book should
help you to significantly shorten the trial-and-error phase.
We hope that by studying this book thoroughly you will find yourself at the
beginning of a wonderful journey!

Peter Starič and Erik Margan,


Ljubljana, June 2003

Important Note:
We would like to reassure the Concerned Environmentalists
that during the writing of this book, no animal or plant suffered
any harm whatsoever, either in direct or indirect form (excluding the
authors, one computer ‘mouse’ and countless computation ‘bugs’!).


Release Notes

The manuscript of this book first appeared in the spring of 1988.


Since then, the text has been revised several times, with some minor errors
corrected and figures redrawn, in particular in Part 2, where inductive peaking
networks are analyzed. Several topics have been updated to reflect the latest
developments in the field, mainly in Part 5, dealing with modern high-speed circuits.
Part 6, where a number of computer algorithms are developed, and Part 7,
containing several algorithm application examples, were also brought up to date.
This is a release version 3 of the book.
The book also comes in the Adobe Portable Document Format (PDF),
readable by the Adobe Acrobat™ Reader program (the latest version can be
downloaded free of charge from http://www.adobe.com/products/Acrobat/ ).
One of the advantages offered by the PDF format and the Reader
program is the numerous links (blue underlined text), which enable easy access to
related topics by pointing the ‘mouse’ cursor at the link and clicking the left ‘mouse’
button. Returning to the original reading position is possible by clicking the right
‘mouse’ button and selecting “Go Back” from the pop-up menu (see the AR HELP
menu). There are also numerous highlights (green underlined text) relating to
content within the same page.
The cross-file links (red underlined text) relate to the contents in different PDF
files, which open by clicking the link in the same way.
The Internet and World-Wide-Web links are in violet (dark magenta) and are
accessed by opening the default browser installed on your computer.
The book was written and edited using EXP™, the Scientific Word Processor,
version 5.0 (made by Simon L. Smith, see http://www.expswp.com/ ).
The computer algorithms developed and described in Parts 6 and 7 are intended
as tools for the amplifier design process. Written for Matlab™, the Language of
Technical Computing (The MathWorks, Inc., http://www.mathworks.com/), they have
all been revised to conform with the newer versions of Matlab (version 5.3 ‘for
Students’), while still retaining backward compatibility (to version 1) as much as
possible. The files can be found on the CD in the ‘Matlab’ folder as ‘*.M’ files, along
with information on how to install and use them within the Matlab program. We
have used Matlab to check all the calculations and draw most of the figures. Before
importing them into EXP™, the figures were finalized using Adobe Illustrator,
version 8 (see http://www.adobe.com/products/Illustrator/ ).
All circuit designs were checked using Micro-CAP ™, the Microcomputer
Circuit Analysis Program, v. 5 (Spectrum Software, http://www.spectrum-soft.com/ ).
Some of the circuits described in the book can be found on the CD in the ‘MicroCAP’
folder as ‘*.CIR’ files, which readers with access to the MicroCAP program can
import and run the simulations themselves.


Part 1

The Laplace Transform

There is nothing more practical than a good theory!


(William Thomson, Lord Kelvin)
P. Starič, E.Margan The Laplace Transform

About Transforms

The Laplace transform can be used as a powerful method of solving


linear differential equations. By using a time domain integration to obtain
the frequency domain transfer function and a frequency domain integration
to obtain the time domain response, we are relieved of a few nuisances of
differential equations, such as defining boundary conditions, not to speak of
the difficulties of solving high order systems of equations.
Although Laplace had already used integrals of exponential functions
for this purpose at the beginning of the 19th century, the method we now
attribute to him was effectively developed some 100 years later in
Heaviside’s operational calculus.
The method is applicable to a variety of physical systems (and even
some non-physical ones, too!) involving the transport, storage, and
transformation of energy, but we are going to use it in a relatively narrow
field: calculating the time domain response of amplifier–filter systems,
starting from a known frequency domain transfer function.
As for any tool, the transform tools, be they Fourier, Laplace,
Hilbert, etc., have their limitations. Since the parameters of electronic
systems can vary over the widest of ranges, it is important to be aware of
these limitations in order to use the transform tool correctly.
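As a foretaste of that workflow, here is a minimal sketch going from a known frequency domain transfer function to its time domain step response. It uses Python's sympy library purely for illustration; the book's own tools, developed in Part 6, are Matlab routines.

```python
import sympy as sp

s, t = sp.symbols('s t', positive=True)

# Normalized first-order low-pass (an RC network with RC = 1):
H = 1 / (1 + s)

# Multiply by 1/s, the Laplace transform of the unit step input,
# then invert to obtain the time domain step response:
Y = H / s
y = sp.inverse_laplace_transform(Y, s, t)
print(sp.simplify(y))   # the familiar 1 - exp(-t) response
```

The same two moves — form the frequency domain product, then apply the inverse transform — underlie all of the response calculations in the parts that follow.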


Contents ........................................................................................................................................... 1.3


List of Tables ................................................................................................................................... 1.4
List of Figures .................................................................................................................................. 1.4

Contents:
1.0 Introduction ............................................................................................................................................. 1.5
1.1 Three Different Ways of Expressing a Sinusoidal Function .................................................................. 1.7
1.2 The Fourier Series ................................................................................................................................. 1.11
1.3 The Fourier Integral .............................................................................................................................. 1.17
1.4 The Laplace Transform ......................................................................................................................... 1.23
1.5 Examples of Direct Laplace Transform ................................................................................................. 1.25
1.5.1 Example 1 ............................................................................................................................ 1.25
1.5.2 Example 2 ............................................................................................................................ 1.25
1.5.3 Example 3 ............................................................................................................................ 1.26
1.5.4 Example 4 ............................................................................................................................ 1.26
1.5.5 Example 5 ............................................................................................................................ 1.27
1.5.6 Example 6 ............................................................................................................................ 1.27
1.5.7 Example 7 ............................................................................................................................ 1.28
1.5.8 Example 8 ............................................................................................................................. 1.28
1.5.9 Example 9 ............................................................................................................................ 1.29
1.5.10 Example 10 ........................................................................................................................ 1.29
1.6 Important Properties of the Laplace Transform ..................................................................................... 1.31
1.6.1 Linearity (1) ......................................................................................................................... 1.31
1.6.2 Linearity (2) ......................................................................................................................... 1.31
1.6.3 Real Differentiation .............................................................................................................. 1.31
1.6.4 Real Integration .................................................................................................................... 1.32
1.6.5 Change of Scale ................................................................................................................... 1.34
1.6.6 Impulse δ(t) .......................................................................................................................... 1.35
1.6.7 Initial and Final Value Theorems ......................................................................................... 1.36
1.6.8 Convolution .......................................................................................................................... 1.37
1.7 Application of the ℒ transform in Network Analysis ............................................................................ 1.41
1.7.1 Inductance ............................................................................................................................ 1.41
1.7.2 Capacitance .......................................................................................................................... 1.41
1.7.3 Resistance ............................................................................................................................ 1.42
1.7.4 Resistor and capacitor in parallel ......................................................................................... 1.42
1.8 Complex Line Integrals ......................................................................................................................... 1.45
1.8.1 Example 1 ............................................................................................................................ 1.49
1.8.2 Example 2 ............................................................................................................................ 1.49
1.8.3 Example 3 ............................................................................................................................ 1.49
1.8.4 Example 4 ............................................................................................................................ 1.50
1.8.5 Example 5 ............................................................................................................................ 1.50
1.8.6 Example 6 ............................................................................................................................ 1.50
1.9 Contour Integrals ................................................................................................................................... 1.53
1.10 Cauchy’s Way of Expressing Analytic Functions ............................................................................... 1.55
1.10.1 Example 1 .......................................................................................................................... 1.58
1.10.2 Example 2 .......................................................................................................................... 1.58
1.11 Residues of Functions with Multiple Poles, the Laurent Series ........................................................... 1.61
1.11.1 Example 1 .......................................................................................................................... 1.63
1.11.2 Example 2 .......................................................................................................................... 1.63
1.12 Complex Integration Around Many Poles:
The Cauchy–Goursat Theorem ...................................................................................... 1.65
1.13 Equality of the Integrals ∫ F(s)e^(st) ds from −j∞ to +j∞ and ∮ F(s)e^(st) ds .......... 1.67
1.14 Application of the Inverse Laplace Transform .................................................................................... 1.73
1.15 Convolution ......................................................................................................................................... 1.81
Résumé of Part 1 .......................................................................................................................................... 1.85
References .................................................................................................................................................... 1.87
Appendix 1.1: Simple Poles, Complex Spaces ...................................................................................(CD) A1.1


List of Tables:
Table 1.2.1: Square Wave Fourier Components ........................................................................................... 1.15
Table 1.5.1: Ten Laplace Transform Examples ............................................................................................ 1.30
Table 1.6.1: Laplace Transform Properties .................................................................................................. 1.39
Table 1.8.1: Differences Between Real and Complex Line Integrals ........................................................... 1.48
List of Figures:
Fig. 1.1.1: Sine wave in three ways ................................................................................................................. 1.7
Fig. 1.1.2: Amplifier overdrive harmonics ...................................................................................................... 1.9
Fig. 1.1.3: Complex phasors ........................................................................................................................... 1.9
Fig. 1.2.1: Square wave and its phasors ........................................................................................................ 1.11
Fig. 1.2.2: Square wave phasors rotating ...................................................................................................... 1.12
Fig. 1.2.3: Waveform with and without DC component ............................................................................... 1.13
Fig. 1.2.4: Integration of rotating and stationary phasors ............................................................................. 1.14
Fig. 1.2.5: Square wave signal definition ...................................................................................................... 1.14
Fig. 1.2.6: Square wave frequency spectrum ................................................................................................ 1.14
Fig. 1.2.7: Gibbs’ phenomenon .................................................................................................................... 1.16
Fig. 1.2.8: Periodic waveform example ........................................................................................................ 1.16
Fig. 1.3.1: Square wave with extended period .............................................................................................. 1.17
Fig. 1.3.2: Complex spectrum of the timely spaced square wave ................................................... 1.17
Fig. 1.3.3: Complex spectrum of the square pulse with infinite period ......................................................... 1.20
Fig. 1.3.4: Periodic and aperiodic functions ................................................................................................. 1.21
Fig. 1.4.1: The abscissa of absolute convergence ......................................................................................... 1.24
Fig. 1.5.1: Unit step function ........................................................................................................................ 1.25
Fig. 1.5.2: Unit step delayed ......................................................................................................................... 1.25
Fig. 1.5.3: Exponential function ................................................................................................................... 1.26
Fig. 1.5.4: Sine function ............................................................................................................................... 1.26
Fig. 1.5.5: Cosine function ............................................................................................................................ 1.27
Fig. 1.5.6: Damped oscillations .................................................................................................................... 1.27
Fig. 1.5.7: Linear ramp function ................................................................................................................... 1.28
Fig. 1.5.8: Power function ............................................................................................................................ 1.28
Fig. 1.5.9: Composite linear and exponential function ................................................................................. 1.30
Fig. 1.5.10: Composite power and exponential function .............................................................................. 1.30
Fig. 1.6.1: The Dirac impulse function ......................................................................................................... 1.35
Fig. 1.7.1: Instantaneous voltage on L, C and R .......................................................................... 1.41
Fig. 1.7.2: Step response of an RC network ................................................................................ 1.43
Fig. 1.8.1: Integral of a real inverting function ............................................................................................. 1.45
Fig. 1.8.2: Integral of a complex inverting function ..................................................................................... 1.47
Fig. 1.8.3: Different integration paths of equal result ................................................................................... 1.49
Fig. 1.8.4: Similar integration paths of different result ................................................................................. 1.49
Fig. 1.8.5: Integration paths about a pole ..................................................................................................... 1.51
Fig. 1.8.6: Integration paths near a pole ....................................................................................................... 1.51
Fig. 1.8.7: Arbitrary integration paths .......................................................................................................... 1.51
Fig. 1.8.8: Integration path encircling a pole ................................................................................................ 1.51
Fig. 1.9.1: Contour integration path around a pole ....................................................................................... 1.53
Fig. 1.9.2: Contour integration not including a pole ..................................................................................... 1.53
Fig. 1.10.1: Cauchy’s method of expressing analytical functions ................................................................. 1.55
Fig. 1.12.1: Emmentaler cheese .................................................................................................................... 1.65
Fig. 1.12.2: Integration path encircling many poles ...................................................................................... 1.65
Fig. 1.13.1: Complex line integration of a complex function ....................................................................... 1.67
Fig. 1.13.2: Integration path of Fig.1.13.1 .................................................................................................... 1.67
Fig. 1.13.3: Integral area is smaller than ML ............................................................................... 1.67
Fig. 1.13.4: Cartesian and polar representation of complex numbers ........................................................... 1.68
Fig. 1.13.5: Integration path for proving of the Laplace transform ............................................................... 1.69
Fig. 1.13.6: Integration path for proving of input functions .......................................................................... 1.71
Fig. 1.14.1: RLC circuit driven by a current step ....................................................................... 1.73
Fig. 1.14.2: RLC circuit transfer function magnitude ................................................................ 1.75
Fig. 1.14.3: RLC circuit in time domain .................................................................................... 1.79
Fig. 1.15.1: Convolution of two functions .................................................................................................... 1.82
Fig. 1.15.2: System response calculus in time and frequency domain .......................................................... 1.83


1.0 Introduction

With the advent of television and radar during the Second World War, the behavior
of wideband amplifiers in the time domain has become very important [Ref. 1.1]. In today’s
digital world this is even more the case. It is a paradox that designers and troubleshooters of
digital equipment still depend on oscilloscopes, which — at least in their fast and low level
input part — consist of analog wideband amplifiers. So the calculation of the time domain
response of wideband amplifiers has become even more important than the frequency,
phase, and time delay response.
The emphasis of this book is on the amplifier’s time domain response. Therefore a
thorough knowledge of time related calculus, explained in Part 1, is a necessary
prerequisite for understanding all the other parts of this book, where wideband amplifier
networks are discussed.
The time domain response of an amplifier can be calculated by two main methods:
The first one is based on differential equations and the second uses the inverse Laplace
transform (ℒ⁻¹ transform). The differential equation method requires the calculation of
boundary conditions, which — in the case of high order equations — means an unpleasant and
time consuming job. Another method, which also uses differential equations, is the so
called state variable calculation, in which a differential equation of order n is split into n
differential equations of the first order, in order to simplify the calculations. The state
variable method also allows the calculation of nonlinear differential equations. We will use
neither of these, for the simple reason that the Laplace transform and its inverse are based
on the system poles and zeros, which prove so useful for network calculations in the
frequency domain in the later parts of the book. So most of the data which are calculated
there is used further in the time domain analysis, thus saving a great deal of work. Also the
use of the _" transform does not require the calculation of boundary conditions, giving the
result directly in the time domain.
In using the _" transform most engineers depend on tables. Their method consists
firstly of splitting the amplifier transfer function into partial fractions and then looking for
the corresponding time domain functions in the _ transform tables. The sum of all these
functions (as derived from partial fractions) is then the result. The difficulty arises when no
corresponding function can be found in the tables, or even at an earlier stage, if the
mathematical knowledge available is insufficient to transform the partial fractions into such
a form as to correspond to the formulae in the tables.
In our opinion an amplifier designer should be self-sufficient in calculating the time
domain response of a wideband amplifier. Fortunately, this can almost always be derived
from simple rational functions and it is relatively easy to learn the _" transforms for such
cases. In Part 1 we show how this is done generally, as well as for a few simple examples.
A great deal of effort has been spent on illustrating the less clear relationships by relevant
figures. Since engineers seek to obtain a first glance insight of their subject of study, we
believe this approach will be helpful.
This part consists of four main sections. In the first, the concept of harmonic (e.g.,
sinusoidal) functions, expressed by pairs of counter-rotating complex conjugate phasors, is
explained. Then the Fourier series of periodic waveforms are discussed to obtain the
discrete spectra of periodic waveforms. This is followed by the Fourier integral to obtain
continuous spectra of non-repetitive waveforms. The convergence problem of the Fourier


integral is solved by introducing the complex frequency variable = œ 5  4=, thus coming
to direct Laplace transform (_ transform).
The second section shows some examples of the _ transforms. The results are
useful when we seek the inverse transforms of simple functions.
The third section deals with the theory of functions of complex variables, but only
to the extent that is needed for understanding the inverse Laplace transform. Here the line
and contour integrals (Cauchy integrals), the theory of residues, the Laurent series and the
_" transform of rational functions are discussed. The existence of the _" transform for
rational functions is proved by means of the Cauchy integral.
Finally, the concluding section deals with some aspects of the _" transforms and
the convolution integral. Only two standard problems of the _" transform are shown,
because all the transient response calculations (by means of the contour integration and the
theory of residues) of amplifier networks, presented in Parts 2–5, give enough examples and
help to acquire the necessary know-how.
It is probably impossible to discuss Laplace transform in a manner which would
satisfy both engineers and mathematicians. Professor Ivan Vidav said: “If we
mathematicians are satisfied, you engineers would not be, and vice versa”. Here we have
tried to achieve the best possible compromise: to satisfy electronics engineers and at the
same time not to ‘offend’ the mathematicians. But, as our late colleague, the physicist Marko
Kogoj, used to say: “Engineers never know enough of mathematics; only mathematicians
know their science to the extent which is satisfactory for an engineer, but they hardly ever
know what to do with it! ” Thus successful engineers keep improving their general
knowledge of mathematics — far beyond the text presented here.
After studying this part the readers will have enough knowledge to understand all
the time domain calculations in the subsequent parts of the book. In addition, the readers
will acquire the basic knowledge needed to do the time-domain calculations by themselves
and so become independent of _ transform tables. Of course, in order to save time, they
will undoubtedly still use the tables occasionally, or even make tables of their own. But
they will be using them with much more understanding and self-confidence, in comparison
with those who can do _" transform only via the partial fraction expansion and the tables
of basic functions.
Those readers who have already mastered the Laplace transform and its inverse,
can skip this part up to Sec. 1.14, where the _" transform of a two pole network is dealt
with. From there on we discuss the basic examples, which we use later in many parts of the
book; the content of Sec. 1.14 should be understood thoroughly. However, if the reader
notices any substantial gaps in his/her knowledge, it is better to start at the beginning.
In the last two parts of this book, Part 6 and 7, we derive a set of computer
algorithms which reduce the circuit’s time domain analysis, performance plotting and pole
layout optimization to a pure routine. However attractive this may seem, we nevertheless
recommend the study of Part 1: a good engineer must understand the tools he/she is using in
order to use them effectively.


1.1 Three Different Ways of Expressing a Sinusoidal Function

We will first show how a sinusoidal function can be expressed in three different
ways. The most common way is to express the instantaneous value + of a sinusoid of
amplitude E and angular frequency =" œ #10" , (0" œ frequency) by the well known
formula:
+ œ 0 Ð>Ñ œ E sin =" > (1.1.1)

The reason that we have appended the index ‘"’ to = will become apparent very
soon when we will discuss complex signals containing different frequency components.
The amplitude vs. time relation of this function is shown in Fig. 1.1.1a. This is the most
familiar display seen by using any sine-wave oscillator and an oscilloscope.

Fig. 1.1.1: Three different presentations of a sine wave: a) amplitude in time domain; b) a phasor
of length E, rotating with angular frequency =" ; c) two complex conjugate phasors of length EÎ#,
rotating in opposite directions with angular frequency =" , at =" > œ !; d) the same as c), except at
=" > œ 1Î%.

In electrical engineering, another presentation of a sinusoidal function is often used,
coming from the vertical axis projection of a rotating phasor E, as displayed in Fig. 1.1.1b,
for which the same Eq. 1.1.1 is valid. Here both axes are real, but one of the axes may also
be imaginary. In this case the corresponding mathematical presentation is:

0 Ð>Ñ œ E s œ E e4 =" > (1.1.2)

where E s is a complex quantity and e œ #Þ(") #)"á is the basis of natural logarithms.
However, we can also obtain the real quantity + by expressing the sinusoidal function by
two complex conjugate phasors of length EÎ# which rotate in opposite directions, as


displayed in a three-dimensional presentation in Fig. 1.1.1c. Here both phasors are shown at
= > œ ! (or = > œ #1, %1, á ). The sum of both phasors has the instantaneous value +,
which is always real. This is ensured because both phasors rotate with the same angular
frequency =" and =" , starting as shown in Fig. 1.1.1c, and therefore they are always
complex conjugate at any instant. We express + by the well-known Euler formula:

E 4 =" >
+ œ 0 Ð>Ñ œ E sin =" > œ Še  e4 =" > ‹ (1.1.3)
#4

The 4 in the denominator means that both phasors are imaginary at > œ !. The sum of both
rotating phasors is then zero, because:

E 4 =" ! E 4 =" !
0 Ð!Ñ œ e  e œ! (1.1.4)
#4 #4

Both phasors in Fig. 1.1.1c and 1.1.1d are placed on the frequency axis at such a
distance from the origin as to correspond to the frequency „ =" . Since the phasors rotate
with time, Fig. 1.1.1d, which shows them at : œ =" > œ 1Î%, helps us to acquire the idea
of a three-dimensional presentation. The understanding of these simple time-frequency
relations, presented in Fig. 1.1.1c and 1.1.1d and expressed by Eq. 1.1.3, is essential for
understanding both the Fourier transform and the Laplace transform.
Eq. 1.1.3 can be changed to the cosine function if the phasor with =" is multiplied
by 4 œ e4 1Î# and the phasor with =" by 4 œ e4 1Î# . The first multiplication means a
counter-clockwise rotation by *!° and the second a clockwise rotation by *!°. This causes
both phasors to become real at time > œ !, their sum again equaling E:

E 4 =" > E 4 =" >
0 Ð>Ñ œ e  e œ E cos =" > (1.1.5)
#4 #4

In general a sinusoidal function with a non-zero phase angle : at > œ ! is expressed as:

E
E sin Ð= >  :Ñ œ ’ e4Ð= >:Ñ  e4Ð= >:Ñ “ (1.1.6)
#4
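The phasor identities of Eq. 1.1.3 through 1.1.6 are easy to verify numerically. The following short Python sketch (our illustration, not part of the original text; the amplitude and frequency values are arbitrary) confirms that the sum of the two counter-rotating phasors of length A/2 is real at every instant and equals A sin ω₁t:

```python
import cmath
import math

# Numerical check of Eq. 1.1.3 (a sketch of ours, not from the text):
# two counter-rotating complex conjugate phasors of length A/2 sum to
# the real sinusoid A*sin(w1*t) at every instant.  A and f1 are
# arbitrary demo values.
A = 1.0
w1 = 2 * math.pi * 1e3                    # omega_1 = 2*pi*f1, f1 = 1 kHz

for t in (0.0, 0.1e-3, 0.37e-3, 0.5e-3):
    phasor_sum = (A / 2j) * (cmath.exp(1j * w1 * t) - cmath.exp(-1j * w1 * t))
    # the imaginary parts of the pair cancel at any instant ...
    assert abs(phasor_sum.imag) < 1e-12
    # ... and the real part equals the familiar form of Eq. 1.1.1
    assert abs(phasor_sum.real - A * math.sin(w1 * t)) < 1e-12
```

Replacing the factor A/2j by A/2 in the same loop reproduces the cosine form of Eq. 1.1.5.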

The need to introduce the frequency axis in Fig. 1.1.1c and 1.1.1d will become
apparent in the experiment shown in Fig. 1.1.2. Here we have a unity gain amplifier with a
poor loop gain, driven by a sinewave source with frequency =" and amplitude E" , and
loaded by the resistor VL . If the resistor’s value is too low and the amplitude of the input
signal is high the amplifier reaches its maximum output current level, and the output signal
0 Ð>Ñ becomes distorted (we have purposely kept the same notation E as in the previous
figure, rather than introducing the sign Z for voltage). The distorted output signal contains
not just the original signal with the same fundamental frequency =" , but also a third
harmonic component with the amplitude E$  E" and frequency =$ œ $ =" :

0 Ð>Ñ œ E" sin =" >  E$ sin $ =" > œ E" sin =" >  E$ sin =$ > (1.1.7)



Fig. 1.1.2: The amplifier is slightly overdriven by a pure sinusoidal signal, Zi , with a frequency ="
and amplitude Ei . The output signal Zo is distorted, and it can be represented as a sum of two
signals, Z"  Z$ . The fundamental frequency of Z" is =" and its amplitude E" is somewhat lower.
The frequency of Z$ (the third harmonic component) is =$ œ $ =" and its amplitude is E$ .

Now let us draw the output signal in the same way as we did in Fig. 1.1.1c,d. Here
we have two pairs of harmonic components: the first pair of phasors E" Î# rotating with the
fundamental frequency „ =" , and the second pair E$ Î# rotating with the third harmonic
frequency „ =$ , which are three times more distant from the origin than =" . This is shown
in Fig. 1.1.3a, where all four phasors are drawn at time > œ !. Fig. 1.1.3b shows the phasors
at time > œ 1ÎÐ% =" Ñ. Because the third harmonic phasor pair rotates with an angular
frequency three times higher, they rotate up to an angle „ $1Î% in the same time.

Fig. 1.1.3: The output signal of the amplifier in Fig. 1.1.2, expressed by two pairs of complex
conjugate phasors: a) at =" > œ !; b) at =" > œ 1Î%.

Mathematically Eq. 1.1.7, according to Fig. 1.1.2 and 1.1.3, can be expressed as:

0 Ð>Ñ œ E" sin =" >  E$ sin =$ >

E" 4 = " > E$ 4 = $ >
œ Še  e4 =" > ‹  Še  e4 =$ > ‹ (1.1.8)
#4 #4


The amplifier output obviously cannot exceed either its supply voltage or its
maximum output current. So if we keep increasing the input amplitude the amplifier will
clip the upper and lower peaks of the output waveform (some input protection, as well as
some internal signal source resistance must be assumed if we want the amplifier to survive
in these conditions), thus generating more harmonics. If the input amplitude is very high
and if the amplifier loop gain is high as well, the output voltage 0 Ð>Ñ would eventually
approach a square wave shape, such as in Fig. 1.2.1b in the following section. A true
mathematical square wave has an infinite number of harmonics; since no amplifier has an
infinite bandwidth, the number of harmonics in the output voltage of any practical amplifier
will always be finite.
In the next section we are going to examine a generalized harmonic analysis.


1.2 The Fourier Series

In the experiment shown in Fig. 1.1.2 we have composed the sinusoidal waveforms
with the amplitudes E" and E$ to get the output time-function 0 Ð>Ñ. Now, if we have a
square wave, as in Fig. 1.2.1b, we would have to deal with many more discrete frequency
components. We intend to calculate the amplitudes of them, assuming that the time
function of the square wave is known. This means a decomposition of the time function
0 Ð>Ñ into the corresponding harmonic frequency components. To do so we will examine the
Fourier series, following the French mathematician Jean Baptiste Joseph de Fourier1.
The square wave time function is periodic. A function is periodic if it acquires the
same value after its characteristic period X" œ # 1Î=" , at any instant >:
0 Ð>Ñ œ 0 Ð>  X" Ñ (1.2.1)

Consequently the same is true for 0 Ð>Ñ œ 0 Ð>  8 X" Ñ, where 8 is an integer. According to
Fourier this square wave can be expressed as a sum of harmonic components with
frequencies 08 œ „ 8ÎX" . If 8 œ " we have the fundamental frequency 0" with a phasor
E" Î#, rotating counter-clockwise. The phasor 0" with the same length E" Î# œ E" Î#
rotates clockwise and forms a complex conjugate pair with the first one. A true square wave
would have an infinite number of odd-order harmonics (all even order harmonics are zero).


Fig. 1.2.1: A square wave, as shown in b), has an infinite number of odd-order frequency
components, of which the first 4 complex-conjugate phasor pairs are drawn in a) at time
> œ Ð! „ # 8 1ÑÎ=" , where 8 is an integer representing the number of the period.

1 It is interesting that Fourier developed this method in connection with thermal engineering. As a general in
the Napoleon's army he was concerned with gun deformation by heat. He supposed that one side of a straight
metal bar is heated and then bent, joining the ends, to form a torus. Then he calculated the temperature
distribution along the circle so formed, in such a way that it would be the sum of sinusoidal functions, each
having a different amplitude and a different angular frequency.


In Fig. 1.2.1, we have drawn the complex-conjugate phasor pairs of the first 4
harmonics. Because all the phasor pairs are always complex-conjugate, the sum of any pair,
as well as their total sum, is always real. The phasors rotate with different speeds and in
opposite directions. Fig. 1.2.2a shows them at time X" Î) to help the reader’s imagination.
Although this figure looks confusing, the phasors shown have an exact inter-relationship.
Looking at the positive = axis, the phasor with the amplitude E" Î# has rotated in the
counter-clockwise direction by an angle of 1Î%. During the same interval of X" Î) the
remaining phasors have rotated: E$ Î# by $1Î%; E& Î# by &1Î%; E( Î# by (1Î%; etc. The
corresponding complex conjugate phasors on the negative = axis rotate likewise, but in the
opposite (clockwise) direction. The sum of all phasors at any instant > is the instantaneous
amplitude of the time domain function. In general, the time function with the fundamental
frequency =" is expressed as:
_
E8 4 8 =" >
0 Ð>Ñ œ " e
8œ_
#

E8 4 8 =" > E# 4 # =" > E" 4 =" >
œâ e â e  e
# # #

E" 4 =" > E# 4 # =" > E8 4 8 =" >
 E!  e  e â e â (1.2.2)
# # #



Fig. 1.2.2: As in Fig. 1.2.1, but at an instant > œ Ð1Î% „ # 8 1ÑÎ=" ; a) the spectrum,
expressed by complex conjugate phasor pairs, corresponds to the instant > œ :Î=" in b).

Note that for the square wave all the even frequency components are missing. For
other types of waveforms the even coefficients can be non-zero. In general Ei may also be
complex, thus containing some non-zero initial phase angle :i . In Eq. 1.2.2 we have also
introduced E! , the DC component, which did not exist in our special case. The meaning of
E! can be understood by examining Fig. 1.2.3a, where the so-called sawtooth waveform is
shown, with no DC component. In Fig. 1.2.3b, the waveform has a DC component of
magnitude E! .


Eq. 1.2.2 represents the complex spectrum of the function 0 Ð>Ñ, while Fig. 1.2.1
represents the corresponding most significant part of the complex spectrum of a square
wave. The next step is the calculation of the magnitudes of the rotating phasors.

Fig. 1.2.3: a) A waveform without a DC component; b) with a DC component E! .

If we want to measure safely and accurately the diameter of a wheel of a working
machine, we must first stop the machine. Something similar can be done with our Eq. 1.2.2,
except that here we can mathematically stop the rotation of any single phasor. Suppose we
have a phasor Ek Î#, rotating counter-clockwise with frequency =k œ k =" with an initial
phase angle :k (at > œ !), which is expressed as:

Ek 4Ð=k >  :k Ñ Ek 4 = k > 4 : k


e œ e e (1.2.3)
# #
Now we multiply this expression by a unit amplitude, clockwise rotating phasor e4 =k
(having the same angular frequency =k ) to cancel the e4 =k term, [Ref. 1.2]:
Ek 4 :k 4 =k > 4 =k > Ek 4 : k
e e e œ e (1.2.4)
# #
and obtain a non-rotating component which has the magnitude Ek Î# and phase angle :k at
any time. With this in mind let us attack the whole time function 0 Ð>Ñ. The duration of the
multiplication must last exactly one whole period and the corresponding expression is:
X Î#
Ek " 4 = >
œ ( 0 Ð>Ñ e k .> (1.2.5)
# X
X Î#

Since we have integrated over the whole period X in order to get the average value of that
harmonic component, the result of the integration must be divided by X , as in Eq. 1.2.5. If
there is a DC component (with = œ !) in the spectrum, the calculation of it is simply:

X Î#
"
E! œ ( 0 Ð>Ñ .> (1.2.6)
X
X Î#
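As a small numerical check of Eq. 1.2.6 (our illustration, not from the text; the offset value is arbitrary), the DC component A₀ is simply the average of f(t) over one period:

```python
import math

# Small numerical check of Eq. 1.2.6 (ours, not from the text): the DC
# component A0 is the plain average of f(t) over one period.  A sawtooth
# riding on an arbitrary offset (as in Fig. 1.2.3b) is used here.
T = 1.0
A0_true = 0.3                     # arbitrary DC offset for the demo

def sawtooth(t):
    # zero-mean sawtooth plus the DC offset
    return 2.0 * ((t / T) % 1.0) - 1.0 + A0_true

N = 100000
dt = T / N
A0 = sum(sawtooth((i + 0.5) * dt) for i in range(N)) * dt / T
assert abs(A0 - A0_true) < 1e-6
```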

To return to Eq. 1.2.5, let us explain the meaning of the integration Eq. 1.2.5 by
means of Fig. 1.2.4.


By multiplying the function 0 Ð>Ñ by e4 =k > we have stopped the rotating phasor
Ek Î#, while during the time interval of integration all the other phasors have rotated
through an angle of 8 # 1 (where 8 is an integer), including the DC phasor E! , because it is
now multiplied by e4 =k > . The result of the integration for all these rotating phasors is zero,
as indicated in Fig. 1.2.4a, while the phasor Ek Î# has stopped, integrating eventually to its
full amplitude; the integration for this phasor only is shown in Fig. 1.2.4b.
The understanding of the described effect of the multiplication 0 Ð>Ñ e4 = > is
essential for understanding the basic principles of the Fourier series, the Fourier
integral and the Laplace transform.
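This "stopping" effect of the multiplication by e^(-jkω₁t) can be demonstrated numerically. In the Python sketch below (our illustration, not from the text; the amplitudes reuse the distorted-amplifier example of Eq. 1.1.7 with arbitrary values), averaging over one full period leaves only the phasor Ak/2:

```python
import cmath
import math

# Numerical illustration of Eq. 1.2.5 (ours, not from the text): multiplying
# f(t) by exp(-j*k*w1*t) "stops" the phasor Ak/2, so averaging over one full
# period T leaves only that phasor.  The test signal reuses the distorted
# amplifier example of Eq. 1.1.7, with arbitrary amplitudes A1 and A3.
A1, A3 = 1.0, 0.25
T = 1.0
w1 = 2 * math.pi / T

def f(t):
    return A1 * math.sin(w1 * t) + A3 * math.sin(3 * w1 * t)

def coeff(k, N=20000):
    """Approximate (1/T) * integral over one period of f(t)*exp(-j*k*w1*t) dt."""
    dt = T / N
    return sum(f(i * dt) * cmath.exp(-1j * k * w1 * i * dt)
               for i in range(N)) * dt / T

# a sine of amplitude A contributes the phasor A/2j at +k*w1:
assert abs(coeff(1) - A1 / 2j) < 1e-4
assert abs(coeff(3) - A3 / 2j) < 1e-4
# every other phasor rotates a whole number of turns and averages to zero:
assert abs(coeff(2)) < 1e-4
```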


Fig. 1.2.4: a) The integral over the full period X of a rotating phasor is zero; b) the integral
over a full period X of a non-rotating phasor . Ek Î#, gives its amplitude, Ek Î#, Ðthe symbol
. stands for .>ÎX — in these figures .> p J> such that J> =k œ 1Î%Ñ. Note that a stationary
phasor retains its initial angle :k .

For us the Fourier series represents only a transitional station on the journey towards
the Laplace transform. So we will drive through it with a moderate speed “via the Main
Street”, without investigating some interesting things in the side streets. Nevertheless, it is
useful to make a practical example. Since we have started with a square wave, shown in
Fig. 1.2.5, let us calculate its complex spectrum components E8 Î#, assuming that the
square wave amplitude is E œ ".

Fig. 1.2.5: A square wave signal. Fig. 1.2.6: The frequency spectrum of a square wave,
expressed by real values (magnitudes) only.

For a single period the corresponding mathematical expression for this function is:

" for X Î#  >  !
0 Ð>Ñ œ œ
" for !  >  X Î#


According to Eq. 1.2.5 we calculate:


! X Î#
Ô ×
E8 " Ö 4 # 1 8 >ÎX
œ ( Ð"Ñ e .>  ( Ð"Ñ e4 # 1 8 >ÎX .>Ù
# X
ÕX Î# ! Ø

! X Î#
" X X
œ  e4 # 1 8 >ÎX º  e4 # 1 8 >ÎX º 
X 4#18 X Î# 4 # 1 8 !

" " e4 1 8  e4 1 8


œ Š"  e4 1 8  e4 1 8  "‹ œ Œ  "
4#18 418 #

"
œ ˆcos 1 8  "‰ (1.2.7)
418

The result is zero for 8 œ ! (the DC component E! ) and for any even 8. For any
odd 8 the value of cos 1 8 œ ", and for such cases the result is:

E8 # #4
œ œ (1.2.8)
# 418 18

The factor 4 in the numerator means that for any positive 8 (and for > œ !, #1, %1,
'1, á ) the phasor is negative and imaginary, whilst for negative 8 it is positive and
imaginary. This is evident from Fig. 1.2.1a.
Let us calculate the first eight phasors by using Eq. 1.2.8. The lengths of phasors in
Fig. 1.2.1a and 1.2.2b correspond to the values reported in Table 1.2.1. All the phasors
form complex conjugate pairs and their total sum always gives a real value.

Table 1.2.1: The first few harmonics of a square wave

„8        ! " $ & ( * "" "$
…E8 Î#    ! #4Î1 #4Î$1 #4Î&1 #4Î(1 #4Î*1 #4Î""1 #4Î"$1
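The entries of Table 1.2.1 can be cross-checked numerically. The Python sketch below (our illustration, not from the text) evaluates the averaging integral of Eq. 1.2.5 for the unit square wave and compares the result with Eq. 1.2.8, An/2 = -2j/(nπ):

```python
import cmath
import math

# Numerical cross-check of Eq. 1.2.8 and Table 1.2.1 (ours, not from the
# text): the phasors of the unit square wave are An/2 = -2j/(n*pi) for odd
# n and zero for even n (including the DC term A0).
T = 1.0
w1 = 2 * math.pi / T

def square(t):
    return 1.0 if (t % T) < T / 2 else -1.0

def phasor(n, N=20000):
    dt = T / N
    # midpoint sampling avoids hitting the discontinuities exactly
    return sum(square((i + 0.5) * dt) * cmath.exp(-1j * n * w1 * (i + 0.5) * dt)
               for i in range(N)) * dt / T

for n in (1, 3, 5, 7):
    assert abs(phasor(n) - (-2j / (n * math.pi))) < 1e-3
for n in (0, 2, 4):
    assert abs(phasor(n)) < 1e-3
```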

However, a spectrum can also be shown with real values only, e.g., as it appears on
the cathode ray tube screen of a spectrum analyzer. To obtain this, we simply sum the
corresponding complex conjugate phasor pairs (e.g., lE8 Î#l  lE8 Î#l œ E8 ) and place
them on the abscissa of a two-dimensional coordinate system, as shown in Fig. 1.2.6. Such
a non-rotating spectrum has only the positive frequency axis. Although such a presentation
of spectra is very useful in the analysis of signals containing several (or many) frequency
components, we will continue calculating with the complex spectra, because the phase
information is also important. And, of course, the Laplace transform, which is our main
goal, is based on a complex variable.
Now let us recompose the waveform using only the harmonic frequency
components from Table 1.2.1, as shown in Fig. 1.2.7a. The waveform resembles the square
wave but it has an exaggerated overshoot $ ¶ 18 % of the nominal amplitude.
The reason for the overshoot $ is that we have abruptly cut off the higher harmonic
components from a certain frequency upwards. Would this overshoot be lower if we take


more harmonics? In Fig. 1.2.7b we have increased the number of harmonic components
three times, but the overshoot remained the same. For any finite number of harmonic
components used to recompose the waveform the overshoot stays the same (only its
duration becomes shorter as the number of harmonic components is increased, as is
evident from Fig. 1.2.7a and 1.2.7b).
This is the Gibbs’ phenomenon. It tells us that we should not cut off the frequency
response of an amplifier abruptly if we do not wish to add an undesirably high overshoot to
the amplified pulse. Fortunately, real amplifiers can not have an infinitely steep high
frequency roll off, so a gradual decay of high frequency response is always ensured.
However, as we will explain in Part 2 and 4, the overshoot may increase as a result of other
effects.

T T
Fig. 1.2.7: The Gibbs’ phenomenon; a) A signal composed of the first seven harmonics of a
square wave spectrum from Table 1.2.1. The overshoot is $ ¶ 18 % of the nominal amplitude;
b) Even if we take three times more harmonics the overshoot $ is nearly equal in both cases.
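The Gibbs' phenomenon is easy to reproduce numerically. The following Python sketch (our illustration, not part of the original text) sums the odd harmonics given by Eq. 1.2.8 and shows that the overshoot stays near 18 % of the nominal amplitude regardless of how many harmonics are taken:

```python
import math

# Partial-sum reconstruction of the square wave (ours, not from the text),
# reproducing the Gibbs' phenomenon: with the odd-harmonic phasors of
# Eq. 1.2.8 the real series is (4/pi) * sum sin(n*w1*t)/n over odd n, and
# its peak stays near 1.18 -- an overshoot of about 18 % of the nominal
# amplitude -- no matter how many harmonics are summed.
def partial_sum(t, n_harmonics):
    return (4 / math.pi) * sum(math.sin((2 * k + 1) * 2 * math.pi * t) / (2 * k + 1)
                               for k in range(n_harmonics))

for n_h in (7, 21, 63):
    peak = max(partial_sum(i / 8000, n_h) for i in range(8000))
    overshoot = peak - 1.0              # relative to the nominal amplitude 1
    assert 0.10 < overshoot < 0.20      # ~0.18, nearly independent of n_h
```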

In a similar way to that for the square wave, any periodic signal of finite
amplitude and with a finite number of discontinuities within one period, can be
decomposed into its frequency components. As an example the waveform in Fig. 1.2.8
could also be decomposed, but we will not do it here. Instead in the following section we
will analyze another waveform which will allow us to generalize the method of frequency
analysis.

Fig. 1.2.8: An example of a periodic waveform (a typical flyback switching power supply), having
a finite number of discontinuities within one period. Its frequency spectrum can also be calculated
using the Fourier transform, if needed (e.g., to analyze the possibility of electromagnetic
interference at various frequencies), in the same way as we did for the square wave.


1.3 The Fourier Integral

Suppose we have a function 0 Ð>Ñ composed of square waves with the duration 7 and
repeating with a period X , as shown in Fig. 1.3.1. For this function we can also calculate the
Fourier series (the corresponding spectrum is shown in Fig. 1.3.2) in the same way as for
the continuous square wave case in the previous section.

Fig. 1.3.1: A square wave with duration 7 and period X œ &7 .

The difference between the continuous square wave spectrum and the spaced square
wave in Fig. 1.3.1 is that the integral of this function can be broken into two parts, one
comprising the length of the pulse, 7 , and the zero-valued part between two pulses of a
length X  7 . The reader can do this integration for himself, because it is fairly simple. We
will only write the result:
E8 sin# c 8 =" Ð7 Î%Ñ d
œ 4 7 (1.3.1)
# 8 =" Ð7 Î%Ñ

where =" œ #1 ÎX , assuming that the pulse amplitude is " (if the amplitude were E it
would simply multiply the right hand side of the equation). For the conditions in Fig. 1.3.1,
where X œ &7 and E œ ", the spectrum has the form shown in Fig. 1.3.2, with =7 œ #1Î7 .



Fig. 1.3.2: Complex spectrum of the waveform in Fig. 1.3.1.


An interesting question is what happens to the spectrum if we let
the period X p _. In general a function 0 Ð>Ñ can be recomposed by adding all its harmonic
components:
_
E8 4 8 = " >
0 Ð>Ñ œ " e (1.3.2)
8œ_
#

where E8 may also be complex, thus containing the initial phase angle :i . Again, as in the
previous section, each discrete harmonic component can be calculated with the integral:
X Î#
E8 " 4 8 =" >
œ ( 0 Ð>Ñ e .> (1.3.3)
# X
X Î#

For the case in Fig. 1.3.1 the integration should start at > œ ! and the integral has the form:
X
E8 " 4 8 =" >
œ ( 0 Ð>Ñ e .> (1.3.4)
# X
!

Insert this into Eq. 1.3.2:


X
_ Ô " ×
4 8 =" 7
0 Ð>Ñ œ " ( 0 Ð7 Ñ e . 7 e 4 8 =" > (1.3.5)
8œ_Õ
X Ø
!

Here we have introduced a dummy variable 7 in the integral, in order to distinguish it from
the variable > outside the brackets. Now we express the integral inside the brackets as:
X X
4 8 =" 7 #81
( 0 Ð7 Ñ e . 7 œ ( 0 Ð7 Ñ e4 # 1 8 7 ÎX . 7 œ J Š ‹ œ J Ð8 =" Ñ (1.3.6)
X
! !

Thus:
_
" " _ #1
0 Ð>Ñ œ " J Ð8 =" Ñ e4 8 =" > œ " J Ð8 =" Ñ e4 =" >
8œ_
X # 1 8œ_ X

" _
œ " =" J Ð8 =" Ñ e4 =" > (1.3.7)
# 1 8œ_

where #1ÎX œ =" . If we let X p _ then =" becomes infinitesimal, and we call it .=. Also
8=" becomes a continuous variable =. So in Eq. 1.3.7 the following changes take place:
_
_
" Ê ( =" Ê .= 8 =" Ê =
8œ_
_

With all these changes Eq. 1.3.7 is transformed into Eq. 1.3.8:

_
" 4=>
0 Ð>Ñ œ ( J Ð=Ñ e .= (1.3.8)
#1
_


Consequently Eq. 1.3.6 also changes, obtaining the form:

J Ð=Ñ œ ( 0 Ð>Ñ e4 = > .> (1.3.9)


!

In Eq. 1.3.9 J Ð=Ñ has no discrete frequency components but it forms a continuous
spectrum. Since X p _ the DC part vanishes (as it would for any pulse shape, not just
symmetrical shapes), according to Eq. 1.2.6:
X
"
E! œ lim ( 0 Ð>Ñ .> œ ! (1.3.10)
Xp _ X
!

Eq. 1.3.8 and 1.3.9 are called Fourier integrals. Under certain (usually rather
limited) conditions, which we will discuss later, it is possible to use them for the
calculation of transient phenomena. The second integral ( Eq. 1.3.9 ) is called the direct
Fourier transform, which we express in a shorter way:

Y ˜0 Ð>Ñ™ œ J Ð=Ñ (1.3.11)

The first integral (Eq. 1.3.8) represents the inverse Fourier transform and it is
usually written as:

Y " ˜J Ð=Ñ™ œ 0 Ð>Ñ (1.3.12)

In Eq. 1.3.8, J Ð=Ñ means a fixed spectrum and the factor e4 = > means the rotation of
each of the corresponding infinitely many spectral components contained in J Ð=Ñ with its angular
frequency =, which is a continuous variable. In Eq. 1.3.9, 0 Ð>Ñ means the complete time
function, containing an infinite number of rotating phasors, and the factor e4 = > means the
rotation ‘in the opposite direction’, which stops the rotation of the corresponding rotating phasor
e4 = > contained in 0 Ð>Ñ at its particular frequency =.
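The transform pair can be illustrated numerically with a standard example pair (our choice, not taken from the text): for f(t) = e^(-at), t ≥ 0 (and zero for t < 0), the direct transform gives F(ω) = 1/(a + jω). The Python sketch below integrates F(ω)e^(jωt) back over a finite band |ω| < W and approximately recovers f(t):

```python
import cmath
import math

# Numerical illustration of the transform pair Eq. 1.3.8/1.3.9, using the
# standard pair (ours, not from the text): f(t) = exp(-a*t) for t >= 0,
# zero for t < 0, with F(w) = 1/(a + j*w).  Integrating F(w)*exp(j*w*t)
# over a finite band |w| < W approximately recovers f(t).
a = 2.0
W = 2000.0                        # truncation of the infinite frequency axis

def F(w):
    return 1.0 / (a + 1j * w)     # closed-form direct transform of exp(-a*t)

def f_rec(t, N=80000):
    """Approximate (1/(2*pi)) * integral of F(w)*exp(j*w*t) dw over |w| < W."""
    dw = 2 * W / N
    s = sum(F(-W + (i + 0.5) * dw) * cmath.exp(1j * (-W + (i + 0.5) * dw) * t)
            for i in range(N)) * dw
    return s / (2 * math.pi)

for t in (0.5, 1.5):
    assert abs(f_rec(t).real - math.exp(-a * t)) < 0.02
```

The residual error comes from truncating the infinite frequency axis at W; enlarging W shrinks it.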
Let us now select a suitable time function 0 Ð>Ñ and calculate its continuous
spectrum. Since we have already calculated the spectrum of a periodic square wave, it
would be interesting to display the spectrum of a single square wave as shown in
Fig. 1.3.3b. We use Eq. 1.3.9:

_ ! 7 Î#
4 = > 4 = >
J Ð=Ñ œ ( 0 Ð>Ñ e .> œ ( Ð"Ñ e .>  ( Ð"Ñ e4 = > .> (1.3.13)
7 Î# 7 Î# !

Here we have a single square wave with a ‘period’ X from > œ 7 Î# to _.
However, we need to integrate only from > œ 7 Î# to > œ 7 Î#, because 0 Ð>Ñ is zero
outside this interval. It is important to note that at the discontinuity where > œ !, we have
started the second integral. For a function with more discontinuities, between each of them
we must write a separate integral. Thus it is obvious that the function 0 Ð>Ñ must have a
finite number of discontinuities for it to be possible to calculate its spectrum.


The result of the above integration is:

" # e4 = 7 Î#  e 4 = 7 Î#
J Ð=Ñ œ Š"  e4 = 7 Î#  e 4 = 7 Î#  "‹ œ  " 
4 = 4= #

# 4 =7 # 4 =7 % 4 =7
œ Š"  cos ‹œ Š# sin# ‹œ sin#
= # = % = %
=7
sin#
œ 4 7 % (1.3.14)
=7
%

A three-dimensional display of a spectrum, corresponding to this result, is shown in
Fig. 1.3.3a. Here the frequency scale has been altered with respect to Fig. 1.2.1a in order to
display the spectrum better.



Fig. 1.3.3: a) The frequency spectrum of a single square wave is expressed by complex
conjugate phasors. Since the phasors are infinitely many, they merge in a continuous planar
form. Also the spectrum extends to = œ „ _. The corresponding waveform is shown in b).
Note that all the even frequency components #18Î7 are missing (8 is an integer).

By comparing Fig. 1.2.1a and 1.3.3a we may draw the following conclusions:
1. Both spectra contain no even frequency components, e.g., at „ # =7 , „ % =7 ,
etc., where =7 œ # 1Î7 ;
2. In both spectra there is no DC component E! ;
3. By comparing Fig. 1.3.2 and 1.3.3 we note that the envelope of both spectra can
be expressed by Eq. 1.3.14;
4. By comparing Eq. 1.3.1 and 1.3.14 we note that the discrete frequency 8 =" from
the first equation is replaced by the continuous variable = in the second equation.
Everything else has remained the same.
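Eq. 1.3.13 and its closed-form result Eq. 1.3.14 can be cross-checked numerically. The Python sketch below (our illustration, not from the text; τ = 1 is an arbitrary choice) compares the closed form with a direct numerical evaluation of the integral:

```python
import cmath
import math

# Numerical cross-check of Eq. 1.3.13/1.3.14 (ours, not from the text):
# the closed-form spectrum of the single +/-1 pulse pair (f = -1 on
# (-tau/2, 0), +1 on (0, tau/2)) against direct numerical evaluation of
# F(w) = integral of f(t)*exp(-j*w*t) dt.
tau = 1.0

def F_closed(w):
    x = w * tau / 4
    return -1j * tau * math.sin(x) ** 2 / x

def F_numeric(w, N=20000):
    dt = tau / N
    total = 0.0 + 0.0j
    for i in range(N):
        t = -tau / 2 + (i + 0.5) * dt        # midpoints over (-tau/2, tau/2)
        total += (-1.0 if t < 0 else 1.0) * cmath.exp(-1j * w * t) * dt
    return total

for w in (0.5, 1.0, 3.0, 2 * math.pi / tau, 4 * math.pi / tau):
    assert abs(F_closed(w) - F_numeric(w)) < 1e-3
```

At ω = 4π/τ (an even component) both forms give zero, in agreement with conclusion 1 above.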


In the above example we have decomposed an aperiodic waveform (also called a
transient), expressed as 0 Ð>Ñ, into a continuous complex spectrum J Ð=Ñ. Before
discussing the functions which are suitable for the application of the Fourier integral let us
see some common periodic and non-periodic signals. A sustained tone from a trumpet we
consider to be a periodic signal, whilst a beat on a drum is a non-periodic signal (in a strict
mathematical sense, both signals are non-periodic, because the first one also started out of
silence). The transition from silence to sound we call the transient. In accordance with this
definition, of the waveforms in Fig. 1.3.4 only a) and b) show a periodic waveform, whilst
c) and d) display transients.

Fig. 1.3.4: a) and b) periodic functions, c) and d) aperiodic functions.

The question arises whether it is possible to calculate the spectra of the transients
in Fig. 1.3.4c and 1.3.4d by means of the Fourier integral, Eq. 1.3.8.
The answer is no, because the integral in Eq. 1.3.8 does not converge for either of
these two functions. The integral is also non-convergent for the simplest step signal,
which we intend to use extensively for the calculation of the step response of amplifier
networks.
This inconvenience can be avoided if we multiply the function f(t) by a suitable
convergence factor, e.g., e^{−ct}, where c > 0 and its magnitude is selected so that the
integral in Eq. 1.3.2 remains finite when t → ∞. In this way the problem is solved for
t ≥ 0. In doing so, however, the integral becomes divergent for t < 0, because for negative
time the factor e^{−ct} has a positive exponent, causing a rapid increase towards infinity.
But this, too, can be avoided if we assume that the function f(t) is zero for t < 0. In
electrical engineering and electronics we can always assume that a circuit is dead until we
switch the power on or apply a step voltage to its input, thus generating a transient. A
transform in which f(t) must be zero for t < 0 is called a unilateral transform.
For functions which are suitable for the unilateral Fourier transform the following
relation must hold [Ref. 1.3]:

    lim_{T→∞} ∫₀^T |f(t)| e^{−ct} dt < ∞                                (1.3.15)

where f(t) is a single-valued function of t and c is positive and real.


If so, we can write the direct transform:

    F(c, ω) = ∫₀^∞ [f(t) e^{−ct}] e^{−jωt} dt                           (1.3.16)

If we want this integral to converge to some finite value for t → ∞, the real constant
must satisfy c ≥ σ_a, where σ_a is the abscissa of absolute convergence. The magnitude
of σ_a depends on the nature of the function f(t): if f(t) = 1, then σ_a = 0, and if
f(t) = e^{αt}, then σ_a = α, where α > 0. By applying the convergence factor e^{−ct},
the inverse Fourier transform takes the form:

    f(t) e^{−ct} = (1/2π) ∫_{−∞}^{∞} F(c, ω) e^{jωt} dω     for t ≥ 0      (1.3.17)

Here we must add all the complex conjugate phasors with frequencies from
ω = −∞ to +∞. Although the direct Fourier transform in our case was unilateral, the
inverse transform is always bilateral. Because in Eq. 1.3.16 we deliberately introduced
the convergence factor e^{−ct}, we must take the limit c → 0 after the integral is solved,
in order to get the required F(ω).
Since our final goal is the Laplace transform we will stop the discussion of the
Fourier transform here. We will, however, return to this topic later in Part 6, where we will
discuss the solving of system transfer functions and transient responses using numerical
methods, suitable for machine computation. There we will discuss the application of the
very efficient Fast Fourier Transform (FFT) algorithm to both frequency and time domain
related problems.
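The effect of the convergence factor can be seen numerically. In this sketch (not from the book; c, ω and the truncation length are arbitrary choices) the unilateral Fourier integral of the unit step, which by itself does not converge, is damped by e^{−ct} and then settles to the finite value 1/(c + jω) predicted by Eq. 1.3.16.

```python
import cmath

def step_spectrum(c, omega, T=60.0, n=60000):
    # Truncated unilateral Fourier integral of the unit step with the
    # convergence factor e^{-ct}: integral_0^T e^{-ct} e^{-jwt} dt
    # (midpoint rule).
    dt = T / n
    return sum(cmath.exp(-(c + 1j * omega) * (k + 0.5) * dt)
               for k in range(n)) * dt

# With c > 0 the integral converges to 1/(c + jw); with c = 0 it would
# keep oscillating as T grows, which is why the factor is needed.
c, w = 0.5, 3.0
approx = step_spectrum(c, w)
exact = 1 / (c + 1j * w)
```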


1.4 The Laplace Transform

By a slight change of Eq. 1.3.16 and 1.3.17 we may arrive at a general complex
Fourier transform [Ref. 1.3]. This is done by joining the kernel e^{−jωt} and the
convergence factor e^{−ct}. In this way Eq. 1.3.16 is transformed into:

    F(c + jω) = ∫₀^∞ f(t) e^{−(c + jω)t} dt     where c ≥ σ_a           (1.4.1)

The formula for the inverse transform is derived from Eq. 1.3.17 if both sides of the
equation are multiplied by e^{ct}. In addition, the simple variable ω is now replaced by a
new one: c + jω. By doing so we obtain:

    f(t) = (1/2πj) ∫_{c−j∞}^{c+j∞} F(c + jω) e^{(c + jω)t} d(c + jω)
                                                  for t ≥ 0 and c ≥ σ_a     (1.4.2)

If in Eq. 1.4.1 and 1.4.2 the constant c becomes a real variable σ, both equations take
the form called the Laplace transform. The name is fully justified, since the French
mathematician Pierre Simon de Laplace had already introduced this transform in
1779, whilst Fourier published his transform 43 years later.
It is customary to denote the complex variable σ + jω by the single symbol s, which
we also call the complex frequency (in some, mostly mathematical, literature this variable is
denoted p). With this new variable Eq. 1.4.1 can be rewritten as:

    F(s) = ℒ{f(t)} = ∫₀^∞ f(t) e^{−st} dt                               (1.4.3)

and this is called the direct Laplace transform, or ℒ transform. It represents the complex
spectrum F(s). The above integral is valid for functions f(t) such that the factor e^{−st}
keeps the integral convergent. If we now insert the variable s in Eq. 1.4.2, we have:

    f(t) = ℒ⁻¹{F(s)} = (1/2πj) ∫_{c−j∞}^{c+j∞} F(s) e^{st} ds           (1.4.4)

This integral is called the inverse Laplace transform, or ℒ⁻¹ transform.


Like the inverse Fourier transform, Eq. 1.4.4 is a bilateral transform too. In the
integral Eq. 1.4.3 it is assumed that f(t) = 0 for t < 0; thus that equation represents the
unilateral transform. In addition, the real part of the variable s satisfies ℜ{s} = σ ≥ σ_a,
where σ_a is the abscissa of absolute convergence, as we have already discussed for
Eq. 1.3.16 and 1.3.17 [see also Ref. 1.23]. In the integral Eq. 1.4.4 t ≥ 0, so here too we
must have σ ≥ σ_a.


The path of integration is parallel to the imaginary axis, as shown in Fig. 1.4.1.
The constant c in the integration limits must be properly chosen in order to ensure the
convergence of the integral.



Fig. 1.4.1: The abscissa of absolute convergence — the integration path for Eq. 1.4.4.

The factor e^{−st} in Eq. 1.4.3 is needed to stop the rotation of the corresponding
phasor e^{st}; there are infinitely many such phasors in the time function f(t). As our
variable is now complex, s = σ + jω, the factor e^{−st} does not represent a simple
rotation, but a spiral rotation whose radius decreases exponentially with t because of σ,
the real part of s. This is necessary to cancel the corresponding rotation e^{st}, contained
in f(t), whose radius increases with t in an exactly equal manner [Ref. 1.23].
Since in Eq. 1.4.4 the factor e^{st} becomes divergent if the real part of its exponent,
ℜ{st}, grows without bound, the above conditions for the variable σ (and for the
constant c) must be met to ensure the convergence of the integral. In the analysis of
passive networks these conditions can always be met, as we will show in many examples
in the subsequent sections.
Now that we have reached our goal, the Laplace transform and its inverse, we
may ask ourselves what we have accomplished by all this hard work.
For the time being we can claim that we have transformed a function of the real
variable t into a function of the complex variable s. This allows us to calculate, using the
ℒ transform, the spectrum function F(s) of a finite transient defined by the function f(t).
Or, more importantly for us, by means of the ℒ⁻¹ transform we can calculate the time
domain function if the frequency domain function F(s) is known.
Later we will show how, by means of the ℒ transform, linear differential equations
in the time domain can be transformed into algebraic equations in the s domain. Since
algebraic equations are much easier to solve than differential ones, this is a great
convenience. Once our calculations in the s domain are completed, the ℒ⁻¹ transform
gives us the corresponding time domain function. In this way we avoid solving the
differential equations directly and calculating the boundary conditions.


1.5 Examples of Direct Laplace Transform

Now let us put our new tools to use and calculate the ℒ transform of several simple
functions. The results may also be used for the ℒ⁻¹ transform, and the reader is encouraged
to learn the most basic of them by heart, because they are used extensively in the other parts
of the book and, of course, in the analysis of the most common electronic circuits.

1.5.1 Example 1
Most of our calculations will deal with the step response of a network. Our
excitation function will therefore be the simple unit step h(t), also called the Heaviside
function (after Oliver Heaviside, 1850–1925), shown in Fig. 1.5.1. This function is
defined as:

    f(t) = h(t) = { 0 for t < 0;  1 for t > 0 }

Fig. 1.5.1: Unit step function.   Fig. 1.5.2: Unit step function starting at t = a.

As we agreed in the previous section, f(t) = 0 for t < 0 for all the following
functions, and we will not repeat this statement in further examples. At the same time let
us mention that for our calculation of the ℒ transform the actual value of f(0) is not
important, provided it is finite [Ref. 1.3].
The ℒ transform of the unit step function f(t) = h(t) is:

    F(s) = ℒ{f(t)} = ∫₀^∞ [1] e^{−st} dt = −(1/s) e^{−st} |_{t=0}^{t=∞} = 1/s     (1.5.1)

1.5.2 Example 2
The function is the same as in Example 1, except that the step does not start at t = 0
but at t = a > 0 (Fig. 1.5.2):

    f(t) = { 0 for t < a;  1 for t > a }

Solution:

    F(s) = ∫_a^∞ [1] e^{−st} dt = −(1/s) e^{−st} |_{t=a}^{t=∞} = (1/s) e^{−as}     (1.5.2)
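Both step transforms can be sanity-checked numerically. The helper below is a sketch, not from the book; the step delay a, the test point s, the truncation length T and the step count are arbitrary. It approximates the defining integral with a midpoint Riemann sum at a real s > 0 and compares against Eq. 1.5.1 and 1.5.2.

```python
import math

def laplace_num(f, s, T=40.0, n=40000):
    # Midpoint-rule approximation of F(s) = integral_0^T f(t) e^{-st} dt
    # for real s > 0 and T large enough for the integrand to die out.
    dt = T / n
    return sum(f((k + 0.5) * dt) * math.exp(-s * (k + 0.5) * dt)
               for k in range(n)) * dt

a, s = 1.5, 2.0
delayed_step = lambda t: 1.0 if t > a else 0.0
# L{1} = 1/s and L{step at t=a} = e^{-as}/s
```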


1.5.3 Example 3
The exponential decay function is shown in Fig. 1.5.3; its mathematical expression:

    f(t) = e^{−σ₁t}

is defined for t > 0, as agreed, and σ₁ is a constant.

Solution:

    F(s) = ∫₀^∞ e^{−σ₁t} e^{−st} dt = ∫₀^∞ e^{−(σ₁+s)t} dt

         = −[1/(σ₁ + s)] e^{−(σ₁+s)t} |_{t=0}^{t=∞} = 1/(σ₁ + s)        (1.5.3)

Later we shall meet this and the following function, and also their product, very often.

Fig. 1.5.3: Exponential function.   Fig. 1.5.4: Sinusoidal function.

1.5.4 Example 4
We have a sinusoidal function as in Fig. 1.5.4; its corresponding mathematical
expression is:

    f(t) = sin ω₁t

where the constant ω₁ = 2π/T.
Solution: its ℒ transform is:

    F(s) = ∫₀^∞ (sin ω₁t) e^{−st} dt                                    (1.5.4)

To integrate this function we substitute it using Euler's formula:

    sin ω₁t = (1/2j)·(e^{jω₁t} − e^{−jω₁t})                             (1.5.5)

Then we have:

    F(s) = (1/2j)·[∫₀^∞ e^{jω₁t} e^{−st} dt − ∫₀^∞ e^{−jω₁t} e^{−st} dt]

         = (1/2j)·[∫₀^∞ e^{−(s−jω₁)t} dt − ∫₀^∞ e^{−(s+jω₁)t} dt]      (1.5.6)


The solution of this integral is, in a way, similar to that in the previous example:

    F(s) = (1/2j)·[1/(s − jω₁) − 1/(s + jω₁)]

         = (1/2j)·(s + jω₁ − s + jω₁)/(s² + ω₁²) = ω₁/(s² + ω₁²)        (1.5.7)

This is a typical function of a continuous wave (CW) sinusoidal oscillator with
frequency ω₁.
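The sine transform can be cross-checked at a real test point. The sketch below (not part of the book; ω₁, s and the quadrature settings are arbitrary choices) compares a midpoint Riemann sum of the defining integral with the closed form of Eq. 1.5.7.

```python
import math

def laplace_num(f, s, T=60.0, n=120000):
    # Midpoint-rule approximation of F(s) = integral_0^T f(t) e^{-st} dt.
    dt = T / n
    return sum(f((k + 0.5) * dt) * math.exp(-s * (k + 0.5) * dt)
               for k in range(n)) * dt

w1, s = 2.0, 1.0
approx = laplace_num(lambda t: math.sin(w1 * t), s)
exact = w1 / (s ** 2 + w1 ** 2)   # Eq. 1.5.7, equals 0.4 here
```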

1.5.5 Example 5
Here we have the cosine function as in Fig. 1.5.5, expressed as:

    f(t) = cos ω₁t

Solution: the ℒ transform of this function is calculated in a similar way as for the sine.
According to Euler's formula:

    cos ω₁t = (1/2)·(e^{jω₁t} + e^{−jω₁t})                              (1.5.8)

Thus we obtain:

    F(s) = (1/2)·[∫₀^∞ e^{−(s−jω₁)t} dt + ∫₀^∞ e^{−(s+jω₁)t} dt]

         = (1/2)·[1/(s − jω₁) + 1/(s + jω₁)]

         = (1/2)·(s + jω₁ + s − jω₁)/(s² + ω₁²) = s/(s² + ω₁²)          (1.5.9)

Fig. 1.5.5: Cosine function.   Fig. 1.5.6: Damped oscillations.

1.5.6 Example 6
In Fig. 1.5.6 we have a damped oscillation, expressed by the formula:

    f(t) = e^{−σ₁t} sin ω₁t

Solution: we again substitute the sine function according to Euler's formula:

    F(s) = (1/2j) ∫₀^∞ e^{−(s+σ₁)t} (e^{jω₁t} − e^{−jω₁t}) dt

         = (1/2j) ∫₀^∞ [e^{−(s+σ₁−jω₁)t} − e^{−(s+σ₁+jω₁)t}] dt

         = (1/2j)·[1/(s + σ₁ − jω₁) − 1/(s + σ₁ + jω₁)]
         = ω₁/[(s + σ₁)² + ω₁²]                                         (1.5.10)

An interesting similarity is found if this formula is compared with the result of
Example 4. There, for a CW, we have s² alone in the denominator, whilst here, because
the oscillations are damped, we have (s + σ₁)² instead, σ₁ being the damping factor.

Fig. 1.5.7: Linear ramp f(t) = t.   Fig. 1.5.8: Power function f(t) = tⁿ.

1.5.7 Example 7
A linear ramp, as shown in Fig. 1.5.7, is expressed as:

    f(t) = t

Solution: we integrate by parts according to the known relation:

    ∫ u dv = u v − ∫ v du

and we assign u = t and dv = e^{−st} dt to obtain:

    F(s) = ∫₀^∞ t e^{−st} dt = −(t e^{−st}/s) |_{t=0}^{t=∞} + (1/s) ∫₀^∞ e^{−st} dt

         = 0 − 0 − (1/s²) e^{−st} |_{t=0}^{t=∞} = 1/s²                  (1.5.11)

1.5.8 Example 8
Fig. 1.5.8 displays a function which has the general analytical form:

    f(t) = tⁿ


Solution: again we integrate by parts, decomposing the integrand tⁿ e^{−st} into:

    u = tⁿ     du = n tⁿ⁻¹ dt     v = −(1/s) e^{−st}     dv = e^{−st} dt

With these substitutions we obtain:

    F(s) = ∫₀^∞ tⁿ e^{−st} dt = −(tⁿ e^{−st}/s) |_{t=0}^{t=∞} + (n/s) ∫₀^∞ tⁿ⁻¹ e^{−st} dt

         = (n/s) ∫₀^∞ tⁿ⁻¹ e^{−st} dt                                   (1.5.12)

Again integrating by parts:

    (n/s) ∫₀^∞ tⁿ⁻¹ e^{−st} dt = −(n tⁿ⁻¹ e^{−st}/s²) |_{t=0}^{t=∞}
                                 + [n(n−1)/s²] ∫₀^∞ tⁿ⁻² e^{−st} dt

         = [n(n−1)/s²] ∫₀^∞ tⁿ⁻² e^{−st} dt                             (1.5.13)

By repeating this procedure n times we finally arrive at:

    F(s) = ∫₀^∞ tⁿ e^{−st} dt = [n(n−1)(n−2) ⋯ 3·2·1 / sⁿ] ∫₀^∞ t⁰ e^{−st} dt
         = n!/sⁿ⁺¹                                                      (1.5.14)

1.5.9 Example 9
The function shown in Fig. 1.5.9 corresponds to the expression:

    f(t) = t e^{−σ₁t}

Solution: by integrating by parts we obtain:

    F(s) = ∫₀^∞ t e^{−σ₁t} e^{−st} dt = ∫₀^∞ t e^{−(σ₁+s)t} dt = 1/(σ₁ + s)²     (1.5.15)

1.5.10 Example 10
Similar to Example 9, except that here we have tⁿ, as in Fig. 1.5.10:

    f(t) = tⁿ e^{−σ₁t}

Solution: we apply the procedure from Example 8 and Example 9:

    F(s) = ∫₀^∞ tⁿ e^{−σ₁t} e^{−st} dt = ∫₀^∞ tⁿ e^{−(σ₁+s)t} dt = n!/(σ₁ + s)ⁿ⁺¹     (1.5.16)


Fig. 1.5.9: Function f(t) = t e^{−σ₁t}.   Fig. 1.5.10: Function f(t) = tⁿ e^{−σ₁t}.

These ten examples, which we frequently meet in practice, demonstrate that the
calculation of an ℒ transform is not difficult. Since the results derived are used often, we
have collected them in Table 1.5.1.

Table 1.5.1: Ten frequently met ℒ transform examples

    No.   f(t)                      F(s)
    ---------------------------------------------------------
     1    1  (for t > 0)            1/s
     2    1  (for t > a)            (1/s) e^{−as}
     3    e^{−σ₁t}                  1/(σ₁ + s)
     4    sin ω₁t                   ω₁/(s² + ω₁²)
     5    cos ω₁t                   s/(s² + ω₁²)
     6    e^{−σ₁t} sin ω₁t          ω₁/[(s + σ₁)² + ω₁²]
     7    t                         1/s²
     8    tⁿ                        n!/sⁿ⁺¹
     9    t e^{−σ₁t}                1/(σ₁ + s)²
    10    tⁿ e^{−σ₁t}               n!/(σ₁ + s)ⁿ⁺¹


1.6 Important Properties of the Laplace Transform


It is useful to know some of the most important properties of the ℒ transform:

1.6.1 Linearity (1)

    ℒ{f(t) ± g(t)} = ℒ{f(t)} ± ℒ{g(t)}                                  (1.6.1)

Example:

    ℒ{t + sin ω₁t} = ℒ{t} + ℒ{sin ω₁t} = 1/s² + ω₁/(s² + ω₁²)           (1.6.2)

1.6.2 Linearity (2)

    ℒ{K f(t)} = K ℒ{f(t)}                                               (1.6.3)

where K is a real constant.
Example:

    ℒ{4 e^{−σ₁t} sin ω₁t} = 4 ℒ{e^{−σ₁t} sin ω₁t} = 4ω₁/[(s + σ₁)² + ω₁²]     (1.6.4)
1.6.3 Real Differentiation

    ℒ{df(t)/dt} = s F(s) − f(0⁺)                                        (1.6.5)

The transform of the derivative of the function f(t) is obtained by multiplying F(s)
by s and subtracting the value of f(t) as 0 ← t, denoted by the + sign in f(0⁺) (the
direction is important because the values f(0⁻) and f(0⁺) can be different). We will prove
this statement by deriving it from the definition of the ℒ transform:

    F(s) = ℒ{f(t)} = ∫₀^∞ f(t) e^{−st} dt                               (1.6.6)

We integrate by parts, making f(t) = u and e^{−st} dt = dv. The result is:

    ∫₀^∞ f(t) e^{−st} dt = −f(t) (e^{−st}/s) |_{t=0}^{t=∞} + (1/s) ∫₀^∞ [df(t)/dt] e^{−st} dt

                         = f(0⁺)/s + (1/s) ℒ{df(t)/dt}                  (1.6.7)

By rearranging, we prove the statement expressed in Eq. 1.6.5:

    s ∫₀^∞ f(t) e^{−st} dt − f(0⁺) = s F(s) − f(0⁺)

         = ∫₀^∞ [df(t)/dt] e^{−st} dt = ℒ{df(t)/dt}                     (1.6.8)


Example:

    ℒ{d(e^{−σ₁t})/dt} = s F(s) − f(0⁺) = s·1/(s + σ₁) − 1 = −σ₁/(s + σ₁)     (1.6.9)

We may also check the result by first differentiating the function e^{−σ₁t}:

    d(e^{−σ₁t})/dt = −σ₁ e^{−σ₁t}                                       (1.6.10)

and then applying the ℒ transform:

    ℒ{−σ₁ e^{−σ₁t}} = −σ₁ ℒ{e^{−σ₁t}} = −σ₁/(s + σ₁)                    (1.6.11)

The result is the same.
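The same check of Eq. 1.6.9 can be done numerically. This sketch is not from the book; σ₁ and s are arbitrary test values. It transforms the derivative directly and compares with s F(s) − f(0⁺).

```python
import math

def laplace_num(f, s, T=40.0, n=80000):
    # Midpoint-rule approximation of integral_0^T f(t) e^{-st} dt.
    dt = T / n
    return sum(f((k + 0.5) * dt) * math.exp(-s * (k + 0.5) * dt)
               for k in range(n)) * dt

sg, s = 1.0, 2.0
f_prime = lambda t: -sg * math.exp(-sg * t)   # d/dt of f(t) = e^{-sg t}
lhs = laplace_num(f_prime, s)                 # L{f'(t)}
rhs = s * (1 / (s + sg)) - 1.0                # s F(s) - f(0+), both -1/3 here
```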
By now the advantage of the ℒ transform over differential equations should have
become obvious. In the s domain, the derivative of the function f(t) corresponds to F(s)
multiplied by s, minus the value f(0⁺). The reason that t must approach zero from
the right (+) side is that we have prescribed f(t) to be zero for t < 0. In other words, we
have a unilateral transform.
The higher derivatives are obtained by repeating the above procedure. If for the first
derivative we have obtained:

    ℒ{f′(t)} = s F(s) − f(0⁺)                                           (1.6.12)

then the ℒ transform of the second derivative is:

    ℒ{f″(t)} = s ℒ{f′(t)} − f′(0⁺)

             = s (s F(s) − f(0⁺)) − f′(0⁺) = s² F(s) − s f(0⁺) − f′(0⁺)     (1.6.13)

By a similar procedure the ℒ transform of the third derivative is:

    ℒ{f‴(t)} = s (s² F(s) − s f(0⁺) − f′(0⁺)) − f″(0⁺)

             = s³ F(s) − s² f(0⁺) − s f′(0⁺) − f″(0⁺)                   (1.6.14)

Thus the ℒ transform of the n-th derivative is simply:

    ℒ{f⁽ⁿ⁾(t)} = sⁿ F(s) − sⁿ⁻¹ f(0⁺) − sⁿ⁻² f′(0⁺) − sⁿ⁻³ f″(0⁺) − ⋯

               ⋯ − s² f⁽ⁿ⁻³⁾(0⁺) − s f⁽ⁿ⁻²⁾(0⁺) − f⁽ⁿ⁻¹⁾(0⁺)            (1.6.15)

1.6.4 Real Integration

We intend to prove that:

    ℒ{∫₀^t f(τ) dτ} = F(s)/s                                            (1.6.16)


We will derive the proof from the basic definition of the ℒ transform:

    ℒ{∫₀^t f(τ) dτ} = ∫₀^∞ [∫₀^t f(τ) dτ] e^{−st} dt                    (1.6.17)

For the integration by parts we assign:

    u = ∫₀^t f(τ) dτ     du = f(t) dt     v = −(1/s) e^{−st}     dv = e^{−st} dt     (1.6.18)

Considering all this, we may write the integral:

    ∫₀^∞ [∫₀^t f(τ) dτ] e^{−st} dt = [−(1/s) e^{−st} ∫₀^t f(τ) dτ]_{t=0}^{t=∞}
                                     + (1/s) ∫₀^∞ e^{−st} f(t) dt       (1.6.19)

The term between the limits is zero for t = 0, because ∫₀⁰ ⋯ = 0, and for t = ∞ as well,
because the exponential e^{−∞} = 0. Thus only the last integral remains, from which we
can factor out the term 1/s. The result is:

    ℒ{∫₀^t f(τ) dτ} = (1/s) ∫₀^∞ f(t) e^{−st} dt = F(s)/s               (1.6.20)

and thus the statement expressed by Eq. 1.6.16 is proved.


Example:

    f(t) = e^{−σ₁t} sin ω₁t                                             (1.6.21)

We have already calculated the transform of this function (Eq. 1.5.10); it is:

    F(s) = ω₁/[(s + σ₁)² + ω₁²]

Let us now calculate the integral of this function according to Eq. 1.6.20, introducing a
dummy variable τ:

    ℒ{∫₀^t e^{−σ₁τ} sin ω₁τ dτ} = F(s)/s = ω₁/(s [(s + σ₁)² + ω₁²])     (1.6.22)

This expression describes the step response of a network having a complex
conjugate pole pair. We meet such functions very often in the analysis of inductive peaking
circuits and in calculating the step response of an amplifier with negative feedback.
We may obtain the transform of multiple integrals by repeating the procedure
expressed by Eq. 1.6.16.
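Eq. 1.6.22 can be confirmed numerically by building the running integral of e^{−σ₁t} sin ω₁t and transforming it on the same fine grid. This sketch is not from the book; σ₁, ω₁, s and the grid settings are arbitrary choices, and the one-pass accumulation (the running sum doubles as the inner integral) is just a convenient way to keep the cost low.

```python
import math

sg, w1, s = 1.0, 2.0, 2.0
T, n = 40.0, 200000
dt = T / n

g = 0.0   # g(t) = integral_0^t e^{-sg u} sin(w1 u) du, built incrementally
F = 0.0   # accumulates L{g} = integral_0^T g(t) e^{-st} dt
for k in range(n):
    u = (k + 0.5) * dt
    g += math.exp(-sg * u) * math.sin(w1 * u) * dt
    F += g * math.exp(-s * u) * dt

exact = w1 / (s * ((s + sg) ** 2 + w1 ** 2))   # Eq. 1.6.22, here 2/26
```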


By doing so we obtain for the

    single integral:   ℒ{∫₀^t f(τ) dτ} = F(s)/s

    double integral:   ℒ{∫₀^t ∫₀^{τ₁} f(τ) dτ} = F(s)/s²

    triple integral:   ℒ{∫₀^t ∫₀^{τ₁} ∫₀^{τ₂} f(τ) dτ} = F(s)/s³

    n-th integral:     ℒ{∫₀^t ⋯ ∫₀^{τₙ₋₁} f(τ) dτ} = F(s)/sⁿ     (n integrals)     (1.6.23)

The ℒ transform of the integral of the function f(t) gives the complex function
F(s)/s; the function F(s) must be divided by s as many times as we integrate.
Here again we see a great advantage of the ℒ transform, for we can replace
integration in the time domain (often a rather demanding procedure) by a simple
division by s in the (complex) frequency domain.

1.6.5 Change of Scale

We have the function:

    ℒ{f(at)} = ∫₀^∞ f(at) e^{−st} dt                                    (1.6.24)

We introduce a new variable v = at, for which dv = a dt and also t = v/a. Thus
we obtain:

    ℒ{f(at)} = (1/a) ∫_{v=0}^∞ f(v) e^{−(s/a)v} dv = (1/a) F(s/a)       (1.6.25)

Example: we have the function:

    f(t) = t e^{−3t}                                                    (1.6.26)

We have already calculated the ℒ transform of a similar function in Eq. 1.5.15. For the
function above the result is:

    F(s) = 1/(s + 3)²                                                   (1.6.27)


Now let us change the scale tenfold. The new function is:

    g(t) = f(10t) = 10 t e^{−30t}                                       (1.6.28)

According to Eq. 1.6.25 it follows that:

    ℒ{g(t)} = (1/10) F(s/10) = 1/[10·(s/10 + 3)²] = 10/(s + 30)²        (1.6.29)

1.6.6 Impulse δ(t)

In Fig. 1.6.1a we have a square pulse A₁ with amplitude 1 and duration t = 1. The
area under the pulse, equal to the time integral of this pulse, is amplitude × time and thus
equal to 1. It is obvious that we may obtain the same time integral if the duration of the
pulse is halved and its amplitude doubled (A₂). The pulse A₄ has a four times greater
amplitude and its duration is only t = 0.25, yet it still has the same time integral.
If we keep narrowing the pulse and adjusting the amplitude accordingly to keep the
value of the time integral 1, we eventually arrive at a situation where the duration of the
pulse becomes infinitely small, t = ε → 0, and its amplitude infinitely large,
A = (1/ε) → ∞, as shown in Fig. 1.6.1b.
This impulse is denoted δ(t) and is called the Dirac² function. Mathematically we
express this function as:

    δ(t) = f_ε(t) = { 1/ε when 0 ≤ t ≤ ε;  0 when t > ε } |_{ε→0}       (1.6.30)
Fig. 1.6.1: The Dirac function as the limiting case of narrowing the pulse width while
keeping the time integral constant: a) if the pulse length is decreased, its amplitude must
increase accordingly; b) when the pulse length ε → 0, the amplitude (1/ε) → ∞.

Let us calculate the ℒ transform of this function:

    ℒ{f_ε(t)} = ∫₀^∞ f_ε(t) e^{−st} dt = ∫₀^ε (1/ε) e^{−st} dt + ∫_ε^∞ (0) e^{−st} dt

              = −(1/sε) e^{−st} |_{t=0}^{t=ε} = (1 − e^{−sε})/(sε)      (1.6.31)

² Paul Dirac, 1902–1984, English physicist, Nobel Prize winner in 1933 (together with Erwin Schrödinger).


Now we expand the function e^{−sε} in this result into a series:

    (1 − e^{−sε})/(sε) = {1 − [1 − sε + (sε)²/2! − (sε)³/3! + ⋯]}/(sε)
                       = 1 − sε/2! + (sε)²/3! − ⋯

and by letting ε → 0 we obtain:

    ℒ{δ(t)} = lim_{ε→0} (1 − e^{−sε})/(sε) = lim_{ε→0} [1 − sε/2! + (sε)²/3! − ⋯] = 1     (1.6.32)

Therefore the magnitude of the spectrum envelope of this function is one,
independent of frequency. This means that the Dirac impulse δ(t) contains all frequency
components, the amplitude of each component being A = 1.
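The limit in Eq. 1.6.32 is easy to watch numerically: as the pulse narrows, its exact transform (1 − e^{−sε})/(sε) climbs toward 1. This sketch is not from the book; s and the list of pulse widths are arbitrary.

```python
import math

def pulse_transform(s, eps):
    # Eq. 1.6.31: exact L-transform of the rectangular pulse of
    # height 1/eps and width eps.
    return (1 - math.exp(-s * eps)) / (s * eps)

s = 5.0
values = [pulse_transform(s, eps) for eps in (1.0, 0.1, 0.01, 0.0001)]
# values increases monotonically toward 1, i.e. L{delta(t)} = 1
```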

1.6.7 Initial and Final Value Theorems

The initial value theorem is expressed as:

    lim_{0←t} f(t) = lim_{s→∞} s F(s)                                   (1.6.33)

We have written the notation 0 ← t in order to emphasize that t approaches zero
from the right side of the coordinate system. From real differentiation we know that:

    ℒ{df(t)/dt} = ℒ{f′(t)} = ∫₀^∞ f′(t) e^{−st} dt = s F(s) − f(0⁺)     (1.6.34)

The limit of this integral when s → ∞ is zero:

    lim_{s→∞} ∫₀^∞ f′(t) e^{−st} dt = 0                                 (1.6.35)

If we assume that f(t) is continuous at t = 0, we may write the limit of the right
hand side of Eq. 1.6.34:

    0 = lim_{s→∞} s F(s) − f(0⁺)                                        (1.6.36)

or, in a form more useful for practical calculations:

    lim_{s→∞} s F(s) = f(0⁺) = lim_{0←t} f(t)                           (1.6.37)

Even if f(t) is not continuous at t = 0, this relation is still valid, although the proof
is slightly more difficult [Ref. 1.10]. The expression f(0⁺) is introduced because we are
dealing with a unilateral transform, in which it is assumed that f(t) = 0 for t < 0, so to
calculate the actual initial value we must approach it from the positive side of the time axis.
For the functions which we will discuss in the rest of the book we can, in a similar
way, prove the final value theorem, which is stated as:

    lim_{t→∞} f(t) = lim_{s→0} s F(s)                                   (1.6.38)

(note that for some functions, such as sin ω₁t or cos ω₁t or the square wave, this limit does
not exist, since the value oscillates, as does the time integral of the function!).


We repeat the statement from Eq. 1.6.34:

    ℒ{f′(t)} = ∫₀^∞ f′(t) e^{−st} dt = s F(s) − f(0⁺)                   (1.6.39)

Now let s → 0 (using q as an intermediate dummy variable):

    lim_{s→0} ∫₀^∞ f′(t) e^{−st} dt = ∫_{0⁺}^∞ f′(t) dt = lim_{q→∞} ∫_{0⁺}^q f′(t) dt

         = lim_{q→∞} [f(q) − f(0⁺)] = lim_{t→∞} f(t) − f(0⁺)            (1.6.40)

Although the lower limit of the integral is a (simple) zero, we have nevertheless
written 0⁺ in the result, to emphasize the unilateral transform. The limit of the right hand
side of Eq. 1.6.39, when s → 0, is:

    lim_{s→0} s F(s) − f(0⁺)                                            (1.6.41)

By comparing the results of Eq. 1.6.34, 1.6.39 and 1.6.41 we may write:

    lim_{t→∞} f(t) − f(0⁺) = lim_{s→0} s F(s) − f(0⁺)                   (1.6.42)

or, as stated initially in Eq. 1.6.38:

    lim_{t→∞} f(t) = lim_{s→0} s F(s)

Eq. 1.6.37 and 1.6.38 are extremely useful for checking the results of
complicated calculations by the direct or the inverse Laplace transform, as we will see in
the following parts of the book. Should the check by these two equations fail, we have
obviously made a mistake somewhere.
However, this is a necessary, but not a sufficient condition: a passed check does not
guarantee the absence of other 'sneaky' mistakes, which may become
obvious only when we plot the resulting function.
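Both theorems can be exercised on the step-response transform of Eq. 1.6.22. This sketch is not from the book; σ₁, ω₁ and the very large/small substitution points (standing in for the limits s → ∞ and s → 0) are arbitrary choices.

```python
sg, w1 = 1.0, 2.0
F = lambda s: w1 / (s * ((s + sg) ** 2 + w1 ** 2))   # Eq. 1.6.22

big, small = 1e6, 1e-9
initial = big * F(big)     # s F(s) for large s -> f(0+) = 0
final = small * F(small)   # s F(s) for small s -> f(inf) = w1/(sg^2 + w1^2)
exact_final = w1 / (sg ** 2 + w1 ** 2)
```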

1.6.8 Convolution
We need a process by which we can calculate the response of two systems
connected so that the output of the first one is the input of the second one, when their
individual responses are known. We have two functions [Ref. 1.19]:

    f(t) = ℒ⁻¹{F(s)}     and     g(t) = ℒ⁻¹{G(s)}                       (1.6.43)

and we are looking for the inverse transform of the product:

    y(t) = ℒ⁻¹{F(s)·G(s)}                                               (1.6.44)

The product of the two Laplace transforms is the product of the corresponding integrals:

    F(s)·G(s) = ∫₀^∞ f(τ) e^{−sτ} dτ · ∫₀^∞ g(v) e^{−sv} dv             (1.6.45)


In order to distinguish better between f(t) and g(t), we assign the letter u for the
argument of f and v for the argument of g; thus f(t) → f(u) and g(t) → g(v). Since both
variables are now well separated, we may also write the above integral in the form:

    F(s)·G(s) = ∫₀^∞ [∫₀^∞ f(u) g(v) e^{−s(u+v)} dv] du                 (1.6.46)

Let us integrate the expression inside the brackets with respect to the variable v. To do so
we introduce a new variable τ:

    τ = u + v     so     v = τ − u     and     dv = dτ                  (1.6.47)

We consider the variable u in the inner integral to be a (variable) parameter. From the
above expressions it follows that v = 0 when τ = u. Considering all this, we may transform
Eq. 1.6.46 into:

    F(s)·G(s) = ∫₀^∞ [∫_u^∞ f(u) g(τ − u) e^{−sτ} dτ] du                (1.6.48)

We may also change the sequence of integration. Thus we may choose a fixed t₁
and first integrate from τ = 0 to τ = t₁. In the second integration we integrate from u = 0
to u = ∞. The above expression then takes the form:

    F(s)·G(s) = ∫₀^∞ [∫₀^{t₁} f(u) g(τ − u) du] e^{−sτ} dτ              (1.6.49)

Now we can return from u back to the usual time variable t:

    F(s)·G(s) = ∫₀^∞ [∫₀^{t₁} f(t) g(τ − t) dt] e^{−sτ} dτ              (1.6.50)
                      └────────── y(t) ──────────┘

The expression inside the brackets is the function y(t) we are looking for,
whilst the outer integral is the usual Laplace transform. Thus we define the convolution
process, denoted g(t) ∗ f(t), as:

    y(t) = ℒ⁻¹{F(s)·G(s)} = g(t) ∗ f(t) = ∫₀^{t₁} f(t) g(τ − t) dt      (1.6.51)

The operator symbolized by the asterisk (∗) means 'convolved with'. Convolutio is
the Latin word for folding; the German name for convolution, die Faltung, also
means folding. Obviously:

    g(τ − t) |_{τ=0} = g(−t)     and     g(τ − t) |_{τ=t} = g(0)

This means that the function is 'folded' in time around the ordinate, from the right
to the left side of the coordinate system. At the end of this part, after we have mastered
network analysis in Laplace space, we will work through an example (Fig. 1.15.1) in
which this 'folding' and the convolution process are shown explicitly, step by step.
In general we convolve whichever of the two functions is simpler. We may do so
because convolution is commutative:

    g(t) ∗ f(t) = f(t) ∗ g(t)                                           (1.6.52)
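Commutativity is easy to demonstrate with a discrete convolution. This sketch is not from the book; the sample step, record length, and the choice of e^{−t} convolved with a unit step are arbitrary. The product of the dt factor with the impulse samples makes the discrete sum approximate the continuous convolution integral.

```python
import math

def convolve(f, g):
    # Discrete convolution y[m] = sum_i f[i] * g[m - i]  (full length).
    y = [0.0] * (len(f) + len(g) - 1)
    for i, fi in enumerate(f):
        for k, gk in enumerate(g):
            y[i + k] += fi * gk
    return y

dt = 0.01
impulse = [math.exp(-i * dt) * dt for i in range(300)]  # samples of e^{-t}, scaled by dt
step = [1.0] * 300                                      # samples of the unit step
a = convolve(impulse, step)
b = convolve(step, impulse)   # identical: convolution is commutative
```

The result also approximates the step response 1 − e^{−t} of a single-pole system, tying the time-domain picture to the product F(s)·G(s).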

The main properties of the Laplace transform are listed in Table 1.6.1.

Table 1.6.1: The main properties of the Laplace transform

    Property                f(t)                         F(s)
    ------------------------------------------------------------------
    Real Differentiation    df(t)/dt                     s F(s) − f(0⁺)
    Real Integration        ∫₀^{t₁} f(t) dt              F(s)/s
    Time-Scale Change       f(at)                        (1/a) F(s/a)
    Impulse Function        δ(t)                         1
    Initial Value           lim_{0←t} f(t)               lim_{s→∞} s F(s)
    Final Value             lim_{t→∞} f(t)               lim_{s→0} s F(s)
    Convolution             ∫₀^{t₁} f(t) g(τ − t) dt     F(s)·G(s)


1.7 Application of the ℒ Transform in Network Analysis

1.7.1 Inductance
Having discussed the fundamentals of the ℒ transform, we will now apply it to
network analysis. From basic electrical engineering we know that the instantaneous voltage
v(t) across an inductance L, through which a current i(t) flows, as in Fig. 1.7.1a, is:

    v(t) = L di/dt                                                      (1.7.1)

Assuming time t > 0 and i(0⁺) = 0, then, according to Eq. 1.6.5, the ℒ transform of the
above equation gives the voltage across the inductance in the s domain:

    V(s) = s L I(s)                                                     (1.7.2)

where I(s) is the current in the s domain. The inductive reactance is then:

    V(s)/I(s) = s L                                                     (1.7.3)

Here s = σ + jω and thus it is complex; it can lie anywhere in the s plane. In the
special case when σ = 0, and considering only the positive jω axis, s degenerates into jω.
Then the inductive reactance becomes the familiar jωL, as known from the usual
'phasor' analysis of networks.

Fig. 1.7.1: The instantaneous voltage v as a function of the instantaneous current i:
a) on an inductance L; b) on a capacitance C; c) on a resistance R.

1.7.2 Capacitance
From basic electrical engineering we also know that the instantaneous voltage v(t)
across a capacitance through which a current i(t) flows during a time t ≥ 0 is:

    v(t) = q(t)/C = (1/C) ∫₀^t i dt                                     (1.7.4)

as shown in Fig. 1.7.1b. Here q(t) is the instantaneous charge on the capacitor C. By
applying Eq. 1.6.20 we may calculate the voltage on the capacitor in the s domain:

    V(s) = [1/(sC)] I(s)                                                (1.7.5)


The capacitive reactance in the s domain is:

    V(s)/I(s) = 1/(sC)                                                  (1.7.6)

Here, too, s degenerates to jω if σ = 0. In this case the capacitive reactance
becomes simply 1/(jωC).

1.7.3 Resistance
For a resistor (Fig. 1.7.1c) the instantaneous voltage is simply:

    v(t) = R i(t)                                                       (1.7.7)

and, as there are no time derivatives, the same holds in the s domain, with the
corresponding values V(s) and I(s):

    V(s) = R I(s)                                                       (1.7.8)

yielding:

    V(s)/I(s) = R                                                       (1.7.9)

1.7.4 Resistor and Capacitor in Parallel

By applying Eq. 1.7.3, 1.7.6 and 1.7.9 we may transform a differential equation
in the t domain into an algebraic equation in the s domain. Thus we may express the
impedance Z(s) or the admittance Y(s) of more complicated networks by simple algebraic
equations. Let us express a parallel combination of a resistor and a capacitor in the s
domain, as shown in the upper part of Fig. 1.7.2. The impedance is:

    Z(s) = 1/(sC + 1/R) = R · (1/RC)/[s + (1/RC)] = R · (−s₁)/(s − s₁) = R G₁(s)     (1.7.10)

where the (real) pole is at s₁ = −σ₁ = −1/RC and G₁(s) represents the frequency dependence.
The pole of a function is that particular value of the argument for which the function's
denominator is equal to zero and, consequently, the function value goes to infinity.
Now let us apply a current step, I(s) = 1 V/R, to our network, expressed in the s
domain as 1/(sR), according to Eq. 1.5.1. We introduced the factor 1/R in order to get a
voltage of 1 V on our RC combination when t → ∞. The corresponding function is then:

    F(s) = V(s) = I(s)·Z(s) = [1/(sR)] · R G₁(s) = (1/s) G₁(s)

         = (1/RC) · (1/s) · 1/[s + (1/RC)]                              (1.7.11)

From 1/s a second pole at s = 0 is introduced, as drawn in Fig. 1.7.2b and c. To
obtain the time domain function of the voltage across our impedance, we apply Eq. 1.5.3
and 1.6.20. First we discuss only the function G₁(s). According to Eq. 1.5.3:

    ℒ{e^{σ₁t}} = 1/(s − σ₁)                                                      (1.7.12)

or, inversely:

    ℒ⁻¹{1/(s − σ₁)} = e^{σ₁t}                                                    (1.7.13)

By comparing Eq. 1.7.11 with Eq. 1.7.13 we see that σ₁ = −1/RC:

    ℒ⁻¹{1/(s + 1/RC)} = e^{−t/RC}                                                (1.7.14)

From Eq. 1.6.20 we concluded that the division in the s domain corresponds to the
real integration in the t domain. By considering this together with Eq. 1.5.1, we obtain:

    v_o(t) = f(t) = ℒ⁻¹{F(s)} = ℒ⁻¹{(1/RC) / [s (s + 1/RC)]} = (1/RC) ∫₀ᵗ e^{−t/RC} dt

           = (1/RC) · RC (−e^{−t/RC}) |₀ᵗ = −e^{−t/RC} − (−1) = 1 − e^{−t/RC}    (1.7.15)

Fig. 1.7.2: The course of mathematical operations for a parallel RC network excited by a unit step
current i_i (plots omitted here; the t domain functions are on the left, the s domain pole plots on
the right). a) The self-discharge network function is equal to the impulse function g₁(t) = e^{−t/RC},
with the pole at s₁ = −1/RC. b) The unit step in the t domain, h(t) = 0 for t < 0 and 1 V/R for t > 0,
is represented by a pole at the origin (s₀ = 0) in the s domain. The function g₂(t) = −e^{−t/RC} is the
reaction of the network to the unit step excitation. c) The output voltage is the sum of both
functions, v_o = f(t) = h(t) + g₂(t) = 1 − e^{−t/RC}.
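The result of Eq. 1.7.15 is easy to check numerically. The short sketch below (the component values R = 1 kΩ and C = 1 µF are arbitrary illustration values, not taken from the text) evaluates the unit step response v_o(t) = 1 − e^{−t/RC} at a few characteristic times:

```python
import math

R = 1.0e3    # ohms (example value, not from the text)
C = 1.0e-6   # farads (example value)
tau = R * C  # the time constant RC

def step_response(t):
    """Unit step response of the parallel RC network, Eq. 1.7.15."""
    return 1.0 - math.exp(-t / tau)

# After one time constant the output reaches about 63.2 % of the final value:
print(step_response(tau))      # ~0.6321
# After five time constants it is within 1 % of the final value:
print(step_response(5 * tau))  # ~0.9933
```

The familiar 63.2 % point at t = RC follows directly from e^{−1} ≈ 0.368.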


From this simple example we obtain the idea of how to use tables of ℒ transforms
to obtain the response in the t domain, which would otherwise have to be calculated by
solving differential equations. In addition to this we may state a very important
conclusion for the s domain:

    (output function) = (excitation function) × (network function)

In our case it was:

    excitation function:    H(s) = 1/(sR)

    network function:       R G₁(s) = R · (1/RC)/(s + 1/RC)    (also named 'impulse response')

    output function:        F(s) = (1/s) · (1/RC)/(s + 1/RC)

However, in general, especially for more complicated networks, the calculation of
the corresponding function in the t domain is not as easy as shown above. Of course, one
may always apply the formula for the inverse Laplace transform (Eq. 1.4.4):

    f(t) = ℒ⁻¹{F(s)} = (1/2πj) ∫_{c−j∞}^{c+j∞} F(s) e^{st} ds

but it would not be fair to leave the reader to grind through this integral of his or her
F(s) unaided. In essence the above expression is a contour integral.

Knowledge of contour integration is a necessary prerequisite for calculating the
inverse Laplace transform. We will discuss this in the following section. After studying it
the reader will realize that the calculation of the step response in the t domain by contour
integration is, although a little more difficult than in the above example of the simple
RC circuit, still a relatively simple procedure.


1.8 Complex Line Integrals

In order to learn how to calculate contour integrals the first step is the calculation of
complex line integrals. Both require a knowledge of the basics of complex variable theory,
also called the theory of analytic functions. We will discuss only that part of this theory
which is relevant to the inverse Laplace transform of rational functions (which are
important in the calculation of amplifier step and impulse response). The reader who would
like to know more about the complex variable theory, should study at least one of the books
listed at the end of Part 1 [Ref. 1.4, 1.9, 1.11, 1.12, 1.13, 1.14, 1.15], of which [Ref. 1.10
and 1.14] (in English), [Ref. 1.13] (in German), and [Ref. 1.11] (in Slovenian) are
especially recommended.
The definition of an analytic function is:

In a certain domain (the one we are interested in) a function f(z) is analytic if it is:
1) continuous;
2) single valued (at each argument value); and
3) differentiable at any selected point z, independently of the side from which we
approach that point.

From the calculus we know that a definite integral of a function of a real variable,
such as y = f(x), is equal to the area A between the function (curve) and the real axis x,
between the two limits x₁ and x₂. An example is shown in Fig. 1.8.1, where the integral of
the simple function 1/x, integrated from x₁ to x₂, is displayed.
Fig. 1.8.1: The integral of the real function y = 1/x between the limits x₁ and x₂
corresponds to the area A under the curve (plot omitted here).

The corresponding mathematical expression is:


    A = ∫_{x₁}^{x₂} (1/x) dx = F(x₂) − F(x₁) = ln x₂ − ln x₁ = ln(x₂/x₁)         (1.8.1)


The area above the x axis is counted as positive and the area below the x axis (if
any) as negative. The area A in Fig. 1.8.1 represents the difference of the integral values
at the upper limit, F(x₂), and at the lower limit, F(x₁). As shown in Fig. 1.8.1, the
integration path ran from x₁ along the x axis up to x₂.

For a comparison let us now calculate a similar integral, but with a complex
variable z = x + jy:

    W = ∫_{z₁}^{z₂} (1/z) dz = F(z₂) − F(z₁) = ln z₂ − ln z₁ = ln(z₂/z₁)         (1.8.2)

So far we can not see any difference between Eq. 1.8.1 and 1.8.2 (a close
investigation of the result would show that it may be multi-valued in the case the path
from z₁ to z₂ circles the pole one or more times; but we will not discuss such cases). The
whole integration procedure is the same in both cases. The difference in the result of the
second equation becomes apparent when we express the complex variable z in the
exponential form:

    z₁ = |z₁| e^{jθ₁}   and   z₂ = |z₂| e^{jθ₂}                                   (1.8.3)

then:

    ln(z₂/z₁) = ln(|z₂| e^{jθ₂} / |z₁| e^{jθ₁}) = ln[(|z₂|/|z₁|) e^{j(θ₂−θ₁)}]
              = ln(|z₂|/|z₁|) + j(θ₂ − θ₁) = u + jv                               (1.8.4)

where:

    u = ln(|z₂|/|z₁|)   and   v = θ₂ − θ₁                                         (1.8.5)

and also:

    |z_i| = √(x_i² + y_i²)   and   θ_i = arctan(y_i/x_i)                          (1.8.6)

Obviously the result of Eq. 1.8.2, as shown in Eq. 1.8.4, is complex. It can not be
plotted as simply as the integral of Fig. 1.8.1, since for displaying a complex function of a
complex argument we would need a 4D graph, whilst the present state of technology allows
us to plot only a 2D projection of a 3D graph, at best.

We can, however, restrict the z argument's domain, as in Fig. 1.8.2, by making its
real part a constant, say, x = c, and then make plots of F(c + jy) = 1/(c + jy) for some
selected value of c. In Fig. 1.8.2 we have chosen c = 0 and c = 0.5, whilst the imaginary
part was varied from −j3 to +j3.

In this way we have plotted two graphs, labeled A and B. The graph A belongs to
c = 0 and lies in the ℑ{z} × ℑ{F(z)} plane; it looks just like the one in Fig. 1.8.1, but
changed in sign, owing to the following rationalization of the function's denominator:

    1/(jy) = j/(j²y) = j/(−1 · y) = −j/y

The graph B belongs to c = 0.5 and is a 3D curve, twisting in accordance with the
phase angle of the function. To aid the 3D view, the three projections of B have also been
plotted.


Fig. 1.8.2: By reducing the complex domain x + jy to c + jy, where c is a constant, we can plot the
complex function F(c + jy) in a 3D graph (plot omitted here; axes ℜ{z}, ℑ{z} and ℑ{F(z)}). Here we
have c = 0 (graph A) and c = 0.5 (graph B). Also shown are the three projections of B. The twisted
surface is the integral of F(c + jy) for c = 0.5 and y in the range −j2 < y < j2. See Appendix 1
for more details.

Let us determine a few characteristic points of the graph B:

a) for the first point on the left we have c = 0.5 and y = −3, thus:

    F(c + jy) = 1/(0.5 − j3) = (0.5 + j3) / [(0.5 − j3)(0.5 + j3)]

              = (0.5 + j3)/(0.5² + 3²) = (0.5 + j3)/9.25

              = 0.5/9.25 + j 3/9.25 = 0.0541 + j 0.3243

b) next, let us have c = 0.5 and y = −0.5, thus:

    F(c + jy) = 1/(0.5 − j0.5) = (0.5 + j0.5) / [(0.5 − j0.5)(0.5 + j0.5)]

              = (0.5 + j0.5)/(0.5² + 0.5²) = 0.5/0.5 + j 0.5/0.5 = 1 + j

(here both the real and the imaginary part are 1; this is the top point of the curve).

c) an obvious choice is c = 0.5 and y = 0, thus:

    F(c + jy) = 1/(0.5 + j0) = 1/0.5 = 2

(here the real part is 2, the imaginary part is 0 and this is the right-most point on the
curve; also, it is its only real value point).

For positive imaginary values, F(z) is the complex conjugate of the values above.


Now that we have some idea of how F(z) looks, let us return to our integral
problem. If the integration path is parallel to the imaginary axis, −j2 < y < j2, and
displaced by x = ℜ{z} = c = 0.5, the result of integration would be the surface indicated
in Fig. 1.8.2. But for an arbitrary path, with x not constant, we should make many such
plots as above and then trace the integration path over the appropriate curves. The area
bounded by the integration path and its trace on those curves would be the result we seek.

For a detailed treatment of complex function plotting see Appendix 1.

Returning to the result of Eq. 1.8.4 we may draw an interesting conclusion:

The complex line integral depends only on the initial value z₁ and the final
value z₂, which represent the two limits of the integral. The result of the integration is
independent of the actual path between these limits, providing the path lies on the same
side of the pole.

All the significant differences between an integral of a real function and the line
integral of a complex function are listed in Table 1.8.1. The x axis is the argument's
domain for a real integral, whilst for a complex integral it is the whole z plane. Do not
confuse the z plane (the complex plane, x + jy) with the diagram's vertical axis, which
here is F(z) = F(x + jy). We recommend the reader to ponder over Fig. 1.8.2 and try to
acquire a clear idea of the differences between the two types of integral, since this is
necessary for the understanding of the discussion which follows.

Table 1.8.1: Differences between real and complex integration

                       real variable             complex variable
    ──────────────────────────────────────────────────────────────────────────────
    integral           ∫_{x₁}^{x₂} (1/x) dx      ∫_{z₁}^{z₂} (1/z) dz

    independent
    variable           x                         z = x + jy

    dependent          y = 1/x                   w = 1/z = u + jv
    variable                                       = x/(x² + y²) − j y/(x² + y²)

    integration
    limits             from x₁ to x₂             from z₁ = x₁ + jy₁ to z₂ = x₂ + jy₂

    path               along the x axis          anywhere in the z plane*

    result             ln(x₂/x₁)  (real)         ln(z₂/z₁) = ln(|z₂|/|z₁|) + j(θ₂ − θ₁)
                                                 (complex)
    ──────────────────────────────────────────────────────────────────────────────
    * except through the pole, where z = 0

To understand the theory better let us give a few examples:


1.8.1 Example 1

We have a function f(z) = 3z, which we shall integrate from 2j to 1 + j:

    ∫_{2j}^{1+j} 3z dz = (3z²/2) |_{2j}^{1+j} = (3/2) [(1 + j)² − (2j)²] = 6 + 3j

1.8.2 Example 2

The integration limits are the same as in the previous example, whilst the function is
different, f(z) = 1 + z²:

    ∫_{2j}^{1+j} (1 + z²) dz = (z + z³/3) |_{2j}^{1+j} = 1/3 + (7/3) j

1.8.3 Example 3

The same function as in Example 1, except that the two limits are interchanged:

    ∫_{1+j}^{2j} 3z dz = (3z²/2) |_{1+j}^{2j} = (3/2) [(2j)² − (1 + j)²] = −6 − 3j

We see that although the function under the integral is complex, the same rules
apply for integration as for a function of a real variable. The last example shows us that if
the limits of the integral of a complex function are exchanged, the result of the integration
changes the sign.
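The three results above can be verified numerically by approximating the line integral as a finite sum along the straight segment between the limits. This is only an illustrative sketch (the midpoint-rule integrator and the step count n are arbitrary choices):

```python
def line_integral(f, z_start, z_end, n=100000):
    """Approximate the integral of f along the straight segment from
    z_start to z_end with the midpoint rule: sum of f(z_mid) * dz."""
    dz = (z_end - z_start) / n
    return sum(f(z_start + (k + 0.5) * dz) * dz for k in range(n))

# Example 1: integral of 3z from 2j to 1+j is 6 + 3j
print(line_integral(lambda z: 3 * z, 2j, 1 + 1j))
# Example 2: integral of 1 + z^2 over the same limits is 1/3 + 7j/3
print(line_integral(lambda z: 1 + z * z, 2j, 1 + 1j))
# Example 3: interchanging the limits changes the sign: -6 - 3j
print(line_integral(lambda z: 3 * z, 1 + 1j, 2j))
```

For the linear integrand the midpoint rule is exact; for 1 + z² the discretization error is negligible at this step count.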
As already mentioned, the result of the integration of a complex function is
independent of the actual path of integration between the limits z₁ and z₂ (see Fig. 1.8.3),
provided that no pole lies between the extreme paths L₁ and L₂. Thus for all the paths
shown the result of integration is the same. This means that the function is analytic in the
area between L₁ and L₂. When at least one pole of the function lies between L₁ and L₂,
the integral along the path L₁ is in general no longer equal to the integral along the path
L₂. In Fig. 1.8.4 we show such a case, in which the function is non-analytic (or
non-regular) inside a small area between the two paths (in the remaining area the function
is analytic).
Fig. 1.8.3: A line integral from z₁ to z₂ along the line L₁, L₂, or any other line lying between
these two yields the same result, because between L₁ and L₂ the function has no pole.

Fig. 1.8.4: Here the function has a non-analytic domain area between L₁ and L₂. Now the integral
along the path L₁ is not equal to the integral along the path L₂.


Let us prove the above statement by two simple examples.

1.8.4 Example 4

We will again take the function f(z) = 1/z and integrate along a part of a circle
with the radius of 1, from −j to 1, as drawn in Fig. 1.8.5 (the pole z₀ lies at z = 0). We
first calculate the integral:

    ∫_{−j}^{1} (1/z) dz   along the path L₁

On the circle with radius |z| = 1 it is:

    z = 1 · e^{jθ}   and   dz = j e^{jθ} dθ

When we integrate along L₁ the angle θ goes from −π/2 to 0. Thus it is:

    ∫_{−π/2}^{0} (j e^{jθ}/e^{jθ}) dθ = jθ |_{−π/2}^{0} = jπ/2

1.8.5 Example 5

Here everything is the same as in the previous example, except that we will
integrate along the path L₂ of Fig. 1.8.5. In this case the angle θ goes from 3π/2 to 0:

    ∫_{3π/2}^{0} (j e^{jθ}/e^{jθ}) dθ = jθ |_{3π/2}^{0} = −j 3π/2

In Example 4 the integration path goes counter-clockwise (which in mathematics is
the positive sense) and we obtain a positive result. But in Example 5, in which the
integration path goes clockwise, the result is negative, and, moreover, it has a different
value, because the integration path lies on the other side of the pole, even though the
limits of integration remain the same as in Example 4.

1.8.6 Example 6

We would like to see whether there is any difference in the result of Example 4 if
we choose not to integrate along the circle, but instead along a straight line from −j to 1
(L₃ in Fig. 1.8.6):

    ∫_{−j}^{1} (1/z) dz = ln 1 − ln(−j) = −ln e^{−jπ/2} = jπ/2

because ln 1 = 0 and −j = e^{−jπ/2}. The result is the same as in Example 4.

In general, if we consider Fig. 1.8.7, the integral along the path La or Lb, or any
path in between, is always equal to jπ/2. Similarly, the integral along the path Lc or Ld,
or any path in between, is equal to −j 3π/2.
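The path independence claimed here can be checked numerically. In this sketch (the parametrizations and the step count are arbitrary choices) both the quarter-circle arc L₁ and the straight line L₃ from −j to 1 give jπ/2 for the integral of 1/z, since no pole lies between them:

```python
import cmath, math

def path_integral(f, path, n=100000):
    """Approximate the integral of f along the parametrized path z(u), u in [0, 1],
    with the midpoint rule."""
    du = 1.0 / n
    total = 0j
    for k in range(n):
        z_mid = path((k + 0.5) * du)
        dz = path((k + 1) * du) - path(k * du)
        total += f(z_mid) * dz
    return total

f = lambda z: 1 / z
arc      = lambda u: cmath.exp(1j * math.pi / 2 * (u - 1))  # L1: theta runs from -pi/2 to 0
straight = lambda u: -1j + u * (1 + 1j)                     # L3: straight segment from -j to 1
print(path_integral(f, arc))       # both results approach j*pi/2 ~ 1.5708j
print(path_integral(f, straight))
```

The straight segment stays well away from the pole at the origin (its closest approach is |z| ≈ 0.707), so the discretization converges quickly.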


jy jy
L2

1
z=
θ x 1 x
z0 1 z0
L3

L1 L1
−j
−j

Fig. 1.8.5: The integral along the path P" is not equal Fig. 1.8.6: The integral along the straight
to the integral along the path P# because the function path P$ is the same as the integral along
has a pole which lies between both paths. the circular path P".

jy jy
Lc
C

1
Ld

z=
x θ x
z0 1 z0 1
Lb

La
−j

Fig. 1.8.7: The integrals along the paths Pa and Pb are Fig. 1.8.8: The integral along the circular
equal to 4 1Î#. However, those along Pc and Pd are path G around the pole is #14. See Sec.1.9.
equal to 4$ 1Î#.


1.9 Contour Integrals

Let us take again our familiar function f(z) = 1/z and calculate the integral along
the full circle C (Fig. 1.8.8), where |z| = 1. We use the same notation for z and dz as we
did in Example 4 and start the integration at θ = 0, going counter-clockwise (positive by
definition):

    ∫_{0}^{2π} dz/z = j ∫_{0}^{2π} (e^{jθ}/e^{jθ}) dθ = j ∫_{0}^{2π} dθ = 2πj = ∮_C dz/z     (1.9.1)

The resulting integral along the circle C is called the contour integral; the arrow in
the symbol indicates the direction of encirclement of the pole (at z = 0).

Now let us move the pole from the origin to the point a = x_a + j y_a. The
corresponding function is then f(z) = 1/(z − a). The first attempt would be to integrate
along the contour C as shown in Fig. 1.9.1. Inside this contour the domain of the function
is analytic, except for the point a. Unfortunately C is a random contour and can not be
expressed in a convenient mathematical way. Since a is the only pole inside the contour C,
we may select another, simpler integration path. As we have already mastered the
integration around a circular path, we select a circle C_c with the radius ε that lies inside
the contour C. From Fig. 1.9.1 it is evident that:

    ε = |z − a|   or   z − a = ε e^{jθ}                                           (1.9.2)

Thus:

    z = ε e^{jθ} + a                                                              (1.9.3)

where the angle θ can have any value in the range 0 … 2π. Furthermore it follows that:

    dz = j ε e^{jθ} dθ                                                            (1.9.4)

The contour integral around the pole a is then:

    ∮_{C_c} dz/(z − a) = ∫_{0}^{2π} (j ε e^{jθ}/(ε e^{jθ})) dθ = j ∫_{0}^{2π} dθ = 2πj       (1.9.5)

The result is the same as we have obtained for the function f(z) = 1/z, in which
the pole was at the origin of the z plane.
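The value 2πj, independent of the circle's center and radius, is easy to confirm numerically. In this sketch the pole position a = 1 + 0.5j and the radii are arbitrary illustration values:

```python
import cmath, math

def circle_contour_integral(f, center, radius, n=100000):
    """Approximate the counter-clockwise contour integral of f around a circle,
    using z = center + radius*e^{j*theta} and dz = j*radius*e^{j*theta} dtheta."""
    dtheta = 2 * math.pi / n
    total = 0j
    for k in range(n):
        theta = (k + 0.5) * dtheta
        z = center + radius * cmath.exp(1j * theta)
        dz = 1j * radius * cmath.exp(1j * theta) * dtheta
        total += f(z) * dz
    return total

# Pole at the origin, as in Eq. 1.9.1:
print(circle_contour_integral(lambda z: 1 / z, 0, 1.0))        # ~ 2*pi*j
# Pole moved to a = 1 + 0.5j, as in Eq. 1.9.5; the result does not change:
a = 1 + 0.5j
print(circle_contour_integral(lambda z: 1 / (z - a), a, 0.3))  # ~ 2*pi*j
```

In both cases the integrand f(z) dz reduces to j dθ on the circle, so the sum is 2πj up to floating-point rounding.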
Fig. 1.9.1: Contour integral around the pole at a, using the circle C_c of radius ε inside
the arbitrary contour C.

Fig. 1.9.2: The integral around the contour C is zero, because a, the only pole of the
function, lies outside the contour.

We look again at Fig. 1.8.3, where the integral along the path L₁ is equal to the
integral along the path L₂ because there is no pole between L₁ and L₂. It would be
interesting to make the integral from z₁ to z₂ along the path L₂ and then back again from
z₂ to z₁ along the path L₁, making a closed loop (contour) integral:

    ∫_{z₁}^{z₂} f(z) dz + ∫_{z₂}^{z₁} f(z) dz = ∮ f(z) dz = 0                     (1.9.6)
      (along L₂)           (along L₁)          (along L₂ − L₁)

Since both integrals have the same magnitude, by exchanging the limits of the
second integral, thus making it negative, their sum is zero. This statement affords us the
conclusion that the integral around the contour C in Fig. 1.9.2, which encircles an area
where the function is analytic, is zero (the only pole a in the vicinity lies outside the
contour of integration). This is expressed as:

    ∮_C f(z) dz = 0                                                               (1.9.7)

The expressions in Eq. 1.9.6 and 1.9.7 were derived by the French mathematician
Augustin Louis Cauchy (1788–1857). In all the calculations so far we have integrated in a
counter-clockwise sense, having the integration field, including the pole, always on the
left side. To see the case of a clockwise direction, let us again take Eq. 1.9.1 and
integrate clockwise, from 2π to 0:

    ∫_{2π}^{0} dz/z = j ∫_{2π}^{0} (e^{jθ}/e^{jθ}) dθ = j ∫_{2π}^{0} dθ = −2πj
                    = ∮_C dz/z   (clockwise)                                      (1.9.8)

Note that the sign of the result changes if we change the direction of encirclement. So we
may write in general:

    ∮_C f(z) dz (counter-clockwise) = −∮_C f(z) dz (clockwise)                    (1.9.9)

1.10 Cauchy's Way of Expressing Analytic Functions

Let us take a function f(z) which is analytic inside a contour C. There are no
restrictions on the nature of f(z) outside the contour, where f(z) may also have poles. So
this function is analytic also at the point a (inside C), where its value is f(a).

Now we form another function:

    g(z) = f(z)/(z − a)                                                           (1.10.1)

This function is also analytic inside the contour C, except at the point a, where it has a
pole, as shown in Fig. 1.10.1. Let us take the integral around the closed contour C:

    ∮_C f(z)/(z − a) dz                                                           (1.10.2)

which is similar to the integral in Eq. 1.9.5, except that here we have f(z) in the
numerator. Because at the point a the function under the integral is not analytic, the path
of integration must avoid this point. Therefore we go around it along a circle of the
radius ε, which can be made as small as required (but not zero).

For the path of integration we shall use the required contour C and the circle C_c.
To make the closed contour, the complete integration path will start at point 1 and go
counter-clockwise around the contour C to come back to the point 1; then from the point 1
to the point 2 along the dotted line; then clockwise around the circle C_c back to the
point 2; and finally from the point 2 back to the point 1 along the dotted line. In this way,
the contour of integration is closed. The integral from the point 1 to 2 and back again is
zero. Thus there remain only the integrals around the contour C and around the circle C_c.

Fig. 1.10.1: Cauchy's method of expressing analytic functions: the contour C, the circle C_c
of radius ε around the pole at a, and the connecting cut between points 1 and 2 (see text).

Since around the complete integration path the domain on the left hand side of the
contour was always analytic, the resulting integral must be zero. Thus:

    ∮_{C+C_c} f(z)/(z − a) dz = ∮_C f(z)/(z − a) dz + ∮_{C_c} f(z)/(z − a) dz (clockwise) = 0   (1.10.3)


and so it follows that:

    ∮_C f(z)/(z − a) dz = ∮_{C_c} f(z)/(z − a) dz                                 (1.10.4)

Here we have changed the sign of the second integral by reversing the sense of
encirclement. Similarly as in Eq. 1.9.2 and 1.9.4 we write:

    z − a = ε e^{jθ},   dz = j ε e^{jθ} dθ   and   dz/(z − a) = j dθ

Nothing would change if in Eq. 1.10.4 we write:

    f(z) = f(a) + [f(z) − f(a)]                                                   (1.10.5)

thus obtaining:

    ∮_C f(z)/(z − a) dz = j f(a) ∫_{0}^{2π} dθ + j ∫_{0}^{2π} [f(z) − f(a)] dθ    (1.10.6)

The integration must go from 0 to 2π in order to encircle the point a in the required
direction. The value of the first integral on the right is:

    j f(a) ∫_{0}^{2π} dθ = 2πj f(a)                                               (1.10.7)

and we will prove that the second integral is zero. Its magnitude is bounded by:

    M ≤ 2π max |f(z) − f(a)|                                                      (1.10.8)

The function f(z) is continuous everywhere inside the field bordered by C and C_c;
therefore the point z can be as close to the point a as desired. Consequently |f(z) − f(a)|
may also be as small as desired. The radius of the circle C_c inside the contour C is
ε = |z − a|, and in Eq. 1.9.5 we have already observed that the value of the integral is
independent of ε. If we take the limit ε → 0 we obtain:

    lim_{ε→0} ∫_{0}^{2π} [f(z) − f(a)] dθ = 0                                     (1.10.9)

Thus:

    2πj f(a) = ∮_C f(z)/(z − a) dz                                                (1.10.10)

and:

    f(a) = (1/2πj) ∮_C f(z)/(z − a) dz                                            (1.10.11)

where the point a may be any point inside the contour C.


Eq. 1.10.11 is of essential importance for the inverse Laplace transform; we name it
Cauchy's expression for an analytic function. By means of this integral it is possible to
calculate the value of an analytic function at any desired point (say, a) if all the values on
the contour surrounding this point are known. Thus if:

    g(z) = f(z)/(z − a)

then the value f(a) is called the residue of the function g(z) for the pole a.

To make the term 'residue' clear let us make a practical example. Suppose g(z) is a
rational function of two polynomials:

    g(z) = P(z)/Q(z)

         = (z^m + b_{m−1} z^{m−1} + b_{m−2} z^{m−2} + ⋯ + b₁ z + b₀) /
           (z^n + a_{n−1} z^{n−1} + a_{n−2} z^{n−2} + ⋯ + a₁ z + a₀)              (1.10.12)

where the b_i and a_i are real constants and n > m. Eq. 1.10.12 represents a general form
of a frequency response of an amplifier, where z can be replaced by the usual s = σ + jω
and b₀/a₀ is the DC amplification (at frequency ω = 0). Instead of the sums, the
polynomials P(z) and Q(z), and thus g(z), may also be expressed in the product form:

    g(z) = [(z − z₁)(z − z₂) ⋯ (z − z_m)] / [(z − p₁)(z − p₂) ⋯ (z − p_n)]        (1.10.13)

In this equation, z₁, z₂, …, z_m are the roots of the polynomial P(z), so they are
also the zeros of g(z). Similarly, p₁, p₂, …, p_n are the roots of the polynomial Q(z) and
therefore also the poles of g(z). Both statements are valid if p_i ≠ z_i for any i that can be
applied to Eq. 1.10.13 (if z − z₁ were equal to, say, z − p₃, there would be no pole at p₃,
because this pole would be canceled by the zero z₁). Now we factor out the term with one
pole, i.e., 1/(z − p₂), and write:

    g(z) = [(z − z₁)(z − z₂) ⋯ (z − z_m)] / [(z − p₁)(z − p₃) ⋯ (z − p_n)] · 1/(z − p₂)

         = f(z) · 1/(z − p₂)                                                      (1.10.14)

where:

    f(z) = [(z − z₁)(z − z₂) ⋯ (z − z_m)] / [(z − p₁)(z − p₃) ⋯ (z − p_n)]        (1.10.15)

If we focus only on f(z) and let z → p₂, we obtain the residue of the function g(z)
for the pole p₂, and this residue is equal to f(p₂). Since we have taken the second pole we
have appended the index '2' to the res(idue). By performing the suggested operation we
obtain:

    res₂ g(z) = lim_{z→p₂} (z − p₂) g(z) = f(p₂)                                  (1.10.16)

The word 'residue' is of Latin origin and means the remainder. However, since a
remainder may also appear when we divide one polynomial by another, we shall keep using
the expression 'residue' in order to avoid any confusion. Also, in our further practical
calculations we will simply write, say, res₂ instead of the complete expression res₂ F(s).


The reader could obtain a rough idea of a residue by the following
analogy: suppose we have a big circus tent, the canvas of which is supported by,
say, four poles. If one of the poles is removed, the canvas sags. The height of the
canvas above the ground where we have removed the pole is something similar to a
residue for that pole. However, in this comparison two important facts are
different: first, in complex variable theory our 'canvas' as well as the 'ground' are
complex and, second, the poles are infinitely high (actually +∞ on one side of the
pole and −∞ on the other; see Appendix 1).

In the following examples we shall see that the calculation of residues is a
relatively simple matter.

From now on we shall replace the variable z by our familiar complex variable
s = σ + jω. Also, in order to distinguish more easily the functions of complex
frequency from the functions of time, we shall write the former with capitals, like F(s)
or G(s), and the latter with small letters, like f(t) or g(t).

To prove that the calculation of residues is indeed a simple task let us calculate two
examples.

1.10.1 Example 1

Let us take a function:

    F(s) = [(s + 2)(s + 3)] / [(s + 4)(s + 5)(s + 6)]

We need to calculate the three residues of F(s) for the poles at s = −4, s = −5 and
s = −6:

    res₁ = lim_{s→−4} (s + 4) [(s + 2)(s + 3)] / [(s + 4)(s + 5)(s + 6)]

         = [(−4 + 2)(−4 + 3)] / [(−4 + 5)(−4 + 6)] = 1

and in a similar way:

    res₂ = lim_{s→−5} (s + 5) [(s + 2)(s + 3)] / [(s + 4)(s + 5)(s + 6)] = −6

    res₃ = lim_{s→−6} (s + 6) [(s + 2)(s + 3)] / [(s + 4)(s + 5)(s + 6)] = 6

An interesting fact here is that since all the poles are real, all the residues are real as
well; in other words, a real pole causes the residue of that pole to be real.
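Residues at simple poles are also easy to approximate numerically, by evaluating (s − p)F(s) at a point very close to the pole p. The sketch below (the offset eps is an arbitrary small value) reproduces the three results of this example:

```python
def F(s):
    return (s + 2) * (s + 3) / ((s + 4) * (s + 5) * (s + 6))

def residue_at(pole, eps=1e-7):
    """Approximate the residue of F at a simple pole by evaluating
    (s - pole) * F(s) slightly off the pole."""
    s = pole + eps
    return (s - pole) * F(s)

print([round(residue_at(p), 4) for p in (-4.0, -5.0, -6.0)])  # [1.0, -6.0, 6.0]
```

The factor (s − pole) cancels the vanishing denominator term, so the product stays finite and converges to the residue as eps shrinks.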

1.10.2 Example 2

Our function is:

    F(s) = (s + 2) e^{st} / (3s² + 9s + 9)

Here we must consider that the variable of the function F(s) is only s and not t.
First we tackle the denominator to find both roots, which are the poles of our function:

    3s² + 9s + 9 = 3 (s² + 3s + 3) = 0

Thus:

    s_{1,2} = σ₁ ± jω₁ = −3/2 ± √[(3/2)² − 3] = −3/2 ± j √3/2

and by expressing the function F(s) with both poles we have:

    F(s) = (s + 2) e^{st} / [3 (s − s₁)(s − s₂)]

We shall carry out a general calculation of the two residues and then introduce the
numerical values for σ₁ and ω₁:

    res₁ = lim_{s→s₁} (s − s₁) (s + 2) e^{st} / [3 (s − s₁)(s − s₂)]
         = (s₁ + 2) e^{s₁t} / [3 (s₁ − s₂)]

         = (σ₁ + jω₁ + 2) e^{σ₁t} e^{jω₁t} / [3 (σ₁ + jω₁ − σ₁ + jω₁)]
         = (σ₁ + jω₁ + 2) e^{σ₁t} e^{jω₁t} / (6 jω₁)

We now set σ₁ = −3/2 and ω₁ = √3/2 to obtain the numerical value of the residue:

    res₁ = (−3/2 + j √3/2 + 2) e^{−3t/2} e^{j√3 t/2} / (6 j √3/2)

         = [(1 + j √3)/(6 j √3)] e^{−3t/2} e^{j√3 t/2}
         = [(√3 − j)/(6 √3)] e^{−3t/2} e^{j√3 t/2}

In a similar way we calculate the second residue:

    res₂ = lim_{s→s₂} (s − s₂) (s + 2) e^{st} / [3 (s − s₁)(s − s₂)]
         = (s₂ + 2) e^{s₂t} / [3 (s₂ − s₁)]

         = [(√3 + j)/(6 √3)] e^{−3t/2} e^{−j√3 t/2}

Since both poles are complex conjugate, both residues are complex conjugate as
well. In the rational functions which will appear in the later sections, all the poles will be
either real, or complex conjugate, or both. Therefore the sum of all residues of these
functions (that is, the time function) will always be real.


1.11 Residues of Functions with Multiple Poles, the Laurent Series

When a function contains multiple poles it is not possible to calculate the residues
in the way shown in the previous section. As an example let us take the function:

    G(s) = F(s)/(s − a)^n                                                         (1.11.1)

To calculate the residue we first expand F(s) into a Taylor series [Ref. 1.4, 1.11]:

    F(s) = (s − a)^n G(s)                                                         (1.11.2)

         = F(a)/0! + F′(a)(s − a)/1! + F″(a)(s − a)²/2! + ⋯
           + F^{(n−1)}(a)(s − a)^{n−1}/(n − 1)! + ⋯

Now we divide all the fractions in this equation by (s − a)^n (considering that
0! = 1 by definition):

    G(s) = F(s)/(s − a)^n                                                         (1.11.3)

         = F(a)/(s − a)^n + F′(a)/(s − a)^{n−1} + F″(a)/[2! (s − a)^{n−2}] + ⋯
           + F^{(n−1)}(a)/[(n − 1)! (s − a)] + ⋯

The values F(a), F′(a), F″(a)/2!, …, F^{(n−1)}(a)/(n − 1)!, … are constants and
we write them as A_{−n}, A_{−(n−1)}, A_{−(n−2)}, …, A_{−1}, A₀, A₁, A₂, ….

We may now express the function G(s) as:

    G(s) = A_{−n}/(s − a)^n + A_{−(n−1)}/(s − a)^{n−1} + A_{−(n−2)}/(s − a)^{n−2} + ⋯ + A_{−1}/(s − a)

           + A₀ + A₁(s − a) + A₂(s − a)² + ⋯                                      (1.11.4)

The sum of all the fractions in the above function is called the principal part and
the rest is the analytic part (also known as the regular part).

Eq. 1.11.4 is named the Laurent series, after the French mathematician Pierre-
Alphonse Laurent, 1813–1854, who in 1843 described "a series with negative powers".

A general expression for the Laurent series is:

    F(s) = Σ_{n=−m}^{+∞} A_n (s − a)^n                                            (1.11.5)

where m and n are integers.

Let us calculate the contour integral of the above function:

    ∮_C F(s) ds = ∮_C Σ_{n=−m}^{+∞} A_n (s − a)^n ds                              (1.11.6)

We shall integrate each part of the series separately.

Again, A_{−n}, …, A_{−2}, A_{−1}, A₀, A₁, A₂, … are constants and they may be
put in front of the integral sign. In general we need to know the solution of the integral:

    ∮_C (s − a)^n ds                                                              (1.11.7)

where n is an integer, either positive or negative. As in Eq. 1.9.2 we write again:

    s − a = ε e^{jθ}   ⟹   ds = j ε e^{jθ} dθ   ⟹   (s − a)^n = ε^n e^{jnθ}

where the radius ε is considered to be constant. Thus:

    ∮_C (s − a)^n ds = ∫_{0}^{2π} (ε^n e^{jnθ}) j ε e^{jθ} dθ
                     = j ε^{n+1} ∫_{0}^{2π} e^{j(n+1)θ} dθ                        (1.11.8)

If n ≠ −1, the result of this integration is:

    [j ε^{n+1}/(j(n + 1))] e^{j(n+1)θ} |_{θ=0}^{θ=2π}
         = [ε^{n+1}/(n + 1)] [e^{j(n+1)2π} − 1] = 0                               (1.11.9)

because e^{jk2π} = 1 for any positive or negative integer k, including 0. For n = −1, we
derive from Eq. 1.11.8:

    ∮_C ds/(s − a) = ∫_{0}^{2π} (j ε e^{jθ}/(ε e^{jθ})) dθ = 2πj                  (1.11.10)

In order that the result corresponds to the Laurent series we must add the constant
factor which goes with n = −1, and this is A_{−1}. Eq. 1.11.8 to 1.11.10 prove that the
contour integration of the complete Laurent series G(s) yields only:

    ∮_C G(s) ds = A_{−1} 2πj                                                      (1.11.11)

Thus from the whole series (Eq. 1.11.4) only the part with A_{−1} remains after the
integration. If we return to Eq. 1.11.3 we conclude that:

    A_{−1} = (1/2πj) ∮_C G(s) ds = res F(s)/(s − a)^n = F^{(n−1)}(a)/(n − 1)!

           = lim_{s→a} (1/(n − 1)!) (d^{n−1}/ds^{n−1}) [(s − a)^n G(s)]           (1.11.12)


is the residue of the function G(s) = F(s)/(s − a)^n for the pole a. The following
examples will show how we calculate the residues for multiple poles in practice.

1.11.1 Example 1

We take a function:

    G(s) = F(s)/(s − a)³

Our task is to calculate the general expression for the residue of the triple pole
(n = 3) at s = a. According to Eq. 1.11.12 it is:

    res = lim_{s→a} (1/(3 − 1)!) (d^{3−1}/ds^{3−1}) [(s − a)³ G(s)]

        = (1/2) [d²/ds² (s − a)³ G(s)]_{s→a}

1.11.2 Example 2

Here we shall calculate with numerical values.

We intend to find the residues for the double pole at s = −2 and for the single pole
at s = −3 of the function:

    F(s) = 5 / [(s + 2)² (s + 3)]

Solution:

    res₁ = lim_{s→−2} (1/(2 − 1)!) (d^{2−1}/ds^{2−1}) [(s + 2)² · 5 / ((s + 2)² (s + 3))]

         = [d/ds (5/(s + 3))]_{s→−2} = [−5/(s + 3)²]_{s→−2} = −5

    res₂ = lim_{s→−3} [(s + 3) · 5 / ((s + 2)² (s + 3))] = [5/(s + 2)²]_{s→−3} = 5

It is important to remember the required order of the operations: first we multiply by
the expression containing the multiple pole and then find the derivative. To do it the
opposite way is wrong! Finally, we insert the numerical value for the pole.
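The derivative in Eq. 1.11.12 can be checked numerically with a finite difference. The sketch below (the step h is an arbitrary small value) reproduces the two residues of Example 2 for F(s) = 5/((s+2)²(s+3)):

```python
def g(s):
    """(s + 2)^2 * F(s), i.e. F(s) with the double-pole factor removed: 5/(s + 3)."""
    return 5.0 / (s + 3.0)

def res_double(pole, h=1e-5):
    """Residue at the double pole s = -2: the first derivative of g at the pole,
    here approximated with a central finite difference."""
    return (g(pole + h) - g(pole - h)) / (2 * h)

def res_simple(pole):
    """Residue at the simple pole s = -3: (s + 3)*F(s) = 5/(s + 2)^2 at the pole."""
    return 5.0 / (pole + 2.0) ** 2

print(res_double(-2.0))  # ~ -5.0
print(res_simple(-3.0))  # 5.0
```

Note that the multiplication by (s + 2)² is done symbolically (inside g) before differentiating, in the order the text insists on.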


1.12 Complex Integration Around Many Poles:


the Cauchy–Goursat Theorem

So far we have calculated a contour integral around one pole (simple or multiple).
Now we will integrate around more poles, either single or multiple.
Cheese is a regular part of French meals. So we may imagine that the great
mathematician Cauchy observed a slice of Emmentaler cheese like that in Fig. 1.12.1 (the
characteristics of this cheese is big holes) on his plate and reflected in the following way:
Suppose all that is cheese is an analytic (regular) domain V of a function J Ð=Ñ. In
the holes are the poles =" , á , =& . We are not interested in the domain outside the cheese.
How could we ‘mathematically’ encircle the cheese around the crust and around the rims of
all the holes, so that the cheese is always on the left side of the contour?

[Two = plane sketches: on the left, the analytic region V (the cheese) containing the poles =" , á , =& in its holes; on the right, the same region with the outer contour G around the crust and the small contours G" , á , G& around the holes, joined by cuts starting from the point E.]

Fig. 1.12.1: The cheese represents a regular (analytic) domain V of a function which has one simple pole in each hole.
Fig. 1.12.2: Encircling the poles by contours G , G" , á , G5 , so that the regular domain of the function is always on the left side.

Impossible? No! If we take a knife and make a cut from the crust towards each hole
without removing any cheese, we provide the necessary path for the suggested contour, as
shown in Fig. 1.12.2.
Now we calculate a contour integral starting from the point E in the suggested
(counter-clockwise) direction until we come to the cut towards the first pole, =" . We follow
the cut towards contour G" , follow it around the pole and then go along the cut again, back
to the crust. We continue around the crust up to the cut of the next pole and so on, until we
arrive back to point E and close the contour. Since we have not removed any cheese in
making the cuts, the paths from the crust to the corresponding hole and back again cancel
out in this integration path. As we have proved by Eq. 1.9.6:
, +

( J Ð=Ñ .=  ( J Ð=Ñ .= œ !
+ ,

Therefore, only the contour G around the crust and the small contours G" , á , G&
around the rims of the holes containing the poles are what we must consider in the
integration around the contour in Fig. 1.12.2. The contour G was taken counter-clockwise,
whilst the contours G" , á , G& were taken clockwise.


We write down the complete contour integral:

( J Ð=Ñ .=  ) J Ð=Ñ .=  â  ) J Ð=Ñ .= œ ! (1.12.1)


G G" G&

The result of integration is zero because along this circuitous contour of integration
we have had the regular domain always on the left side. By changing the sense of encircling
of the contours G" , á , G& we may write Eq. 1.12.1 also in the form:

( J Ð=Ñ .= œ ( J Ð=Ñ .=  â  ( J Ð=Ñ .= (1.12.2)


G G" G&

When we changed the sense of encircling, we changed the sign of the integrals; this allows
us to put them on the right hand side with a positive sign. Now all the integrals have
positive (counter-clockwise) encircling. Therefore the integral encircling all the poles is
equal to the sum of the integrals encircling each particular pole.
By observing this equation we realize that the right hand side is the sum of residues
for all the five poles, multiplied by #14. Thus for the general 8-pole case the Eq. 1.12.2 may
also be written as:

8
( J Ð=Ñ .= œ #14 c res"  â  res8 d œ #14 " res3 (1.12.3)
G 3œ"

Eq. 1.12.2 and 1.12.3 are called the Cauchy–Goursat theorem; they are essentially
important for the inverse Laplace transform.
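The theorem is straightforward to verify numerically. In the following Python sketch (our own example; the rational function and the circle radius are arbitrary choices) a function with two simple poles is integrated counter-clockwise around a circle enclosing both, and the result is compared with 2πj times the sum of the residues:

```python
import cmath, math

def F(s):
    # Test function with simple poles at s = -1 and s = -2;
    # residues: (2s+3)/(s+2) at s=-1 gives 1, (2s+3)/(s+1) at s=-2 gives 1.
    return (2*s + 3) / ((s + 1) * (s + 2))

N, R = 4000, 5.0                         # points on the contour, circle radius
integral = 0j
for k in range(N):
    phi = 2 * math.pi * k / N
    s = R * cmath.exp(1j * phi)          # point on the circle |s| = R
    ds = 1j * s * (2 * math.pi / N)      # ds = jR e^{j phi} d phi
    integral += F(s) * ds                # counter-clockwise encircling

expected = 2j * math.pi * (1 + 1)        # 2*pi*j * (res1 + res2)
print(abs(integral - expected))          # vanishingly small
```

Because the integrand is smooth and periodic in the contour angle, the simple rectangle sum converges extremely quickly here.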


1.13 Equality of the Integrals ( J Ð=Ñ e=> .= around G and ( J Ð=Ñ e=> .= from -4_ to 4_

The reader is invited to examine Fig. 1.13.1, where the function lJ Ð=Ñl œ l"Î=l was
plotted. The function has one simple pole at the origin of the complex plane. The resulting
surface has been cut between 4 and " to expose an arbitrarily chosen integration path P
between =" œ B"  4 C" œ !  4 !Þ& and =# œ B#  4 C# œ !Þ&  4 ! (see the integration
path in the plot of the = domain in Fig. 1.13.2).

[3D plot: the surface lJ Ð=Ñl œ l"Î=l over the = plane, with the integration path P from =" to =# marked on the surface and its maximum value Q indicated.]
Fig. 1.13.1: The complex function magnitude, lJ Ð=Ñl œ l"Î=l. The resulting surface has been cut
between 4 and " to expose an arbitrarily chosen integration path P, starting at =" œ !  4 !Þ& and
ending at =# œ !Þ&  4 !. On the path of integration the function lJ Ð=Ñl has a maximum value Q .

[Two sketches: the = plane with the integration path P from =" to =# , and the area E under lJ ÐPÑl laid flat against the bounding rectangle Q ×P.]

Fig. 1.13.2: The complex domain of Fig. 1.13.1 shows the arbitrarily chosen integration path P, from =" œ !  4 !Þ& to =# œ !Þ&  4 !.
Fig. 1.13.3: The area E of Fig. 1.13.1 has been laid flat in order to show that it must be smaller than or equal to the area of the rectangle Q ×P.

Let us take a closer look at the area E between =" , =# , lJ Ð=" Ñl and lJ Ð=# Ñl, shown in
Fig 1.13.3. The area E corresponds to the integral of J Ð=Ñ from =" to =# and it can be shown
that it is always smaller than, or at best equal to, the rectangle Q ×P:


â =" â â =# â =# =#
â â â â
â â â .= â l.=l
â ( J Ð=Ñ .= â œ â ( âŸ( Ÿ ( Q k.=k œ Q P (1.13.1)
â â â = â l=l
â=" â â=" â ="
ðóóóóóóóóóóóóóóóóóóóóñóóóóóóóóóóóóóóóóóóóóò ="

along the path P

Here Q is the greatest value of kJ Ð=Ñk for this particular path of integration P, as shown in
Fig. 1.13.3, in which the resulting 3D area between =" , =# , lJ Ð=" Ñl and lJ Ð=# Ñl was
stretched flat. So:
â =# â =#
â â
â â
â ( J Ð=Ñ .= â Ÿ ( ¸J Ð=Ñ .=¸ Ÿ Q P (1.13.2)
â â
â=" â ="

Eq. 1.13.2 is an essential tool in the proof of the inverse _ transform via the
integral around the closed contour.
Let us now move to network analysis, where we have to deal with rational functions
of the complex variable = œ 5  4 =. These functions have a general form:

=7  ,7" =7"  â  ," =  ,!


J Ð=Ñ œ (1.13.3)
=8  +8" =8"  â  +" =  +!
where 7  8 and both are positive and real. Since we can also express = œ V e4) (as can be
derived from Fig. 1.13.4), we may write Eq. 1.13.3 also in the form:

V 7 e4 7 )  ,7" V 7" e4Ð7"Ñ)  â  ," V e4 )  ,!


J Ð=Ñ œ (1.13.4)
V 8 e4 8 )  +8" V 8" e4Ð8"Ñ)  â  +" V e4 )  +!
According to Eq. 1.13.2 and 1.13.4 we have:

V 7 e4 7 )  â  , ! O
¸J Ð=Ѹ œ º º Ÿ 87 œ Q (1.13.5)
V 8 e4 8 )  â  +! V

where O is a real constant and Q is the maximum value of ¸J Ð=Ѹ within the integration
interval, according to Fig. 1.13.1 and 1.13.3 (in [Ref. 1.10, p. 212] the interested reader can
find the complete derivation of the constant O ).

[Sketch of the pole =" œ 5"  4 =" œ V e4) in the = plane, with 5" œ V cos ), =" œ V sin ), V œ ÈÐ5"  4 =" ÑÐ5"  4 =" Ñ œ È5"#  =#" and ) œ arctan Ð=" Î5" Ñ; the angle " is measured clockwise from the negative real axis.]

Fig. 1.13.4: Cartesian and polar representations of a complex number (note: tan ) is equal for the
counter-clockwise defined ) from the positive real axis and for the clockwise defined " œ )  1Ñ.


Let us draw the poles of Eq. 1.13.3 in the complex plane to calculate the integral
around an inverted D-shaped contour, as shown in Fig. 1.13.5 (for convenience only three
poles have been drawn there). Since Eq. 1.13.3 is assumed to describe a real passive system,
all poles must lie either on the left side of the complex plane or at the origin. As we know,
the integral around the closed contour embracing all the poles is equal to the sum of
residues of the function J Ð=Ñ:
5a 4 ="
8
( J Ð=Ñ .= œ ( J Ð=Ñ .=  ( J Ð=Ñ .= œ #14 " res3 (1.13.6)
P 3œ"
5a 4 =" I

The contour has two parts: the straight line from 5a  4 =" to 5a  4 =" , where 5a is a
constant (which we will define more exactly later) and the arc I œ V # , where # is the arc
angle and V is its radius. According to Eq. 1.13.2, the line integral along the path P is:
â â
â â
â â
â( J Ð=Ñ .=â Ÿ Q P (1.13.7)
â â
âP â

where Q is the maximum value of the integral (magnitude!) on the path P. In our case:
O
Qœ and P œ # V œ I (1.13.8)
V87

[Sketch: the inverted D-shaped contour in the = plane, consisting of the straight segment at 5a (from 5a  4 =" to 5a  4 =" ) and the arc I of radius V and angle #, enclosing the poles =" , =# , =$ .]

Fig. 1.13.5: The integral along the inverted D-shaped contour encircling the poles is equal
to the sum of the residues of each pole. This contour is used to prove the inverse Laplace
transform, where the integral along the arc vanishes if V p _, provided that the number of
poles exceeds the number of zeros by at least # (in this example no zeros are shown).

For a very large V we may write:


â â
â â
â â O O#
â ( J Ð=Ñ .=â Ÿ 87 # V œ 87" (1.13.9)
â â V V
âI â
If V p _:
O#
lim œ ! only if 87 # (1.13.10)
Vp_ V87"

and this procedure is called Jordan’s lemma.
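The vanishing of the arc integral can be illustrated numerically. The sketch below (Python; the test function J Ð=Ñ with n − m = 2 and the two radii are our own choices) integrates F(s)e^{st} along the left half-circle and shows that its magnitude shrinks as the radius grows:

```python
import cmath, math

def F(s):
    return 1 / (s + 1)**2        # n - m = 2: the arc integral must vanish

t = 1.0                          # the factor e^{st}, with t > 0

def arc_integral(R, N=40000):
    # Integrate F(s) e^{st} along the left half-circle s = R e^{j phi},
    # phi from pi/2 to 3*pi/2 (there Re(s) <= 0, so |e^{st}| <= 1).
    total = 0j
    dphi = math.pi / N
    for k in range(N):
        phi = math.pi / 2 + (k + 0.5) * dphi   # midpoint rule
        s = R * cmath.exp(1j * phi)
        total += F(s) * cmath.exp(s * t) * 1j * s * dphi
    return total

I10, I1000 = arc_integral(10.0), arc_integral(1000.0)
print(abs(I10), abs(I1000))      # the magnitude falls with growing R
```

The ML bound of the text, K gamma / R^(n-m-1), guarantees here that the magnitude is below roughly pi/R, and the numerical values come out far smaller still.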


If the condition of Eq. 1.13.10 holds, only the straight part of the contour counts
because if V p _ then also =" p _, thus changing the limits of the integral along the
straight path accordingly. If we make these changes to Eq. 1.13.6, it shrinks to:
5a 4_
8
( J Ð=Ñ .= œ ( J Ð=Ñ .= œ #14 " res3 (1.13.11)
P 3œ"
5a 4_

The function J Ð=Ñ may also contain the factor e=> , where d Ð=Ñ   5a and >   !. In
this case the constant 5a , which is called the abscissa of absolute convergence [Ref. 1.3,
1.5, 1.8], must be small enough to ensure the convergence of the integral. The factor e=> is
always present in the inverse _ transform. Let us write this factor down and let us divide
Eq. 1.13.11 by the factor #14. In this way the integral obtains the form:

0 Ð>Ñ œ _" eJ Ð=Ñf œ Ð"Î#14Ñ ( J Ð=Ñ e=> .= œ ! res šJ Ð=Ñ e=> › (1.13.12)

and this is the formula for the inverse _ transform [Ref. 1.3, 1.5, 1.8]. The above integral is
convergent for >   !, which is the usual constraint in passive network analysis. This
constraint will also apply to all derivations which follow.
In the condition written in Eq. 1.13.10 we see that the order of the denominator’s
polynomial must exceed the order of the numerator by at least two, otherwise we could not
prove the inverse _ transform by the method derived above. This means that the number of
poles must exceed the number of zeros by at least two. However, in network theory we
often deal with input functions called positive real functions [Ref. 1.16]. The degree of
the denominator in these functions may exceed the degree of the numerator by one only. To
prove the inverse _ transform for such a case, we must reach for another method. The proof
is possible by using a rectangular contour [Ref. 1.5, 1.13, 1.17]:
When the degree of the denominator exceeds the degree of the numerator by one
only, Eq. 1.13.5 is reduced to:
O
¸J Ð=Ѹ Ÿ œQ (1.13.13)
V

so to prove the inverse Laplace transform we use:

" "
J Ð=Ñ œ œ (1.13.14)
=  =p =  5p

This is a single-pole function, with the pole on the negative real axis (for our
calculations it is not essential that the pole lies on the real axis, but in the theory of real
passive networks, a single-pole always lies either on the negative 5 axis or at the origin of
the complex plane).
The pole and the rectangular contour with the sides I" , I# , I$ and I% are shown in
Fig. 1.13.6. We will integrate around this rectangular contour. At the same time we let both
5" p _ and =" p _. Next we will prove, considering these limits, that the line integrals
along the sides I# , I$ and I% are all equal to zero.


[Sketch: the rectangular contour with sides I" , I# , I$ , I% in the = plane, its right side I" at 5a and its left side at 5" , enclosing the single pole =p on the negative real axis; the horizontal sides lie at „ 4 =" .]

Fig. 1.13.6: By using a rectangular contour as shown it is possible to prove the inverse
Laplace transform by means of the contour integral, even if the number of poles
exceeds the number of zeros by only one. In this integral, encircling the single simple
pole, we let 51 p _ and =1 p _, so that the integrals along I# , I$ and I% vanish.

The proof must show that:


5a 4="
=>
( J Ð=Ñ e .= œ =lim
p_ (
J Ð=Ñ e=> .= œ # 1 4 " res šJ Ð=Ñ e=> › (1.13.15)
1
I 5a 4="

Here we will include the factor e=> (which always appears in the inverse
_ transform) at the very beginning, because it will help us in making the integral along I$
convergent. Let us start with the integral along the side I# , where =" is constant:
â 5" â
â â
=> â Ð5 4 =" Ñ> â
º( J Ð=Ñ e .=º œ â ( J a5  4 =" b e .5â
I# â â
â5a â
5a
O 5>
Ÿ( e .5
5"
 5"

O "
œ † Še5a >  e5" > ‹ p !º (1.13.16)
5" > 5" p_

Since we are calculating the absolute value, we can exchange the limits of the last integral.
The integral along I% is almost equal:
â 5a â
â â
=> â Ð5 4 =" Ñ> â
º( J Ð=Ñ e .=º œ â ( J a5  4 =" b e .5â
I% â â
â 5 " â
5a
O 5>
Ÿ( e .5
5"
 5"

O "
œ † Še5a >  e5" > ‹ p !º (1.13.17)
5" > 5" p_


In the integral along I$ , 5" is constant:


â â â â
â â â 4 =" â
â â â â
â ( J Ð=Ñ e=> .=â œ â ( J a5"  4 =b eÐ 5" 4 =Ñ> .=â
â â â â
â â â â
âI $ â â4 =" â
="
O  5" >
Ÿ( e .=
5"
 ="

O  5" >
œ e Ð="  =" Ñ p !º (1.13.18)
5" 5" p_
=" p_

Since the integrals along I# , I$ and I% are all equal to zero if 5" p _ and =" p_,
only the integral along I" remains, which, in the limit, is equal to the integral along the
complete rectangular contour and, in turn, to the sum of the residues of the poles of J a=b:

5a 4 =" 5a 4_

lim
=1 p_ (
J Ð=Ñ e=> .= œ => =>
( J Ð=Ñ e .= œ ( J Ð=Ñ e .=
5a 4 =" 5a 4_ I

œ #14 ! res šJ Ð=Ñ e=> › (1.13.19)

If this equation is divided by #14, we again obtain the Eq. 1.13.12 which is the
inverse Laplace transform of the function J Ð=Ñ.
Although there was only a single pole in our J a=b in Eq. 1.13.14, the result obtained
is valid in the general case, when J a=b has 8 poles and 8  " zeros.
Thus we have proved the _" transform by means of a contour integral for positive
real functions. As in Eq. 1.13.12, here, too, the abscissa of absolute convergence 5a must be
chosen so that de=f   5a and also >   ! in order to ensure the convergence of the integral.
However, we may also integrate along a straight path, where 5  5a , provided that all the
poles remain on the left side of the path.
From all the complicated equations above the reader must remember only one
important fact, which we will use very frequently in the following sections: By means of
the _" transform of J Ð=Ñ, the complex transfer function of a linear network, we
obtain the real time function, 0 Ð>Ñ, as the sum of the residues of all the poles of the
complex frequency function J Ð=Ñ e=> .
Let us put this in the symbolic form:

0 Ð>Ñ œ _" eJ Ð=Ñf œ ! res šJ Ð=Ñ e=> › (1.13.20)
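This statement can be cross-checked numerically. In the Python sketch below (our own illustration; the test function F(s) = 1/(s+1)^2 and the truncation of the infinite limits are arbitrary choices) the residue of F(s)e^{st} at the double pole s = −1 is, by the multiple-pole formula of Sec. 1.11, d/ds e^{st} at s = −1, i.e. t·e^{−t}; a direct numerical evaluation of the line integral gives the same value:

```python
import cmath, math

def F(s):
    return 1 / (s + 1)**2     # one double pole at s = -1

# Residue route (Eq. 1.11.12 with n = 2):
# res{F(s) e^{st}} = d/ds e^{st} at s = -1  =  t e^{-t}
t = 1.0
f_res = t * math.exp(-t)

# Direct numerical evaluation of the line integral along s = 0 + j*omega
# (the abscissa sigma_a = 0 lies to the right of the pole):
W, N = 400.0, 100000          # truncated limits and step count
dw = 2 * W / N
f_num = 0.0
for k in range(N + 1):
    w = -W + k * dw
    s = 1j * w
    f_num += (F(s) * cmath.exp(s * t)).real * dw
f_num /= 2 * math.pi
print(f_res, f_num)           # both close to 1/e = 0.3679
```

The truncation at |omega| = W leaves a tail error of order 1/(pi W), which is why the two values agree only to a few decimal places.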


1.14 Application of the Inverse Laplace Transform

In the following parts of the book we will very frequently need the inverse Laplace
transform of two-pole and three-pole systems, in which the third pole at the origin, "Î=, is
the _ transform of the unit step function. Therefore it would be useful to perform our first
example of the _" transform calculation on such a network function.
A typical network of this sort is shown in Fig. 1.14.1. Our task is to calculate the
voltage on the resistor V as a function of time for >  !. First we will apply an input current
3i in the form of an impulse $ Ð>Ñ, and next the input current will have a unit step form. Both
results will be used in many cases in the following parts of the book. In the same way as for
J a=b and 0 a>b we will label the voltages and currents with capitals (Z , M ) when they are the
functions of frequency and with small letters (@, 3) when they are functions of time.
[Circuit: the input current 3i œ " VÎV drives the parallel combination of the capacitor G (current 3G ) and the series branch of the inductor P (current 3P ) and resistor V ; the output voltage @o is taken across V .]

Fig. 1.14.1: A simple VPG circuit, driven by a current step, often found in electrical and electronic networks.

The input current is composed of two components, the current through the capacitor
MG , and through the inductor MP (and the resistor V) and Zi is the input voltage:
Zi =# P G  = V G  "
Mi œ MG  MP œ Zi = G  œ Zi (1.14.1)
=PV =PV
Correspondingly:
Zi =PV
œ # (1.14.2)
Mi = PG =VG "

This is a typical input function [Ref. 1.16]; in this case it has the form of an (input)
impedance, ^i . The characteristic of an input function is that the number of poles exceeds
the number of zeros by one only. The output voltage Zo is:
V
Zo œ Z i (1.14.3)
=PV
and so:
Zo V =PV V
œ † œ # (1.14.4)
Mi = P  V =# P G  = V G  " = PG =VG "

The result is the transfer function of the network (from input to output, though
expressed as the output to input ratio). Since the dimension of Eq. 1.14.4 is (complex)
Ohms it is also named the transimpedance. In general we will assume that the input current
is " VÎV, in order to obtain a normalized transfer function:

" [V]
Zo œ œ KÐ=Ñ (1.14.5)
=# P G  = V G  "


In our later applications of the circuit in Fig. 1.14.1 the denominator of Eq. 1.14.5
must have complex roots (although, in general, the roots can also be real). Now let us
calculate both roots of the denominator from its canonical form:
V "
=#  =  œ! (1.14.6)
P PG
with the roots:
V V# "
=",# œ 5" „ 4 =" œ  „Ê  (1.14.7)
#P % P# PG

In special cases, some of which we shall analyze in the later parts of the book, the roots
may also be double and real.
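Numerically, the two roots follow directly from Eq. 1.14.7. A small Python sketch (the component values are an arbitrary illustration, not taken from the text):

```python
import cmath

# Arbitrary example values: R = 1 ohm, L = C = 0.5 (normalized units)
R, L, C = 1.0, 0.5, 0.5

# Eq. 1.14.7: s_{1,2} = -R/(2L) +/- sqrt( R^2/(4 L^2) - 1/(L C) )
a = R / (2 * L)
d = cmath.sqrt(a * a - 1 / (L * C))   # complex when the discriminant is negative
s1, s2 = -a + d, -a - d
print(s1, s2)                          # a complex conjugate pair here

# Sanity checks against the canonical form s^2 + s R/L + 1/(L C) = 0:
# the root product must equal 1/(L C) and the root sum must equal -R/L.
```

Using cmath.sqrt rather than math.sqrt lets the same two lines cover the real-root (overdamped) and complex-root (underdamped) cases alike.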
Expressing the transfer function, Eq. 1.14.5, by its roots, we obtain:

" "
KÐ=Ñ œ † (1.14.8)
PG Ð=  =" ÑÐ=  =# Ñ

From the _" transform of this function we obtain the system’s impulse response in
the time domain, gÐ>Ñ œ ge$ Ð>Ñf. The factor "ÎPG is the system resonance, =#" , which in a
different network may take a different form (in the general normalized second-order case it
is equal to the product of the two poles, =" =# ). Thus, we put O œ "ÎPG :

O
gÐ>Ñ œ _" eKÐ=Ñf œ _" œ 
Ð=  =" ÑÐ=  =# Ñ

O e=> e=>
œ ( .= œ O " res œ  (1.14.9)
#14 Ð=  =" ÑÐ=  =# Ñ Ð=  =" ÑÐ=  =# Ñ
G

The contour of integration in Eq. 1.14.9 must encircle both poles.


Since the network in Fig. 1.14.1 is passive, both poles =" and =# lie on the left side
of the complex plane. As an example, in Fig. 1.14.2 we have drawn the magnitude of KÐ=Ñ
for the special case of a 2nd -order Butterworth network, for which the absolute values of the
pole components are equal, l5 l œ l=l œ "ÎÈ# œ !Þ(!(.
In this figure the Laplace transformed system transfer function, ¸KÐ=Ѹ, is
represented by the surface over the complex plane, peaking to infinity over the poles.
If we intersect the magnitude function by a vertical plane along the 4=-axis, the
surface edge (curve) at the intersection represents the frequency response lKÐ4 =Ñl of the
function lKÐ=Ñl. The response is shown in a linear scale, and for the negative values of = as
well. In later sections we shall draw the frequency response graphs with a logarithmic scale
for the frequency (positive only). Likewise, the magnitude will be logarithmic as well.
Eq. 1.14.9 has two residues:
e=> e=" >
res" œ =lim
p=
Ð=  =" Ñ œ
" Ð=  =" ÑÐ=  =# Ñ ="  =#

e=> e=# >


res# œ =lim
p=
Ð=  =# Ñ œ (1.14.10)
# Ð=  =" ÑÐ=  =# Ñ =#  ="


[3D plot: the surface lKÐ=Ñl over the = plane, peaking towards infinity above the poles =" and =# ; its intersection with the plane through the 4= axis gives the frequency response magnitude lKÐ4 =Ñl.]

Fig. 1.14.2: The magnitude of the system transfer function, Eq. 1.14.8, for ="ß# œ a" „ 4 bÈ#
and O œ ". For dÐ=Ñ œ !, the surface lKÐ=Ñl is reduced to the frequency response’s magnitude
curve, lKÐ4 =Ñl. The height at ="ß# is _, but was limited to 3 in order to see lKÐ4 =Ñl in detail.

The corresponding time function is the sum of both residues:


O
gÐ>Ñ œ res"  res# œ Še=" >  e=# > ‹ (1.14.11)
="  =#
Now we insert 5"  4 =" for =" and 5"  4 =" for =# :
O
gÐ>Ñ œ eÐ5"  4 =" Ñ>  eÐ5"  4 =" Ñ> ‘ (1.14.12)
5"  4 ="  5"  4 ="
We factor out e5" > and rearrange the denominator to obtain:
O O 5" > e4=" >  e4 =" >
gÐ>Ñ œ e5" > e4 =" >  e4 =" > ‘ œ e (1.14.13)
# 4 =" =" #4
Since:
e4=" >  e4 =" >
œ sin =" > (1.14.14)
#4
then:
O 5" >
gÐ>Ñ œ e sin =" > (1.14.15)
="
But O can also be expressed with 5" and =" :
"
Oœ œ =" =# œ 5"#  ="# (1.14.16)
PG
so:
5"#  =#" 5" > È5"#  =#" 5 >
gÐ>Ñ œ e sin =" > œ e " sin =" > (1.14.17)
=" sin )

where ) is the angle between a pole and the positive 5 axis, as in Fig. 1.13.4.


In our example, 5"#  =#" œ " (Butterworth case), so Eq. 1.14.17 can be simplified:
" 5" > "
gÐ>Ñ œ e sin =" > œ e5" > sin =" > (1.14.18)
=" sin )
Note that Eq. 1.14.13 and Eq. 1.14.17 are valid for any complex pole pair, not
just for Butterworth poles. This completes the calculation of the impulse response.
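The equivalence of the residue sum, Eq. 1.14.11, and the closed form, Eq. 1.14.17, is easy to confirm numerically. A Python sketch with the Butterworth pole pair used above (our own cross-check, not part of the original text):

```python
import cmath, math

# Butterworth pole pair from the text: s1,2 = (-1 +/- j)/sqrt(2), |s1| = 1
s1 = complex(-1.0, 1.0) / math.sqrt(2)
s2 = s1.conjugate()
K = (s1 * s2).real                  # K = 1/(LC) = s1*s2 = 1 here

def g_residues(t):
    # Eq. 1.14.11: sum of the two residues of K e^{st} / ((s-s1)(s-s2))
    r1 = K * cmath.exp(s1 * t) / (s1 - s2)
    r2 = K * cmath.exp(s2 * t) / (s2 - s1)
    return (r1 + r2).real           # the imaginary parts cancel

def g_closed(t):
    # Eq. 1.14.17: g(t) = (sigma1^2 + omega1^2)/omega1 * e^{sigma1 t} sin(omega1 t)
    sig, om = s1.real, s1.imag
    return (sig * sig + om * om) / om * math.exp(sig * t) * math.sin(om * t)

for t in (0.0, 0.5, 1.0, 3.0, 7.0):
    print(t, g_residues(t), g_closed(t))   # the two columns agree
```

The same pattern, complex residues summing to a real damped sinusoid, recurs throughout the later parts of the book.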
The next case, in which we are interested more often, is the step response. In
Example 1, Sec. 1.5, we have calculated that the unit step function in the time domain
corresponds to "Î= in the frequency domain. To obtain the step response in the time
domain, we need only to multiply the frequency response by "Î= and calculate the inverse
_ transform of the product. So by multiplying KÐ=Ñ by "Î= we obtain a new function:

" O
J Ð=Ñ œ KÐ=Ñ œ (1.14.19)
= = Ð=  =" ÑÐ=  =# Ñ

To calculate the step response in the time domain we use the _" transform:
O
0 Ð>Ñ œ _" eJ Ð=Ñf œ _" œ
= Ð=  =" Ñ Ð=  =# Ñ

O e=> e=>
œ ( .= œ O " res œ  (1.14.20)
#14 = Ð=  =" ÑÐ=  =# Ñ = Ð=  =" ÑÐ=  =# Ñ
G

The difference between Eq. 1.14.9 and Eq. 1.14.20 is that here we have an additional
pole =! œ !, because of the factor "Î=. Thus here we have three residues:
e=> "
res! œ lim = œ
=p! = Ð=  =" ÑÐ=  =# Ñ =" =#
e=> e=" >
res" œ =lim
p=
Ð=  =" Ñ œ
" = Ð=  =" ÑÐ=  =# Ñ =" Ð="  =# Ñ
e=> e=# >
res# œ =lim
p=
Ð=  =# Ñ œ (1.14.21)
# = Ð=  =" ÑÐ=  =# Ñ =# Ð=#  =" Ñ

In the double-pole case (coincident pole pair, =" œ =# ) the calculation is different
(remember Eq. 1.11.12) and it will be shown in several examples in Part 2. The time
domain function is the sum of all three residues (OÎ=" =# is factored out):

O =# ="
0 Ð>Ñ œ Œ"  e=" >  e=# >  (1.14.22)
=" =# ="  = # =#  = "

By expressing =" œ 5"  4 =" and =# œ 5"  4 =" in each of the residues we obtain:

O O O
œ œ # œ" (see Eq. 1.14.16)
=" =# Ð5"  4 =" ÑÐ5"  4 =" Ñ 5"  =#"
=# e=" > Ð5"  4 =" Ñ eÐ5"  4 =" Ñ> 5"  4 =" Ð5"  4 =" Ñ>
œ œ e
="  = # 5"  4 ="  5"  4 =" # 4 ="

=" e=# > Ð5"  4 =" Ñ eÐ5"  4 =" Ñ> 5"  4 =" Ð5"  4 =" Ñ>
œ œ e (1.14.23)
=#  =" 5"  4 ="  5"  4 =" # 4 ="


We put these results into Eq. 1.14.22 and obtain:


5"  4 =" Ð5"  4 =" Ñ> 5"  4 =" Ð5"  4 =" Ñ>
0 Ð>Ñ œ "  e  e (1.14.24)
# 4 =" # 4 ="

By factoring out e5" > , and with a slight rearranging, we arrive at:

5" e4 =" >  e4 =" > e4 =" >  e4 =" >
0 Ð>Ñ œ "  e5" > –    — (1.14.25)
=" #4 #

Since Ðe4 =" >  e4 =" > ÑÎ# 4 œ sin =" > and Ðe4=" >  e4 =" > ÑÎ# œ cos =" > we can
simplify Eq. 1.14.25 into the form:

5"
0 Ð>Ñ œ "  e 5" > Œ sin =" >  cos =" > (1.14.26)
="

We could now numerically calculate the response, but first we want to show two things:
1) how the formula relates to the physical circuit behavior; and
2) an error which is all too often ignored (even by experienced engineers!).

We can further simplify the sine–cosine term by using the vector sum of the two
phasors (this relation can be found in any mathematics handbook):

E sin !  F cos ! œ ÈE#  F # sin Ð!  )Ñ where ) œ arctanaFÎEb

By putting E œ 5" =" and F œ " we arrive at:


#
5" 5 >
0 Ð>Ñ œ "  Ë"  Œ  e " sin Ð=" >  )Ñ (1.14.27)
="
where:
="
) œ arctanŒ 
5"

For the Butterworth case, the square root is equal to È#, but in the general case it is:
# È5"#  =#"
5" "
Ë"  Œ  œ œº º (1.14.28)
=" =" sin )

Note that for any value of 5" and =" their square can never be negative, which is reflected
in the absolute value notation at the end; on the other hand, it is important to preserve the
correct sign of the phase shifting term in sin Ð=" >  )Ñ. By putting Eq. 1.14.28 back into
Eq. 1.14.27 we obtain a relatively simple expression:

" 5 >
0 Ð>Ñ œ "  º º e " sin Ð=" >  )Ñ (1.14.29)
sin )

If we now insert the numerical values for 5" , =" and ) and plot the function for > in
the interval from 0 to 10, the resulting graph will be obviously wrong! What happened?


Let us check our result by applying the rule of initial and final value from Sec. 1.6.
We will use Eq. 1.14.29 and Eq. 1.14.8, considering that O œ =" =# (Eq. 1.14.16).
1. Check the initial value in the frequency-domain, = p _:
=" =#
0 Ð!Ñ œ =lim
p_
= J Ð=Ñ œ =lim
p_
œ! (1.14.30)
Ð=  =" ÑÐ=  =# Ñ

which is correct. But in the time-domain at > œ !:

"
0 Ð!Ñ œ "  e5" ! sin Ð=" !  )Ñ œ # (1.14.31)
lsin )l

which is wrong!

2. Check the final value for > p _:

"
0 Ð_Ñ œ lim ”"  e5" > sin Ð=" >  )Ñ•
>p_ lsin )l
=" =#
œ = J Ð!Ñ œ œ" (1.14.32)
Ð!  =" ÑÐ!  =# Ñ

and at least this one is correct in both the time and frequency domain. Note that in both
checks the pole at = œ ! is canceled by the multiplication of J Ð=Ñ by =.

Considering the error in the initial value in the time domain, many engineers
wrongly assume that they have made a sign error and change the time domain
equation to:
" 5 >
0 Ð>Ñ œ "  º º e " sin Ð=" >  )Ñ (wrong!)
sin )

Although the step response plot will now be correct, a careful analysis shows that
the negative sign is completely unjustified! Instead we should have used:
="
) œ 1  arctanŒ  (1.14.33)
5"

The reason for the added 1 lies in the tangent function, which repeats with a period
of 1 radians (and not #1, as the sine and cosine do). A sign is therefore lost, since
the arctangent cannot distinguish angles in the first quadrant from those in the
third, nor angles in the second quadrant from those in the fourth. See Appendix 2.3
for more such cases in 3rd - and 4th -order systems.
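In numerical work this quadrant ambiguity is avoided by using the two-argument arctangent (atan2), which keeps the signs of both components and so makes a manual correction by 1 unnecessary. A short Python sketch with the Butterworth pole values from the text (our own illustration):

```python
import math

# Butterworth pole: s1 = sigma1 + j*omega1; its pole angle is 135 degrees
sigma1, omega1 = -1 / math.sqrt(2), 1 / math.sqrt(2)

# The plain arctangent loses the quadrant: omega1/sigma1 = -1 maps to -45 degrees
theta_naive = math.degrees(math.atan(omega1 / sigma1))

# The two-argument arctangent sees the signs of both components separately
# and returns the angle in the correct (second) quadrant:
theta_atan2 = math.degrees(math.atan2(omega1, sigma1))

print(theta_naive, theta_atan2)    # -45.0 versus 135.0
```

The same function exists in practically every numerical environment (atan2 in C, MATLAB, Fortran, etc.), and it is the safe choice whenever a phase angle is computed from real and imaginary parts.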

A graphical presentation of the step response solution, given by Eq. 1.14.29 and
with the correct initial phase angle, Eq. 1.14.33, is displayed in Fig. 1.14.3.
The physical circuit behavior can be explained as follows:


The system resonance term, sinÐ=" >Ñ, is first shifted by ), the characteristic angle of
the pole, becoming sinÐ=" >  )Ñ (the time shift is )Î=" ). At resonance the voltage and
current in reactive components are each others’ derivatives (a sine–cosine relationship, see
Eq. 1.14.26), the initial phase angle ) reflects their impedance ratios.
The amplitude of the shifted function is then corrected by the absolute value of the
function at > œ !, which is l"Îsin )l. Thus the starting value is equal to ", and in addition
the slope is precisely identical to the initial slope of the exponential damping function, e5" > ,
so that their product has zero initial slope.
This product is the system reaction to the unit step excitation, 2Ð>Ñ, which sets the
final value for > p _ (=! œ !). Summing the residue at =! (res! œ ") with the reaction
function gives the final result, the step response 0 Ð>Ñ.

[Two plots: a) the unit step 2Ð>Ñ, the shifted resonance sin Ð=" >  )Ñ, the damping factor e5" > and their scaled product e5" > sin Ð=" >  )ÑÎlsin )l, together with the pole positions ="ß# in the = plane (here 5" œ −", =" œ "); b) the sum of the unit step and the damped oscillation, giving the step response 0 Ð>Ñ, with the poles =! , =" , =# shown.]

Fig. 1.14.3: Step by step graphic representation of the procedures used in the calculation of
the step response of a system with two complex conjugate poles.

We have purposely presented the complete calculation of the step response for the
VPG circuit in every detail for two reasons:
1) to show how the step response is calculated by means of the _" transform and
the theory of residues; and
2) because we shall meet such functions very frequently in the following parts.


1.15 Convolution

In network analysis we often encounter a cascade of networks, where the output of
the preceding network drives the input of the following one. The output of the latter
network is therefore a response to the response of the preceding network. We need a
procedure to solve such problems. In the time domain this is done by the convolution
integral [Ref. 1.2]. Fig. 1.15.1 displays the complete procedure of convolution.
In Fig. 1.15.1a there are two networks:
The network E has a Bessel pole pair with the following data:
=",# œ 5" „ 4 =" œ "Þ&!! „ 4!Þ)''; the pole angle is )" œ „ "&!°.
In addition, owing to the input unit step function, we have a third pole =! œ !.
The network F has a Butterworth pole pair with the following data:
=$,% œ 5# „ 4 =# œ !Þ(!(" „ 4!Þ(!("; the pole angle is )# œ „ "$&°.
Bessel and Butterworth poles are discussed in detail in Part 4 and Part 6.
According to Eq. 1.14.29 the step response of the network E is:
"
0 Ð>Ñ œ "  e5" > sin Ð=" >  )" Ñ
lsin )" l

and, according to Eq. 1.14.17, the impulse response of the network F is:
"
gÐ>Ñ œ e5# > sin =# >
lsin )# l

Both functions are shown in Fig. 1.15.1a. We will convolve ga>b because it is easier
to do so. This convolving (folding) is done by time reversal about > œ !, obtaining
ga7  >b. The reversion interval 7 has to be chosen so that gÐ>   7 Ñ œ ! (or at least very
close to zero), otherwise the convolution integral would not converge to the correct final
value. The output function CÐ>Ñ is then the convolution integral:
>max
" "
CÐ>Ñ œ ( ”"  e5" > sin Ð=" >  )" Ñ• e5# Ð7 >Ñ sin =# Ð7  >Ñ .> (1.15.1)
lsin ) " l
ðóóóóóóóóóóóóóóóóñóóóóóóóóóóóóóóóóò lsin )# l
ðóóóóóóóóóóóóóóñóóóóóóóóóóóóóóò
!
0 Ð>Ñ gÐ7  >Ñ
To solve this integral requires a formidable effort and the reader may be assured that
we shall not attempt to solve it here, because — as we will see later — there is a more
elegant method of doing so. We have written the complete integral merely to give the
reader an example of the convolution based on the functions which we have already
calculated. Nevertheless, it is a challenge for the reader who wants to do it by himself (for
the construction of diagrams in Fig. 1.15.1, this integral has been solved!).
In Fig. 1.15.1b we first fold (time reverse) the function gÐ>Ñ and introduce the shift parameter 7
to obtain gÐ7  >Ñ. Next, in Fig. 1.15.1c the function gÐ7  >Ñ is shifted right along the time
axis to the position > œ ", obtaining gÐ7  >  "Ñ. The area E" under the product of the two
signals is the value of the convolution integral for the interval ! Ÿ > Ÿ ".
In a similar fashion, in Fig. 1.15.1d the function gÐ7  >Ñ is shifted to > œ #. Here
the value of the convolution integral for the interval ! Ÿ > Ÿ # is equal to the area E# .


[Eight plots, a–h: a) the cascade of networks E and F with the step response 0 Ð>Ñ of E and the impulse response gÐ>Ñ of F , and the defining convolution integral CÐ>Ñ œ 0 Ð>Ñ ‡ gÐ>Ñ; b) the folding (time reversal) of gÐ>Ñ into gÐ−>Ñ; c–g) gÐ7  >Ñ shifted to > œ ", #, $, %, &, with the areas E" , á , E& under the product 0 Ð>Ñ gÐ7  >Ñ; h) the output CÐ>Ñ, with the areas E3 marking its values at the corresponding times.]

Fig. 1.15.1: Graphic presentation of the mathematical course of the convolution calculus, 0 Ð>чgÐ>Ñ. See the text for the description.


In Fig. 1.15.1e, the function gÐ7  >Ñ is shifted to > œ $ to obtain the area E$ and in
Fig. 1.15.1f, the function gÐ7  >Ñ is shifted to > œ %, resulting in the area E% , which is in
part negative, owing to the shape of gÐ7  >Ñ.
In Fig. 1.15.1g, > œ & and the area E& is obtained. Since 0 Ð>Ñ has nearly reached its
final value and gÐ>Ñ is almost zero for >  &, any further shifting changes E> only slightly.
Finally, in Fig. 1.15.1h the values of E" , á , E5 are inserted to point to the
particular values of the output function CÐ>Ñ. For comparison, the input of network F , 0 Ð>Ñ,
is also drawn. Although 0 Ð>Ñ has almost no overshoot, the Butterworth poles in the network
F cause an undershoot in gÐ>Ñ, which results in an overshoot in the output signal CÐ>Ñ.

Important note: In the last plot of Fig. 1.15.1 the system response CÐ>Ñ is plotted as
if the network F had a unity gain. The impulse response of a unity gain system is
characterized by the whole area under it being equal to 1; consequently, its peak
amplitude would be very small compared to 0 Ð>Ñ, so there would not be much to
see. Therefore for gÐ>Ñ we have plotted its ideal impulse response. The
normalization to unity gain is accomplished by dividing the ideal impulse response
by its own time integral (numerically, each instantaneous amplitude sample is
divided by the sum of all samples). See Part 6 and Part 7 for more details.

From Eq. 1.6.51, it has become evident that convolution in the time domain
corresponds to a simple frequency domain multiplication. This is also shown in Fig. 1.15.2.
The upper half of the figure is the = domain, whilst the bottom half is the > domain. Instead
of performing the convolution gÐ>Ñ ‡ 0 Ð>Ñ in the > domain, which is difficult (see Eq. 1.15.1),
we rather perform a simple multiplication KÐ=Ñ † J Ð=Ñ in the = domain. Then, by means of
the _" transform, we obtain the function CÐ>Ñ which we are looking for.

[Figure 1.15.2: a block diagram. Upper row (s domain): the system transform G(s), multiplied
by the signal transform F(s), gives the response transform Y(s) — an easy algebraic equation.
Lower row (t domain): the system transfer function g(t), convolved with the signal f(t), gives
the response y(t) — a difficult integral equation. The two rows are connected by the transform
and the inverse transform.]

Fig. 1.15.2: Equivalence of the system response calculation in the time domain, 0 Ð>Ñ ‡ gÐ>Ñ,
and in the frequency domain, J Ð=Ñ † KÐ=Ñ. For analytical work the transform route is the easy
way; for computer use the direct method is preferred.

-1.83-

By taking the transform route we need only to calculate the sum of all the residues
(five in the case shown in Fig. 1.15.1), which is far less difficult than the calculation of the
integral in Eq. 1.15.1.
The mathematical expression which applies to this case is:

$$
y(t) \;=\; \mathcal{L}^{-1}\!\left\{ G(s) \cdot F(s) \right\}
\;=\; \sum \operatorname{res}\,
\underbrace{\left[ \frac{s_1 s_2}{s\,(s - s_1)(s - s_2)} \right]}_{F(s)}
\underbrace{\left[ \frac{s_3 s_4}{(s - s_3)(s - s_4)} \right]}_{G(s)}
e^{st}
\tag{1.15.2}
$$

Here the numerators of both fractions have been normalized by introducing the products
=" =# and =$ =% respectively, to replace the constant O (according to Eq. 1.14.16) in
Eq. 1.14.19 and 1.14.8. A solution of the above equation can be found in Part 2, Sec. 2.6.
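The residue summation of Eq. 1.15.2 can also be checked numerically. The sketch below (in Python with numpy; the second-order Butterworth pole values are illustrative assumptions for this sketch, not the poles of the example above, which are derived in Part 2, Sec. 2.6) evaluates y(t) directly as a sum of residues:

```python
import numpy as np

# Illustrative pole values (an assumption for this sketch): second-order
# Butterworth poles for F(s), and the same pattern at twice the bandwidth
# for G(s).
s12 = np.roots([1.0, np.sqrt(2.0), 1.0])        # poles s1, s2 of F(s)
s34 = 2.0 * np.roots([1.0, np.sqrt(2.0), 1.0])  # poles s3, s4 of G(s)
poles = np.concatenate(([0.0 + 0.0j], s12, s34))
K = np.prod(s12) * np.prod(s34)                 # normalizing numerator constant

# For the simple poles of K e^{st} / [s (s-s1)(s-s2)(s-s3)(s-s4)],
# the residue at p_i is  K e^{p_i t} / prod_{j != i} (p_i - p_j).
t = np.linspace(0.0, 20.0, 501)
y = np.zeros_like(t)
for i, p in enumerate(poles):
    others = np.delete(poles, i)
    y = y + np.real(K * np.exp(p * t) / np.prod(p - others))

print(round(y[-1], 4))  # DC gain is unity, so y settles to 1.0
```

The residue at the pole = œ ! equals the DC gain (unity here), while the residues at the network poles contribute only decaying transients.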
Fig. 1.15.2 also reveals another very important possibility. If the input signal 0 Ð>Ñ is
known and a certain output signal CÐ>Ñ is desired, we can synthesize (not always!) the
intermediate network KÐ=Ñ by taking the _ transform of both time functions and calculating
their quotient:
$$
G(s) = \frac{Y(s)}{F(s)}
\tag{1.15.3}
$$

where ] Ð=Ñ œ _eCÐ>Ñf and J Ð=Ñ œ _e0 Ð>Ñf.
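As a simple illustration (with values chosen here for convenience, not taken from the book's example): let the input be the unit step, $f(t) = 1(t)$, so that $F(s) = 1/s$, and let the desired output be $y(t) = 1 - e^{-t}$, so that $Y(s) = 1/[s(s+1)]$. Then

$$
G(s) = \frac{Y(s)}{F(s)} = \frac{1/[s\,(s+1)]}{1/s} = \frac{1}{s+1}
$$

which is a realizable single-pole low pass network. The synthesis fails (the "not always!" above) whenever the quotient $Y(s)/F(s)$ is not a stable, physically realizable function, e.g., if it has more zeros than poles, or poles in the right half of the $s$ plane.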


Eq. 1.15.1 has convinced us that the calculation of convolution is not an easy task,
even for relatively simple functions. Using a computer, the convolution in the time
domain can be calculated numerically. Several good mathematical programs exist (we
have been using Matlab™ [Ref. 1.18]) which simplify the convolution calculation to a
matter of pure routine. This is explained in detail in Part 6 and Part 7.
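A minimal sketch of such a numerical calculation (written here in Python with numpy rather than Matlab; the two signals are illustrative assumptions, not those of Fig. 1.15.1) also shows the unity-gain normalization described in the note above:

```python
import numpy as np

# Illustrative signals: a smoothed step input and a single-pole decaying
# impulse response (both assumed for this sketch).
dt = 0.01
t = np.arange(0.0, 10.0, dt)
f = 1.0 - np.exp(-t)             # input signal f(t)
g = np.exp(-2.0 * t)             # ideal impulse response g(t) of the network

# Normalize g(t) to unity gain: divide each sample by the sum of all samples
# (the discrete equivalent of dividing by the time integral).
g = g / np.sum(g)

# Discrete convolution, truncated to the original time range.
y = np.convolve(f, g)[:len(t)]
print(round(y[-1], 3))           # settles to the input's final value, 1.0
```

Because `g` is normalized to unit sum, no additional scaling by the time step is needed; the output directly approximates the unity-gain response.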

-1.84-

Résumé of Part 1

So far we have discussed the Laplace transform and its inverse only to the extent
that the reader needs for understanding the rest of the book.
Since we shall calculate many practical examples of the _" transform in the
following chapters, we have discussed extensively only the calculation of the time function
of a simple two-pole network with a complex conjugate pole pair, excited by the unit step
function.
The readers who want to broaden their knowledge of the Laplace transform, can
find enough material for further study in the references quoted.

-1.85-

References:

[1.1] G.E. Valley & H. Wallman, Vacuum Tube Amplifiers,
MIT Radiation Laboratory Series, Vol. 18, McGraw-Hill, New York, 1948.
[1.2] R.B. Randall, Frequency Analysis,
Brüel & Kjær, Nærum, 1987.
[1.3] M.F. Gardner & J.L. Barnes, Transients in Linear Systems Studied by Laplace Transform,
Twelfth Printing, John Wiley & Sons, New York, 1956.
[1.4] O. Föllinger, Laplace und Fourier Transformation,
AEG–Telefunken, Berlin, 1982.
[1.5] G. Doetsch, Introduction to the Theory and Application of the Laplace Transform,
Springer–Verlag, Berlin, 1970.
[1.6] G. Doetsch, Anleitung zum praktischen Gebrauch der Laplace Transformation und der
Z–Transformation, R. Oldenbourg Verlag, Munich–Vienna, 1985.
[1.7] T.F. Bogart, Jr., Laplace Transforms and Control Systems, Theory for Technology,
John Wiley & Sons, New York, 1982.
[1.8] M. O’Flynn & E. Moriarty, Linear Systems, Time Domain and Transform Analysis,
John Wiley, New York, 1987.
[1.9] G.A. Korn & T.M. Korn, Mathematical Handbook for Scientists and Engineers,
McGraw-Hill, New York, 1961.
[1.10] M.R. Spiegel, Theory and Problems of Laplace Transforms,
Schaum’s Outline Series, McGraw-Hill, New York, 1965.
[1.11] J. Plemelj, Teorija analitičnih funkcij,
Slovenska akademija znanosti in umetnosti, Ljubljana, 1953.
[1.12] M.R. Spiegel, Theory and Problems of Complex Variable, SI (Metric) Edition,
McGraw-Hill, New York, 1974.
[1.13] I. Stewart & D. Tall, Complex Variables,
Cambridge University Press, Cambridge, 1983.
[1.14] R.V. Churchill & J.W. Brown, Complex Variables and Applications, Fourth Edition,
International Student Edition, McGraw-Hill, Auckland, 1984.
[1.15] W. Gellert, H. Küstner, M. Hellwich & H. Kästner, The VNR Concise Encyclopedia of
Mathematics, Second Edition, Van Nostrand Reinhold, New York, 1992.
[1.16] M.E. Van Valkenburg, Introduction to Modern Network Synthesis,
John Wiley & Sons, New York, 1960.
[1.17] P. Starič, Proof of the Inverse Laplace Transform for Positive Real Functions,
Elektrotehniški vestnik, Ljubljana, 1991, pp. 23–27.
[1.18] J.N. Little & C.B. Moler, MATLAB User’s Guide,
The MathWorks, Inc., South Natick, USA, 1990.
[1.19] G.E. Hostetter, Engineering Network Analysis,
Harper & Row, Publishers, New York, 1984.
[1.20] J. Bednařík & J. Daněk, Obrazové zesilovače pro televisi a měřicí techniku,
Státní nakladatelství technické literatury, Prague, 1957.
[1.21] V. Bubeník, Impulsová technika,
Státní nakladatelství technické literatury, Prague, 1958.
[1.22] D.E. Scott, An Introduction to System Analysis, A System Approach,
McGraw-Hill, New York, 1987.
[1.23] P. Kraniauskas, Transforms in Signals and Systems,
Addison-Wesley, Wokingham, 1992.

-1.87-
P. Starič, E. Margan

Wideband Amplifiers

Part 2:

Inductive Peaking Circuits

Complex solutions always have one simple explanation!


(Lunsford’s Rule of scientific endeavor)
P. Starič, E. Margan                                                              Inductive Peaking Circuits

The Renaissance of Inductance

In Part 2 of Wideband Amplifiers we discuss various forms of inductive peaking circuits.
The topic of inductive peaking actually started with Oliver
Heaviside’s “Telegrapher’s Equation” back in the 1890s, in which for
the first time an inductance was used to compensate a dominantly
capacitive line to extend the bandwidth. The development flourished
with radio, TV and radar circuits, and reached a peak in oscilloscopes
in the 1970s.
With the widespread use of modern high speed, low power
operational amplifiers and digital electronics, the inductance virtually
disappeared from the signal transmission path, remaining mostly in power
supply filtering and later in switching power supplies. All too
many contemporary electronics engineers consider the inductance
more of a nuisance than a useful circuit component.
However, the available frequency spectrum is fixed and the
bandwidth requirements are continuously rising, especially with
modern wireless telecommunications. In our opinion the inductance
just waits to be rediscovered by new generations of electronics
circuit designers. So we believe that the inclusion of this subject in
the book is fully justified.

-2.2-

Contents:

Contents ........................................................................................................................................... 2.3
List of Figures ................................................................................................................................. 2.4
List of Tables ................................................................................................................................... 2.5
2.0 Introduction ............................................................................................................................................ 2.7
2.1 The Principle of Inductive Peaking ........................................................................................................ 2.9
2.2 Two-Pole Series Peaking Circuit .......................................................................................................... 2.13
2.2.1. Butterworth Poles for Maximally Flat Amplitude Response (MFA) ................................... 2.15
2.2.2. Bessel Poles for Maximally Flat Envelope Delay (MFED) Response ................................. 2.16
2.2.3. Critical Damping (CD) ........................................................................................................ 2.17
2.2.4. Frequency Response Magnitude .......................................................................................... 2.17
2.2.5. Upper Half Power Frequency .............................................................................................. 2.17
2.2.6. Phase Response ................................................................................................................... 2.19
2.2.7. Phase Delay and Envelope Delay ....................................................................................... 2.20
2.2.8. Step Response ..................................................................................................................... 2.22
2.2.9. Rise Time ............................................................................................................................ 2.24
2.2.10. Input Impedance ................................................................................................................ 2.25
2.3 Three-Pole Series Peaking Circuit ........................................................................................................ 2.27
2.3.1. Butterworth Poles (MFA) .................................................................................................... 2.28
2.3.2. Bessel Poles (MFED) .......................................................................................................... 2.29
2.3.3. Special Case (SPEC) ........................................................................................................... 2.31
2.3.4. Phase Response ................................................................................................................... 2.31
2.3.5. Envelope Delay ................................................................................................................... 2.32
2.3.6. Step Response ..................................................................................................................... 2.32
2.4 Two-Pole T-coil Peaking Circuit .......................................................................................................... 2.35
2.4.1. Frequency Response ............................................................................................................ 2.42
2.4.2. Phase-Response ................................................................................................................... 2.43
2.4.3. Envelope-Delay ................................................................................................................... 2.43
2.4.4. Step Response ..................................................................................................................... 2.45
2.4.5. Step Response from Input to @V .......................................................................................... 2.46
2.4.6. A T-coil Application Example ............................................................................................ 2.48
2.5 Three-Pole T-coil Peaking Circuit ........................................................................................................ 2.51
2.5.1. Frequency Response ............................................................................................................ 2.54
2.5.2. Phase Response ................................................................................................................... 2.55
2.5.3. Envelope Delay ................................................................................................................... 2.56
2.5.4. Step Response ..................................................................................................................... 2.57
2.5.5. Low Coupling Cases ............................................................................................................ 2.58
2.6 Four-pole L+T Peaking Circuit (L+T) .................................................................................................. 2.63
2.6.1. Frequency Response ............................................................................................................ 2.66
2.6.2. Phase Response ................................................................................................................... 2.68
2.6.3. Envelope Delay ................................................................................................................... 2.69
2.6.4. Step Response ..................................................................................................................... 2.69
2.7 Two-Pole Shunt Peaking Circuit .......................................................................................................... 2.73
2.7.1. Frequency Response ........................................................................................................... 2.74
2.7.2. Phase Response and Envelope Delay ................................................................................. 2.74
2.7.3. Step Response .................................................................................................................... 2.78
2.8 Three-Pole Shunt-Peaking Circuit ......................................................................................................... 2.83
2.8.1. Frequency Response ............................................................................................................ 2.83
2.8.2. Phase Response ................................................................................................................... 2.86
2.8.3. Envelope-Delay ................................................................................................................... 2.86
2.8.4. Step Response ..................................................................................................................... 2.88
2.9 Shunt–Series Peaking Circuit ................................................................................................................ 2.91
2.9.1. Frequency Response ........................................................................................................... 2.96
2.9.2. Phase Response ................................................................................................................... 2.97
2.9.3. Envelope Delay ................................................................................................................... 2.97
2.9.4. Step Response ..................................................................................................................... 2.99

-2.3-

2.10 Comparison of MFA Frequency Responses and of MFED Step Responses ...................................... 2.103
2.11 The Construction of T-coils .............................................................................................................. 2.105
References ................................................................................................................................................. 2.111

Appendix 2.1: General Solutions for 1st-, 2nd-, 3rd- and 4th-order Polynomials ..................................... A2.1.1
Appendix 2.2: Normalization of Complex Frequency Response Functions ............................................ A2.2.1
Appendix 2.3: Solutions for Step Responses of 3rd- and 4th-order Systems ................................... (CD) A2.3.1
Appendix 2.4: Table 2.10 — Summary of all Inductive Peaking Circuits ......................................(CD) A2.4.1

List of Figures:
Fig. 2.1.1: A common base amplifier with VG load ..................................................................................... 2.9
Fig. 2.1.2: A hypothetical ideal rise time circuit ......................................................................................... 2.10
Fig. 2.1.3: A common base amplifier with the series peaking circuit .......................................................... 2.11
Fig. 2.2.1: A two-pole series peaking circuit .............................................................................................. 2.13
Fig. 2.2.2: The poles =" and =# in the complex plane .................................................................................. 2.14
Fig. 2.2.3: Frequency response magnitude of the two-pole series peaking circuit ...................................... 2.18
Fig. 2.2.4: Phase response of the series peaking circuit .............................................................................. 2.19
Fig. 2.2.5: Phase and envelope delay definition .......................................................................................... 2.20
Fig. 2.2.6: Phase delay and phase advance ................................................................................................. 2.21
Fig. 2.2.7: Envelope delay of the series peaking circuit .............................................................................. 2.22
Fig. 2.2.8: Step response of the series peaking circuit ................................................................................ 2.24
Fig. 2.2.9: Input impedance of the series peaking circuit ............................................................................ 2.26
Fig. 2.3.1: The three-pole series peaking circuit ......................................................................................... 2.27
Fig. 2.3.2: Frequency response of the third-order series peaking circuit ..................................................... 2.30
Fig. 2.3.3: Phase response of the third-order series peaking circuit ............................................................ 2.31
Fig. 2.3.4: Envelope delay of the third-order series peaking circuit ............................................................ 2.32
Fig. 2.3.5: Step response of the third-order series peaking circuit .............................................................. 2.33
Fig. 2.3.6: Pole patterns of the third-order series peaking circuit ............................................................... 2.34
Fig. 2.4.1: The basic T-coil circuit and its equivalent ................................................................................. 2.35
Fig. 2.4.2: Modeling the coupling factor ..................................................................................................... 2.35
Fig. 2.4.3: The poles and zeros of the all pass transimpedance function .................................................... 2.40
Fig. 2.4.4: The complex conjugate pole pair of the Bessel type ................................................................. 2.40
Fig. 2.4.5: The frequency response magnitude of the T-coil circuit ............................................................ 2.43
Fig. 2.4.6: The phase response of the T-coil circuit .................................................................................... 2.44
Fig. 2.4.7: The envelope delay of the T-coil circuit .................................................................................... 2.44
Fig. 2.4.8: The step response of the T-coil circuit, taken from G ............................................................... 2.45
Fig. 2.4.9: The step response of the T-coil circuit, taken from V ............................................................... 2.48
Fig. 2.4.10: An example of a system with different input impedances ......................................................... 2.49
Fig. 2.4.11: Input impedance compensation by T-coil sections ................................................................... 2.50
Fig. 2.5.1: The three-pole T-coil network ................................................................................................... 2.51
Fig. 2.5.2: The layout of Bessel poles for Fig. 2.5.1 .................................................................... 2.51
Fig. 2.5.3: The basic trigonometric relations of main parameters for one of the poles ............................... 2.52
Fig. 2.5.4: Three-pole T-coil network frequency response ......................................................................... 2.55
Fig. 2.5.5: Three-pole T-coil network phase response ................................................................................ 2.56
Fig. 2.5.6: Three-pole T-coil network envelope delay ................................................................................ 2.56
Fig. 2.5.7: The step response of the three-pole T-coil circuit ..................................................................... 2.58
Fig. 2.5.8: Low coupling factor, Group 1: frequency response .................................................................. 2.60
Fig. 2.5.9: Low coupling factor of Group 1: step response ........................................................................ 2.60
Fig. 2.5.10: Low coupling factor of Group 2: frequency response ............................................................... 2.61
Fig. 2.5.11: Low coupling factor of Group 2: step response ........................................................................ 2.61
Fig. 2.6.1: The four-pole L+T network ....................................................................................................... 2.63
Fig. 2.6.2: The Bessel four-pole pattern of L+T network ........................................................................... 2.63
Fig. 2.6.3: Four-pole L+T peaking circuit frequency response ................................................................... 2.67

-2.4-

Fig. 2.6.4: Additional frequency response plots of the four-pole L+T peaking circuit ............................... 2.67
Fig. 2.6.5: Four-pole L+T peaking circuit phase response .......................................................................... 2.68
Fig. 2.6.6: Four-pole L+T peaking circuit envelope delay .......................................................................... 2.69
Fig. 2.6.7: Four-pole L+T circuit step response .......................................................................................... 2.72
Fig. 2.6.8: Some additional four-pole L+T circuit step responses .............................................................. 2.72
Fig. 2.7.1: A shunt peaking network ........................................................................................................... 2.73
Fig. 2.7.2: Two-pole shunt peaking circuit frequency response .................................................................. 2.76
Fig. 2.7.3: Two-pole shunt peaking circuit phase response ......................................................................... 2.77
Fig. 2.7.4: Two-pole shunt peaking circuit envelope delay ......................................................................... 2.77
Fig. 2.7.5: Two-pole shunt peaking circuit step response ........................................................................... 2.80
Fig. 2.7.6: Layout of poles and zeros for the two-pole shunt peaking circuit .............................................. 2.81
Fig. 2.8.1: Three-pole shunt peaking circuit ............................................................................................... 2.83
Fig. 2.8.2: Three-pole shunt peaking circuit frequency response ................................................................ 2.86
Fig. 2.8.3: Three-pole shunt peaking circuit phase response ....................................................................... 2.87
Fig. 2.8.4: Three-pole shunt peaking circuit envelope delay ....................................................................... 2.87
Fig. 2.8.5: Three-pole shunt peaking circuit step response ......................................................................... 2.89
Fig. 2.9.1: The shunt–series peaking circuit ................................................................................................ 2.91
Fig. 2.9.2: The shunt–series peaking circuit frequency response ................................................................ 2.97
Fig. 2.9.3: The shunt–series peaking circuit phase response ....................................................................... 2.98
Fig. 2.9.4: The shunt–series peaking circuit envelope delay ....................................................................... 2.98
Fig. 2.9.5: The shunt–series peaking circuit step response ....................................................................... 2.100
Fig. 2.9.6: The MFED shunt–series step responses by Shea and Braude .................................................. 2.100
Fig. 2.9.7: The MFED shunt–series pole layouts ...................................................................................... 2.101
Fig. 2.10.1: MFA frequency responses of all peaking circuits .................................................................. 2.104
Fig. 2.10.2: MFED step responses of all peaking circuits ......................................................................... 2.104
Fig. 2.11.1: Four-pole L+T circuit step response dependence on component tolerances .......................... 2.105
Fig. 2.11.2: T-coil coupling factor as a function of the coil length to diameter ratio ................................ 2.106
Fig. 2.11.3: Form factor as a function of the coil length to diameter ratio ................................................ 2.108
Fig. 2.11.4: Examples of planar coil structures ......................................................................................... 2.109
Fig. 2.11.5: Compensation of a bonding inductance by a planar T-coil .................................................... 2.109
Fig. 2.11.6: A high coupling T-coil on a double sided PCB ..................................................................... 2.110

List of Tables:

Table 2.2.1: Second-order series peaking circuit parameters ..................................................................... 2.26


Table 2.3.1: Third-order series peaking circuit parameters .......................................................................... 2.34
Table 2.4.1: Two-pole T-coil circuit parameters ......................................................................................... 2.48
Table 2.5.1: Three-pole T-coil circuit parameters ....................................................................................... 2.59
Table 2.6.1: Four-pole L+T peaking circuit parameters .............................................................................. 2.71
Table 2.7.1: Two-pole shunt peaking circuit parameters ............................................................................. 2.81
Table 2.8.1: Three-pole shunt peaking circuit parameters ........................................................................... 2.89
Table 2.9.1: Shunt–series peaking circuit parameters ................................................................ 2.101

Appendix 2.4: Table 2.10 — Summary of all Inductive Peaking Circuits ......................................(CD) A2.4.1

-2.5-

2.0 Introduction

In the early days of wideband amplifiers ‘suitable coils’ were added to the load
(consisting of resistors and stray capacitances) in order to extend the bandwidth, causing in
most cases a resonance peak in the frequency response. Hence the term inductive peaking.
Even though later designers of wideband amplifiers did their best to achieve as flat a
frequency response as possible, the word ‘peaking’ remained and is still in use today.
In some respect the British engineer S. Butterworth might be considered the first to
introduce coils in the (then) anode circuits of electronic tubes to construct an amplifier with
a maximally flat frequency (low pass) response. In his work On the Theory of Filter
Amplifiers, published as early as October 1930 [Ref. 2.1], besides introducing the pole
placement which was later named after him, he also mentioned: “The writer has
constructed filter units in which the resistances and inductances are wound round a
cylinder of length 3in and diameter 1.25 in, whilst the necessary condensers are contained
within the core of the cylinder”. However, it is hard to tell exactly the year when these
‘necessary condensers’ were omitted to leave only the stray and inter-electrode
capacitances of the electronic tubes to form, together with the properly dimensioned coils
and load resistances, a wideband amplifier with maximally flat frequency response. This
was probably done some time in the mid 1930s, when the first electronic voltmeters,
oscilloscopes, and television amplifiers were constructed.
The need for wideband and pulse amplifiers was emphasized with the introduction
of radar during the Second World War. A book of historical value, G. E. Valley & H.
Wallman, Vacuum Tube Amplifiers [Ref. 2.2] was written right after the war and
published in 1948. Apart from details about other types of amplifiers, the most important
knowledge about wideband amplifiers, gained during the war in the Radiation Laboratory
at Massachusetts Institute of Technology, was made public. In this work the amplifier step
response calculation also received the necessary attention.
After the war, people who had worked in the Radiation Laboratory spread over the USA
and UK, and many of them started working at firms where oscilloscopes were produced.
Many articles were written about wideband amplifiers with inductive peaking, but books
which would thoroughly discuss wideband amplifiers were almost non-existent. The reason
was probably because the emphasis has shifted from the frequency domain to the time
domain, where a gap-free mathematical discussion was considered difficult. Nevertheless,
here and there a book on this subject appeared, and one of the most significant was
published in 1957 in Prague: J. Bednařík & J. Daněk, Obrazové zesilovače pro televisí a
měřicí techniku, (Video Amplifiers for Television and Measuring Techniques) [Ref. 2.3].
There the authors attempted to present a thorough discussion of all inductive peaking
circuits known at that time and also of high frequency resonant amplifiers. Computers were
a rare commodity in those days, with restricted access, and programming knowledge was
equally rare; this prevented the authors from performing some important calculations,
which are too elaborate to be done by pencil and paper.
An important change in wideband amplifier design, using inductive peaking, was
introduced by E.L. Ginzton, W.R. Hewlett, J.H. Jasberg, and J.D. Noe in their revolutionary
article Distributed Amplification, [Ref. 2.4]. This was an amplifier with electronic tubes
connected in parallel, where the grid and anode interconnections were made of lumped

-2.7-

sections of a delay line. In this way the bandwidth of the amplifier was extended beyond
the limits imposed by the mutual conductance (gm ) divided by stray capacitance (Gin ) of
electronic tubes. For reasons which we will discuss in Part 3, this type of amplification has
a rather limited application if transistors are used instead of electronic tubes. The necessary
delay in a distributed amplifier was realized using the so-called ‘m-derived’ T-coils, which
did not have a constant input impedance. The correct T-coil circuit was developed in 1964
by C.R. Battjes [Ref. 2.17] and was used for inductive peaking of wideband amplifiers.
Compared with a simple series peaking circuit, a T-coil circuit improves the bandwidth and
rise time exactly twofold. For many years the T-coil peaking circuits were considered a
trade secret, so the first complete mathematical derivations were published by a pupil of
C.R. Battjes only in the early 1980s [Ref. 2.5, 2.6] and in 1995 by C. R. Battjes himself
[Ref. 2.18]. Transistor inter-stage coupling with T-coils represented a special problem,
which was solved by R.I. Ross in late 1960s. This too was considered a classified matter
and appeared in print some ten to twenty years later [Ref. 2.7, 2.8, 2.9]. Owing to the
superb performance of the T-coil circuit we shall discuss it very thoroughly. The transistor
inter-stage T-coil coupling will be derived in Part 3.
Here in Part 2, we shall first explain the basic idea of inductive peaking, followed
by the discussion of the peaking circuits with poles only: series peaking two-pole, series
peaking three-pole, T-coil two-pole, T-coil three-pole, and L+T four-pole circuits. This will
be followed by peaking circuits with poles and zeros: shunt peaking two-pole and one-zero
circuit, shunt peaking three-pole and two-zero circuit, and shunt–series peaking circuit. For
each of the circuits discussed we shall calculate and plot the frequency, phase, envelope
delay, and the step response. The emphasis will be on T-coil circuits, owing to their superb
performance. All the necessary calculations will be explained as we proceed and, whenever
practical, the complete derivations will be given. The exception is the step response of the
series peaking circuit with one complex conjugate pole pair, which was already derived and
explained in Part 1. Since the complete calculation for the step-responses of four-pole L+T
circuits and shunt–series peaking circuits is rather complicated, only the final formulae will
be given. Those readers who want to have the derivations for these circuits as well, will be
able to do so themselves by learning and applying the principles derived in Part 1 and 2
(some assistance can also be found in Appendix 2.1, 2.2 and 2.3).
We strongly recommend that beginners study Sec. 2.2 and 2.3: the circuit
examples are simple enough to allow the analysis to be easily followed and learned; the
same methods can then be applied to more sophisticated circuits in other sections, in which
some of the most basic details are omitted and some equations imported from those two
sections.
At the end of Part 2 we shall draw two diagrams, showing the Butterworth (MFA)
frequency responses and the Bessel (MFED) step responses, to offer an easy comparison of
performance. Finally, in Appendix 2.4 we give a summary table containing the essential
design parameters and equations for all the circuits discussed.


2.1 The Principle of Inductive Peaking

A simple common base transistor amplifier is shown in Fig. 2.1.1. A current step
source is is connected to the emitter; the time scale has its origin t = 0 at the current step
transition time and is normalized to the system time constant, RC. The collector is loaded
by a resistor R; in addition there is the collector–base capacitance Ccb, along with the
unavoidable stray capacitance Cs and the load capacitance CL in parallel. Their sum is
denoted by C.

[Figure: circuit schematic (Ccb + Cs + CL = C) and the exponential step response
vo/(ic R) = 1 − e^(−t/RC), with the rise time τr1 = 2.2 RC marked between t1 and t2]

Fig. 2.1.1: A common base amplifier with RC load: the basic circuit and its step response.

Because of these capacitances, the output voltage vo does not jump suddenly to the
value ic R, where ic is the collector current. Instead this voltage rises exponentially
according to the formula (see Part 1, Eq. 1.7.15):

    vo = ic R (1 − e^(−t/RC))        (2.1.1)

The time elapsed between 10 % and 90 % of the final output voltage value (ic R)
we call the rise time, τr1 (the index ‘1’ indicates that it is the rise time of a single-pole
circuit). We calculate it by inserting the 10 % and 90 % levels into Eq. 2.1.1:

    0.1 ic R = ic R (1 − e^(−t1/RC))   ⇒   t1 = −RC ln 0.9        (2.1.2)

Similarly for t2:

    0.9 ic R = ic R (1 − e^(−t2/RC))   ⇒   t2 = −RC ln 0.1        (2.1.3)

The rise time is the difference between these two instants:

    τr1 = t2 − t1 = RC ln 0.9 − RC ln 0.1 = RC ln (0.9/0.1) = 2.2 RC        (2.1.4)

The value 2.2 RC is the reference against which we shall compare the rise time of all
other circuits in the following sections of the book.
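The 2.2 RC figure of Eq. 2.1.4 is easy to confirm numerically. A minimal sketch in Python (the function name and the illustrative normalization R = C = 1, so that RC = 1 s, are our own choices): the 10 % and 90 % crossing times of the exponential response of Eq. 2.1.1 are found from Eq. 2.1.2 and 2.1.3, and their difference reproduces Eq. 2.1.4.

```python
import math

# Rise time of the single-pole RC load: vo/(ic*R) = 1 - exp(-t/RC).
# The result scales with RC; here RC is passed in explicitly.
def rc_rise_time(R=1.0, C=1.0):
    RC = R * C
    t10 = -RC * math.log(1.0 - 0.1)   # t1 = -RC ln 0.9 (10 % crossing)
    t90 = -RC * math.log(1.0 - 0.9)   # t2 = -RC ln 0.1 (90 % crossing)
    return t90 - t10                  # = RC ln 9, approximately 2.2 RC

tau_r1 = rc_rise_time()
```

Note that the exact value is RC ln 9 = 2.1972 RC; the book's 2.2 RC is the usual rounded figure.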


Since in wideband amplifiers we strive to make the output voltage a replica of the
input voltage (except for the amplitude), we want to reduce the rise time of the amplifier as
much as possible. As the output voltage rises, more current flows through R and less
current remains to charge C. Obviously, we would achieve a shorter rise time if we could
disconnect R in some way until C is charged to the desired level. To do so let us introduce
a switch S between the capacitor C and the load resistor R. This switch is open at time
t = 0, when the current step starts, but closes at time t = RC, as in Fig. 2.1.2. In this way
we force all the available current into the capacitor, so it charges linearly to the voltage ic R.
When the capacitor has reached this voltage, the switch S is closed, routing all the current
to the loading resistor R.

[Figure: the switched charging circuit and its linear ramp response (b, rise time τr0)
compared with the exponential response (a)]

Fig. 2.1.2: A hypothetical ideal rise time circuit. The switch disconnects R from the circuit, so that all
of ic is available to charge C; but after a time t = RC the switch is closed and all ic flows through R.
The resulting output voltage is shown in b, compared with the exponential response in a.

By comparing Fig. 2.1.1 with Fig. 2.1.2, we note a substantial decrease in rise time
τr0, which we calculate from the output voltage:

    vo = (1/C) ∫₀^τ ic dt = (ic/C) t |₀^τ = ic R        (2.1.5)

where τ = RC. Since the charging of the capacitor is linear, as shown in Fig. 2.1.2, the rise
time is simply:

    τr0 = 0.9 RC − 0.1 RC = 0.8 RC        (2.1.6)

In comparison with Fig. 2.1.1, where there was no switch, the improvement factor
of the rise time is:

    ηr = τr1/τr0 = 2.20 RC / 0.8 RC = 2.75        (2.1.7)

It is evident that the rise time (Eq. 2.1.6) is independent of the actual value of
the current ic, but the maximum voltage ic R (Eq. 2.1.5) is not. On the other hand, the
smaller the resistor R, the smaller is the rise time. Clearly the introduction of the switch S
would mean a great improvement. By using a more powerful transistor and a lower value
resistor R we could (at least in principle) decrease the rise time at will (provided that C
remains unchanged). Unfortunately, it is impossible to make a low on-resistance switch,


functioning as in Fig. 2.1.2, which would also suitably follow the signal and automatically
open and close in nanoseconds or even in microseconds. So it remains only a nice idea.
But instead of a switch we can insert an appropriate inductance L between the
capacitor C and resistor R and so partially achieve the effect of the switch, as shown in
Fig. 2.1.3. Since the current through an inductor cannot change instantaneously, more
current will be charging C, at least initially. The configuration of the RLC network allows
us to take the output voltage either from the resistor R or from the capacitor C. In the first
case we have a series peaking network, whilst in the second case we speak of a shunt
peaking network. Both types of peaking networks are used in wideband amplifiers.

[Figure: the series peaking circuit and its responses; vo/(ic R) = 1 + (e^(σ1 t)/|sin θ|) sin(ω1 t + θ),
with σ1 = −R/(2L), ω1 = ±√(1/LC − R²/4L²), and θ = π + arctan(−ω1/σ1)]

Fig. 2.1.3: A common base amplifier with the series peaking circuit. The output voltage vo
(curve c) is compared with the exponential response (a, L = 0) and the response using the
ideal switch (b). If we were to take the output voltage from the capacitor C, we would have a
shunt peaking circuit (see Sec. 2.7). We have already seen the complete derivation of the
procedure for calculating the step response in Part 1, Sec. 1.14. However, the response
optimization in accordance with different design criteria is shown in Sec. 2.2 for the series
peaking circuit and in Sec. 2.7 for the shunt peaking circuit.

Fig. 2.1.3 shows the simplest series peaking circuit. Later, when we discuss T-coil
circuits, we shall not just achieve rise time improvements similar to that in Eq. 2.1.7; in
cases in which it is possible (usually it is) to split C into two parts, we shall obtain a
substantially greater improvement.


2.2 Two Pole Series Peaking Circuit

Besides the series peaking circuit itself, in this section we shall discuss all the significant
mathematical methods needed to calculate the frequency, phase and
time delay responses, the upper half power frequency and the rise time. In addition, we shall
derive the most important design parameters of the series peaking circuit, which we will
also use in the other sections of the book.

[Figure: current source ii driving the node where C goes to ground (current iC) and L in
series with R (current iL); the output vo is taken across R]

Fig. 2.2.1: A two-pole series peaking circuit.

In Fig. 2.2.1 we have repeated the collector loading circuit of Fig. 2.1.3. Since the
inductive peaking circuits are used mostly as collector load circuits, from here on we shall
omit the transistor symbol; instead we shall show the input current Ii (formerly Ic) flowing
into the network, with the common ground as its drain. At first we shall discuss the
behavior of the network in the frequency domain, assuming that Ii is the RMS value of the
sinusoidally changing input current. This current is split into two parts: the current through
the capacitance IC, and the current through the inductance IL. Thus we have:

    Ii = IC + IL = Vi jωC + Vi/(jωL + R) = Vi (jωC + 1/(jωL + R))        (2.2.1)

where the input voltage Vi is the product of the driving current Ii and the input impedance
Zi (represented by the expression in parentheses). The output voltage is:

    Vo = IL R = Vi R/(jωL + R)        (2.2.2)

From these equations we obtain the transfer function:

    Vo/Ii = [Vi R/(jωL + R)] / [Vi (jωC + 1/(jωL + R))] = R / (jωC (jωL + R) + 1)

          = R / (−ω² LC + jωRC + 1)        (2.2.3)

Let us set Ii = 1 V/R and L = mR²C, where m is a dimensionless parameter; also let us
substitute jω with s. With these substitutions the output voltage Vo = F(s) becomes:

    F(s) = 1/(s² mR²C² + sRC + 1) = (1/(mR²C²)) · 1/(s² + s/(mRC) + 1/(mR²C²))        (2.2.4)

The denominator roots, which for an efficient peaking must be complex conjugates,
as in Fig. 2.2.2, are the poles of F(s):

    s1,2 = σ1 ± jω1 = −1/(2mRC) ± √(1/(4m²R²C²) − 1/(mR²C²))        (2.2.5)

[Figure: pole loci in the complex plane, showing the layouts for m = 0, m = 0.25,
m = 0.33 and m = 0.5, the pole angle θ and the modulus M]

Fig. 2.2.2: The poles s1 and s2 in the complex plane. If the parameter m = 0, the poles are
s1 = −1/RC and s2 = −∞. By increasing m, they travel along the real axis towards each
other and meet at s1 = s2 = −2/RC (for m = 0.25). Increasing m further, the poles split
into a complex conjugate pair travelling along the circle, the radius of which is r = 1/RC and
its center at σ = −r. The figure on the right shows the four characteristic layouts, which are
explained in detail in the text.
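Eq. 2.2.5 can be checked with a few lines of code. A sketch in Python (the helper name and the normalization RC = 1 are our own choices; `cmath.sqrt` handles the transition from real to complex conjugate poles automatically):

```python
import cmath

# Poles of the two-pole series peaking network, Eq. 2.2.5, with L = m*R^2*C.
# With RC = 1 the poles come out in units of 1/RC.
def series_peaking_poles(m, RC=1.0):
    # s1,2 = (-1 +/- sqrt(1 - 4m)) / (2 m RC); the root is imaginary for m > 0.25
    root = cmath.sqrt(1.0 - 4.0 * m)
    s1 = (-1.0 + root) / (2.0 * m * RC)
    s2 = (-1.0 - root) / (2.0 * m * RC)
    return s1, s2

s1_mfa, s2_mfa = series_peaking_poles(0.5)       # Butterworth: -1 +/- j
s1_mfed, _ = series_peaking_poles(1.0 / 3.0)     # Bessel: -1.5 + j0.866
s1_cd, s2_cd = series_peaking_poles(0.25)        # critical damping: double pole -2
```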

With these poles we may write Eq. 2.2.4 also in the following form:

    F(s) = (1/(mR²C²)) · 1/((s − s1)(s − s2))        (2.2.6)

At DC (s = 0) Eq. 2.2.6 shrinks to:

    F(0) = (1/(mR²C²)) · 1/(s1 s2)        (2.2.7)

By dividing Eq. 2.2.6 by Eq. 2.2.7, we obtain the amplitude normalized transfer function:

    F(s) = s1 s2 / ((s − s1)(s − s2))        (2.2.8)

We shall need this expression for the calculation of the step response. But for the
frequency response F(jω) we replace both poles by their components from Eq. 2.2.5 and
group the imaginary parts to obtain:

    F(jω) = (σ1² + ω1²) / ([−σ1 + j(ω − ω1)] [−σ1 + j(ω + ω1)])        (2.2.9)

We are often interested in the magnitude, |F(ω)|, which we obtain by multiplying F(jω)
by its own complex conjugate and then taking the root:

    |F(ω)| = √(F(jω)·F*(jω)) = (σ1² + ω1²) / √([σ1² + (ω − ω1)²] [σ1² + (ω + ω1)²])        (2.2.10)

The next step is the calculation of the parameter m. Its value depends on the type of
poles we want to have, which in turn depends on the intended application of the amplifier.
As a general rule, for sine wave signal amplification we prefer the Butterworth poles whilst


for pulse amplification we prefer the Bessel poles. If high bandwidth is not of primary
importance, we can use a ‘critically damped’ system for a zero overshoot step response.
Other types of poles are optimized for use in filters, in which our primary goal is to
selectively amplify only a part of the spectrum. Poles are discussed in Part 4 (derived from
some chosen optimization criteria) and Part 6 (computer algorithms).

2.2.1 Butterworth Poles for Maximally Flat Amplitude Response (MFA)

We shall calculate the actual values of the poles, as well as the parameter m, by
using Eq. 2.2.5 where we factor out 1/(2mRC). If the square root of Eq. 2.2.11 is
imaginary, which is true for m > 0.25, we can also factor out the imaginary unit:

    s1,2 = (1/(2mRC)) (−1 ± √(1 − 4m)) = (1/(2mRC)) (−1 ± j√(4m − 1))        (2.2.11)

We now compare this relation with the normalized 2nd-order Butterworth poles (the
reader can find them in Part 4, Table 4.3.1, or by running the BUTTAP computer routine
given in Part 6). The values obtained are σ1t = −0.7071 and ω1t = ±0.7071.

Note: From now on we will append the index ‘t’ to the poles taken from the
tables or calculated by a suitable computer program; these values are
normalized to the frequency of 1 radian per second.
Since both the real and imaginary axes of the Laplace plane have the
dimension of frequency, the pole dimension is radians per second [rad/s];
however, it has become almost a custom not to write the dimensions.
The sign is also seldom written; instead, most authors leave it to the
reader to keep in mind that the poles of unconditionally stable systems
always have the real part negative and the imaginary part either zero or
both positive and negative, forming a complex conjugate pair.
To make it easier for the reader, we shall always have the symbols σ
and ω signed as required by the mathematical operation to be performed,
whilst the numerical values within the symbols will always be negative for σ
and positive for ω. For example, we shall express a complex conjugate pole
pair (s1, s2) = (s1, s1*) as:

    s1 = σ1 + jω1 = −0.7071 + j 0.7071
    s2 = σ2 + jω2 = −0.7071 − j 0.7071
       ⇒ s2 = σ1 − jω1 = s1*

A real pole will be given as:

    s3 = σ3 = −1.0000

Each σi and ωi will bear the index of the pole si (and not their table
order number). We shall use the odd index for complex conjugate pair
components (with the appropriate +/− sign for the imaginary part).

In order to have the same response, the poles of Eq. 2.2.11 must be proportional to
those from the tables, so the ratio of their imaginary to the real part must be the same:


    ℑ{s1t}/ℜ{s1t} = ℑ{s1}/ℜ{s1}   ⇒   ω1t/σ1t = ω1/σ1   ⇒   0.7071/(−0.7071) = √(4m − 1)/(−1) = −1        (2.2.12)

and the same is true for s2 (except the sign). By squaring Eq. 2.2.12 it follows
that the value of m which satisfies our requirement for the Butterworth poles must be:

    m = 0.5        (2.2.13)

Thus the inductance is:

    L = mR²C = 0.5 R²C        (2.2.14)

Finally, by inserting the value of m back into Eq. 2.2.11, the poles of our system are:

    s1,2 = σ1 ± jω1 = (1/RC)(−1 ± j)        (2.2.15)
The value 1/RC = ωh is equal to the upper half power frequency of the non-peaking
amplifier of Fig. 2.1.1 (at this frequency, since power is proportional to voltage squared, the
voltage gain drops to 1/√2 = 0.7071). If we put 1/RC = 1 (or R = 1 Ω and C = 1 F,
or R = 500 kΩ and C = 2 μF, or any other similar combination, provided that it can be
driven by the signal source), we obtain the normalized (denoted by the index ‘n’) poles:

    s1n,2n = σ1n ± jω1n = −1 ± j        (2.2.16)

If we use normalized poles, we must also normalize the frequency: jω/ωh instead of jω.

Note: It is important not to confuse our system with normalized poles (Eq. 2.2.16)
with the system having normalized Butterworth poles taken from the table
(s1t, s2t = −0.707 ± j 0.707). Although both are Butterworth-type and both are
normalized, they differ in bandwidth:

    √(s1t s2t) = 1   whilst   √(s1n s2n) = √2        (2.2.17)

This will become evident soon in Sec. 2.2.4, where we shall calculate and plot the
magnitude (absolute value) of the frequency response.

2.2.2 Bessel Poles for Maximally Flat Envelope Delay (MFED) Response

From Table 4.4.3 in Part 4 (or by using the BESTAP routine in Part 6), the poles
for the 2nd-order Bessel system are σ1t = −1.5000 and ω1t = ±0.8660. Then, as for the
Butterworth case above, the ratio of their imaginary to real component is:

    ℑ{s1}/ℜ{s1} = ω1t/σ1t   ⇒   √(4m − 1)/(−1) = 0.8660/(−1.5000)        (2.2.18)

Solving for m gives:

    m = 1/3        (2.2.19)

So the inductance is:

    L = 0.33 R²C        (2.2.20)

and the poles are:

    s1,2 = (1/RC)(−1.5 ± j 0.866)        (2.2.21)

2.2.3 Critical Damping (CD)

In this case both poles are real and equal, so the imaginary part in Eq. 2.2.11 (the
square root) must be zero:

    4m − 1 = 0   ⇒   m = 0.25        (2.2.22)

from which the inductance is:

    L = 0.25 R²C        (2.2.23)

resulting in a double real pole:

    s1,2 = −2/RC        (2.2.24)

In general the parameter m may be calculated with the aid of Fig. 2.2.2, where both
poles and the angle θ are shown. If the poles are expressed by Eq. 2.2.11:

    tan θ = ℑ{s1}/ℜ{s1} = ω1/σ1 = √(4m − 1)/(−1)        (2.2.25)

and from this we obtain:

    m = (1 + tan² θ)/4        (2.2.26)

which is also equal to 1/(4 cos² θ), as can be found in some literature. We prefer Eq. 2.2.26.
Now we have all the data needed for further calculations of the frequency, phase,
time delay, and step responses.
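As a quick numerical illustration of Eq. 2.2.26, a sketch (our own helper; the angle supplied here is the acute angle between the pole radius and the negative real axis, 45° for the Butterworth pair, 30° for the Bessel pair, and 0° for the double real pole):

```python
import math

# Eq. 2.2.26: m = (1 + tan^2(theta)) / 4, equivalently 1/(4 cos^2(theta)).
def m_from_angle(theta_deg):
    t = math.tan(math.radians(theta_deg))
    return (1.0 + t * t) / 4.0

m_mfa = m_from_angle(45.0)   # Butterworth poles at 45 deg -> m = 0.5
m_mfed = m_from_angle(30.0)  # Bessel poles at 30 deg -> m = 1/3
m_cd = m_from_angle(0.0)     # double real pole -> m = 0.25
```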

2.2.4 Frequency Response Magnitude

We have already written the magnitude in Eq. 2.2.10. Here we will use the
normalized frequency ω/ωh:

    |F(ω)| = (σ1n² + ω1n²) / √([σ1n² + (ω/ωh − ω1n)²] [σ1n² + (ω/ωh + ω1n)²])        (2.2.27)

This equation is normalized in magnitude, as |F(ω)| = 1 for ω = 0, and in
frequency to the upper half power frequency ωh of the non-peaking system.
Inserting the pole types of MFA, MFED, and CD, and the frequency in the range
0.1 < (ω/ωh) < 10, we obtain the diagrams in Fig. 2.2.3.
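Eq. 2.2.27 is equivalent to evaluating the magnitude of Eq. 2.2.8 at s = jω, which is the more compact form to code. A sketch (our own helper names); for the MFA poles, the value 1/√2 at ω/ωh = √2 confirms the bandwidth improvement visible in Fig. 2.2.3:

```python
import math

# |F(jw)| from the normalized poles s1n, s2n (w in units of wh = 1/RC).
def magnitude(w, s1, s2):
    jw = 1j * w
    return abs(s1 * s2 / ((jw - s1) * (jw - s2)))

s1, s2 = -1 + 1j, -1 - 1j               # Butterworth (MFA) poles, m = 0.5
m_dc = magnitude(0.0, s1, s2)           # normalized DC gain: 1
m_bw = magnitude(math.sqrt(2), s1, s2)  # half-power point at w/wh = sqrt(2)
```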

2.2.5 Upper Half Power Frequency

An important amplifier parameter is its upper half power frequency, which we shall
name ωH for the peaking amplifier (in contrast to ωh in the non-peaking case). This is the
frequency at which the output voltage Vo drops to VoDC/√2, where VoDC is the output


voltage at DC (ω = 0), or, if normalized, to 1 V/√2. Since the power is proportional to
the square of the voltage, the normalized output power Po = (1 V)²/2, which is one half of
the output power at DC. We can calculate the upper half power frequency from Eq. 2.2.27,
by inserting ω = ωH; the result must be 1/√2:

    |F(ωH)| = (σ1² + ω1²) / √([σ1² + (ωH − ω1)²] [σ1² + (ωH + ω1)²]) = 1/√2        (2.2.28)

We shall use the term upper half power frequency intentionally, rather than the term
upper 3 dB frequency, which is commonly found in the literature. Whilst it has become a
custom to express the amplifier gain in dB, the dB scale (the log of the output-to-input
power ratio) implies that the driving circuit, which supplies the current Ii to the input, has
the same internal resistance as the loading resistor V . This is not the case in most of the
circuits which we shall discuss.
[Figure: normalized magnitude Vo/(Ii R) vs. ω/ωh, for L = 0 and L = mR²C with
a) m = 0.50, b) m = 0.33, c) m = 0.25; all peaking curves cross the 0.7071 level above
ωh = 1/RC]

Fig. 2.2.3: Frequency response magnitude of the two-pole series peaking circuit for some
characteristic values of m: a) m = 0.5 is the maximally flat amplitude (MFA) response; b)
m = 0.33 is the maximally flat envelope delay (MFED) response; c) m = 0.25 is the critical
damping (CD) case; the non-peaking case (m = 0 ⇒ L = 0) is the reference. The bandwidth of all
peaking responses is improved compared to the non-peaking bandwidth ωh at Vo/Ii R = 0.7071.

For a series peaking circuit the calculation of ωH is relatively easy. The calculation
becomes progressively more difficult for more sophisticated networks, where more poles
and sometimes even zeros are introduced. In such cases it is better to use a computer, and in
Part 6 we have presented the development of routines which the reader can use to calculate
the various response functions.
If we solve Eq. 2.2.28 for ωH/ωh we can define [Ref. 2.2, 2.4]:

    ηb = ωH/ωh        (2.2.29)


The value ηb is the cut off frequency improvement factor, defined as the ratio of the
system upper half power frequency against that of the non-peaking amplifier (and, since the
lower half power frequency of a wideband amplifier is generally very low, usually flat
down to DC, we may call ηb also the bandwidth improvement factor). In Table 2.2.1 at the
end of this section the bandwidth improvement factors and other data for different values
of the parameter m are given.
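For the two-pole case Eq. 2.2.28 can be solved in closed form, but a numerical search generalizes to any pole set. A sketch using simple bisection (valid here because all three magnitude responses decrease monotonically; the function names and search bracket are our own choices, and the resulting ηb values are our numerical results, quoted to three decimals):

```python
import math

# Solve |F(wH)| = 1/sqrt(2), Eq. 2.2.28, for the normalized poles s1, s2;
# returns eta_b = wH/wh of Eq. 2.2.29.
def eta_b(s1, s2):
    target = 1.0 / math.sqrt(2.0)
    def mag(w):
        return abs(s1 * s2 / ((1j * w - s1) * (1j * w - s2)))
    lo, hi = 0.0, 10.0            # mag(0) = 1 > target, mag(10) < target here
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if mag(mid) > target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

eta_mfa = eta_b(-1 + 1j, -1 - 1j)                 # MFA:  sqrt(2) = 1.414
eta_mfed = eta_b(-1.5 + 0.866j, -1.5 - 0.866j)    # MFED: about 1.362
eta_cd = eta_b(-2.0 + 0j, -2.0 + 0j)              # CD:   about 1.287
```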

2.2.6 Phase Response

We calculate the phase angle φ of the output voltage Vo referred to the input current
Ii by finding the phase shift φk(ω) of each pole sk = σk ± jωk and then summing them:

    φ(ω) = Σ_{k=1}^{n} φk(ω) = Σ_{k=1}^{n} arctan((ω ∓ ωk)/σk)        (2.2.30)

In Eq. 2.2.30 we have the ratio of the imaginary part to the real part of the pole, so
the pole values may be either exact or normalized. For normalized values we must also
normalize the frequency variable as ω/ωh. Our frequency response function (Eq. 2.2.8) has
two complex conjugated poles, therefore the phase response is:

    φ(ω) = arctan((ω/ωh − ω1n)/σ1n) + arctan((ω/ωh + ω1n)/σ1n)        (2.2.31)

In Fig. 2.2.4 the phase plots corresponding to the same values of m as in Fig. 2.2.3
are shown:

[Figure: phase φ in degrees vs. ω/ωh, from 0° down to −180°, for L = 0 and
a) m = 0.50, b) m = 0.33, c) m = 0.25]

Fig. 2.2.4: Phase response of the series peaking circuit for a) MFA; b) MFED; c) CD case,
compared with the non-peaking response (L = 0). The phase angle scale was converted from
radians to degrees by multiplying it by 180/π. For ω → ∞ the non-peaking (single-pole) response
has its asymptote at −90°, whilst the second-order peaking systems have their asymptote at −180°.
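Eq. 2.2.31 translates directly into code. A sketch (our own helper; each arctangent term tends to −90° for ω → ∞, which is the origin of the −180° asymptote noted in the caption):

```python
import math

# Phase of the two-pole response, Eq. 2.2.31, from the normalized pole
# components sigma (negative) and omega1 (positive); w is in units of wh.
def phase_deg(w, sigma, omega1):
    phi = math.atan((w - omega1) / sigma) + math.atan((w + omega1) / sigma)
    return math.degrees(phi)

# Butterworth normalized poles: sigma = -1, omega1 = 1
phi_dc = phase_deg(0.0, -1.0, 1.0)      # the two terms cancel at DC: 0 degrees
phi_hi = phase_deg(1000.0, -1.0, 1.0)   # far above cutoff: close to -180 degrees
```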


2.2.7 Phase Delay and Envelope Delay

For each pole the phase delay (or the phase advance for each zero) is:

    τφ = φ/ω        (2.2.32)

If ω is the positive angular frequency with which the input signal phasor rotates,
then the angle φ by which the output signal phasor lags the input is defined in the direction
opposite to ω, meaning that, for a phase delay, φ will be negative, as in Fig. 2.2.4;
consequently τφ will also be negative. Note that τφ has the dimension of time.
Now, τφ is obviously frequency dependent, so in order to evaluate the time domain
performance of a wideband amplifier on a fair basis we are much more interested in the
‘specific’ phase delay, known as the envelope delay (also group delay), which is the frequency
derivative of the phase angle as a function of frequency:

    τe = dφ/dω        (2.2.33)

Here, too, a negative result means a delay and a positive result an advance against
the input signal. In Fig. 2.2.5 a tentative explanation of the difference between the phase
delay and the envelope delay is displayed, both in the time domain and as a phasor diagram.

[Figure: time domain traces of the input and output envelopes after the switch closes at t0,
with the corresponding phasor diagram showing the gain A(ω) and phase φ(ω)]

Fig. 2.2.5: Phase delay and envelope delay definitions. The switch S is closed at the instant t0,
applying a sinusoidal voltage with amplitude Vg to the input of the amplifier having a frequency
dependent amplitude response A(ω) and its associated phase response φ(ω). The input signal
envelope is a unit step. The output envelope lags the input by τe = dφ/dω, measured from t0 to t1,
where t1 is the instant at which the output envelope reaches 50 % of its final value. A number of
periods later (N/ω), the phase delay can be measured as the time between the input and output zero
crossings, indicated by t2 and t3, and is expressed as τφ = φ/ω. Note the phase lag being defined in
the direction opposite to the rotation ωt in the corresponding phasor diagram.

In the phase advance case, when zeros dominate over poles, the name suggests that
the output voltage will change before the input, which is impossible, of course. To see what
actually happens we apply a sinewave to two simple RC networks, low pass and high pass,
as shown in Fig. 2.2.6. Compare the phase advance case, voHP, with the phase delay case,
voLP. The input signal frequency is equal to the network cutoff, 1/RC.


[Figure: a low pass and a high pass RC network driven by the same sinewave Vg; the outputs
voLP (delayed, −τφ, first zero crossing at t1LP) and voHP (advanced, +τφ, first zero crossing
at t1HP) are compared in the time domain]

Fig. 2.2.6: Phase delay and phase advance. It is evident that both output signals undergo a phase
modulation during the first half period. The time from t0 to the first ‘zero crossing’ of the output is
shorter for voHP (t1HP) and longer for voLP (t1LP). However, both envelopes lag the input envelope. On
the other hand, the phase, measured after a number of periods, exhibits an advance of +τφ for the
high pass network and a delay of −τφ for the low pass network.

Returning to the envelope delay for the series peaking circuit, in accordance with
Eq. 2.2.33 we must differentiate Eq. 2.2.30. For each pole we have:

    dφ/dω = d/dω [arctan((ω ∓ ωi)/σi)] = σi / (σi² + (ω ∓ ωi)²)        (2.2.34)

and, as for the phase delay, the total envelope delay is the sum of the contributions of each
pole (and zero, if any). Again, if we use normalized poles and the normalized frequency,
we obtain the normalized envelope delay, τe ωh, resulting in a unit delay at DC.
For the 2-pole case we have:

    τe ωh = σ1n / (σ1n² + (ω/ωh − ω1n)²) + σ1n / (σ1n² + (ω/ωh + ω1n)²)        (2.2.35)

The plots for the same values of m as before, in accordance with Eq. 2.2.35, are
shown in Fig. 2.2.7.
For pulse amplification the importance of achieving a flat envelope delay cannot be
overstated. A flat delay means that all the important frequencies will reach the output with
unaltered relative phase, preserving the shape of the input signal as much as possible for the given
bandwidth, thus resulting in minimal overshoot of the step response (see the next section).
Also, a flat delay means that, since it is a phase derivative, the phase must be a linear
function of frequency up to the cutoff. This is why Bessel systems are often referred
to as ‘linear phase’ systems. This property cannot be seen in the log scale used here, but it
would be evident if the phase were plotted against a linearly scaled frequency; we leave it
to the curious reader to try.
In contrast, the Butterworth system shows a pronounced delay near the cut off
frequency. Conceivably, this will reveal the system resonance upon step excitation.


[Figure: normalized envelope delay τe ωh vs. ω/ωh, from 0 down to −1.2, for L = 0 and
a) m = 0.50, b) m = 0.33, c) m = 0.25]

Fig. 2.2.7: Envelope delay of the series peaking circuit for the same characteristic values of m
as before: a) MFA; b) MFED; c) CD. Note the MFED plot being flat up to nearly 0.5 ωh.
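Eq. 2.2.35 can also be used to verify the flatness claim of the caption numerically. A sketch with the MFED poles (σ1n = −1.5, ω1n = 0.866); the observation that the delay stays within about 1 % of its DC value up to ω/ωh = 0.5 is our numerical result, not a figure from the text:

```python
# Normalized envelope delay, Eq. 2.2.35: tau_e * wh as a function of w/wh.
def env_delay(w, sigma, omega1):
    return (sigma / (sigma**2 + (w - omega1)**2)
            + sigma / (sigma**2 + (w + omega1)**2))

# Bessel (MFED) normalized poles: sigma = -1.5, omega1 = 0.866
d0 = env_delay(0.0, -1.5, 0.866)   # unit delay at DC: -1
d5 = env_delay(0.5, -1.5, 0.866)   # still very close to the DC value
```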

2.2.8 Step Response

We have already derived the formula for the step response in Part 1, Eq. 1.14.29:

    g(t) = 1 + (1/|sin θ|) e^(σ1 t) sin(ω1 t + θ)        (2.2.36)

where θ is the pole angle in radians, θ = arctan(−ω1/σ1) + π (read the following Note!).

Note: We are often forced to calculate some of the circuit parameters from the
trigonometric relations between the real and imaginary components of the pole. The
Cartesian coordinates of the pole s1 in the Laplace plane are σ1 on the real axis and ω1
on the imaginary axis. In polar coordinates the pole is expressed by its modulus (the
distance of the pole from the origin of the complex plane):

    M = √((σ1 + jω1)(σ1 − jω1)) = √(σ1² + ω1²)

and its argument (angle) θ, defined so that:

    tan θ = ω1/σ1

Now, a mathematically correct definition of the positive-valued angle is counter-
clockwise from the positive real axis; so if σ1 is negative, θ will be greater than π/2.
However, the arctangent returns values only within ±π/2, since the tangent repeats with
a period of π. Therefore, by taking the arctangent, θ = arctan(ω1/σ1), we
lose the information about which half of the complex plane the pole actually lies in, and
consequently a sign can be wrong. This is bad, because the left (negative) side of the
real axis is associated with energy dissipative, that is, resistive circuit action, while the
right (positive) side is associated with energy generative action. This is why


unconditionally stable circuits have the poles always in the left half of the complex
plane.
To keep our analytical expressions simple we will keep tracking the pole layout
and correct the sign and value of the arctan( ) by adding π radians to the angle θ
wherever necessary. But in order to avoid any confusion our computer algorithm should
use a different form of equation (see Part 6).
See Appendix 2.3 for more details.

To use the normalized values of poles in Eq. 2.2.36 we must also enter the
normalized time, t/T, where T is the system time constant, T = RC. Thus we obtain:
a) for Butterworth poles (MFA):

    ga(t) = 1 + √2 e^(−t/T) sin(t/T + 0.785 + π)        (2.2.37)

b) for Bessel poles (MFED):

    gb(t) = 1 + 2 e^(−1.5 t/T) sin(0.866 t/T + 0.5236 + π)        (2.2.38)

c) for Critical damping (CD) we have a double real pole at s1, so Eq. 2.2.36 is not
valid here, because it was derived for simple poles. To calculate the step response for the
function with a double pole, we start with Eq. 2.2.8, insert the same (real!) value (s1 = s2)
and multiply it by the unit step operator 1/s. The resulting equation:

    G(s) = s1² / (s (s − s1)²)        (2.2.39)

has the time domain function:

    g(t) = ℒ⁻¹{G(s)} = Σ res [s1² e^(st) / (s (s − s1)²)]        (2.2.40)

There are two residues, res0 and res1:

    res0 = lim_{s→0} s · [s1² e^(st) / (s (s − s1)²)] = 1

For res1 we must use Eq. 1.11.12 in Part 1:

    res1 = lim_{s→s1} d/ds [(s − s1)² s1² e^(st) / (s (s − s1)²)] = e^(s1 t) (s1 t − 1)

The sum of the residues is then:

    g(t) = 1 + e^(σ1 t) (σ1 t − 1)        (2.2.41)

Eq. 2.2.39 has a double real pole s1 = σ1 = −2/RC or, normalized, σ1n = −2.
We insert this in Eq. 2.2.41 to obtain the CD step response plot (curve c, m = 0.25):

    gc(t) = 1 − e^(−2t/T) (1 + 2t/T)        (2.2.42)


The step-response plots of all three cases are shown in Fig. 2.2.8. Also shown is the
non-peaking response as the reference (L = 0). The MFA overshoot is δ = 4.3 %, while
for the MFED case it is 10 times smaller!
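The overshoot figures follow from Eq. 2.2.37 and 2.2.38 by locating the first maximum of the step response. A simple grid-search sketch (our own helpers; time is in units of T = RC):

```python
import math

# Step responses of Eq. 2.2.37 (MFA) and Eq. 2.2.38 (MFED).
def g_mfa(t):
    return 1.0 + math.sqrt(2.0) * math.exp(-t) * math.sin(t + 0.785398 + math.pi)

def g_mfed(t):
    return 1.0 + 2.0 * math.exp(-1.5 * t) * math.sin(0.866 * t + 0.523599 + math.pi)

# Overshoot delta = max(g) - 1, found on a fine time grid.
def overshoot(g, t_max=10.0, n=20000):
    return max(g(i * t_max / n) for i in range(n + 1)) - 1.0

d_mfa = overshoot(g_mfa)    # about 0.043  (4.3 %)
d_mfed = overshoot(g_mfed)  # about 0.0043 (0.43 %)
```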


[Figure: step responses vo/(ii R) vs. t/T for L = 0 and L = mR²C with a) m = 0.50,
b) m = 0.33, c) m = 0.25; the overshoot δ is marked on curve a]

Fig. 2.2.8: Step response of the series peaking circuit for the four characteristic values of m:
a) MFA; b) MFED; c) CD. The case for m = 0 (L = 0) is the reference. The MFA overshoot
is δ = 4.3 %, whilst for MFED it is only δ = 0.43 %.

2.2.9 Rise Time

The most important parameter, by which the time domain performance of a
wideband amplifier is evaluated, is the rise time. As we have already seen in Fig. 2.1.1, this
is the difference between the instants at which the step response crosses the 90 % and 10 %
levels of the final value. For the non-peaking amplifier, we have labeled this time as τr and
we have already calculated it by Eq. 2.1.4, obtaining the value 2.20 RC. The rise time of a
peaking amplifier is labeled τR.
To calculate τR we use Eq. 2.1.4. For more complex circuits, the step response
function can be rather complicated; consequently the analytical calculation becomes
difficult, and in such cases it is better to use a computer (see Part 6). The rise time
improvement against a non-peaking amplifier is:

    ηr = τr/τR        (2.2.43)

The values for the bandwidth improvement (b and for the rise time improvement (r
are similar, but in general they are not equal. In practice we more often use (b , the
calculation of which is easier. If the step response overshoot is not too large ($  # %) we
can approximate the rise time starting from the formula for the cut off frequency:
" "
=h œ # 1 0 h œ and furthermore 0h œ
VG #1VG
where =h is the upper half power frequency in [radÎs], whilst 0h is the upper half-power
frequency in Hz. We have already calculated the non-peaking rise time 7r by Eq. 2.1.4 and
found it to be #Þ#! VG . From this we obtain 7r 0h œ #Þ#!Î#1 ¸ !Þ$&, and this relation we
meet very frequently in practice:


!Þ$&
7r ¸ (2.2.44)
0h

By replacing 0h with 0H in this equation, we obtain (an estimate of) the rise time of
the peaking amplifier. But note that Eq. 2.2.44 is exact only for
the single-pole amplifier, where the load is the parallel VG network. For all other cases, it
can be used as an approximation only if the overshoot $  2 %. The overshoot of a
Butterworth two-pole network amounts to 4.3 % and it becomes larger with each additional
pole(-pair), so calculating the rise time by Eq. 2.2.44 would result in an excessive error.
Even greater error will result for networks with Chebyshev and Cauer (elliptic) system
poles. In such cases we must compute the actual system step response and find the risetime
from it. For Bessel poles, the error is tolerable since the =h -normalized Bessel frequency
response closely follows the first-order response up to =h . Even so, using a computer to
obtain the rise time from the step response yields more accurate results.
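As a sketch of that numerical procedure (Python with NumPy assumed), the 10 %–90 % rise time can be read directly off a computed step response; for the non-peaking RC stage it reproduces both τr ≈ 2.20 RC and the τr·fh ≈ 0.35 relation:

```python
import numpy as np

# 10%-90% rise time of the non-peaking stage, g(t) = 1 - exp(-t/RC).
# Analytically tau_r = RC*ln(9) ~ 2.20*RC; here it is extracted numerically,
# the same way one would for a peaking stage with a more complicated g(t).
RC = 1.0
t = np.linspace(0.0, 10.0*RC, 100001)
g = 1 - np.exp(-t/RC)                 # monotonically rising step response

t10 = t[np.searchsorted(g, 0.1)]      # instant of the 10 % crossing
t90 = t[np.searchsorted(g, 0.9)]      # instant of the 90 % crossing
tau_r = t90 - t10

f_h = 1.0/(2*np.pi*RC)                # upper half-power frequency in Hz
print(round(tau_r, 2), round(tau_r*f_h, 2))
```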

2.2.10 Input Impedance

We shall use the series peaking network also as an addition to T-coil peaking. This
is possible since the T-coil network has a constant input impedance (the T-coil is discussed
in Sec. 2.4, 2.5 and 2.6). Therefore it is useful to know the input impedance of the series
peaking network. From Fig. 2.2.1 it is evident that the input impedance is a capacitor G in
parallel with the serially connected P and V :

" 4=P  V
^i œ œ (2.2.45)
4=G  "Îa4=P  V b "  =# PG  4=VG

It would be inconvenient to continue with this expression. To simplify we substitute


P œ 7V# G and =h œ "ÎVG , obtaining:

4=
"  7Œ 
=h
^i œ V # (2.2.46)
= 4=
"  7Œ  
=h =h

By making the denominator real and carrying out some further rearrangement we obtain:

#
4= # =
"Œ  Ð7  "Ñ  7 Œ  —
=h – =h
^i œ V # % (2.2.47)
= # =
"  Ð"  #7ÑŒ  7 Œ 
=h =h

and the phase angle is:

#
ee^i f = # =
: œ arctan œ arctan Œ  Ð7  "Ñ  7 Œ  —Ÿ (2.2.48)
d e^i f =h – =h


The normalized impedance modulus is:


# #
k^i k ^i ^i
œ Ëd œ   eœ 
V V V
Í
Í # # #
Í = #
=
"Œ  –Ð7  "Ñ  7 Œ  —
Ì =h =h
œ # 4
(2.2.49)
= # =
"  Ð"  # 7ÑŒ  7 Œ 
=h =h

In Fig. 2.2.9 the plots of Eq. 2.2.49 and Eq. 2.2.48 for the same values of 7 as
before are shown:

[Plot: normalized input impedance modulus |Zi|/R (log scale) and phase φ (0° to −90°) versus ω/ωh; curves a) m = 0.50, b) m = 0.33, c) m = 0.25 for L = m R²C, with the L = 0 case as reference; ωh = 1/RC]
Fig. 2.2.9: Input impedance modulus (normalized) and the associated phase angle of the
series peaking circuit for the characteristic values of 7. Note that for high frequencies the
input impedance approaches that of the capacitance. a) MFA; b) MFED; c) CD.
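Instead of evaluating the rationalized forms of Eq. 2.2.47–2.2.49, the input impedance of Eq. 2.2.45 can be checked directly in complex arithmetic; a minimal sketch (Python with NumPy, R = C = 1 so that ωh = 1) confirms the limiting behaviour — resistive at low frequencies, capacitive (modulus 1/ωC, phase approaching −90°) at high frequencies:

```python
import numpy as np

R, C = 1.0, 1.0
m = 0.5                                # MFA value; 0.33 and 0.25 behave alike
L = m * R**2 * C

w = np.array([0.01, 1.0, 100.0])       # frequencies in units of omega_h = 1/RC
Zi = 1.0/(1j*w*C + 1.0/(1j*w*L + R))   # C in parallel with the series L and R

print(np.round(np.abs(Zi)/R, 3))       # modulus: ~R at low, ~1/(wC) at high frequencies
print(np.round(np.degrees(np.angle(Zi)), 1))
```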

Table 2.2.1 shows the design parameters of the two-pole series peaking circuit:
Table 2.2.1
response 7 (b (r $ Ò%Ó
MFA 0.50 1.41 1.49 4.30
MFED 0.33 1.36 1.39 0.43
CD 0.25 1.29 1.33 0.00
Table 2.2.1: 2nd -order series peaking circuit parameters summarized: 7 is the
inductance proportionality factor; (b is the bandwidth improvement; (r is the
risetime improvement; and $ is the step response overshoot.


2.3 Three Pole Series Peaking Circuit

In a practical amplifier we cannot have a pure two-pole series-peaking circuit. The


output of the amplifier is always connected to something, be it the next amplifying stage or,
say, a cathode ray tube. Any device connected to the output will have at least some
capacitance. Therefore the series peaking circuit shown in Fig. 2.3.1 is what we generally
encounter in practice. Here we have three independent reactive elements (two capacitors
and one inductor), so the circuit has three poles. In order to extract the greatest possible
bandwidth from this circuit, the value of the input capacitor Gi , which is in parallel to the
loading resistor V , must always be smaller than the loading capacitance G . Since the
network is reciprocal, which means we may exchange the input and the output, the
condition Gi  G can always be met. As we will see later, the ratio GÎGi depends on the
pole pattern selected and it can not be chosen at random.

[Schematic: current source ii drives Ci in parallel with R; L connects this input node to the output vo across C]
Fig. 2.3.1: The three-pole series peaking circuit.

We shall calculate the network transfer function from the input admittance:

" "
]i œ 4= Gi   (2.3.1)
V "
 4= G
4= P
The input impedance is then:
" VÐ"  =# PG Ñ
^i œ œ (2.3.2)
]i Ð"  4= Gi VÑÐ"  =# PG Ñ  4= G V
The input voltage is:
Zi œ Mi ^i (2.3.3)

and the output voltage is:


Zo œ + Zi œ + Mi ^i (2.3.4)

where + is the voltage attenuation caused by the elements P and G :


"
4=G "
+œ œ (2.3.5)
" "  =# PG
 4=P
4=G
If we insert Eq. 2.3.2 and Eq. 2.3.5 into Eq. 2.3.4, we obtain:
V
Zo œ Mi (2.3.6)
"  4=VÐG  Gi Ñ  =# PG  4=$ Gi GPV


Since Mi V is the voltage at zero frequency, we can obtain the amplitude-normalized transfer
function by dividing Eq. 2.3.6 by Mi V:
"
J Ð=Ñ œ (2.3.7)
"  4=VÐG  Gi Ñ  =# PG  4 =$ Gi GVP

Let us now make the following three substitutions:


G "
P œ 7 V # ÐG  Gi Ñ 8œ =h œ (2.3.8)
G  Gi VÐG  Gi Ñ

where =h is the upper half power frequency of the non-peaking case (P œ !). With these
substitutions we obtain the function which is normalized both in amplitude and in
frequency (to the non-peaking system cut off):
"
J Ð=Ñ œ # $ (2.3.9)
= = =
"4  78Œ   4 7 8 Ð"  8ÑŒ 
=h =h =h

Since the denominator is a 3rd -order polynomial we have three poles, one of which
must be real and the remaining two should be complex conjugated (readers less
experienced in mathematics can find the general solutions for polynomials of 1st -, 2nd -, 3rd -
and 4th -order in Appendix 2.1 ). Here we shall show how to calculate the required
parameters in an easier way. The magnitude is:
"
¸J Ð=Ѹ œ (2.3.10)
# #
ÊŠd eJ Ð=Ñf‹  ŠeeJ Ð=Ñf‹

By rearranging the real and imaginary parts in Eq. 2.3.9 and inserting them into
Eq. 2.3.10, we obtain:
"
¸J Ð=Ѹ œ Í (2.3.11)
Í # # $ #
Í = = =
"  7 8 Œ  —   7 8 a "  8b Œ  —
Ì– =h –=
h =h

The squaring of both expressions under the root gives:


"
¸J a;b¸ œ
É"  a"  # 7 8b;#  7 8 c7 8  #a"  8bd;%  7# 8# a"  8b# ;'
(2.3.12)
where we have used ; œ =Î=h in order to be able to write the equation on a single line.

2.3.1 Butterworth Poles (MFA)

The magnitude of the normalized frequency response for a three-pole Butterworth


function is:
"
¸J Ð=Ѹ œ (2.3.13)
'
=
Ë"  Œ 
=h


By comparing Eq. 2.3.13 with Eq. 2.3.12 we realize that the factors at Ð=Î=h Ñ# and
at Ð=Î=h Ñ% in Eq. 2.3.12 must be zero if we want the function to correspond to Butterworth
poles:
"#78 œ ! and 7 8  #Ð"  8Ñ œ !
Ê 7 œ #Î$ and 8 œ $Î% (2.3.14)

With these data we can calculate the actual values of Butterworth poles and the
upper half power frequency. By inserting 7 and 8 into Eq. 2.3.12 and, considering that
now the coefficients at Ð=Î=h Ñ# and at Ð=Î=h Ñ4 are zero, we obtain the frequency response;
its plot is shown in Fig. 2.3.2 as curve a.
To calculate the poles we insert the values for 7 and 8 into Eq. 2.3.9 and by
inserting = instead of 4=Î=h , the denominator of Eq. 2.3.9 gets the form:
W œ !Þ"#& =$  !Þ& =#  =  " (2.3.15)
To obtain the canonical form we divide this equation by 0.125. Then to find the roots we
equate it to zero:
=$  % =#  ) =  ) œ ! (2.3.16)

The roots of this function are the normalized poles of the function J Ð=Ñ:

="n , =#n œ 5"n „ 4 ="n œ  " „ 4 "Þ($#"


=$n œ 5$n œ  # (2.3.17)

The values are normalized to =h , considering that =h œ "ÎVÐGi  GÑ œ " .


All Butterworth poles lie on a circle which has a radius equal to the system upper
half-power frequency; the real pole =$n also lies on the same circle. Now remember that the
poles in tables are normalized to =h œ " rad/s, so their radius is equal to ". This means that
(for Butterworth poles only!) =$n Î=$t is already the bandwidth improvement ratio,
=H Î=h œ (b , and in our case it is equal to 2 (we obtain the same value from the factor at
a=Î=h b' of Eq.2.3.12, "ÎÈ '
7# 8# Ð"  8Ñ# œ # ).
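Both MFA conditions of Eq. 2.3.14 and the resulting pole set can be confirmed in a few lines (Python with NumPy assumed, as an illustration):

```python
import numpy as np

# MFA conditions (Eq. 2.3.14): 1 - 2mn = 0 and mn - 2(1 - n) = 0
n = 3.0/4.0
m = 1.0/(2.0*n)                        # = 2/3
assert abs(m*n - 2*(1 - n)) < 1e-12    # the second condition is satisfied too

# Normalized poles: roots of Eq. 2.3.16, s^3 + 4 s^2 + 8 s + 8 = 0
poles = np.roots([1.0, 4.0, 8.0, 8.0])
print(np.round(sorted(poles, key=lambda s: s.imag), 4))

# All three poles lie on a circle of radius 2, hence eta_b = 2
print(np.round(np.abs(poles), 4))
```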

2.3.2. Bessel Poles (MFED)

The ‘classical’ way of calculating the parameters 7 and 8 for Bessel poles is first to
derive the formula for the envelope delay, 7e œ .:Î.=. This is a rational function of =. By
equating the two coefficients in the numerator with the corresponding two in the
denominator polynomial we obtain two equations from which both parameters may be
calculated. However, this is a lengthy and error-prone procedure. A more direct and easier
way is as follows: in the literature [e.g. Ref. 2.10, 2.11], or with an appropriate computer
program (as in Part 6, BESTAP), we look for the Bessel 3rd -order polynomial:

FÐ=Ñ œ =$  ' =#  "& =  "& (2.3.18)

The canonical form of the denominator of Eq. 2.3.9, with = instead of 4=Î=h , is:

=# = "
W œ =$    (2.3.19)
"8 7 8 Ð"  8Ñ 7 8 Ð"  8Ñ


The functions in Eq. 2.3.18 and Eq. 2.3.19 must be the same. This is only possible if the
corresponding coefficients are equal. Thus we may write the following two equations:
" "
œ' and œ "& (2.3.20)
"8 7 8 Ð"  8Ñ

This gives the following values for the parameters:

7 œ !Þ%)! and 8 œ !Þ)$$ (2.3.21)

The roots of Eq. 2.3.18 (or Eq. 2.3.19, with the above values for 7 and 8) are the Bessel
poles of the function J Ð=Ñ:
="n,#n œ 5"n „ 4 ="n œ  "Þ)$)* „ 4 "Þ(&%%
=$n œ 5$n œ  #Þ$### (2.3.22)

Note that the same values are obtained from the pole tables (or by running the
BESTAP routine in Part 6); in general, for Bessel poles, =5 n œ =5 t .
With these poles the frequency response, according to Eq. 2.3.11, results in the
curve b in Fig. 2.3.2. The Bessel poles are derived from the condition that the transfer
function has a unit envelope delay at the origin, so there is no simple way of relating it to
the upper half power frequency =H . We need to calculate lJ Ð=Ñl numerically for a range,
say, "  =Î=h  $, using either Eq. 2.3.11 or the FREQW algorithm in Part 6, and find =H
from it. The bandwidth improvement factor for Bessel poles is given in Table 2.3.1 at the
end of this section.
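The parameter values and poles quoted above follow directly from Eq. 2.3.20 and Eq. 2.3.18; a short numerical cross-check (Python with NumPy, illustrative sketch):

```python
import numpy as np

# MFED parameters from Eq. 2.3.20: 1/(1 - n) = 6 and 1/(m*n*(1 - n)) = 15
n = 1.0 - 1.0/6.0                      # = 0.833
m = 1.0/(15.0*n*(1.0 - n))             # = 0.480
print(round(m, 3), round(n, 3))

# The poles are the roots of the 3rd-order Bessel polynomial, Eq. 2.3.18
poles = np.roots([1.0, 6.0, 15.0, 15.0])
print(np.round(sorted(poles, key=lambda s: s.imag), 4))
```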

[Plot: magnitude Vo/(Ii R) versus ω/ωh (log–log); curves a) m = 0.67, C/Ci = 3, b) m = 0.48, C/Ci = 5, c) m = 0.67, C/Ci = 2 for L = m R²C, with the L = 0 case as reference; ωh = 1/R(C + Ci)]
Fig. 2.3.2: Frequency response of the third-order series peaking circuit for different values of
7. The correct setting for the required pole pattern is achieved by the input to output
capacitance ratio, GÎG3 . Fair circuit performance comparison is met by normalization to the
total capacitance G  G3 . Here we have: a) MFA; b) MFED; c) SPEC, and the non-peaking
(P œ !) case as a reference. Although being of highest bandwidth, the SPEC case is non-
optimal, owing to the slight but notable dip in the range 0.5  =Î=h  "Þ# .


2.3.3 Special Case (SPEC)

In practice it is sometimes difficult to achieve the capacitance ratio G ÎGi required


for Butterworth or for Bessel poles. Let us see what the frequency response would be if we
take the capacitance ratio G ÎGi œ #, which we shall call a special case (SPEC). This
makes both parameters equal, 7 œ 8 œ !Þ''(, and the canonical form of the denominator
in Eq. 2.3.9, where Ð4=Î=h Ñ œ =, is then:
W œ =$  $ =#  'Þ(& =  'Þ(& œ ! (2.3.23)
Its roots are the required poles:
="n,#n œ 5"n „ 4 ="n œ  !Þ(&!! „ 4 "Þ*)%$
=$n œ 5$n œ  "Þ&!!! (2.3.24)

The corresponding frequency response is the curve c in Fig. 2.3.2. This gives a
bandwidth improvement (b œ #Þ#), which sounds very fine if there were not a small dip in
the range !Þ& < Ð=Î=h Ñ < "Þ#. So we regrettably realize that the ratio G ÎGi cannot be
chosen at random. The aberrations are even greater for the envelope delay and the step
response, as we shall see later.
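With the exact values m = n = 2/3, the canonical coefficients of Eq. 2.3.19 are 1/(1 − n) = 3 and 1/(mn(1 − n)) = 27/4 = 6.75, from which the SPEC pole set follows directly (Python with NumPy, illustrative sketch):

```python
import numpy as np

# SPEC case: C/Ci = 2 gives n = C/(C + Ci) = 2/3 (Eq. 2.3.8), and m is set equal to n
m = n = 2.0/3.0
a2 = 1.0/(1.0 - n)                     # = 3
a1 = 1.0/(m*n*(1.0 - n))               # = 6.75, both the s-coefficient and the constant
poles = np.roots([1.0, a2, a1, a1])
print(np.round(sorted(poles, key=lambda s: s.imag), 4))
```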

2.3.4 Phase Response

For the calculation of phase response we can use Eq. 2.2.31, but we must also add
the influence of the real pole 5$n :
= = =
 = "n  ="n
=h =h =h
: œ arctan  arctan  arctan (2.3.25)
5"n 5"n 5$n

In Fig. 2.3.3 we have plotted the phase response for different values of parameters
7 and 8. Instead of the parameter 8, the ratio G ÎGi is given.
[Plot: phase φ (0° to −270°) versus ω/ωh; curves a) m = 0.67, C/Ci = 3, b) m = 0.48, C/Ci = 5, c) m = 0.67, C/Ci = 2, with the L = 0 case as reference; ωh = 1/R(C + Ci)]
Fig. 2.3.3: Phase response of the third-order series peaking circuit for different values of 7:
a) MFA; b) MFED; c) SPEC; the non-peaking (P œ !) case is the reference.


2.3.5. Envelope-delay

We apply Eq. 2.2.35 to which we add the influence of the real pole 5$n :
5"n 5"n 5$ n
7/ =h œ #  #  # (2.3.26)
= = =
5"#n  Œ  ="n  5"#n  Œ  ="n  5$#n  Œ 
=h =h =h

In Fig. 2.3.4 the corresponding plots for different values of parameters 7 and 8 are
shown; instead of 8 the ratio GÎGi is given.
[Plot: envelope delay τe ωh (0 to −2.0) versus ω/ωh; curves a) m = 0.67, C/Ci = 3, b) m = 0.48, C/Ci = 5, c) m = 0.67, C/Ci = 2, with the L = 0 case as reference; ωh = 1/R(C + Ci)]
Fig. 2.3.4: Envelope delay of the third-order series peaking circuit for some characteristic
values of 7: a) MFA; b) MFED; c) SPEC; the non-peaking (P œ !) case is the reference. Note
the MFED flatness extending beyond =h .
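Eq. 2.3.26 can also be written compactly: each pole sk contributes Re{1/(sk − jw)} to τe·ωh at the normalized frequency w = ω/ωh. Evaluating this numerically (Python with NumPy, illustrative sketch) shows that both the MFA and MFED sets start from τe·ωh = −1 at ω = 0, but only the MFED set holds this value up to ωh:

```python
import numpy as np

# Envelope delay (Eq. 2.3.26) as a sum over poles: each s_k contributes
# Re(1/(s_k - j*w)) to tau_e*omega_h at the normalized frequency w = omega/omega_h.
def tau_e(poles, w):
    return float(np.sum(np.real(1.0/(poles - 1j*w))))

bessel = np.roots([1.0, 6.0, 15.0, 15.0])   # MFED poles
butter = np.roots([1.0, 4.0, 8.0, 8.0])     # MFA poles

print(round(tau_e(bessel, 0.0), 4), round(tau_e(butter, 0.0), 4))  # both -1 at DC
print(round(tau_e(bessel, 1.0), 4), round(tau_e(butter, 1.0), 4))  # MFED stays near -1
```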

2.3.6 Step Response

The calculation is done in a way similar to the case of a two-pole series peaking
circuit. Our starting point is Eq. 2.3.9, where we consider that we have two complex
conjugate poles =" and =# , and a real pole =$ . The resulting equation must be transformed
into a similar form as Eq. 2.2.8. We need a normalized form of equation, so we must
multiply the numerator by  =" =# =$ (see Appendix 2.2). So we obtain a general form:
 =" = # = $
J Ð=Ñ œ (2.3.27)
Ð=  =" ÑÐ=  =# ÑÐ=  =$ Ñ

Since we apply a unit step to the network input, the above expression must be multiplied
by "Î= to obtain a new, fourth-order function:
 =" = # = $
KÐ=Ñ œ (2.3.28)
= Ð=  =" ÑÐ=  =# ÑÐ=  =$ Ñ


The sum of the residues of KÐ=Ñ is the step response:


$
gÐ>Ñ œ _" eKÐ=Ñf œ ! res3 eKÐ=Ñf (2.3.29)
3œ!

Since the calculation of a three-pole network step response is lengthy, we give here only
the final result. The curious reader can find the full derivation in Appendix 2.3.
5$ 5#  =#" 5$ >
gÐ>Ñ œ "  É A#  =#" B# e5" > sinÐ=" >  " Ñ  " e (2.3.30)
=" C C
where:
A œ 5" Ð5"  5$ Ñ  =#" B œ # 5"  5$
C œ Ð5"  5$ Ñ#  =#" " œ arctana  =" BÎAb  1 (2.3.31)

Note that we have written " for the initial phase angle of the resonance function,
instead of the usual ), in order to emphasize the difference between the response phase and
the angle of the complex conjugated pole pair (in two-pole circuits they have the same
value). We enter the normalized poles from Eq. 2.3.17, 2.3.22, and 2.3.24, and the
normalized time >ÎVÐGi  G Ñ œ >ÎX , obtaining the step responses (plotted in Fig. 2.3.5):
a) For Butterworth poles, where 7 œ !Þ''( and 8 œ !Þ(&! Ð" œ 1 radÑ:

ga Ð>Ñ œ "  "Þ"&& e>ÎX sinÐ"Þ($# >ÎX  1Ñ  e# >ÎX (2.3.32)


b) For Bessel poles, where 7 œ !Þ%)! and 8 œ !Þ)$$ Ð" œ #Þ&*(! radÑ:

gb Ð>Ñ œ "  "Þ)$* e"Þ)$* >ÎX sinÐ"Þ(&% >ÎX  #Þ&*(Ñ  "Þ*&" e#Þ$## >ÎX (2.3.33)
c) For our Special Case, where 7 œ 8 œ !Þ''( Ð" œ 1 radÑ:
gc Ð>Ñ œ "  !Þ(&' e!Þ(& >ÎX sinÐ"Þ*)& >ÎX  1Ñ  e"Þ& >ÎX (2.3.34)
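The same step responses can be obtained without the trigonometric bookkeeping by summing the residues numerically. The sketch below (Python with NumPy; the SPEC denominator is taken with its exact coefficient 27/4 = 6.75) also reproduces the overshoots listed later in Table 2.3.1:

```python
import numpy as np

# G(s) = (-s1 s2 s3)/(s (s-s1)(s-s2)(s-s3));  g(t) = 1 + sum_k res_k * exp(s_k t),
# the residue at s = 0 being 1 (the final value).
def step_response(poles, t):
    p = np.asarray(poles, dtype=complex)
    K = np.prod(-p)                              # numerator; normalizes g(inf) = 1
    g = np.ones_like(t, dtype=complex)
    for k in range(len(p)):
        res = K/(p[k]*np.prod(p[k] - np.delete(p, k)))
        g += res*np.exp(p[k]*t)
    return g.real

t = np.linspace(0.0, 10.0, 100001)
results = {}
for name, coefs in (("MFA",  [1.0, 4.0, 8.0, 8.0]),
                    ("MFED", [1.0, 6.0, 15.0, 15.0]),
                    ("SPEC", [1.0, 3.0, 6.75, 6.75])):
    g = step_response(np.roots(coefs), t)
    results[name] = 100.0*(g.max() - 1.0)        # overshoot in percent
    print(name, round(results[name], 1))
```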

[Plot: step response vo/(ii R) versus t/T; curves a) m = 0.67, C/Ci = 3, b) m = 0.48, C/Ci = 5, c) m = 0.67, C/Ci = 2 for L = m R²C, with the L = 0 case as reference; ωh = 1/R(C + Ci), T = 1/ωh]

Fig. 2.3.5: Step response of the third-order series peaking circuit for some characteristic
values of 7: a) MFA; b) MFED; c) SPEC; the non-peaking ÐP œ !) case is the reference. The
overshoot of both MFA and SPEC case is too large to be suitable for pulse amplification.


The pole patterns for the three response types discussed are shown in Fig. 2.3.6.
Note the three different second-order curves fitting each pole pattern: a (large) horizontal
ellipse for MFED, a circle for MFA, and a vertical ellipse for the SPEC case.

[s-plane diagram: pole locations of the MFA, MFED, and SPEC sets (values as given in Eq. 2.3.17, 2.3.22 and 2.3.24) with their fitting circle and ellipses; T = R(C + Ci)]

Fig. 2.3.6: Pole patterns of the 3-pole series peaking circuit for the MFA, the MFED, and the
SPEC case. The curves on which the poles lie are: a circle with the center at the origin for MFA;
an ellipse with both foci on the real axis (the nearer at the origin) for the MFED; and an ellipse
with both foci on the imaginary axis for the SPEC case (which is effectively a Chebyshev-type
pole pattern). Also shown are the characteristic circles of each complex conjugate pole pair.

Table 2.3.1 resumes the parameters for the three versions of the 3-pole series
peaking circuit. Note the high overshoot values for the MFA and the SPEC case, both
unacceptable for a pulse amplifier.

Table 2.3.1
response 7 8 (b (r $%
a) MFA 0.667 0.750 2.00 2.27 8.1
b) MFED 0.480 0.833 1.76 1.79 0.7
c) SPEC 0.667 0.667 2.28 2.33 10.2

Table 2.3.1: Third-order series peaking circuit parameters.


2.4 Two-Pole T-coil1 Peaking Circuit

The circuit schematic of a two-pole T-coil peaking network is shown in Fig. 2.4.1aÞ
The main characteristic of this circuit is the center tapped coil P, which is bridged by the
capacitance Gb , consisting (ideally) of the coil’s self-capacitance [Ref. 2.3, 2.17-2.21].
Since the coils in the equivalent network, Fig. 2.4.1b, form a letter T we call it a T-coil
network. The coupling factor 5 between both halves of the coil P and the bridging
capacitance Gb must be in a certain relation, dependent on the network poles layout. In
addition the relation V œ ÈPÎG must hold in order to obtain a constant input impedance
^i œ V at any frequency [Ref. 2.18, 2.21]. This is true if the elements of the network do
not have any losses. Owing to losses in a practical circuit, the input impedance may be
considered to be constant only up to a certain frequency, which, with a careful design, can
be high enough for the application of the T-coil circuit in a wideband amplifier.

[Schematic: a) T-coil circuit — current source ii drives the center-tapped coil L (coupling factor k) bridged by Cb, output vo at the tap across C, terminated by R; b) equivalent circuit with uncoupled La, Lb and mutual inductance LM; c) generalized impedance network with branches A, B, C, D, E and current loops I1, I2, I3]
Fig. 2.4.1: a) The basic T-coil circuit: the voltage output is taken from the center tap node of the
inductance P and its two parts are magnetically coupled by the factor !  5  "; b) an equivalent
circuit, with no magnetic coupling between the coils — it has been replaced by the mutual
inductance PM ; c) a simplified generalized impedance circuit, excited by the current generator M3 ,
showing the current loops.

[Diagram: a) the two halves L1, L2 of L with coupling factor k; b) equivalent uncoupled circuit with La, Lb and −LM, where LM = k√(L1 L2), L1 = La − LM, L2 = Lb − LM, L = L1 + L2 + 2 LM]

Fig. 2.4.2: Modeling the coupling factor: a) The T-coil coupling factor 5 between the two halves
P" and P# of the total inductance P can be represented by b) an equivalent circuit, having two
separate (non-coupled) inductances, in which the magnetic coupling is modeled by the mutual
inductance PM (negative in value), so that P" œ Pa  PM and P# œ Pb  PM .

1 Networks with tapped coils were already being used for amplifier peaking in 1954 by F.A. Muller [2.16],
but since the bridging capacitance Cb is not shown in that article, the networks described do not have a
constant input impedance, as the T-coil networks discussed in this and the following three sections do.


If the output is taken from the loading resistor V , the network in Fig. 2.4.1a
behaves as an all pass network. However, for peaking purposes we take the output
voltage from the capacitor G and in this application the circuit is a low pass filter.
The equivalent network in Fig. 2.4.1b needs to be explained. We will do this with
the aid of Fig. 2.4.2. The original network has a center tapped coil whose inductance P can
be calculated by the same general equation for two coupled coils, [Ref. 2.18, 2.28]:

P œ P"  P #  # P M (2.4.1)

where P" and P# are the inductances of the respective coil parts (which, in general, need
not be equal) and PM is their mutual inductance. PM is taken twice, since the magnetic
induction from P" to P# is equal to the induction from P# to P" and both contribute to the
total. If 5 is the factor of magnetic coupling between P" and P# the mutual inductance is:

PM œ 5 È P " P # (2.4.2)

In the equivalent circuit, with no coupling between the coils, we have:

Pa œ P "  P M P b œ P#  P M (2.4.3)

Then P" and P# are:

P" œ Pa  PM P# œ P b  P M (2.4.4)

Note the negative sign of PM , which is a consequence of magnetic coupling; owing to this
the driving impedance at the center tap as seen by G is lower than without the coupling. In
the symmetrical case, when P" œ P# , we can calculate the value of P" and P# from the
required coupling 5 and total inductance P:
P
P" œ P# œ (2.4.5)
# Ð"  5Ñ

Thus we have proved that the circuits in Fig. 2.4.1a and 2.4.1b are equivalent, even though
no coupling exists between the coils in the circuit of Fig. 2.4.1b.
The corresponding generalized impedance model of the T-coil circuit is shown in
Fig. 2.4.1c, where the input voltage Zi is equal to the product of the input current and the
circuit impedance, Mi ^i . The input current splits into M" and M# . The current M$ flows in the
remaining loop. The impedances in the branches are:

A œ "Î= Gb
B œ = Pa
C œ = Pb (2.4.6)
D œ  = PM  "Î= G
EœV

We have written = instead of 4= . With these substitutions the calculation will be


much easier. Frankly, from here on, the whole calculation could be done by a suitable
computer program, but then some important intermediate results, which we want to explain
in detail, would not be shown. So we will do a hand calculation and only at the very end,
where the difficulties will increase, shall we use a computer.


We form a system of equations in accordance with the current loops in Fig. 2.4.1c:

Zi œ M" ÐB  DÑ  M# D  M$ B
! œ  M" D  M# ÐC  D  EÑ  M$ C (2.4.7)
! œ  M" B  M# C  M$ ÐA  B  CÑ

The determinant of the coefficients is:


â â
â BD D B â
â â
Jœâ D CDE C â (2.4.8)
â â
â B C ABCâ
with the solution:

J œ (B  D) [(C  D  E) (A  B  C )  C # ]
 D [  D (A  B  C )  BCÓ  B [DC  B (C  D  E )] (2.4.9)

After multiplication some terms will cancel. Thus the solution is simplified to:

J œ BCA  BDA  BEA  BEC  DCA  DEA  DEB  DEC (2.4.10)

For further calculation we shall need both cofactors J"" and J"# . The cofactor for M" is:
â â
â Zi D B â
â â
J"" œâ 0 CDE C â
â â
â 0 C ABCâ
œ Zi ÐCA  CB  DA  DB  DC  EA  EB  EC Ñ (2.4.11)

and in a similar way the cofactor for M# :


â â
â BD Zi D â
â â
J"# œâ D 0 C â
â â
â B 0 ABCâ

œ Zi (DA  DB  DC  BC ) (2.4.12)

Let us first find the input admittance, which we would like to be equal to "ÎV œ "ÎE.

M" J""
] œ œ
Zi Zi J
CA  CB  DA  DB  DC  EA  EB  EC "
œ œ (2.4.13)
BCA  BDA  BEA  BEC  DCA  DEA  DEB  DEC E

After eliminating the fractions and canceling some terms, we obtain the expression:

BCA  BDA  BEA  DCA  ECA  E # A  E # B  E # C œ ! (2.4.14)

Now we put in the values from Eq. 2.4.6, considering that Pa œ Pb , perform all the
multiplications, and arrange the terms with the decreasing powers of =. We obtain:


P#a P PM " P V#
=”Š  ‹  V # P•  Š  ‹œ! (2.4.15)
Gb Gb = G Gb Gb

or, in a general form:


= K"  =" K# œ ! (2.4.16)

This expression tells us that the input admittance can indeed be made equal to "ÎV , as we
wanted in Eq. 2.4.13. For a constant input admittance circuit, Eq. 2.4.16 must be valid for
any = [Ref. 2.21]. This is possible only if both K" and K# are zero (Ross’ method):

P#a P PM
K" œ   V# P œ ! (2.4.17)
Gb Gb

P V#
K# œ  œ! (2.4.18)
G Gb Gb

From this we obtain the following two relations:

P œ V# G
(2.4.19)
P G
PM œ  V # Gb œ V # Š  Gb ‹
% %

For the symmetrical case, with the tap at the center of the coil, Pa œ P b œ PÎ#. Since only
two parameters, G and V, are known initially, we must obtain another, independent
equation in order to calculate the parameters PM and Gb . For this we can use the
transimpedance equation, Zo ÎM" (Mi œ M1 , see Fig. 2.4.1c). From Fig. 2.4.1c it is evident
that the current difference M"  M# flows through branch D. This difference current,
multiplied by the impedance "Î=G , is equal to the output voltage Zo . The transimpedance
is then:
Zo " M"  M #
œ † (2.4.20)
M" =G M"

The currents are calculated by Cramer’s rule:

J"" J"#
M" œ and M# œ (2.4.21)
J J

and if we put these expressions into Eq. 2.4.20 we obtain:

Zo " J""  J"#


œ † (2.4.22)
M" =G J""

Again we make use of the common expressions in Eq. 2.4.6. The difference of both
cofactors is:

J""  J"# œ Zi (CA  EA  EB  EC) (2.4.23)

With these expressions, the transimpedance is:


Zo " CA  EA  EB  EC
œ † (2.4.24)
M" =G CA  CB  DA  DB  DC  EA  EB  EC

The voltage Zi is a factor of both the numerator and the denominator, so it cancels out.
Now we replace the common expressions with those from Eq. 2.4.6, express PM with
Eq. 2.4.19, perform the indicated multiplication, make the long division of the polynomials,
and the result is a relatively simple expression:

Zo V
J Ð=Ñ œ œ # # (2.4.25)
M" = V G Gb  =VG Î#  "

Although the author of this idea, Bob Ross, calculated it ‘by hand’ [Ref. 2.21], we will not
follow his example because this calculation is a formidable amount of work. With modern computer
programs (such as Mathematica [Ref. 2.34] or similar [Ref. 2.35, 2.38, 2.39, 2.40]), the
calculation takes less time than is needed to type in the data.
For those designers who want to construct a distributed amplifier using electronic
tubes or FETs (but not transistors, as we will see in Part 3!), where the resistor V is
replaced by another T-coil circuit and so forth, it is important to know the transimpedance
from the input current to the voltage ZV . The result is:

ZV =# V # G Gb  =VGÎ#  "
œV # # (2.4.26)
M" = V G Gb  =VG Î#  "

Besides the two poles on the left side of the =-plane, =": and =#: , this equation also has two
symmetrically placed zeros on the right side of the =-plane, ="D and =#D , as shown in
Fig. 2.4.3. Since Eq. 2.4.26 has equal powers of = both in the numerator and the
denominator it is an all pass response. We shall return to this when we shall calculate the
step response.
The poles are the roots of the denominator of Eq. 2.4.25. The canonical form is:

" "
=#  =  # œ! (2.4.27)
#VGb V G Gb

In general the roots are complex conjugates:

" " "


=",2 œ 5" „ 4 =" œ  „Ë  # (2.4.28)
%VGb Ð%VGb Ñ# V G Gb

By factoring out "Î%VGb we obtain a more convenient expression:

" "' Gb
=",2 œ  "„Ê" (2.4.29)
%VGb  G 


[s-plane diagram: poles s1p, s2p in the left half plane and mirrored zeros s1z, s2z in the right half plane, on circles through ±2/RC; all four move along the circles as k increases]

Fig. 2.4.3: The poles (=": and =#: ) and zeros (="D and =#D ) of the all pass transimpedance
function corresponding to Eq. 2.4.26 and Fig. 2.4.1a. By changing the bridge capacitance
Gb and the mutual inductance PM (by the coupling factor 5) according to Eq. 2.4.19, both
poles and both zeros travel along the circles shown.

An efficient inductive peaking circuit must have complex poles. By taking the
imaginary unit out of the square root, the terms within it exchange signs. Then the pole
angle ) can be calculated from the ratio of its imaginary to the real component, as we have
done before. From Fig. 2.2.4:
"' Gb
Ê "
ee=" f G
tan ) œ œ (2.4.30)
d e=" f "
This gives a general result:
"  tan# )
Gb œ G (2.4.31)
"'

The Bessel pole placement is shown in Fig. 2.4.4. The characteristic angle ) is measured
from the positive real axis.

[s-plane diagram: complex conjugate pole pair s1, s2 = σ1 ± jω1 at angles ±θ measured from the positive real axis]

Fig. 2.4.4: The layout of complex conjugate poles =" and =# of a second-order
Bessel transfer function. In this case, the angle is ) œ "&!°.

By using the pole angle, which we have calculated previously, and Eq. 2.4.31 , the
corresponding bridging capacitance can be found:
For Bessel poles:
) œ "&!° tan# ) œ "Î$ Gb œ GÎ"# (2.4.32)
For Butterworth poles:
) œ "$&° tan# ) œ " Gb œ GÎ) (2.4.33)


The corresponding mutual inductance is, according to Eq. 2.4.19:


for Bessel poles:
V# G
PM œ (2.4.34)
'
and for Butterworth poles:
V# G
PM œ (2.4.35)
)

The general expression for the coupling factor is [Ref. 2.21, 2.28, 2.33]:
PM PM
5œ œ (2.4.36)
È P" P# È (Pa  PM ) (Pb  PM )

By considering that Pa œ Pb œ PÎ# œ V # GÎ# we obtain:


G G
PM V# Š  Gb ‹  Gb
5œ œ % œ % (2.4.37)
P V# G G G
 PM  V# Š  Gb ‹  Gb
# # % %

If Gb is expressed by Eq. 2.4.31, we may derive a very interesting expression for the
coupling factor 5 :

$  tan# )
5œ (2.4.38)
&  tan# )

Since ) œ "&!° for the Bessel pole pair and "$&° for the Butterworth pole pair, the
corresponding coupling factor is:
for Bessel poles:
5 œ !Þ& (2.4.39)
for Butterworth poles:

5 œ !Þ$$ (2.4.40)

Let us calculate the parameters 5 , PM and Gb for two additional cases. If we want
to avoid any overshoot, then both poles must be real and equal. In this case ) œ ")!° and
the damping of the circuit is critical (CD). The expression under the root of Eq. 2.4.29 must
be zero and we obtain:
Gb œ GÎ"' PM œ $ V# GÎ"' 5 œ !Þ' (2.4.41)
We are also interested in the circuit values for the limiting case in which the coupling
factor 5 and consequently the mutual inductance PM are zero. Here we calculate Gb from
Eq. 2.4.31:
Gb œ GÎ% Ð5 œ !, PM œ !Ñ (2.4.42)
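Both design relations depend only on the pole angle θ: Eq. 2.4.31 gives Cb/C = (1 + tan²θ)/16 and Eq. 2.4.38 gives k. The four cases above (θ = 150° for MFED, 135° for MFA, 180° for CD, and 120° for k = 0) are confirmed by a few lines of plain Python (illustrative sketch):

```python
import math

# Cb/C = (1 + tan^2(theta))/16   (Eq. 2.4.31)
def cb_over_c(theta_deg):
    t2 = math.tan(math.radians(theta_deg))**2
    return (1.0 + t2)/16.0

# k = (3 - tan^2(theta))/(5 + tan^2(theta))   (Eq. 2.4.38)
def k_factor(theta_deg):
    t2 = math.tan(math.radians(theta_deg))**2
    return (3.0 - t2)/(5.0 + t2)

for theta in (150, 135, 180, 120):     # MFED, MFA, CD, no coupling
    print(theta, round(cb_over_c(theta), 4), round(k_factor(theta), 3))
```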

The next task is to calculate the poles for all four cases. We will show only the
calculation for Bessel poles; the others are carried out in the same way.


For the starting expression we use the denominator of Eq. 2.4.25 in the canonic
form, which we equate to zero:

$$s^2 + \frac{s}{2RC_b} + \frac{1}{R^2 C C_b} = 0 \eqno(2.4.43)$$

Now we insert C_b = C/12, which corresponds to Bessel poles; the result is:

$$s^2 + \frac{6}{RC}\,s + \frac{12}{R^2 C^2} = 0 \eqno(2.4.44)$$

By factoring out 1/RC, the roots (poles of Eq. 2.4.25) are:

$$s_{1,2} = \sigma_1 \pm j\,\omega_1 = \frac{1}{RC}\left(-3 \pm j\sqrt{3}\,\right) \eqno(2.4.45)$$

In a similar way we calculate the Butterworth poles, where C_b = C/8:

$$s_{1,2} = \sigma_1 \pm j\,\omega_1 = \frac{1}{RC}\left(-2 \pm j\,2\right) \eqno(2.4.46)$$

For critical damping (CD) the imaginary part of the poles is zero, so C_b = C/16, as found
before. The poles are:

$$s_{1,2} = \sigma_1 = -\frac{4}{RC} \eqno(2.4.47)$$

In the no-coupling case (k = 0) the bridging capacitance C_b = C/4 and the poles are:

$$s_{1,2} = \sigma_1 \pm j\,\omega_1 = \frac{1}{RC}\left(-1 \pm j\sqrt{3}\,\right) \eqno(2.4.48)$$

For all four kinds of poles the input impedance $Z_i = R = \sqrt{L/C}$ and it is independent of
frequency. Now we have all the necessary data to calculate the frequency, phase, time delay
and the step response.
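The four pole sets above follow directly from Eq. 2.4.29. As a quick numeric sketch of ours (not part of the original text; RC normalized to 1), they can be reproduced from the bridging-capacitance ratio alone:

```python
import cmath

def tcoil_poles(cb_over_c, rc=1.0):
    # Eq. 2.4.29: s1,2 = -(1/(4*R*Cb)) * (1 -+ sqrt(1 - 16*Cb/C))
    a = -1.0 / (4.0 * rc * cb_over_c)
    r = cmath.sqrt(1.0 - 16.0 * cb_over_c)
    return a * (1.0 - r), a * (1.0 + r)

for cb, label in [(1/12, "Bessel"), (1/8, "Butterworth"),
                  (1/16, "critical damping"), (1/4, "k = 0")]:
    s1, s2 = tcoil_poles(cb)
    print(f"{label:17s} Cb/C = {cb:6.4f}   s1 = {s1:.4f}   s2 = {s2:.4f}")
```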

2.4.1 Frequency Response

We can use the amplitude- and frequency-normalized Eq. 2.2.27:


$$|F(\omega)| = \frac{\sigma_{1n}^2 + \omega_{1n}^2}
{\sqrt{\left[\sigma_{1n}^2 + \left(\dfrac{\omega}{\omega_h} - \omega_{1n}\right)^{2}\right]
       \left[\sigma_{1n}^2 + \left(\dfrac{\omega}{\omega_h} + \omega_{1n}\right)^{2}\right]}}$$

By inserting the values of the normalized poles, with RC = 1 and ωh = 1/RC, we
can plot the response for each of the four types of poles, as shown in Fig. 2.4.5.
By comparing this diagram with the frequency response plot for the simple series
peaking circuit in Fig. 2.2.3, we realize that the upper cut off frequency ωH of the T-coil
circuit is exactly twice that of the two-pole series peaking circuit (comparing, of course,
the responses for the same kind of poles). For example, for Butterworth poles we had
s1n,2n = -1 ± j (Eq. 2.2.16) for the series peaking circuit, whilst here we have
s1n,2n = -2 ± j2. Thus the bandwidth improvement factor of the two-pole T-coil circuit,
compared with the single-pole (RC) circuit, is ηb = 2.83 (the ratio of the absolute values of


poles). Similarly, for the other kinds of poles the bandwidth improvement is also greater, as
reported in Table 2.4.1 at the end of this section. Owing to this property, it is worth
considering the use of a T-coil circuit whenever possible. For the same reason we shall
discuss T-coil circuits in further detail.
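The factor ηb = 2.83 can be verified numerically from Eq. 2.2.27 with the Butterworth T-coil poles; the bisection sketch below is an addition of ours (the MFA magnitude is monotonically decreasing, so bisection is safe):

```python
import math

# Butterworth T-coil poles, normalized to RC = 1 (Eq. 2.4.46): s1,2 = -2 +- j2
S1, W1 = -2.0, 2.0

def mag(w):
    # two-pole magnitude, Eq. 2.2.27 with omega_h = 1
    num = S1 * S1 + W1 * W1
    den = math.sqrt((S1 * S1 + (w - W1) ** 2) * (S1 * S1 + (w + W1) ** 2))
    return num / den

# bisect for the -3 dB point, |F| = 1/sqrt(2)
lo, hi = 1.0, 5.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if mag(mid) > 1.0 / math.sqrt(2.0) else (lo, mid)
print(f"omega_H / omega_h = {lo:.4f}")  # ~2.8284, i.e. eta_b = 2.83
```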

[Figure 2.4.5 plot: |F(ω)| vs. ω/ωh on log-log axes, with the non-peaking (L = 0) curve as
reference; legend: a) k = 0.33, Cb/C = 1/8; b) k = 0.5, Cb/C = 1/12; c) k = 0.6, Cb/C = 1/16;
d) k = 0.0, Cb/C = 1/4; L = R²C, ωh = 1/RC.]
Fig. 2.4.5: The frequency response magnitude of the T-coil, taken from the coil center tap. The
curve a) is the MFA (Butterworth) case, b) is the MFED (Bessel) case, c) is the critical damping
(CD) case and d) is the no-coupling (k = 0) case. The non-peaking (L = 0) case is the reference.
The bandwidth extension is notably larger, not only compared with the two-pole series peaking,
but also with the three-pole series peaking circuit.

2.4.2 Phase Response

Here we use Eq. 2.2.31 again:

$$\varphi = \arctan\frac{\dfrac{\omega}{\omega_h} - \omega_{1n}}{\sigma_{1n}}
          + \arctan\frac{\dfrac{\omega}{\omega_h} + \omega_{1n}}{\sigma_{1n}}$$

and, by inserting the values for the normalized poles, as we did in the calculation of the
frequency response, we obtain the plots shown in Fig. 2.4.6.

2.4.3 Envelope Delay

We use Eq. 2.2.35 again:

$$\tau_e\,\omega_h = \frac{\sigma_{1n}}{\sigma_{1n}^2 + \left(\dfrac{\omega}{\omega_h} - \omega_{1n}\right)^{2}}
                  + \frac{\sigma_{1n}}{\sigma_{1n}^2 + \left(\dfrac{\omega}{\omega_h} + \omega_{1n}\right)^{2}}$$

and, with the pole values as before, we get the Fig. 2.4.7 responses.


[Figure 2.4.6 plot: phase φ in degrees (0 to -180°) vs. ω/ωh; same circuit and legend as in
Fig. 2.4.5 (curves a-d and the L = 0 reference).]
Fig. 2.4.6: The transfer function phase angle of the T-coil circuit, for the same values of coupling
and capacitance ratio as for the frequency response magnitude. a) is MFA, b) is MFED, c) is CD
and d) is the no-coupling case. The non-peaking (L = 0) case is the reference.

[Figure 2.4.7 plot: normalized envelope delay τe·ωh (0 to -1.25) vs. ω/ωh; same circuit and
legend as in Fig. 2.4.5 (curves a-d and the L = 0 reference).]
Fig. 2.4.7: The envelope delay of the T-coil: a) MFA, b) MFED, c) CD, d) k = 0. The T-coil
circuit delay at low frequencies is exactly one half of that in the L = 0 case.


2.4.4 Step Response

We derive the step response from Eq. 2.4.25, as was done in Part 1, Eq. 1.14.29. We
take Eq. 2.2.36 for the complex poles (MFA, MFED, and the case k = 0) and Eq. 2.2.41 for
the double pole (the CD case). To make the expressions simpler we insert the numerical
values of the normalized poles and substitute t/RC = t/T:

a) for Butterworth poles (MFA), where k = 0.33 and C_b = C/8:
$$g_a(t) = 1 + \sqrt{2}\;e^{-2t/T}\sin\!\left(2\,t/T + 0.7854 + \pi\right) \eqno(2.4.49)$$
b) for Bessel poles (MFED), where k = 0.5 and C_b = C/12:
$$g_b(t) = 1 + 2\;e^{-3t/T}\sin\!\left(\sqrt{3}\,t/T + 0.5236 + \pi\right) \eqno(2.4.50)$$
c) for critical damping (CD), where k = 0.6 and C_b = C/16:
$$g_c(t) = 1 - e^{-4t/T}\left(1 + 4\,t/T\right) \eqno(2.4.51)$$
d) for k = 0 and C_b = C/4:
$$g_d(t) = 1 + \frac{2}{\sqrt{3}}\;e^{-t/T}\sin\!\left(\sqrt{3}\,t/T + 1.0472 + \pi\right) \eqno(2.4.52)$$

The plots corresponding to these four equations are shown in Fig. 2.4.8. Also shown
are the corresponding four pole patterns.
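The overshoot figures quoted in the caption of Fig. 2.4.8 can be verified numerically. The sketch below is ours, not the book's; it uses sin(x + φ + π) = −sin(x + φ) to simplify the signs of Eq. 2.4.49 and Eq. 2.4.52:

```python
import math

def g_mfa(t):   # Eq. 2.4.49, Butterworth poles (k = 0.33), T = 1
    return 1.0 - math.sqrt(2.0) * math.exp(-2.0 * t) * math.sin(2.0 * t + math.pi / 4)

def g_k0(t):    # Eq. 2.4.52, no-coupling case (k = 0), T = 1
    return 1.0 - (2.0 / math.sqrt(3.0)) * math.exp(-t) \
               * math.sin(math.sqrt(3.0) * t + math.pi / 3)

ts = [i * 0.001 for i in range(6000)]  # t/T from 0 to 6
print(f"MFA overshoot:   {max(map(g_mfa, ts)) - 1.0:.4f}")  # ~0.043 (4.3 %)
print(f"k = 0 overshoot: {max(map(g_k0, ts)) - 1.0:.4f}")   # ~0.163 (16.3 %)
```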

[Figure 2.4.8 plot: step responses vo/(ii·R) vs. t/RC for the cases a-d and the L = 0
reference (same legend as Fig. 2.4.5), with an inset of the pole patterns: all pole pairs
(s1a,2a; s1b,2b; s1c,2c double; s1d,2d) lie on a circle of diameter 4/RC in the σ, jω plane.]
Fig. 2.4.8: The step response of the T-coil circuit. As before, a) is MFA, b) is MFED, c) is CD
and d) is the case k = 0. The non-peaking case (L = 0) is the reference. The no-coupling case
has excessive overshoot, 16.3 %, but the MFA overshoot is also high, 4.3 %. Note the pole pattern
of the four cases: the closer the poles are to the imaginary axis, the greater is the overshoot. The
diameter of the circle on which the poles lie is 4/RC.


2.4.5 Step Response from Input to vR

For the application as a delay network, or if we want to design a distributed
amplifier with either electron tubes or FETs, we need to know the transmission from the
input to the load R, where we have the voltage vR. For the calculation we use Eq. 2.4.26,
which we normalize by making RC = 1. Here, in addition to the two poles in the left half
of the s-plane, we also have two symmetrically placed zeros in the right half of the s-plane:

$$s_{1,2} = \sigma_1 \pm j\,\omega_1 \qquad\qquad s_{3,4} = -\sigma_1 \pm j\,\omega_1 \eqno(2.4.53)$$

We shall write Eq. 2.4.26 in the form:

$$F(s) = \frac{s^2 R^2 C C_b - s\,RC/2 + 1}{s^2 R^2 C C_b + s\,RC/2 + 1}
      = \frac{(s - s_3)(s - s_4)}{(s - s_1)(s - s_2)} \eqno(2.4.54)$$

By multiplying with 1/s we obtain the corresponding formula for the step response in
the frequency domain:

$$G(s) = \frac{(s - s_3)(s - s_4)}{s\,(s - s_1)(s - s_2)} \eqno(2.4.55)$$

and the corresponding time function:

$$g(t) = \mathcal{L}^{-1}\{G(s)\}
       = \sum \mathrm{res}\left[\frac{(s - s_3)(s - s_4)}{s\,(s - s_1)(s - s_2)}\,e^{st}\right] \eqno(2.4.56)$$

This operation is performed by contour integration, as explained in Part 1. The
integration path must encircle all three poles; however, it is not necessary to encircle the
zeros.

Since the function has its poles and zeros arranged symmetrically with respect to
both axes, we shall express all the components with σ1 and ω1, taking care of the polarity
of each pole and zero according to Eq. 2.4.53. We have three residues:

$$\mathrm{res}_0 = \lim_{s\to 0}\;s\left[\frac{(s-s_3)(s-s_4)}{s\,(s-s_1)(s-s_2)}\,e^{st}\right]
= \frac{s_3\,s_4}{s_1\,s_2} = \frac{\sigma_1^2+\omega_1^2}{\sigma_1^2+\omega_1^2} = 1$$

$$\mathrm{res}_1 = \lim_{s\to s_1}\;(s-s_1)\left[\frac{(s-s_3)(s-s_4)}{s\,(s-s_1)(s-s_2)}\,e^{st}\right]
= \frac{(s_1-s_3)(s_1-s_4)}{s_1\,(s_1-s_2)}\,e^{s_1 t}$$
$$= \frac{\left[(\sigma_1+j\omega_1)-(-\sigma_1+j\omega_1)\right]
          \left[(\sigma_1+j\omega_1)-(-\sigma_1-j\omega_1)\right]}
         {(\sigma_1+j\omega_1)\left[(\sigma_1+j\omega_1)-(\sigma_1-j\omega_1)\right]}\;
  e^{(\sigma_1+j\omega_1)t}
= \frac{2\sigma_1}{j\omega_1}\,e^{\sigma_1 t}\,e^{j\omega_1 t} \eqno(2.4.57)$$

$$\mathrm{res}_2 = \lim_{s\to s_2}\;(s-s_2)\left[\frac{(s-s_3)(s-s_4)}{s\,(s-s_1)(s-s_2)}\,e^{st}\right]
= \frac{(s_2-s_3)(s_2-s_4)}{s_2\,(s_2-s_1)}\,e^{s_2 t}
= \frac{2\sigma_1}{-j\omega_1}\,e^{\sigma_1 t}\,e^{-j\omega_1 t}$$


The sum of all three residues is:

$$g(t) = 1 + \frac{2\sigma_1}{j\omega_1}\,e^{\sigma_1 t}\,e^{j\omega_1 t}
          + \frac{2\sigma_1}{-j\omega_1}\,e^{\sigma_1 t}\,e^{-j\omega_1 t}
      = 1 + \frac{4\sigma_1}{\omega_1}\,e^{\sigma_1 t}\,
        \frac{e^{j\omega_1 t}-e^{-j\omega_1 t}}{2j}
      = 1 + \frac{4\sigma_1}{\omega_1}\,e^{\sigma_1 t}\sin\omega_1 t \eqno(2.4.58)$$

For critical damping (CD) both zeros and both poles are real. Then s1 = s2 and
s3 = s4 = -s1. There are only two residues, which are calculated in two different ways
(because the residue of the double pole must be calculated from the first derivative):

$$\mathrm{res}_0 = \lim_{s\to 0}\;s\left[\frac{(s-s_3)^2}{s\,(s-s_1)^2}\,e^{st}\right]
= \frac{s_3^2}{s_1^2} = 1 \qquad \text{(because } s_3 = -s_1\text{)}$$

$$\mathrm{res}_1 = \lim_{s\to s_1}\frac{d}{ds}\left[(s-s_1)^2\,\frac{(s-s_3)^2}{s\,(s-s_1)^2}\,e^{st}\right]
= \lim_{s\to s_1}\frac{d}{ds}\left[\frac{(s-s_3)^2}{s}\,e^{st}\right]$$
$$= \lim_{s\to s_1}\left[\frac{2(s-s_3)}{s}\,e^{st} + \frac{(s-s_3)^2\,t}{s}\,e^{st}
 - \frac{(s-s_3)^2}{s^2}\,e^{st}\right]
= 4\,s_1 t\;e^{s_1 t} \qquad \text{(because } s_3 = -s_1\text{)} \eqno(2.4.59)$$

The sum of both residues is the time response sought. We insert the normalized
poles and put t/RC = t/T to obtain:

a) for Butterworth poles (MFA), where k = 0.33 and C_b = C/8:

$$g_a(t) = 1 - 4\,e^{-2t/T}\sin\!\left(2\,t/T\right) \eqno(2.4.60)$$

b) for Bessel poles (MFED), where k = 0.5 and C_b = C/12:

$$g_b(t) = 1 - 4\sqrt{3}\;e^{-3t/T}\sin\!\left(\sqrt{3}\,t/T\right) \eqno(2.4.61)$$

c) for critical damping (CD), where k = 0.6 and C_b = C/16:

$$g_c(t) = 1 - 16\,(t/T)\;e^{-4t/T} \eqno(2.4.62)$$

d) for the case when k = 0 and C_b = C/4:

$$g_d(t) = 1 - \frac{4}{\sqrt{3}}\;e^{-t/T}\sin\!\left(\sqrt{3}\,t/T\right) \eqno(2.4.63)$$

All four plots are shown in Fig. 2.4.9. Note the initial transition, owing to the bridging
capacitance C_b at high frequencies, the dip where the phase inversion between the high
pass and low pass sections occurs, and the transition to the final value.
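The dip is easy to quantify numerically; e.g., for the MFED case of Eq. 2.4.61, with T normalized to 1 (a sketch of ours, not from the book):

```python
import math

def g_b(t):
    # Eq. 2.4.61: g_b(t) = 1 - 4*sqrt(3)*exp(-3t/T)*sin(sqrt(3) t/T), with T = 1
    return 1.0 - 4.0 * math.sqrt(3.0) * math.exp(-3.0 * t) * math.sin(math.sqrt(3.0) * t)

print(f"g(0)  = {g_b(0.0):.4f}")  # starts at 1: Cb couples the input step through
ts = [i * 0.001 for i in range(3000)]
print(f"g_min = {min(map(g_b, ts)):.4f}")  # the dip, about -0.40
print(f"g(3)  = {g_b(3.0):.4f}")  # settled close to the final value 1
```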


[Figure 2.4.9 plot: step responses vR/(ii·R) vs. t/RC at the loading resistor, for the cases
a-d (same legend as Fig. 2.4.5); each curve starts at 1, dips below zero and settles back to 1.]

Fig. 2.4.9: The step response of the T-coil circuit, but now with the output from the loading
resistor R (this is interesting for cascading stages, as explained later). As before, a) MFA,
b) MFED, c) CD, and d) k = 0. The system has the characteristics of an all-pass filter.

All the significant data of the T-coil peaking circuit are collected in the Table 2.4.1.

Table 2.4.1
   response type    k      Cb/C    ηb      ηr      δ [%]
   a) MFA          0.33    1/8     2.83    2.98    4.30
   b) MFED         0.50    1/12    2.72    2.76    0.43
   c) CD           0.60    1/16    2.57    2.66    0.00
   d) k = 0        0.00    1/4     2.54    3.23    16.3
Table 2.4.1: Two-pole T-coil circuit parameters.

2.4.6 A T-coil Application Example

One interesting application example of a T-coil all-pass network is shown in
Fig. 2.4.10. The signal coming out of a TV camera via a 75 Ω cable must be checked on
the monitor and by the vectorscope before it enters the video modulator. The video monitor
and the vectorscope should not cause any reflections in the interconnecting cables.
Reflections would be caused mostly by the input capacitances of these devices, since their
input resistances R1, R2, R3 ≫ 75 Ω; we shall therefore neglect them in our calculations.

To avoid reflections we must connect an impedance matching circuit to each of
these inputs, and the T-coil circuit can serve well here, as shown in Fig. 2.4.11. The signal
from the TV camera will pass any of the three T-coils without reflections. However, the
last T-coil in the chain (at the video modulator) must be terminated by the characteristic
impedance of the cable, which is 75 Ω. We will take the output for the three devices from their input


capacitances C1, C2, and C3. This will cause a slight decrease in bandwidth but, as we
will see later, the decrease introduced by the T-coils will not harm the operation of the
total system in any way.

[Figure 2.4.10 schematic: TV camera signal → coaxial cable (Z0 = 75 Ω) → video monitor
(R1 = 1 MΩ, C1 = 27 pF) → vectorscope (R2 = 30 kΩ, C2 = 22 pF) → video modulator
(R3 = 100 kΩ, C3 = 25 pF).]
Fig. 2.4.10: An example of a system with different input impedances (TV studio equipment).
The signal from a color TV camera is controlled on the monitor screen, the RGB color vectors
are measured by the vectorscope, and, finally, the signal is sent to the video modulator for
broadcasting. All interconnections are made by a coaxial cable with the characteristic
impedance of 75 Ω. With long cables, adding considerable delay, the input capacitances can
affect the highest frequencies, causing reflections.

On the basis of the data in Fig. 2.4.10 we will calculate the T-coil for each of the
three devices. In addition we will calculate the bandwidth at each input. Since the whole
system must faithfully transmit pulses, we will consider Bessel poles for all three T-coils.
We use the following four relations:

$$L = R^2 C \quad \text{(Eq. 2.4.19)} \qquad C_b = C/12 \quad \text{(Eq. 2.4.32)}$$
$$k = 0.5 \quad \text{(Eq. 2.4.37)} \qquad \eta_b = 2.72 \quad \text{(Table 2.4.1)}$$

$$f_H = \eta_b\,f_h = \frac{\eta_b\,\omega_h}{2\pi} = \frac{\eta_b}{2\pi RC}
\quad \text{(from Eq. 2.2.29)}\,.$$

So we calculate:

a) for the monitor,
   L1 = 152 nH, Cb1 = 2.25 pF, fh1 = 78.6 MHz, fH1 = 214 MHz;

b) for the vectorscope,
   L2 = 124 nH, Cb2 = 1.83 pF, fh2 = 96.5 MHz, fH2 = 262 MHz;

c) for the video modulator,
   L3 = 141 nH, Cb3 = 2.08 pF, fh3 = 84.9 MHz, fH3 = 230 MHz.

The bandwidths are far above the requirement of the system, which is about 6 MHz
for either a color or a black and white signal. Fig. 2.4.11 shows the schematic diagram in
which the calculated component values are implemented.
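The calculation above is mechanical enough to script. The helper below is our own sketch (not from the book); it assumes the MFED relations L = R²C, Cb = C/12 and ηb = 2.72 quoted above, with R the 75 Ω cable impedance:

```python
import math

ETA_B = 2.72  # MFED bandwidth improvement factor, Table 2.4.1

def tcoil_design(R, C):
    L = R * R * C                        # Eq. 2.4.19
    Cb = C / 12.0                        # Eq. 2.4.32
    fh = 1.0 / (2.0 * math.pi * R * C)   # non-peaking cut off frequency
    return L, Cb, fh, ETA_B * fh         # fH = eta_b * fh

for name, C in [("monitor", 27e-12), ("vectorscope", 22e-12), ("modulator", 25e-12)]:
    L, Cb, fh, fH = tcoil_design(75.0, C)
    print(f"{name:12s} L = {L * 1e9:5.1f} nH   Cb = {Cb * 1e12:4.2f} pF   "
          f"fh = {fh / 1e6:5.1f} MHz   fH = {fH / 1e6:5.1f} MHz")
```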


[Figure 2.4.11 schematic: the TV camera signal passes three cascaded T-coil sections
(L1 with Cb1 at the video monitor, L2 with Cb2 at the vectorscope, L3 with Cb3 at the
video modulator), connected by 75 Ω coax; each device presents R and C at the coil tap,
and the chain is terminated by RL = 75 Ω at the video modulator.]
Fig. 2.4.11: Input impedance compensation by T-coil sections prevents signal reflections.
Each section of the coaxial cable sees the terminating 75 Ω resistor at the end of the chain.
The bandwidth is affected only slightly. The circuit values are given in the text above.

Since a properly designed T-coil circuit has a constant input impedance, it may be
used in connection with a series peaking circuit in order to improve further the system
bandwidth, as we shall see in Sec. 2.6. But first we shall examine a 3-pole T-coil system.


2.5 Three-Pole T-coil Peaking Circuit

As in the three-pole series peaking circuit of Fig. 2.3.1, an input capacitance Ci is
also always present at the input of the two-pole T-coil circuit, changing it into a three-pole
network. This Ci can be a sum of the driving circuit capacitance and the stray input
capacitance of the T-coil circuit itself. Actually, as shown in Fig. 2.5.1, the total
capacitance of the non-peaking circuit is split by the T-coil into the capacitance of the
driving node (Ci) and that of the output stage (C).

[Figure 2.5.1 schematic: current source ii drives the input node with Ci, which feeds the
T-coil (L, coupling k, bridging Cb); the tap loads R and C at the output vo.
Figure 2.5.2 plot: Bessel pole layout, s1 = -1.839 + j1.754, s2 = -1.839 - j1.754,
s3 = -2.322, with circles of diameters D1 (through s1, s2) and D2 (through s3) and
angles ±θ1, θ3.]

Fig. 2.5.1: The three-pole T-coil network. Fig. 2.5.2: Bessel pole layout for Fig. 2.5.1.

If properly designed, the basic two-pole T-coil circuit will have a constant input
impedance R, independent of frequency. This property allows a great simplification of the
three-pole network analysis. To both poles of the T-coil, s1 and s2, we only need to add
the third, input pole s3 = -1/RCi. In order to design an efficient peaking circuit, the tap of
the coil must feed the greater capacitance, so Ci < C, because the T-coil has no influence on
the input pole s3. Since the network is reciprocal (the current input and voltage output
nodes can be exchanged without affecting the response) we can always fulfil this
requirement. Also, because of the constant input impedance we can obtain the expression
for the transfer function from that of a two-pole T-coil circuit (Eq. 2.4.25) by adding to it
the influence of the third pole s3 (resulting in a simple multiplication of the first-order and
second-order transfer functions):
$$F(s) = \frac{V_o}{I_1 R}
       = \frac{\dfrac{1}{RC_i}}{\left(s + \dfrac{1}{RC_i}\right)
         \left(s^2 R^2 C C_b + s\,\dfrac{RC}{2} + 1\right)} \eqno(2.5.1)$$

We shall resist the temptation to perform the suggested multiplication in the
denominator, since we would obtain a third-order equation and needlessly complicate the
analysis. Owing to the capacitance Ci we have a real pole in addition to the two complex
conjugate poles of the T-coil circuit (in wideband applications). With the input
capacitance Ci the input impedance is no longer constant: its value is equal to R at DC
and approaches that of Ci at frequencies beyond ωH. As before, our task is to select the
parameters of Eq. 2.5.1 so that the network will have either Bessel or Butterworth poles.
We shall do this by using the trigonometrical relations indicated in Fig. 2.5.3. The T-coil
parameters will carry the index '1' (D1, θ1, σ1, and ω1).


From the analysis of the two-pole T-coil circuit we remember that the diameter of the
circle on which both poles s1 and s2 lie is D1 = 4/RC (see Fig. 2.4.3 or 2.4.8). The
diameter D2 of the circle which goes through the real pole s3 = σ3 is simply 1/RCi (the
reason why we have drawn the circle through this pole as well will become obvious later,
when we analyze the four-pole L+T circuits). We introduce a new parameter:

$$n = \frac{C}{C_i} \eqno(2.5.2)$$

The ratio of the diameters of the circles going through the poles and the origin is then:

$$\frac{D_2}{D_1} = \frac{\dfrac{1}{RC_i}}{\dfrac{4}{RC}} = \frac{n}{4} \eqno(2.5.3)$$

From this we obtain:

$$\frac{C}{C_i} = n = 4\,\frac{D_2}{D_1} \eqno(2.5.4)$$

[Figure 2.5.3 diagram: pole s1 on the circle of diameter D1 = -4/RC through the origin;
marked relations: σ1 = -(4/RC)·cos²θ1, ω1 = -(4/RC)·sinθ1·cosθ1, M1 = √(σ1² + ω1²),
θ1 = arctan(ω1/σ1).]
Fig. 2.5.3: The basic trigonometric relations of the main parameters for one of
the poles of the T-coil circuit. Knowing one pair of parameters, it is possible to
calculate the rest by these simple relations.

Fig. 2.5.3 illustrates some basic trigonometric relations between the polar and
Cartesian expressions of the poles, taking into account the similarity of the two
right-angle triangles with vertices (0, σ1, s1) and (0, s1, D1):

$$\Re\{s_1\} = \sigma_1 = D_1\cos^2\theta_1 = -\frac{4}{RC}\cos^2\theta_1 \eqno(2.5.5)$$

where D1 is the circle diameter, D1 = -4/RC. Likewise:

$$\Im\{s_1\} = \omega_1 = D_1\cos\theta_1\sin\theta_1
             = -\frac{4}{RC}\cos\theta_1\sin\theta_1 \eqno(2.5.6)$$

From these equations we can calculate the coupling factor k and the bridging
capacitance C_b. Since:

$$\tan\theta_1 = \frac{\Im\{s_1\}}{\Re\{s_1\}} = \frac{\omega_1}{\sigma_1} \eqno(2.5.7)$$

the corresponding coupling factor, according to Eq. 2.4.38, is:

$$k = \frac{3 - \tan^2\theta_1}{5 + \tan^2\theta_1}$$

and, as before in Eq. 2.4.31, the bridging capacitance is:

$$C_b = C\,\frac{1 + \tan^2\theta_1}{16}$$
Next we must calculate the parameter n from the table of poles in Part 4. For
Butterworth poles, listed in Table 4.3.1, the values for order n = 3 are:

$$s_{1t,2t} = \sigma_{1t} \pm j\,\omega_{1t} = -0.5000 \pm j\,0.8660 \qquad
  s_{3t} = \sigma_{3t} = -1.0000 \qquad \theta_1 = \pm 120° \eqno(2.5.8)$$

From Eq. 2.5.5 it follows that:

$$D_1 = \frac{\sigma_{1t}}{\cos^2\theta_1} \eqno(2.5.9)$$

Since D_2 = σ_{3t}, the ratio D_2/D_1 is:

$$\frac{D_2}{D_1} = \frac{\sigma_{3t}\cos^2\theta_1}{\sigma_{1t}}
                  = \frac{-1\cdot\cos^2 120°}{-0.5000} = 0.5000 \eqno(2.5.10)$$

Since n = 4 D_2/D_1 we obtain:

$$n = C/C_i = 2 \;\Rightarrow\; C_i = C/2 \eqno(2.5.11)$$

Returning to the equations for k and C_b we find k = 0 (no coupling!) and
C_b = 0.25 C. Just as for the two-pole T-coil circuit, here too L = R²C. So we have
all the circuit parameters for the Butterworth poles.
We can take the values of the Bessel poles for order n = 3 either from Table 4.4.3 in
Part 4, or by running the BESTAP routine (Part 6):

$$s_{1t,2t} = \sigma_{1t} \pm j\,\omega_{1t} = -1.8389 \pm j\,1.7544 \qquad
  s_{3t} = \sigma_{3t} = -2.3221 \qquad \theta_1 = \pm 136.35° \eqno(2.5.12)$$

In a similar way as before, we obtain:

$$k = 0.3536 \qquad C_b = 0.12\,C \qquad C_i = 0.38\,C \eqno(2.5.13)$$
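Both parameter sets can be reproduced from the tabulated poles alone. The routine below is our own sketch of the procedure (Eqs. 2.5.2-2.5.7 and the k, Cb expressions above), using the identity 1/cos²θ = 1 + tan²θ:

```python
def tcoil3_design(s1, s3):
    # s1: one pole of the complex pair, s3: the real pole (normalized table values)
    t2 = (s1.imag / s1.real) ** 2    # tan^2(theta1), Eq. 2.5.7
    k = (3.0 - t2) / (5.0 + t2)      # coupling factor
    cb_over_c = (1.0 + t2) / 16.0    # bridging capacitance ratio Cb/C
    d1 = s1.real * (1.0 + t2)        # D1 = sigma1/cos^2(theta1), Eq. 2.5.9
    n = 4.0 * s3.real / d1           # n = C/Ci = 4*D2/D1, D2 = sigma3
    return k, cb_over_c, n

print(tcoil3_design(-0.5 + 0.866j, -1.0))         # Butterworth: k ~ 0, Cb ~ C/4, n ~ 2
print(tcoil3_design(-1.8389 + 1.7544j, -2.3221))  # Bessel: k ~ 0.354, Cb ~ 0.12C, n ~ 2.64
```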

To calculate the normalized transfer function we normalize the frequency variable
as ω/ωh, where ωh is the upper cut off frequency of the non-peaking amplifier (L = 0):

$$\omega_h = \frac{1}{R\,(C + C_i)} = \frac{1}{R\,C_c} \eqno(2.5.14)$$


This is important, because if the coil is replaced by a short circuit both capacitances
appear in parallel with the loading resistor V . Since Gi œ GÎ8, we may express both
capacitances with the total capacitance Gc œ G  Gi and obtain:
8 "
G œ Gc and Gi œ Gc (2.5.15)
8" 8"

So far we have used the pole data from the tables, since we needed only the ratios of
these poles. But to calculate the frequency, phase, envelope delay, and step responses we
shall need the actual values of the poles. We have calculated the poles of the T-coil circuit
by Eq. 2.4.29, which we repeat here for convenience:

$$s_{1,2} = -\frac{1}{4RC_b}\left(1 \mp \sqrt{1 - \frac{16\,C_b}{C}}\,\right)$$

We shall use the Butterworth poles to explain the procedure. For these poles we
have C_b = C/4 and n = 2. By inserting these values in the above formula we obtain:

$$s_{1,2} = \frac{1}{RC}\left(-1 \pm j\sqrt{3}\,\right) \eqno(2.5.16)$$

Now let us express the capacitance C by C_c according to Eq. 2.5.15. Then:

$$s_{1,2} = \frac{n+1}{n}\cdot\frac{1}{RC_c}\left(-1 \pm j\sqrt{3}\,\right)
          = \frac{3}{2}\cdot\frac{1}{RC_c}\left(-1 \pm j\sqrt{3}\,\right)
          = \frac{1}{RC_c}\left(-1.5 \pm j\,2.5981\right) \eqno(2.5.17)$$

The input pole is:

$$s_3 = -\frac{1}{RC_i} = -\frac{n+1}{RC_c} = -\frac{3}{RC_c} \eqno(2.5.18)$$
In a similar way we also calculate the values for the Bessel poles and obtain:

$$s_{1,2} = \frac{1}{RC_c}\left(-2.8860 \pm j\,2.7532\right) \qquad \text{and} \qquad
  s_3 = -\frac{3.6447}{RC_c} \eqno(2.5.19)$$

When calculating the values for the critical damping case (CD) we must consider
that the imaginary parts of the poles s1 and s2 must be zero. This gives C_b = C/16. Here
we may choose n = 2, which means that C_i = C/2. The corresponding poles, which are
all real, are:

$$s_{1,2} = -\frac{6}{RC_c} \qquad \text{and} \qquad s_3 = -\frac{3}{RC_c} \eqno(2.5.20)$$

2.5.1 Frequency Response

To calculate the frequency response, we can use Eq. 2.2.27 for the two-pole series
peaking circuit and add the effect of the additional real input pole σ3. We insert the

normalized poles (RC_c = 1) and the normalized frequency ω/ωh. Thus we obtain the
following expression:

$$|F(\omega)| = \frac{\left(\sigma_{1n}^2 + \omega_{1n}^2\right)\left(-\sigma_{3n}\right)}
{\sqrt{\left[\sigma_{1n}^2 + \left(\dfrac{\omega}{\omega_h} - \omega_{1n}\right)^{2}\right]
       \left[\sigma_{1n}^2 + \left(\dfrac{\omega}{\omega_h} + \omega_{1n}\right)^{2}\right]
       \left[\sigma_{3n}^2 + \left(\dfrac{\omega}{\omega_h}\right)^{2}\right]}} \eqno(2.5.21)$$

The plots for all three types of poles are shown in Fig. 2.5.4. By comparing curve a
(MFA, where k = 0) with curve d in Fig. 2.4.5 (where also k = 0), we realize that we have
achieved a bandwidth extension just by splitting the total circuit capacitance into the
input capacitance Ci and the coil loading capacitance C.
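The MFA bandwidth figure (ηb = 3.00 in Table 2.5.1 below) can be verified directly from Eq. 2.5.21 with the normalized Butterworth poles; a numeric sketch of ours:

```python
import math

S1, W1, S3 = -1.5, 2.5981, -3.0  # Eqs. 2.5.17 and 2.5.18, R*Cc = 1

def mag(w):
    # Eq. 2.5.21; the numerator normalizes |F(0)| to 1
    num = (S1 * S1 + W1 * W1) * (-S3)
    den = math.sqrt((S1 * S1 + (w - W1) ** 2) * (S1 * S1 + (w + W1) ** 2)
                    * (S3 * S3 + w * w))
    return num / den

lo, hi = 1.0, 6.0
for _ in range(60):  # bisect for |F| = 1/sqrt(2); MFA magnitude is monotonic
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if mag(mid) > 1.0 / math.sqrt(2.0) else (lo, mid)
print(f"omega_H / omega_h = {lo:.3f}")  # ~3.0, i.e. eta_b = 3.00
```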

[Figure 2.5.4 plot: |F(ω)| vs. ω/ωh for the three-pole T-coil network, with the L = 0 curve
as reference; legend: a) k = 0.00, Cb/C = 0.25, C/Ci = 2.00; b) k = 0.35, Cb/C = 0.12,
C/Ci = 2.63; c) k = 0.60, Cb/C = 0.06, C/Ci = 2.00; L = R²C, ωh = 1/R(C + Ci).]
Fig. 2.5.4: Three-pole T-coil network frequency response: a) MFA; b) MFED; c) CD case. The
non-peaking response (L = 0) is the reference. The MFA bandwidth is larger than that of the two-
pole circuit in Fig. 2.4.5; in contrast, the MFED bandwidth is nearly the same, but the circuit can
be realized more easily, owing to the lower magnetic coupling factor required. Note also that,
owing to the possibility of separating the total capacitance into a driving and a loading part, the
reference non-peaking cut off frequency ωh must be defined as 1/R(C + Ci).

2.5.2 Phase Response

For the phase response we can use Eq. 2.3.25 again, and, by inserting the values for
the poles we can plot the responses as shown in Fig. 2.5.5:
= = =
 ="n  ="n
=h =h =h
: œ arctan  arctan  arctan
5"n 5"n 5$n


[Figure 2.5.5 plot: phase φ in degrees (-30 to -270°) vs. ω/ωh; same circuit and legend as in
Fig. 2.5.4 (curves a-c and the L = 0 reference).]
Fig. 2.5.5: Three-pole T-coil network phase response: a) MFA; b) MFED; c) CD case. Note that
at high frequencies the 3-pole system phase asymptote is -270° (3 × 90°).

2.5.3 Envelope Delay

We take Eq. 2.3.26 again and, by inserting the pole values, we can plot the
envelope delay, as in Fig. 2.5.6:

$$\tau_e\,\omega_h = \frac{\sigma_{1n}}{\sigma_{1n}^2 + \left(\dfrac{\omega}{\omega_h} - \omega_{1n}\right)^{2}}
                   + \frac{\sigma_{1n}}{\sigma_{1n}^2 + \left(\dfrac{\omega}{\omega_h} + \omega_{1n}\right)^{2}}
                   + \frac{\sigma_{3n}}{\sigma_{3n}^2 + \left(\dfrac{\omega}{\omega_h}\right)^{2}}$$
[Figure 2.5.6 plot: normalized envelope delay τe·ωh (0 to -1.0) vs. ω/ωh; same circuit and
legend as in Fig. 2.5.4 (curves a-c and the L = 0 reference).]
Fig. 2.5.6: Three-pole T-coil network envelope delay: a) MFA; b) MFED; c) CD case.
Note that the MFED flatness now extends to nearly 1.5 ωh.


2.5.4 Step Response

We shall use again Eq. 2.3.27 and Eq. 2.3.28 (see Appendix 2.3, Sec. A2.3.1 for the
complete derivation). By inserting the pole values we can calculate the responses and
plot them, as shown in Fig. 2.5.7a and 2.5.7b:

a) from Eq. 2.5.8 for MFA:

$$g_a(t) = 1 + 1.1547\,e^{-1.5\,t/T}\sin\!\left(2.5981\,t/T + 0 + \pi\right) - e^{-3\,t/T} \eqno(2.5.22)$$

b) from Eq. 2.5.12 for MFED:

$$g_b(t) = 1 + 1.8489\,e^{-2.886\,t/T}\sin\!\left(2.7532\,t/T - 0.5400 + \pi\right)
             - 1.9507\,e^{-3.6447\,t/T} \eqno(2.5.23)$$
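Eq. 2.5.22 already contains the MFA overshoot quoted later in Table 2.5.1 (about 8 %); a short numeric check of ours, using sin(x + π) = −sin x and T normalized to 1:

```python
import math

def g_a(t):
    # Eq. 2.5.22 rewritten: g_a = 1 - 1.1547 e^{-1.5t} sin(2.5981 t) - e^{-3t}
    return 1.0 - 1.1547 * math.exp(-1.5 * t) * math.sin(2.5981 * t) - math.exp(-3.0 * t)

peak = max(g_a(i * 0.001) for i in range(8000))
print(f"MFA overshoot: {(peak - 1.0) * 100:.2f} %")  # ~8 %
```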

For the CD case, where we have a double real pole (s2 = s1 = σ1), the calculation
is different (see Eq. 1.11.12 in Part 1). The general expression:

$$F(s) = \frac{-\,\sigma_1^2\,\sigma_3}{(s - s_1)^2\,(s - s_3)} \eqno(2.5.24)$$

(where s1 = σ1 and s3 = σ3) must be multiplied by the unit step operator 1/s to obtain the
form appropriate for the inverse Laplace transform:

$$G(s) = \frac{-\,\sigma_1^2\,\sigma_3}{s\,(s - s_1)^2\,(s - s_3)} \eqno(2.5.25)$$

and the step response is the inverse Laplace transform of G(s), which in turn is equal to the
sum of its residues:

$$g(t) = \mathcal{L}^{-1}\{G(s)\}
       = \sum \mathrm{res}\,\frac{-\,\sigma_1^2\,\sigma_3\,e^{st}}{s\,(s - s_1)^2\,(s - s_3)} \eqno(2.5.26)$$

We have three residues:

$$\mathrm{res}_0 = \lim_{s\to 0}\;s\left[\frac{-\,\sigma_1^2\,\sigma_3\,e^{st}}{s\,(s-s_1)^2(s-s_3)}\right]
= \frac{-\,\sigma_1^2\,\sigma_3\,e^{0t}}{\sigma_1^2\,(-\sigma_3)} = 1$$

$$\mathrm{res}_1 = \lim_{s\to s_1}\frac{d}{ds}\left[(s-s_1)^2\,
\frac{-\,\sigma_1^2\,\sigma_3\,e^{st}}{s\,(s-s_1)^2(s-s_3)}\right]
= \lim_{s\to s_1}\frac{d}{ds}\left[\frac{-\,\sigma_1^2\,\sigma_3\,e^{st}}{s\,(s-s_3)}\right]$$
$$= \lim_{s\to s_1}\left[-\,\sigma_1^2\,\sigma_3\,e^{st}\,
\frac{s\,t\,(s-s_3) - (2s-s_3)}{s^2\,(s-s_3)^2}\right]
= -\,\sigma_3\,\frac{\sigma_1(\sigma_1-\sigma_3)\,t - 2\sigma_1+\sigma_3}{(\sigma_1-\sigma_3)^2}\;e^{\sigma_1 t}$$

$$\mathrm{res}_2 = \lim_{s\to s_3}\;(s-s_3)\left[\frac{-\,\sigma_1^2\,\sigma_3\,e^{st}}{s\,(s-s_1)^2(s-s_3)}\right]
= \frac{-\,\sigma_1^2}{(\sigma_3-\sigma_1)^2}\;e^{\sigma_3 t} \eqno(2.5.27)$$

The sum of all three residues is the sought step response:


$$g_c(t) = 1 - s_3\,\frac{s_1(s_1-s_3)\,t - 2s_1+s_3}{(s_1-s_3)^2}\,e^{s_1 t}
             - \frac{s_1^2}{(s_3-s_1)^2}\,e^{s_3 t} \eqno(2.5.28)$$

By inserting s_1 = σ_1 and s_3 = σ_3 we obtain:

$$g_c(t) = 1 - \sigma_3\,\frac{\sigma_1(\sigma_1-\sigma_3)\,t - 2\sigma_1+\sigma_3}{(\sigma_1-\sigma_3)^2}\,e^{\sigma_1 t}
             - \frac{\sigma_1^2}{(\sigma_3-\sigma_1)^2}\,e^{\sigma_3 t} \eqno(2.5.29)$$

Finally, we insert the normalized poles (Eq. 2.5.20), σ_{1n} = -6 and σ_{3n} = -3, and
normalize the time as t/T, where T = R(C_i + C), to obtain the formula by which the plot
c in Fig. 2.5.7 is calculated:

$$g_c(t) = 1 + 3\left(1 + 2\,t/T\right)e^{-6t/T} - 4\,e^{-3t/T} \eqno(2.5.30)$$
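As expected for an all-real-pole response, Eq. 2.5.30 shows no overshoot at all; a quick numeric confirmation of ours (T normalized to 1):

```python
import math

def g_c(t):
    # Eq. 2.5.30: g_c(t) = 1 + 3*(1 + 2t/T)*exp(-6t/T) - 4*exp(-3t/T), T = 1
    return 1.0 + 3.0 * (1.0 + 2.0 * t) * math.exp(-6.0 * t) - 4.0 * math.exp(-3.0 * t)

vals = [g_c(i * 0.001) for i in range(5000)]
print(f"g(0) = {vals[0]:.4f}   max = {max(vals):.4f}")  # rises monotonically toward 1
```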

[Figure 2.5.7 plot: step responses vo/(ii·R) vs. t/R(C + Ci) for the cases a-c and the L = 0
reference; same circuit and legend as in Fig. 2.5.4.]
Fig. 2.5.7: The step response of the three-pole T-coil circuit: a) MFA; b) MFED; c) CD.
The non-peaking case (L = 0) is the reference. Since the total capacitance C + Ci is equal
to C of the two-pole T-coil circuit, the MFED rise time is also nearly identical. However,
the three-pole circuit is much easier to realize in practice, owing to the lower k required.

2.5.5. Low Coupling Cases

In the practical realization of a T-coil the toughest problem is to achieve a high
coupling factor k. Even k = 0.5, as is needed for the two-pole circuit MFED response, is
not easy to achieve if we do not want to increase the bridging capacitance Cb excessively.
The three-pole T-coil circuits are easy to realize in practice, because of the low coupling
required (no coupling for MFA, and only k = 0.35 for MFED).
We have also seen that the low coupling factor required is achieved by simply
splitting the total circuit capacitance into C and Ci. Therefore it might be useful to
investigate the effect of low coupling further. Let us calculate the frequency and the step


responses, making three plots each with a different ratio C/Ci = n. In the first group we
shall put k = 0.33 and in the second group k = 0.2.
The corresponding poles are:

Group 1: k = 0.33, C_b = C/8
   a) n = 2.5:   s_{1n,2n} = -2.8 ± j2.8      s_{3n} = -3.5
   b) n = 2:     s_{1n,2n} = -3 ± j3          s_{3n} = -3
   c) n = 1.5:   s_{1n,2n} = -3.33 ± j3.33    s_{3n} = -2.5

The poles are selected so that the sum C + C_i is the same for all three cases. In
this way we have the same upper half power frequency ωh for every set of poles. This is
necessary in order to have the same scale for all three plots. For the above poles we obtain
the frequency response as in Fig. 2.5.8 and the step response as in Fig. 2.5.9.

Group 2: k = 0.2, C_b = 0.17 C
   a) n = 2:     s_{1n,2n} = -2.5 ± j2.908    s_{3n} = -3
   b) n = 1.5:   s_{1n,2n} = -2.5 ± j3.227    s_{3n} = -2.5
   c) n = 1:     s_{1n,2n} = -3 ± j3.837      s_{3n} = -2

The corresponding frequency response plots are displayed in Fig. 2.5.10 and the
step responses in Fig. 2.5.11. From Fig. 2.5.11 it is evident that in the second group we
have decreased the coupling factor too much. Not a single curve in this figure is suitable
for the peaking circuit of a pulse amplifier: in curve a the overshoot is excessive, whilst
curve c exhibits too slow a response. Curve b rounds off too soon, reaching the final
value with a much slower slope; in a plot with a coarser time scale this curve would clearly
show a missing chunk of the step response. Needless to say, it would be very annoying if
an oscilloscope amplifier were to have such a step response.

All the important data for the three-pole T-coil peaking circuits are collected in
Table 2.5.1. It is worth noting that we achieve a three-pole MFED response with the
coupling factor k = 0.35 (ηr = 2.78), whilst for the two-pole T-coil MFED response
k = 0.5 was necessary (for a similar ηr = 2.76). If we are satisfied with a slightly smaller
bandwidth, it is possible to use the parameters of Group 1, where the coupling factor is
only 0.33. Such a small coupling factor is much easier to achieve than k = 0.5. So for the
practical construction of a wideband amplifier we find the three-pole T-coil circuits very
convenient.
Table 2.5.1
   response type    k      Cb/C    C/Ci    ηb      ηr      δ [%]
   MFA             0       0.25    2       3.00    2.88    8.08
   MFED            0.35    0.125   2.645   2.75    2.78    0.75
   CD              0.60    0.063   2       2.22    2.26    0.00
   Group 1, a      0.33    0.125   2.5     2.75    2.75    0.80
   Group 1, b      0.33    0.125   2       2.59    2.64    0.00
   Group 1, c      0.33    0.125   1.5     2.35    2.60    0.00
   Group 2, a      0.2     0.167   2       2.84    2.80    1.85
   Group 2, b      0.2     0.167   1.5     2.59    2.62    0.00
   Group 2, c      0.2     0.167   1       2.13    2.16    0.00
Table 2.5.1: Three-pole T-coil circuit parameters.


[Figure 2.5.8 plot: |F(ω)| vs. ω/ωh for Group 1, with the L = 0 reference; legend:
a) k = 1/3, Cb/C = 1/8, C/Ci = 2.5; b) k = 1/3, Cb/C = 1/8, C/Ci = 2.0;
c) k = 1/3, Cb/C = 1/8, C/Ci = 1.5; L = R²C, ωh = 1/R(C + Ci).]
Fig. 2.5.8: Low coupling factor, Group 1: frequency response.

[Figure 2.5.9 plot: step responses vs. t/R(C + Ci) for Group 1 (same legend as Fig. 2.5.8),
with an inset of the pole patterns: the complex pairs s1,2 and the real poles s3a, s3b, s3c.]

Fig. 2.5.9: Low coupling factor, Group 1: step response. In all three cases the real pole s3 is
placed closer to the origin (becoming dominant) than in the MFA and MFED case, making the
responses more similar to the CD case. The characteristic circle of the complex conjugate pole
pair has a slightly different diameter in each case.


[Figure 2.5.10 plot: |F(ω)| vs. ω/ωh for Group 2, with the L = 0 reference; legend:
a) k = 0.2, Cb/C = 1/6, C/Ci = 2.0; b) k = 0.2, Cb/C = 1/6, C/Ci = 1.5;
c) k = 0.2, Cb/C = 1/6, C/Ci = 1.0; L = R²C, ωh = 1/R(C + Ci).]
Fig. 2.5.10: Low coupling factor, Group 2: frequency response.

[Figure 2.5.11 plot: step responses vs. t/R(C + Ci) for Group 2 (same legend as Fig. 2.5.10),
with an inset of the pole patterns: the complex pairs s1,2 and the real poles s3a, s3b, s3c.]
Fig. 2.5.11: Low coupling factor, Group 2: step response. The pole s3a is slightly further
away from the imaginary axis than s3b or s3c, therefore causing an overshoot larger than in
the MFED case. Both responses b and c are over-damped, reaching the final value much later
than in the MFED case.


2.6 Four-pole L+T Peaking Circuit

Instead of leaving the input capacitance Ci without any peaking coil, as in the
three-pole T-coil circuit, we can add another coil between Ci and the T-coil. This is the
same as adding the series peaking components to the T-coil (it can be done because the
input impedance of a properly designed T-coil is resistive). In this way, the MFA
bandwidth can be extended by over 4 times, compared with the simple RC system. This is
substantially more than the 2.75 times found in the two-pole T-coil circuit, where there was
only one capacitance. In Fig. 2.6.1 we see such a circuit. The price to pay for such an
improvement is a coupling factor k > 0.5, which is difficult, but possible, to achieve.
[Figure 2.6.1 schematic: the L+T network; current source ii, input coil Li with input
capacitance Ci, then the T-coil (L, coupling k, bridging Cb) loaded by C and R at the
output vo.
Figure 2.6.2 plot: the Bessel four-pole pattern of the L+T network, s1,2 = -2.8962 ± j0.8672
and s3,4 = -2.1038 ± j2.6574, on circles of diameters D1 and D3 with angles θ1 and θ3.]

Fig. 2.6.1: The four-pole L+T network. Fig. 2.6.2: The Bessel four-pole pattern of L+T.

Here we utilize the basic property of a T-coil circuit: its constant and real input
impedance, which presents the loading resistor to the input series peaking section. Since the
input capacitance Ci and the input inductance Li form an (inverted) letter 'L', we call the
network in Fig. 2.6.1 the L+T circuit. This is a four-pole network and its input impedance
is not constant, but it is similar to that of the series peaking system, which we have already
calculated (Eq. 2.2.44 - 2.2.48, plots in Fig. 2.2.9).
The transfer function of the L+T network is simply the product of the transfer
function of a two-pole series peaking circuit (Eq. 2.2.4) and the transfer function of a two-
pole T-coil circuit (Eq. 2.4.25). Explicitly:

F(s) = [1/(mR²Ci²)] / [s² + s/(mRCi) + 1/(mR²Ci²)] · [R/(R²CCb)] / [s² + s/(2RCb) + 1/(R²CCb)]     (2.6.1)
Both polynomials in the denominator are written in the canonical form. It would be useless
to multiply them, because this would make the analysis very complicated. If we replace R
in the latter numerator by 1, a normalized equation results.
The T-coil section has two poles, which we can rewrite from Eq. 2.4.28:

s1,2 = σ1 ± jω1 = -1/(4RCb) ± √[ 1/(4RCb)² - 1/(R²CCb) ]     (2.6.2)


whilst the input section L has two poles, rewritten from Eq. 2.2.5:

s3,4 = σ3 ± jω3 = -1/(2mRCi) ± √[ 1/(4m²R²Ci²) - 1/(mR²Ci²) ]     (2.6.3)

The T-coil circuit extends the bandwidth twice as much as the series peaking
circuit, so in order to extend the bandwidth of the L+T network as much as possible, the
T-coil tap must be connected to whichever capacitance is greater; thus Ci < C. Therefore
the circle with the diameter D1 and the angle θ1, corresponding to the T-coil circuit poles
s1,2, is smaller than the circle with the diameter D3 and the angle θ3 corresponding to the
poles s3,4 of the L branch of the circuit.
Our task is to calculate the circuit parameters for the Bessel pole pattern shown in
Fig. 2.6.2, which gives an MFED response. The corresponding values for Bessel poles,
shown in Table 4.4.3 in Part 4, order n = 4, are:

s1t,2t = σ1t ± jω1t = -2.8962 ± j0.8672   ⇒   θ1 = 163.33°
s3t,4t = σ3t ± jω3t = -2.1038 ± j2.6574   ⇒   θ3 = 128.37°

As indicated in Fig. 2.5.3, the circle diameters are:

D1 = |s1t| / cos θ1 = √(σ1t² + ω1t²) / cos θ1   and   D3 = |s3t| / cos θ3 = √(σ3t² + ω3t²) / cos θ3     (2.6.4)

The diameter ratio is:

D3/D1 = [√(σ3t² + ω3t²) / √(σ1t² + ω1t²)] · (cos θ1 / cos θ3)
      = [√(2.1038² + 2.6574²) / √(2.8962² + 0.8672²)] · (cos 163.33° / cos 128.37°) = 1.7304     (2.6.5)

From Fig. 2.2.2 it is evident that the diameter of the circle on which the poles of the series
peaking circuit lie is 2/RCi. But from Fig. 2.4.3, in the case of a two-pole T-coil circuit,
the circle diameter is 4/RC. Furthermore it is:

C/Ci = n   or   Ci = C/n     (2.6.6)

From this we derive:

D3/D1 = (2/RCi) / (4/RC) = (2n/RC) / (4/RC) = n/2   ⇒   n = 2 · (D3/D1) = 2 · 1.7304 = 3.4608     (2.6.7)

As for the three-pole T-coil analysis, here, too, we express the upper half power frequency
of the uncompensated circuit (without coils) as a function of the total capacitance Cc:

ωh = 1/[R(C + Ci)] = 1/(RCc)

We can define the capacitors in relation to their sum, as in Eq. 2.5.15:

C = Cc · n/(n + 1)   and   Ci = Cc · 1/(n + 1)


With n = 3.4608 from Eq. 2.6.7 we obtain:

C = Cc/1.2890   and   Ci = Cc/4.4608     (2.6.8)

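The geometric chain just derived (pole pair, angle, circle diameter, capacitance ratio) is easy to check numerically. A minimal sketch in Python (ours, not part of the original text; variable names are our own):

```python
import math, cmath

# 4th-order Bessel pole pair representatives (from Table 4.4.3, Part 4)
s1 = complex(-2.8962, 0.8672)   # T-coil pair
s3 = complex(-2.1038, 2.6574)   # series-peaking (L) pair

theta1 = cmath.phase(s1)        # ~163.33 deg, measured from the +real axis
theta3 = cmath.phase(s3)        # ~128.37 deg

# characteristic-circle diameters, Eq. 2.6.4: D = |s| / cos(theta)
D1 = abs(s1) / math.cos(theta1)
D3 = abs(s3) / math.cos(theta3)

n = 2 * D3 / D1                 # Eq. 2.6.7: C/Ci = n ~ 3.46
C_over_Cc = n / (n + 1)         # C  = Cc * n/(n+1),  ~ Cc/1.289
Ci_over_Cc = 1 / (n + 1)        # Ci = Cc * 1/(n+1),  ~ Cc/4.461
```

Note that cmath.phase returns the angle in the second quadrant directly, so no manual 180° correction is needed.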
Now we can calculate all other parameters of the L+T circuit and also the actual
values of the poles:

a) Coupling factor (Eq. 2.4.36):
k = (3 - tan² θ1)/(5 + tan² θ1) = (3 - tan² 163.33°)/(5 + tan² 163.33°) = 0.5718

b) Bridging capacitance (Eq. 2.4.31):
Cb = C (1 + tan² θ1)/16 = C (1 + tan² 163.33°)/16 = 0.0681 C

c) The parameter m (Eq. 2.2.26):
m = (1 + tan² θ3)/4 = (1 + tan² 128.37°)/4 = 0.6488

d) Input inductance Li (Eq. 2.2.14):
Li = m R² Ci = 0.6488 R² Ci

e) Real part of the pole s1, Re{s1} = σ1 (Eq. 2.5.5 and Fig. 2.5.3):
σ1 = -(4/RC) cos² θ1 = -(4 · 1.2890/RCc) cos² 163.33° = -4.7317/RCc

f) Imaginary part of the pole s1, Im{s1} = ω1 (Eq. 2.5.6 and Fig. 2.5.3):
ω1 = -(4/RC) cos θ1 sin θ1 = -(4 · 1.2890/RCc) cos 163.33° sin 163.33° = 1.4167/RCc

g) Real part of the pole s3, Re{s3} = σ3 (Eq. 2.5.5 and Fig. 2.5.3):
σ3 = -(2/RCi) cos² θ3 = -(2 · 4.4608/RCc) cos² 128.37° = -3.4376/RCc

h) Imaginary part of the pole s3, Im{s3} = ω3 (Eq. 2.5.6 and Fig. 2.5.3):
ω3 = -(2/RCi) cos θ3 sin θ3 = -(2 · 4.4608/RCc) cos 128.37° sin 128.37° = 4.3419/RCc

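The parameter chain a) to d) above depends only on the two pole angles, so it can be verified directly. A small Python sketch (ours, not from the original):

```python
import math

theta1 = math.radians(163.33)   # T-coil pole pair angle
theta3 = math.radians(128.37)   # series-peaking pole pair angle

t1 = math.tan(theta1) ** 2
t3 = math.tan(theta3) ** 2

k = (3 - t1) / (5 + t1)         # coupling factor, Eq. 2.4.36, ~0.5718
Cb_over_C = (1 + t1) / 16       # bridging capacitance ratio, Eq. 2.4.31, ~0.0681
m = (1 + t3) / 4                # series-peaking parameter, Eq. 2.2.26, ~0.6488
```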
As above, we can calculate the parameters for the MFA response from the normalized
(RCc = 1) values of the 4th-order Butterworth system (Table 4.3.1, BUTTAP, Part 6):

s1n,2n = σ1n ± jω1n = -4.1213 ± j1.7071   and   θ1 = 157.50°
s3n,4n = σ3n ± jω3n = -1.7071 ± j4.1213   and   θ3 = 112.50°

The L+T network parameters for some other types of poles are given in Table 2.6.1
at the end of this section.


For Butterworth poles (and only for these!) it is very easy to calculate the upper half
power frequency ωH: it is equal to the radius of the circle centered at the origin, on which
all four poles lie, which in turn is equal to the absolute value of any one of the four poles. If
we use the normalized pole values, the circle radius is also equal to the factor of bandwidth
improvement, ηb. By dividing this value by RCc, we obtain ωH. We can use any one of the
four poles, e.g., s1n:

ηb = ωH/ωh = |s1n| = √(σ1n² + ω1n²) = √(4.1213² + 1.7071²) = 4.4609     (2.6.9)
and this is a really impressive bandwidth improvement.
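Since the Butterworth poles lie on a circle centered at the origin, Eq. 2.6.9 is a one-line numerical check (our sketch):

```python
# any of the four normalized Butterworth poles has the same magnitude,
# which equals the bandwidth improvement factor eta_b
s1n = complex(-4.1213, 1.7071)
s3n = complex(-1.7071, 4.1213)
eta_b = abs(s1n)                # ~4.46
```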

2.6.1 Frequency Response

Let us compose a general expression for the frequency response of our L+T circuit.
The formula with normalized values is very similar to Eq. 2.5.21, except that here we have
two pairs of poles, σ1n ± jω1n and σ3n ± jω3n:

|F(ω)| = (σ1n² + ω1n²) / √{ [σ1n² + (ω/ωh - ω1n)²] · [σ1n² + (ω/ωh + ω1n)²] }
       · (σ3n² + ω3n²) / √{ [σ3n² + (ω/ωh - ω3n)²] · [σ3n² + (ω/ωh + ω3n)²] }     (2.6.10)

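Eq. 2.6.10 can be coded directly; as a plausibility check, the Butterworth pole set must give |F| = 1 at DC and |F| = 1/√2 at ω/ωh = ηb = 4.46. A sketch (ours, Python assumed):

```python
import math

def mag(w, p1, p3):
    # |F| per Eq. 2.6.10; p1, p3 are one pole of each conjugate pair,
    # w is the normalized frequency omega/omega_h
    out = 1.0
    for s in (p1, p3):
        sig, om = s.real, s.imag
        num = sig * sig + om * om
        den = math.sqrt((sig * sig + (w - om) ** 2) * (sig * sig + (w + om) ** 2))
        out *= num / den
    return out

p1 = complex(-4.1213, 1.7071)   # MFA (Butterworth) pair 1
p3 = complex(-1.7071, 4.1213)   # MFA (Butterworth) pair 2
```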
Since we have inserted the normalized poles, the frequency, too, had to be
normalized as ω/ωh. Fig. 2.6.3 shows the frequency response for a) MFA and b) MFED
and also for two other pole placements, reported in [Ref. 2.28], the data of which are:

The curve c) corresponds to the poles of Group C of [Ref. 2.28]:

s1n,2n = σ1n ± jω1n = -3.3252 ± j0.5863   and   θ1 = 170.00°
s3n,4n = σ3n ± jω3n = -1.7071 ± j4.1213   and   θ3 = 112.50°

The curve d) corresponds to the poles of Group A of [Ref. 2.28]:

s1n,2n = σ1n ± jω1n = -3.8332 ± j1.7874   and   θ1 = 155.00°
s3n,4n = σ3n ± jω3n = -2.1024 ± j5.0013   and   θ3 = 112.80°

Whilst c and d offer an improvement in ηb and ηr, their step response is far from optimum.
In Fig. 2.6.4, the plot e) is the Chebyshev response, Δφ = ±0.05° [Ref. 2.24, 2.30]:

s1n,2n = σ1n ± jω1n = -3.7912 ± j1.8656   and   θ1 = 153.80°
s3n,4n = σ3n ± jω3n = -2.4861 ± j4.6755   and   θ3 = 118.00°


The plot f) is the Gaussian frequency response (to -12 dB) [Ref. 2.24, 2.30]:

s1n,2n = σ1n ± jω1n = -3.3835 ± j2.0647   and   θ1 = 148.83°
s3n,4n = σ3n ± jω3n = -3.4150 ± j6.2556   and   θ3 = 118.60°

The plot g) corresponds to a double pair of Bessel poles, with the following data:

s1n,2n = s3n,4n = σ1n ± jω1n = -4.5000 ± j2.5981   and   θ1 = 150.00°

[Figure 2.6.3 here: log-log plot of Vo/(Ii R) vs. ω/ωh for the L+T circuit (Li = mR²Ci, L = R²C, ωh = 1/R(C+Ci)), with the non-peaking reference (both inductances zero); curve data: a) m = 1.71, k = 0.55, C/Ci = 4.83, Cb/C = 0.073; b) m = 0.65, k = 0.57, C/Ci = 3.46, Cb/C = 0.068; c) m = 1.49, k = 0.59, C/Ci = 6.00, Cb/C = 0.064; d) m = 1.66, k = 0.53, C/Ci = 6.00, Cb/C = 0.067]
Fig. 2.6.3: Four-pole L+T peaking circuit frequency-response: a) MFA; b) MFED; c) Group C;
d) Group A. In the non-peaking reference case both inductances are zero.

[Figure 2.6.4 here: log-log plot of Vo/(Ii R) vs. ω/ωh, same circuit; curve data: e) m = 1.13, k = 0.54, C/Ci = 4.79, Cb/C = 0.076; f) m = 1.09, k = 0.49, C/Ci = 6.43, Cb/C = 0.085; g) m = 0.33, k = 0.50, C/Ci = 2.00, Cb/C = 0.083]
Fig. 2.6.4: Some additional frequency response plots of the four-pole L+T peaking circuit:
e) Chebyshev with 0.05° phase ripple; f) Gaussian to -12 dB; g) double 2nd-order Bessel.
Note the lower bandwidth of g) (2.9 ωh) compared with b) in Fig. 2.6.3 (3.47 ωh). This
clearly shows that the bandwidth of a cascade of identical stages is lower than if the stages
have staggered poles.


Note: The lower bandwidth (ωH = 2.9 ωh) of the system with repeated poles, plot g,
compared with the staggered pole placement of Fig. 2.6.3, plot b (ωH = 3.47 ωh), clearly
shows that using repeated poles is not a clever idea! See also the step response plots.

All these groups of poles can be found in the tables [Ref. 2.30]. Here we can see the
extreme adaptability of the calculation method based on the trigonometric relations as
shown in Fig. 2.5.2, 2.5.3, 2.6.2, and the corresponding formulae. We call this method
geometrical synthesis. By this method, the calculation of circuit parameters for the
inductive peaking amplifier with any suitable pole placement is very easy and we will use it
extensively throughout the rest of the book.

2.6.2 Phase Response

We use Eq. 2.2.30 for each of the four poles:

φ = arctan[(ω/ωh - ω1n)/σ1n] + arctan[(ω/ωh + ω1n)/σ1n]
  + arctan[(ω/ωh - ω3n)/σ3n] + arctan[(ω/ωh + ω3n)/σ3n]     (2.6.11)

The phase response plots for the first four groups of the poles are shown in Fig. 2.6.5.
Although the vertical scale ends at -300°, the phase asymptote at high frequencies is
-360° for all 4th-order responses.

[Figure 2.6.5 here: phase φ[°] vs. ω/ωh, from 0 down to -300°, for the same four cases a)-d) and the non-peaking reference (L = 0, Li = 0); circuit inset and curve data as in Fig. 2.6.3]
Fig. 2.6.5: Four-pole L+T peaking circuit phase response: a) MFA; b) MFED; c) Group C;
d) Group A. The non-peaking case, in which both inductors are zero, has a 90° maximum phase
shift; all other cases, being of 4th -order, have a 360° maximum phase shift.


2.6.3 Envelope Delay

We apply Eq. 2.2.34 for each of the four poles:

τe·ωh = σ1n/[σ1n² + (ω/ωh - ω1n)²] + σ1n/[σ1n² + (ω/ωh + ω1n)²]
      + σ3n/[σ3n² + (ω/ωh - ω3n)²] + σ3n/[σ3n² + (ω/ωh + ω3n)²]     (2.6.12)

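At DC, Eq. 2.6.12 collapses to 2σ/|s|² per conjugate pole pair, which explains why each pole group has a slightly different low-frequency delay. A quick check for the MFED pole set (our sketch):

```python
def delay_dc(p1, p3):
    # Eq. 2.6.12 evaluated at omega = 0: each conjugate pair contributes
    # 2 * sigma / (sigma^2 + omega^2)
    return sum(2 * s.real / (s.real ** 2 + s.imag ** 2) for s in (p1, p3))

# MFED (Bessel) normalized poles from the previous section, ~ -0.61
tau_mfed = delay_dc(complex(-4.7317, 1.4167), complex(-3.4376, 4.3419))
```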
The corresponding plots for the first four groups of poles are displayed in Fig. 2.6.6.
Note that the value of the delay at low frequency is slightly different for each pole group.
This is due to the different normalization of each circuit.

[Figure 2.6.6 here: normalized envelope delay τe·ωh vs. ω/ωh, from 0 down to -1.0, for the same four cases a)-d) and the non-peaking reference (L = 0, Li = 0); circuit inset and curve data as in Fig. 2.6.3]
Fig. 2.6.6: Four-pole L+T peaking circuit envelope delay: a) MFA; b) MFED; c) Group C;
d) Group A. For the non-peaking case the normalized envelope delay at DC is -1; all other
cases have a larger bandwidth and consequently a lower delay. Note the MFED flatness up to
nearly 3.3 ωh.

2.6.4 Step Response

A general expression of a four-pole normalized complex frequency response is:

F(s) = s1·s2·s3·s4 / [(s - s1)(s - s2)(s - s3)(s - s4)]     (2.6.13)

The ℒ transform of the step response is obtained by multiplying this function by the
unit step operator 1/s, resulting in a new, five-pole function:

G(s) = s1·s2·s3·s4 / [s (s - s1)(s - s2)(s - s3)(s - s4)]     (2.6.14)


and to obtain the step response in the time domain, we calculate the ℒ⁻¹ transform:

g(t) = ℒ⁻¹{G(s)} = Σ (i = 0 to 4) res_i {G(s) e^(st)}     (2.6.15)

The analytical calculation is a pure routine of algebra, but it would require some 8
pages to present. Readers who are interested in the details can find it in Appendix 2.3.
Here we will write only the result:

g(t) = 1 + (K1/ω1) e^(σ1 t) √[(σ1 A - ω1² B)² + ω1² (A + σ1 B)²] · sin(ω1 t + θ1)
         + (K3/ω3) e^(σ3 t) √[(σ3 C + ω3² B)² + ω3² (C - σ3 B)²] · sin(ω3 t + θ3)     (2.6.16)

where:

A = (σ1 - σ3)² + (ω3² - ω1²)        K1 = (σ3² + ω3²) / (A² + ω1² B²)
B = 2 (σ1 - σ3)
C = (σ1 - σ3)² + (ω1² - ω3²)        K3 = (σ1² + ω1²) / (C² + ω3² B²)

θ1 = arctan[-ω1 (A + σ1 B) / (σ1 A - ω1² B)]     θ3 = arctan[-ω3 (C - σ3 B) / (σ3 C + ω3² B)]     (2.6.17)

Note: The angles θ1 and θ3 calculated by the arctangent will not always give a correct
result. Depending on the pole pattern, one or both will require the addition of π radians, as
we show in Appendix 2.3. In the following relations we give the correct values.

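Instead of evaluating Eq. 2.6.16 symbolically, the residue sum of Eq. 2.6.15 can be carried out numerically; the result must start at zero, settle to one, and for the MFA case overshoot by about 11% (Table 2.6.1). A sketch (ours, Python assumed):

```python
import cmath

def step_response(t, p1, p3):
    """Sum of residues of G(s) e^(st), Eq. 2.6.15, with
    G(s) = s1 s2 s3 s4 / (s (s-s1)(s-s2)(s-s3)(s-s4));
    p1, p3 are one pole of each complex conjugate pair."""
    poles = [p1, p1.conjugate(), p3, p3.conjugate()]
    prod = 1.0 + 0.0j
    for s in poles:
        prod *= s
    g = 1.0                      # residue at s = 0 (the final value)
    for i, si in enumerate(poles):
        den = si
        for j, sj in enumerate(poles):
            if j != i:
                den *= si - sj
        g += (prod * cmath.exp(si * t) / den).real
    return g

mfa1 = complex(-4.1213, 1.7071)  # normalized MFA (Butterworth) poles
mfa3 = complex(-1.7071, 4.1213)
```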
By inserting the normalized values for the poles, and the normalized time t/T, where
T = R(Ci + C), we obtain the following step response functions:

a) MFA response (Butterworth poles)
g(t) = 1 + 2.4142 e^(-4.1213 t/T) sin(1.7071 t/T + 0.7854 + π)
         + 0.9968 e^(-1.7071 t/T) sin(4.1213 t/T - 0.7854 + π)

b) MFED response (Bessel poles)
g(t) = 1 + 5.6632 e^(-4.7317 t/T) sin(1.4167 t/T + 0.4866 + π)
         + 1.6484 e^(-3.4376 t/T) sin(4.3419 t/T + 1.5389)

c) Response for the poles of Group A
g(t) = 1 + 2.7233 e^(-3.8332 t/T) sin(1.7874 t/T + 0.6807 + π)
         + 0.7587 e^(-2.1024 t/T) sin(5.0013 t/T - 1.2250 + π)

d) Response for the poles of Group C
g(t) = 1 + 5.9875 e^(-3.3252 t/T) sin(0.5863 t/T + 0.2843 + π)
         + 0.7310 e^(-1.7284 t/T) sin(3.8475 t/T - 1.1920 + π)


e) Chebyshev poles with 0.05° phase ripple
g(t) = 1 + 3.0807 e^(-3.7912 t/T) sin(1.8565 t/T + 0.6915 + π)
         + 0.9744 e^(-2.4861 t/T) sin(4.6775 t/T - 1.4289 + π)

f) Gaussian response to -12 dB
g(t) = 1 + 2.8084 e^(-3.3835 t/T) sin(2.0467 t/T + 0.5403 + π)
         + 0.5098 e^(-3.4150 t/T) sin(6.2556 t/T + 1.0598)

g) Double 2nd-order Bessel pole pairs
g(t) = 1 - e^(-4.5 t/T) [2 sin(2.5981 t/T + 0.5236) + 4 sin(2.5981 t/T) cos(0.5236)
         - 10.3923 (t/T) cos(2.5981 t/T + 0.5236)]

The last response was calculated by convolution. We will not repeat it here, because
it is a very lengthy procedure and it has already been published [Ref. 2.5]. As with any
function with repeating poles, the resulting step response is slow compared with the
function with the same number of poles but in an optimized pattern. Fig. 2.6.7 shows the
step responses of a), b), c) and d) ; Fig. 2.6.8 shows the step responses of e), f) and g) .
The data for the four-pole L+T peaking circuit are given in Table 2.6.1.

Table 2.6.1
response type             |   k    |   m    |   n    |  Cb/C  |  ηb  |  ηr  | δ [%]
MFA                       | 0.5469 | 1.7071 | 4.8283 | 0.0732 | 4.46 | 4.02 | 10.9
MFED (4th-order Bessel)   | 0.5718 | 0.6488 | 3.4608 | 0.0681 | 3.47 | 3.46 | 0.90
Group A                   | 0.5333 | 1.6647 | 6.0000 | 0.1667 | 4.40 | 4.08 | 1.90
Group C                   | 0.5901 | 1.4877 | 6.0000 | 0.0644 | 4.72 | 4.15 | 6.20
Chebyshev 0.05°           | 0.5358 | 1.1343 | 4.7902 | 0.2088 | 4.09 | 3.52 | 3.56
Gaussian to -12 dB        | 0.4904 | 1.0888 | 6.4357 | 0.1554 | 3.71 | 3.43 | 0.47
Double 2nd-order Bessel   | 0.5000 | 0.3333 | 2.0000 | 0.0833 | 2.92 | 2.96 | 0.44
Table 2.6.1: Four-pole L+T peaking circuit parameters.

Thus we have concluded the section on four-pole L+T peaking networks. Here we
have discussed the geometrical synthesis in a very elementary way, which can be briefly
explained as follows:

If the main capacitance C loading the T-coil network tap is known, and the
loading resistor R is selected according to the required gain, then, based on the pole data
and the geometrical relations of their real and imaginary parts, we can calculate all the
remaining circuit parameters of the complete L+T network.

As we shall see later in the book, the same procedure can be used to calculate the
circuit parameters of a multi-stage amplifier implementing the peaking networks
described so far.


[Figure 2.6.7 here: step response vo/(ii R) vs. t/R(C+Ci) for the L+T circuit (Li = mR²Ci, L = R²C) and the non-peaking reference (L = 0, Li = 0); curve data: a) m = 1.71, k = 0.55, C/Ci = 4.83, Cb/C = 0.073; b) m = 0.65, k = 0.57, C/Ci = 3.46, Cb/C = 0.068; c) m = 1.49, k = 0.59, C/Ci = 6.00, Cb/C = 0.064; d) m = 1.66, k = 0.53, C/Ci = 6.00, Cb/C = 0.067]

Fig. 2.6.7: Four-pole L+T circuit step response: a) MFA; b) MFED; c) Group C; d) Group A.

[Figure 2.6.8 here: step response vo/(ii R) vs. t/R(C+Ci), same circuit; curve data: e) m = 1.13, k = 0.54, C/Ci = 4.79, Cb/C = 0.076; f) m = 1.09, k = 0.49, C/Ci = 6.43, Cb/C = 0.085; g) m = 0.33, k = 0.50, C/Ci = 2.00, Cb/C = 0.083]

Fig. 2.6.8: Some additional four-pole L+T circuit step responses: e) Chebyshev 0.05°;
f) Gaussian to -12 dB; g) double 2nd-order Bessel. Again, the step response confirms
that repeating the poles is not optimal; compare the rise times of g) and b) in Fig. 2.6.7.


2.7 Two-Pole Shunt Peaking Circuit

In some cases, when a single amplifying stage is sufficient, we can use the very
simple and efficient shunt peaking circuit shown in Fig. 2.7.1. This is equivalent to
Fig. 2.1.3, but with the output taken from the capacitor C. Because shunt peaking networks
are very simple to make and their bandwidth extension and risetime improvement surpass
those of their series peaking counterparts, they have found very broad application in single
stage amplifiers, e.g., in TV receivers.
As the following analysis will show, the two-pole shunt peaking circuit in Fig. 2.7.1
also has one zero. Likewise, the three-pole shunt peaking circuit, which we will discuss in
Sec. 2.8, has two zeros. These zeros prevent us from using the geometrical synthesis
method for shunt peaking circuits.
[Figure 2.7.1 here: the current source ii feeds the capacitor C (current iC) in parallel with the series branch of L and R (current iL); the output vo is taken across C]
Fig. 2.7.1: A shunt peaking network. It has two poles and one zero.

If we were to compensate the zeros (by intentionally adding a network containing
poles coincident with those zeros), we could still use the geometrical synthesis, but that
would spoil the optimum performance of the amplifier. Whilst there are no restrictions on
using the shunt peaking circuit in a multi-stage amplifier, the total bandwidth and rise time
improvement of such an amplifier is lower than if the complete amplifier were designed on
the basis of the geometrical synthesis. Sometimes a shunt peaking circuit amplifier,
designed independently, may be used as an addition to a multi-stage amplifier with series
peaking and T-coil peaking circuits in order to shape the starting portion of the amplified
pulse to achieve a more symmetrical step response.
The output voltage Vo of the network in Fig. 2.7.1 is:

Vo = Ii·Z = Ii · (R + jωL)·[1/(jωC)] / [R + jωL + 1/(jωC)]     (2.7.1)

This gives the input impedance:

Z(ω) = (R + jωL) / (1 + jωRC - ω²LC)     (2.7.2)

We introduce the parameters m and ωh:

L = mR²C   and   ωh = 1/RC


We insert these parameters into Eq. 2.7.2 to obtain:

Z(ω) = (R + jω mR²C) / (1 + jωRC - ω² mR²C²) = R [1 + j m (ω/ωh)] / [1 + j (ω/ωh) - m (ω/ωh)²]     (2.7.3)

2.7.1 Frequency Response

To obtain the normalized frequency response magnitude, we normalize the
impedance (R = 1), then square the real and imaginary parts in both the numerator and the
denominator, and take the square root of the whole fraction:

|F(ω)| = √{ [1 + m²(ω/ωh)²] / [1 + (1 - 2m)(ω/ωh)² + m²(ω/ωh)⁴] }     (2.7.4)
=h =h

We shall first find the value of the parameter m for the MFA response. In this case
the factors at (ω/ωh)² in the numerator and in the denominator must be equal [Ref. 2.4]:

1 - 2m = m²   ⇒   m = √2 - 1 = 0.4142     (2.7.5)

If we put this value into Eq. 2.7.4 we obtain:

|F(ω)| = √{ [1 + 0.1716 (ω/ωh)²] / [1 + 0.1716 (ω/ωh)² + 0.1716 (ω/ωh)⁴] }     (2.7.6)

The corresponding plot is shown in Fig. 2.7.2, curve a.


For the MFED response, we have to first find the envelope delay.

2.7.2 Phase Response And Envelope Delay

We calculate the value of the parameter m for the MFED response from the
envelope delay, which we derive from the phase angle φ(ω):

φ(ω) = arctan [ Im{F(ω)} / Re{F(ω)} ]

where F(ω) can be derived from Eq. 2.7.3 by making the denominator real. This is done by
multiplying both the numerator and the denominator by the complex conjugate of the
denominator: 1 - m(ω/ωh)² - j(ω/ωh).


The result is:

F(ω) = [1 + j m (ω/ωh)] · [1 - m(ω/ωh)² - j(ω/ωh)] / { [1 - m(ω/ωh)²]² + (ω/ωh)² } = N/D     (2.7.7)

Next we multiply the brackets in the numerator and separate the real and imaginary parts:

N = 1 + j [ (m - 1)(ω/ωh) - m²(ω/ωh)³ ]     (2.7.8)

By dividing the imaginary part of F(ω) by its real part, D cancels out from the phase:

φ = arctan [ Im{N} / Re{N} ] = arctan [ (m - 1)(ω/ωh) - m²(ω/ωh)³ ]     (2.7.9)

By inserting m = 0.4142 (Eq. 2.7.5), we would get the phase response of the MFA
case, as plotted in Fig. 2.7.3, curve a. But for the MFED response, the correct value of m
must be found from the envelope delay, so we must calculate dφ/dω from Eq. 2.7.9:

τe = dφ/dω = (d/dω) { arctan [ (m - 1)(ω/ωh) - m²(ω/ωh)³ ] }

   = [ (m - 1) - 3m²(ω/ωh)² ] / { 1 + [ (m - 1)(ω/ωh) - m²(ω/ωh)³ ]² } · (1/ωh)     (2.7.10)

Let us square the bracket in the denominator, factor out (m - 1) and multiply both sides of
the equation by ωh in order to obtain the normalized envelope delay:

τe·ωh = (m - 1) [ 1 - 3m²(ω/ωh)²/(m - 1) ] / { 1 + (m - 1)²(ω/ωh)² - 2m²(m - 1)(ω/ωh)⁴ + m⁴(ω/ωh)⁶ }     (2.7.11)

A maximally flat envelope delay is achieved when the factors at (ω/ωh)² in the
numerator and in the denominator are equal [Ref. 2.4]. Taking the sign into account, we
have:

3m²/(1 - m) = (m - 1)²   ⇒   3m² = (m² - 2m + 1)(1 - m)     (2.7.12)

Finally:

m³ + 3m - 1 = 0   ⇒   m = 0.3222     (2.7.13)

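The cubic of Eq. 2.7.13 has a single real root, easily found by Newton iteration; a sketch (ours):

```python
# solve m^3 + 3m - 1 = 0 (Eq. 2.7.13) by Newton's method;
# f'(m) = 3m^2 + 3 > 0 everywhere, so the iteration is well behaved
m = 0.5
for _ in range(60):
    m -= (m ** 3 + 3 * m - 1) / (3 * m ** 2 + 3)
# m ~ 0.3222, the MFED shunt-peaking coefficient
```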

The only real solution of this equation is m = 0.3222. If we put it into Eq. 2.7.4 for
the frequency response, Eq. 2.7.9 for the phase response and Eq. 2.7.11 for the envelope
delay, we can make the plots b in Fig. 2.7.2, 2.7.3, and 2.7.4.
Now we have enough data to calculate both poles and the zero for the MFA and
MFED cases, which we shall also need to calculate the step response. But we still have to
find the value of m for the critical damping (CD) case. We can derive it from the fact that
for CD all the poles are real and equal. To find the poles, we take Eq. 2.7.3, divide it by R,
and replace the normalized frequency j(ω/ωh) with the complex variable s:

F(s) = Z(s)/R = (1 + ms) / (1 + s + ms²) = (s + 1/m) / (s² + s/m + 1/m)     (2.7.14)

We obtain the normalized poles from the denominator of F(s):

s1n,2n = σ1n ± jω1n = -1/(2m) ± [1/(2m)]√(1 - 4m) = [1/(2m)] ( -1 ± j√(4m - 1) )     (2.7.15)

and the normalized real zero from the numerator:

s3n = σ3n = -1/m     (2.7.16)

Since the poles are usually complex, we have written the complex form in the
solution of the quadratic equation (Eq. 2.7.15). However, for CD the solution must be real,
so the expression under the square root must be zero, and this gives m = 0.25. The curves
corresponding to CD in Fig. 2.7.2, 2.7.3, and 2.7.4 are marked with the letter c.
Note that, in spite of the higher cut off frequency, all the curves have the same high
frequency asymptote as the first-order response.
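Eq. 2.7.15 and 2.7.16 translate directly into code; for the CD value m = 0.25 the discriminant vanishes, and the double pole at -2 with the zero at -4 drops out. A sketch (ours):

```python
import cmath

def shunt_poles_zero(m):
    # normalized poles and zero per Eq. 2.7.15 / 2.7.16 (omega_h = 1/RC = 1)
    disc = cmath.sqrt(1 - 4 * m)
    s1 = (-1 + disc) / (2 * m)
    s2 = (-1 - disc) / (2 * m)
    s3 = -1.0 / m                # the real zero
    return s1, s2, s3

cd = shunt_poles_zero(0.25)      # critical damping: s1 = s2 = -2, zero at -4
```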
[Figure 2.7.2 here: log-log plot of Vo/(Ii R) vs. ω/ωh for the shunt peaking circuit (L = mR²C, ωh = 1/RC) and the non-peaking reference (L = 0); curves: a) m = 0.41; b) m = 0.32; c) m = 0.25]
Fig. 2.7.2: Shunt peaking circuit frequency response: a) MFA; b) MFED; c) CD. As usual, the
non-peaking case (L = 0) is the reference. The system zero causes the high-frequency asymptote
to be the same as for the non-peaking system.


[Figure 2.7.3 here: phase φ[°] vs. ω/ωh, from -10° down to -90°, for the same three cases a)-c) and the non-peaking reference (L = 0)]
Fig. 2.7.3: Shunt peaking circuit phase response: a) MFA; b) MFED; c) CD. The non-peaking
case (L = 0) is the reference. The system zero causes the high frequency phase to approach -90°,
the same as for the non-peaking system.

[Figure 2.7.4 here: normalized envelope delay τe·ωh vs. ω/ωh, from 0 down to -1.0, for the same three cases a)-c) and the non-peaking reference (L = 0)]
Fig. 2.7.4: Shunt peaking circuit envelope delay: a) MFA; b) MFED; c) CD. The non-peaking
case (L = 0) is the reference. The peaking systems have a higher bandwidth, and consequently
a lower delay at DC, than the non-peaking system.


2.7.3 Step Response

For the calculated values of m, the poles (Eq. 2.7.15) and the zero (Eq. 2.7.16) are:

a) for the MFA response:
   the poles s1n,2n = -1.2071 ± j0.8105 and the zero s3n = σ3n = -2.4142
b) for the MFED response:
   the poles s1n,2n = -1.5518 ± j0.5374 and the zero s3n = σ3n = -3.1037
c) for the CD response:
   the double pole s1n,2n = σ1n = -2 and the zero s3n = σ3n = -4

With these data we can calculate the step response. First we calculate the MFA
and MFED responses, where in both cases we have two complex conjugate poles and one
real zero. The general expression for the frequency response is:

F(s) = -s1·s2·(s - s3) / [s3·(s - s1)(s - s2)]     (2.7.17)

We multiply this equation by the unit step operator 1/s and obtain a new function:

G(s) = -s1·s2·(s - s3) / [s·s3·(s - s1)(s - s2)]     (2.7.18)

To calculate the step response in the time domain we take the ℒ⁻¹ transform:

g(t) = ℒ⁻¹{G(s)} = Σ res { -s1·s2·(s - s3) e^(st) / [s·s3·(s - s1)(s - s2)] }     (2.7.19)

We have three residues:

res0 = lim (s→0) s · { -s1·s2·(s - s3) e^(st) / [s·s3·(s - s1)(s - s2)] } = s1·s2·s3 / (s1·s2·s3) = 1

res1 = lim (s→s1) (s - s1) · { -s1·s2·(s - s3) e^(st) / [s·s3·(s - s1)(s - s2)] } = [-s2·(s1 - s3) / (s3·(s1 - s2))] e^(s1 t)

res2 = lim (s→s2) (s - s2) · { -s1·s2·(s - s3) e^(st) / [s·s3·(s - s1)(s - s2)] } = [-s1·(s2 - s3) / (s3·(s2 - s1))] e^(s2 t)     (2.7.20)

Since the procedure is the same as for the previous circuits, we shall omit some
intermediate expressions. After inserting all the pole components, the sum of the residues is:

g(t) = 1 + [(A + jB)/(2jB)] e^(σ1 t) e^(jω1 t) - [(A - jB)/(2jB)] e^(σ1 t) e^(-jω1 t)     (2.7.21)

where A = ω1² + σ1(σ1 - σ3) and B = ω1·σ3. After factoring out e^(σ1 t)/B we obtain:

g(t) = 1 + [e^(σ1 t)/B] · [ (A + jB)/(2j) e^(jω1 t) - (A - jB)/(2j) e^(-jω1 t) ]     (2.7.22)


The expression in parentheses can be simplified by sorting the real and imaginary parts:

(A + jB)/(2j) e^(jω1 t) - (A - jB)/(2j) e^(-jω1 t)
   = A [e^(jω1 t) - e^(-jω1 t)]/(2j) + B [e^(jω1 t) + e^(-jω1 t)]/2
   = A sin ω1 t + B cos ω1 t
   = √(A² + B²) sin(ω1 t + β)     (2.7.23)

where:

β = arctan (B/A)

Again we have written β in order not to confuse it with the pole angle θ. And here, too, we
will have to add π radians to β wherever appropriate, owing to the π period of the
arctangent function. By entering Eq. 2.7.23 into Eq. 2.7.22 and inserting the poles, we
obtain the general expression:

g(t) = 1 + { √( [ω1² + σ1(σ1 - σ3)]² + ω1²σ3² ) / (ω1·σ3) } e^(σ1 t) sin(ω1 t + β)     (2.7.24)

where:

β = arctan { ω1·σ3 / [ω1² + σ1(σ1 - σ3)] } + π     (2.7.25)

We now need the general expression of the step response for the CD case, where
we have a double real pole s1 and a real zero s3. We start from the normalized frequency
response function:

F(s) = -s1²·(s - s3) / [s3·(s - s1)²]     (2.7.26)

which must be multiplied by the unit step operator 1/s, thus obtaining:

G(s) = -s1²·(s - s3) / [s·s3·(s - s1)²]     (2.7.27)

There are two residues:

res0 = lim (s→0) s · { -s1²·(s - s3) e^(st) / [s·s3·(s - s1)²] } = s1²·s3 / (s1²·s3) = 1

res1 = lim (s→s1) (d/ds) { (s - s1)² · [ -s1²·(s - s3) e^(st) / (s·s3·(s - s1)²) ] }
     = [ s1·t·(1 - s1/s3) - 1 ] e^(s1 t)     (2.7.28)

If we express the poles in the second residue with their real and imaginary parts and
take the sum of both residues, we obtain:

g(t) = 1 + e^(σ1 t) [ σ1·t·(1 - σ1/σ3) - 1 ]     (2.7.29)


Finally we insert the normalized numerical values for the poles, the zero and the
time variable t/T = t/RC. The step responses are:

a) for MFA (σ1n = -1.2071, ω1n = ±0.8105, σ3n = -2.4142):

g(t) = 1 + 1.0804 e^(-1.2071 t/T) sin(0.8105 t/T + 1.1826 + π)

b) for MFED (σ1n = -1.5518, ω1n = ±0.5374, σ3n = -3.1037):

g(t) = 1 + 1.6170 e^(-1.5518 t/T) sin(0.5374 t/T + 0.6668 + π)

c) for CD (σ1n = -2, σ3n = -4):

g(t) = 1 - e^(-2 t/T) (t/T + 1)

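The three closed-form responses above are simple to tabulate for plotting; as a sanity check, each must start at zero, and the CD case must approach the final value without overshoot. A sketch (ours, using the printed coefficients):

```python
import math

def g_mfa(t):
    # case a) above, t normalized to T = RC
    return 1 + 1.0804 * math.exp(-1.2071 * t) * math.sin(0.8105 * t + 1.1826 + math.pi)

def g_cd(t):
    # case c) above: double pole, no overshoot
    return 1 - math.exp(-2 * t) * (t + 1)
```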
The step response plots are shown in Fig. 2.7.5. By comparing them with those for
the two-pole series peaking circuit in Fig. 2.2.8 we note that the step response derivative at
time > œ ! is not zero, in contrast to those of the series peaking circuit. Instead the
responses look more like the step response of the non-peaking first-order case. The reason
for this is in the difference between the number of poles and zeros, which is only 1 in favor
of the poles in the shunt peaking circuit.

[Figure 2.7.5 here: step response vo/(ii R) vs. t/RC for the shunt peaking circuit (L = mR²C) and the non-peaking reference (L = 0); curves: a) m = 0.41; b) m = 0.32; c) m = 0.25]
Fig. 2.7.5: Shunt peaking circuit step response: a) MFA; b) MFED; c) CD. The non-peaking
case (L = 0) is the reference. The difference between the number of poles and the number of
zeros is only 1 for the shunt peaking systems, therefore the starting slope of the step response
is similar to that of the non-peaking first-order system.


Fig. 2.7.6 shows the pole placements for the three cases. Note the placement of the
zero, which is farther from the origin for those systems which have the poles with lower
imaginary part.


[Figure 2.7.6 here: three s-plane plots with s1,2 = [1/(2mRC)](-1 ± √(1 - 4m)) and s3 = -1/(mRC): a) m = 0.4142, θ = 135°, zero at -2.41/RC; b) m = 0.3222, θ = 150°, zero at -3.104/RC; c) m = 0.2500, θ = 180°, double pole at -2/RC and zero at -4/RC]

Fig. 2.7.6: Shunt peaking circuit placement of the poles and the zero: a) MFA; b) MFED;
c) CD. Note the position of the zero s3 at the far left of the real axis. Although far from the
poles, its influence on the response is notable in each case.

We conclude the discussion with Table 2.7.1, in which all the important two-pole
shunt peaking circuit parameters are listed.

Table 2.7.1
response type |   m    |  ηb  |  ηr  | δ [%]
a) MFA        | 0.4142 | 1.72 | 1.80 | 3.08
b) MFED       | 0.3222 | 1.58 | 1.57 | 0.41
c) CD         | 0.2500 | 1.44 | 1.41 | 0.00
Table 2.7.1: Two-pole shunt peaking circuit parameters.


2.8 Three-Pole Shunt Peaking Circuit

If we consider the self capacitance CL of the coil L, the two-pole shunt peaking
circuit acquires an additional pole and an additional zero. If the value of CL cannot be
neglected, it must be in a well defined proportion to the other circuit components in order
to achieve optimum performance in the MFA or MFED sense. Fig. 2.8.1 shows the
corresponding three-pole, two-zero shunt peaking circuit.
[Figure 2.8.1 here: the current source ii drives the output node vo, which is loaded by C to ground and by the branch of R in series with the parallel combination of L and its self capacitance CL]
Fig. 2.8.1: The shunt peaking circuit has three poles and two zeros.

The network impedance is:

Z(ω) = 1 / { jωC + 1 / [ R + 1/(jωCL + 1/(jωL)) ] }

     = (R + jωL - ω² L CL R) / [ jωCR (1 - ω² L CL) - ω² L (C + CL) + 1 ]     (2.8.1)

Let us introduce the following parameters:

L = mR²C,   ωh = 1/RC,   CL = nC     (2.8.2)

which we insert into Eq. 2.8.1. Then:

Z(ω) = R [ 1 - mn(ω/ωh)² + jm(ω/ωh) ] / { 1 - m(1 + n)(ω/ωh)² + j(ω/ωh) [ 1 - mn(ω/ωh)² ] }     (2.8.3)

2.8.1 Frequency Response

The system transfer function can be obtained easily from Z(ω). We first replace the
normalized imaginary frequency j(ω/ωh) by the complex frequency variable s. Then, by
realizing that the output voltage is equal to the product of the input current and the system
impedance, Vo = Ii·Z(s), we can express the transfer function by normalizing the output
to the final value at DC:

F(s) = Vo/(Ii·R) = Z(s)/R     (2.8.4)


With a little rearranging we obtain:

F(s) = [ s² + s/n + 1/(mn) ] / [ s³ + ((1 + n)/n) s² + s/(mn) + 1/(mn) ]     (2.8.5)

The magnitude |F(ω)| = √( F(ω)·F*(ω) ) can be obtained more easily from the
impedance magnitude. We start from Eq. 2.8.3, square the imaginary and real parts in the
numerator and in the denominator and take the square root of the whole fraction:

|Z(ω)| = R √( { [1 - mn(ω/ωh)²]² + m²(ω/ωh)² } / { [1 - m(1 + n)(ω/ωh)²]² + (ω/ωh)² [1 - mn(ω/ωh)²]² } )     (2.8.6)

Then we square the brackets and divide by R to obtain the normalized expression:

|F(q)| = √( { 1 + [m² - 2mn] q² + m²n² q⁴ } / { 1 + [1 - 2m(1 + n)] q² + [m²(1 + n)² - 2mn] q⁴ + m²n² q⁶ } )     (2.8.7)

and here we have replaced the normalized frequency ω/ωh by the symbol q in order to be
able to write the equation on a single line.
For the MFA response the numerator and denominator factors at the same powers
of (ω/ωh) in Eq. 2.8.7 must be equal [Ref. 2.4]. Thus we have two equations:

m² - 2mn = 1 - 2m(1 + n)
m²n² = m²(1 + n)² - 2mn     (2.8.8)

from which we calculate the values of m and n for the MFA response:

m = 0.414   and   n = 0.354     (2.8.9)

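The system of Eq. 2.8.8 reduces by hand to m² = 1 - 2m (so m = √2 - 1, as in the two-pole case) and n = m/(2(1 - m)); a quick numerical check (our sketch):

```python
import math

m = math.sqrt(2) - 1             # from m^2 - 2mn = 1 - 2m(1+n)  =>  m^2 = 1 - 2m
n = m / (2 * (1 - m))            # from m^2 n^2 = m^2 (1+n)^2 - 2mn

# both original MFA conditions of Eq. 2.8.8 must hold exactly
eq1 = m * m - 2 * m * n - (1 - 2 * m * (1 + n))
eq2 = m * m * n * n - (m * m * (1 + n) ** 2 - 2 * m * n)
```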
For the MFED response the procedure for calculating the parameters 7 and 8 can
be similar to that for the two-pole shunt peaking circuit: we would first calculate the
formula for the envelope delay and equate the factors at the same powers of Ð=Î=h Ñ in the
numerator and the denominator, etc. But, with the increasing number of poles, the
calculation becomes more complicated. It is much simpler to compare the coefficients of
the characteristic polynomial of the complex frequency transfer function Eq. 2.8.5.
The numerical values of the coefficients of the 3rd -order Bessel polynomial, sorted
by the falling powers of =, are: ", ', "& and "& again. Thus, we have two equations:
"8 "
œ' and œ "& (2.8.10)
8 78
from which we get:
" "
8œ and 7œ (2.8.11)
& $
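Both pairs of conditions can be checked numerically; the short Python sketch below (our addition, not part of the original text) solves them in closed form. In the first MFA equation the 2mn terms cancel, leaving m^2 + 2m - 1 = 0, and the second one then gives n = m/(2 - 2m):

```python
import numpy as np

# MFA (Eq. 2.8.8): m^2 + 2m - 1 = 0  and  n = m/(2 - 2m)
m_mfa = np.sqrt(2) - 1
n_mfa = m_mfa / (2 - 2*m_mfa)

# MFED (Eq. 2.8.10): (1 + n)/n = 6  and  1/(mn) = 15
n_mfed = 1/5
m_mfed = 1/(15*n_mfed)

print(round(m_mfa, 3), round(n_mfa, 3))    # 0.414 0.354
print(round(m_mfed, 3), round(n_mfed, 3))  # 0.333 0.2
```

Substituting the MFA pair back into both lines of Eq. 2.8.8 reproduces the equalities exactly, confirming the rounded values 0.414 and 0.354.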


Compare these values to those from the work of V. L. Krejcer [Ref. 2.4, loc. cit.]. His values for MFED responses are:

    m = 0.35 \qquad \text{and} \qquad n = 0.22

Krejcer also calculated the parameters for a "special" case circuit (SPEC):

    m = 0.45 \qquad \text{and} \qquad n = 0.22     (2.8.12)

By inserting the values of the parameters from Eq. 2.8.9 – 2.8.12 into Eq. 2.8.7, we can calculate the corresponding frequency responses. However, for the phase, envelope delay, and step response we also need to know the values of the poles and zeros. Since we know all the values of the parameters m and n, we can use Eq. 2.8.5. We equate the denominator D of F(s) to zero and find the roots, which are the three poles of F(s). Similarly, by equating the numerator N of F(s) to zero we calculate the two zeros (for readers less experienced in mathematics we have reported the general solutions for polynomials of first, second and third order in Appendix 2.1).

a) MFA response ( m = 0.414 and n = 0.354 ):

    D \Rightarrow s^3 + 3.825\,s^2 + 6.823\,s + 6.823 = 0
    The poles:  s_{1,2} = \sigma_1 \pm j\omega_1 = -0.850 \pm j\,1.577
                s_3 = \sigma_3 = -2.125
    N \Rightarrow s^2 + 2.825\,s + 6.823 = 0
    The zeros:  s_{4,5} = \sigma_5 \pm j\omega_5 = -1.412 \pm j\,2.197

b) MFED response ( m = 0.333 and n = 0.200 ):

    D \Rightarrow s^3 + 6\,s^2 + 15\,s + 15 = 0
    The poles:  s_{1,2} = \sigma_1 \pm j\omega_1 = -1.8389 \pm j\,1.7544
                s_3 = \sigma_3 = -2.3222
    N \Rightarrow s^2 + 5\,s + 15 = 0
    The zeros:  s_{4,5} = \sigma_5 \pm j\omega_5 = -2.500 \pm j\,2.958

c) Special case ( m = 0.45 and n = 0.22 ):

    D \Rightarrow s^3 + 5.545\,s^2 + 10.101\,s + 10.101 = 0
    The poles:  s_{1,2} = \sigma_1 \pm j\omega_1 = -1.035 \pm j\,1.355
                s_3 = \sigma_3 = -3.475
    N \Rightarrow s^2 + 4.545\,s + 10.101 = 0
    The zeros:  s_{4,5} = \sigma_5 \pm j\omega_5 = -2.237 \pm j\,2.222
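These roots are easy to verify with NumPy; the sketch below (our addition) recomputes the MFA poles and zeros of Eq. 2.8.5 directly from m and n:

```python
import numpy as np

m, n = 0.414, 0.354
poles = np.roots([1, (1 + n)/n, 1/(m*n), 1/(m*n)])   # denominator D of Eq. 2.8.5
zeros = np.roots([1, 1/n, 1/(m*n)])                  # numerator N of Eq. 2.8.5

real_pole = poles[np.abs(poles.imag) < 1e-9].real
print(np.round(real_pole, 3))   # approximately -2.125
print(np.round(zeros, 3))       # approximately -1.412 +/- j 2.197
```

The same three-line check, with the appropriate m and n, reproduces the MFED and SPEC lists as well.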

By inserting the values of 7 and 8 in Eq. 2.8.7 we can calculate the frequency
response magnitude of the three cases. The resulting plots are shown in Fig. 2.8.2. Note the
high frequency asymptote, which is the same as for the non-peaking single-pole case.


Fig. 2.8.2: Three-pole shunt peaking circuit frequency response V_o/(I_i R) versus \omega/\omega_h, with L = m R^2 C, C_L bridging L and \omega_h = 1/RC: a) MFA (m = 0.414, C_L/C = 0.354); b) MFED (m = 0.333, C_L/C = 0.200); c) SPEC (m = 0.450, C_L/C = 0.220). The non-peaking case (L = 0) is the reference. The difference between the number of poles and the number of zeros is only 1 for these peaking systems, therefore the ending slope of the frequency response is similar to that of the non-peaking system.

2.8.2 Phase Response

We use Eq. 2.2.30, taken positive for each pole and negative for each zero, and sum the terms:

    \varphi = \arctan\frac{\omega/\omega_h - \omega_1}{\sigma_1}
            + \arctan\frac{\omega/\omega_h + \omega_1}{\sigma_1}
            + \arctan\frac{\omega/\omega_h}{\sigma_3}
            - \arctan\frac{\omega/\omega_h - \omega_5}{\sigma_5}
            - \arctan\frac{\omega/\omega_h + \omega_5}{\sigma_5}     (2.8.13)

By entering the numerical values of poles and zeros we obtain the phase response
equations for each case. In Fig. 2.8.3 the corresponding plots are shown.
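Equivalently, the phase can be evaluated directly as the argument of F(j\omega) from Eq. 2.8.5; a quick check of the MFA case (our sketch, not part of the original text):

```python
import numpy as np

m, n = 0.414, 0.354
x = np.logspace(-2, 1, 400)       # omega/omega_h
s = 1j*x
F = (s**2 + s/n + 1/(m*n)) / (s**3 + s**2*(1 + n)/n + s/(m*n) + 1/(m*n))
phi = np.degrees(np.unwrap(np.angle(F)))
# phase starts near 0 degrees and heads toward the -90 degree asymptote
print(round(phi[0], 1), round(phi[-1], 1))
```

The high-frequency limit is -90° because the system has one more pole than zeros.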

2.8.3 Envelope Delay

We use Eq. 2.2.34, adding a term for each pole and subtracting one for each zero:

    \tau_e \omega_h = \frac{\sigma_1}{\sigma_1^2 + (\omega/\omega_h - \omega_1)^2}
                    + \frac{\sigma_1}{\sigma_1^2 + (\omega/\omega_h + \omega_1)^2}
                    + \frac{\sigma_3}{\sigma_3^2 + (\omega/\omega_h)^2}
                    - \frac{\sigma_5}{\sigma_5^2 + (\omega/\omega_h - \omega_5)^2}
                    - \frac{\sigma_5}{\sigma_5^2 + (\omega/\omega_h + \omega_5)^2}     (2.8.14)


Again we insert the numerical values for poles and zeros in Eq. 2.8.14 to plot the
envelope delay as shown in Fig. 2.8.4. As we have explained in Fig. 2.2.6, there is an
envelope advance (owed to system zeros) in the high frequency range.
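The envelope delay can also be obtained numerically as the derivative of the phase computed from Eq. 2.8.5, which serves as a cross-check of Eq. 2.8.14 (our sketch, not part of the original text):

```python
import numpy as np

m, n = 0.414, 0.354                 # MFA case
x = np.linspace(0.001, 10, 20000)   # omega/omega_h
s = 1j*x
F = (s**2 + s/n + 1/(m*n)) / (s**3 + s**2*(1 + n)/n + s/(m*n) + 1/(m*n))
tau_e = np.gradient(np.unwrap(np.angle(F)), x)   # tau_e * omega_h, phase in rad
print(round(tau_e[0], 3))           # about -0.586 at DC for the MFA case
```

The DC value equals the sum of the pole contributions minus the zero contributions of Eq. 2.8.14 evaluated at \omega = 0.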

Fig. 2.8.3: Three-pole shunt peaking circuit phase response (0 to -90°, \omega/\omega_h from 0.1 to 10): a) MFA (m = 0.414, C_L/C = 0.354); b) MFED (m = 0.333, C_L/C = 0.200); c) SPEC (m = 0.450, C_L/C = 0.220). The non-peaking case (L = 0) is the reference. The system zeros cause the phase response to bounce up at the -90° boundary and then return back.

Fig. 2.8.4: Three-pole shunt peaking circuit envelope delay \tau_e\omega_h versus \omega/\omega_h: a) MFA; b) MFED; c) SPEC (same m and C_L/C values as in Fig. 2.8.3). The non-peaking (L = 0) case is shown as the reference. Note the envelope advance in the high frequency range, owing to the system zeros.


2.8.4 Step Response

For the step response we use the general transfer function for three poles and two zeros, which we shall reshape to suit a solution in the Laplace transform tables:

    F(s) = \frac{-s_1 s_2 s_3\,(s - s_4)(s - s_5)}{s_4 s_5\,(s - s_1)(s - s_2)(s - s_3)}     (2.8.15)

We multiply this function by the unit step operator 1/s and obtain a new equation:

    G(s) = \frac{1}{s} \cdot \frac{-s_1 s_2 s_3\,(s - s_4)(s - s_5)}{s_4 s_5\,(s - s_1)(s - s_2)(s - s_3)}     (2.8.16)

The step response g(t) is fully derived in Appendix 2.3. The result is:

    g(t) = 1 + \frac{G_1}{\omega_1}\,e^{\sigma_1 t/T}\sqrt{A^2 + \omega_1^2 B^2}\,\sin(\omega_1 t/T + \theta_1) - G_3\,e^{\sigma_3 t/T}     (2.8.17)

Besides the usual time normalization T = RC, here we have:

    A = \left[ \sigma_1(\sigma_1 - \sigma_3) + \omega_1^2 \right]\left[ \sigma_1^2 + \omega_1^2 + \sigma_5^2 + \omega_5^2 - 2\sigma_5\sigma_1 \right] - 2\sigma_5\omega_1^2(2\sigma_1 - \sigma_3)

    B = (2\sigma_1 - \sigma_3)\left[ \sigma_1^2 + \omega_1^2 + \sigma_5^2 + \omega_5^2 - 2\sigma_5\sigma_1 \right] - 2\sigma_5\left[ \sigma_1(\sigma_1 - \sigma_3) + \omega_1^2 \right]

    \theta_1 = \arctan\frac{-\omega_1 B}{A} + \pi

    G_1 = \frac{-\sigma_3}{(\sigma_5^2 + \omega_5^2)\left[ (\sigma_1 - \sigma_3)^2 + \omega_1^2 \right]}

    G_3 = \frac{(\sigma_1^2 + \omega_1^2)\left[ (\sigma_3 - \sigma_5)^2 + \omega_5^2 \right]}{(\sigma_5^2 + \omega_5^2)\left[ (\sigma_3 - \sigma_1)^2 + \omega_1^2 \right]}     (2.8.18)

By inserting the numerical values of the poles and zeros we obtain the following relations:

a) MFA response ( m = 0.414 and n = 0.354 ):

    g(t) = 1 + 0.5573\,e^{-0.850\,t/T}\sin(1.577\,t/T + 0.7741 + \pi) - 0.6104\,e^{-2.125\,t/T}

b) MFED response ( m = 0.333 and n = 0.200 ):

    g(t) = 1 + 0.8054\,e^{-1.839\,t/T}\sin(1.754\,t/T - 0.1772 + \pi) - 1.1420\,e^{-2.322\,t/T}

c) Special case ( m = 0.45 and n = 0.22 ):

    g(t) = 1 + 0.8814\,e^{-1.035\,t/T}\sin(1.355\,t/T + 1.0333 + \pi) - 0.2429\,e^{-3.475\,t/T}

The plots of these responses can be seen in Fig. 2.8.5. Because the difference between the number of poles and zeros is only one, the initial slope of the response is the same as for the non-peaking response.
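As a sanity check (our addition, not part of the original text), the three step response expressions can be evaluated numerically; each must start at g(0) = 0 and settle toward 1:

```python
import numpy as np

t = np.linspace(0, 5, 501)   # t/T, with T = RC
g_mfa  = 1 + 0.5573*np.exp(-0.850*t)*np.sin(1.577*t + 0.7741 + np.pi) \
           - 0.6104*np.exp(-2.125*t)
g_mfed = 1 + 0.8054*np.exp(-1.839*t)*np.sin(1.754*t - 0.1772 + np.pi) \
           - 1.1420*np.exp(-2.322*t)
g_spec = 1 + 0.8814*np.exp(-1.035*t)*np.sin(1.355*t + 1.0333 + np.pi) \
           - 0.2429*np.exp(-3.475*t)

for g in (g_mfa, g_mfed, g_spec):
    assert abs(g[0]) < 1e-3          # starts at zero
    assert abs(g[-1] - 1) < 0.01     # settles at the final value 1
```

These two checks pass for all three cases, which confirms the constant terms and phase offsets above.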


Fig. 2.8.5: Three-pole shunt peaking circuit step response versus t/RC: a) MFA (m = 0.414, C_L/C = 0.354); b) MFED (m = 0.333, C_L/C = 0.200); c) SPEC (m = 0.450, C_L/C = 0.220). The non-peaking case (L = 0) is the reference. The initial slope is similar to the non-peaking response, since the difference between the number of system poles and zeros is only one.

We conclude the discussion of the three-pole and two-zero shunt peaking circuit with Table 2.8.1, which gives all the important circuit parameters.

Table 2.8.1: Three-pole shunt peaking circuit parameters.

    response type     m       n      η_b     η_r    δ [%]
    a) MFA          0.414   0.354   1.84    1.87    7.1
    b) MFED         0.333   0.200   1.73    1.75    0.37
    c) SPEC         0.450   0.220   1.85    1.93    7.0


2.9 Shunt–Series Peaking Circuit

In those cases in which the amplifier capacitance may be split into two parts, C_i and C, we can combine the shunt and the series peaking to form the network shown in Fig. 2.9.1, named the shunt–series peaking circuit. The bandwidth of the shunt–series circuit is increased further than can be achieved by either system alone.

Fig. 2.9.1: The shunt–series peaking circuit: the input current i_i drives C_i in parallel with the shunt peaking branch (R in series with L_1) and the series peaking branch (L_2 feeding C, across which the output voltage v_o is taken).

Although the improvement of the bandwidth and rise time in a shunt–series peaking
circuit exceeds that of a pure series or pure shunt peaking circuit, the improvement factors
just barely reach the values offered by the three-pole T-coil circuit, which is analytically
and practically much easier to deal with; not to speak of the improvement offered by the
L+T network, which is substantially greater. This circuit has been extensively treated in
literature [Ref. 2.4, 2.25, 2.26]. The calculation of the step response for this circuit can be
found in Appendix 2.3, so we shall give only the essential relations.
We start the analysis by calculating the input impedance:

    Z_i = \frac{V_i}{I_i} = \frac{V_i}{I_1 + I_2 + I_3}     (2.9.1)

where:

    I_1 = \frac{V_i}{\dfrac{1}{s C_i}} \qquad I_2 = \frac{V_i}{R + s L_1} \qquad I_3 = \frac{V_i}{s L_2 + \dfrac{1}{s C}}     (2.9.2)

By introducing this into Eq. 2.9.1 and eliminating the double fractions we get:

    Z_i = \frac{(R + s L_1)(1 + s^2 L_2 C)}{s C_i (R + s L_1)(1 + s^2 L_2 C) + s^2 L_2 C + 1 + s C (R + s L_1)}     (2.9.3)

The output voltage is:

    V_o = V_i\,\frac{\dfrac{1}{s C}}{s L_2 + \dfrac{1}{s C}} = I_i Z_i\,\frac{1}{s^2 L_2 C + 1}     (2.9.4)

We insert Eq. 2.9.3 for Z_i, cancel the s^2 L_2 C + 1 terms and extract R from the numerator:

    V_o = I_i R\,\frac{1 + s L_1 / R}{s C_i (R + s L_1)(1 + s^2 L_2 C) + s^2 L_2 C + 1 + s C (R + s L_1)}     (2.9.5)

We divide this by I_i R to get the transfer function normalized in amplitude. Also we multiply out all the terms in parentheses and rearrange the result to obtain the canonical form

(divide by the coefficient at the highest power of s), first in the numerator (because it is easy) and then in the denominator:

    \frac{V_o}{I_i R} = \frac{\dfrac{L_1}{R}\left( s + \dfrac{R}{L_1} \right)}
                             {(s C_i R + s^2 C_i L_1)(1 + s^2 L_2 C) + s^2 L_2 C + 1 + s C (R + s L_1)}

    = \frac{\dfrac{L_1}{R}\left( s + \dfrac{R}{L_1} \right)}
           {s^4 L_1 L_2 C C_i + s^3 L_2 C C_i R + s^2 C_i L_1 + s^2 L_2 C + s^2 L_1 C + s C R + s C_i R + 1}

    = \frac{\dfrac{1}{L_2 C C_i R}\left( s + \dfrac{R}{L_1} \right)}
           {s^4 + \dfrac{R}{L_1}\,s^3 + \dfrac{L_2 C + L_1 C_i + L_1 C}{L_1 L_2 C C_i}\,s^2 + \dfrac{R\,(C_i + C)}{L_1 L_2 C C_i}\,s + \dfrac{1}{L_1 L_2 C C_i}}     (2.9.6)

Since we would like to know how much we can improve the bandwidth with respect to the non-peaking circuit (inductances shorted), let us normalize the transfer function to \omega_h = 1/[R\,(C_i + C)] by putting R = 1 and C_i + C = 1. To simplify the expressions, we introduce the following parameters:

    n = \frac{C}{C + C_i} \qquad m_1 = \frac{L_1}{R^2 (C + C_i)} \qquad m_2 = \frac{L_2}{R^2 (C + C_i)}     (2.9.7)

and by using the normalization we have:

    C \Rightarrow n \qquad C_i \Rightarrow (1 - n) \qquad L_1 \Rightarrow m_1 \qquad L_2 \Rightarrow m_2     (2.9.8)

With these expressions the frequency response Eq. 2.9.6 becomes:

    F(s) = \frac{\dfrac{m_1}{m_1 m_2\,n(1-n)}\left( s + \dfrac{1}{m_1} \right)}
                {s^4 + \dfrac{1}{m_1}\,s^3 + \dfrac{m_2 n + m_1}{m_1 m_2\,n(1-n)}\,s^2 + \dfrac{1}{m_1 m_2\,n(1-n)}\,s + \dfrac{1}{m_1 m_2\,n(1-n)}}     (2.9.9)

Now we compare this with the generalized four-pole, one-zero transfer function:

    F(s) = \frac{(-1)^4\,s_1 s_2 s_3 s_4}{-s_5} \cdot \frac{s - s_5}{(s - s_1)(s - s_2)(s - s_3)(s - s_4)}     (2.9.10)

From the numerator it is immediately clear that the zero is:

    s_5 = -\frac{1}{m_1}     (2.9.11)

and the product of the poles is:

    s_1 s_2 s_3 s_4 = \frac{1}{m_1 m_2\,n(1-n)}     (2.9.12)

Next we transform the denominator of Eq. 2.9.10 into a canonical form:

    (s - s_1)(s - s_2)(s - s_3)(s - s_4) = s^4 + a s^3 + b s^2 + c s + d     (2.9.13)


where:

    a = -(s_1 + s_2 + s_3 + s_4)
    b = s_1 s_2 + s_1 s_3 + s_1 s_4 + s_2 s_3 + s_2 s_4 + s_3 s_4
    c = -(s_1 s_2 s_3 + s_1 s_2 s_4 + s_1 s_3 s_4 + s_2 s_3 s_4)
    d = s_1 s_2 s_3 s_4     (2.9.14)

By comparing the coefficients at equal powers of s, we note that:

    a = \frac{1}{m_1} \qquad b = \frac{m_2 n + m_1}{m_1 m_2\,n(1-n)} \qquad c = d = \frac{1}{m_1 m_2\,n(1-n)}     (2.9.15)

For the MFED response the coefficients of the fourth-order Bessel polynomial (which we obtain by running the BESTAP routine in Part 6) have the following numerical values:

    a = 10 \qquad b = 45 \qquad c = 105 \qquad d = 105     (2.9.16)

So, from a:

    m_1 = 0.1     (2.9.17)

From b and c:

    b = (m_2 n + m_1)\,c \qquad \Rightarrow \qquad m_2 = \frac{\dfrac{b}{c} - m_1}{n}     (2.9.18)

From c or d:

    n\,(1 - n) = \frac{1}{c\,m_1 m_2} \qquad \Rightarrow \qquad n - n^2 - \frac{n}{105 \cdot 0.1 \cdot \left( \dfrac{45}{105} - 0.1 \right)} = 0

    \Rightarrow \quad n - n^2 - \frac{n}{3.45} = 0

    \Rightarrow \quad n\left( 1 - n - \frac{1}{3.45} \right) = 0     (2.9.19)

And, since n \neq 0:

    n = 1 - \frac{1}{3.45} = 0.7101     (2.9.20)

With this we calculate m_2:

    m_2 = \frac{\dfrac{b}{c} - m_1}{n} = \frac{\dfrac{45}{105} - 0.1}{0.7101} = 0.4627     (2.9.21)

The component values for the MFED transfer function will be:

    C = n\,(C_i + C) = 0.7101\,(C_i + C)
    C_i = (1 - n)(C_i + C) = 0.2899\,(C_i + C)
    L_1 = m_1 R^2 (C_i + C) = 0.1\,R^2 (C_i + C)
    L_2 = m_2 R^2 (C_i + C) = 0.4627\,R^2 (C_i + C)     (2.9.22)
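The chain of substitutions in Eq. 2.9.17–2.9.21 is compact enough to verify in a few lines (our sketch, not part of the original text):

```python
# 4th-order Bessel polynomial coefficients (Eq. 2.9.16):
a, b, c = 10.0, 45.0, 105.0

m1 = 1/a                        # Eq. 2.9.17
n  = 1 - 1/(c*m1*(b/c - m1))    # Eq. 2.9.19-2.9.20: n = 1 - 1/3.45
m2 = (b/c - m1)/n               # Eq. 2.9.18 and 2.9.21

print(round(m1, 4), round(n, 4), round(m2, 4))   # 0.1 0.7101 0.4627
```

The computed values reproduce the MFED design parameters used in Eq. 2.9.22.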


The MFED poles s_{1-4} (BESTAP routine, Part 6) and the zero s_5 (Eq. 2.9.11) are:

    s_{1,2} = -2.8962 \pm j\,0.8672
    s_{3,4} = -2.1038 \pm j\,2.6574
    s_5 = -10.000     (2.9.23)

For the MFA we can use the same procedure as in Sec. 2.3.1, but since we have a system of 4th order we would get an 8th-order polynomial and, consequently, a complicated set of equations to solve. Instead we shall use a simpler approach (which, by the way, can be used in any other case). We must first consider that our system will have a bandwidth larger than the normalized Butterworth system. Let \eta_b be the proportionality factor between each normalized Butterworth pole and the corresponding shunt–series peaking system pole:

    s_k = \eta_b\,s_{kt}     (2.9.24)

The normalized 4th-order Butterworth system poles (see Part 6, BUTTAP routine) are:

    s_{1t,2t} = -0.3827 \pm j\,0.9239
    s_{3t,4t} = -0.9239 \pm j\,0.3827     (2.9.25)

and the values of the characteristic polynomial coefficients are:

    1.0000 \quad 2.6131 \quad 3.4142 \quad 2.6131 \quad 1.0000     (2.9.26)

The polynomial coefficients a, b, c and d of the shunt–series peaking system are then:

    a = -\eta_b\,(s_{1t} + s_{2t} + s_{3t} + s_{4t}) = \eta_b \cdot 2.6131
    b = \eta_b^2\,(s_{1t} s_{2t} + s_{1t} s_{3t} + s_{1t} s_{4t} + s_{2t} s_{3t} + s_{2t} s_{4t} + s_{3t} s_{4t}) = \eta_b^2 \cdot 3.4142
    c = -\eta_b^3\,(s_{1t} s_{2t} s_{3t} + s_{1t} s_{2t} s_{4t} + s_{1t} s_{3t} s_{4t} + s_{2t} s_{3t} s_{4t}) = \eta_b^3 \cdot 2.6131
    d = \eta_b^4\,s_{1t} s_{2t} s_{3t} s_{4t} = \eta_b^4     (2.9.27)

Since the coefficients a, b, c and d are the same as in Eq. 2.9.15, we get four equations from which we will extract the values of the factors \eta_b, m_1, m_2 and n:

    \eta_b \cdot 2.6131 = \frac{1}{m_1} \qquad \eta_b^2 \cdot 3.4142 = \frac{m_2 n + m_1}{m_1 m_2\,n(1-n)}

    \eta_b^3 \cdot 2.6131 = \frac{1}{m_1 m_2\,n(1-n)} \qquad \eta_b^4 = \frac{1}{m_1 m_2\,n(1-n)}     (2.9.28)

From the last two equations we immediately find the value of \eta_b:

    \eta_b^3 \cdot 2.6131 = \eta_b^4 \qquad \Rightarrow \qquad \eta_b = 2.6131     (2.9.29)

Effectively, the pole multiplication factor is equal to the MFA bandwidth extension. From the first equation of 2.9.28 we can now calculate m_1:

    m_1 = \frac{1}{2.6131\,\eta_b} = \frac{1}{\eta_b^2} = 0.1464     (2.9.30)


From the last equation of 2.9.28 we can establish the relationship between m_2 and n:

    \eta_b^4 = \frac{1}{m_1 m_2\,n(1-n)} \qquad \Rightarrow \qquad m_2 = \frac{1}{\eta_b^2\,n(1-n)}     (2.9.31)

Finally, from the second equation we can derive n:

    \eta_b^2 \cdot 3.4142 = \frac{m_2 n + m_1}{m_1 m_2\,n(1-n)}

    \Rightarrow \quad \eta_b^2 \cdot 3.4142\;m_1 m_2\,n(1-n) = m_2 n + m_1     (2.9.32)

Here we substitute m_1 with 1/\eta_b^2 and m_2 with 1/[\eta_b^2\,n(1-n)]:

    \Rightarrow \quad 3.4142\,\frac{1}{\eta_b^2} = \frac{n}{\eta_b^2\,n(1-n)} + \frac{1}{\eta_b^2}

1/\eta_b^2 cancels, as well as n:

    \Rightarrow \quad 3.4142 = \frac{1}{1 - n} + 1

    \Rightarrow \quad n = 1 - \frac{1}{3.4142 - 1} = 0.5858     (2.9.33)

And now we can calculate m_2:

    m_2 = \frac{1}{\eta_b^2\,n(1-n)} = 0.6036     (2.9.34)
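The same bookkeeping for the MFA case (our sketch, not part of the original text) reproduces the values just derived:

```python
eta_b = 2.6131                    # Eq. 2.9.29, from c and d
m1 = 1/eta_b**2                   # Eq. 2.9.30
n  = 1 - 1/(3.4142 - 1)           # Eq. 2.9.33
m2 = 1/(eta_b**2 * n * (1 - n))   # Eq. 2.9.31 and 2.9.34

print(round(m1, 4), round(n, 4))  # 0.1464 0.5858
print(m2)                         # close to 0.6036
```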

The MFA poles and the zero are:

    s_{1,2} = -2.4142 \pm j\,1.0000
    s_{3,4} = -1.0000 \pm j\,2.4142
    s_5 = -6.8283     (2.9.35)

and the MFA coefficients are:

    1.0000 \quad 6.8284 \quad 23.3132 \quad 46.6260 \quad 46.6260     (2.9.36)

In addition to the numerical values of the parameters m_1, m_2 and n just calculated, we will also show the results for MFED obtained from two other sources, Braude [Ref. 2.25] and Shea [Ref. 2.26, loc. cit.], to illustrate the possibility of different optimization strategies.
All the design parameters and performance indicators are in Table 2.9.1 at the end of this section.


Let us insert these data into Eq. 2.9.9 and calculate the poles and the zero:

a) MFA by PS/EM ( m_1 = 0.1464, m_2 = 0.6036, and n = 0.5858 ):

    s^4 + 6.8284\,s^3 + 23.3132\,s^2 + 46.6260\,s + 46.6260 = 0
    The poles are: s_{1,2} = \sigma_1 \pm j\omega_1 = -2.4142 \pm j\,1.0000
                   s_{3,4} = \sigma_3 \pm j\omega_3 = -1.0000 \pm j\,2.4142
    and the zero:  s_5 = \sigma_5 = -6.8284

b) MFED by PS/EM ( m_1 = 0.1000, m_2 = 0.4627, and n = 0.7101 ):

    s^4 + 10.0000\,s^3 + 44.9933\,s^2 + 104.9863\,s + 104.9863 = 0
    The poles are: s_{1,2} = \sigma_1 \pm j\omega_1 = -2.8976 \pm j\,0.8649
                   s_{3,4} = \sigma_3 \pm j\omega_3 = -2.1024 \pm j\,2.6573
    and the zero:  s_5 = \sigma_5 = -10.0000

c) MFED by Shea ( m_1 = 0.133, m_2 = 0.467 and n = 0.667 ):

    s^4 + 7.5188\,s^3 + 32.2198\,s^2 + 72.4872\,s + 72.4872 = 0
    The poles are: s_{1,2} = \sigma_1 \pm j\omega_1 = -2.1360 \pm j\,1.0925
                   s_{3,4} = \sigma_3 \pm j\omega_3 = -1.6234 \pm j\,3.1556
    and the zero:  s_5 = \sigma_5 = -7.5188

d) MFED by Braude ( m_1 = 0.122, m_2 = 0.511, and n = 0.656 ):

    s^4 + 8.1967\,s^3 + 32.4996\,s^2 + 71.0816\,s + 71.0816 = 0
    The poles are: s_{1,2} = \sigma_1 \pm j\omega_1 = -2.6032 \pm j\,0.9618
                   s_{3,4} = \sigma_3 \pm j\omega_3 = -1.4951 \pm j\,2.6446
    and the zero:  s_5 = \sigma_5 = -8.1967
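Each characteristic polynomial above can be factored numerically; for instance, for the MFED case b) (our check, not part of the original text):

```python
import numpy as np

# quartic denominator coefficients of case b), MFED by PS/EM
poles = np.roots([1, 10.0, 44.9933, 104.9863, 104.9863])
print(np.round(poles, 4))   # the pairs -2.8976 +/- j0.8649 and -2.1024 +/- j2.6573
```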

2.9.1 Frequency Response

We shall use the normalized formula which we developed for the 4-pole L+T circuit (Eq. 2.6.10), to which we must add the influence of the zero. The magnitude of the transfer function (to shorten the expression, we omit the index 'n' here) is:

    |F(\omega)| = \frac{(\sigma_1^2 + \omega_1^2)(\sigma_3^2 + \omega_3^2)\,\dfrac{1}{|\sigma_5|}\sqrt{\sigma_5^2 + x^2}}
                       {\sqrt{\left[ \sigma_1^2 + (x - \omega_1)^2 \right]\left[ \sigma_1^2 + (x + \omega_1)^2 \right]\left[ \sigma_3^2 + (x - \omega_3)^2 \right]\left[ \sigma_3^2 + (x + \omega_3)^2 \right]}}     (2.9.37)

where again x = \omega/\omega_h. In Fig. 2.9.2 we have plotted the responses resulting from this equation by inserting the values of the poles and the zero for our MFA and MFED.
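Eq. 2.9.37 is straightforward to evaluate; the sketch below (our addition, not part of the original text) checks that the MFA magnitude starts at unity and rolls off at high frequency:

```python
import numpy as np

s1, s3 = -2.4142 + 1j*1.0000, -1.0000 + 1j*2.4142    # MFA pole pair representatives
sigma5 = -6.8284                                      # MFA zero
x = np.logspace(-1, 1, 201)                           # omega/omega_h

num = abs(s1)**2 * abs(s3)**2 / abs(sigma5) * np.sqrt(sigma5**2 + x**2)
den = np.sqrt((s1.real**2 + (x - s1.imag)**2) * (s1.real**2 + (x + s1.imag)**2)
            * (s3.real**2 + (x - s3.imag)**2) * (s3.real**2 + (x + s3.imag)**2))
F = num/den
print(round(F[0], 3))    # close to 1 at low frequency
```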


Fig. 2.9.2: The shunt–series peaking circuit frequency response: a) MFA (m_1 = 0.146, m_2 = 0.604, n = 0.586); b) MFED (m_1 = 0.100, m_2 = 0.463, n = 0.710), with L_1 = m_1 R^2(C + C_i), L_2 = m_2 R^2(C + C_i), n = C/(C + C_i) and \omega_h = 1/[R(C + C_i)]. The non-peaking case (L_1 = L_2 = 0) is the reference. Note that the MFA is not exactly maximally flat, owing to the system zero.

2.9.2 Phase Response

As before, we apply Eq. 2.2.30 for each pole and (negative) for the zero:

    \varphi(\omega) = \arctan\frac{\omega/\omega_h - \omega_1}{\sigma_1}
                    + \arctan\frac{\omega/\omega_h + \omega_1}{\sigma_1}
                    + \arctan\frac{\omega/\omega_h - \omega_3}{\sigma_3}
                    + \arctan\frac{\omega/\omega_h + \omega_3}{\sigma_3}
                    - \arctan\frac{\omega/\omega_h}{\sigma_5}     (2.9.38)

By inserting the values for the poles and the zero from the equations above, we obtain the responses shown in Fig. 2.9.3.

2.9.3 Envelope Delay

By Eq. 2.2.34, for responses a) and b) we obtain:

    \tau_e \omega_h = \frac{\sigma_1}{\sigma_1^2 + (\omega/\omega_h - \omega_1)^2}
                    + \frac{\sigma_1}{\sigma_1^2 + (\omega/\omega_h + \omega_1)^2}
                    + \frac{\sigma_3}{\sigma_3^2 + (\omega/\omega_h - \omega_3)^2}
                    + \frac{\sigma_3}{\sigma_3^2 + (\omega/\omega_h + \omega_3)^2}
                    - \frac{\sigma_5}{\sigma_5^2 + (\omega/\omega_h)^2}     (2.9.39)


By inserting the values for the poles and the zero from the equations above, we
obtain the responses shown in Fig. 2.9.4. Again, as in pure shunt peaking, we have different
low frequency delays for each type of poles, owing to the different normalization.

Fig. 2.9.3: The shunt–series peaking circuit phase response: a) MFA (m_1 = 0.146, m_2 = 0.604, n = 0.586); b) MFED (m_1 = 0.100, m_2 = 0.463, n = 0.710), with the non-peaking case (L_1 = L_2 = 0) as the reference. The phase approaches -270° at high frequencies (four poles minus one zero).

Fig. 2.9.4: The shunt–series peaking circuit envelope delay: a) MFA; b) MFED (same parameters as in Fig. 2.9.3), with the non-peaking case (L_1 = L_2 = 0) as the reference.


2.9.4 Step Response

The normalized general expression for four poles and one zero in the frequency domain is:

    F(s) = \frac{s_1 s_2 s_3 s_4\,(s - s_5)}{-s_5\,(s - s_1)(s - s_2)(s - s_3)(s - s_4)}     (2.9.40)

To get the step response in the s domain, we multiply F(s) by the unit step operator 1/s:

    G(s) = \frac{s_1 s_2 s_3 s_4\,(s - s_5)}{-s\,s_5\,(s - s_1)(s - s_2)(s - s_3)(s - s_4)}     (2.9.41)

The step response in the time domain is obtained by taking the \mathcal{L}^{-1} transform:

    g(t) = \mathcal{L}^{-1}\{G(s)\} = \sum \operatorname{res} \frac{s_1 s_2 s_3 s_4\,(s - s_5)\,e^{st}}{-s\,s_5\,(s - s_1)(s - s_2)(s - s_3)(s - s_4)}     (2.9.42)

This formula requires even more effort than was spent for the L+T network. We shall skip the lengthy procedure (which is presented in Appendix 2.3) and give only the solution, which for all the listed poles and zeros is:

    g(t) = 1 + \frac{G_1}{\sigma_5 \omega_1}\,e^{\sigma_1 t}\left[ M \sin(\omega_1 t/T) + \omega_1 N \cos(\omega_1 t/T) \right]
             + \frac{G_3}{\sigma_5 \omega_3}\,e^{\sigma_3 t}\left[ P \sin(\omega_3 t/T) + \omega_3 Q \cos(\omega_3 t/T) \right]     (2.9.43)

where:

    M = (\sigma_1 - \sigma_5)\left[ \sigma_1 A - \omega_1^2 B \right] - \omega_1^2 (A + \sigma_1 B)
    N = \left[ \sigma_1 A - \omega_1^2 B \right] + (\sigma_1 - \sigma_5)(A + \sigma_1 B)
    P = (\sigma_3 - \sigma_5)\left[ \sigma_3 C - \omega_3^2 B \right] - \omega_3^2 (C + \sigma_3 B)
    Q = \left[ \sigma_3 C - \omega_3^2 B \right] + (\sigma_3 - \sigma_5)(C + \sigma_3 B)     (2.9.44)

whilst A, B, C, G_1 and G_3 are the same as for the L+T network (Eq. 2.6.17).
The plots in Fig. 2.9.5 and Fig. 2.9.6 were calculated and drawn by using these
formulae.
Let us now compare the MFED response with those obtained by Braude and Shea.
The step response relation is the same for all three systems (Eq. 2.9.43, 2.9.44), but the pole
and zero values are different. As it appears from the comparison of the characteristic
polynomial coefficients and even more so from the comparison of the poles and zeros, the
three systems were optimized in different ways. This is evident from Fig. 2.9.7.
Although at first glance all three step responses look very similar (Fig. 2.9.6), a
closer look reveals that the Braude case has an excessive overshoot. The Shea case has the
steepest slope (largest bandwidth), but this is paid for by an extra overshoot and ringing.
The Bessel system has the lowest transient slope; however, it has the minimal overshoot
and it is the first to settle to < 0.1% of the final amplitude value (in about #Þ( >ÎX ).


Fig. 2.9.5: The shunt–series peaking circuit step response versus t/[R(C + C_i)]: a) MFA (m_1 = 0.146, m_2 = 0.604, n = 0.586); b) MFED (m_1 = 0.100, m_2 = 0.463, n = 0.710), with the non-peaking case (L_1 = L_2 = 0) as the reference.

Fig. 2.9.6: The MFED shunt–series peaking circuit step response versus t/[R(C + C_i)]: a) by Shea (m_1 = 0.133, m_2 = 0.467, n = 0.667); b) by Braude (m_1 = 0.122, m_2 = 0.511, n = 0.656); c) a true Bessel system (m_1 = 0.100, m_2 = 0.464, n = 0.710). The ×10 vertical scale expansion shows the top 10% of the response. The overshoot in the Braude case is excessive, whilst the Shea version has prolonged ringing. Although slowest, the Bessel system is the first to settle to < 0.1% of the final value.


The pole layout in Fig. 2.9.7 confirms the statements above. In the Braude case the
two poles with the smaller imaginary part are too far from the imaginary axis to
compensate the peaking of the two poles closer, so the overshoot is inevitable. The Shea
case has the widest pole spread and consequently the largest bandwidth, but the two poles
with the lower imaginary part are too close to the imaginary axis (this is needed in order to
level out the peaks and dips in the frequency response). As a consequence, whilst the
overshoot is just acceptable, there is some long term ringing, impairing the system’s
settling time. The Bessel system pole layout follows the theoretical requirement. In spite of
the presence of the zero (located far from the poles, the farthest of all three systems), the
system performance is optimal.

Fig. 2.9.7: The MFED shunt–series peaking circuit pole loci of the three different systems. Poles: Shea -2.1360 ± j1.0925 and -1.6234 ± j3.1556; Braude -2.6032 ± j0.9618 and -1.4951 ± j2.6446; Bessel -2.8976 ± j0.8649 and -2.1024 ± j2.6573. Zeros (s_5): Bessel -10.0, Braude -8.2, Shea -7.5. The zero of each system is too far from the poles to have much influence. It is interesting how a similar step response can be obtained using three different optimization strategies. Strictly speaking, only the Bessel system is optimal.

Let us conclude this section with Table 2.9.1, in which we have collected all the
design parameters, in addition to the bandwidth and rise time improvements and the
overshoots for the cases discussed.

Table 2.9.1: Shunt–series peaking circuit parameters.

    response type   author    m_1      m_2      n       η_b    η_r    δ [%]
    a) MFA          PS/EM    0.1464   0.6036   0.5858   2.61   2.72   12.23
    b) MFED         PS/EM    0.1000   0.4627   0.7101   2.18   2.21   0.90
    c) MFED         Shea     0.133    0.467    0.667    2.44   2.39   1.86
    d) MFED         Braude   0.122    0.511    0.656    2.50   2.36   4.45


This completes our discussion of inductive peaking circuits.

We have deliberately omitted the analysis of a 3-pole, 1-zero shunt peaking circuit combined with a two-pole series peaking circuit. This network, introduced by R.L. Dietzold, was thoroughly analyzed in 1948 in the book Vacuum Tube Amplifiers [Ref. 2.2] (the reader who is interested in following the analysis there should consider that several printing mistakes crept into some of the formulae). In those days that circuit represented the ultimate in inductive peaking circuits. Today we achieve a much better bandwidth and rise time improvement with the L+T circuit, discussed in Sec. 2.6, which is easier to realize in practice and also requires substantially less mathematical work.
With shunt–series peaking it is sometimes not possible to achieve the required ratio of stray capacitances n given in Table 2.9.1; but by adding an appropriate damping resistor across the series peaking coil it is possible to adapt the shunt–series peaking circuit to an awkward ratio n as well. This is well described in [Ref. 2.2]. However, the bandwidth and rise time improvement of such circuits may be either similar to that of a three-pole shunt peaking circuit, or even worse, so we will not discuss them.
To summarize: in view of the advanced T-coil circuits, the shunt–series peaking circuit may be considered obsolete. This is why we have not discussed it as extensively as all the other inductive peaking circuits. On the other hand, by omitting the shunt–series peaking circuit entirely, the discussion of the inductive peaking circuits would not be complete.


2.10 Comparison of MFA Frequency Responses and of MFED Step Responses

In an actual process of amplifier design the choice of circuits used in different amplifying stages is not just a matter of a designer's personal taste or a simple collection of best performing circuits. Rather, it is a process of carefully balancing the advantages and disadvantages both at the system level and at each particular stage.
To help the designer in making a decision, now that we have analyzed the
frequency responses and step responses of the most important types of inductive peaking
circuits, we compare their performance in the following two plots.
We have drawn all the MFA frequency responses in Fig. 2.10.1 and all the MFED
step responses in Fig. 2.10.2.
On the basis of both figures we conclude that T-coil circuits surpass all the
other types of inductive peaking circuits.
In addition we have collected all the data for the circuit parameters corresponding
to both figures in the table in Appendix 2.4. The table contains the circuit schematic
diagram, the relations between the component values, the normalized pole and zero values,
the formulae for the frequency responses and the step responses, as well as the bandwidth
and the rise time enhancement and the step response overshoot.


Fig. 2.10.1: MFA frequency responses of all the circuit configurations discussed: a) no peaking (single-pole); b) series, 2-pole; c) series, 3-pole; d) shunt, 2-pole, 1-zero; e) shunt, 3-pole, 1-zero; f) shunt–series, 4-pole, 1-zero; g) T-coil, 2-pole; h) T-coil, 3-pole; i) L+T-coil, 4-pole. By far the 4-pole T-coil response i) has the largest bandwidth.

Fig. 2.10.2: MFED step responses of all the circuit configurations discussed (same labeling as in Fig. 2.10.1). Again, the 4-pole T-coil step response i) has the steepest slope, but the 3-pole T-coil response h) is close.


2.11 The Construction of T-coils

Most of the 'know how' concerning the construction of T-coils is a classified matter of different firms, mostly Tektronix, Inc., so we shall discuss only some basic facts about how to make T-coils.
Fig. 2.11.1, made originally by Carl Battjes (although with different basic circuit parameters as the reference), shows the performance sensitivity to each component's tolerance of an L+T circuit designed nominally for the MFED response.

Fig. 2.11.1: Four-pole L+T peaking circuit step response sensitivity to component tolerances: the panels show the effect of ±25% changes of C_b, ±10% changes of C, L and k, and ±40% changes of L_i and C_i on the step response (nominal values: m = 0.65, k = 0.57, C/C_i = 3.46, C_b/C = 0.068, with T = R(C + C_i), L = R^2 C and L_i = m R^2 C_i). Such graphs were drawn originally by Carl Battjes for a class lecture at Tektronix, but with another set of parameters as the reference. The responses presented here were obtained using the MicroCAP-5 circuit analysis program [Ref. 2.36].


These figures prove that the inductance L, the coupling factor k, and the loading capacitance C must be kept within close tolerances in order to achieve the desired performance, whilst the tolerances of the bridging capacitance C_b of the T-coil, the input coil L_i, and the input capacitance C_i are less critical. Therefore, the construction of a properly calculated T-coil is not a simple matter. In some respects it resembles a digital AND function: only if all the parameters are set correctly will the result be an efficient peaking circuit. There is not much room for compromise here.
In the serial production of wideband amplifiers there are always some tolerances of stray capacitances, so the T-coils must be made adjustable. Usually the coils are wound on a simple polystyrene cylindrical coil form, with a threaded ferrite core inside. By adjusting the core the required inductance can be set. However, the coupling factor k depends only on the coil length to diameter ratio (l/D) [Ref. 2.33] and it is independent of whether the coil has a ferrite core inside or not. The relation between the coupling factor k and the ratio l/D is shown in the diagram in Fig. 2.11.2, which is valid for center tapped cylindrical coils.

Fig. 2.11.2: T-coil coupling factor k as a function of the coil's length to diameter (l/D) ratio, for a center tapped cylindrical coil of N = 2n turns (k is plotted from 0.1 to 1.0 over l/D from 0.01 to 10; k decreases as l/D increases).

The coil inductance L depends on the number of turns, the length to diameter ratio set by the coil form on which the coil is wound, and on the ferrite core if any is used; both the coil form and the core can be obtained from different manufacturers, together with the formulae for calculating the required number of turns. However, these formulae are often given as some sort of 'cookery book recipes', with the key parameters usually in numerical form for a particular product. As such, they are satisfactory for building general purpose coils, but they do not offer the understanding needed to perform the optimization procedure within a set of possible solutions.
The reader is therefore forced to look for more theoretical explanations in standard
physics and electronics text books.


In those text books the following relation can be found:

    L = \frac{\mu_0 N^2 A}{l}     (2.11.1)

but this is valid only for a single layer coreless coil with a homogeneous magnetic field (such as a tightly wound toroid or a long coil). The parameters represent:

    L     inductance in henrys (after Joseph Henry, 1791-1878), [1 H = 1 Vs/A]; an
          inductance of 1 H produces a self-induced voltage of 1 V when the current
          flowing through it changes at a rate of 1 A/s;
    \mu_0 the free space magnetic permeability, \mu_0 = 4\pi \cdot 10^{-7} Vs/Am;
    N     the total number of turns;
    A     the area encircled by one wire turn, measured from the wire central path; for
          a cylindrical coil, A = \pi r^2 = \pi D^2/4, where r is the radius and D is
          the diameter, both in meters [m];
    l     the total coil length in meters [m]; if the turns are wound adjacent to one
          another with a wire of diameter d, then l = N d.

The main problem with Eq. 2.11.1 is the term ‘homogeneous’; this implies a
uniform magnetic field, entirely contained within the coil, with no appreciable leakage
outwards. Toroidal coils are not easy to build and can not be made adjustable, so in
practice cylindrical coils are widely used. For a cylindrical coil the magnetic field is of the
same form as that of a stick magnet: the field lines close outside the coil and at both ends
the field is non-homogeneous. Because of this, the inductance is reduced by a form factor
ζ, which is a function of the ratio D/l ( Fig. 2.11.3 ).
An important note for T-coil production: the form factor, and consequently the
inductance, increase with D and decrease with l, in contrast to the coupling factor k. This
additionally restricts our choice of D and l.
Also, if the coil is going to be adjustable, the relative permeability of the core
material, μr, must be taken into account; however, only a part of the magnetic field will be
contained within the core, so we introduce the average permeability, μ̄r, reflecting that only
a part of the turns will encircle the core. The relative permeability of air is 1 and that of
the ferromagnetic core material can be anything up to several hundred. However, since the
field path in air will be much longer than inside the core, the average permeability will be
rather low. Note also that the core material is ‘slow’, i.e., its permeability has an upper
frequency limit, often lower than our bandwidth requirement.
Finally, if the bridging capacitance Cb of the T-coil network has to be precisely
known, we must take into account the coil’s self capacitance, Cs, which appears in parallel
with the coil, with a value equivalent to a series connection of the capacitances between
adjacent turns. Owing to Cs the inductance will appear lower when measured, so Cs should
also be measured and the actual inductance value calculated from the two measurements. If
the turns are tightly wound, the relative permittivity εr of the wire isolation lacquer must be
considered. Its value is several times larger than that of air. The lacquer thickness also
influences Cs. If Cs is too large it can easily be reduced by increasing the distance
between the turns by a small amount, δ, but this will also cause additional field leakage and
reduce the inductance slightly. To compensate, the number of turns can be increased;


because the inductance increases with N², it will outweigh the slight decrease resulting
from the larger length l.
Multi-layer coils are less suitable for use in wideband amplifiers, because of the
high capacitance between adjacent layers.
Fortunately, wideband amplification does not require large inductance values. Also,
since the inductances are always in series with relatively large resistive loads (almost never
less than 50 Ω), the wire resistance and the skin effect can usually be neglected.
With all these considerations the inductance becomes:

   L = ζ μr μ0 π D² l / [ 4 (d + δ)² ]      (2.11.2)

Fig. 2.11.3 shows the value of ζ as a function of the ratio D/l. The actual
function is found through elliptic integration of the magnetic field flux density, which is
too complex to be solved analytically here. But a fair approximation, fitting the
experimental data to better than 1%, can be obtained using the following relation:

   ζ = a / [ a + (D/l)^b ]      (2.11.3)

where:
   a = 2
   b = sin(π/3)
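To make the use of Eq. 2.11.2 and 2.11.3 concrete, here is a minimal Python sketch; the function name and the example dimensions are ours, chosen purely for illustration:

```python
import math

def coil_inductance(D, l, d, delta=0.0, mu_r=1.0):
    """Single-layer coil inductance, Eq. 2.11.2, with the form
    factor zeta of Eq. 2.11.3. All dimensions in meters, L in henrys."""
    mu_0 = 4e-7 * math.pi                  # free space permeability [Vs/Am]
    a = 2.0
    b = math.sin(math.pi / 3)
    zeta = a / (a + (D / l) ** b)          # form factor, Eq. 2.11.3
    return zeta * mu_r * mu_0 * math.pi * D**2 * l / (4 * (d + delta) ** 2)

# Example: D = 5 mm, l = 10 mm, wire d = 0.5 mm, no extra spacing, air
# core; this corresponds to N = l/(d + delta) = 20 turns.
L = coil_inductance(0.005, 0.010, 0.0005)   # roughly 0.77 uH
```

For these dimensions the result agrees within a few percent with the classical Wheeler single-layer formula, which is a useful sanity check of Eq. 2.11.3.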

[Figure: log–log plot of ζ versus D/l, with ζ decreasing from ≈1 at D/l = 0.01 toward
≈0.01 at D/l = 100. The inset repeats Eq. 2.11.2 and 2.11.3 and shows a coil of diameter D
and length l, with N = l/(d + δ) turns of wire diameter d, turn spacing 0 ≤ δ ≤ d, on a core
of permeability μr.]

Fig. 2.11.3: The ζ factor as a function of the coil diameter to length ratio, D/l. The
equation shown in the upper right corner fits experimental data to better than 1%.


Inductances are susceptible to external fields, mainly from the power supply, or
other nearby inductances. The influence of a nearby inductance can be minimized by a
perpendicular orientation of coil axes. Otherwise, the circuit should be appropriately
shielded, but the shield will act as a shorted single-turn inductance, lowering the effective
coil inductance if it is too close.
In modern miniaturized, bandwidth hungry circuits the coil dimensions become
critical, and one possible solution is to construct the coil in a planar spiral form on two
sides of a printed circuit board or even within an integrated circuit. This gives the
possibility of more tightly controlled parameter tolerances, but there is no free lunch: the
price to pay is in many weeks or even months of computer simulation before a satisfactory
solution is found by trial and error, since the exact mathematical relations are extremely
complicated (a good example of how this is done can be found in the excellent article by
J. Van Hese of Agilent Technologies [Ref. 2.37], where the finite element numerical analysis
method is used).
The following figures show a few examples of planar coils made directly on the
surface of IC chips, ceramic hybrids, or double-sided conventional PCBs.

Fig. 2.11.4: Examples of coil structures made directly on an IC chip (left)
and on a hybrid circuit (right).

Fig. 2.11.5: A possible compensation of the bonding inductance of an IC chip, mounted
on a hybrid circuit, by the negative inductance present at the T-coil center tap.


Fig. 2.11.6: A planar T-coil with a high coupling factor, realized on a conventional
double-sided PCB. Multi-turn spiral structures are also possible, but need at least a
three-layer board for making the inner to outer turn connections.


References:
[2.1] S. Butterworth, On the Theory of Filter Amplifiers,
Experimental Wireless & Wireless Engineer, Vol. 7, October, 1930, pp. 536–541.
[2.2] G.E. Valley & H. Wallman, Vacuum tube amplifiers,
MIT, Radiation Laboratory Series, Vol. 18, McGraw-Hill, New York, 1948.
[2.3] J. Bednařík & J. Daněk, Obrazove´ zesilovače pro televisi a měřicí techniku
(Video Amplifiers for Television and Measuring Techniques),
Statní nakladatelství technicke´ literatury, Prague, 1957.
[2.4] E.L. Ginzton, W.R. Hewlett, J.H. Jasberg, J.D. Noe, Distributed Amplification,
Proc. I.R.E., Vol. 36, August, 1948, pp. 956–969.
[2.5] P. Starič, An Analysis of the Tapped-Coil Peaking Circuit for Wideband/Pulse Amplifiers,
Elektrotehniški vestnik, 1982, pp. 66–79.
[2.6] P. Starič, Three- and Four-Pole Tapped Coil Circuits for Wideband/Pulse Amplifiers,
Elektrotehniški vestnik, 1983, pp. 129–137.
[2.7] P. Starič, Application of T-coil Interstage Coupling in Wideband/Pulse Amplifiers,
Elektrotehniški vestnik, 1990, pp. 143–152.
[2.8] D.L. Feucht, Handbook of Analog Circuit Design,
Academic Press, Inc. San Diego, 1990.
[2.9] J.L. Addis, Good Engineering and Fast Vertical Amplifiers, Part 4, section 14,
Analog Circuit Design, edited by J. Williams,
Butterworth-Heinemann, Boston, 1991.
[2.10] M.E. Van Valkenburg, Introduction to Modern Network Synthesis,
John Wiley, New York, 1960.
[2.11] G. Daryanani, Principles of Active Network Synthesis and Design,
John Wiley, New York, 1976.
[2.12] T.R. Cuthbert, Circuit Design using Personal Computers,
John Wiley, 1983.
[2.13] G.J.A. Byrd, Design of Continuous and Digital Electronic Systems,
McGraw-Hill (U K), London, 1980.
[2.14] P. Starič, Interpolacija med Butterworthovimi in Thomsonovimi poli
(Interpolation between Butterworth’s and Thomson’s poles),
Elektrotehniški vestnik, 1987, pp. 133–139.
[2.15] G.A. Korn & T.M. Korn, Mathematical Handbook for Scientists and Engineers,
McGraw-Hill, New York, 1961.
[2.16] F.A. Muller, High-Frequency Compensation of RC Amplifiers,
Proceedings of the I.R.E., August, 1954, pp. 1271–1276.
[2.17] C.R. Battjes, Technical Notes on Bridged T-coil Peaking,
(Internal Publication), Tektronix, Inc., Beaverton, Ore., 1969.
[2.18] C.R. Battjes, Who Wakes the Bugler?, Part 2, section 10,
The Art and Science of Analog Circuit Design, edited by J. Williams,
Butterworth-Heinemann, Boston, 1995.
[2.19] N.B. Schrock, A New Amplifier for Milli-Microsecond Pulses,
Hewlett-Packard Journal, Vol. 1, No. 1., September, 1949.
[2.20] W.H. Horton, J.H. Jasberg, J.D. Noe, Distributed Amplifiers: Practical Considerations and
Experimental Results, Proc. I.R.E. Vol. 38, 1950, pp. 748–753.
[2.21] R.I. Ross, Wang Algebra Speeds Network Computation of Constant Input Impedance Networks,
(Internal Publication), Tektronix, Inc., Beaverton, Ore. 1968.
[2.22] R.J. Duffin, An Analysis of the Wang Algebra of the Networks,
Trans. Amer. Math. Soc., 1959, pp. 114–131.


[2.23] S.P. Chan, Topology Cuts Circuit Drudgery,
Electronics, November 14, 1966, pp. 112–121.
[2.24] A.I. Zverev, Handbook of Filter Synthesis,
John Wiley, New York, 1967.
[2.25] G.B. Braude, K.V. Epaneshnikov, B.J. Klymushev, Calculation of a Combined Circuit for the
Correction of Television Amplifiers, (in Russian),
Radiotekhnika, T. 4, No. 6. Moscow, 1949, pp. 24–33.
[2.26] L.J. Giacoletto, Electronics Designer’s Handbook,
Second Edition, McGraw-Hill, New York, 1977.
[2.27] B. Orwiller, Vertical Amplifier Circuits,
Tektronix, Inc., Beaverton, Ore., 1969.
[2.28] D.E. Scott, An Introduction to Circuit Analysis, A System Approach,
McGraw-Hill, New York, 1987.
[2.29] J.L. Addis, Mutual Inductance & T-coils,
(Internal Publication), Tektronix, Inc., Beaverton, Ore. 1977.
[2.30] A.B. Williams, Electronic Filter Design Handbook,
McGraw-Hill, New York 1981.
[2.31] C.R. Battjes, Amplifier Risetime and Frequency Response,
Class Notes, Tektronix, Inc., Beaverton, Ore. 1969.
[2.32] W.E. Thomson, Networks with Maximally Flat Delay,
Wireless Engineer, Vol. 29, October, 1952, pp. 256–263.
[2.33] F.W. Grover, Inductance Calculation, (Reprint)
Instrument Society of America, Research Triangle Park, N.C. 27 709, 1973.
[2.34] Mathematica, Wolfram Research, Inc., 100 Trade Center Drive, Champaign, Illinois,
http://www.wolfram.com/
[2.35] Macsyma, Symbolics, Inc., 8 New England Executive Park, East Burlington, Massachusetts, 01803,
http://www.scientek.com/macsyma/main.htm
See also Maxima (free version, GNU Public Licence):
http://www.ma.utexas.edu/users/wfs/maxima.html
[2.36] MicroCAP–5, Spectrum Software, Inc.,
http://www.spectrum-soft.com/
[2.37] J. Van Hese, Accurate Modeling of Spiral Inductors on Silicon for Wireless RF-IC Designs,
http://www.techonline.com/ → Feature Articles → Feature Archive → November 20, 2001
See also: http://www.agilent.com/, and:
L. Knockaert, J. Sercu and D. Zutter, "Generalized Polygonal Basis Functions for the
Electromagnetic Simulation of Complex Geometrical Planar Structures," IMS-2001
[2.38] J.N. Little and C.B. Moller, The MathWorks, Inc.: MATLAB-V For Students
(with the Matlab program on a CD), Prentice-Hall, 1998,
http://www.mathworks.com/
[2.39] Derive, http://education.ti.com/product/software/derive/
[2.40] MathCAD, http://www.mathcad.com/

P.Starič, E.Margan Appendix 2.1

Appendix 2.1
General Solutions for 1st-, 2nd-, 3rd- and 4th-order polynomials

First-order polynomial:  a x + b = 0

Canonical form:  x + b/a = 0

Solution:  x = −b/a

Second-order polynomial:  a x² + b x + c = 0

Canonical form:  x² + (b/a) x + c/a = 0

Solutions:  x₁,₂ = [ −b ± √(b² − 4ac) ] / (2a)

Third-order polynomial, canonical form:

   x³ + a x² + b x + c = 0

Solutions:
By substituting:
   K = √(a² − 3b)
   M = 4a³c − a²b² − 18abc + 4b³ + 27c²
   N = 2a³ − 9ab + 27c

the real solution is:

   x₁ = −a/3 − (2K/3) sin[ (1/3) atan( jN / (3√(3M)) ) ]

and the two complex conjugate solutions are:

   x₂ = −a/3 + (K/3) sin[ (1/3) atan( jN / (3√(3M)) ) ] + (√3 K/3) cos[ (1/3) atan( jN / (3√(3M)) ) ]

   x₃ = −a/3 + (K/3) sin[ (1/3) atan( jN / (3√(3M)) ) ] − (√3 K/3) cos[ (1/3) atan( jN / (3√(3M)) ) ]

(the branch of the complex arctangent must be chosen so that x₁ comes out real)
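The three roots can be cross-checked numerically. The Python sketch below deliberately takes a different route, Cardano's algebraic method on the depressed cubic, so it serves as an independent check of the expressions above; the function name is ours:

```python
import cmath

def cubic_roots(a, b, c):
    """All three roots of x^3 + a*x^2 + b*x + c = 0 (Cardano's method)."""
    p = b - a * a / 3.0                     # depressed cubic: t^3 + p*t + q
    q = 2.0 * a**3 / 27.0 - a * b / 3.0 + c
    D = cmath.sqrt((q / 2.0)**2 + (p / 3.0)**3)
    u = (-q / 2.0 + D) ** (1.0 / 3.0)       # principal complex cube root
    if abs(u) < 1e-30:                      # fall back if this branch vanishes
        u = (-q / 2.0 - D) ** (1.0 / 3.0)
    if abs(u) < 1e-30:                      # p = q = 0: triple root at -a/3
        return [-a / 3.0] * 3
    w = complex(-0.5, 3**0.5 / 2.0)         # primitive cube root of unity
    roots = []
    for k in range(3):
        uk = u * w**k
        t = uk - p / (3.0 * uk)             # t = u + v, with v = -p/(3u)
        roots.append(t - a / 3.0)           # undo the x = t - a/3 shift
    return roots
```

For example, cubic_roots(4, 4, 3) returns the root set {−3, −0.5 ± j√3/2} of (x + 3)(x² + x + 1).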

Note: If preferred, here is a purely algebraic (non-trigonometric) result, obtained by
using the new symbolic calculus capability of Matlab (Version 5.3 for Students). The
command lines needed are simply:
syms x a b c % define x, a, b and c as symbols
r = solve( x^3 + a*x^2 + b*x + c ) ;


The real solution is:


r(1) =
1/6*
(36*a*b-108*c-8*a^3+12*
(12*b^3-3*b^2*a^2-54*b*c*a+81*c^2+12*c*a^3)^(1/2)
)^(1/3)-
6*(1/3*b-1/9*a^2)/
(36*a*b-108*c-8*a^3+12*
(12*b^3-3*b^2*a^2-54*b*c*a+81*c^2+12*c*a^3)^(1/2)
)^(1/3)-
1/3*a
The two complex-conjugate solutions are:
r(2) =
-1/12*
(36*a*b-108*c-8*a^3+12*
(12*b^3-3*b^2*a^2-54*b*c*a+81*c^2+12*c*a^3)^(1/2)
)^(1/3)+
3*(1/3*b-1/9*a^2)/
(36*a*b-108*c-8*a^3+12*
(12*b^3-3*b^2*a^2-54*b*c*a+81*c^2+12*c*a^3)^(1/2)
)^(1/3)-
1/3*a+1/2*i*3^(1/2)*
(1/6*
(36*a*b-108*c-8*a^3+12*
(12*b^3-3*b^2*a^2-54*b*c*a+81*c^2+12*c*a^3)^(1/2)
)^(1/3)+
6*(1/3*b-1/9*a^2)/
(36*a*b-108*c-8*a^3+12*
(12*b^3-3*b^2*a^2-54*b*c*a+81*c^2+12*c*a^3)^(1/2)
)^(1/3)
)
r(3) =
-1/12*
(36*a*b-108*c-8*a^3+12*
(12*b^3-3*b^2*a^2-54*b*c*a+81*c^2+12*c*a^3)^(1/2)
)^(1/3)+
3*(1/3*b-1/9*a^2)/
(36*a*b-108*c-8*a^3+12*
(12*b^3-3*b^2*a^2-54*b*c*a+81*c^2+12*c*a^3)^(1/2)
)^(1/3)-
1/3*a-1/2*i*3^(1/2)*
(1/6*
(36*a*b-108*c-8*a^3+12*
(12*b^3-3*b^2*a^2-54*b*c*a+81*c^2+12*c*a^3)^(1/2)
)^(1/3)+
6*(1/3*b-1/9*a^2)/
(36*a*b-108*c-8*a^3+12*
(12*b^3-3*b^2*a^2-54*b*c*a+81*c^2+12*c*a^3)^(1/2)
)^(1/3)
)


Fourth-order equation, canonical form:

   x⁴ + a x³ + b x² + c x + d = 0

Solutions:
The roots are identical to the roots of two lower order equations:

   x² + (x/2)(a + A) + [ y + (ay − c)/A ] = 0

where:
   A = ±√(8y + a² − 4b)

and y is any real root of the third-order equation:

   8y³ − 4by² + (2ac − 8d) y + d (4b − a²) − c² = 0

As was proven by the French mathematician Évariste Galois (1811–1832),
building on the earlier proof of Niels Henrik Abel, the solutions of general
polynomials of order 5 or higher can not be expressed analytically by radicals in
terms of the polynomial coefficients. In such cases, the
roots can be found by numerical computation methods (users of Matlab can try the
ROOTS routine, which calculates the roots from the polynomial coefficients by numerical
methods; see also the POLY routine, which finds the coefficients from the roots).
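For readers working in Python rather than Matlab, numpy offers the same pair of numerical routines; this is an illustrative sketch, not part of the original text:

```python
import numpy as np

# numpy.roots is the counterpart of Matlab's ROOTS: it takes the
# polynomial coefficients in descending powers of x and returns the
# roots, found numerically as eigenvalues of the companion matrix.
r = np.roots([1, 0, 0, 0, 0, -1])   # x^5 - 1 = 0: the five 5th roots of unity

# numpy.poly is the counterpart of Matlab's POLY: it rebuilds the
# polynomial coefficients from the roots (up to rounding errors).
p = np.poly(r)                      # close to [1, 0, 0, 0, 0, -1] again
```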

P.Starič, E.Margan Appendix 2.2

Appendix 2.2
Normalization of complex frequency response functions
or
Why do some expressions have strange signs?

Do not be afraid of mathematics!


It is probably the only rational product of the human mind!
(E. Margan)

A generalized expression for the non-normalized complex frequency response
function of an all pole system (no zeros) of order N can be written as a product of
first-order factors containing the poles sᵢ of the characteristic polynomial:

   F(s) = 1 / ∏ (s − sᵢ),   i = 1 … N      (A2.2.1)

At DC (s = 0) the system has a gain factor equal to the reciprocal of the
product of the negated poles:

   F(0) = 1 / ∏ (0 − sᵢ) = 1 / ∏ (−sᵢ)      (A2.2.2)

We would like to compare different systems on a fair basis. This is why we
introduce parametric normalization in our equations. We have already seen the
frequency being normalized to ω/ωh. This enabled us to compare the cut off
frequencies of different systems which have some components (or even just one)
equal, thus helping us to decide which circuit configuration is better, simpler, or
more economical to build.
Obviously the system’s gain is one such parameter, which would influence the
comparison in frequency if not taken into account. This situation is best resolved if we
also normalize the DC gain of every system to some predefined value, preferably
unity. Mathematically, we obtain a normalized expression by simply dividing
Eq. A2.2.1 by Eq. A2.2.2:

"
R R
# Ð=  =3 Ñ # Ð=3 Ñ
J Ð=Ñ 3 œ" 3 œ"
Jn Ð=Ñ œ œ œ (A2.2.3)
J Ð!Ñ " R
R
# Ð=  =3 Ñ
# Ð=3 Ñ 3 œ"
3 œ"


The numerator of the last term in Eq. A2.2.3 can be written so that the signs
are collected together in a separate product, defining the sign of the total:

   Fn(s) = (−1)^N ∏ sᵢ / ∏ (s − sᵢ)      (A2.2.4)

This means that all odd order functions must be multiplied by −1 in order
to have a correctly normalized expression. But please note that the sign defining
expression (−1)^N is not the consequence of all our poles lying in the left half of the
complex plane, as is sometimes wrongly explained in the literature!
In Eq. A2.2.4 the poles still retain their actual values, be they positive or negative.
The term (−1)^N is just the consequence of the mathematical operation (subtraction)
required by the function: s must acquire the exact value of sᵢ, sign included, if the
function is to have a pole at sᵢ:

   s − sᵢ = 0 at s = sᵢ   ⟹   F(sᵢ) = ±∞      (A2.2.5)

In some of the literature the sign is neglected, because we are all too often
interested only in the frequency response magnitude, which is the absolute value of F(s),
or |F(s)|. However, as amplifier designers we are interested mainly in the system’s
time domain performance. If we calculate it by the inverse Laplace transform we must
have the correct sign of the transfer function, and consequently the correct signs of the
residues at each pole.
In addition, it is important to note that a system with zeros must have the
product of zeros normalized in the same way (even if some of the systems with zeros
do not have a DC response, such as high pass and band pass systems). If our system
has poles pᵢ and zeros zₖ, the normalized transfer function is:

   Fn(s) = [ (−1)^N ∏ pᵢ / ∏ (s − pᵢ) ] · [ ∏ (s − zₖ) / ( (−1)^M ∏ zₖ ) ],
           i = 1 … N,  k = 1 … M      (A2.2.6)

From Eq. A2.2.6 the resulting sign factor is obviously:

   (−1)^N · (−1)^(−M) = (−1)^(N−M)      (A2.2.7)

which is, incidentally, also equal to (−1)^(N+M), but there is nothing mystical about
that, really.
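The normalization of Eq. A2.2.4 is easy to verify numerically. The following Python sketch uses the third-order Butterworth poles as an example; the pole choice and names are ours, purely for illustration:

```python
import numpy as np

# Third-order Butterworth poles (all in the left half plane):
poles = np.array([-1.0,
                  -0.5 + 1j * np.sqrt(3) / 2,
                  -0.5 - 1j * np.sqrt(3) / 2])
N = len(poles)

# Normalizing constant of Eq. A2.2.4: (-1)^N times the product of poles.
K = (-1)**N * np.prod(poles)

def Fn(s):
    """Normalized all-pole frequency response, Eq. A2.2.4."""
    return K / np.prod(s - poles)

# For this odd-order system the plain product of poles is -1; the
# (-1)^N factor restores a real, positive numerator K = +1, so that
# the DC gain Fn(0) is exactly 1.
```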

P. Starič, E. Margan

Wideband Amplifiers

Part 3:

Wideband Amplifier Stages With Semiconductor Devices

The only way to find the limits of what is possible


is by pushing towards the impossible!
Arthur C. Clarke
P. Starič, E. Margan Amplifier Stages with Semiconductor Devices

Back To Basics

This part deals with some elementary amplifier configurations


which can serve as building blocks of the multi-stage amplifiers
described in Parts 4 and 5, together with the inductive peaking
circuits described in Part 2.
Today two schools of thought prevail amongst amplifier
designers: the first one (to which most of the more experienced
generation belongs) says that cheap operational amplifiers can
never fulfil the conflicting requirements of good wideband design;
the other (mostly the fresh forces) says that the analog IC
production technology advances so fast that by the time needed to
design a good wideband amplifier, the new opamps on the market
will render it obsolete.
Both of them are right, of course!
An important point, however, is that very few amplifier
designers have a silicon chip manufacturing facility next door.
Those who have often discover that component size reduction
solves half of the problems, whilst packing the components close
together produces a nearly equal number of new problems.
Another important point is that computer simulation tells you
only a part of what will be going on in the actual circuit. Not
because there would be anything wrong with the computer, its
program or the circuit modeling method used, but because
designers, however experienced they are, can not take everything
into account right from the start; and to be able to complete the
simulation in the foreseeable future many things are left out
intentionally.
A third important point is that by being satisfied with the
performance offered by LEGO-tronics (playing with general
purpose building blocks, as we call it — and we really do not mean
anything bad by that!), one intentionally limits oneself to a
performance which, in most cases, is an order of magnitude below
what is achievable by the current ‘state of the art’ technology.
Not to speak of there being only a limited amount of experience to
be gained by playing just with the outside of those nice little black
boxes.
A wise electronics engineer always takes some time to build a
discrete model of the circuit (the most critical part, at least) in
order to evaluate the influence of strays and parasitics and find a
way of improving it. Even if the circuit will eventually be put on a
silicon chip those strays will be scaled down, but will not
disappear.
That is why we think that it is important to go back to basics.

P. Starič, E. Margan Wideband Amplifier Stages with Semiconductor Devices

Contents .................................................................................................................................... 3.3


List of Figures ........................................................................................................................... 3.4
List of Tables ............................................................................................................................ 3.5

Contents:
3.0 Introduction: A Farewell to Exact Calculations ............................................................................... 3.7
3.1 Common Emitter Amplifier ............................................................................................................... 3.9
3.1.1 Calculation of Voltage Amplification (Based on Fig. 3.1.1d) ........................................ 3.14
3.2 Transistor as an Impedance Converter ............................................................................................ 3.17
3.2.1 Common Base Transistor Small Signal HF Model ........................................................ 3.17
3.2.2 The Conversion of Impedances ...................................................................................... 3.20
3.2.3 Examples of Impedance Transformations ...................................................................... 3.21
3.2.4 Transformation of Combined Impedances ..................................................................... 3.26
3.3 Common Base Amplifier ................................................................................................................. 3.33
3.3.1 Input Impedance ............................................................................................................ 3.34
3.4 Cascode Amplifier ........................................................................................................................... 3.37
3.4.1 Basic Analysis ................................................................................................................ 3.37
3.4.2 Damping of the Emitter Circuit of Q2 ............................................. 3.38
3.4.3 Thermal Compensation of Transistor Q1 ....................................... 3.42
3.5 Emitter Peaking in a Cascode Amplifier ......................................................................................... 3.49
3.5.1 Basic Analysis ................................................................................................................ 3.49
3.5.2 Input Impedance Compensation ..................................................................................... 3.54
3.6 Transistor Interstage T-coil Peaking ............................................................................................... 3.57
3.6.1 Frequency Response ...................................................................................................... 3.61
3.6.2 Phase Response .............................................................................................................. 3.62
3.6.3 Envelope Delay .............................................................................................................. 3.62
3.6.4 Step Response ................................................................................................................ 3.64
3.6.5 Consideration of the Transistor Input Resistance ........................................................... 3.65
3.6.6 Consideration of the Base Lead Stray Inductance .......................................................... 3.66
3.6.7 Consideration of the Collector to Base Spread Capacitance .......................................... 3.67
3.6.8 The ‘Folded’ Cascode .................................................................................................... 3.68
3.7 Differential Amplifiers .................................................................................................................... 3.69
3.7.1 Differential Cascode Amplifier ...................................................................................... 3.70
3.7.2 Current Source in the Emitter Circuit ............................................................................ 3.72
3.8 The fT Doubler ............................................................................................... 3.75
3.9. JFET Source Follower .................................................................................................................... 3.79
3.9.1 Frequency Response Magnitude ................................................................................... 3.82
3.9.2 Phase Response .............................................................................................................. 3.84
3.9.3 Envelope Delay .............................................................................................................. 3.84
3.9.4 Step Response ................................................................................................................ 3.85
3.9.5 Input Impedance ............................................................................................................ 3.89

Résumé of Part 3 ................................................................................................................................... 3.95

References ............................................................................................................................................. 3.97

Appendix 3.1: Thermal analysis ........................................................................................................... A3.1


List of Figures:

Fig. 3.1.1: The common emitter amplifier .............................................................................................. 3.9


Fig. 3.2.1: The common base amplifier ................................................................................................ 3.17
Fig. 3.2.2: Transistor gain as a function of frequency .......................................................................... 3.19
Fig. 3.2.3: Emitter to base impedance conversion ................................................................................ 3.20
Fig. 3.2.4: Base to emitter impedance conversion ................................................................................ 3.21
Fig. 3.2.5: Capacitive load reflects into the base with negative components ....................................... 3.23
Fig. 3.2.6: Inductive source reflects into the emitter with negative components .................................. 3.24
Fig. 3.2.7: Base to emitter RC network transformation ....................................... 3.26
Fig. 3.2.8: Emitter to base RC network transformation ....................................... 3.28
Fig. 3.2.9: Re Ce network transformation ............................................................. 3.30
Fig. 3.2.10: Common collector amplifier ............................................................................................. 3.30
Fig. 3.3.1: Common base amplifier ...................................................................................................... 3.33
Fig. 3.3.2: Common base amplifier input impedance ........................................................................... 3.35
Fig. 3.4.1: Cascode amplifier circuit schematic ................................................................................... 3.37
Fig. 3.4.2: Cascode amplifier small signal model ................................................................................. 3.37
Fig. 3.4.3: Cascode amplifier parasitic resonance damping ................................................................. 3.39
Fig. 3.4.4: Q2 emitter input impedance with damping ......................................... 3.40
Fig. 3.4.5: The step response pre-shoot due to Cμ1 cross-talk .............................. 3.40
Fig. 3.4.6: Q2 compensation method with a base RC network ............................ 3.41
Fig. 3.4.7: Q2 emitter impedance compensation .................................................. 3.42
Fig. 3.4.8: Thermally distorted step response ....................................................................................... 3.43
Fig. 3.4.9: The optimum bias point ...................................................................................................... 3.44
Fig. 3.4.10: The thermal compensation network .................................................................................. 3.46
Fig. 3.4.11: The dynamic collector resistance and the Early voltage ................................................... 3.46
Fig. 3.4.12: The compensated cascode amplifier ................................................................................. 3.47
Fig. 3.5.1: Emitter peaking in cascode amplifiers ................................................................................ 3.50
Fig. 3.5.2: Emitter peaking pole pattern ............................................................................................... 3.53
Fig. 3.5.3: Negative input impedance compensation ............................................................................ 3.56
Fig. 3.6.1: Cascode amplifier with a T-coil interstage coupling ........................................................... 3.57
Fig. 3.6.2: T-coil loaded with the simplified input impedance ............................................................. 3.58
Fig. 3.6.3: T-coil coupling frequency response ................................................... 3.62
Fig. 3.6.4: T-coil coupling phase response ........................................................................................... 3.63
Fig. 3.6.5: T-coil coupling envelope delay response ............................................................................ 3.63
Fig. 3.6.6: T-coil coupling step response ............................................................................................. 3.64
Fig. 3.6.7: Cascode input impedance including the base spread resistance .......................................... 3.65
Fig. 3.6.8: T-coil compensation for the base spread resistance ............................................................ 3.65
Fig. 3.6.9: T-coil including the base lead inductance ........................................................................... 3.66
Fig. 3.6.10: A more accurate model of rb Cμ ........................................................ 3.67
Fig. 3.6.11: The ‘folded’ cascode ......................................................................................................... 3.68
Fig. 3.7.1: The differential amplifier .................................................................................................... 3.69
Fig. 3.7.2: The differential cascode amplifier ...................................................................................... 3.71
Fig. 3.7.3: Basic current mirror ............................................................................................................ 3.72
Fig. 3.7.4: Improved current generator ................................................................................................. 3.74
Fig. 3.8.1: Current driven cascode amplifier ........................................................................................ 3.75
Fig. 3.8.2: Basic fT doubler circuit ...................................................................................................... 3.76
Fig. 3.9.1: JFET source follower .......................................................................................................... 3.79
Fig. 3.9.2: JFET capacitive loading and the input impedance ............................................................... 3.81
Fig. 3.9.3: JFET frequency response .................................................................................................... 3.83
Fig. 3.9.4: JFET phase response ........................................................................................................... 3.84
Fig. 3.9.5: JFET envelope delay ........................................................................................................... 3.85
Fig. 3.9.6: JFET step response ............................................................................................................. 3.86
Fig. 3.9.7: JFET frequency response including the signal source resistance ........................................ 3.88

-3.4-
P. Starič, E. Margan Wideband Amplifier Stages with Semiconductor Devices

Fig. 3.9.8: JFET step response including the signal source resistance ................................................. 3.88
Fig. 3.9.9: JFET source follower input impedance model .................................................................... 3.90
Fig. 3.9.10: Normalized negative input conductance ........................................................................... 3.91
Fig. 3.9.11: JFET negative input impedance compensation ................................................................. 3.92
Fig. 3.9.12: JFET input impedance Nyquist diagrams ......................................................................... 3.93
Fig. 3.9.13: Alternative JFET negative input impedance compensation .............................................. 3.94

List of Tables:

Table 3.2.1: The Table of impedance conversions ............................................................................... 3.25


Table 3.4.1: Poles of T-coil interstage coupling for different loadings ................................................ 3.61


3.0 Introduction: A Farewell to Exact Calculations

The inductive peaking circuits discussed in Part 2 must be applied to actual
amplifier stages in order to be of any use. Today amplifiers are made almost
exclusively using semiconductor devices (bipolar junction transistors — BJTs and
field effect transistors — FETs). The number of books describing semiconductor
amplifiers is enormous [e.g., Ref. 3.7, 3.12, 3.13, 3.14, 3.15], whilst books discussing
wideband amplifiers in necessary depth are quite rare.
Wideband amplifiers with BJTs and FETs are found in a great diversity of
electronic products, in measuring instruments and oscilloscopes in particular. They
require a different design approach from the ‘classical’ low frequency amplifier circuits,
where the inter-electrode and stray capacitances are less important. In order to improve
the wideband performance many special circuits were invented [e.g., Ref. 3.1, Ch.8 and
3.34, Ch.7] and to discuss thoroughly all of them in a book like this would not be
possible for several reasons. Here we will analyze only some basic circuits (the
common emitter, common base and common collector) and some of the most used
wideband circuits (e.g., the differential, cascode, common source, and the fT doubler
configurations).
In the first section we shall analyze the common emitter amplifier. Since the
base pole ωh (set by the base spread resistance rb and the total input capacitance
Cπ + CM) represents the most prevalent bandwidth limit in the transconductance
equation, we shall discuss how to reduce the input capacitance. In this type of amplifier,
as well as in all other types that we intend to discuss, the analysis becomes extremely
complicated if all the parameters are considered. Since our intention is to acquire a clear
picture of the basic facts, an analysis with all the minute details would needlessly fog
the view. The market is still waiting (probably in vain) for a transistor with ± 1 %
tolerances in electrical parameters. Also, it is very difficult to specify the stray
inductances and capacitances of the wiring with comparable precision. Therefore we
shall simplify our expressions and neglect all the parameters which have either little
influence or which must be solved numerically for a specific case. After the basic
picture is acquired, the reader who wants to make a more precise analysis, can use a
computer with a suitable program such as SPICE [Ref. 3.28], PSPICE [Ref. 3.29], or
MicroCAP [Ref. 3.30], to name a few. Be warned, however, that the result will be as
good as is allowed by the models of semiconductor devices and, of course, it will be
influenced by the user’s ability to correctly model the stray components dependent on
the layout and which are not explicitly shown either in the initial circuit schematic
diagram or included in any semiconductor device model.
The nature of the HF impedance in the emitter circuit changes drastically if we
look at it from the base, and vice versa. Since it is useful to know the possible
transformations from base to emitter circuit and back, we shall discuss all the
transformations of resistive, capacitive, and inductive impedances. Next we shall
analyze the common base circuit. The cascode circuit, which is effectively a
combination of a common emitter and a common base configuration, is often used in
wideband amplifiers, so it deserves a thorough analysis. The same is valid for the
differential cascode amplifier. We shall also discuss the emitter peaking in a cascode
amplifier. The invention of the fT doubler made possible an almost twofold bandwidth


extension within a single stage and we shall discuss it next. This will be followed by an
analysis of JFET source follower, commonly used as the input stage of oscilloscopes
and other measuring instruments. Such a stage can have the input impedance negative
at high frequencies when the JFET source is loaded by a capacitor (which is always the
case) and we shall show how to compensate this very undesirable property.
In Part 2 we have analyzed the T-coil peaking circuit with a purely capacitive tap
to ground impedance. However, if the T-coil circuit is used for a transistor interstage
coupling the tap to ground impedance ceases to be purely capacitive. This fact requires
a new analysis, with which we shall deal in the last section.
Probably, the reader will ask how accurately we need to model the active
devices in our circuits to obtain a satisfying approximation. In 1954, Ebers and Moll
[Ref. 3.9] had already described a relatively simple non-linear model, which, over the
years, was improved by the same authors, and lastly in 1970 by Gummel and Poon [Ref.
3.10]. Modern computer programs for circuit simulation allow the user to trade
simulation speed for accuracy by selecting models with different levels of complexity
(e.g., an older version of MicroCAP [Ref. 3.30] had 3 EM and 2 GP models for the
BJT, the most complex GP2 using a total of 51 parameters). For simpler circuit analysis
we shall use the linearized high frequency EM2 model, explained in detail in Sec. 3.1.
All these models look so simple and perform so well, that it seems as if anyone
could have created them. Nothing could be farther from the truth. It takes lots of
classical physics (Boltzmann’s transport theory, Gauss’ theorem, Poisson’s equation,
the charge current mean density integral calculus, the complicated p–n junction
boundary conditions, Maxwell’s equations, ... ), as well as quantum physics (Fermi
levels, Schrödinger’s equation, the Pauli principle, charge generation, injection,
recombination and photon–electron and phonon–electron scattering phenomena, to
name just a few important topics) to be judiciously applied in order to find well defined
special cases and clever approximations that would, within limits, provide a model
simple enough for everyday use. Of course, if pushed too far the model fails, and there
is no other way to the solution but to rework the physics neglected. In our analysis we
shall try not to go that far.
It cannot be overstressed that in our analysis we are dealing with models of
semiconductor devices! As Philip Darrington, former Wireless World editor, put it in
one of his editorials, “the map is not the territory”, just as the schematic diagram is not
the circuit. As in any branch of science, we build a (theoretical) model, analyze it, and
then compare with the real thing; if it fits, we have had a good nose there, or perhaps we
have simply been lucky; if it does not fit we go back to the drawing board.
In the macroscopic world, from which all our experience arises, most models are
quite simple, since the size ratio of objects, which can still be perceived directly with
our senses, to the atomic size, where some odd phenomena begin to show up, is some 6
orders of magnitude; thus the world appears to us to be smooth and continuous.
However, in the world of ever shrinking semiconductor devices we are getting ever
closer to the quantum phenomena (e.g., the dynamic range of our amplifiers is limited
by thermal noise, which is ultimately a quantum effect). But long before we approach
the quantum level we should stay alert: even if we forget that the oscilloscope probe
loads our circuit with a shunt capacitance of some 10–20 pF and a serial inductance of
about 70–150 nH of the ground lead, the circuit will not forget, and sometimes not
forgive, either!


3.1 Common Emitter Amplifier

Fig. 3.1.1a shows a common emitter amplifier (the name reflects that the emitter
is the common reference for both input and output signals), whilst Fig. 3.1.1b represents
its small signal equivalent circuit (the EM2 model, see [Ref. 3.4 and 3.9]). If a signal
source amplitude is much smaller than the base bias voltage, resulting in an output
signal small enough compared to the supply voltage, we can assume that all the
equivalent circuit components are linear (not changing with the signal). However, when
the transistor has to handle large signals, the equivalent model becomes rather
complicated and the analysis is usually carried out by a suitable computer program.

Fig. 3.1.1: The common emitter amplifier: a) circuit schematic diagram – the current and voltage
vectors are drawn for the npn type of transistor; b) high frequency small signal equivalent circuit;
the model components are explained in the text; c) simplified equivalent
circuit; d) an oversimplified circuit where Ct = Cπ + A·Cµ = constant.

During the first steps of circuit design we can usually neglect the non-linear
effects and obtain a satisfactory performance description by using a first order
approximation of the transistor model, Fig. 3.1.1c. Some of the circuit parameters can
even be estimated by an oversimplified model of Fig. 3.1.1d. By assuming a certain


operating point OP, set by the DC bias conditions, we can explain the meaning of the
model components. On the basis of these explanations it will become clear why and
when we may neglect some of them, thus simplifying the model and its analysis.

gm    transistor mutual conductance in siemens [S = 1/Ω]:  gm = ic/vπ ≈ 1/re
      where ic is the instantaneous collector current;
      vπ is the voltage across rπ (see below);
      and re is the dynamic emitter resistance:  re = VT/Ie  [Ω] (ohm);
      Ie is the DC emitter current [A] (ampere);
      VT is the p-n junction ‘thermal’ voltage = kB·T/q  [V] (volt);
      where q is the electron charge = 1.602·10⁻¹⁹ [C] (coulomb = [As]);
      T is the absolute temperature [K] (kelvin);
      kB is the Boltzmann constant = 1.38·10⁻²³ [J/K] (joule/kelvin) ( = [VAs/K] ).

ro    equivalent collector–emitter output resistance, representing the variation of
      the collector–emitter potential due to collector signal current at a specified
      operating point (its value is nearly always much greater than the load):

      ro = vce/ic = dVce/dIc |OP = (VA + Vce)/Ic

      where VA is the Early voltage (see Fig. 3.4.10);
      in wideband amplifiers: ro ≫ RL.

rµ    collector to base resistance, representing the variation of the collector to base
      potential due to base signal current at some specified DC operating point
      condition OP(Vcc, Ib); in wideband amplifiers its value is always much
      larger than the source resistance or the base spread resistance:

      rµ = vcb/ib = dVcb/dIb |OP = (VA + Vce)/Ib ;   rµ ≫ Rs + rb.

rπ    base to emitter input resistance (forward biased BE junction), representing
      the variation of the base to emitter potential owed to the base signal current
      at a specified operating point:

      rπ = vπ/ib = dVbe/dIb |OP = β·VT/Ic = β/gm ≈ β·re.

rb    base spread resistance (resistance between the base terminal and the base–
      emitter junction); value range: 10 Ω < rb < 200 Ω.

rco   presumably constant collector resistance of internal bonding and external
      lead; approximate value < 1 Ω.

reo   presumably constant emitter resistance of internal bonding and external
      lead; approximate value < 1 Ω.

Cµ    collector–base (reverse biased) junction capacitance; usually the value is
      small (< 10 pF); however, in the common emitter circuit it is effectively
      amplified by the voltage gain (the Miller effect, see Eq. 3.1.9).

Cπ    base–emitter junction effective capacitance; depends on DC bias:

      Cπ = gm·τT − Cµ

      where τT is the characteristic time constant, derived from the ‘transition’
      frequency fT, the frequency at which β(fT) = 1 (see below):

      τT = 1/ωT = 1/(2π·fT).

Csub  collector to substrate capacitance; must be accounted for only in integrated
      circuits; in discrete devices it can be neglected.

β     the transistor current gain, the collector to base current ratio; the gain
      frequency dependence is modeled by the characteristic time constant τT:

      β = ic/ib = β0/(1 + β0·s·τT)     where:  β0 = Ic/Ib.

It is important to realize that some of those resistances are only ‘equivalent


resistances’, or ‘incremental resistances’, which can be represented by a tangent to the
voltage–current relation curvature at a particular bias point, and as such they are highly
non-linear for large signals. Also, the capacitances are only in part a consequence of the
actual p–n junction geometry; they are dominantly volumes in the semiconductor
material in which there are energy gaps capable of charge trapping, storing and
releasing. In turn the gap energy is voltage dependent, so the effective capacitances are
also voltage dependent and therefore also non-linear.
With bias conditions encountered in wideband amplifier applications, the
collector to base resistance rµ and the collector to emitter resistance ro are several
orders of magnitude larger than the source resistance Rs and the load resistance RL. In
order to simplify the analysis we shall neglect rµ and ro; that is why we have not drawn
them in Fig. 3.1.1c.
Likewise the DC power supply voltages are also not drawn, because their
sources (should!) represent a short circuit for the signal, so we have simply tied the
loading resistor and the bias voltages to the ground. Remember that it is the duty of the
circuit designer — that is you, yourself ! — to provide good power supply bypassing by
both adding appropriate capacitors and using wide and short, low resistance, low
inductance PCB traces.
The resistors rco and reo represent the external leads and internal wires, and
since their value is usually less than 1 Ω, they are also neglected.
In general we can assume that small signal transistors work at an internal
junction temperature of 300 K (27 °C or 80 °F, roughly the ‘room temperature’
increased by a few degrees owing to some small power dissipation caused by DC bias).


Output transistors work at higher temperatures, depending on the output signal


amplitude, the load and the power efficiency of the amplifying stage.
It is interesting to note that kB·T/q has the dimension of voltage (J = VAs,
C = As and K cancels) and it has been named the junction ‘thermal’ voltage; its value is
VT = kB·T/q ≈ 26 mV at T = 300 K. By assuming that the collector current Ic is
almost equal to the emitter current Ie (actually Ic = Ie·β/(1 + β)), we obtain a well known
relation for the effective emitter resistance:

     re = VT/Ie = 26 mV/Ie ≈ 26 mV/Ic                                    (3.1.1)
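For readers who prefer a numerical check, Eq. 3.1.1 can be evaluated in a few lines of Python; this is only a sketch, and the 1 mA bias current is an assumed example value:

```python
# Thermal voltage V_T = k_B*T/q and dynamic emitter resistance
# r_e = V_T/I_e (Eq. 3.1.1); the 1 mA bias current is an assumed example.
k_B = 1.38e-23    # Boltzmann constant [J/K]
q   = 1.602e-19   # electron charge [C]

def thermal_voltage(T=300.0):
    """Junction 'thermal' voltage k_B*T/q in volts (about 26 mV at 300 K)."""
    return k_B * T / q

def r_e(Ie, T=300.0):
    """Dynamic emitter resistance in ohms."""
    return thermal_voltage(T) / Ie

print("V_T = %.1f mV" % (thermal_voltage() * 1e3))   # about 26 mV
print("r_e = %.1f ohm at 1 mA" % r_e(1e-3))          # about 26 ohm
```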

The collector to base capacitance Cµ depends on the collector to base voltage
Vcb. In normal operating conditions (CB junction reverse biased), the corresponding
relation [Ref. 3.4, 3.20] is:

     Cµ(Vcb) = Cµ0 / [1 + Vcb/Vjc]^mc                                    (3.1.2)

This equation is valid under the assumption that there is no charge in the
collector to base depletion layer. The meaning of the symbols is:

     Cµ0 = junction capacitance [F] (farad) (when Vcb = 0 V)
     Vcb = collector to base voltage [V] (volt)
     Vjc = collector to base barrier potential ≈ 0.6–0.8 V for silicon transistors
     mc  = collector voltage potential gradient factor
           (0.5 for abrupt junctions and 0.33 for graded junctions)

Obviously, Cµ decreases inversely with collector voltage. For small signals
(amplifier input stage) Vcb does not change very much, so we can assume Cµ to be
constant, or, in other words, ‘linear’. However, in middle stage and output transistors,
Vcb changes considerably. As already mentioned, in such cases the computer aided
circuit simulation is mandatory (after the initial linearized approximation has been
found acceptable). Fortunately most transistor manufacturers provide the diagrams
showing the dependence of Cµ on Vcb. The reader can find a very good review for 25
of the most commonly used transistors in [Ref. 3.21, p. 556].
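Eq. 3.1.2 is easy to tabulate; the short Python sketch below uses assumed example values (4 pF zero-bias capacitance, 0.7 V barrier, graded junction), not data for any particular device:

```python
def C_mu(Vcb, C_mu0=4e-12, Vjc=0.7, mc=0.33):
    """Collector-base junction capacitance per Eq. 3.1.2.

    Vcb   -- reverse bias across the CB junction [V]
    C_mu0 -- junction capacitance at Vcb = 0 [F] (assumed 4 pF here)
    Vjc   -- barrier potential [V]
    mc    -- grading factor (0.33 for a graded junction)
    """
    return C_mu0 / (1.0 + Vcb / Vjc) ** mc

# The capacitance falls off as the reverse bias increases:
for v in (0.0, 2.0, 10.0):
    print("Vcb = %4.1f V  ->  C_mu = %.2f pF" % (v, C_mu(v) * 1e12))
```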
The input base–emitter capacitance Cπ strongly depends on the emitter current,
or, respectively, on the transconductance gm. Since we can not directly access the internal
base junction to measure Cπ and Cµ separately, we calculate Cπ from the total
equivalent input capacitance Ct (see Fig. 3.1.1d), from which we first subtract Cµ:

     Cπ = Ct − Cµ = gm·τT − Cµ = 1/(2π·fT·re) − Cµ                       (3.1.3)

where fT is the frequency at which the (frequency dependent) current amplification
factor β = ic/ib is reduced to 1. Because Cµ is usually small compared to Ct we can
simplify Eq. 3.1.3 to obtain:

     Cπ ≈ 1/(2π·fT·re)                                                   (3.1.4)

The product re·Cπ is called the transistor time constant τT = 1/ωT, where
ωT = 2π·fT. The value s1 = −ωT = −1/(re·Cπ) represents the dominant pole of the
amplifier and thus it is the main bandwidth limiting factor. In our further discussions we
shall find the way to drastically reduce the influence of Cπ, at the expense of the
amplification factor.
The next problem is to calculate the input impedance. Here we must consider
the Miller effect [Ref. 3.7, 3.12] owed to capacitance Cµ (in practice, there is also a CB
leads stray capacitance, parallel to the junction capacitance, that has to be taken into
account). Therefore we first calculate the input admittance looking right into the
internal rb–Cµ junction in Fig. 3.1.1c. The current iµ flowing through Cµ is:

     iµ = (vπ − vo)·s·Cµ                                                 (3.1.5)

where vπ is the voltage across rπ. The output voltage is¹:

     vo = −ic·RL = −gm·vπ·RL                                             (3.1.6)

By inserting Eq. 3.1.6 into Eq. 3.1.5, we obtain:

     iµ = (1 + gm·RL)·s·Cµ·vπ                                            (3.1.7)

and from this the junction input admittance:

     iµ/vπ = (1 + gm·RL)·s·Cµ                                            (3.1.8)

From this equation it follows that the junction input impedance, owed to
capacitance Cµ, is a capacitance with the value:

     CM = (1 + gm·RL)·Cµ = (1 − Av)·Cµ                                   (3.1.9)

where −gm·RL = Av = vo/vπ, the voltage gain.
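The size of the Miller penalty is quickly estimated numerically; in the Python sketch below the bias (Ic = 1 mA, so gm ≈ 1/26 Ω), load, and junction capacitance are assumed example values:

```python
def miller_capacitance(gm, RL, C_mu):
    """Effective input capacitance contributed by C_mu (Eq. 3.1.9)."""
    return (1.0 + gm * RL) * C_mu

# Example (assumed values): Ic = 1 mA -> gm ~ 1/26 ohm, RL = 1 kOhm, C_mu = 2 pF
gm, RL, C_mu = 1.0 / 26.0, 1000.0, 2e-12
C_M = miller_capacitance(gm, RL, C_mu)
print("gain gm*RL = %.1f, C_M = %.1f pF" % (gm * RL, C_M * 1e12))
```

Even a small 2 pF junction capacitance is multiplied by the stage gain into tens of picofarads at the base, which is why lowering the voltage gain (or changing the configuration) pays off so handsomely in bandwidth.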


The capacitance GM is called the Miller capacitance, after the Miller effect —
bandwidth reduction with increasing voltage gain (the Miller effect is probably named
after John Milton Miller, [Ref. 3.36]).

¹ Note the negative sign in Eq. 3.1.6: actually, the output voltage is vo = Vcc − ic·RL. However, since we
have agreed to replace the supply voltage with a short circuit, we are left with the negative part only.


When the voltage gain is large the effect of Cµ (and, respectively, CM) becomes
prevalent. However, there are ways of reducing the effect of Av·Cµ (by lowering the
voltage gain or by using other circuit configurations); we discuss it in later sections.
The complete input impedance that the signal source would see at the base is:

     Zb = rb + 1/(1/rπ + s·Cπ + s·CM) = rb + rπ/[1 + s·(Cπ + CM)·rπ]     (3.1.10)

3.1.1 Calculation of voltage amplification (based on Fig. 3.1.1d)

On the basis of the analysis made so far, we come to the conclusion that the two
capacitances Cπ and Cµ (effectively CM) are connected in parallel in the base circuit.
We can simply add them together and consider their sum to be Ct. This has been drawn
in Fig. 3.1.1d. This equivalent circuit is appropriate for the calculation of both input
impedance and the transimpedance. But since we have removed any connection
between the output and input (where Cµ is effective), this circuit is not suitable for the
calculation of output impedance. Therefore when calculating the voltage gain we must
also consider the pole s2 ≈ −1/(RL·Cµ) on the collector side, according to Fig. 3.1.1c (in
general, some collector to ground stray capacitance must be added to Cµ, but for the
time being, we shall write only Cµ).

According to Fig. 3.1.1d, thus neglecting the pole s2, but including the source
impedance Rs, we have:

     (vi − vπ)/(Rs + rb) = vπ·(1/rπ + s·Ct)                              (3.1.11)

From this we can express vπ as:

     vπ = vi · rπ / [rπ + Rs + rb + s·Ct·rπ·(Rs + rb)]                   (3.1.12)

The output voltage is:

     vo = −gm·vπ·RL = −gm·RL·vi · rπ / [rπ + Rs + rb + s·Ct·rπ·(Rs + rb)]    (3.1.13)

and the voltage amplification is:

     Av = vo/vi = −gm·RL · rπ / [rπ + Rs + rb + s·Ct·rπ·(Rs + rb)]       (3.1.14)

We would like to separate the frequency dependent part of Av from the frequency
independent part, in a normalized form, as:

     Av(s) = A0 · (−s1)/(s − s1)                                         (3.1.15)

where A0 is the DC gain and s1 is the system’s pole.


To achieve this separation we first divide both the numerator and the
denominator of Eq. 3.1.14 by Ct·rπ·(Rs + rb):

     Av = −gm·RL · [rπ / (Ct·rπ·(Rs + rb))] / [s + (rπ + Rs + rb)/(Ct·rπ·(Rs + rb))]    (3.1.16)

To make the two ratios equal we must multiply the numerator by (rπ + Rs + rb)/rπ
and then multiply the whole by the reciprocal:

     Av = −gm·RL · [rπ/(rπ + Rs + rb)] · [(rπ + Rs + rb)/(Ct·rπ·(Rs + rb))] /
          [s + (rπ + Rs + rb)/(Ct·rπ·(Rs + rb))]                         (3.1.17)

We rearrange this to obtain:

     Av = −[gm·RL·rπ/(rπ + Rs + rb)] · [(rπ + Rs + rb)/(Ct·rπ·(Rs + rb))] /
          [s + (rπ + Rs + rb)/(Ct·rπ·(Rs + rb))]                         (3.1.18)

and by comparing this with Eq. 3.1.15 the DC gain is:

     A0 = −gm·RL·rπ/(rπ + Rs + rb)                                       (3.1.19)

and the pole s1 is:

     s1 = −(Rs + rb + rπ)/[(Rs + rb)·rπ·Ct]                              (3.1.20)

Since s1 is a simple real pole it is equal to the system’s upper half power frequency:

     ωh = |s1| = (Rs + rb + rπ) / [(Rs + rb)·rπ·Ct]

              = (Rs + rb + rπ) / {(Rs + rb)·rπ·[Cπ + (1 + gm·RL)·Cµ]}    (3.1.21)

and it can be seen that it is inversely proportional to the parallel combination of
Rs + rb and rπ, as well as to Cπ, Cµ, gm, and RL.

If Rs + rb ≪ rπ and if RL is very small the approximate value of the pole is:

     |s1| ≈ 1/(rπ·Cπ) = (1/β0)·(gm/Cπ) = ωT/β0                           (3.1.22)

where β0 = Ic/Ib, the DC current amplification factor.

Before more sophisticated circuits were invented, the common emitter amplifier
was used extensively (with many amplifier designers having a hard time and probably
cursing both Cπ and Cµ). In 1956 G. Bruun [Ref. 3.22] thoroughly analyzed this type of
amplifier with the added shunt–series inductive peaking circuit. In view of modern
wideband amplifier circuits, this reference is only of historical value today.
Nevertheless, the common emitter stage represents a good starting point for the
discussion of more efficient wideband amplifier circuits.


3.2 Transistor as an Impedance Converter

In the previous section we have realized that the amplification factor is
frequency dependent, decreasing with frequency above some upper frequency limit
(asymptotically at −20 dB per decade, just like a first-order low pass system). This can help
us to derive different impedance transformations from the emitter to the base circuit and
back [Ref. 3.1, 3.2]. Knowing the possible transformations is extremely useful in
wideband amplifier design. We shall show how the nature of the impedance changes
with these transformations. A capacitive impedance may become inductive, and positive
impedances may occasionally become negative!

3.2.1 Common base small signal transistor model

As we explained in Sec. 3.1, if the voltage gain is not too high the base–emitter
capacitance Cπ is the dominant cause of the frequency response rolling off at high
frequencies. By considering this we can make a simplified small signal high frequency
transistor model, as shown in Fig. 3.2.1, for the common base configuration, where ic,
ie and ib are the collector-, emitter-, and base-current respectively. For this figure the
DC current amplification factor is:

     α0 = Ic/Ie                                                          (3.2.1)

A more correct expression for the mutual conductance is:

     gm = α0/re = β0/[(1 + β0)·re]                                       (3.2.2)

where β0 is the common emitter DC current amplification factor. If β0 ≫ 1 then
α0 ≈ 1, so the collector current Ic is almost equal to the emitter current Ie and
gm ≈ 1/re. This simplification is often used in practice.

Fig. 3.2.1: The common base amplifier: a) circuit schematic diagram;


b) high frequency small signal equivalent circuit.


For the moment let us assume that the base resistance rb = 0 and consider the
low frequency relations. The input resistance is:

     rπ = vπ/ib                                                          (3.2.3)

where vπ is the base to emitter voltage. Since the emitter current is:

     ie = ib + ic = ib + β0·ib = ib·(1 + β0)                             (3.2.4)

then the base current is:

     ib = ie/(1 + β0)                                                    (3.2.5)

and consequently:

     rπ = vπ·(1 + β0)/ie = re·(1 + β0) ≈ β0·re                           (3.2.6)

The last simplification is valid if β0 ≫ 1. To obtain the input impedance at high
frequencies the parallel connection of Cπ must be taken into account:

     Zb = (1 + β0)·re / [1 + (1 + β0)·s·Cπ·re]                           (3.2.7)

The base current is:

     ib = vπ/Zb = vπ·[1 + (1 + β0)·s·Cπ·re] / [(1 + β0)·re]             (3.2.8)

Furthermore it is:

     vπ = ib·(1 + β0)·re / [1 + (1 + β0)·s·Cπ·re]                        (3.2.9)

The collector current is:

     ic = gm·vπ = [β0/(1 + β0)]·(1/re)·vπ = (α0/re)·vπ                   (3.2.10)

If we put Eq. 3.2.9 into Eq. 3.2.10 we obtain:

     ic = ib · [β0/(1 + β0)]·(1/re) · (1 + β0)·re / [1 + s·(1 + β0)·re·Cπ]

        = ib · 1 / [1/β0 + s·((β0 + 1)/β0)·re·Cπ]

        ≈ ib · 1 / (1/β0 + s·τT) = ib·β(s)                               (3.2.11)

In the very last expression we assumed that β0 ≫ 1 and τT = re·Cπ = 1/ωT,
where ωT = 2π·fT is the angular frequency at which the current amplification factor β
decreases to unity. The parameter τT (and consequently ωT) depends on the internal
configuration and structure of the transistor. Fig. 3.2.2 shows the frequency dependence
of β and the equivalent current generator.


Fig. 3.2.2: a) The transistor current gain β as a function of frequency, modeled by
Eq. 3.2.11; b) the equivalent HF current generator.

In order to correlate Fig. 3.2.2 with Eq. 3.2.11 we rewrite it as:

     ic/ib ≈ β0 · (ωT/β0)/(s + ωT/β0) = β0 · (−s1)/(s − s1)              (3.2.12)

where s1 is the pole at −ωT/β0. This relation will become useful later, when we shall
apply one of the peaking circuits (from Part 2) to the amplifier. At very high
frequencies, or if β0 ≫ 1, the term s·τT prevails. In this case, from Eq. 3.2.11:

     ic/ib = β(s) ≈ 1/(s·τT) = 1/(jω·re·Cπ)                              (3.2.13)

Obviously β is decreasing with frequency. By definition, at ω = ωT the current ratio
ic/ib = 1; then the capacitance Cπ can be found as:

     Cπ ≈ 1/(ωT·re)                                                      (3.2.14)

This simplified relation represents the −20 dB per decade asymptote in Fig. 3.2.2a.
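The single-pole gain model of Eq. 3.2.12 is easy to verify numerically; in the Python sketch below β0 and fT are assumed example values:

```python
from math import pi

def beta_of_jw(w, beta0, wT):
    """Complex current gain per Eq. 3.2.12:
    beta = beta0*(wT/beta0) / (jw + wT/beta0)."""
    w1 = wT / beta0                       # pole magnitude, |s1| = wT/beta0
    return beta0 * w1 / (1j * w + w1)

beta0, fT = 100.0, 300e6                  # assumed values
wT = 2.0 * pi * fT
print(abs(beta_of_jw(0.0, beta0, wT)))    # DC: equals beta0
print(abs(beta_of_jw(wT, beta0, wT)))     # at wT the gain has fallen to about 1
```

The magnitude is β0 up to ωT/β0 and then rolls off, reaching unity at ωT, exactly the asymptotic behavior sketched in Fig. 3.2.2a.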


3.2.2 The conversion of impedances

We can use the result of Eq. 3.2.11 to transform the transistor internal (and the
added external) impedances from the emitter to the base circuitry, and vice versa.
Suppose we have the impedance Ze in the emitter circuit, as displayed in Fig. 3.2.3a,
and we are interested in the corresponding base impedance Zb:


Fig. 3.2.3: Emitter to base impedance conversion: a) schematic; b) equivalent circuit;
c) simplified for high β0; d) simplified for low frequencies.

We know that:

     Zb = β(s)·Ze + Ze = [β(s) + 1]·Ze                                   (3.2.15)

If we insert β(s) according to Eq. 3.2.11, we obtain:

     Zb = Ze/(1/β0 + s·τT) + Ze                                          (3.2.16)

The admittance of the first part of this equation is:

     Y = (1/β0 + s·τT)/Ze = 1/(β0·Ze) + s·τT/Ze                          (3.2.17)

and this represents a parallel connection of impedances β0·Ze and Ze/(s·τT). By adding
the series impedance Ze, as in Eq. 3.2.16, we obtain the equivalent circuit of Fig. 3.2.3b.
At medium frequencies and with a high value of β0 we can assume that 1/β0 ≪ s·τT, so
we can delete the impedance β0·Ze and simplify the circuit, as in Fig. 3.2.3c. On the
other hand, at low frequencies, where s·τT ≪ 1, we can neglect the Ze/(s·τT) component
and get a very basic equivalent circuit, displayed in Fig. 3.2.3d.
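The conversion of Eq. 3.2.15 is straightforward to evaluate for any Ze; the Python sketch below (with assumed β0 and τT) shows both the familiar low frequency result, Zb ≈ (β0 + 1)·Ze, and a foretaste of Sec. 3.2.3, where a capacitive emitter load produces a negative real part at the base:

```python
def base_impedance(Ze, w, beta0, tau_T):
    """Zb = (beta(jw) + 1)*Ze, Eq. 3.2.15, with beta per Eq. 3.2.11."""
    beta = 1.0 / (1.0 / beta0 + 1j * w * tau_T)
    return (beta + 1.0) * Ze

beta0, tau_T = 100.0, 1e-9          # assumed values (fT about 160 MHz)

# A 100 ohm emitter resistor at low frequency appears as (beta0 + 1)*100 ohm:
print(abs(base_impedance(100.0, 1e3, beta0, tau_T)))

# A capacitive emitter load Ze = 1/(jwC) at high frequency reflects into
# the base with a negative real part (see Sec. 3.2.3):
w, C = 1e9, 10e-12
Zb = base_impedance(1.0 / (1j * w * C), w, beta0, tau_T)
print(Zb.real)                      # negative
```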
Eq. 3.2.11 is equally useful when we want to transform the impedance from the
base into the emitter circuit as shown in Fig. 3.2.4a. In this case we have:

     Ze = Zb/[β(s) + 1]                                                  (3.2.18)


Again we calculate the admittance, which is:

     Ye = [β(s) + 1]/Zb = [β(s) + 1]·Yb = β(s)·Yb + Yb                   (3.2.19)

The first part of this admittance is:

     Y = β(s)·Yb = (1/Zb) · 1/(1/β0 + s·τT)                              (3.2.20)

and the impedance is:

     Z = 1/Y = Zb/β0 + s·τT·Zb                                           (3.2.21)

Thus the transformed impedance Ze is composed of three elements: the series
connection of Zb/β0 and s·τT·Zb, in parallel with the impedance Zb.
connection of ^b Î"! and = 7T ^b , in parallel with the impedance ^b .

a) b) c) d)

Fig. 3.2.4: Base to emitter impedance conversion: a) schematic; b) equivalent circuit;
c) simplified for high β0 or for f ≈ fT; d) simplified for low frequencies.

The equivalent emitter impedance is shown in Fig. 3.2.4b.

As in the previous example, for some specific conditions the circuit can be
simplified. At medium frequencies and a high β0, we can assume β0 ≫ 1/(s·τT) and
therefore neglect the impedance Zb/β0, as in Fig. 3.2.4c. At low frequencies, where
s·τT ≪ 1, the impedance Zb/(β0 + 1) prevails and we can neglect the parallel
impedance Zb, as in Fig. 3.2.4d.

3.2.3 Examples of impedance transformations

The most interesting examples of impedance transformations are the emitter to base transformation of a capacitive emitter impedance and the base to emitter transformation of an inductive base impedance.

In the first case we have Ze = 1/(s·C), where C is the emitter to ground capacitance.

To obtain the base impedance we apply Eq. 3.2.5:


    Zb = [β(s) + 1]·(1/(s·C)) = [1/(1/β0 + s·τT) + 1]·(1/(s·C))
       = [s·τT + (1 + 1/β0)] / (s²·τT·C + s·C/β0)    (3.2.22)

The inverse of Zb is the admittance:

    Yb = (s²·τT·C + s·C/β0) / [s·τT + (1 + 1/β0)]    (3.2.23)

Let us synthesize this expression by a simple continued fraction expansion [Ref. 3.27]:

    (s²·τT·C + s·C/β0) / [s·τT + (1 + 1/β0)] = s·C − s·C/[s·τT + (1 + 1/β0)]    (3.2.24)

The fraction on the right is a negative admittance with the corresponding impedance:

    Z'b = −[s·τT + (1 + 1/β0)]/(s·C) = −τT/C − (1 + 1/β0)/(s·C)    (3.2.25)

It is evident that this impedance is a series connection of a negative resistance:

    Rn = −τT/C = −(Cπ/C)·re    (3.2.26)

and a negative capacitance:

    Cn = −C/(1 + 1/β0) = −α0·C    (3.2.27)

By adding the positive parallel capacitance C, as required by Eq. 3.2.24, we obtain the equivalent circuit which is shown in Fig. 3.2.5. Since we deal with an active circuit (a transistor), it is quite normal to encounter negative impedances. The complete base admittance is then:

    Yb = s·C + 1/[−τT/C − 1/(s·α0·C)]    (3.2.28)


By rearranging this expression and substituting s = jω we can separate the real and imaginary parts, obtaining:

    Yb = ℜ{Yb} + j·ℑ{Yb} = Gb + j·ω·Cb

       = −(τT·C)/[τT² + 1/(ω²·α0²)] + j·ω·C·[τT² + (1 − α0)/(ω²·α0²)]/[τT² + 1/(ω²·α0²)]    (3.2.29)
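The separation in Eq. 3.2.29 can be cross-checked numerically against a direct complex evaluation of Eq. 3.2.28. A minimal Python sketch (the component values below are arbitrary illustrative choices, not data from the text):

```python
import math

# illustrative values (assumptions, not from the text)
beta0 = 80.0
alpha0 = beta0 / (beta0 + 1.0)
tauT = 265e-12            # transistor time constant [s]
C = 2.65e-12              # emitter-to-ground capacitance [F]
w = 2 * math.pi * 100e6   # test frequency: 100 MHz

s = 1j * w
# Eq. 3.2.28: complete base admittance
Yb = s * C + 1.0 / (-tauT / C - 1.0 / (s * alpha0 * C))

# Eq. 3.2.29: separated real and imaginary parts
den = tauT**2 + 1.0 / (w**2 * alpha0**2)
Gb = -tauT * C / den                                            # negative conductance
Cb = C * (tauT**2 + (1.0 - alpha0) / (w**2 * alpha0**2)) / den  # effective capacitance

print(Yb.real, Gb)        # both ~ -2.6e-4 S (the negative input conductance)
print(Yb.imag, w * Cb)    # equal to float precision
```

The negative real part is the conductance Gb responsible for the ringing hazard discussed in the text.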

Fig. 3.2.5: A capacitive emitter load is reflected into the
base (junction) with negative components.

The negative input (base) conductance Gb can cause ringing on steep signals, or even continuous oscillations if the signal source impedance has a pronounced inductive component. We shall discuss this effect and its compensation thoroughly later, when we analyze the emitter follower (i.e., common collector) and the JFET source follower amplifiers.

Now let us derive the emitter impedance Ze for the case in which the base impedance is inductive (s·L). Here we apply Eq. 3.2.18:
    Ze = s·L/(β(s) + 1) = s·L/[1/(1/β0 + s·τT) + 1]    (3.2.30)

       = s·L·(1/β0 + s·τT)/(1 + 1/β0 + s·τT) = (s²·L·τT + s·L/β0)/[s·τT + (1 + 1/β0)]    (3.2.31)

By continued fraction expansion we obtain:

    (s²·L·τT + s·L/β0)/[s·τT + (1 + 1/β0)] = s·L − s·L/[s·τT + (1 + 1/β0)]    (3.2.32)

The negative part of the result can be inverted to obtain the admittance:

    Y'e = −[s·τT + (1 + 1/β0)]/(s·L) = −τT/L − (1 + 1/β0)/(s·L)    (3.2.33)


This means we have two parallel impedances. The first one is a negative resistance:

    Rx = −L/τT    (3.2.34)

and the second one is a negative inductance:

    Lx = −L/(1 + 1/β0) = −α0·L    (3.2.35)

As required by Eq. 3.2.32, we must add the inductance L in series, thus arriving at the equivalent emitter impedance shown in Fig. 3.2.6:


Fig. 3.2.6: The inductive source is reflected into the emitter with negative components.

We have just analyzed an important aspect of a common base amplifier with an inductance (i.e., a long lead) between the base and ground. The negative resistance given by Eq. 3.2.34 may become the cause of ringing or oscillations if the driving circuit seen by the emitter has a capacitive character. We shall discuss this problem more thoroughly when we analyze the cascode circuit.

In a way similar to those used for deriving the previous two results, we can transform other impedance types from emitter to base, and vice versa. Table 3.2.1 displays the six possible variations, and the reader is encouraged to derive the remaining four, which we have not discussed.

Note that all three transformations for the common base circuit in the table apply to the base–emitter junction to ground only. In order to obtain the correct base terminal to ground impedance, the transistor base spread resistance rb must be added in series to the circuits shown in the table.


Impedance Z connected in the base, as seen from the emitter (common base); α0 = β0/(β0 + 1):

    Z  →  Z ∥ (Z/β0 in series with s·τT·Z)
    R  →  R ∥ (R/β0 in series with the inductance τT·R)
    L  →  L in series with (the negative resistance −L/τT ∥ the negative inductance −α0·L)
    C  →  C ∥ (the resistance τT/C in series with the capacitance β0·C)

Impedance Z connected in the emitter, as seen from the base (common emitter):

    Z  →  Z in series with (β0·Z ∥ Z/(s·τT))
    R  →  R in series with (β0·R ∥ the capacitance τT/R)
    L  →  L in series with (β0·L ∥ the resistance L/τT)
    C  →  C ∥ (the negative resistance −τT/C in series with the negative capacitance −α0·C)
Table 3.2.1: The Table of impedance conversions [Ref. 3.8].


3.2.4 Transformation of combined impedances

Table 3.2.1 can also help us in transforming impedances consisting of two or more components.

Example 1:

Suppose we have a parallel Rb Cb combination in the base circuit, as shown in Fig. 3.2.7a. What is the emitter impedance Ze if the common base transistor has the current amplification factor β0 and the time constant τT (= 1/ωT)?


Fig. 3.2.7: Base to emitter RC network transformation: a) schematic; b) equivalent circuit; c) if Rb·Cb = β0·τT the middle components form a resistance; d) final equivalent circuit.

From Table 3.2.1 we first transform the resistance Rb from base to emitter and obtain what is shown in the left half of Fig. 3.2.7b. Then we transform the capacitance Cb and obtain the right half of Fig. 3.2.7b. If we want the transformed network to have the smallest possible influence in the emitter circuit, we can apply the principle of the constant resistance network (L and C cancel each other when RC = L/R [Ref. 3.27]). To do so, let us focus on the two middle branches of the transformed network, where we select such values of Rb and Cb that:

    √[τT·Rb/(β0·Cb)] = Rb/β0    (3.2.36)

which is resolved as:

    Rb·Cb = β0·τT    (3.2.37)


With such values of Rb and Cb the two middle branches take the form of a resistor with the value Rb/β0, as shown in Fig. 3.2.7c, which allows us to simplify the complete circuit further, to that in Fig. 3.2.7d.

To acquire a feeling for practical values, let us make a numerical example. Our transistor has:

    β0 = 80,  fT = 600 MHz,  Rb = 47 Ω

where Rb is the external base resistance, as in Fig. 3.2.7.

What should be the value of the capacitance Cb, connected in parallel with Rb, which would fulfil the requirement expressed by Eq. 3.2.36?
We start by calculating the transistor time constant:

    τT = 1/ωT = 1/(2·π·fT) = 1/(2·π·600·10⁶) = 265 ps    (3.2.38)

Then we calculate the capacitance by using Eq. 3.2.37:

    Cb = τT·β0/Rb = 265·10⁻¹²·80/47 = 451 pF    (3.2.39)

The equivalent parallel resistance Rq, according to Fig. 3.2.7d, is:

    Rq = Rb/(1 + β0) = 47/(1 + 80) = 0.58 Ω    (3.2.40)

The time constant of the equivalent circuit, τq, is:

    τq = Rq·Cb = 0.58·451·10⁻¹² = 261.58 ps    (3.2.41)

whilst the base time constant is:

    τb = Rb·Cb = 47·451·10⁻¹² = 21.197 ns    (3.2.42)

and the ratio of the time constants is:

    τb/τq = β0 + 1    (3.2.43)

We shall consider these results in the design of the common base amplifier and
of the cascode amplifier.
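The arithmetic of Eqs. 3.2.38 to 3.2.43 is easy to verify with a few lines of Python, using the same transistor data; a minimal sketch:

```python
import math

beta0 = 80.0
fT = 600e6         # transition frequency [Hz]
Rb = 47.0          # external base resistance [ohm]

tauT = 1.0 / (2 * math.pi * fT)   # Eq. 3.2.38, ~265 ps
Cb = tauT * beta0 / Rb            # Eq. 3.2.39, ~451 pF
Rq = Rb / (1 + beta0)             # Eq. 3.2.40, ~0.58 ohm
tau_q = Rq * Cb                   # Eq. 3.2.41, ~262 ps
tau_b = Rb * Cb                   # Eq. 3.2.42, ~21.2 ns

print(tauT * 1e12, Cb * 1e12, Rq)
print(tau_b / tau_q)              # Eq. 3.2.43: exactly beta0 + 1 = 81
```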

Example 2:

By using the same principles as above, we shall take another example, which is also very important for wideband amplifier design. We shall transform a parallel combination Re Ce, as shown in Fig. 3.2.8a, from emitter to base. With the data from Table 3.2.1, we can draw the transformed base network separately for Re and Ce and then connect them in parallel. This is shown in Fig. 3.2.8b. Now we

focus only on the middle part of the circuit, which is drawn in Fig. 3.2.8c. If we select such values of Re and Ce that:

    Re·Ce = τT    (3.2.44)

and if we consider that α0 ≈ 1, then the admittance Y of the middle part of the circuit becomes zero, because in this case:

    Re = τT/Ce    (3.2.45)

and:

    τT/Re = α0·Ce ≈ Ce    (3.2.46)

so the positive branch (Re in series with the capacitance τT/Re) and the negative branch (−τT/Ce in series with −α0·Ce) have equal and opposite impedances, and the parallel connection of a positive and an equal negative admittance gives zero admittance:

    Y = 1/[Re + 1/(s·Ce)] − 1/[τT/Ce + 1/(s·α0·Ce)] = 0,  when Re·Ce = τT    (3.2.47)

A zero admittance is an infinite impedance. So in this case the only components that remain are the parallel connection of Ce and (β0 + 1)·Re, as in Fig. 3.2.8d.
Note that this transformation was carried out by taking the internal base junction as the viewing point. The actual input impedance at the external base terminal will be equal to the parallel RC combination of Fig. 3.2.8d, to which the series connected base spread resistance rb must be added.

Fig. 3.2.8: Transformation of the emitter RC network as seen from the base: a) schematic; b) equivalent circuit; c) if Re·Ce = τT and α0 = 1, the sum of admittances of the components in the middle is zero; d) final equivalent circuit.


The transformation in Fig. 3.2.8 allows us to trade the gain of a common emitter circuit for a reduced input capacitance. Instead of Cπ (large), as it would be if the emitter were grounded, the input capacitance is the same as the capacitance Ce (small) which we have inserted in the emitter circuit. Of course, we still have to add the base to collector capacitance Cμ, or the Miller capacitance CM. As we will see in Sec. 3.4, where we shall discuss the cascode amplifier, the gain is reduced in proportion to RL/Re. Since in a wideband amplifier stage we almost never exceed a voltage gain of ten, we can always apply the above transformation.

For a numerical example let us use the same transistor as before (β0 = 80, fT = 600 MHz). According to Eq. 3.2.38 the corresponding τT is 265 ps. Let us say that on the basis of the desired current gain of the common emitter stage we select an emitter resistor Re = 100 Ω. What is the value of the parallel emitter capacitance Ce which would give the input impedance according to Fig. 3.2.8d?

Since Re·Ce = τT = 265 ps, it follows that:

    Ce = τT/Re = 265·10⁻¹²/100 = 2.65 pF only!    (3.2.48)

If the stage has an emitter current Ie = 10 mA, then:

    re = 26 mV/10 mA = 2.6 Ω    (3.2.49)

Without the Re Ce network in the emitter, the base to emitter capacitance Cπ would define the bandwidth, and its estimated value would be (Eq. 3.1.4):

    Cπ = 1/(2·π·re·fT) = 1/(2·π·2.6·600·10⁶) = 102 pF    (3.2.50)

By introducing the Re Ce network in the emitter circuit, we have reduced the base to emitter capacitance, as seen by the input current, by 102/2.65 = 38.5 times! Of course, to our 2.65 pF we must add in parallel the capacitance CM = Cμ·(1 + Av), and the base spread resistance rb in series with this network, to get the complete input impedance. Since now the collector to base capacitance Cμ has become significant, especially because it is ‘magnified’ (1 + Av) times, we must look for some methods to reduce its effect as much as possible. We will discuss this in the following sections.

Owing to Re the input resistance is increased to:

    Ri ≅ (1 + β0)·Re = (1 + 80)·100 = 8100 Ω    (3.2.51)

Here, too, we have neglected the base resistance rb; it must be added to the value above to obtain a more accurate figure.
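The numbers in Eqs. 3.2.48 to 3.2.51 can be re-checked with a short Python sketch (same transistor data as in the text):

```python
import math

beta0 = 80.0
fT = 600e6
tauT = 1.0 / (2 * math.pi * fT)       # ~265 ps
Re = 100.0                            # chosen emitter resistor [ohm]
Ie = 10e-3                            # emitter bias current [A]

Ce = tauT / Re                        # Eq. 3.2.48, ~2.65 pF
re = 26e-3 / Ie                       # Eq. 3.2.49, 2.6 ohm
Cpi = 1.0 / (2 * math.pi * re * fT)   # Eq. 3.2.50, ~102 pF
Ri = (1 + beta0) * Re                 # Eq. 3.2.51, 8100 ohm

print(Ce * 1e12, Cpi * 1e12)
print(Cpi / Ce, Ri)                   # capacitance reduction ~38.5 times
```

Note that the reduction factor Cπ/Ce equals Re/re, since both capacitances scale as 1/(2π·fT·R).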
In Fig. 3.2.9a the transistor stage with Re Ce is shown again, and Fig. 3.2.9b shows its small signal equivalent input circuit.

In wideband amplifiers we usually make the emitter network with Ce ≤ 20 pF. In order to match Re·Ce = τT, the capacitor Ce is often made adjustable, because τT in commercially available transistors has rather large tolerances.


Fig. 3.2.9: Re Ce network transformation: a) schematic; b) equivalent circuit. The symbol bJ represents the internal base junction.

With an appropriate RL in the collector (not shown in Fig. 3.2.9) we could now calculate the (decreased) voltage amplification Av owed to the Re Ce network in the emitter circuit of the common emitter stage, and consider the decreased value of the Miller capacitance CM. Since we shall not use exactly such an amplifier configuration, we leave this as an exercise for the reader.

But for the application in the cascode amplifier, which we are going to discuss in Sec. 3.4, it is important to know the transconductance io/vi of the amplifier with the Re Ce network. The corresponding circuit is drawn again in Fig. 3.2.10a and Fig. 3.2.10b shows the equivalent small signal circuit.

Fig. 3.2.10: Common emitter amplifier with the Re Ce network: a) schematic; b) equivalent small signal circuit.

If we neglect the resistance rb and the capacitance Cμ, the following relation is valid for the remaining circuit:

    vi = ii·(zπ + Ze) + io·Ze    (3.2.52)

where:

    io = gm·vπ  and  vπ = ii·zπ

therefore:

    io = gm·ii·zπ    (3.2.53)


The impedances zπ and Ze are:

    zπ = rπ/(1 + s·Cπ·rπ)   and   Ze = Re/(1 + s·Ce·Re)    (3.2.54)

We can rewrite Eq. 3.2.52 as:

    vi = ii·(zπ + Ze + gm·zπ·Ze)    (3.2.55)

and the input current is:

    ii = vi/(zπ + Ze + gm·zπ·Ze)    (3.2.56)

The output current can be obtained by inserting Eq. 3.2.56 back into Eq. 3.2.53:

    io = gm·zπ·vi/(zπ + Ze + gm·zπ·Ze)    (3.2.57)

The transadmittance is:

    io/vi = gm·zπ/(zπ + Ze + gm·zπ·Ze)    (3.2.58)

We can divide the numerator and denominator by gm·zπ·Ze:

    io/vi = (1/Ze) · 1/[1/(gm·Ze) + 1/(gm·zπ) + 1]    (3.2.59)

Now we insert the expressions for zπ and Ze from Eq. 3.2.54 and replace gm by 1/re:

    io/vi = (1/Re) · (1 + s·Ce·Re)/[(re/Re)·(1 + s·Ce·Re) + (re/rπ)·(1 + s·Cπ·rπ) + 1]    (3.2.60)

and with a slight rearrangement we obtain:

    io/vi = (1/Re) · (1 + s·Ce·Re)/[re/Re + re/rπ + 1 + s·(Ce + Cπ)·re]    (3.2.61)

Because re/Re << 1 and re/rπ << 1 we can neglect them, so:

    io/vi = (1/Re) · (1 + s·Ce·Re)/[1 + s·(Ce + Cπ)·re]    (3.2.62)

This equation can be simplified if we ‘tune’ the emitter network so that:

    Ce·Re = (Ce + Cπ)·re  ⟹  Ce·(Re − re) = Cπ·re  ⟹  Ce·Re ≈ Cπ·re    (3.2.63)


The transadmittance can thus be expressed simply as:

    io/vi = 1/Re    (3.2.64)

Here we must not forget that at the beginning of our analysis we neglected the resistance rb, which, together with the transformed input capacitance Ce and the collector to base capacitance Cμ, makes a pole at:

    s1 = −1/[(Ce + Cμ)·rb]    (3.2.65)

The magnitude of s1 is equal to the upper half power frequency: |s1| = ωh. This pole makes the stage frequency dependent in spite of Eq. 3.2.64. We have also neglected the input resistance β0·Re but, since it is much larger than rb, we shall not consider its influence (with it, the bandwidth would increase slightly). By introducing the pole s1 back into Eq. 3.2.64, we obtain a more accurate expression for the transadmittance:

    io/vi = (1/Re) · [1/((Ce + Cμ)·rb)] / [s + 1/((Ce + Cμ)·rb)]    (3.2.66)
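For orientation, the pole of Eq. 3.2.66 can be evaluated numerically; the values of rb and Cμ below are assumed for illustration only (they are not given in the text):

```python
import math

Ce = 2.65e-12    # transformed input capacitance from the example above [F]
Cmu = 1.5e-12    # assumed collector-to-base capacitance [F]
rb = 50.0        # assumed base spread resistance [ohm]

w1 = 1.0 / ((Ce + Cmu) * rb)    # pole magnitude, Eq. 3.2.65
fh = w1 / (2 * math.pi)         # upper half-power frequency

print(fh / 1e6)                 # ~767 MHz with these assumed values
```

Even with the transformed (small) input capacitance, rb sets a finite bandwidth for the transconductance stage.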


3.3 Common-Base Amplifier

In the previous sections we have realized that the collector to base capacitance Cμ has a very undesirable effect on the stage bandwidth (the Miller effect). But in the common base configuration the base is effectively grounded and any current through Cμ is fed to ground, not affecting the base current (actually, owing to the physical construction of the CB junction, Cμ is spread across the whole base resistance rb, so part of that current would nevertheless reach the base, which we will analyze later).

The common base circuit is drawn in Fig. 3.3.1a and its small signal equivalent circuit in Fig. 3.3.1b. In wideband amplifiers the loading resistor RL is much smaller than the collector to base resistance rμ, so we shall neglect the latter. In order to make the expressions still simpler, at the beginning of our analysis we shall also not take into account the base resistance rb. However, we shall have to include rb later, when we discuss the input impedance.
Fig. 3.3.1: Common base amplifier: a) schematic; b) equivalent small signal model.

The main characteristics of the common base stage are a very low input
impedance, a very high output impedance, the current amplification factor !0 ¸ ", and,
with the correct value of the loading resistor VL , the possibility of achieving higher
bandwidths. The last property is owed to a near elimination of the Miller effect, since
G. is now grounded and does not affect the input Ev times. Thus G. is effectively in
parallel with the loading resistor VL and — because we can make the time constant
VL G. relatively small — the bandwidth of the stage may be correspondingly large.
Another very useful property of the common base stage is that the collector to
base breakdown voltage Zcbo is highest when the base is connected to ground and the
higher reverse voltage reduces G. further (Eq. 3.1.2). Owing to all the listed properties
the common base stage is used almost exclusively for wideband amplifier stages where
large output signals are expected.
Following the current directions in Fig. 3.3.1, the input emitter current is:
@1
3e œ  gm @1 (3.3.1)
D1
where:
<1
D1 œ (3.3.2)
"  = G1 <1


From these two equations it follows that:

    ie = −vπ·(gm + 1/rπ + s·Cπ)    (3.3.3)

The output collector current is:

    ic = −gm·vπ    (3.3.4)

If we put Eq. 3.3.3 into Eq. 3.3.4 we obtain:

    ic/ie = gm/(gm + 1/rπ + s·Cπ) ≈ gm/(gm + s·Cπ) = α0/(1 + s·Cπ·re)    (3.3.5)

since gm = 1/re and α0 = β0/(β0 + 1) ≈ 1. This equation has a pole at 1/(Cπ·re), which lies extremely high in frequency, because re is normally very low. Since the output pole 1/(RL·Cμ), which we shall consider next, becomes prevalent, we can neglect s·Cπ·re and assume that ic ≈ ie. The output voltage is:

    vo = −ic·ZL = −ic·RL/(1 + s·Cμ·RL)    (3.3.6)

With the simplifications considered above we can write the expression for the transimpedance, which is:

    vo/ie ≈ −RL/(1 + s·Cμ·RL)    (3.3.7)
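A quick numerical illustration of Eq. 3.3.7: the upper half power frequency is set by the output time constant RL·Cμ (the RL and Cμ values below are assumed for illustration, not data from the text):

```python
import math

RL = 330.0       # assumed loading resistor [ohm]
Cmu = 1.5e-12    # assumed collector-to-base capacitance [F]

fh = 1.0 / (2 * math.pi * RL * Cmu)    # upper half-power frequency

# transimpedance magnitude |vo/ie| falls to RL/sqrt(2) at fh
w = 2 * math.pi * fh
Zt = RL / abs(1 + 1j * w * Cmu * RL)

print(fh / 1e6, Zt)    # ~321 MHz, ~233 ohm
```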

Since the capacitance Cμ is in parallel with the loading resistor RL, we can improve the performance by applying any of the inductive peaking circuits from Part 2. In practice we never consider only the ‘pure’ capacitance Cμ, because some stray capacitances are always present and must be taken into account. Also, if the transistor Q1 is part of an integrated circuit, we must consider the collector to substrate capacitance CS. In such a case we use Eq. 3.1.2 with the exponent equal to 0.5.

3.3.1 Input Impedance

We shall calculate the input impedance of the common base stage by taking into account the base resistance rb, which, as we will realize very soon, represents a very nasty obstacle to achieving a wide bandwidth. We shall make our derivation on the basis of Table 3.2.1, from which we have drawn Fig. 3.3.2. This figure represents the equivalent small signal input circuit owed to rb.

The input admittance of the circuit in Fig. 3.3.2a is:

    Ye = 1/rb + 1/(rb/β0 + s·τT·rb)    (3.3.8)

Within the frequency range of interest the value rb/β0 in the second fraction is small and can be neglected. The simplified input admittance is thus:

    Ye ≈ 1/rb + 1/(s·τT·rb)    (3.3.9)


Fig. 3.3.2: Common base amplifier input impedance: a) rb transformed to Ze; b) within the frequency range of interest, rb/β0 can be neglected.

The real part represents a resistance:

    Re ≈ rb    (3.3.10)

and the imaginary part is an inductance:

    Le ≈ rb·τT    (3.3.11)

Normally, if the amplifier is built with discrete components, there is always some lead inductance Ls which must be added in series in order to obtain the total impedance.

In the next section, where we discuss the cascode amplifier, we shall find that the inductance Le, together with the capacitance Cμ of the common emitter current driving stage, forms a parallel resonant circuit which may cause ringing in the amplification of steep signals. In most cases the resistance Re is too large to damp the ringing effectively by itself, so additional circuitry will be required.

Eq. 3.3.10 and Eq. 3.3.11 disclose the fact that the annoying inductance Le and the resistance Re are directly proportional to the base spread resistance rb. When using this type of amplifier for the output stages, where the amplitudes are large (e.g., in oscilloscopes), we must use more powerful transistors, mostly in the TO5 case type. Because the internal transistor connections are then relatively long and the total active area is large, the corresponding rb is large as well. In order to decrease Re and Le we must select transistors which have a low rb. To decrease the base spread resistance as much as possible, and also to decrease the transition time (the time needed by the current carriers to cross the base width), RCA developed (already in the late 1960s) the so called overlay transistor. A typical overlay transistor is the 2N3866. Such transistors are essentially integrated circuits, composed of many identical small transistors connected in parallel.
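With the transistor of the earlier examples (fT = 600 MHz, rb = 47 Ω), Eqs. 3.3.10 and 3.3.11 give concrete values; the driving stage capacitance below is an assumed illustration, anticipating the resonance discussed in the next section:

```python
import math

fT = 600e6
rb = 47.0
tauT = 1.0 / (2 * math.pi * fT)

Re_in = rb                      # Eq. 3.3.10, input resistance ~47 ohm
Le = rb * tauT                  # Eq. 3.3.11, ~12.5 nH
Cmu1 = 1.5e-12                  # assumed driving-stage capacitance [F]

f_res = 1.0 / (2 * math.pi * math.sqrt(Le * Cmu1))
print(Le * 1e9, f_res / 1e9)    # ~12.5 nH, resonance ~1.16 GHz
```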


3.4 Cascode Amplifier

If the common emitter amplifier of Fig. 3.2.11a is used as a driver of the


common base amplifier of Fig. 3.3.1a, a cascode amplifier [Ref. 3.3, 3.7, 3.12] is
obtained. The name cascode springs from the times when electronic tubes were the
circuit building blocks. The anode of the first tube (the equivalent of the common
emitter stage) was loaded by the cathode input of the second tube with a grounded grid
(the equivalent of the common base stage). Both electronic tubes were therefore
connected in cascade, hence the compound word cascode.

Fig. 3.4.1: Cascode amplifier schematic

Fig. 3.4.2: Equivalent small signal model of a cascode amplifier. The components
belonging to the common emitter circuit bear the index ‘1’ and those of the common
base circuit bear the index ‘2’.

3.4.1 Basic Analysis

A transistor cascode amplifier is drawn in Fig. 3.4.1 and Fig. 3.4.2 shows its small signal equivalent circuit. All the components that belong to transistor Q1 bear the index ‘1’ and all that belong to transistor Q2 bear the index ‘2’.

For the emitter network of Q1 we select the values such that Re1·Ce1 = τT1.

In order to simplify the initial analysis, we shall first neglect Rs, rb2, and Cμ1. Later we shall reintroduce these elements one by one to get a closer approximation.

We have already derived the equations needed for each part of the combined circuit: for the common emitter stage we have Eq. 3.2.66 and for the common base we

-3.37-
P. Starič, E. Margan Wideband Amplifier Stages with Semiconductor Devices

have Eq. 3.3.7. We only need to multiply these two equations to get the voltage gain of our cascode amplifier:

    Av = (io1/vi)·(vo/io1) = vo/vi ≈ (1/Re1) · [1/(1 + s·Ce1·rb1)] · [−RL/(1 + s·Cμ2·RL)]    (3.4.1)

Here we have approximated α02 ≈ 1, and therefore io2 = io1. The first fraction, multiplied by RL from the third fraction, is the DC voltage amplification, and the remainder represents the frequency dependence:

    Av ≈ −(RL/Re1) · 1/[(1 + s·Ce1·rb1)·(1 + s·Cμ2·RL)]    (3.4.2)

Obviously, the frequency dependence is a second order function. There are two poles: the pole at the input is 1/(Ce1·rb1), whilst the pole 1/(Cμ2·RL) is on the output side. As we shall see later, it is possible to apply the peaking technique on both sides.

In an ideal case the common base stage input (emitter) impedance is very low. Because of this low load the first stage voltage gain is Av1 << 1, so Cμ1 would not be amplified by it. And if we could neglect rb2, the capacitance Cμ2 would appear in parallel with the loading resistor RL, and therefore it would not be multiplied by the second stage’s voltage gain Av2 either. Both Cμ1 and Cμ2 are relatively small, so it is obvious that the cascode amplifier has, potentially, a much greater bandwidth in comparison with a simple common emitter amplifier (for the same total voltage gain). The price we pay for this improvement is the additional transistor Q2.

Of course, in practice things are not so simple, and in addition we should not neglect the inevitable stray capacitances. Those should be added to Cμ1 and Cμ2. Also, (Rs + rb1) with Cμ1 will affect the behavior of Q1, and rb2 with Cμ2 will affect the behavior of Q2, as we shall see in the following analysis.

3.4.2 Damping of the Q2 Emitter

Owing to the base spread resistance rb2, the Q2 input (emitter) has an inductive component, with the inductance Le2 = τT2·rb2 in parallel with rb2, as already shown in Table 3.2.1 and also in Fig. 3.3.2. The equivalent input impedance of the transistor Q2 was derived in Eq. 3.3.9 to Eq. 3.3.11.

As shown in the simplified circuit in Fig. 3.4.3, the inductance Le2 and the collector to base capacitance Cμ1 of Q1 form a series resonant circuit, damped by rb2 in parallel with Le2 (and a series emitter resistance rb2/β2, which is very small, so it was neglected). The other end of Cμ1 is connected to the base of Q1, where we must consider two effects:

— at very high frequencies the input signal goes directly through rb1 and Cμ1;

— at somewhat lower frequencies, the signal is inverted and amplified by Q1, and the internal base junction can then be treated as a virtual ground.


In an actual cascode amplifier Q2 operates at a much higher collector voltage than Q1, and since the collector currents are nearly equal, this means a higher power dissipation for Q2. By using for Q2 a transistor capable of higher power dissipation, we will probably have to accept its higher rb2 as well. This increases the inductance Le2 and lowers the damping of the series resonance formed by Cμ1 and Le2, resulting in a large peaking near the upper cut off frequency.

To prevent this from happening, an additional impedance Zd, consisting of a resistor Rd in parallel with a capacitor Cd, is connected between the collector of Q1 and the emitter of Q2. If Rd is made equal to rb2, then Cd can be chosen so that it cancels Le2; the result is a resistive input impedance of the Q2 emitter: Ze2 ≈ rb2.

Fig. 3.4.3: Parasitic resonance damping of the cascode amplifier. Two current paths must be considered: at the highest frequencies, for ib1, Cμ1 represents a non-inverting cross-talk path; at lower frequencies, for ic1, Cμ1 provides a negative feedback loop, thus it can be viewed as if being connected to a virtual ground (Q1 base junction bJ1). The parasitic resonance formed by Cμ1 and Le2 is only partially damped by rb2; the required additional damping is provided by inserting Rd and Cd between the Q1 collector and the Q2 emitter.

So let us put:

    Rd = rb2 = √(Le2/Cd)    (3.4.3)

The value of Cd is then:

    Cd = τT2/rb2 = 1/(2·π·fT2·rb2)    (3.4.4)

To get a feeling for actual values, let us take two equal transistors with parameters such as in the examples in Sec. 3.2.4 (fT = 600 MHz, rb2 = 47 Ω):

    Rd = rb2 = 47 Ω,  Cd = 1/(2·π·600·10⁶·47) = 5.6 pF    (3.4.5)
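A short numerical check of Eqs. 3.4.3 to 3.4.5, using the same transistor data; the last line confirms that the chosen Cd indeed satisfies Rd = √(Le2/Cd):

```python
import math

fT2 = 600e6
rb2 = 47.0
tauT2 = 1.0 / (2 * math.pi * fT2)

Rd = rb2                     # Eq. 3.4.3
Cd = tauT2 / rb2             # Eq. 3.4.4, ~5.6 pF
Le2 = tauT2 * rb2            # Q2 emitter input inductance (Eq. 3.3.11)

print(Cd * 1e12)             # ~5.64 pF, as in Eq. 3.4.5
print(math.sqrt(Le2 / Cd))   # 47 ohm = rb2, as required by Eq. 3.4.3
```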
The input impedance of the emitter circuit of transistor Q2 now becomes:

    Ze2 ≈ rb2    (3.4.6)

which is resistive at all frequencies (approximately so, because we have allowed ourselves a simplification). The corresponding equivalent circuit is shown in Fig. 3.4.4. The task of the impedance Zd is actually twofold: it must damp the inductive input

circuit of transistor U# , and as we shall see later, it can be a good choice for providing
the thermal stabilization of the cascode stage.
Since we have introduced ^d into the collector circuit of U" we must now
account for the U" Miller capacitance:
^e# <b#
GM" ¸ G." Œ"   œ G." Š"  ‹ (3.4.7)
^e" Ve"
where ^e# is the U# compensated emitter input impedance and ^e" is the impedance of
the emitter circuit of U" . With this consideration the gain, Eq. 3.4.2, becomes:

VL "
Ev ¸  † (3.4.8)
Ve" "  = ÐGe"  GM" Ñ <b" ‘ Ð"  = G.# V L Ñ

Fig. 3.4.4: With damping, the simplified Q2 input impedance becomes (approximately) resistive. Note the high and low frequency paths.

The collector to base capacitance of transistor Q1 allows very high frequency signals from the input to bypass this transistor and flow directly into the emitter of transistor Q2. Transistor Q1 amplifies, inverts, and delays the low frequency signals. In contrast, all of what comes through Cμ1 is non-delayed, non-amplified, and non-inverted, causing a pre-shoot [Ref. 3.1] in the step response, as shown in Fig. 3.4.5. The Q1 collector current ic1 is the sum of iμ1 and vπ1·gm1. Note that both the pre-shoot owed to iμ1 and the overshoot of vπ1·gm1 are reduced in vc2 by the Q2 pole due to Cμ2.

Fig. 3.4.5: The step response vc2 has a pre-shoot owed to the signal cross-talk through Cμ1 (arbitrary vertical units, but corresponding to Av = −2).


So far we have excluded G.# from our analysis. When included, its effect on
bandwidth is severe, owing to the Miller effect and <b# . But it also affects the emitter
input impedance of U# since GM# œ G.# a"  Ev b appears in parallel to <b# and is
consequently transformed into the emitter in accordance with Table 3.2.1, in the same
way as in Fig. 3.2.7. If Ev is relatively high the pronounced resonance owed to G.# can
cause long ringing, even if the bandwidth is lower than the resonant frequency.
Instead of using increased damping in series with the emitter of U# , which
would reduce the bandwidth further, John Addis [Ref. 3.26] suggested an alternative
approach. The U# base, instead of being connected directly to a low impedance bias
voltage, should be connected to it through a resistor VA of up to "!! H and grounded by
a small capacitor GA ¸ G.# . In Fig. 3.4.6 the voltage gain, the phase, and the group
delay as functions of frequency are shown for the two cases: VA œ ! and VA œ $$ H,
respectivelyÞ The change of U# input impedance is exposed by the lower drive stage
current 3c" near the cut off frequency.


Fig. 3.4.6: The compensation method of Q2 as suggested by John Addis. a) With RA = 0, the frequency response has a notch at the resonance and a phase-reversed cross-talk, which makes the group delay τe positive. b) With RA = 33 Ω and CA = 3 pF the frequency response, the phase, and the group delay are smoothed and, although the bandwidth is reduced slightly, the group delay linearity is extended and the undesirable positive region is reduced to negative. The Q2 emitter impedance is increased, as can be deduced from the lower ic1 peak.

To analyze the Q2 emitter impedance compensation we shall consider the equivalent circuit in Fig. 3.4.7. Here the capacitance Cμ2 is seen by the base as the Miller capacitance CM2 (remembering that Av = RL/Re1):

    CM2 = Cμ2·(1 + Av)    (3.4.9)

which appears in parallel with the base spread resistance rb2.
-3.41-
P. Starič, E. Margan Wideband Amplifier Stages with Semiconductor Devices

The external compensation network, the parallel connection of VA and GA , is


added in series and the total can then be transformed by the same rule as in Fig. 3.2.7.
The base impedance is:

" "
^b# œ  (3.4.10)
" "
 =GM#  =GA
<b# VA

If VA ¸ <b# and GA ¸ G.# then ^b# becomes:

^b# œ #VA Îa" + = GA VA b œ Vs Îa" + = Gs Vs b (3.4.11)

which is a parallel connection of Vs œ #VA and Gs œ GA Î# .
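The equivalence claimed by Eq. 3.4.11 is easy to verify numerically. The short sketch below is illustrative only; it assumes the matched case rb2 = RA = 33 Ω and CM2 = CA = 3 pF (the values used in Fig. 3.4.6) and compares the exact base impedance of Eq. 3.4.10 with the parallel Rs, Cs equivalent:

```python
# Numerical check of Eq. 3.4.10-3.4.11: with RA ~ rb2 and CA ~ CM2,
# the base impedance Zb2 behaves as Rs = 2*RA in parallel with Cs = CA/2.
import numpy as np

rb2, CM2 = 33.0, 3e-12   # assumed matched case: rb2 = RA, CM2 = CA
RA, CA = 33.0, 3e-12     # compensation network values from Fig. 3.4.6
f = np.logspace(6, 10, 401)
s = 2j * np.pi * f

Zb2 = 1/(1/rb2 + s*CM2) + 1/(1/RA + s*CA)   # Eq. 3.4.10 (exact)
Rs, Cs = 2*RA, CA/2
Zeq = Rs / (1 + s*Cs*Rs)                    # Eq. 3.4.11 (equivalent)

print(np.max(np.abs(Zb2 - Zeq)))  # essentially zero in the matched case
```

For the matched case the two expressions are algebraically identical; any mismatch between RA and rb2 (or CA and CM2) makes the approximation only approximate.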

Fig. 3.4.7: The U# emitter input impedance compensation.

3.4.3 Thermal Compensation of Transistor U"

We already know (Eq. 3.1.1) that the effective emitter resistance is:

<e œ 5B X Îa; Me b (3.4.12)

The ‘room temperature’ ( 20°C or 68°F ) expressed as the absolute temperature


is X œ 20  273 œ 293 K. An increase of, say, ?X œ 30 K (30°C) means increasing it
by about 10 %. Assuming that the transistor is biased by a constant current source
(Me œ constant), this would affect <e and consequently decrease the gain
[Ev ¸ VL ÎÐVe  <e Ñ ]. If, on the other hand, the transistor is biased by a constant
voltage, both <e and Me will be affected and it might not be easy to compensate (as
James M. Bryant of Analog Devices likes to joke, we can not change the Boltzmann
constant 5B because Boltzmann is already dead, and neither can we change the electron
charge ; because both Coulomb and J.J. Thomson are dead, too).
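A quick numerical illustration of Eq. 3.4.12 (the bias current of 1 mA is an assumption for the example, not a value from the text):

```python
# Eq. 3.4.12: re = kB*T/(q*Ie). With a constant-current bias, a 30 K rise
# increases re (and therefore reduces the gain) by about 10 %.
kB = 1.380649e-23      # Boltzmann constant [J/K]
q  = 1.602176634e-19   # electron charge [C]
Ie = 1e-3              # assumed bias current: 1 mA (constant-current source)

re_293 = kB*293/(q*Ie)          # 'room temperature' (20 degC)
re_323 = kB*323/(q*Ie)          # 30 K warmer
print(re_293)                   # ~25.2 ohm
print(100*(re_323/re_293 - 1))  # ~10.2 % increase
```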


When we apply the bias and supply voltage to a transistor, the collector current
Mc will cause an increase in the temperature X of the transistor, owing to the power
dissipated in the transistor:
TD œ Mc Zce (3.4.13)

where Zce is the collector to emitter voltage. If the ambient temperature is XA and the
total thermal resistance from junction to ambient is KJA [KÎW] (kelvin per watt), the
junction temperature XJ will be:
XJ œ XA  KJA TD (3.4.14)

In a properly designed amplifier, at a certain time after we apply the operating


voltages, an equilibrium is reached: all the heat generated by the transistor dissipates
into the ambient and the transistor obtains a stable, but higher temperature (see
Appendix 3.1 for the calculation of the internal junction temperature, the proper cooling
requirement, and the prevention of thermal runaway).
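Eq. 3.4.13 and Eq. 3.4.14 combine into a one-line estimate. The numbers below are illustrative assumptions only (KJA in particular depends entirely on the package and any heat sink used):

```python
# Junction temperature estimate from Eqs. 3.4.13-3.4.14.
Ic, Vce = 50e-3, 7.2   # assumed bias: 50 mA at Vce = 7.2 V
K_JA = 200.0           # assumed junction-to-ambient thermal resistance [K/W]
T_A = 293.0            # ambient temperature [K]

P_D = Ic * Vce          # Eq. 3.4.13: dissipated power
T_J = T_A + K_JA * P_D  # Eq. 3.4.14: junction temperature
print(P_D, T_J)         # 0.36 W, 365 K
```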
Now we apply a step signal to the base–emitter junction (input). The collector
current follows the step, changing the power dissipation. A little later (provided that the
transistor stage is properly designed), a new temperature equilibrium is reached.
The dynamic emitter resistance <e is directly dependent on the temperature X
and inversely dependent on the emitter current Me , but Me œ Ms ˆe;Zbe Î5b X  "‰; since Zbe
and Ms are temperature dependent, we can not expect all effects to cancel. If, e.g., Mc
( ¸ Me ) increases, Zce decreases, and because TD œ Mc Zce , it can either increase or
decrease, depending on the initial values of Mc and Zce . Consequently, the temperature
may also increase or decrease, influencing the dynamic emitter resistance to change
accordingly. Since <e ¸ "Îgm , a change in the emitter resistance also changes the DC
gain, but <e ¥ Ve , so the change in Zbe is dominant. As shown in Fig. 3.4.8, the
change does not occur suddenly at the moment when we apply the input step voltage,
but gradually, depending on the transistor thermal time constant. This is known as the
‘thermal distortion’.

Fig. 3.4.8: Thermal distortion can cause long term drift after the transient
(exaggerated, but not too much !). In addition to this thermal time constant, there can
also be a much slower one, owed to the change in the transistor case temperature.


If we go back to Eq. 3.2.61 for a moment, we note that the gain depends also on
the ratio <e ÎVe (in the denominator), which we have assumed to be much smaller than 1
and thus neglected. But if Ve is small the change in temperature and emitter current can
alter the gain up to a few percent in worst cases.
Before we look for the remedy for the problem of how to cancel, or at least how
to substantially reduce, the thermal distortion, let us take a look at Fig. 3.4.9, which
shows a simple common emitter stage, and the way in which the power dissipation is
shared between the load and the transistor as a function of the collector current. As
usual, we use capital letters for the applied DC voltages, loading resistor, etc. and small
letters for the instantaneous signal voltages and power dissipation. The instantaneous
transistor power dissipation is:
:U" œ @ce 3c œ @ce @L ÎVL œ @ce aZcc − @ce bÎVL œ @ce Zcc ÎVL − @ce# ÎVL (3.4.15)

Since @ce cannot exceed Zcc if the collector load is purely resistive, the right
vertical axis ends at Zcc . The left vertical axis is normalized to the maximum load
power, which is simply :Lmax œ Zcc# ÎVL (corresponding to @ce œ ! and thus :Q1 œ !).
The transistor’s power dissipation, however, follows an inverse parabolic function with
a maximum at:
TU"max œ aZcc Î#b# ÎVL œ Zcc# Îa% VL b (3.4.16)

where @ce œ Zcc Î#. This point is the optimum bias for a transistor stage. If excited by
small signals, the transistor power dissipation moves either left or right, close to the top
of the parabola and thus it does not change very much. This means that the transistor’s
temperature does not change very much either.

Fig. 3.4.9: The optimum bias point is when the voltage across the load is
equal to the voltage across the transistor. This is optimal both in the sense of
thermal stability, as well as in the available signal range sense.
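The dissipation parabola of Eq. 3.4.15 and its maximum (Eq. 3.4.16) can be checked with a few lines of code; Vcc and RL below are illustrative values only:

```python
# Eqs. 3.4.15-3.4.16: the transistor dissipation peaks at vce = Vcc/2,
# where pQ1max = Vcc^2/(4*RL).
import numpy as np

Vcc, RL = 15.0, 390.0       # illustrative supply voltage and load resistor
vce = np.linspace(0, Vcc, 1001)
pQ1 = vce*(Vcc - vce)/RL    # Eq. 3.4.15 (inverse parabola)

print(Vcc**2/(4*RL))        # Eq. 3.4.16: ~0.144 W
print(pQ1.max())            # same peak, found numerically
print(vce[pQ1.argmax()])    # peak occurs at ~Vcc/2 = 7.5 V
```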


If different design conditions were forcing us to move the bias point far from the
top of the parabola, the bias with Zce  Zcc Î# is preferred to the range Zce  Zcc Î#,
because the latter situation is unstable. However, in wideband amplifiers we can hardly
avoid it, because we want to have a low VL , a high Mc and a high Zcb (to reduce G. ) and
all three conditions are required for high bandwidth.
The typical temperature coefficient of a base–emitter p–n junction voltage
( ¸ !Þ' V) is approximately # mVÎK for silicon transistors, so we can explain the
instability in the following way:
When the circuit is powered up the transistor conducts a certain collector
current, which heats the transistor, increasing the transistor base–emitter p–n junction
temperature. If the base is biased from a voltage source (low impedance, which in
wideband amplifiers is usually the case), the temperature increase will, owing to the
negative temperature coefficient, decrease the base–emitter voltage. In turn, the base
current increases and, consequently, both the emitter and the collector current increase,
which further increases the dissipation and the junction temperature. The load voltage
drop would also increase with current and therefore reduce the collector voltage, thus
lowering the transistor power dissipation. But with a low VL , the change in the drop of
load voltage will be small, so the transistor power dissipation increase will be reduced
only slightly.
The effect described is cumulative; it may even lead to a thermal runaway and
the consequent destruction of the transistor if the top of the parabola exceeds the
maximum permissible power dissipation of the transistor (which is often the case, since
we want low VL and high Zcc and Me , as stated above). In a similar way, on the basis of
the # mVÎK temperature dependence of Zbe , the reader can understand why the bias
point for Zce  Zcc Î# is thermally stable.
According to Eq. 3.4.16, to have the transistor thermally stable means having
resistances VL or VL  Ve such that at the bias point the voltage drop across them is
equal to Zcc Î# . In general, this principle is successfully applied in differential
amplifiers: when one transistor is excited so that its bias point is pushed to one side of
the parabola the bias point of the other transistor is moved exactly to the same
dissipation on the opposite side of the parabola. As a result the temperature becomes
lower but equal in both transistors. Thus in the differential amplifier both transistors can
always have the same temperature, independent of the excitation (provided that we
remain within the linear range of excitation) and there is (ideally) no reason for a
thermal drift in the differential output signal (in practice, it is difficult to make two
transistors with similar parameter values, let alone equal values, even within an
integrated circuit).
In our cascode circuit of Fig. 3.4.1 the transistor U" already has an emitter
resistor Ve" as dictated by the required current gain, and we do not want to change it.
However, we can add a resistor, which we label V) , in the collector circuit of U" to
make Zce" ¸ Mc" ÐVe"  V) Ñ ¸ Zcc" Î#, where Zcc" is the voltage at the emitter of the
transistor U# . Suppose now that the emitter current is Me" ¸ Mc" œ &! mA and the U#
base voltage Zbb œ  "& V. Then the collector voltage of transistor U" is:

Zcc" œ Zbb  Zbe# ¸ "&  !Þ' œ "%Þ% V (3.4.17)

where Zbe# is the base–emitter voltage (about !Þ' V for a silicon transistor).

-3.45-
P. Starič, E. Margan Wideband Amplifier Stages with Semiconductor Devices

With Ve" œ #! H the value of the thermal compensation resistor is:

V) œ aZcc" Î# − Mc" Ve" bÎMc" œ a"%Þ%Î# − !Þ!& † #!bÎ!Þ!& œ "#% H (3.4.18)

Such a resistor should be used instead of Vd as calculated before to achieve both


the ringing suppression and the thermal compensation. But by inserting these "#% H
instead of Vd œ %( H, the Miller capacitance GM would increase to #(Þ) pF, decreasing
the amplifier bandwidth too much. Fortunately the compensating resistor V) needs to
correct the behavior of the amplifier only at very low frequencies. So we can bridge that
part of it which is not needed for the suppression of ringing by an appropriate capacitor;
let us call it G) , as shown in Fig. 3.4.10. By doing so, we prevent an excessive increase
of Miller capacitance.
Fig. 3.4.10: The modified compensation network: Vd Gd provide the
HF damping, whilst V) G) provide thermal compensation.

The question is how does one calculate the proper value of G) ? The obvious
way would be to calculate the thermal capacity of the transistor’s die and case mass
(adding an eventual heat sink) and all the thermal resistances (junction to case, case to
heat sink, heat sink to air), as is usually done for high power output stages.
Bruce Hofer [Ref.3.8] suggests the following — more elegant — procedure,
based on Fig. 3.4.10. The two larger time constants in this figure must be equal:

ÐV)  Vd Ñ G) œ <c" GM" (3.4.19)

Here <c" is the dynamic collector resistance of transistor U" , derived from Fig. 3.4.11 as
?Zce Î?Mc . In this figure ZA is the Early voltage:
Fig. 3.4.11: The dynamic collector resistance <c" is derived from
the Mc aZce , Mb b characteristic and the Early voltage ZA .


The meaning of the Early voltage can be derived from Fig. 3.4.11, where
several curves of the collector current Mc vs. collector to emitter voltage Zce are drawn,
with the base current Mb as the parameter. With increasing collector voltage the
collector current increases even if the base current is kept constant. This is because
the collector to base depleted area widens on the account of the (active) base width as
the collector voltage increases. This in turn causes the diffusion gradient of the
current carriers in the base to increase, hence the increased collector current.
By extending the lines of the collector current characteristics back, as shown in
Fig. 3.4.11, all the lines intersect the abscissa at the same virtual voltage point ZA
(negative for npn transistors), called the Early voltage (after J.M. Early, [Ref. 3.11]);
from the similarity of triangles we can derive the collector’s dynamic resistance:
<c" œ ?Zce Î?Mc œ cZce − a−ZA bdÎMc (3.4.20)

Since the voltage gain of the common emitter stage is low, GM" will be only
slightly larger than G." . If we now suppose that transistor U" has an <c" œ !Þ& † "!' H
and G." œ $ pF, the value of G) should be:
<c" GM" !Þ& † "!' † $ † "!"#
G) œ œ œ "*Þ& nF (3.4.21)
V)  V d "#%  %(
In practice, we can take the closest standard values, e.g., G) œ ## nF and
V)  Vd œ "#%  %( œ (( H ¸ (& H. Since, in general, a wideband amplifier has
several amplifying stages, each one having its own temperature and damping problems,
these values can be varied substantially in order to achieve the desired performance.
Thermal problems tend to be more pronounced towards the output stages where the
signal amplitudes are high. Here experimentation should have the last word.
Nevertheless, the values obtained in this way represent a good starting point.
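The numbers of this section can be collected into a small design helper. This is only a sketch of Eqs. 3.4.17, 3.4.18, and 3.4.19/3.4.21 with the example values used above:

```python
# Thermal compensation design values for the cascode example.
Vbb, Vbe2 = 15.0, 0.6     # Q2 base bias and base-emitter drop
Ic1, Re1 = 50e-3, 20.0    # Q1 bias current and emitter resistor
Rd = 47.0                 # damping resistor from the earlier calculation
rc1, CM1 = 0.5e6, 3e-12   # Q1 dynamic collector resistance, Miller capacitance

Vcc1 = Vbb - Vbe2                  # Eq. 3.4.17 -> 14.4 V
R_theta = (Vcc1/2 - Ic1*Re1)/Ic1   # Eq. 3.4.18 -> 124 ohm
C_theta = rc1*CM1/(R_theta - Rd)   # Eq. 3.4.19 solved for C_theta -> ~19.5 nF
print(Vcc1, R_theta, C_theta)
```

In practice the nearest standard values (22 nF, 75 Ω) would be used, as noted above.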

Fig. 3.4.12: The compensated cascode amplifier. If VA GA compensation
is used, Vd Gd may become unnecessary. The main bandwidth limiting
time-constants are now aVs + <b" bGe" and aG.# + GL bVL .

Note that, to achieve a good thermal compensation, the same half-voltage


principle (Fig. 3.4.9) should be applied to both U" and U# , although this will rarely be
possible. On the other hand, a cascode amplifier can always be implemented as a
differential amplifier, simplifying the problem of thermal compensation and, as a bonus,
doubling the gain. We will see this later in Sec.3.7. There are, as we are going to see in
following sections, still some points for possible improvement of the cascode amplifier.


3.5 Emitter Peaking in a Cascode Amplifier

Here we shall examine the possibility of improving the cascode amplifier


bandwidth by applying the emitter peaking technique.
Let us return to the basic cascode amplifier of Fig. 3.4.1 and Fig. 3.4.2, but with
a little modification: we shall assume a current signal source in parallel with the source
resistance Vs . We shall also assume that there is an output capacitance in parallel with
the load resistance VL such that its value is Go œ G.#  GL . To simplify the analysis
we shall disregard the damping and thermal compensation described in the previous
section. We shall basically follow the steps of Carl Battjes [Ref. 3.1], to show a
different approach to the cascode amplifier design.

3.5.1 Basic Analysis

The transimpedance of the amplifier in Fig. 3.5.1 is:


@o 3b" 3c" 3c# @o
œ † † † (3.5.1)
3s 3s 3b" 3c" 3c#
where:
3b" Vs 3c" "
œ and œ " Ð=Ñ œ (3.5.2)
3s Vs  ^i 3b" "
 = 7T"
"!
with ^i being the input impedance looking into the base of transistor U" and Vs the
source resistance. At higher frequencies, when the input capacitance of transistor U"
prevails (see Eq. 3.2.12 and [Ref.3.1]), we have:
3c" " "
¸ and 7T " œ (3.5.3)
3b" = 7T" #10T"
Further, for U# we have:
3c# Î3c" œ !# ¸ " and @o Î3c# œ VL Îa" + = VL Go b œ VL Îa" + = 7L b (3.5.4)

where 7L œ VL Go and Go œ G.#  GL .


We shall temporarily neglect the base spread resistance <b" and calculate the
input impedance ^i at the base–emitter junction of U" by Eq. 3.2.15:
" "  = 7T"
^i œ "Ð=Ñ  " ‘ ^e" œ Š  "‹ ^e" œ ^e" (3.5.5)
= 7T" = 7T"
The emitter peaking technique basically involves the introduction of a zero at
=z œ =R œ "Î7R in the emitter network of U" to cancel the pole of the U# collector
load at =p œ =L œ "Î7L . For an efficient peaking 7R must be lower than 7L , but still
above the time constant 7T" of U" :

7T"  7R  7L (3.5.6)


The complete emitter circuit should look as in Fig. 3.5.1a, in which 7R œ VG


and its HF impedance is:

Ve" Ð"  = 7R Ñ Ve" Ð"  = 7R Ñ


^e" œ œ # (3.5.7)
Ð"  = 7T1 Ñ Ð"  = 7L Ñ = 7T" 7L  = Ð7T"  7L Ñ  "


Fig. 3.5.1: Emitter peaking in cascode amplifiers. a) By adding the zero at =R œ "ÎVG to
the emitter network of U" , we modify the input impedance ^i at the base junction of U" ; b) the
frequency dependent asymptote plot of ^i ; c) the equivalent schematic has negative
components, too. See the explanation in Sec. 3.5.2.

By introducing Eq. 3.5.7 back into Eq. 3.5.5 the input impedance can be expressed as:
"  = 7T" Ve" Ð"  = 7R Ñ Ve" Ð"  = 7R Ñ
^i œ † œ (3.5.8)
= 7T" Ð"  = 7T" Ñ Ð"  = 7L Ñ = 7T" Ð"  = 7L Ñ

Now we put Eq. 3.5.2 and Eq. 3.5.8 into Eq. 3.5.1:
@o Î3s œ Vs VL Î˜cVs + Ve" a" + = 7R bÎa= 7T" a" + = 7L bbd = 7T" a" + = 7L b™ œ Vs VL Îc=# Vs 7T" 7L + = aVs 7T" + Ve" 7R b + Ve" d (3.5.9)

Next we put the denominator into the canonical form and equate it to zero:
Vs 7T"  Ve" 7R Ve"
=#  =  œ =#  + =  , œ ! (3.5.10)
Vs 7T" 7L Vs 7T" 7L
where we set the coefficients:
Vs 7T"  Ve" 7R Ve"
+œ and ,œ (3.5.11)
Vs 7T" 7L Vs 7T" 7L


The general solution of Eq. 3.5.10 is:


=",# œ −aÎ# ± Éa# Î% − b (3.5.12)

An efficient peaking must have complex poles, so the expression under the
square root must be negative, therefore: b > a# Î%. We can then extract the negative
sign as the imaginary unit and write Eq. 3.5.12 in the form:

=",# œ −aÎ# ± 4 Éb − a# Î% (3.5.13)

From Eq. 3.5.13 we can calculate the tangent of the pole angle ):

tan ) œ e˜=" ™Îd ˜=" ™ œ Éb − a# Î% ÎaaÎ#b œ É%bÎa# − " (3.5.14)

It follows that:

" + tan# ) œ %bÎa# (3.5.15)

Now we insert the expressions from Eq. 3.5.11 for a and b and obtain:

" + tan# ) œ c% Ve" ÎaVs 7T" 7L bd Î caVs 7T" + Ve" 7R bÎaVs 7T" 7L bd# œ % Ve" Vs 7T" 7L ÎaVs 7T" + Ve" 7R b# (3.5.16)

By taking the square root the result is:

È"  tan# ) œ # Ve" Vs 7T" 7L


È
(3.5.17)
Vs 7T"  Ve" 7R

Finally we solve this for 7R and obtain:

Vs 7T" 7L Vs 7T"
7R œ V G œ # Ë  (3.5.18)
Ve" ˆ"  tan# )‰ Ve"

The admittance ]e1 of the emitter circuit in Fig. 3.5.1 is:

" "
]e" œ  = Ge" 
Ve" "
V
=G

=# GV Ge" Ve"  = ˆGV  GVe"  Ge" Ve" ‰  "


œ (3.5.19)
Ve" ˆ"  = GV ‰


The emitter impedance ^e" is the inverse value of ]e" and it must be equal to Eq. 3.5.7:

Ve" ˆ"  = GV ‰
^e" œ
=# GV Ge" Ve"  = ˆGV  GVe"  Ge" Ve" ‰  "

Ve" ˆ"  = 7R ‰
œ (3.5.20)
=# 7T" 7L  = ˆ7T"  7L ‰  "

The coefficients of =# and = respectively must be equal in both fractions, therefore:

G V Ge" Ve" œ 7T" 7L (3.5.21)


and:
G V  Ve" aG  Ge" b œ 7T"  7L (3.5.22)

The value of Ve" is constrained by the DC current amplification VL ÎVe" . Thus we need
the expressions for G , Ge" , and V. By using Eq. 3.5.18, 3.5.21, and 3.5.22 we obtain:
7T" 7L
Ge" œ (3.5.23)
Ve" 7R
and:
7L
7T"  7L  7R  7T"
7R
Gœ (3.5.24)
Ve

where 7R should be calculated by Eq. 3.5.18. Once the value of G is known we can
easily calculate the value of the resistor V œ 7R ÎG . Of course, 7R is determined by the
angle ) of the poles selected for the specified type of response.
Fig. 3.5.2a and 3.5.2b show the normalized pole loci in the complex plane. As
seen already in examples in Part 1 and Part 2, to achieve the maximally flat envelope
delay response (MFED), a single stage 2nd -order function must have the pole angle
) œ „"&!°. The original circuit has two real poles =T" and =L , but when the emitter
peaking zero =R is brought close to =L the poles form a complex conjugate pair.
The frequency response is altered as shown in Fig. 3.5.2c and the bandwidth is
extended. The emitter current increase 3e" a=bÎMe" owing to the introduced VG network
has two break points: the lower is owed to Ve" aG  Ge" b and the upper is owed to VG .
If the break point at =R is brought exactly over =L they cancel each other, and the final
response is shaped by the break point =T" of the transistor and the second break point in
the emitter peaking network, =C . The peaking can thus be adjusted by V and G .
Let us consider an example with these data: 0T" œ #!!! MHz, Vs œ '! H,
Ve œ #! H, Go œ * pF, VL œ $*! H. We want to make the amplifier with such an
emitter peaking network which will suit the Bessel pole loci (MFED), where the pole
angle ) œ „"&!° . First we calculate both time constants:
" "
7T" œ œ œ (*Þ&) ps (3.5.25)
#10T" #1 † #!!! † "!'
and:
7L œ VL Go œ $*! † * † "!"# œ $Þ&" ns (3.5.26)



Fig. 3.5.2: Emitter peaking: a) two real poles travel towards each other when the emitter
network zero goes from _ towards =L , eventually forming a complex conjugate pair; b) the
poles for Eq. 3.5.14 for the 2nd -order Bessel (MFED) response; c) frequency response
asymptotes — the bandwidth is extended to =T" if =R œ =L .

By using Eq. 3.5.18 we then calculate the third time constant:

Vs 7T" 7L Vs 7T"
7R œ # Ë  œ (3.5.27)
Ve" Ð"  tan# )Ñ Ve"

'! † (*Þ&) † "!"# † $Þ&" † "!* '! † (*Þ&) † "!"#


œ #Ë  œ "Þ$& ns
#! † Ð"  tan# $!°Ñ #!

Now we can calculate Ge" using Eq. 3.5.23:

7T" 7L (*Þ&) † "!"# † $Þ&" † "!*


Ge" œ œ œ "!Þ$& pF (3.5.28)
Ve" 7R #! † "Þ$& † "!*
According to Eq. 3.5.24 the value of the capacitor G is:
G œ a7T" + 7L − 7R − 7T" 7L Î7R bÎVe"

œ a(*Þ&) † "!"# + $Þ&" † "!* − "Þ$& † "!* − (*Þ&) † "!"# † $Þ&" † "!* Î"Þ$& † "!* bÎ#! œ "!"Þ' pF (3.5.29)

Finally we calculate the value of the resistor V , from the time constant 7R :

7R "Þ$& † "!*
Vœ œ œ "$Þ#* H (3.5.30)
G "!"Þ' † "!"#

3.5.2 Input Impedance Compensation

The introduction of the peaking elements V and G affects the HF input


impedance in an unfavorable way. We have calculated the input impedance at the base
emitter junction in Eq. 3.5.8, which we rewrite as:
Ve" Ð"  = 7R Ñ = 7R Ve"  Ve"
^i œ œ # (3.5.31)
= 7T" Ð"  = 7L Ñ = 7T" 7L  = 7T"

Here we have an additional pole at =p œ =L œ "Î7L and a zero at
=z œ =R œ "Î7R , both spoiling the purely capacitive character of the input
impedance, which we would like to have (frankly, we would prefer the input
capacitance to be zero as well, but this is not feasible). We have seen the Bode plot of
the input impedance and its configuration already in Fig. 3.5.1b and 3.5.1c. At very high
frequencies, where = becomes dominant, the input impedance obtains a simple
capacitive character:

Ve" 7R 7T" 7L
^i œ Ê Gi œ (3.5.32)
= 7 T" 7 L Ve" 7R

Our objective is to keep such an input impedance (at the base-emitter junction of
U" ) at lower frequencies also. In other words, at lower frequencies the plot of the input
impedance should correspond to the l"Î=Gi l line in Fig. 3.5.1b . All other impedances
that appear in the input circuit should be canceled by an appropriate compensating
network. To find these impedances, we perform a ‘continued fraction expansion’
synthesis of the input admittance ]i as derived from the right side of Eq. 3.5.8. Thus:
7T" 7L
= 7T"  =
" =# 7T" 7L  = 7T" 7T" 7L 7R
]i œ œ œ=  (3.5.33)
^i = 7R Ve"  Ve" V e " 7R = 7R Ve"  Ve"
The first fraction we recognize to be the input admittance = Gi . The second fraction can
be inverted and, by canceling out =, we obtain the impedance:
= 7R Ve"  Ve" Ve" 7R# Ve" 7R
^iw œ 7L œ 
= 7T" Š"  ‹ 7 Ð
T" R7  7L Ñ = 7T" Ð7R  7L Ñ
7R
"
œ  Vc  (3.5.34)
= Gc

-3.54-
P. Starič, E. Margan Wideband Amplifier Stages with Semiconductor Devices

This means a resistor Vc and a capacitor Gc connected in series, and this
combination is in parallel with the input capacitance Gi . The values are negative
because 7R  7L as was required in Eq. 3.5.6. On the basis of these results we can draw
the equivalent input impedance circuit corresponding to Fig. 3.5.1c. The expression for
the capacitance Gc is:
7T" Ð7L  7R Ñ
Gc œ (3.5.35)
Ve" 7R
From Eq. 3.5.33, as well as from our previous analysis, we can derive that Vc Gc œ 7R
and obtain a simpler expression for Vc :
Vc œ 7R ÎGc (3.5.36)

Let us now continue our example of the emitter peaking cascode amplifier with
the data Ve" œ #! H, 7T" œ (*Þ&) ps, 7R œ "Þ&" ns, and 7L œ $Þ&" ns, and calculate the
values of Gi , Gc , and Vc . The input capacitance Gi , without GM , is:

7T" 7L (*Þ&) † "!"# † $Þ&" † "!*


Gi œ œ œ "!Þ$& pF (3.5.37)
Ve" 7R #! † "Þ$& † "!*
The value of the capacitance Gc is:
7T" Ð7L  7R Ñ (*Þ&) † "!"# † Ð$Þ&"  "Þ$&Ñ † "!*
Gc œ œ œ 'Þ$( pF (3.5.38)
Ve" 7R #! † "Þ$& † "!*
and the resistor Vc has a resistance of:
7R "Þ$& † "!*
Vc œ œ œ $)& H (3.5.39)
Gc $Þ&" † "!"#

The next step is to compensate the series connected Gc and Vc . This can be
done by connecting in parallel an equal combination with positive elements. The
admittance of such a combination is zero and thus the impedance becomes infinity. The
mathematical proof for this operation is:
" "
]i w œ  œ!
" "
 Vc  Vc 
= Gc = Gc
and:
"
^iw œ Ê _ (3.5.40)
]i w

By doing so, only the input capacitances Gi  GM and the input resistance
a"  "bVe" remain effective at the (junction) input.
The impedance ^i as given by Eq. 3.5.8 is effective between the base–emitter
junction and the ground. Unfortunately, no direct access is possible to the junction,
because from there to the base terminal we have the base spread resistance <b" . This
means that <b" must be subtracted from Vc to get the proper value of the compensating
resistor. Supposing that <b" œ #& H, the proper compensating resistor is simply:

Vcw œ Vc  <b" œ $)&  #& œ $'! H (3.5.41)


The complete input circuit is shown in Fig. 3.5.3; the input impedance
components, which are reflected from the emitter to the base, are within the box.

Fig. 3.5.3: The impedances in the emitter are reflected into the base junction of U" .
The emitter peaking components V and G are reflected into negative elements Vc
and Gc , which must be compensated by adding externally an equal and positive Vc
and Gc ; for proper compensation <b" must be subtracted from Vc .

The compensation of the input impedance is mandatory if we intend to apply an


inductive peaking network at the input of that amplifying stage. The equivalent input
capacitance, which will be seen as the load by the peaking circuit, is the capacitance
from the transistor base–emitter junction to ground, Gi  GM .
However, as mentioned above, these capacitances will be seen with the base
resistance <b1 in series. Also there will be a parasitic base lead inductance, in addition to
some PCB track with its own inductance and capacitance.
Therefore the inductive peaking circuit at the input of a transistor amplifying
stage will not see a pure capacitance as its load. Special ‘tricks of trade’ must be
applied, e.g., a modified T-coil peaking at the amplifier input, which we shall discuss in
the next section.


3.6 Transistor Interstage T-coil Peaking

In Part 2 we have shown that the greatest bandwidth extension is achieved by


using T-coil peaking. The analysis was based on the assumption that the T-coil tap was
loaded by a pure capacitance. Unfortunately, for a transistor amplifier this is not the
case. In case of a cascode amplifier the emitter network, formed by the parallel
connection of Ve Ge œ 7T , is reflected into the base circuit as a parallel connection of
a"  "bVe and Ge , paralleled also by the Miller capacitance GM ; to this the series base
spread resistance <b must be added.
In Fig. 3.6.1a we draw such stage [Ref. 3.5]. Since in the following analysis we
do not need the transistor U# , we shall consider its emitter input as an ideal ground for
the U" collector current signal (of course, the value of the Miller capacitance GM has to
be calculated by considering the actual U# emitter impedance). Thus we can drop the
index ‘1’, as all the parameters will belong to U" only. To further simplify the analysis,
we shall neglect the damping and thermal compensation impedances ^d and ^) , as well
as the emitter peaking. The resistor VL is the T-coil loading resistor, which is also the
load of the driving stage and we shall assume that its other end is also connected to an
ideal AC ground. Fig. 3.6.1b shows the T-coil loaded by the equivalent small signal,
high frequency input impedance of the transistor U" .

Fig. 3.6.1: a) The cascode amplifier with a T-coil interstage network. b) The
T-coil loaded by the equivalent small signal, high frequency input impedance.

To prevent any confusion we must stress that G b is the T-coil bridging


capacitance and not the base capacitance, which is represented by Go œ Ge  GM . As
we know from the symmetrical T-coil analysis in Part 2, Sec. 2.4:

P œ Pa  Pb œ VL# Go (3.6.1)

Since the input shunt resistance a"  "bVe is usually much higher than <b , we
will neglect it also, thus arriving at the circuit in Fig. 3.6.2a . Fig. 3.6.2b shows the
equivalent T-coil circuit in which we have replaced the magnetic field coupling factor 5
with the negative mutual inductance PM and the coil branches by their equivalent
inductances Pa and Pb . Finally, in Fig. 3.6.2c we have replaced the branch impedances
by symbols A to E to determine the three current loops M" , M# , M$ .



Fig. 3.6.2: a) The T-coil loaded by the simplified input impedance; b) the equivalent T-coil circuit
in which 5 is substituted by PM ; c) the equivalent branch impedances and the three current loops.

By comparing Fig. 2.4.1b,c, Sec. 2.4 with Fig. 3.6.2b,c we see that they are
almost equal, except that in the branch D, we have the additional series resistance <b .
Let us list all these impedances again, but now including <b :

"

= Gb
B œ =Pa

C œ =Pb
"
D œ  = PM  <b 
= Go
E œ VL (3.6.2)

The general analysis of the branches, Eq. 2.4.6 – 2.4.13, showed that the input
impedance of the T-coil network is equal to its loading impedance ^i œ E œ VL . As we
shall soon see, <b between the T-coil tap and Go spoils this nice property; we shall have
to compensate it. The analysis here is similar to that in Sec. 2.4, so we do not have to
repeat it. Here we give the final result, Eq. 2.4.14, for convenience:

BCA  BDA  BEA  DCA  ECA  E # A  E # B  E # C œ ! (3.6.3)

By entering all substitutions from Eq. 3.6.2, performing all the required multiplications
and arranging the terms in the decreasing powers of =, we obtain:

Pa Pb P PM P <b VL " P V#
=”Š  ‹  VL# P•   ÐPa  Pb Ñ  Š  L ‹œ!
Gb Gb Gb Gb = Go G b Gb
(3.6.4)
or, more simply:
= O"  O#  =" O$ œ ! (3.6.5)


The difference between Eq. 3.6.4, 3.6.5 and Eq. 2.4.15, 2.4.16 is in the middle
term. Again, if we want to have an input impedance independent of the frequency =,
then each of the coefficients O" , O# , and O$ must be zero [Ref. 3.5]:
O" œ aPa Pb − P PM bÎGb − VL# P œ !

O# œ cP <b + VL aPa − Pb bdÎGb œ !

O$ œ PÎaGo Gb b − VL# ÎGb œ ! (3.6.6)

So we have the three equations from which we can calculate the parameters $L_a$, $L_b$, and $L_M$. By considering Eq. 3.6.1 we obtain:

$$L_a = \frac{L}{2}\left(1 - \frac{r_b}{R_L}\right) = \frac{R_L^2\,C_o}{2}\left(1 - \frac{r_b}{R_L}\right) \tag{3.6.7}$$

$$L_b = \frac{L}{2}\left(1 + \frac{r_b}{R_L}\right) = \frac{R_L^2\,C_o}{2}\left(1 + \frac{r_b}{R_L}\right) \tag{3.6.8}$$

$$L_M = \frac{L}{4}\left(1 - \frac{r_b^2}{R_L^2}\right) - R_L^2\,C_b = \frac{R_L^2\,C_o}{4}\left(1 - \frac{r_b^2}{R_L^2}\right) - R_L^2\,C_b \tag{3.6.9}$$

Two interesting facts become evident from Eq. 3.6.7 and 3.6.8. First, $L_a < L_b$, which means that the coil tap is no longer at the coil's center, but is moved towards the coil's signal input node. Secondly, $R_L$ must always be larger than $r_b$, otherwise $L_a$ becomes negative. But we reach the limit of realizability long before that, since we know from Part 2 that $L_1 = L_a - L_M$ (and also $L_2 = L_b - L_M$).
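These relations are easy to verify numerically: substituting the solutions of Eq. 3.6.7–3.6.9 back into the coefficients of Eq. 3.6.6 must give zero for each of them. A minimal sketch in Python (the component values are arbitrary illustration values, not taken from the text):

```python
# Substitute the T-coil solutions (Eq. 3.6.7-3.6.9) back into the
# coefficients K1, K2, K3 of Eq. 3.6.6; all three must vanish.
# The component values below are arbitrary illustration values.
RL = 50.0       # load resistance [ohm]
Co = 10e-12     # output (load) capacitance [F]
rb = 10.0       # base spreading resistance [ohm]
Cb = 2e-12      # bridging capacitance [F] (K1 and K2 hold for any Cb)

L  = RL**2 * Co                                   # from K3 = 0
La = (L / 2) * (1 - rb / RL)                      # Eq. 3.6.7
Lb = (L / 2) * (1 + rb / RL)                      # Eq. 3.6.8
LM = (L / 4) * (1 - rb**2 / RL**2) - RL**2 * Cb   # Eq. 3.6.9

K1 = (La * Lb - L * LM) / Cb - RL**2 * L
K2 = L * rb / Cb + (RL / Cb) * (La - Lb)
K3 = L / (Co * Cb) - RL**2 / Cb
```

All three coefficients come out as zero within floating-point rounding, confirming that the input impedance stays resistive at all frequencies.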
In Eq. 3.6.9 we have two unknowns, $L_M$ and $C_b$; therefore we need a fourth equation to calculate them. Similarly as we did in Sec. 2.4, we shall use the transimpedance equation for this purpose. The procedure is well described from Eq. 2.4.20 to 2.4.24 and we write the last one again:

$$\frac{V_o}{I_1} = \frac{1}{s\,C_o}\cdot\frac{CA + EA + EB + EC}{CA + CB + DA + DB + DC + EA + EB + EC} \tag{3.6.10}$$

If we insert the substitutions from Eq. 3.6.2, we obtain the following result:

$$F(s) = \frac{V_o}{I_1} = \frac{R_L}{s^2 R_L^2 C_o C_b + s\,C_o\,\dfrac{R_L + r_b}{2} + 1} \tag{3.6.11}$$

In a similar way, for the transimpedance from the input to $R_L$ we would obtain:

$$\frac{V_R}{I_i} = R_L\;\frac{s^2 R_L^2 C_o C_b - s\,C_o\,\dfrac{R_L - r_b}{2} + 1}{s^2 R_L^2 C_o C_b + s\,C_o\,\dfrac{R_L + r_b}{2} + 1} \tag{3.6.12}$$


Since we have the factor $(R_L - r_b)$ in the numerator and a different factor $(R_L + r_b)$ in the denominator, the two zeros are not placed symmetrically with respect to the two poles in the $s$-plane. Therefore Eq. 3.6.12 does not describe an all-pass network and the input impedance is not simply $R_L$ as before. This represents the basic obstacle to using T-coils in a transistor distributed amplifier, because the T-coil load can not be replaced by another T-coil network (for comparison see [Ref. 2.18 and 2.19]).

Eq. 3.6.11 has two poles, which we calculate from the canonical form of the denominator:

$$s^2 + s\,\frac{R_L + r_b}{2\,R_L^2 C_b} + \frac{1}{R_L^2 C_o C_b} = 0 \tag{3.6.13}$$
and both poles are:

$$s_{1,2} = -\frac{R_L + r_b}{4\,R_L^2 C_b} \pm \sqrt{\left(\frac{R_L + r_b}{4\,R_L^2 C_b}\right)^2 - \frac{1}{R_L^2 C_o C_b}} \tag{3.6.14}$$

or, by extracting the common factor:

$$s_{1,2} = -\frac{1 + r_b/R_L}{4\,C_b\,R_L}\left[1 \mp \sqrt{1 - \frac{16\,C_b}{C_o\left(1 + r_b/R_L\right)^2}}\,\right] \tag{3.6.15}$$

An efficient inductive peaking must have complex poles. For Bessel poles, as shown in Fig. 3.6.2, the pole angles are $\theta_{1,2} = \pm 150°$, and with this pole arrangement we obtain the MFED response. If the poles are complex, the tangent of the pole angle is the ratio of the imaginary to the real component of Eq. 3.6.15:

$$\tan\theta = \frac{\Im\{s_1\}}{\Re\{s_1\}} = \sqrt{\frac{16\,C_b}{C_o\left(1 + r_b/R_L\right)^2} - 1} \tag{3.6.16}$$

By solving this equation for $C_b$ we obtain:

$$C_b = C_o\,\frac{1 + \tan^2\theta}{16}\left(1 + \frac{r_b}{R_L}\right)^2 \tag{3.6.17}$$

Compared to the symmetrical T-coil, here we have the additional factor $\left(1 + r_b/R_L\right)^2$. For Bessel poles $\theta = 150° = 5\pi/6$ and $\tan^2\theta = 1/3$, thus for a single stage case:

$$C_b = \frac{C_o}{12}\left(1 + \frac{r_b}{R_L}\right)^2 \tag{3.6.18}$$
If we replace $C_b$ in Eq. 3.6.9 with Eq. 3.6.18, the mutual inductance is:

$$L_M = R_L^2\,C_o\left[\frac{1}{4}\left(1 - \frac{r_b^2}{R_L^2}\right) - \frac{1}{12}\left(1 + \frac{r_b}{R_L}\right)^2\right] \tag{3.6.19}$$

With this we can calculate the coupling factor $k$ between the coils $L_1$ and $L_2$ [Ref. 3.23]:

$$k = \frac{L_M}{\sqrt{L_1 L_2}} = \frac{L_M}{\sqrt{\left(L_a - L_M\right)\left(L_b - L_M\right)}} \tag{3.6.20}$$

Now we have all the equations needed for the T-coil transistor interstage coupling.
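The design equations above are easy to mechanize. The sketch below (the component values are illustrative assumptions, not taken from the text) evaluates Eq. 3.6.7–3.6.9, 3.6.18 and 3.6.20, and also confirms that the resulting poles of Eq. 3.6.13 sit at the Bessel angle of 150°:

```python
import cmath
import math

def tcoil_mfed(RL, Co, rb):
    """T-coil interstage design, Eq. 3.6.7-3.6.9, 3.6.18 and 3.6.20."""
    L  = RL**2 * Co                                   # total coil inductance
    Cb = (Co / 12) * (1 + rb / RL)**2                 # Eq. 3.6.18 (MFED)
    La = (L / 2) * (1 - rb / RL)                      # Eq. 3.6.7
    Lb = (L / 2) * (1 + rb / RL)                      # Eq. 3.6.8
    LM = (L / 4) * (1 - rb**2 / RL**2) - RL**2 * Cb   # Eq. 3.6.9 / 3.6.19
    k  = LM / math.sqrt((La - LM) * (Lb - LM))        # Eq. 3.6.20
    return L, Cb, La, Lb, LM, k

def pole_angle_deg(RL, Co, Cb, rb):
    """Angle of the upper pole of Eq. 3.6.13, in degrees."""
    b = (RL + rb) / (2 * RL**2 * Cb)
    c = 1 / (RL**2 * Co * Cb)
    s1 = -b / 2 + cmath.sqrt(b * b / 4 - c)   # complex for an MFED design
    return math.degrees(math.atan2(s1.imag, s1.real))

# Symmetrical case (rb = 0) and the degenerate case rb = RL/2:
L0, Cb0, La0, Lb0, LM0, k0 = tcoil_mfed(RL=50.0, Co=10e-12, rb=0.0)
L5, Cb5, La5, Lb5, LM5, k5 = tcoil_mfed(RL=50.0, Co=10e-12, rb=25.0)
theta = pole_angle_deg(50.0, 10e-12, Cb0, 0.0)
```

For $r_b = 0$ this reproduces the familiar symmetrical MFED values ($L_a = L_b = L/2$, $C_b = C_o/12$, $k = 0.5$), while for $r_b = R_L/2$ the mutual inductance, and with it $k$, collapses to zero, in agreement with case c) of Fig. 3.6.3.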


3.6.1 Frequency response

To calculate the frequency response we apply Eq. 3.6.11, in which we replace the coil's bridging capacitance $C_b$ by Eq. 3.6.18, since we shall discuss only the MFED response. Then we put it into the canonical form by factoring out the common factor:

$$F(s) = \frac{R_L}{\dfrac{R_L^2 C_o^2}{12}\left(1 + \dfrac{r_b}{R_L}\right)^2} \cdot \frac{1}{s^2 + \dfrac{6\,s}{R_L C_o\left(1 + \dfrac{r_b}{R_L}\right)} + \dfrac{12}{R_L^2 C_o^2\left(1 + \dfrac{r_b}{R_L}\right)^2}} \tag{3.6.21}$$
The denominator of the second fraction has two roots:

$$s_{1,2} = \frac{-3 \pm j\sqrt{3}}{R_L C_o\left(1 + r_b/R_L\right)} = \sigma_1 \pm j\,\omega_1 \tag{3.6.22}$$

Sometimes we prefer the normalized form of the roots, and in this case $R_L C_o = 1/\omega_h = 1$. To emphasize the normalization we add the subscript 'n', so $s_{1,2\mathrm{n}} = \sigma_{1\mathrm{n}} \pm j\,\omega_{1\mathrm{n}}$. By applying the normalized poles of Eq. 3.6.22 to Eq. 2.2.27, which is a generalized second-order magnitude function, we obtain:

$$\left|F(\omega)\right| = \frac{\sigma_{1\mathrm{n}}^2 + \omega_{1\mathrm{n}}^2}{\sqrt{\left[\sigma_{1\mathrm{n}}^2 + \left(\dfrac{\omega}{\omega_h} - \omega_{1\mathrm{n}}\right)^2\right]\left[\sigma_{1\mathrm{n}}^2 + \left(\dfrac{\omega}{\omega_h} + \omega_{1\mathrm{n}}\right)^2\right]}} \tag{3.6.23}$$
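The 2.72× bandwidth improvement factor quoted for Fig. 3.6.3 can be reproduced from Eq. 3.6.23 by a simple numerical search for the −3 dB point of the normalized Bessel pole pair $-3 \pm j\sqrt{3}$. A sketch (the bisection bracket is an assumption):

```python
import math

# Normalized 2nd-order Bessel (MFED) poles; with RL*Co = 1 the non-peaking
# half-power frequency is w/wh = 1, so the -3 dB point of |F| directly
# equals the bandwidth improvement factor eta_b.
s_re, s_im = -3.0, math.sqrt(3.0)

def mag(w):
    """|F(w)| per Eq. 3.6.23, normalized to unity at w = 0."""
    num = s_re**2 + s_im**2
    den = math.sqrt((s_re**2 + (w - s_im)**2) * (s_re**2 + (w + s_im)**2))
    return num / den

# Bisect for |F| = 1/sqrt(2) between w = 1 and w = 5 (assumed bracket;
# the MFED magnitude decreases monotonically, so bisection is safe).
lo, hi = 1.0, 5.0
for _ in range(60):
    mid = (lo + hi) / 2
    if mag(mid) > 1 / math.sqrt(2):
        lo = mid
    else:
        hi = mid
eta_b = (lo + hi) / 2
```

The search converges to $\eta_b \approx 2.72$, the figure quoted with Fig. 3.6.3.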

By comparing the Bessel poles for a simple T-coil (Eq. 2.4.42) with Eq. 3.6.22, we notice that in the denominator we have an additional factor $(1 + r_b/R_L)$. Therefore it is interesting to plot several frequency responses with different ratios $r_b/R_L$, as listed in Table 3.6.1:

Table 3.6.1

  r_b/R_L    σ_1n     ω_1n     Note
   0.00      -3.0     1.732    symmetrical T-coil
   0.25      -2.4     1.386    -
   0.50      -2.0     1.155    -

The corresponding frequency-response plots are drawn in Fig. 3.6.3, together with the three non-peaking responses ($L = 0$) as references. The bandwidth improvement factor for all three cases is $\eta_b = 2.72$. That is because the base resistance $r_b$ decreases the bandwidth of the non-peaking stage ($L = 0$) as well. We can prove this if we multiply the denominator of Eq. 3.6.22 by $R_L$:

$$s_{1,2} = \frac{-3 \pm j\sqrt{3}}{C_o\left(R_L + r_b\right)} = \omega_h\left(-3 \pm j\sqrt{3}\,\right) \tag{3.6.24}$$

where $\omega_h = 1/\left[C_o\left(R_L + r_b\right)\right]$ is the non-peaking bandwidth considering $r_b$. So we obtain three different curves for the three different ratios $r_b/R_L$ also for $L = 0$.


[Figure 3.6.3 appears here: log–log magnitude plots vs. $\omega/\omega_h$ for the peaking curves a)–c) and the non-peaking references d)–f) ($L = 0$); the inset table lists, for a), b), c): $r_b/R_L = 0,\ 0.25,\ 0.50$; $C_b/C_o = 0.083,\ 0.130,\ 0.1875$; $L_1/L_2 = 1,\ 0.52,\ 0.33$; $k = 0.50,\ 0.44,\ 0$; $M = 0.0265,\ 0.0166,\ 0$; with $L = R_L^2 C_o$ and $\omega_h = 1/[C_o(R_L + r_b)]$.]

Fig. 3.6.3: MFED frequency response of the T-coil transistor interstage coupling circuit for three different values of $r_b$: a) $r_b = 0$; b) $r_b = 0.25\,R_L$; c) $r_b = 0.5\,R_L$. For comparison, the three reference cases ($L = 0$): d), e), and f), which correspond to the same three $r_b/R_L$ ratios, are drawn. The bandwidth improvement factor of the peaking system remains 2.72 times over the non-peaking reference for each value of $r_b$.

From the analysis above, we can draw an important result:

The upper half-power frequency of a non-peaking transistor amplifier must be calculated by taking into account the sum $R_L + r_b$ (and not just $R_L$).

3.6.2 Phase Response

To calculate the phase response we insert our poles into Eq. 2.2.31:

$$\varphi = \arctan\frac{\omega/\omega_h - \omega_{1\mathrm{n}}}{\sigma_{1\mathrm{n}}} + \arctan\frac{\omega/\omega_h + \omega_{1\mathrm{n}}}{\sigma_{1\mathrm{n}}} \tag{3.6.25}$$

In Fig. 3.6.4 the phase plots for the same three ratios of $r_b/R_L$ as in the frequency response are shown, along with the three references ($L = 0$).

3.6.3 Envelope Delay

The envelope delay is calculated using Eq. 2.2.35:

$$\tau_e\,\omega_h = \frac{\sigma_{1\mathrm{n}}}{\sigma_{1\mathrm{n}}^2 + \left(\omega/\omega_h - \omega_{1\mathrm{n}}\right)^2} + \frac{\sigma_{1\mathrm{n}}}{\sigma_{1\mathrm{n}}^2 + \left(\omega/\omega_h + \omega_{1\mathrm{n}}\right)^2} \tag{3.6.26}$$

and the responses are drawn in Fig. 3.6.5, for the three different ratios $r_b/R_L$, in addition to the three references ($L = 0$).
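A quick numeric check of Eq. 3.6.26 is possible at $\omega = 0$, where both terms are equal and give $\tau_e\,\omega_h = 2\,\sigma_{1\mathrm{n}}/(\sigma_{1\mathrm{n}}^2 + \omega_{1\mathrm{n}}^2)$. The sketch below assumes the symmetrical case ($r_b = 0$, normalized poles $-3 \pm j\sqrt{3}$):

```python
import math

# Envelope delay of Eq. 3.6.26 for the normalized Bessel poles -3 +/- j*sqrt(3)
s_re, s_im = -3.0, math.sqrt(3.0)

def tau_e(w):
    """Normalized envelope delay tau_e*wh at the normalized frequency w = omega/omega_h."""
    return (s_re / (s_re**2 + (w - s_im)**2) +
            s_re / (s_re**2 + (w + s_im)**2))

tau0 = tau_e(0.0)   # low-frequency value: 2*s_re/(s_re**2 + s_im**2) = -0.5
```

The low-frequency delay is $-0.5/\omega_h$, and the magnitude of the delay decays towards zero well above $\omega_h$, consistent with the curves of Fig. 3.6.5.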


[Figure 3.6.4 appears here: phase $\varphi$ in degrees (0 to −180°) vs. $\omega/\omega_h$ for curves a)–c) and the references d)–f) ($L = 0$), with the same parameter table as in Fig. 3.6.3; $L = R_L^2 C_o$, $\omega_h = 1/[C_o(R_L + r_b)]$.]

Fig. 3.6.4: MFED phase response of the T-coil transistor interstage coupling circuit compared with the references ($L = 0$), for the same three values of $r_b/R_L$.

[Figure 3.6.5 appears here: normalized envelope delay $\tau_e\,\omega_h$ (0 to −1.5) vs. $\omega/\omega_h$ for curves a)–c) and the references d)–f) ($L = 0$), with the same parameter table as in Fig. 3.6.3; $L = R_L^2 C_o$, $\omega_h = 1/[C_o(R_L + r_b)]$.]

Fig. 3.6.5: MFED envelope delay response of the T-coil transistor interstage coupling circuit compared with the references ($L = 0$) for the same three values of $r_b/R_L$.


3.6.4 Step Response

For plotting the step response we can use Eq. 2.4.47 (which was fully derived in Part 1, Eq. 1.14.18):

$$g(t) = 1 + \frac{1}{\left|\sin\theta\right|}\;e^{\sigma_1 t}\,\sin\left(\omega_1 t + \theta + \pi\right) \tag{3.6.27}$$

where $\theta$ is the pole angle according to Fig. 3.6.2. By inserting the pole angle $\theta = 150°$ or $5\pi/6$, as required by the 2nd-order Bessel system, we obtain:

$$g(t) = 1 + 2\,e^{-3\,t/T}\,\sin\left[\sqrt{3}\;t/T + \left(1 + \frac{5}{6}\right)\pi\right] \tag{3.6.28}$$

However, here we must use $T = 1/\omega_h = C_o\left(R_L + r_b\right)$, as in Eq. 3.6.22 and 3.6.24. In Fig. 3.6.6 the step-response plots are drawn for the three different ratios $r_b/R_L$, as well as the three reference cases with $L = 0$.

[Figure 3.6.6 appears here: step responses $v_o/(i_i R_L)$ vs. $t/T$ for curves a)–c) and the references d)–f) ($L = 0$), with the same parameter table as in Fig. 3.6.3; $L = R_L^2 C_o$, $T = C_o(R_L + r_b)$.]

Fig. 3.6.6: MFED step response of the T-coil transistor interstage coupling circuit, compared to the references ($L = 0$), for the same three values of $r_b/R_L$.

Thus we have completed the analysis of the basic case of a transistor T-coil interstage coupling. The reader who would like to have more information should study [Ref. 3.5]. In order to simplify the analysis, we have purposely neglected the transistor input resistance $(1 + \beta)\,R_e$ and also the stray inductance $L_s$ from the tap to the transistor base terminal. In the next steps we shall discuss both of them.


3.6.5 Consideration of the transistor input resistance

Fig. 3.6.7 shows the basic configuration of the transistor input circuit. We have also drawn the base-lead inductance $L_s$, which will be discussed in the next section.

[Figure 3.6.7 appears here: a) the input stage $Q_1$ with emitter network $R_{e1}$, $C_{e1}$; b) the equivalent input circuit with $C_{\mu 1}$, $L_s$, $r_{b1}$, $C_i = C_{e1}$ and the shunt resistance $(1 + \beta)R_{e1}$.]

Fig. 3.6.7: The complete transistor input impedance: a) schematic; b) equivalent circuit, in which we also include the base lead inductance $L_s$. The presence of the shunt resistance $(1 + \beta)\,R_{e1}$ requires a modified interstage T-coil circuit.

The resistance from the base–emitter junction to ground is:

$$R_i = \left(1 + \beta\right)R_e \tag{3.6.29}$$

as we have derived in Eq. 3.2.15. The effect of this resistance may be canceled if we insert an appropriate resistor $R_s$ from the end of coil $L_1$ to the start of coil $L_2$, as shown in Fig. 3.6.8. It is essential that the resistor $R_s$ is inserted on the 'left' side of $L_2$ (at the T-coil tap node), because in this case the bridging capacitance $C_b$ (the self-capacitance of the coil) and the magnetic field coupling ($k$) are utilized. With the resistor placed on the 'right' side of $L_2$ (at the $R_L C_b$ node), that would not be the case.

[Figure 3.6.8 appears here: the T-coil ($L_1 < L_2$, bridging $C_b$, coupling $k$) with $R_s$ inserted between $L_1$ and $L_2$ at the tap; $R_s$ compensates for $(1 + \beta)R_e$, so that $Z_i = R_L$ when $[r_b + (1 + \beta)R_e] \parallel (R_s + R_L) = R_L$.]

Fig. 3.6.8: The resistance $R_s$ in series with $L_2$ is inserted near the T-coil tap to compensate the error in the impedance seen by the input current at low frequencies, owed to the parallel connection of $R_L \parallel \left[r_b + (1 + \beta)R_e\right]$.

At very high frequencies we can replace all capacitors by short circuits and all inductors by open circuits. In this case the input resistance of the T-coil circuit is $R_L$. But


at very low frequencies the capacitors represent an open circuit and the inductors a short circuit. The transistor input resistance is then effectively in parallel with $R_L$. It is the task of the series resistor $R_s$ to prevent this reduction of resistance. The idea is that:

$$\left(r_b + R_i\right) \parallel \left(R_s + R_L\right) = R_L \tag{3.6.30}$$

If we solve this for $R_s$ we obtain:

$$R_s = \frac{R_L^2}{R_i + r_b - R_L} \tag{3.6.31}$$
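A numeric sketch of Eq. 3.6.29–3.6.31 (the transistor and circuit values below are illustrative assumptions, not from the text) shows the compensation restoring the low-frequency input resistance exactly to $R_L$:

```python
# Compensation resistor for the transistor input resistance, Eq. 3.6.29-3.6.31.
# Illustrative values only.
beta = 100.0
Re   = 5.0        # emitter resistor [ohm]
rb   = 20.0       # base spreading resistance [ohm]
RL   = 100.0      # T-coil load [ohm]

Ri = (1 + beta) * Re              # Eq. 3.6.29 -> 505 ohm
Rs = RL**2 / (Ri + rb - RL)       # Eq. 3.6.31

# Low-frequency input resistance with the compensation in place (Eq. 3.6.30):
Zi = (rb + Ri) * (Rs + RL) / ((rb + Ri) + (Rs + RL))
```

With these values $R_s$ comes out to about 23.5 Ω, and the parallel combination lands back on 100 Ω, as Eq. 3.6.30 demands.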

The introduction of this resistor spoils all the expressions from our previous analysis and, to be exact, everything we derived to determine the basic T-coil parameters should be recalculated with the additional parameter $R_s$. Since in practice the value of this resistor is very small, no substantial changes in the other circuit parameters are to be expected, and the additional effort required by an exact analysis would not be worthwhile.

A sneaky method for implementing this compensation, while at the same time decreasing the stray capacitance (and also creating difficulties for any competitor trying to copy the circuit), is to wind the coil $L_2$ from a suitable resistive wire.

3.6.6 Consideration of the base lead stray inductance

Fig. 3.6.9a shows the T-coil with the base lead stray inductance $L_s$ at the tap. From Fig. 3.6.9b we realize that the positive inductance of the base lead $L_s$ actually decreases the negative mutual inductance $-L_M$ of the T-coil. To retain the same conditions as in Fig. 3.6.2c at the beginning of the basic T-coil analysis, the coupling factor must be increased, thus increasing the mutual inductance to $L_M + L_s$.

[Figure 3.6.9 appears here: a) the T-coil with $L_s$ between the tap and $r_b$, $C_o$; b) the equivalent circuit with $k = 0$, branches $L_1$, $L_2$, and $-L_M$ in series with $L_s$.]

Fig. 3.6.9: a) The base lead inductance $L_s$ decreases the value of the mutual inductance, as indicated by the equivalent circuit in b). This can be compensated by recalculating the circuit with an increased coupling factor.

We shall mark the new T-coil circuit parameters with a prime (′) to distinguish them from the original parameters of the transistor interstage T-coil:

$$L_M' = L_M + L_s \tag{3.6.32}$$


Because the inductance from <b to either end of the coil P is now increased by
Ps , both inductances Pa and Pb must be decreased by the value of Ps :

Paw œ Pa  Ps and Pwb œ Pb  Ps (3.6.33)

By considering all these changes, the new (larger) value of the coupling factor is

PwM
5w œ (3.6.34)
ÈÐPa  PM  # Ps Ñ ÐPb  PM  # Ps Ñ

Since it is sometimes difficult to achieve the required coupling factor we must


take care that the base lead stray inductance Ps is as small as possible.
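A small numeric illustration of Eq. 3.6.32–3.6.34 (the inductance values are arbitrary assumptions, in nH) shows that even a modest base lead inductance raises the required coupling factor noticeably:

```python
import math

# Effect of the base lead stray inductance Ls on the required coupling
# factor, Eq. 3.6.32-3.6.34. Arbitrary illustration values in nH.
La, Lb = 50.0, 50.0        # symmetrical T-coil halves
LM     = 100.0 / 6.0       # mutual inductance of the symmetrical MFED case
Ls     = 2.0               # base lead stray inductance

k    = LM / math.sqrt((La - LM) * (Lb - LM))                       # Eq. 3.6.20
LM_p = LM + Ls                                                     # Eq. 3.6.32
k_p  = LM_p / math.sqrt((La - LM - 2 * Ls) * (Lb - LM - 2 * Ls))   # Eq. 3.6.34
```

Here the symmetrical case starts at $k = 0.5$, and 2 nH of lead inductance already pushes the required coupling to about 0.64, which may be hard to wind.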

3.6.7 Consideration of the collector to base spread capacitance

So far we have considered only the lumped collector to base capacitance $C_\mu$. However, in a real transistor the capacitance $C_\mu$ is spread along the base resistance $r_b$, as drawn in Fig. 3.6.10a [Ref. 3.4]. For the analysis of such a circuit we should know the actual geometry involved; unless we are designing the transistor ourselves, this would be difficult to find out. So we will, rather, approximate this by splitting $C_\mu$ into three parts, $C_{\mu 1}$, $C_{\mu 2}$, and $C_{\mu 3}$, as suggested in Fig. 3.6.10b, adding also a constant value $C_s$ to account for the external leads and PCB strays.

[Figure 3.6.10 appears here: a) $Q_1$ with $C_\mu$ distributed along $r_b$; b) the approximation with $C_{\mu 1}$, $C_{\mu 2}$, $C_{\mu 3}$ distributed along $r_{b1}$, $r_{b2}$, plus the external stray capacitance $C_s$.]

Fig. 3.6.10: a) The base–collector reverse capacitance $C_\mu$ is actually spread across the base resistance $r_b$. b) A good approximation is achieved by splitting $r_b$ in half and $C_\mu$ in three parts, adding also the external stray capacitance $C_s$.

Those readers who are interested in the results and further suggestions should study [Ref. 3.20].

3.6.8 The 'Folded' Cascode

While we are still speaking about cascode amplifiers, let us examine the 'folded' cascode circuit, Fig. 3.6.11. This circuit is a handy solution in cases of a limited supply voltage, a situation commonly encountered in modern integrated circuits and battery supplied equipment.

The first thing to note is that the collector DC currents can be different, since the bias conditions for $Q_1$ are set by the input base voltage and $R_{e1}$, while for $Q_2$ the bias is set by $V_{cc} - V_{b2}$ and $R_{cc}$.

Another interesting point is that Vcc (or a current source in its place) must
supply the current for both transistors. Therefore when a signal is applied at the input
the currents in U" and U# will be in anti-phase, i.e., when 3o" increases, 3i# decreases
and vice-versa. It is thus easier to achieve good thermal balance with such a circuit, than
with the original cascode.

V cc

R cc
Icc
io1 i i2

i i1
Q1 Q2 V b2

i
R e1 Ce1 RL o

Fig. 3.6.11: The ‘folded’ cascode is formed by a complementary, npn and pnp, transistor
pair, connected in the otherwise usual cascode configuration. Since thermionic devices are
not produced in complementary pairs, this circuit can not be realized with electronic tubes.

In an integrated circuit it is always more difficult to make fast pnp transistors, because of the lower mobility of the virtual positive charge (a vacant charge region, left by an accelerated electron, can exist for a considerable time before another slow electron comes near enough to be trapped by it). It could then be advantageous to flip the circuit about the horizontal axis and use a pnp type for $Q_1$ and an npn for $Q_2$.

In all other respects the circuit presents problems identical to those of the original cascode, so all the solutions already discussed apply here as well.


3.7 Differential Amplifiers

In Sec. 3.4.2 we explained that the instability of the transistor's DC bias depends on the ambient temperature and on the heat generated internally as a consequence of its power dissipation. The current amplification factor $\beta_0$ also depends on temperature. These effects multiply in a multi-stage DC amplifier. They can be greatly reduced by using a symmetrical differential amplifier.

The basic differential amplifier is shown in Fig. 3.7.1. The input voltage of transistor $Q_1$ is $v_{i1}$ and that of $Q_2$ is $v_{i2} = -v_{i1}$ (we are assuming these symmetrical driving voltages in order to eliminate any common mode voltages, thus simplifying the initial analysis). The emitters of both transistors are connected together and fed via the resistor $R_{ee}$ from the voltage $V_{ee} < 0$. If we assume the circuit to be entirely symmetrical, e.g., $Q_1 = Q_2$ and $R_{L1} = R_{L2}$, and if both input voltages $v_{i1}$ and $v_{i2}$ are zero, the DC output voltages are also equal, $V_{o1} = V_{o2}$, independently of the ambient temperature.

[Figure 3.7.1 appears here: the symmetrical pair $Q_1$, $Q_2$ with collector loads $R_{L1}$, $R_{L2}$ to $V_{cc}$, inputs $v_{i1}$, $v_{i2}$, emitter currents $i_{e1}$, $i_{e2}$ ($I_{ee}/2$ each), and the tail resistance ($2R_{ee}$ in each half about the symmetry line) to $V_{ee}$.]

Fig. 3.7.1: The differential amplifier. We simplify the initial analysis by assuming $v_{i2} = -v_{i1}$, $R_{L1} = R_{L2}$ and $Q_1 = Q_2$ (all parameters).

The name differential amplifier suggests that we are interested in the amplification of voltage differences. In general, if one signal input voltage, say $v_{i1}$, goes positive, the other input voltage $v_{i2}$ goes negative by the same amount. (We have accounted for this by drawing the polarity of the voltage generator $v_{i2}$ in Fig. 3.7.1 opposite to $v_{i1}$.) This means an increase of the emitter current in transistor $Q_1$ is accompanied by an equal decrease of the emitter current in transistor $Q_2$. So the current through the resistor $R_{ee}$ and the voltage at the emitter node do not change. Therefore we can consider the emitter node as a virtual ground. The difference of the input voltages is:

$$v_{i1} - \left(-v_{i2}\right) = v_{i1} + v_{i2} \tag{3.7.1}$$

Just like the input voltages, the signal output voltages $v_{o1}$ and $v_{o2}$ go up and down by an equal amount; however, we must account for the signal inversion in

the common emitter amplifier. If the voltage amplification of the input voltage
difference is Evd (which we can take directly from Eq. 3.1.14, where we discussed a
simple common-base amplifier), the output signal voltage difference is:

@o#  Ð  @o" Ñ œ @o"  @o# œ  Evd Ð@i"  @i# Ñ (3.7.2)

An attentive reader will note that we have added the subscript ‘d’ to denote the
differential mode gain.
In the case of both input voltages being equal and of the same polarity, both
output voltages will also be equal and of same polarity; however, the output signal’s
polarity is the inverse of the input signal’s polarity (owing to the 180° phase inversion
of each common emitter amplifier stage). If the symmetry of the circuit were perfect,
the output voltage difference would be zero, provided that the common mode excitation
at the input remains well within the linear range of the amplifier. Such operation is
named common-mode amplification, Evc (here we have added the subscript ‘c’).
For the common mode signal the excursion of both output voltages with respect
to their DC value is:

@i "  @ i # VL"
@o" œ @o# œ  Evc ¸  Ð@i"  @i# Ñ (3.7.3)
# # Vee

A good way to visualize the common mode operation is to 'fold' the circuit across the symmetry line and consider it as a 'single ended' amplifier with both transistors and both loading resistors in parallel.

Since we are more interested in the differential mode amplification, we have pulled the expression for the common mode amplification, so to say, 'out of the hat'. The analysis so far is more intuitive than exact. The reason we did not bother with a corresponding derivation of exact formulae is that the simple circuit shown in Fig. 3.7.1 is almost never used as a wideband amplifier, owing to its large input capacitance. A basic differential amplifier in cascode configuration, with a constant current generator instead of the resistor $R_{ee}$, is drawn in Fig. 3.7.2. The reader who wants to study the full analysis of the basic low-frequency differential amplifier according to Fig. 3.7.1 should look in [Ref. 3.7].
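To get a feel for Eq. 3.7.3, the sketch below compares the common mode gain $A_{vc} \approx R_{L1}/(2R_{ee})$ obtained with a plain tail resistor against one using the much higher incremental resistance of a current sink (all values are illustrative assumptions, not from the text):

```python
# Common mode gain A_vc ~ RL1/(2*Ree) per Eq. 3.7.3, for a plain tail
# resistor and for a current sink with a high incremental resistance.
# Illustrative values only.
RL1     = 500.0        # collector load [ohm]
Ree     = 500.0        # plain tail resistor [ohm]
ro_sink = 5000.0       # incremental resistance of a current sink [ohm]

Avc_resistor = RL1 / (2 * Ree)       # common mode gain with the resistor
Avc_sink     = RL1 / (2 * ro_sink)   # common mode gain with the sink
improvement  = Avc_resistor / Avc_sink
```

Replacing the resistor by a sink with ten times the incremental resistance cuts the common mode gain (and thus raises the CMRR) by the same factor, which is the motivation for Sec. 3.7.2.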

3.7.1 Differential cascode amplifier

Analyzing the circuit in Fig. 3.7.2, a differential cascode amplifier, with the same rigor, considering all the complex impedances, would quickly run out of control owing to its complexity (remember Table 3.2.1, which should be applied to each transistor). However, if we take the emitter node to be a virtual ground, each half of the differential amplifier can be analyzed separately; actually, owing to the symmetry, only one analysis is needed, which we have already done in Sec. 3.4. So here we can focus on other problems which are peculiar to differential amplifiers only.

First of all, no differential amplifier is perfectly symmetrical, even if all of its transistors are on the same chip. The lack of perfect symmetry causes the common mode input signal $(v_{i1} + v_{i2})/2$ to appear partly as a differential output signal. For the

same reason there is still some temperature drift, although greatly reduced in
comparison with the single ended amplifier. The appearance of common mode signals
at the output is especially annoying in electrobiological amplifiers (electrocardiographs
and electroencephalographs). In these amplifiers, very small input signal differences (of
the order of several .V) must be amplified in the presence of large (up to 1V) common
mode signals from power lines, owing to capacitive pickup. The level of ability of the
differential amplifier to reject the common mode signal is called common mode
rejection ratio, CMRR œ Evc ÎEvd , generally expressed in decibels (dB). Since this is
out of scope of this book, we will not pursue these effects in detail.

[Figure 3.7.2 appears here: the differential pair $Q_1$, $Q_2$ (inputs $v_{i1}$, $v_{i2}$, emitter resistors $R_{e1}$, $R_{e2}$ with $C_{ee}$, common current source $I_{ee}$ to $V_{ee}$) with the cascode transistors $Q_3$, $Q_4$ (bases at $V_{bb}$, collector loads $R_{L1}$, $R_{L2}$ to $V_{cc}$, outputs $v_{o1}$, $v_{o2}$).]

Fig. 3.7.2: The basic circuit of the differential cascode amplifier.

The optimum thermal stability of the differential cascode circuit could again be obtained by adjusting the quiescent currents in both halves of the differential amplifier to values such that the voltage drop on each loading resistor equals the voltage $(V_{cc} - V_{bb} + V_{be})/2$ (see Eq. 3.4.29 and the corresponding explanation). However, as was said for the simple cascode amplifier, the requirements for a large bandwidth will prevent this from being realized: we want a low $R_L$, high $V_{cc}$ and $V_{bb}$, and a high $I_{ee}$ to maximize the bandwidth. So the thermal stability will have to be established in a different way.

Differential amplifiers are particularly suitable for the compensation of many otherwise unsolvable errors. This is achieved by cross-coupling and adding anti-phase signals, so that the errors cancel out. For example, the pre-shoot of the simple cascode amplifier, which is owed to capacitive feedthrough, can be effectively eliminated if two capacitors with the same value as $C_{\mu 1,2}$ are connected from the $Q_1$ emitter to the $Q_2$ collector, and vice versa.

Similarly, by cross-coupling diodes or transistors we can achieve non-linearity cancellation, leakage current compensation, better gain stability, DC stability, etc. In integrated circuits even production process variations can be compensated in this way. Some such examples are given in Part 5.


3.7.2 Current source in the emitter circuit

To improve the common mode rejection, instead of using a large resistor $R_{ee}$ and a correspondingly high voltage $V_{ee}$ (resulting in an unnecessarily high power dissipation, $P \approx V_{ee}^2/R_{ee}$), the common practice is to use a high incremental resistance. This is usually achieved by a special connection of active devices in a configuration called a current source. Such generators can have a high incremental resistance even when working with a relatively low $V_{ee}$.

The ideal current generator should be as independent as possible of the applied voltage and of the ambient temperature. A simple current sink, composed of two equal transistors $Q_5$ and $Q_6$, is shown in Fig. 3.7.3a. (We have given them the indices '5' and '6' in order to avoid any confusion with the former figures.) The circuit is named the current mirror [Ref. 3.31], because the collector current $I_{c6}$ is in some respect a 'mirror image' of $I_R$, as shown in Fig. 3.7.3b, where the current symmetry analysis is performed by normalizing the currents to those of the base. Current mirrors are in widespread use in integrated circuits, in which complex multi-stage biasing is controlled from a single point.
[Figure 3.7.3 appears here: a) the basic mirror, diode-connected $Q_5$ and $Q_6$, with $R$ from $V_{cc}$ setting $I_R$ and the currents $I_{c5}$, $I_{b5}$, $I_{b6}$, $I_{c6}$; b) the same circuit with the currents normalized to the base current: 1 in each base, $\beta$ in each collector, $\beta + 2$ through $R$, and the ratios $\beta/(\beta + 2)$, $(\beta + 1)/(\beta + 2)$ marked; c) the Wilson mirror with the added transistor $Q_7$.]

Fig. 3.7.3: a) The basic current mirror. b) Current symmetry analysis with the currents normalized to those of the base. c) The symmetry is improved in the Wilson mirror.

The current $I_R$ is:

$$I_R = I_{c5} + I_{b5} + I_{b6} = I_{c5} + \frac{I_{c5}}{\beta_5} + \frac{I_{c6}}{\beta_6} \tag{3.7.4}$$

If both transistors are identical, then $\beta_5 = \beta_6 = \beta$ and $I_{c5} = I_{c6} = \beta\,I_b$. In this case:

$$I_R = I_{c5}\left(1 + \frac{2}{\beta}\right) \tag{3.7.5}$$

and the collector current is:

$$I_{c5} = \frac{I_R}{1 + \dfrac{2}{\beta}} \tag{3.7.6}$$

If $\beta$ is very large then:

$$I_{c6} \approx I_R = \frac{V_{cc} - V_{be}}{R} \tag{3.7.7}$$


In general the collector current of a transistor is [Ref. 3.4]:


Zbe Zce
Mc œ Ms e ZX
Œ"   (3.7.8)
ZA  Zce
where:
Ms œ the collector saturation current (approx. "!"# to "!"% A);
ZA œ Early voltage (usually between "!! and "&! V, see Fig. 3.4.11);
ZX œ 5B X Î; (as defined at the beginning of Sec. 3.1).

Eq. 3.7.8 can be written simply from the geometric relations taken from
Fig. 3.4.11. For a common silicon transistor the Early voltage is at least "!! V. Suppose
both silicon transistors in Fig. 3.7.3a are identical and subject to the same temperature
variations (on the same chip). The collector–emitter voltage of transistor U& is the same
as the base–emitter voltage, Zce& œ Zbe& ¸ !Þ'& V. In contrast, the collector–emitter
voltage of transistor U' is higher, say, Zce' œ "& V. The ratio of both collector currents
is then:
Zce' "&
" "
Mc' ZA  Zce' "!!  "& œ "Þ#$(
œ œ (3.7.9)
Mc& Zce& !Þ'
" "
ZA  Zce& "!!  !Þ'
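Both the base-current error of Eq. 3.7.6 and the Early-voltage mismatch of Eq. 3.7.9 are easy to quantify; a sketch with the example values used above:

```python
# Current mirror errors: finite beta (Eq. 3.7.6) and Early effect (Eq. 3.7.9).
beta = 100.0
Ic5_over_IR = 1.0 / (1.0 + 2.0 / beta)      # Eq. 3.7.6 -> about 2 % low

VA         = 100.0                           # Early voltage [V]
Vce5, Vce6 = 0.6, 15.0                       # [V], the values of Eq. 3.7.9
ratio = (1 + Vce6 / (VA + Vce6)) / (1 + Vce5 / (VA + Vce5))   # Ic6/Ic5
```

With $\beta = 100$ the mirrored current is about 2 % below $I_R$, and the unequal collector–emitter voltages add a further 12 % systematic error, which motivates the improved mirror circuits mentioned below.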

By using more sophisticated circuits it is possible to make either $I_{c5} = I_{c6}$ or $I_R = I_{c6}$, or even to set any desired ratio between any of them [Ref. 3.31, 3.32, 3.33]. This is important in order to decrease the power dissipation in the resistor $R$. The power dissipated by $Q_6$ will then be the product of the desired current and the collector–emitter voltage set by the desired common mode range of the differential amplifier.

Since we are interested in the wideband aspects of differential amplifiers, we shall not discuss further particularities here. However, current mirrors can also be used to convert the output signal of the differential amplifier into a single ended push-pull drive, as is often done in modern operational amplifiers, and we shall return to this subject later in Part 5, with a discussion of the Wilson mirror [Ref. 3.32], Fig. 3.7.3c.

Before closing the analysis of current sinks, let us calculate the incremental collector resistance $r_o$ of transistor $Q_6$, which can be derived simply from Fig. 3.4.11:

$$r_o = \frac{\partial V_{ce6}}{\partial I_{c6}} = \frac{V_A + V_{ce6}}{I_{c6}} \tag{3.7.10}$$
Returning to our differential amplifier example of Fig. 3.7.2 with $V_{ee} = 15\ \mathrm{V}$, suppose we require a differential amplifier current $I_{ee} = I_{e1} + I_{e2} = I_{c6} = 0.03\ \mathrm{A}$. By assuming the Early voltage $V_A = 135\ \mathrm{V}$ and $V_{ce6} \approx V_{ee}$, the incremental collector resistance is:

$$r_o = \frac{135 + 15}{0.03} = 5\ \mathrm{k\Omega} \tag{3.7.11}$$

If we were to replace the current generator by a simple resistor $R_{ee} = r_o$, the voltage $V_{ee}$ in Fig. 3.7.2 would have to be:

$$\left(I_{e1} + I_{e2}\right)R_{ee} = 0.03\ \mathrm{A} \cdot 5000\ \mathrm{\Omega} = 150\ \mathrm{V} \tag{3.7.12}$$

which is 10 times more. Correspondingly, the power dissipation in the resistor $R_{ee}$ would also be 10 times greater, or 4.5 W, compared to 0.45 W for $Q_6$.

A high incremental resistance $r_o$ is also important for achieving a high CMRR, because it gives the differential amplifier a higher immunity to power supply voltage variations (which are also a common mode signal).
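The comparison can be condensed into a few lines (using the example values from Eq. 3.7.11 and 3.7.12):

```python
# Incremental resistance of the current sink and the penalty of an
# equivalent plain resistor, Eq. 3.7.10-3.7.12.
VA, Vce6, Ic6 = 135.0, 15.0, 0.03     # Early voltage [V], Vce [V], current [A]

ro         = (VA + Vce6) / Ic6        # Eq. 3.7.10/3.7.11 -> 5 kohm
Vee_equiv  = Ic6 * ro                 # Eq. 3.7.12 -> 150 V for a plain resistor
P_resistor = Ic6**2 * ro              # dissipation of the equivalent resistor
P_sink     = Ic6 * Vce6               # Q6 dissipation at Vce6 = Vee = 15 V
```

The sink delivers the same 5 kΩ incremental resistance from a 15 V supply, at one tenth of the dissipation of the equivalent resistor.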
A simple way of improving the current generator, thus achieving even greater CMRR factors, is shown in Fig. 3.7.4, where negative feedback, provided by the $Q_5$ gain, is used to stabilize the collector current of $Q_6$ and increase the incremental resistance, whilst a low voltage Zener diode (named after its inventor, the American physicist Clarence M. Zener, 1905–1993) reduces the $V_{be}$ thermal drift of $Q_5$, owing to an almost equal, but opposite, thermal coefficient.

In this circuit any increase of the $Q_6$ collector current $I_{c6}$ is sensed via the voltage drop on $R_2$, increasing $V_{b5}$, which in turn increases $I_{e5}$, thus reducing $I_{b6}$ and therefore also $I_{c6}$. The feedback reduction factor is nearly equal to the $Q_5$ current gain $\beta$. Effectively, the output resistance is increased from $r_o$ of Eq. 3.7.10 to about $\beta\,r_o$. Note that this circuit does not rely on identical transistor parameters, so it can be used in discrete circuits.

Vcc Iee Vcc


VZ + V be2
IR1 Iee =
R2
R1 R1 Ib6
when

Q6 Q6 Ib5 = Ib6

Ie6
Q5 Q5
VZ + 2 V be Ib5 VZ + Vbe2
IR2 =
VZ + V be R2
DZ R2 DZ R2

a) Vee b) Vee
Fig. 3.7.4: Improved current generator: a) voltage drops; b) current analysis.

The circuits shown and only briefly discussed here should give the reader a
starting point in current control design. Many more circuits, either simple or more
elaborate, can be found in the references quoted.


3.8 The $f_T$ Doubler

Let us return to the differential cascode amplifier of Fig. 3.7.2, where the basic analysis was done with the assumption that the inputs were driven by a differential voltage source. Here we shall analyze a current driven stage, which could suitably be used as an intermediate amplifier stage with T-coil peaking at the input, such as the cascode of Fig. 3.6.1. The T-coil peaking has already been analyzed in detail, so we shall concentrate on the active part of the circuit and see how it can be improved.

In Fig. 3.8.1 we have the differential cascode circuit, driven differentially by the current source $i_s$. This current source is loaded by the resistance $R_s$, which we have split in half, with the middle point grounded. The output currents from the collectors of $Q_3$ and $Q_4$ drive the differential inputs of the next amplifying stage, which does not concern us now. What we are interested in is the current gain of this differential stage. The reader who has followed the previous analysis with some attention should by now be able to understand the following relation intuitively:

$$\frac{i_o}{i_s} \approx \frac{R_s}{R_e}\cdot\frac{1}{1 + s\left(\dfrac{\tau_{T1}\,R_s}{R_e} + \dfrac{R_s\,C_\mu}{2}\right)}\cdot\frac{1}{1 + s\,\tau_{T2}} \tag{3.8.1}$$

The last fraction is the frequency dependent part owed to $Q_{3,4}$.

[Figure 3.8.1 appears here: the pair $Q_1$, $Q_2$ (time constant $\tau_{T1}$, feedback capacitances $C_{\mu 1}$, $C_{\mu 2}$) driven by $i_s$ through $R_s/2 + R_s/2$, with $R_e/2 + R_e/2$ and $C_{ee} = \tau_{T1}/R_e$ between the emitters, the tail current $I_{ee}$ to $V_{ee}$, and the cascode pair $Q_3$, $Q_4$ (time constant $\tau_{T2}$, bases at $V_{bb}$) delivering $i_o$.]

Fig. 3.8.1: The current driven differential cascode amplifier. We assume that the transistor pair $Q_{1,2}$ and the pair $Q_{3,4}$, respectively, have identical main parameters. The emitters of $Q_{1,2}$ see the $R_e C_{ee}$ network, set to equal the time constant $\tau_{T1}$ of transistors $Q_{1,2}$.

The low frequency current gain is simply:

    A_i = \frac{R_s}{R_e}                                                 (3.8.2)

We can substitute this into Eq. 3.8.1 to highlight the bandwidth dependence on gain:

    \frac{i_o}{i_s} \approx A_i\cdot
    \frac{1}{1 + s\,A_i\left(\tau_{T1} + \dfrac{R_e C_\mu}{2}\right)}\cdot
    \frac{1}{1 + s\,\tau_{T2}}                                            (3.8.3)


This means that by reducing the gain A_i we can extend the bandwidth. On this basis
there arises the idea that we can add another differential stage to double the current
(and therefore also the current gain) and then optimize the stage, choosing between
the doubling of gain with the same bandwidth and the doubling of bandwidth with the
same gain, or any factor in between.
    The basic f_T doubler circuit, developed by C.R. Battjes [Ref. 3.1], is presented
in Fig. 3.8.2. Each differential pair amplifies the voltage drop on its own R_s/2 and
each pair sees its own R_e between the emitters, thus the current gain is simply:

    \frac{i_o}{i_s} \approx \frac{R_s}{R_e}\cdot
    \frac{1}{1 + s\left(\dfrac{R_s}{2}\,\dfrac{\tau_{T1}}{R_e}
             + \dfrac{R_s}{2}\,C_\mu\right)}\cdot
    \frac{1}{1 + s\,\tau_{T2}}

                 \approx A_i\cdot
    \frac{1}{1 + s\,A_i\left(\dfrac{\tau_{T1}}{2} + \dfrac{R_e C_\mu}{2}\right)}\cdot
    \frac{1}{1 + s\,\tau_{T2}}                                            (3.8.4)

[Schematic of Fig. 3.8.2 appears here: two parallel differential pairs Q1a,2a and
Q1b,2b, each with its own R_e C_ee emitter network (C_ee = τ_T1/R_e) and tail current
(I_ee1, I_ee2), their collector currents summed, cross-coupled, into the cascode pair
Q3, Q4.]

Fig. 3.8.2: The basic f_T doubler circuit. We assume equal transistors for Q1a,2a and
Q1b,2b. The low input impedance of the Q3,4 emitters allows summing the collector
current of each pair, but cross-coupled for in phase signal summing.

Another advantage of this circuit is the reduced input capacitance:

    C_i = \frac{2\,\tau_{T1}}{R_e} + C_\mu   for the circuit in Fig. 3.8.1      (3.8.5)

    C_i = \frac{\tau_{T1}}{R_e} + C_\mu      for the circuit in Fig. 3.8.2      (3.8.6)

This will ease the application of T-coil peaking at the input.


But there are also limitations. As can be deduced from Eq. 3.8.4, the 'doubler' term
in the circuit name is misleading, because the term with τ_T2 is not influenced by the
reduced gain. Therefore, the amount of bandwidth improvement depends on which time
constant is larger. The part with R_e C_μ can be reduced by selecting transistors with
low C_μ, but we would do that anyway. And we do not want to reduce R_e, because that
would increase the gain (for the same R_s).


Another problem is that, although the transfer function is of second order, there are
two real poles, so we can not 'tune' the system for an efficient peaking. By forcing
the system to have complex conjugate poles with emitter peaking, we would increase the
emitter capacitance C_ee, which would be reflected into the base as an increased input
capacitance; this would increase exactly that term which we have just halved.
    A quick estimate will give us a little more feeling of the improvement achievable.
Let us have a number of transistors, with f_T = 3.5 GHz, C_μ = 1 pF, R_s = 2 × 50 Ω,
Q3,4 collector load R_L = 2 × 50 Ω, C_L = 1 pF and the total current gain A_i = 3.
Assuming that the system's response is governed by a dominant pole, we can calculate
the rise time of the conventional system as:

    t_r = 2.2\sqrt{A_i^2\left(\frac{1}{2\pi f_T} + \frac{R_e C_\mu}{2}\right)^2
          + \left(\frac{1}{2\pi f_T}\right)^2
          + \left[R_L\,(C_L + C_\mu)\right]^2}                            (3.8.7)

Then, for the ordinary differential cascode in Fig. 3.8.1:

    t_{r1} = 476 ps                                                       (3.8.8)

whilst for the f_T doubler (with τ_T1 replaced by τ_T1/2 in the first term) we have:

    t_{r2} = 355 ps                                                       (3.8.9)

and the improvement factor is 1.34, much less than 2. Transistors with lower f_T might
give an apparently greater improvement (about 1.7 could be expected) owing to the
lower contribution of the source's impedance. However, it seems that a better idea
would be to remain with the original bandwidth and use the gain doubling instead,
which could lead to a system with a lower number of stages, which in turn could be
optimized more easily.
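The two rise times above are easy to reproduce from Eq. 3.8.7. The following sketch
assumes, as the example implies, a per-side load of R_L = 50 Ω with C_L + C_μ = 2 pF
and τ_T2 = τ_T1:

```python
from math import pi, sqrt

# Numerical check of Eq. 3.8.7 with the example values from the text:
# f_T = 3.5 GHz, C_mu = 1 pF, R_s = 100 ohm total, R_L = 50 ohm per side,
# C_L = 1 pF, A_i = 3; R_e follows from A_i = R_s/R_e.
f_T, C_mu, R_s, R_L, C_L, A_i = 3.5e9, 1e-12, 100.0, 50.0, 1e-12, 3.0
R_e = R_s / A_i
tau_T1 = 1 / (2 * pi * f_T)          # dominant time constant of Q1,2

def t_r(tau1):
    """Rise time per Eq. 3.8.7; pass tau_T1, or tau_T1/2 for the doubler."""
    return 2.2 * sqrt((A_i * (tau1 + R_e * C_mu / 2))**2
                      + tau_T1**2
                      + (R_L * (C_L + C_mu))**2)

t_r1 = t_r(tau_T1)         # conventional cascode
t_r2 = t_r(tau_T1 / 2)     # f_T doubler
print(round(t_r1 * 1e12), round(t_r2 * 1e12), round(t_r1 / t_r2, 2))
# prints: 476 355 1.34
```

which confirms the 476 ps, 355 ps figures and the 1.34 improvement factor.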
On the other hand, the reduced input capacitance is really beneficial to the
loading of the input T-coils. With the data from the example above, we can calculate
the T-coils for the conventional and the doubler system and the resulting bandwidths.
From Eq. 3.6.21 we can find that:

    \omega_H = \sqrt{\frac{12}{(R_L C_i)^2}\left(1 + \frac{r_b}{R_L}\right)}      (3.8.10)

By assuming an r_b = 15 Ω and C_i of 6.5 and 3.7 pF, respectively (Eq. 3.8.5 and
3.8.6), we can calculate an f_H of 1.9 and 3.4 GHz, a ratio of nearly 1.8, which is
worth considering.
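The same numbers follow directly from Eq. 3.8.10; a minimal numerical check with the
values quoted in the text:

```python
from math import pi, sqrt

# Eq. 3.8.10 with r_b = 15 ohm, R_L = 50 ohm and the two input capacitances
# from Eq. 3.8.5/3.8.6 (6.5 pF conventional, 3.7 pF doubler).
r_b, R_L = 15.0, 50.0

def f_H(C_i):
    w_H = sqrt(12 / (R_L * C_i)**2 * (1 + r_b / R_L))
    return w_H / (2 * pi)

f1 = f_H(6.5e-12)    # conventional cascode loading
f2 = f_H(3.7e-12)    # f_T doubler loading
print(round(f1 / 1e9, 1), round(f2 / 1e9, 1))   # prints: 1.9 3.4
```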
In principle one could use the same doubler implementation with 4, 6, or more
transistor pairs; however, the input capacitance poses a practical limit. A system with 4
pairs is already slower than the system with two pairs.


3.9 JFET Source Follower

Wideband signals come usually from two source types: low impedance sources
are usually those from the output of other wideband amplifiers, while medium and high
impedance sources are usually those from sensors or other very low power sources.
In the first case, we employ standardized impedances, 50 Ω or 75 Ω, so that both
the source and the load have the same impedance as the characteristic impedance of the
cable that connects them. In this way we preserve the bandwidth and prevent
reflections which would distort the signal, but we pay for this by the 50 % (6 dB)
attenuation of the signal's amplitude.
In the second case we also want a standardized value of the impedance, but this
time the value is 1 MΩ, in parallel with some (inevitable) capacitance, usually 20 pF
(but values from 10 to 25 pF can also be found). The standardized capacitance is
helpful not only in determining the loading of the source at high frequencies, but also
to allow the use of 'probes', which are actually special HF attenuators (÷10 or ÷100),
so that the source can be loaded by a 10 MΩ or even a 100 MΩ resistance, while
keeping the loading capacitance below some 12 pF.
With the improvement of semiconductor production processes, the so called
'active' probes have been developed, used mostly for extremely wideband signals, such
as those found in modern communications and digital computers. Active probes usually
have a 10 kΩ ∥ 2 pF input impedance, with no reduction in amplitude.
The key component of both high input impedance amplifiers and active probes
is the JFET (junction field effect transistor) source follower [Ref. 3.16].
The basic JFET source follower circuit configuration is shown in Fig. 3.9.1. In
contrast to the BJT emitter follower (with an input resistance of about β R_e), the
JFET source follower has a very high input resistance (between 10^9 and 10^12 Ω), owed
to the specific construction of the JFET. Its gate (a p–n junction with the
drain–source channel) is reverse biased in normal operation, modulating the channel
width by the electrical field only, so the input current is mainly owed to the reverse
biased p–n junction leakage and the input capacitances, C_gd and C_gs.
[Schematic of Fig. 3.9.1 appears here: Q1 with gate g, source s and drain d, the
inter-electrode capacitances C_gd and C_gs, the controlled source g_m v_GS, and the
load Z_L.]

Fig. 3.9.1: The JFET source follower: a) circuit schematic; b) the same
circuit, but with an ideal JFET and the inter-electrode capacitances drawn as
external components; c) equivalent circuit.

A MOSFET (metal oxide silicon field effect transistor) has an even greater input
resistance (up to ~10^15 Ω); however, it also has a greater input capacitance (between 20


and 200 pF; it is also more noisy and more sensitive to damage by being overdriven), so
it is not suitable for a wideband amplifier input stage.
In Fig. 3.9.1b we have drawn an ideal JFET device and its inter-electrode
capacitances are modeled as external components. These capacitances determine the
response at high frequencies [Ref. 3.8, 3.16, 3.20, 3.35]. Fig. 3.9.1c shows the
equivalent circuit.
The source follower is actually the common drain circuit with a voltage gain of
nearly unity, as the name ‘follower’ implies. The meaning of the circuit components is:

C_gd   gate–drain capacitance; in most manufacturers' data sheets it is labeled
       C_rss (common source circuit reverse capacitance); values usually range
       between 1 and 5 pF;
C_gs   gate–source capacitance; in their data sheets, manufacturers usually report
       the value of C_iss, the common source total input capacitance, therefore we
       obtain C_gs ≈ C_iss − C_rss; values of C_gs usually range from 3 to 15 pF;
g_m    JFET transconductance; usual values range between 1 000 and 15 000 μS
       (in some data sheets the symbol 'mho' is used to express the unit
       siemens, [S] = [1/Ω]);
Z_L    the loading impedance of the JFET source.

The JFET drain is connected to the power supply, which must be a short circuit
for the drain signal current, therefore we can connect C_gd to ground, in parallel with
the signal source. We assume the signal source impedance to be zero, so we can forget
about C_gd for a while.
From the equivalent circuit in Fig. 3.9.1c we find the currents for the node g:

    i_g = v_G\,s C_{gd} + (v_G - v_s)\,s C_{gs}                           (3.9.1)

and the currents for the node s:

    (v_G - v_s)\,s C_{gs} + (v_G - v_s)\,g_m = \frac{v_s}{Z_L}            (3.9.2)

which can be rewritten as:

    v_G\left(1 + \frac{g_m}{s C_{gs}}\right)
      = v_s\left(1 + \frac{g_m}{s C_{gs}} + \frac{1}{s C_{gs} Z_L}\right) (3.9.3)

From this we obtain the system's voltage gain:

    A_v = \frac{v_s}{v_G}
        = \frac{Z_L\left(1 + \dfrac{s C_{gs}}{g_m}\right)}
               {Z_L\left(1 + \dfrac{s C_{gs}}{g_m}\right) + \dfrac{1}{g_m}}       (3.9.4)

In practical follower circuits we want to make the output signal's dynamic range as
high and the voltage gain as close to 1 as possible. The simplest way to achieve this
is by replacing Z_L with a constant current generator, Fig. 3.9.2, much as we


have done in the differential amplifier. By doing this, we increase the real
(resistive) part of the loading impedance, but we can do little to reduce the always
present loading capacitance C_L.

[Schematic of Fig. 3.9.2 appears here: Q1 biased by the current source I_s and loaded
by the stray capacitance C_L; the equivalent input impedance consists of C_gd in
parallel with C_x = C_gs C_L/(C_gs + C_L), in series with the negative elements
−C_x and −R_x = −(C_gs + C_L)²/(g_m C_gs C_L).]

Fig. 3.9.2: The JFET source follower biased by a current generator and loaded only by
the inevitable stray capacitance C_L: a) circuit schematic; b) the input impedance has
two negative components, owed to C_gs and g_m (see Sec. 3.9.5).

In Eq. 3.9.4 the term C_gs/g_m is obviously the characteristic JFET time constant,
τ_FET:

    \frac{C_{gs}}{g_m} = \tau_{FET} = \frac{1}{\omega_{FET}}              (3.9.5)

Since we now have Z_L = 1/(jω C_L) we can rewrite Eq. 3.9.4 as:

    A_v = \frac{v_s}{v_G}
        = \frac{1 + \dfrac{j\omega}{\omega_{FET}}}
               {1 + j\omega\left(\dfrac{1}{\omega_{FET}} + \dfrac{C_L}{g_m}\right)}   (3.9.6)

and by replacing g_m with ω_FET C_gs (Eq. 3.9.5) we obtain:

    \frac{v_s}{v_G}
        = \frac{1 + \dfrac{j\omega}{\omega_{FET}}}
               {1 + \dfrac{j\omega}{\omega_{FET}}\cdot\dfrac{1}{D_c}}     (3.9.7)

Here D_c is the input to output capacitive divider, which would set the output voltage
if only the capacitances were in place:

    D_c = \frac{C_{gs}}{C_{gs} + C_L}                                     (3.9.8)

We would like to express Eq. 3.9.7 by its pole s_1 and zero s_2, so we need the
normalized canonical form:

    F(s) = A_0\,\frac{s_1}{s - s_1}\cdot\frac{s - s_2}{s_2}               (3.9.9)


Therefore we replace jω with s, multiply both the numerator and the denominator by
ω_FET D_c and obtain:

    F(s) = D_c\,\frac{s + \omega_{FET}}{s + \omega_{FET}\,D_c}            (3.9.10)

At zero frequency, the transfer function gain is:

    A_0 = F(0) = D_c\,\frac{\omega_{FET}}{\omega_{FET}\,D_c} = 1          (3.9.11)

so the zero is:

    s_2 = -\omega_{FET}                                                   (3.9.12)

and the pole is:

    s_1 = -\omega_{FET}\,D_c                                              (3.9.13)

These simple relations are the basis from which we shall calculate the frequency
response magnitude, the phase, the group delay and the step response of the JFET
source follower (simplified at first and including the neglected components later).
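The pole and zero relations above are easy to verify numerically. A minimal sketch —
the element values C_gs = 4 pF, C_L = 4 pF and g_m = 10 mS are assumed here purely for
illustration:

```python
# Pole/zero of the simplified source follower, Eq. 3.9.10-3.9.13.
C_gs, C_L, g_m = 4e-12, 4e-12, 10e-3

w_FET = g_m / C_gs              # Eq. 3.9.5
D_c   = C_gs / (C_gs + C_L)     # Eq. 3.9.8 -> 0.5 here
zero  = -w_FET                  # Eq. 3.9.12
pole  = -w_FET * D_c            # Eq. 3.9.13

def F(jw):
    """Transfer function of Eq. 3.9.10, evaluated at s = jw."""
    return D_c * (jw - zero) / (jw - pole)

print(abs(F(0j)))                     # unity gain at DC (Eq. 3.9.11)
print(abs(F(1j * 1e6 * w_FET)))       # approaches D_c at high frequency
```

At DC the gain is exactly 1; far above ω_FET the circuit degenerates into the
capacitive divider D_c, in agreement with Fig. 3.9.3.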

3.9.1 Frequency response magnitude

The frequency response magnitude is the normalized absolute value of F(s), and we want
to have the normalization in both gain and frequency. Eq. 3.9.7 is already normalized
in frequency (to ω_FET) and in gain (A_0 = 1). To get the magnitude, we must multiply
F(jω) by its complex conjugate, F(−jω), and take the square root:

    |F(\omega)| = \sqrt{F(j\omega)\,F(-j\omega)}
                = \sqrt{\frac{1 + \left(\dfrac{\omega}{\omega_{FET}}\right)^2}
                             {1 + \left(\dfrac{\omega}{\omega_{FET}\,D_c}\right)^2}}  (3.9.14)

Since we want to examine the influence of loading we shall plot the transfer
function for three different values of the ratio C_L/C_gs: 0.5, 1.0, and 2.0 (the
corresponding values of D_c being 0.67, 0.5, and 0.33, respectively). The plots, shown
in Fig. 3.9.3, have three distinct frequency regions: in the lower one, the circuit
behaves as a voltage follower, with the JFET playing an active role, so that v_s = v_G,
whilst in the highest frequency region, only the capacitances are important; in between
we have a transition between both operating modes.


[Plot of Fig. 3.9.3 appears here: |v_s/v_G| vs. ω/ω_FET from 0.01 to 100, curves
a) C_L/C_gs = 0.5, b) 1.0, c) 2.0, with the regions marked 'source follower',
'transition' and 'capacitive divider'.]

Fig. 3.9.3: Magnitude of the frequency response of the JFET source follower for three
different capacitance ratios C_L/C_gs. The pole s_3 = −1/(R_G C_i) has not been taken
into account here (see Fig. 3.9.7 and Fig. 3.9.8).

The relation for the upper cutoff frequency is very interesting. If we set |F(ω)|
to be equal to:

    \sqrt{\frac{1 + (\omega_h/\omega_{FET})^2}
               {1 + (\omega_h/\omega_{FET}\,D_c)^2}} = \frac{1}{\sqrt{2}} (3.9.15)

it follows that:

    \omega_h = \omega_{FET}\,\frac{D_c}{\sqrt{1 - 2\,D_c^2}}              (3.9.16)

From Eq. 3.9.15 we can conclude that by putting D_c = 1/√2 the denominator is
reduced to zero, thus ω_h = ∞. This means that for such a capacitive ratio the
magnitude never falls below 1/√2. However attractive the possibility of achieving
infinite bandwidth may seem, this can never be realized in practice, because any signal
source will have some, although small, internal resistance R_G, resulting in an
additional input pole s_3 = −1/(R_G C_i), where C_i is the total input capacitance of
the JFET. The complete transfer function will now be (see Fig. 3.9.7 and 3.9.8):

    F(s) = \frac{s_1 s_3}{s_2}\cdot\frac{s - s_2}{(s - s_1)(s - s_3)}     (3.9.17)


3.9.2 Phase

We obtain the phase response from Eq. 3.9.7 by taking the arctangent of the
ratio of the imaginary to the real part of F(jω):

    \varphi(\omega) = \arctan\frac{\Im\{F(j\omega)\}}{\Re\{F(j\omega)\}}  (3.9.18)

Since in Eq. 3.9.7 we have a single real pole and a single real zero, the resulting
phase angle is calculated as:

    \varphi(\omega) = \arctan\frac{\omega}{\omega_{FET}}
                    - \arctan\frac{\omega}{\omega_{FET}\,D_c}             (3.9.19)

In Fig. 3.9.4 the three phase plots with the same C_L/C_gs ratios are shown. Because
of the zero, the phase returns to the initial value at high frequencies.

[Plot of Fig. 3.9.4 appears here: φ from 0° to −45° vs. ω/ω_FET from 0.01 to 100,
curves a) C_L/C_gs = 0.5, b) 1.0, c) 2.0.]

Fig. 3.9.4: Phase plots of the JFET source follower for the same three capacitance ratios.

3.9.3 Envelope delay

We obtain the envelope delay by taking the ω derivative of the phase:

    \tau_e = \frac{d\varphi}{d\omega}                                     (3.9.20)

but we usually prefer the normalized expression τ_e ω_h. In our case, however, the
upper cutoff frequency, ω_h, is changing with the capacitance divider D_c. So instead
of ω_h we


shall, rather, normalize the envelope delay to the characteristic frequency of the
JFET itself, ω_FET:

    \tau_e\,\omega_{FET} = \frac{1}{1 + (\omega/\omega_{FET})^2}
                         - \frac{D_c}{D_c^2 + (\omega/\omega_{FET})^2}    (3.9.21)

The envelope delay plots for the three capacitance ratios are shown in Fig. 3.9.5.
Note that for all three ratios there is a frequency region in which the envelope delay
becomes positive, implying an output signal advance which, in correlation with the
phase plots, goes up with frequency. We have explained the physical background of
this behavior in Part 2, Fig. 2.2.5 and 2.2.6. The positive envelope delay influences
the input impedance in a very unfavorable way, as we shall soon see.

[Plot of Fig. 3.9.5 appears here: τ_e ω_FET vs. ω/ω_FET from 0.01 to 100, curves
a) C_L/C_gs = 0.5, b) 1.0, c) 2.0, with the delay (negative) and advance (positive)
regions marked.]

Fig. 3.9.5: The JFET envelope delay for the three capacitance ratios. Note the positive
peak (phase advance region): trouble in sight!

3.9.4 Step response

We are going to use Eq. 3.9.9, which we multiply by the unit step operator 1/s
to obtain the step response in the complex frequency domain; we then obtain the time
domain response by applying the inverse Laplace transform:

    G(s) = \frac{1}{s}\,F(s) = D_c\,\frac{1}{s}\cdot\frac{s - s_2}{s - s_1}       (3.9.22)

    g(t) = \mathcal{L}^{-1}\{G(s)\}
         = D_c \sum \mathrm{res}\,\frac{(s - s_2)\,e^{st}}{s\,(s - s_1)}          (3.9.23)


The residue at s → 0 is:

    \mathrm{res}_0 = \lim_{s\to 0}\, s\,\frac{(s - s_2)\,e^{st}}{s\,(s - s_1)}
                   = \frac{s_2}{s_1}                                      (3.9.24)

and the residue at s → s_1 is:

    \mathrm{res}_1 = \lim_{s\to s_1}\,(s - s_1)\,\frac{(s - s_2)\,e^{st}}{s\,(s - s_1)}
                   = \frac{s_1 - s_2}{s_1}\,e^{s_1 t}                     (3.9.25)

By entering both residues back into Eq. 3.9.23 we get:

    g(t) = D_c\left(\frac{s_2}{s_1} + \frac{s_1 - s_2}{s_1}\,e^{s_1 t}\right)     (3.9.26)

and, by considering Eq. 3.9.12 and 3.9.13, as well as that ω_FET = 1/τ_FET and using
the normalized time t/τ_FET, we end up with:

    g(t) = 1 - (1 - D_c)\,e^{-D_c\,t/\tau_{FET}}                          (3.9.27)

The plot of this relation is shown in Fig. 3.9.6, again for the same three
capacitance ratios. The initial output signal jump at t = 0 is the input signal
cross-talk (through C_gs) multiplied by the D_c factor:

    g(0) = D_c                                                            (3.9.28)

Following the jump is the exponential relaxation towards the normal follower action at
lower frequencies. If the input pole s_3 = −1/(R_G C_i) is taken into account, the
jump would be slowed down to an exponential rise with a time constant of R_G C_i.
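A two-line numerical check of Eq. 3.9.27, here for the middle curve of Fig. 3.9.6
(C_L/C_gs = 1, i.e. D_c = 0.5):

```python
from math import exp

# Step response of the simplified follower, Eq. 3.9.27:
# g(t) = 1 - (1 - D_c) * exp(-D_c * t / tau_FET)
def g(t_norm, D_c):
    """t_norm is the normalized time t/tau_FET."""
    return 1 - (1 - D_c) * exp(-D_c * t_norm)

D_c = 0.5
print(g(0.0, D_c))             # the capacitive cross-talk jump, Eq. 3.9.28
print(round(g(8.0, D_c), 3))   # near the end of the plotted range
```

The response starts at exactly g(0) = D_c = 0.5 and has relaxed to about 0.99 by
t = 8 τ_FET, matching curve b) in Fig. 3.9.6.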

[Plot of Fig. 3.9.6 appears here: g(t) vs. t/τ_FET from 0 to 8, curves
a) C_L/C_gs = 0.5, b) 1.0, c) 2.0, each starting from its capacitive divider level D_c
and relaxing towards the source follower level v_G.]

Fig. 3.9.6: The JFET source follower step response for the three capacitance ratios.


As mentioned earlier in connection with Eq. 3.9.17, when the value of R_G is
comparable to 1/g_m, an additional pole has to be considered. We must derive the
system transfer function again, from the following two equations.
For the currents into the node g:

    \frac{v_G - v_g}{R_G} = v_g\,s C_{gd} + (v_g - v_s)\,s C_{gs}         (3.9.29)

and for the currents into the node s:

    (v_g - v_s)\,s C_{gs} + (v_g - v_s)\,g_m = v_s\,s C_L                 (3.9.30)

We first express v_g as a function of v_s from Eq. 3.9.30:

    v_g = v_s\left(1 + \frac{s C_L}{s C_{gs} + g_m}\right)                (3.9.31)

Then we replace v_g in Eq. 3.9.29 by 3.9.31:

    v_G = v_s\left(1 + \frac{s C_L}{s C_{gs} + g_m}\right)
              \left(1 + R_G\,s C_{gd} + R_G\,s C_{gs}\right)
          - v_s\,R_G\,s C_{gs}                                            (3.9.32)

After some further manipulation we arrive at:

    \frac{v_s}{v_G} = \frac{1}{1 + R_G\,s C_{gd}
        + \dfrac{s C_L\left[1 + R_G\,s\,(C_{gd} + C_{gs})\right]}{s C_{gs} + g_m}}    (3.9.33)

Now we put this into the normalized canonical form and use Eq. 3.9.5 again to
replace the term g_m/C_gs with ω_FET. Also, we express all the time constants as
functions of ω_FET and the appropriate capacitance ratios. Finally, we want to see how
the response depends on the product g_m R_G, so we multiply all the terms containing
R_G with g_m and compensate each of them accordingly. The final expression is:

    \frac{v_s}{v_G} =
    \frac{(s + \omega_{FET})\;
          \dfrac{\omega_{FET}\,\dfrac{C_{gs}}{C_{gd}}\,\dfrac{1}{g_m R_G}}
                {1 + \dfrac{C_L}{C_{gs}} + \dfrac{C_L}{C_{gd}}}}
         {s^2
          + s\,\dfrac{\omega_{FET}\left[1 + \dfrac{1}{g_m R_G}
            \left(\dfrac{C_{gs}}{C_{gd}} + \dfrac{C_L}{C_{gd}}\right)\right]}
                     {1 + \dfrac{C_L}{C_{gs}} + \dfrac{C_L}{C_{gd}}}
          + \dfrac{\omega_{FET}^2\,\dfrac{C_{gs}}{C_{gd}}\,\dfrac{1}{g_m R_G}}
                  {1 + \dfrac{C_L}{C_{gs}} + \dfrac{C_L}{C_{gd}}}}        (3.9.34)

To plot the responses we shall set:

    C_L/C_gs = 1,   C_gs/C_gd = 5,   ω_FET = 1,   s = jω,   and   g_m R_G = [0.3; 1; 3]
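Rather than the normalized form, it is simpler to evaluate Eq. 3.9.33 directly. The
following sketch uses the same normalized parameters (C_L/C_gs = 1, C_gs/C_gd = 5,
ω_FET = g_m/C_gs = 1; the absolute element values are arbitrary) and reproduces the
peaking seen in Fig. 3.9.7 for g_m R_G = 3:

```python
# Magnitude of the complete follower transfer function, Eq. 3.9.33.
C_gs, g_m = 1.0, 1.0            # normalized so that w_FET = g_m/C_gs = 1
C_L, C_gd = C_gs, C_gs / 5.0    # C_L/C_gs = 1, C_gs/C_gd = 5

def mag(w, gmRG):
    """|v_s/v_G| at s = jw for a given g_m*R_G product."""
    R_G = gmRG / g_m
    s = 1j * w
    den = (1 + R_G * s * C_gd
           + s * C_L * (1 + R_G * s * (C_gd + C_gs)) / (s * C_gs + g_m))
    return abs(1 / den)

peak = {}
for gmRG in (0.3, 1.0, 3.0):
    ws = [10 ** (-2 + 4 * k / 400) for k in range(401)]   # 0.01 ... 100
    peak[gmRG] = max(mag(w, gmRG) for w in ws)
    print(gmRG, round(peak[gmRG], 2))
```

For g_m R_G = 0.3 the response is monotonic (peak magnitude 1, at DC), while for
g_m R_G = 3 the magnitude peaks at about 1.05 near ω ≈ 0.27 ω_FET.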


[Plot of Fig. 3.9.7 appears here: |v_s/v_G| vs. ω/ω_FET from 0.01 to 100, curves
a) g_m R_G = 0.3, b) 1.0, c) 3.0, together with the R_G = 0 case.]

Fig. 3.9.7: The JFET source follower frequency response for a ratio C_L/C_gs = 1 and a
variable signal source impedance, so that g_m R_G is 0.3, 1, and 3, respectively. Note
the response peaking for g_m R_G = 3.

[Plot of Fig. 3.9.8 appears here: step responses vs. t/τ_FET from 0 to 8, curves
a) g_m R_G = 0.3, b) 1.0, c) 3.0, together with the R_G = 0 case.]

Fig. 3.9.8: The JFET source follower step response for the same conditions as in Fig. 3.9.7.


3.9.5 Input impedance

In Fig. 3.9.7 and 3.9.8 we have seen how the JFET source follower response is
affected by its input impedance; this behavior becomes evident when the signal source
has a non-zero resistance. Here, we are going to explore the circuit in more depth to
examine the influence of a complex and, in particular, inductive signal source.
As we have done in the previous analysis, the gate–drain capacitance C_gd will
appear in parallel with the input, so we can treat its admittance separately and
concentrate on the remaining input components.
We start from Eq. 3.9.1 by solving it for v_s:

    v_s = v_G - \frac{i_i}{s C_{gs}}                                      (3.9.35)

This we insert into Eq. 3.9.2:

    v_G\,(s C_{gs} + g_m)
      - \left(v_G - \frac{i_i}{s C_{gs}}\right)
        \left(s C_{gs} + g_m + \frac{1}{Z_L}\right) = 0                   (3.9.36)

Because the JFET source is biased from a constant current generator (whose
impedance we assume to be infinite), the loading admittance is 1/Z_L = s C_L. Let us
put this back into Eq. 3.9.36 and rearrange it a little:

    v_G\,s C_L = i_i\left(1 + \frac{g_m}{s C_{gs}} + \frac{C_L}{C_{gs}}\right)    (3.9.37)

Furthermore:

    v_G = i_i\left(\frac{1}{s C_L} + \frac{g_m}{s^2 C_{gs} C_L} + \frac{1}{s C_{gs}}\right)
        = i_i\,\frac{s\,(C_{gs} + C_L) + g_m}{s^2\,C_{gs} C_L}            (3.9.38)

The input impedance (without C_gd, hence the prime [']) is then:

    Z_i' = \frac{v_g}{i_i} = \frac{s\,(C_{gs} + C_L) + g_m}{s^2\,C_{gs} C_L}      (3.9.39)

To see more clearly how this impedance is comprised, we invert it to find the
admittance and apply the continued fraction synthesis in order to identify the
individual components:

    Y_i' = \frac{s^2\,C_{gs} C_L}{s\,(C_{gs} + C_L) + g_m}
         = s\,\frac{C_{gs} C_L}{C_{gs} + C_L}
         - \frac{g_m\,s\,\dfrac{C_{gs} C_L}{C_{gs} + C_L}}
                {s\,(C_{gs} + C_L) + g_m}                                 (3.9.40)

The first fraction is the admittance of the capacitances C_gs and C_L connected in
series. Let us name this combination C_x:

    C_x = \frac{C_{gs} C_L}{C_{gs} + C_L}                                 (3.9.41)


The second fraction, which has a negative sign, must be further simplified. We invert
it again, and after some simple rearrangement we obtain the impedance:

    Z_x = -\frac{(C_{gs} + C_L)^2}{g_m\,C_{gs} C_L}
          - \frac{C_{gs} + C_L}{s\,C_{gs} C_L}                            (3.9.42)

The first part is interpreted as a negative resistance, which we label −R_x in order
to follow the negative sign in the following analysis more clearly:

    -R_x = -\frac{(C_{gs} + C_L)^2}{g_m\,C_{gs} C_L}                      (3.9.43)

The second part is a negative capacitance, which we label −C_x because it has the
same absolute value as C_x from Eq. 3.9.41:

    -C_x = -\frac{C_{gs} C_L}{C_{gs} + C_L}                               (3.9.44)
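To get a feeling for the magnitudes involved, the components can be evaluated for a
set of illustrative (assumed) values, C_gs = C_L = 4 pF and g_m = 10 mS:

```python
# Negative input-impedance components of the capacitively loaded follower,
# Eq. 3.9.41, 3.9.43 and 3.9.44; element values are illustrative only.
C_gs, C_L, g_m = 4e-12, 4e-12, 10e-3    # 4 pF, 4 pF, 10 mS

C_x = C_gs * C_L / (C_gs + C_L)               # Eq. 3.9.41: series combination
R_x = -(C_gs + C_L)**2 / (g_m * C_gs * C_L)   # Eq. 3.9.43: negative resistance
C_x_neg = -C_x                                # Eq. 3.9.44: negative capacitance

print(C_x, R_x)   # prints: 2e-12 -400.0  (2 pF and -400 ohm)
```

A negative resistance of a few hundred ohms is well within the range where a practical
source inductance can satisfy the oscillation condition of Eq. 3.9.47.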

Now that we have all the components we can reintroduce the gate–drain capacitance
C_gd, so that the final equivalent input impedance looks like Fig. 3.9.9. We can write
the complete input admittance:

    Y_i = j\omega\,(C_{gd} + C_x)
          + \frac{1}{-R_x + \dfrac{1}{j\omega\,(-C_x)}}                   (3.9.45)

[Schematic of Fig. 3.9.9 appears here: a) the input equivalent circuit with C_gd and
C_x in parallel and the series branch −R_x, −C_x; b) the follower driven from an
inductive source L_G; c) the resulting Colpitts oscillator topology.]

Fig. 3.9.9: a) The equivalent input impedance of the capacitively loaded JFET source
follower has negative components which can be a nuisance if, as in b), the signal
source has an inductive impedance, forming c) a familiar Colpitts oscillator. If C_gd
is small, the circuit will oscillate for a broad range of inductance values.

We can separate the real and imaginary part of Y_i by putting Eq. 3.9.45 on a
common denominator:

    Y_i = \Re\{Y_i\} + \Im\{Y_i\}
        = -\frac{\omega^2 C_x^2 R_x}{1 + \omega^2 C_x^2 R_x^2}
          + j\omega\,\frac{C_{gd} + \omega^2 C_x^2 R_x^2\,(C_{gd} + C_x)}
                          {1 + \omega^2 C_x^2 R_x^2}                      (3.9.46)


The negative real part can cause some serious trouble [Ref. 3.24]. Suppose we are
troubleshooting a circuit with a switching power supply and we suspect it to be a
cause of a strong electromagnetic interference (EMI); we want to use a coil with an
appropriate inductance L (which, of course, has its own real and imaginary admittance)
to inspect the various parts of the circuit for EMI intensity and field direction. If
we connect this coil to the source follower and if the coil resistance is low, we
would have:

    \Re\{Y_L\} + \Re\{Y_i\} \le 0                                         (3.9.47)

and the source follower becomes a familiar Colpitts oscillator, Fig. 3.9.9c
[Ref. 3.25]. Indeed, some older oscilloscopes would burst into oscillation if
connected to such a coil and with the input attenuator switched to maximum sensitivity
(a few highly priced instruments built by respectable firms, back in the early 1970s,
were no exception).
    By taking into account Eq. 3.9.42, 3.9.43 and 3.9.8, and substituting
ω_FET = g_m/C_gs, the real part of the input admittance can be rewritten as:

    \Re\{Y_i\} = G_i = -g_m\,\frac{C_L}{C_{gs}}\cdot
        \frac{(\omega/\omega_{FET})^2}{1 + (\omega/\omega_{FET}\,D_c)^2}  (3.9.48)

The last fraction represents the normalized frequency dependence of this admittance:

    G_{iN} = \frac{(\omega/\omega_{FET})^2}{1 + (\omega/\omega_{FET}\,D_c)^2}     (3.9.49)

Fig. 3.9.10 shows the plots of G_iN for the same ratios of C_L/C_gs as before. Note
that at high frequencies G_iN settles at D_c², a quadratic dependence on D_c.
[Plot of Fig. 3.9.10 appears here: the normalized negative input conductance vs.
ω/ω_FET from 0.01 to 100, curves a) C_L/C_gs = 0.5, b) 1.0, c) 2.0.]

Fig. 3.9.10: Normalized negative input conductance G_iN vs. frequency.

A negative input impedance is always highly undesirable and we shall show a few
possible solutions. The obvious way would be to introduce a resistor in series with


the JFET gate; since we should have some series resistance anyway in order to protect
the sensitive input from static discharge or accidental overdrive, this will seem to be the
preferred choice. However, after a closer look, this protection resistance is too small to
prevent oscillations in case of an inductive signal source impedance. The required
resistance value which will guarantee stability in all conditions will be so high that the
bandwidth will be reduced by nearly an order of magnitude. Thus this method of
compensation is used only if we do not care how much bandwidth we obtain.
A more elegant method of compensation is the one which we have used already in
Fig. 3.5.3. If we introduce a serially connected R_x C_x network in parallel with the
JFET input, as shown in Fig. 3.9.11, we obtain Y_ic = 0 and thus Z_ic = ∞. Note the
corresponding phasor diagram: we first draw the negative components, −R_x and −C_x,
find the impedance vector −Z_x and invert it to find the negative admittance, −Y_x.
We then compensate it by a positive admittance Y_x such that their sum Y_ic = 0. We
finally invert Y_x to find Z_x and decompose it into its real and imaginary part, R_x
and C_x:

    Y_{ic} = \frac{1}{-R_x + \dfrac{1}{j\omega\,(-C_x)}}
           + \frac{1}{R_x + \dfrac{1}{j\omega\,C_x}} = 0                  (3.9.50)
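The cancellation in Eq. 3.9.50 holds at every frequency, not just at a single tuned
point, which a short numerical check makes evident (R_x and C_x values carried over
from the illustrative example, C_gs = C_L = 4 pF, g_m = 10 mS):

```python
# Verifying Eq. 3.9.50: a positive series R_x C_x network in parallel with
# the negative branch (-R_x, -C_x) gives zero net admittance at any frequency.
R_x, C_x = 400.0, 2e-12     # illustrative values (assumed)

def Y_series(w, R, C):
    """Admittance of a resistor R in series with a capacitor C, at omega = w."""
    return 1 / (R + 1 / (1j * w * C))

residuals = [abs(Y_series(w, -R_x, -C_x) + Y_series(w, R_x, C_x))
             for w in (1e7, 1e8, 1e9, 1e10)]
print(residuals)   # numerically negligible at every frequency
```

This is why the compensated input of Fig. 3.9.12d shows no negative-real-axis
crossing.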


[Phasor diagram of Fig. 3.9.11 appears here: the negative branch −R_x, −C_x with its
impedance −Z_x and admittance −Y_x, compensated by the positive branch R_x, C_x with
admittance Y_x, so that Y_ic = 0.]

Fig. 3.9.11: a) The negative components of the input impedance can be compensated by
an equal but positive network, connected in parallel, so that their admittances sum to
zero (infinite impedance). In b) we see the corresponding phasor diagram.

With this compensation, the total input impedance is the one belonging to the
parallel connection of C_gd with C_x, and assuming a 1 MΩ gate bias resistor R_in:

    Z_i = \frac{1}{\dfrac{1}{R_{in}} + j\omega\,(C_{gd} + C_x)}
        = \frac{R_{in}}{1 + j\omega\,R_{in}\left(C_{gd}
          + \dfrac{C_{gs} C_L}{C_{gs} + C_L}\right)}                      (3.9.51)

The analysis of the input impedance would be incomplete without Fig. 3.9.12,
where the Nyquist diagrams of the impedance are shown revealing its frequency
dependence, as well as the influence of different signal source impedances.

-3.92-
P. Starič, E. Margan Wideband Amplifier Stages with Semiconductor Devices


[Four Nyquist diagrams of Fig. 3.9.12 appear here: a) purely capacitive input in
parallel with R_in = 1 MΩ; b) the same with the negative components −R_x, −C_x near
the origin; c) an inductive source L_G making the locus cross the negative real axis
at f_osc; d) the compensated input.]

Fig. 3.9.12: a) The input impedance of the JFET source follower, assumed to be purely
capacitive and in parallel with a 1 MΩ gate biasing resistor; thus at f = 0 we see
only the resistor and at f = ∞ the reactance of the input capacitance is zero; b) the
negative input impedance components affect the input impedance near the origin;
c) with an inductive signal source, the point in which the impedance crosses the
negative real axis corresponds to the system resonant frequency, provoking
oscillations. d) The compensation removes the negative components.


In Fig. 3.9.12a the JFET gate is tied to ground by a 1 MΩ resistor which, with a
purely capacitive input impedance, would give a phasor diagram in the form of a half
circle with frequency varying from DC to infinity.
In Fig. 3.9.12b we concentrate on the small area near the complex plane origin
(high frequencies, close to f_FET), where we draw the influence of the negative input
impedance components, assuming a resistive signal source.
In Fig. 3.9.12c an inductive signal source (with a small resistive component)
will cause the impedance to cross the real axis in the negative region, therefore the
circuit would oscillate at the frequency at which this crossing occurs.
Finally, in Fig. 3.9.12d we see the same situation but with the negative
components compensated as in Fig. 3.9.11. Note the small loop in the first quadrant of
the impedance plot — it is caused by the small resistance R_G of the coil L_G, the
coil inductance, and the total input capacitance C_in.
In Fig. 3.9.13 we see yet another way of compensating the negative input
impedance. Here the compensation is achieved by inserting a small resistance R_d in
the drain, thus allowing the anti-phase signal at the drain to influence the gate via
C_gd and cancel the in-phase signal from the JFET source via C_gs. This method is
sometimes preferred over the former method, because the PCB pads, which are needed to
accommodate the additional compensation components, also create some additional
parasitic capacitance from the gate to ground.

[Schematic of Fig. 3.9.13 appears here: Q1 with a small drain resistor R_d; the
compensating current i_comp flows through C_gd, cancelling the negative input
components −R_x, −C_x.]

Fig. 3.9.13: Alternative compensation of the input impedance negative components,
using negative feedback from the JFET drain.

It should be noted, however, that the negative input impedance compensation can be
achieved for small signals only. Large signals vary the JFET gate's reverse bias
voltage and the drain–source voltage considerably, therefore both C_gs and C_gd, as
well as g_m, change with voltage. We therefore expect some non-linear effects to
appear with a large signal drive. We shall examine this and some other aspects of the
source follower's performance in Part 5.


Résumé of Part 3

In this part we have analyzed some basic circuits for wideband amplification,
examined their most important limitations, and explained several ways of improving
their high frequency performance. The property which can cause most trouble, even for
experienced designers, is the negative input impedance of some of the most useful
wideband circuits, and we have shown a few possible solutions.
The reader must realize, however, that the analytical tools and solutions
presented are by no means the ultimate design examples. For a final design, many other
aspects of circuit performance must also be carefully considered, and, more often than
not, these other factors will compromise the wideband performance severely.
As we have indicated at some points, there are ways of compensating certain
unwanted circuit behavior by implementing the system in a differential configuration,
but, on the negative side, this doubles the number of active components, increasing
cost, power dissipation, circuit size, strays and parasitics and also the production and
testing complexity. From the wideband design point of view, having many active
components usually means many more poles and zeros that must be carefully analyzed
and appropriately ‘tuned’.
In Part 4 and Part 5 we shall explain some theoretical and practical techniques
for an efficient design approach at the system level.


References:

[3.1] C.R. Battjes, Amplifier Frequency Response and Risetime, AFTR Class Notes
(Amplifier Frequency and Transient Response), Tektronix, Inc., Beaverton, 1977
[3.2] P. Starič, Transistor as an Impedance Converter (in Slovenian with an English Abstract),
Elektrotehniški Vestnik, 1989, pp. 17–22.
[3.3] D.L. Feucht, Handbook of Analog Circuit Design,
Academic Press, Inc., San Diego, 1990
[3.4] I.E. Getreu, Modeling the Bipolar Transistor,
Elsevier Scientific Publishing Company, Amsterdam, 1978
[3.5] R.I. Ross, T-coil Transistor Interstage Coupling,
AFTR Class Notes, Tektronix, Inc., Beaverton, 1968
[3.6] P. Starič, Application of T-coil Transistor Interstage Coupling in Wideband Pulse Amplifiers,
Elektrotehniški Vestnik, 1990, pp. 143–152.
[3.7] P.R. Gray & R.G. Meyer, Analysis and Design of Analog Integrated Circuits,
John Wiley, New York, 1969
[3.8] B.E. Hofer, Amplifier Risetime and Frequency Response,
AFTR Class Notes, Tektronix, Inc., Beaverton, Oregon, 1982
[3.9] J. J. Ebers & J.L. Moll, Large-Signal Behavior of Junction Transistors,
Proceedings of IRE, Vol. 42, pp. 1761–1772, December 1954
[3.10] H.K. Gummel & H.C. Poon, An Integral Charge Control Model of Bipolar Transistors,
Bell Systems Technical Journal, Vol. 49, pp. 827–852, May 1970
[3.11] J.M. Early, Effects of Space-Charge Layer Widening in Junction Transistors,
Proceedings of IRE, Vol. 40, pp. 1401–1406, November 1952
[3.12] P.E. Gray & C.L Searle, Electronic Principles, Physics, Models and Circuits,
John Wiley, New York, 1969
[3.13] L.J. Giacoletto, Electronics Designer's Handbook,
Second Edition, McGraw-Hill, New York 1961.
[3.14] P.M. Chirlian, Electronic Circuits, Physical Principles and Design,
McGraw-Hill, 1971.
[3.15] J.E. Cathey, Theory and Problems of Electronic Devices and Circuits,
Schaum's Outline Series in Engineering, McGraw-Hill, New York, 1989
[3.16] A.D. Evans, Designing with Field-Effect Transistors, (Siliconix Inc.),
McGraw-Hill, New York, 1981
[3.17] J.M. Pettit & M. M. McWhorter, Electronic Amplifier Circuits, Theory and Design,
McGraw-Hill, New York, 1961
[3.18] B. Orwiler, Vertical Amplifier Circuits,
Tektronix Inc., Beaverton, Oregon, 1969
[3.19] C.R. Battjes, Technical Notes on Bridged T-coil Peaking,
Internal Publication, Tektronix, Inc., Beaverton, Oregon, 1969
[3.20] P. Antognetti & G. Massobrio, Semiconductor Device Modelling with SPICE,
McGraw-Hill, New York, 1988
[3.21] P. Horowitz & W. Hill, The Art of Electronics,
Cambridge University Press, Cambridge, 1987
[3.22] G. Bruun, Common-Emitter Transistor Video-Amplifiers,
Proceedings of the IRE, Nov. 1956, pp. 1561–1572

-3.97-

[3.23] F.W. Grover, Inductance Calculations, (Reprint),
Instrument Society of America, Research Triangle Park, N.C. 27709, 1973.
[3.24] G.B. DeBella, Stability of Capacitively-Loaded Emitter Follower,
Hewlett-Packard Journal, Vol. 17, No. 8, April 1966
[3.25] L. Strauss, Wave Generation and Shaping,
McGraw-Hill, New York, 1960
[3.26] J. Addis, private e-mail exchange with the authors (see JAemail.PDF)
[3.27] M.E. Van Valkenburg, Introduction to Modern Network Synthesis,
John Wiley, New York, 1960
[3.28] SPICE - An Overview,
http://www.seas.upenn.edu/~jan/spice/spice.overview.html
[3.29] ORCAD PSpice page - Info and download of latest free evaluation version,
http://www.orcad.com/products/pspice/pspice.htm
[3.30] MicroCAP, Spectrum Software, Inc.,
http://www.spectrum-soft.com
[3.31] R.J. Widlar, Some Circuit Design Techniques for Linear Integrated Circuits,
IEEE Transactions on Circuit Theory, Vol. CT-12, Dec. 1965, pp. 586–590
[3.32] G.R. Wilson, A Monolithic Junction FET-NPN Operational Amplifier,
IEEE Journal of Solid-State Circuits, Vol. SC-3, Dec. 1968, pp. 341–348
[3.33] A.P. Brokaw, A Simple Three-Terminal IC Bandgap Reference,
IEEE Journal of Solid-State Circuits, Vol. SC-9, Dec. 1974, pp. 388–393
[3.34] S. Roach: Signal Conditioning in Oscilloscopes and the Spirit of Invention,
J. Williams (Editor), The Art and Science of Analog Circuit Design, Ch. 7,
Butterworth-Heinemann, 1995, ISBN 0-7506-9505-6
[3.35] P. Starič, Wideband JFET Source Follower,
Electronic Engineering, Aug. 1992, pp. 29–34
[3.36] J.M. Miller, <http://www.ieee.org/organizations/history_center/legacies/miller.html>

-3.98-
P. Starič, E. Margan:

Wideband Amplifiers

Part 4:

Cascading Amplifier Stages, Selection of Poles

For every circuit design there is an equivalent and opposite redesign!


A generalization of Newton's Law by
Derek F. Bowers, Analog Devices
P.Starič, E.Margan Cascading Amplifier Stages, Selection of Poles

To Calculate Or Not To Calculate — That Is Not A Question

In the fourth part of this book we discuss some basic system
integration procedures and derive, based on two different optimization
criteria, the two system families, which we have already used extensively
in previous parts: the Butterworth system family, i.e., the systems with a
maximally flat amplitude response, and the Bessel system family, i.e., the
systems with a maximally flat envelope delay.
Once we derive the relations from which the system poles are
calculated, we shall present the resulting poles in table form, in the same
way as was traditionally done in the literature (but, more often than not,
without the corresponding derivation procedures).
Some readers might ask why on earth, in the computer era, we
bother to write tables full of numbers which very probably no one will
ever refer to. The answer is that many amplifier designers are ‘analog by
vocation’: they use the computer only when they must, and they like to do
lots of paperwork before they finally sit down at the breadboard. For them,
without those tables a book like this would be incomplete (even if there
are many of those who first sit by the breadboard with the soldering iron in
one hand and a ’scope probe in the other, and do the paperwork later!).
Anyway, for the younger generations we provide the necessary
computer routines in Part 6.
The principles developed in this part will be used in Part 5, in
which some more sophisticated amplifier building blocks are discussed.

- 4.2 -

Contents .................................................................................................................................... 4.3
List of Figures ........................................................................................................................... 4.4
List of Tables ............................................................................................................................ 4.5

Contents:
4.0 Introduction ....................................................................................................................................... 4.7
4.1 A Cascade of Identical, DC Coupled, VG Loaded Stages ................................................................ 4.9
4.1.1 Frequency Response and the Upper Half Power Frequency ............................................ 4.9
4.1.2 Phase Response .............................................................................................................. 4.12
4.1.3 Envelope Delay .............................................................................................................. 4.12
4.1.4 Step Response ................................................................................................................ 4.13
4.1.5 Rise time Calculation ..................................................................................................... 4.15
4.1.6 Slew Rate Limit ............................................................................................................. 4.16
4.1.7 Optimum Single Stage Gain and Optimum Number of Stages ...................................... 4.17
4.2 A Multi-stage Amplifier with Identical AC Coupled Stages ........................................................... 4.21
4.2.1 Frequency Response and Lower Half Power Frequency ................................................ 4.22
4.2.2 Phase Response .............................................................................................................. 4.23
4.2.3. Step Response ............................................................................................................... 4.24
4.3 A Multi-stage Amplifier with Butterworth Poles (MFA Response) ................................................ 4.27
4.3.1. Frequency Response ..................................................................................................... 4.31
4.3.2. Phase response .............................................................................................................. 4.32
4.3.3. Envelope Delay ............................................................................................................. 4.33
4.3.4 Step Response ................................................................................................................ 4.33
4.3.5 Ideal MFA Filter, Paley–Wiener Criterion .................................................................... 4.36
4.4 Derivation of Bessel Poles for MFED Response ............................................................................. 4.39
4.4.1 Frequency Response ...................................................................................................... 4.42
4.4.2 Upper Half Power Frequency ........................................................................................ 4.43
4.4.3 Phase Response .............................................................................................................. 4.43
4.4.4 Envelope Delay .............................................................................................. 4.45
4.4.5 Step Response ................................................................................................ 4.45
4.4.6 Ideal Gaussian Frequency Response .............................................................. 4.49
4.4.7 Bessel Poles Normalized to Equal Cut Off Frequency .................................. 4.51
4.5. Pole Interpolation ........................................................................................................................... 4.55
4.5.1. Derivation of Modified Bessel poles ............................................................................ 4.55
4.5.2. Pole Interpolation Procedure ........................................................................................ 4.56
4.5.3. A Practical Example of Pole interpolation .................................................................... 4.59
4.6. Staggered vs. Repeated Bessel Pole Pairs ...................................................................................... 4.63
4.6.1. Assigning the Poles For Maximum dynamic Range ...................................................... 4.65
Résumé of Part 4 ................................................................................................................................... 4.69
References ............................................................................................................................................. 4.71

- 4.3 -

List of Figures:

Fig. 4.1.1: A multi-stage amplifier with identical, DC coupled, RC loaded stages ............................... 4.9
Fig. 4.1.2: Frequency response of an n-stage amplifier, n = 1–10 ........................................................ 4.10
Fig. 4.1.3: A slope of −6 dB/octave equals −20 dB/decade .................................................................. 4.11
Fig. 4.1.4: Phase angle of the amplifier in Fig. 4.1.1, n = 1–10 ............................................................ 4.12
Fig. 4.1.5: Envelope delay of the amplifier in Fig. 4.1.1, n = 1–10 ...................................................... 4.13
Fig. 4.1.6: Amplifier with n identical DC coupled stages, excited by the unit step .............................. 4.13
Fig. 4.1.7: Step response of the amplifier in Fig. 4.1.6, n = 1–10 ........................................................ 4.15
Fig. 4.1.8: Slew rate limiting: definition of parameters ........................................................................ 4.17
Fig. 4.1.9: Minimal relative rise time as a function of total gain and number of stages ....................... 4.18
Fig. 4.1.10: Optimal number of stages required for minimal rise time at given gain ............................ 4.20
Fig. 4.2.1: Multi-stage amplifier with AC coupled stages .................................................................... 4.21
Fig. 4.2.2: Frequency response of the amplifier in Fig. 4.2.1, n = 1–10 ............................................... 4.22
Fig. 4.2.3: Phase angle of the amplifier in Fig. 4.2.1, n = 1–10 ............................................................ 4.23
Fig. 4.2.4: Step response of the amplifier in Fig. 4.2.1, n = 1–5 and 10 ............................................... 4.25
Fig. 4.2.5: Pulse response of the amplifier in Fig. 4.2.1, n = 1, 3, and 8 .............................................. 4.25
Fig. 4.3.1: Impulse response of three different complex conjugate pole pairs ...................................... 4.28
Fig. 4.3.2: Butterworth poles for the system order n = 1–5 .................................................................. 4.30
Fig. 4.3.3: Frequency response magnitude of Butterworth systems, n = 1–10 ..................................... 4.31
Fig. 4.3.4: Phase response of Butterworth systems, n = 1–10 .............................................................. 4.32
Fig. 4.3.5: Envelope delay of Butterworth systems, n = 1–10 .............................................................. 4.33
Fig. 4.3.6: Step response of Butterworth systems, n = 1–10 ................................................................ 4.34
Fig. 4.3.7: Ideal MFA frequency response ........................................................................................... 4.36
Fig. 4.3.8: Step response of a network having an ideal MFA frequency response ............................... 4.36
Fig. 4.4.1: Bessel poles of order n = 1–10 ............................................................................................ 4.41
Fig. 4.4.2: Frequency response of systems with Bessel poles, n = 1–10 .............................................. 4.42
Fig. 4.4.3: Phase angle of systems with Bessel poles, n = 1–10 ........................................................... 4.44
Fig. 4.4.4: Phase angle as in Fig. 4.4.3, but in linear frequency scale ................................................... 4.44
Fig. 4.4.5: Envelope delay of systems with Bessel poles, n = 1–10 ..................................................... 4.45
Fig. 4.4.6: Step response of systems with Bessel poles, n = 1–10 ........................................................ 4.46
Fig. 4.4.7: Ideal Gaussian frequency response (MFED) ....................................................................... 4.49
Fig. 4.4.8: Ideal Gaussian frequency response in loglog scale ............................................................. 4.50
Fig. 4.4.9: Step response of a system with an ideal Gaussian frequency response ............................... 4.50
Fig. 4.4.10: Frequency response of systems with normalized Bessel poles, n = 1–10 ......................... 4.52
Fig. 4.4.11: Phase angle of systems with normalized Bessel poles, n = 1–10 ...................................... 4.52
Fig. 4.4.12: Envelope delay of systems with normalized Bessel poles, n = 1–10 ................................ 4.53
Fig. 4.4.13: Step response of systems with normalized Bessel poles, n = 1–10 ................................... 4.53
Fig. 4.5.1: Pole interpolation procedure ............................................................................................... 4.56
Fig. 4.5.2: Frequency response of a system with interpolated poles ..................................................... 4.60
Fig. 4.5.3: Phase angle of a system with interpolated poles .................................................................. 4.60
Fig. 4.5.4: Envelope delay of a system with interpolated poles ............................................................ 4.61
Fig. 4.5.5: Step response of a system with interpolated poles .............................................................. 4.62
Fig. 4.6.1: Comparison of frequency responses of systems with staggered vs. repeated pole pairs ...... 4.63
Fig. 4.6.2: Step response comparison of systems with staggered vs. repeated pole pairs ..................... 4.64
Fig. 4.6.3: Individual stage step response of a 3-stage, 5-pole system ................................................. 4.66
Fig. 4.6.4: Step response of the complete 3-stage, 5-pole system, reverse pole order .......................... 4.66
Fig. 4.6.5: Step response of the complete 3-stage, 5-pole system, correct pole order .......................... 4.67

- 4.4 -

List of Tables:

Table 4.1.1: Values of the upper cut off frequency of a multi-stage amplifier for n = 1–10 ................ 4.11
Table 4.1.2: Values of relative rise time of a multi-stage amplifier for n = 1–10 ................................. 4.15
Table 4.2.1: Values of the lower cut off frequency of an AC coupled amplifier for n = 1–10 ............. 4.23
Table 4.3.1: Butterworth poles of order n = 1–10 ................................................................................ 4.35
Table 4.4.1: Relative bandwidth improvement of systems with Bessel poles ....................................... 4.43
Table 4.4.2: Relative rise time improvement of systems with Bessel poles .......................................... 4.46
Table 4.4.3: Bessel poles (equal envelope delay) of order n = 1–10 .................................................... 4.48
Table 4.4.4: Bessel poles (equal cut off frequency) of order n = 1–10 ................................................. 4.54
Table 4.5.1: Modified Bessel poles (equal asymptote as Butterworth) ................................................. 4.58

- 4.5 -

4.0 Introduction

In a majority of cases the desired gain × bandwidth product is not achievable
with a single transistor amplifier stage. Therefore more stages must be connected in
cascade. But to do it correctly we must find the answer to several questions:

• Should all stages be equal or different?
• What is the optimal pole pattern for obtaining a desired response?
• Is it important which pole (pair) is assigned to which stage?
• Is it better to use many simple (first- and second-order) stages or is it worth
  the trouble to try more complex (third-, fourth- or higher order) stages?
• What is the optimum single stage gain to achieve the greatest possible
  gain × bandwidth product for a given number of stages?
• Is it possible to construct an ideal multi-stage amplifier with either
  maximally flat amplitude (MFA) or maximally flat envelope delay (MFED)
  response and how close to the ideal response can we come?

These are the main questions which we shall try to answer in this part.
In Sec. 4.1 we discuss a cascade of identical DC coupled amplifier stages, with
loads consisting of a parallel connection of a resistance and a (stray) capacitance. There
we derive the formula for the calculation of an optimum number of amplifying stages to
obtain the required gain with the smallest rise time possible for the complete amplifier.
Next we derive the expression for the optimum gain of an individual amplifying
stage of a multi-stage amplifier in order to achieve the smallest possible rise time. We
also discuss the effect of AC coupling between particular stages by means of a simple
RC network.
Butterworth poles, which are needed to achieve an MFA response, are derived
next. This leads to the discussion of the (im)possibility to design an ideal MFA
amplifier.
Then we derive the Bessel poles which provide the MFED response. Since they
are derived from the condition for a unit envelope delay, the upper cut off frequency
increases with the number of poles. Therefore we also present the derivation of two
different pole normalizations: to equal cut off frequency and to equal stop band
asymptote. We discuss the (im)possibility of designing an amplifier with the frequency
response approaching an ideal Gaussian curve. Further we discuss the interpolation
between the Bessel and the Butterworth poles.
Finally, we explain the merit of using staggered Bessel poles versus repeated
second-order Bessel pole pairs.
Wherever practical, we calculate and plot the frequency, phase, group delay and
step response to allow a quick comparison of different concepts.

- 4.7 -

4.1 A Cascade of Identical, DC Coupled, RC Loaded Stages

A multi-stage amplifier with DC coupled, RC loaded stages is shown in
Fig. 4.1.1. All stages are assumed to be identical. Junction field effect transistors
(JFETs) are being used as active devices, since we want to focus on essential design
problems; with bipolar junction transistors (BJTs), we would have to consider the
loading effect of a relatively complicated base input impedance [Ref. 4.1].
At each stage load the capacitance C represents the sum of all the stray
capacitances along with the C_gd (1 − A_k) equivalent capacitance (where C_gd is the
gate–drain capacitance and A_k is the voltage gain of the individual stage). By doing so
we obtain a simple parallel RC load in each stage. The input resistance of a JFET is
many orders of magnitude higher than the loading resistor R, so we can neglect it. All
loading resistors are of equal value and so are the mutual conductances gm of all the
JFETs; therefore all individual gains A_k are equal as well. Consequently, all the half
power frequencies ω_hk and all the rise times τ_rk of the individual stages are also equal.
In order to simplify the circuit further, we have not drawn the bias voltages and the
power supply (which must represent a short circuit for AC signals; obviously, a short at
DC is not very useful!).

Fig. 4.1.1: A multi-stage amplifier as a cascade of identical, DC coupled, RC loaded stages.

The voltage gain of an individual stage is:

    A_k = gm R / (1 + jωRC)                                            (4.1.1)

with the magnitude:

    |A_k| = gm R / √(1 + (ω/ω_h)²)                                     (4.1.2)

where:
    gm = mutual conductance of the JFET, [S] (siemens, [S] = [1/Ω]);
    ω_h = 1/RC = upper half power frequency of an individual stage, [rad/s].

4.1.1 Frequency Response and the Upper Half Power Frequency

We have n equal stages with equal gains: A_1 = A_2 = ⋯ = A_n = A_k. The gain
of the complete amplifier is then:

    A = A_1 · A_2 · A_3 ⋯ A_n = A_k^n = [ gm R / (1 + jωRC) ]^n        (4.1.3)

- 4.9 -

The magnitude is:

    |A| = [ gm R / √(1 + (ω/ω_h)²) ]^n                                 (4.1.4)

To be able to compare the bandwidth of the multi-stage amplifier for different
number of stages, we must normalize the magnitude by dividing Eq. 4.1.4 by the system
DC gain A_0 = (gm R)^n. The plots are shown in Fig. 4.1.2. It is evident that the system
bandwidth, ω_H, shrinks with each additional amplifying stage.

Fig. 4.1.2: Frequency response of an n-stage amplifier (n = 1, 2, …, 10). To compare the
bandwidth, the gain was normalized, i.e., divided by the system DC gain, (gm R)^n. For each n,
the bandwidth (the crossing of the 0.707 level) shrinks by √(2^(1/n) − 1).

The upper half power frequency of the amplifier can be calculated by a simple relation:

    [ 1 / √(1 + (ω_H/ω_h)²) ]^n = 1/√2                                 (4.1.5)

By squaring we obtain:

    [ 1 + (ω_H/ω_h)² ]^n = 2   ⇒   (ω_H/ω_h)² = 2^(1/n) − 1           (4.1.6)

The upper half power frequency of the complete n-stage amplifier is:

    ω_H = ω_h √(2^(1/n) − 1)                                           (4.1.7)

- 4.10 -

At high frequencies, the first stage response slope approaches the −6 dB/octave
asymptote (−20 dB/decade). The meaning of this slope is explained in Fig. 4.1.3. For
the second stage the slope is twice as steep, and for the nth stage it is n times steeper.

Fig. 4.1.3: The first-order system response and its asymptotes. Below the cut off, the
asymptote is the level equal to the system gain at DC (normalized here to 0 dB). Above the
cut off, the slope is −6 dB/octave (an octave is a frequency span from f to 2f), which is
also equal to −20 dB/decade (a frequency decade is a span from f to 10 f).

The values of ω_H/ω_h for n = 1–10 are reported in Table 4.1.1.

Table 4.1.1

  n          1      2      3      4      5      6      7      8      9      10
  ω_H/ω_h  1.000  0.644  0.510  0.435  0.386  0.350  0.323  0.301  0.283  0.268

With ten equal stages connected in cascade the bandwidth is reduced to a poor
0.268 ω_h; such an amplifier is definitely not very efficient for wideband amplification.
Alternatively, in order to preserve the bandwidth an n-stage amplifier should
have all its capacitors reduced by the same factor, √(2^(1/n) − 1). But in wideband
amplifiers we already strive to work with stray capacitances only, so this approach is
not a solution.
Nevertheless, the amplifier in Fig. 4.1.1 is the basis for more efficient amplifier
configurations, which we shall discuss later.

- 4.11 -

4.1.2 Phase Response

Each individual stage of the amplifier in Fig. 4.1.1 has a frequency dependent
phase angle:

    φ_k = arctan [ Im{F(jω)} / Re{F(jω)} ] = arctan(−ω/ω_h)            (4.1.8)

where F(jω) is taken from Eq. 4.1.1. For n equal stages the total phase angle is simply
n times as much:

    φ_n = n arctan(−ω/ω_h)                                             (4.1.9)

The phase responses are plotted in Fig. 4.1.4. Note the high frequency
asymptotic phase shift increasing by π/2 (or 90°) for each n. Also note the shift at
ω = ω_h being exactly −n·π/4, in spite of a reduced ω_H for each n.

Fig. 4.1.4: Phase angle of the amplifier in Fig. 4.1.1, for n = 1–10 amplifying stages.

4.1.3 Envelope Delay

For a single amplifying stage (n = 1) the envelope delay is the frequency
derivative of the phase, τ_en = dφ_n/dω (where φ_n is given by Eq. 4.1.9). The
normalized single stage envelope delay is:

    τ_e ω_h = −1 / (1 + (ω/ω_h)²)                                      (4.1.10)

- 4.12 -

and for n equal stages:

    τ_en ω_h = −n / (1 + (ω/ω_h)²)                                     (4.1.11)

Fig. 4.1.5 shows the frequency dependent envelope delay for n = 1–10. Note the
delay at ω = ω_h being exactly 1/2 of the low frequency asymptotic value.

Fig. 4.1.5: Envelope delay of the amplifier in Fig. 4.1.1, for n = 1–10 amplifying stages. The
delay at ω = ω_h is 1/2 of the low frequency asymptotic value. Note that if we were using f/f_h
for the abscissa, we would have to divide the τ_e scale by 2π.
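Eq. 4.1.11 is easy to cross-check by differentiating the phase of Eq. 4.1.9 numerically; the Python fragment below (our addition, with ω_h normalized to 1) compares the two for n = 4:

```python
import math

def phase(n, w):
    # total phase of n cascaded identical stages, Eq. 4.1.9 (omega_h = 1)
    return n * math.atan(-w)

def env_delay(n, w, dw=1e-6):
    # envelope delay as the numerical frequency derivative of the phase
    return (phase(n, w + dw) - phase(n, w - dw)) / (2.0 * dw)

n = 4
for w in (0.0, 1.0, 2.0):
    closed = -n / (1.0 + w * w)              # closed form, Eq. 4.1.11
    print(f"w/wh = {w}: numerical {env_delay(n, w):.6f}, closed {closed:.6f}")
```

At ω = ω_h the delay is −2, exactly half of the low frequency value −4, as stated above.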

4.1.4 Step Response

To obtain the step response, the amplifier in Fig. 4.1.1 must be driven by the
unit step function:
Fig. 4.1.6: Amplifier with n equal DC coupled stages, excited by the unit step.

We can derive the step response expression from Eq. 4.1.1 and Eq. 4.1.3. In
order to simplify and generalize the expression we shall normalize the magnitude by
dividing the transfer function by the DC gain, gm R, and normalize the frequency by
setting ω_h = 1/RC = 1. Since we shall use the L⁻¹ transform we shall replace the
variable jω by the complex variable s = σ + jω.

- 4.13 -

With all these changes we obtain:

    F(s) = 1 / (1 + s)^n                                               (4.1.12)

The amplifier input is excited by the unit step, therefore we must multiply the
above formula by the unit step operator 1/s:

    G(s) = 1 / [ s (1 + s)^n ]                                         (4.1.13)

The corresponding function in the time domain is:

    g(t) = L⁻¹{G(s)} = Σ res [ e^(st) / ( s (1 + s)^n ) ]              (4.1.14)

We have two residues. The first one does not depend on n:

    res_0 = lim_(s→0) s · [ e^(st) / ( s (1 + s)^n ) ] = 1

whilst the second does:

    res_1 = lim_(s→−1) [1/(n−1)!] · d^(n−1)/ds^(n−1) [ (1 + s)^n · e^(st) / ( s (1 + s)^n ) ]

          = lim_(s→−1) [1/(n−1)!] · d^(n−1)/ds^(n−1) ( e^(st) / s )    (4.1.15)

Since res_1 depends on n, for n = 1 we obtain:

    res_1 |_(n=1) = −e^(−t)                                            (4.1.16)

for n = 2:

    res_1 |_(n=2) = −e^(−t) (1 + t)                                    (4.1.17)

for n = 3:

    res_1 |_(n=3) = −e^(−t) ( 1 + t + t²/2 )                           (4.1.18)

… etc.
The general expression for the step response for any n is:

    g_n(t) = L⁻¹{G(s)} = res_0 + res_1 = 1 − e^(−t) Σ_(k=1)^(n) t^(k−1)/(k−1)!   (4.1.19)

Here we must consider that 0! = 1, by definition.
As an example, by inserting n = 5 into Eq. 4.1.19 we obtain:

    g_5(t) = 1 − e^(−t) ( 1 + t/1! + t²/2! + t³/3! + t⁴/4! )           (4.1.20)

- 4.14 -

The step response plots for n = 1–10, calculated by Eq. 4.1.19, are shown in
Fig. 4.1.7. Note that there is no overshoot in any of the curves. Unfortunately, the
efficiency of this kind of amplifier in the sense of the ‘bandwidth per number of stages’
is poor, since it has no peaking networks which would prevent the decrease of
bandwidth with n.

Fig. 4.1.7: Step response of the amplifier in Fig. 4.1.6, for n = 1–10 amplifying stages.

4.1.5 Rise Time Calculation

In the case of a multi-stage amplifier, where each particular stage has its
respective rise time, τ_r1, τ_r2, …, τ_rn, we calculate the system’s rise time [Ref. 4.2] as:

    τ_r = √( τ_r1² + τ_r2² + τ_r3² + ⋯ + τ_rn² )                       (4.1.21)

In Part 2, Sec. 2.1.1, Eq. 2.1.1–4, we have calculated the rise time of an
amplifier with a simple RC load to be τ_r1 = 2.20 RC. Since here we have n equal
stages the rise time of the complete amplifier is:

    τ_r = τ_r1 √n = 2.20 RC √n                                         (4.1.22)

Table 4.1.2 shows the rise time increasing with the number of stages:

Table 4.1.2

  n            1     2     3     4     5     6     7     8     9     10
  τ_rn/τ_r1  1.00  1.41  1.73  2.00  2.24  2.45  2.65  2.83  3.00  3.16

- 4.15 -

4.1.6 Slew Rate Limit

The equations derived so far describe the small signal properties of an amplifier.
If the signal amplitude is increased the maximum slope of the output signal .@o Î.>
becomes limited by the maximum current available to charge any capacitance present at
the particular node. The amplifier stage which must handle the largest signal is the one
which runs into the slew rate limiting first; usually it is the output stage.
To find the slew rate limit we drive the amplifier by a sinusoidal signal and
increase the input amplitude until the output amplitude just begins to saturate; then we
increase the frequency until we notice that the middle part of the sinusoidal waveform
becomes distorted (changing linearly with time) and then decrease the frequency until
the distortion just disappears. That frequency is equal to the full power bandwidth =FP
(in radians per second, [rad/s]). Generally, an amplifier need not have the positive and
negative slope equally steep; then it is the less steep slope to set the limit, which is:
.@o Momax
œ (4.1.23)
.> G
Here Momax is the maximum output current available to drive the loading capaciatnce G .
If @o is a sinusoidal signal of angular frequency =FP and amplitude Zmax , such
that @o œ Zmax sin =FP >, the slope varies with time as:
.ÐZmax sin =FP >Ñ
œ Zmax =FP cos =FP > (4.1.24)
.>
and it has a maximum at lcos =FP >l œ " (which is at > œ ! „ 1Î=FP ; see Fig. 4.1.8a).
Therefore:

slew rate: WV œ Zmax =FP (4.1.25)

The slew rate is usually expressed in volts per microsecond [VÎ.s]; for contemporary
amplifiers a more appropriate figure would be volts per nanosecond [VÎns].
If we increase the signal frequency beyond =FP the waveform will eventually be
distorted into a linear ramp shape, but reduced in amplitude because the slope of a
sinusoidal signal is reduced to zero at the peak voltage „ Zmax . However, if driven by a
step signal of the same amplitude the linear ramp will span the full amplitude range. We
need to find the total slewing time between −Zmax and Zmax ; by equating the small
signal derivative .@o Î.> with the large signal slewing ?Z Î?> we find:
WV œ .@o Î.> œ ?Z Î?> œ # Zmax Î>slew (4.1.26)
The slewing time is then:
>slew œ # Zmax ÎÐZmax =FP Ñ œ #Î=FP œ XFP Î1 (4.1.27)
where XFP œ #1Î=FP , i.e., the period of the full power bandwidth sinewave signal.


As shown in Fig. 4.1.8b, the frequency of a full amplitude triangular waveform
(such as the one resulting from a square wave excitation) will be higher than =FP .
Fig. 4.1.8: Definitions of Slew rate limiting parameters; a) the slew rate is equal to the highest
.@Î.> of the full-power undistorted sine wave; b) the slewing time defined as the large signal
step response; the equivalent slewing frequency (of the triangle waveform) is higher than =FP of
the sine wave.

Of course, this is an idealized waveform, since in reality the finite amplifier
bandwidth will round the sharp corners. Nevertheless, the large signal step response is
a good starting point to calculate the rise time in slewing conditions 7rS . From "!% to
*!% of #Zmax the output voltage rises by !Þ) ‚ #Zmax . So the result of Eq. 4.1.27 must
be multiplied by !Þ) to obtain:

7rS œ !Þ) >slew œ !Þ) XFP Î1 ¸ !Þ#&%' XFP (4.1.28)

Eq. 4.1.25 is generally used to characterize operational amplifiers, whilst for
wideband and pulse amplifiers we prefer Eq. 4.1.28. It is very important to distinguish
between the two definitions, since most technical specifications only tacitly assume
one or the other.
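The chain of relations in Eqs. 4.1.23–4.1.28 can be sketched numerically. In the Python snippet below (an added illustration; the current, capacitance, and voltage values are arbitrary, not taken from the text) the slewing time comes out equal to XFP Î1, and the 10%–90% slewing rise time is about !Þ#&%' XFP :

```python
import math

# Numerical sketch of Eqs. 4.1.23-4.1.28; the component values below
# are arbitrary illustration values (not from the text).
I_omax = 20e-3      # maximum output current [A]
C      = 10e-12     # loading capacitance [F]
V_max  = 1.0        # peak output voltage [V]

SR     = I_omax / C           # slew rate [V/s], Eq. 4.1.23
w_FP   = SR / V_max           # full power bandwidth [rad/s], Eq. 4.1.25
T_FP   = 2 * math.pi / w_FP   # period of the full-power sine wave
t_slew = 2 * V_max / SR       # slewing time, Eq. 4.1.26; equals T_FP/pi
tau_rS = 0.8 * t_slew         # 10%-90% slewing rise time, Eq. 4.1.28

print(SR, w_FP, t_slew, tau_rS)
```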

4.1.7 Optimum Single Stage Gain and Optimum Number of Stages

The 8-stage cascade amplifier of Fig. 4.1.6 has the voltage gain E" œ @o" Î@i" ,
the second E# œ @o# Î@i# œ @o# Î@o" and so on, up to E8 œ @o8 Î@i8 œ @o8 Î@oÐ8"Ñ .
The total gain is the product of individual stage gains:
E œ @o8 Î@i" œ E" † E# â E8 (4.1.29)
If all the amplifying stages are identical, we denote the individual stage gain as
E5 , the loading resistors V, and the loading capacitances G . Then the total gain is:

E œ aE5 b8 (4.1.30)


The rise time of the complete multi-stage amplifier is equal to the square root of
the sum of the individual rise times squared, but since the amplitude after each gain
stage is different, we must normalize the rise times by multiplying each with its own
gain factor:
7r œ Ë !83œ" aE3 7r3 b# (4.1.31)

We have assumed that all stages have identical gain, E3 œ E5 , and rise time, 7r3 œ 7r5 .
Therefore:
7r œ È8 ÐE5 7r5 Ñ# œ È8 E5 7r5 œ È8 E5 #Þ# VG (4.1.32)

where 7r5 is the rise time of an individual stage, as calculated in Part 2. By considering
Eq. 4.1.30, we obtain the following relation:
7r Î7r5 œ È8 E"Î8 (4.1.33)

We have plotted this relation in Fig. 4.1.9, using the system total gain E as the
parameter. Note that, in order to see the function 7r a8b better, we have assumed a
continuous 8; of course, we can not have, say, 4.63 stages — 8 must be an integer.


Fig. 4.1.9: Minimal relative rise time as a function of the number of stages 8
and the total system gain E. Close to the minima the curves are relatively flat,
so in practice we can trade off, say, a 10% increase in the system rise time and
reduce the required number of stages accordingly; i.e., to achieve the gain of
100, only 5 stages could be used instead of 9, with a slight rise time increaseÞ

From this diagram we can find the optimum number of the amplifying stages
8opt if the total system gain E is known. These optima lie on the valleys of the curves
and in the following discussion we will derive the necessary formulae.


The ratio E5 Î7r5 characterizes the efficiency of an amplifier stage. If we want to
design a multi-stage amplifier with the smallest possible rise time, we express 7r from
Eq. 4.1.33, differentiate it with respect to 8, and equate the result to zero to find the
minimum:
` 7r Î` 8 œ ` ˆÈ8 E"Î8 7r5 ‰ Î ` 8 œ ! (4.1.34)

The differentiation is solved as:

7r5 ’ E"Î8 Î Ð# È8 Ñ − ÐÈ8 Î8# Ñ E"Î8 ln E “ œ ! (4.1.35)

Because neither 7r5 nor E are zero we equate the expression in parentheses to zero:
"ÎÐ# È8 Ñ − ÐÈ8 Î8# Ñ ln E œ ! (4.1.36)

By multiplying this by # È8 , we obtain an important intermediate result:

" − Ð#Î8Ñ ln E œ ! Ê 8 œ # ln E (4.1.37)
Therefore the optimum number of amplifying stages for a given total gain E is:

" Ÿ 8opt Ÿ inta# ln Eb (4.1.38)

and since we can not have, say, 3.47 amplifying stages, we round the result to the
nearest integer, the smallest obviously being ".
On the basis of this simple relation we can draw the line a in Fig. 4.1.10 for a
quick estimation of the number of amplifying stages necessary to obtain the smallest
rise time if the total system gain E is known. Again, the required number of amplifying
stages can be reduced in practice, as indicated in Fig. 4.1.10 by the line b, without
significantly increasing the rise time. For reasons of economy, the simplest
systems are often designed far from optimum, as indicated by the bars and the line c.
Eq. 4.1.33 and 4.1.38 and the corresponding diagrams are valid also for peaking
stages, although peaking stages can be designed much more efficiently, as we shall see.
From Eq. 4.1.38 we can find the optimal gain value of the individual stage,
independent of the actual number of stages in the system. For 8 equal stages it is:

E5 œ E"Î8 œ E"ÎÐ# ln EÑ (4.1.39)

By taking a logarithm of this expression, we obtain:


ln E5 œ c"ÎÐ# ln EÑd ln E œ "Î# (4.1.40)
The optimal individual stage gain for the total minimal rise time is then:

E5 opt œ e"Î# œ Èe ¶ "Þ'& (4.1.41)


This expression gives us the optimal value of the gain of the individual
amplifying stage which minimizes the total rise time of a multi-stage amplifier. In
practice we usually take a higher value, say, between 2 and 4, in order to decrease the
cost and simplify the design. Eq. 4.1.41 can also be used for peaking stages.
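As a numerical illustration of Eqs. 4.1.33, 4.1.38, and 4.1.41 (a Python sketch added here, not from the original text), a total gain of 100 calls for 9 stages, each with a gain close to Èe ¶ "Þ'&, and reducing the cascade to 5 stages costs only about 10% in rise time, exactly the trade-off shown in Fig. 4.1.9:

```python
import math

def n_opt(A):
    """Optimum number of stages for a total gain A (Eq. 4.1.38):
    n_opt = 2*ln(A), rounded to the nearest integer, at least 1."""
    return max(1, round(2 * math.log(A)))

def relative_rise_time(n, A):
    """tau_r / tau_rk for n identical stages of total gain A (Eq. 4.1.33)."""
    return math.sqrt(n) * A ** (1.0 / n)

A = 100.0
n = n_opt(A)                 # 9 stages for a total gain of 100
print(n, A ** (1.0 / n))     # stage gain close to sqrt(e) = 1.65

# Trading roughly 10% of rise time for 4 fewer stages (see Fig. 4.1.9):
print(relative_rise_time(5, A) / relative_rise_time(n, A))
```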


Fig. 4.1.10: The optimal number of stages, 8, required to achieve the minimal
rise time, given the total system gain E, as calculated by Eq. 4.1.38, is shown
by line a. In practice, owing to economy reasons, we tend to use a lower
number; the line b shows the same +10% rise time trade off as in Fig. 4.1.9. In
low complexity systems we usually make even greater tradeoffs, as in c.


4.2 A Multi-stage Amplifier with Identical AC Coupled Stages

In Fig. 4.1.1 all the amplifying stages were DC coupled. In the era of vacuum
tube amplifiers, DC coupling was generally avoided to prevent drift caused by tube
aging, unstable cathode heater voltages (temperature changes), hum, and poor insulation
between the heater and cathode. With the appearance of bipolar transistors and FETs,
only the temperature dependent drift remained on this nasty list. Because of it, many
TV video and broadband RF amplifiers still use AC coupling between amplifying stages.
However, AC coupling introduces other undesirable properties, which in certain
cases might not be acceptable. It is therefore interesting to investigate a multi-stage
amplifier with equal stages, similar to that in Fig. 4.1.1, except that all the stages are AC
coupled. In Fig. 4.2.1 we see a simplified circuit diagram of such amplifier and here we
are interested in its low-frequency performance.

Fig. 4.2.1: Multi-stage amplifier with AC coupled stages. Again, to simplify
the analysis, the power supply and the bias voltages are not shown.

Since we want to focus on essential problems only, here, too, we use FETs,
instead of BJTs, in order to avoid the complicated expression for the base input
impedance of each stage. Moreover, in a wideband amplifier we can assume VL ¥ V8 ,
so we shall neglect the effect of V8 on gain. On the other hand, V8 and G8 set the low
frequency limit of each stage, which is =" œ "ÎV8 G8 œ "ÎVG , if all stages are
identical. Usually, =" is many orders of magnitude below =h , so we can neglect the
stray and input capacitances (both effectively in parallel to the loading resistors VL ) as
well. Thus, near =" , the voltage gain of each stage is:
E8 œ gm VL Ð4 =Î=" Ñ Î Ð"  4 =Î=" Ñ (4.2.1)
and the magnitude is:
kE8 k œ gm VL Ð=Î=" Ñ Î È"  Ð=Î=" Ñ# (4.2.2)

With all input time constants equal to VG , the total system gain is E8 to the 8th power:
E œ ’gm VL Ð4 =Î=" Ñ Î Ð"  4 =Î=" Ñ“ 8 (4.2.3)
with the magnitude:
lEl œ ’gm VL Ð=Î=" Ñ Î È"  Ð=Î=" Ñ# “ 8 (4.2.4)


4.2.1 Frequency Response and Lower Half Power Frequency

In Fig. 4.2.2 we show the frequency response plots according to Eq. 4.2.4,
normalized in amplitude by dividing it by agm VL b8 .


Fig. 4.2.2: Frequency response magnitude of the AC coupled amplifier for 8 œ "–"!. The
frequency scale is normalized to the lower cut off frequency =" of the single stage.

It is evident that the lower half power frequency =L of the complete amplifier
increases with the number of stages. We can express =L as a function of 8 from:
’Ð=L Î=" Ñ Î È"  Ð=L Î=" Ñ# “ 8 œ "ÎÈ# (4.2.5)

By eliminating the fractions:

c"  Ð=L Î=" Ñ# d 8 œ # Ð=L Î=" Ñ#8 (4.2.6)

and taking the 8th root:
"  Ð=L Î=" Ñ# œ c# Ð=L Î=" Ñ#8 d "Î8 œ #"Î8 Ð=L Î=" Ñ# (4.2.7)

and rearranging a little:


Ð=L Î=" Ñ# Ð#"Î8 − "Ñ œ " (4.2.8)
we obtain the lower half power frequency of the complete multi-stage amplifier:

=L œ =" Î È#"Î8 − " (4.2.9)


We normalize this equation in frequency by setting =" œ "ÎVG œ ". The
normalized values of the lower half power frequency for 8 equal stages, 8 œ "–"!, are
shown in Table 4.2.1:
Table 4.2.1
8 " # $ % & ' ( ) * "!
=L "Þ!!! "Þ&&% "Þ*'" #Þ#** #Þ&*$ #Þ)&) $Þ"!! $Þ$$% $Þ&$% $Þ($$
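Eq. 4.2.9 can be checked against Table 4.2.1 with a one-line function (a Python check added here for illustration), using the normalization =" œ "ÎVG œ ":

```python
# Numerical check of Eq. 4.2.9 against Table 4.2.1, with the single
# stage cutoff normalized to omega_1 = 1/RC = 1.
def omega_L(n):
    """Lower half power frequency of n identical AC coupled stages."""
    return 1.0 / (2.0 ** (1.0 / n) - 1.0) ** 0.5

for n in range(1, 11):
    print(n, round(omega_L(n), 3))
```

The printed values (1.000, 1.554, ..., 3.733) match the table row by row.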

4.2.2 Phase Response

The phase shift for a single stage is:

:3 œ arctana=" Î=b (4.2.10)

The phase shift is positive and this means a phase advance. For 8 stages the total
phase advance is simply 8 times as much:

:8 œ 8 arctana=" Î=b (4.2.11)

The corresponding plots for 8 œ "–"! are shown in Fig. 4.2.3. Note the phase
shift at = œ =" being exactly "Î# the low frequency asymptotic value.

Fig. 4.2.3: Phase angle as a function of frequency for the AC coupled 8-stage amplifier,
8 œ "–"!. The frequency scale is normalized to the lower cutoff frequency of the single stage.

We will omit the calculation of envelope delay since in the low frequency region
this aspect of amplifier performance is not very important.


4.2.3 Step Response

By replacing the sine wave generator in Fig. 4.2.1 with a unit step generator, we
obtain the time domain step response of the AC coupled multi-stage amplifier. We want
the plots to be normalized in amplitude, so we normalize Eq. 4.2.3 by dividing it by
agm VL b8 , the total amplifier gain at DC. We will use the _" transform, so we replace
the normalized variable 4=Î=" by the complex variable = œ 5  4=:
J8 Ð=Ñ œ c = Î Ð"  =Ñ d 8 (4.2.12)
The system’s frequency response must be multiplied by the unit step operator "Î=:
K8 Ð=Ñ œ Ð"Î=Ñ c = Î Ð"  =Ñ d 8 (4.2.13)
Now we apply the _" transformation and obtain the time domain step response:
g8 Ð>Ñ œ _" ˜K8 Ð=Ñ™ œ res K8 Ð=Ñ e=> œ res =8−" e=> Î Ð"  =Ñ8 (4.2.14)

Since we have here a single pole repeated 8 times we have only a single residue,
but — as we will see — it is composed of 8 summands. A general expression for the
residue for an arbitrary 8 is:
g8 Ð>Ñ œ lim=p−" c"ÎÐ8 − "Ñxd † Ð. 8−" Î. =8−" Ñ ”Ð"  =Ñ8 =8−" e=> Î Ð"  =Ñ8 • (4.2.15)
or, simplified:
g8 Ð>Ñ œ c"ÎÐ8 − "Ñxd † Ð. 8−" Î. =8−" Ñ =8−" e=> ‘ º=p−" (4.2.16)

A few examples:
8 œ " Ê g" Ð>Ñ œ Ð"Î!xÑ e−> œ e−>
8 œ # Ê g# Ð>Ñ œ Ð"Î"xÑ ˆe−> − > e−> ‰ œ e−> Ð" − >Ñ
8 œ $ Ê g$ Ð>Ñ œ e−> ˆ" − # >  !Þ& ># ‰
8 œ % Ê g% Ð>Ñ œ e−> ˆ" − $ >  "Þ& ># − !Þ"''( >$ ‰
8 œ & Ê g& Ð>Ñ œ e−> ˆ" − % >  $ ># − !Þ'''( >$  !Þ!%"( >% ‰ (4.2.17)

The coefficients decrease rapidly with an increasing number of stages 8; e.g., the
last summand for 8 œ "! is −#Þ(&' † "!−' >* .
The corresponding plots are drawn in Fig. 4.2.4. The plots for 8 œ '–* are not
shown, since the individual curves would be impossible to distinguish. We note that the
8th -order response intersects the abscissa Ð8  "Ñ times.
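The examples of Eq. 4.2.17 follow a compact pattern, g8 Ð>Ñ œ e−> † D (−")5 GÐ8−", 5Ñ >5 Î5x for 5 œ ! á 8−", which is consistent with the listed coefficients (this closed form is an inference from the examples, verified numerically below in Python, not a formula quoted from the text):

```python
import math

def g_step(n, t):
    """Step response of n identical AC coupled stages (cf. Eq. 4.2.16).
    Closed form consistent with the examples of Eq. 4.2.17:
    g_n(t) = exp(-t) * sum_k (-1)^k * C(n-1, k) * t^k / k!,  k = 0..n-1,
    with t in units of RC."""
    s = sum((-1) ** k * math.comb(n - 1, k) * t ** k / math.factorial(k)
            for k in range(n))
    return math.exp(-t) * s

# The n = 5 coefficients 1, -4, +3, -0.6667, +0.0417 of Eq. 4.2.17:
coeff = [(-1) ** k * math.comb(4, k) / math.factorial(k) for k in range(5)]
print([round(c, 4) for c in coeff])
```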



Fig. 4.2.4: Step response of the multi-stage AC coupled amplifier for 8 œ "–& and 8 œ "!.

For pulse amplification only the short starting portions of the curves come into
consideration. An example for 8 œ ", $, and ) is shown in Fig. 4.2.5 for a pulse width
?> œ !Þ" VG.

Fig. 4.2.5: Pulse response of the AC coupled multi-stage amplifier (8 œ ", $, and )).


Note that the pulse in Fig. 4.2.5 sags, both on the leading and trailing edge, the
sag increasing with the number of stages. We conclude that the AC coupled amplifier of
Fig. 4.2.1 is not suitable for a faithful pulse amplification, except when the pulse
duration is very short in comparison with the time constant VG of a single amplifying
stage (say, ?> Ÿ !Þ!!" VG ).
Another undesirable property of the AC coupled amplifier is that the output
voltage makes 8  " damped oscillations when the pulse ends, no matter how short its
duration is. This is especially annoying because the input voltage is by now already
zero. The undesirable result is that the effective output DC level will depend on the
pulse repetition rate.
Since today the DC amplification technique has reached a very high quality
level, we can consider the AC coupled amplifier an inheritance from the era of
electronic tubes and thus almost obsolete. However, we still use AC coupled amplifiers
to avoid the drift in those cases where the deficiencies described are not important.


4.3 A Multi-stage Amplifier with Butterworth Poles (MFA)

In multi-stage amplifiers, like the one in Fig. 4.1.1, we can apply inductive
peaking at each stage. As we have seen in Part 2, Sec. 2.9, where we discussed the
shunt–series peaking circuit, the equations became very complicated because we had to
consider the mutual influence of the shunt and series peaking circuit. If both circuits are
separated by a buffer amplifier, the analysis is simplified. Basically, this was considered
by S. Butterworth in his article On the Theory of Filter Amplifiers in the review
Experimental Wireless & the Wireless Engineer in 1930 [Ref. 4.6]. Butterworth wrote
the article while serving in the British Navy, obviously not expecting that his technique
might one day also be applied to wideband amplifiers. His article became the basis of
filter design for generations of engineers, up to the present time.
The basic Butterworth equation, which, besides filters, can also be applied to
wideband amplifiers, either with a single or many stages, is:

J Ð=Ñ œ " Î ’"  4 Ð=Î=H Ñ8 “ (4.3.1)

where =H is the upper half power frequency of the (peaking) amplifier and 8 is an
integer, representing the number of stages. A network corresponding to this equation
has a maximally flat amplitude response (MFA). The magnitude of J Ð=Ñ is:

lJ Ð=Ñl œ " Î È"  Ð=Î=H Ñ# 8 (4.3.2)

The magnitude derivative, .¸J Ð=ѸÎ.= is zero at = œ !:

. ¸J Ð=Ѹ Î .= œ š −8 Ð=Î=H Ñ#8−" Î c"  Ð=Î=H Ñ#8 d$Î# › † Ð"Î=H Ñ ¹=œ! œ ! (4.3.3)

and not just the first derivative, but all the 8  " derivatives of an 8th -order system are
also zero at origin. This means that the filter is essentially flat at very low frequencies
(= ¥ =H ). The number of poles in Eq. 4.3.1 is equal to the parameter 8 and the flatness
of the frequency response in the passband also increases with 8. The parameter 8 is
called the system order. To derive the expression for the poles we start with the
denominator of Eq. 4.3.2, where the expression under the root can be simplified into:

"  a=Î=H b# 8 œ "  B# 8 (4.3.4)

Whenever this expression is equal to zero, we have a pole, J a4=b p „ _. Thus:

"  B# 8 œ ! or B# 8 œ −" (4.3.5)


The roots of these equations are the poles of Eq. 4.3.1 and they can be calculated
by the following general expression:
B œ #8È −" (4.3.6)

We solve this equation using De Moivre’s formula [Ref. 4.7]:

−" œ cos Ð1  # 1 ;Ñ  4 sin Ð1  # 1 ;Ñ (4.3.7)

and ; is either zero or a positive integer. Consequently:

=; œ #8È −" œ #8É cos Ð1  # 1 ;Ñ  4 sin Ð1  # 1 ;Ñ œ cos c1 Ð"  #;ÑÎ# 8d  4 sin c1 Ð"  #;ÑÎ# 8d (4.3.8)

If we insert the value !, ", #, á , Ð#8  "Ñ for ; , we obtain #8 roots. The roots
lie on a circle of radius < œ ", spaced by the angles 1Î#8. With this condition no pole is
repeated. One half ( œ 8) of the poles lie in the left side of the =-plane; these are the
poles of Eq. 4.3.1. None of the poles lies on the imaginary axis. The other half of the
poles lie in the positive half of the =-plane and they can be associated with the complex
conjugate of J a4 =b; as shown in Fig. 4.3.1, owing to the Hurwitz stability requirement,
they are not useful for our purpose.


Fig. 4.3.1: Impulse response of three different complex conjugate pole pairs: The real part
determines the system stability: ="a and s#a make the system unconditionally stable, since the
negative exponent forces the response to decrease with time; ="b and =#b make the system
conditionally stable, whilst ="c and =#c make it unstable.


This left and right half pole division is not arbitrary, but, as we have explained
in Part 1, it reflects the direction of energy flow. If an unconditionally stable system is
energized and then left alone, it will eventually dissipate all the energy into heat and
RF radiation, so it is lost (from the system point of view) and therefore we agree to give
it a negative sign. This is typical of dominantly resistive systems. On the other hand,
generators produce energy and we agree to give them a positive sign. In effect,
generators can be treated as negative resistors. Inductances and capacitances can not
dissipate energy, they can only store it in their associated electromagnetic fields (for a
while). We therefore assign the resistive and regenerative action to the real axis and the
inductive and capacitive action to the imaginary axis.
For example, if we take a two–pole system with poles forming a complex
conjugate pair, =" œ 5  4 = and =# œ 5 − 4 =, the system impulse response function
has the form:
0 Ð>Ñ œ e5> sin = > (4.3.9)

By referring to Fig. 4.3.1, let us first consider the poles ="a œ −!Þ)  4 and
=#a œ −!Þ) − 4, where = œ ". Their impulse function is a damped sinusoid:

0 Ð>Ñ œ e−!Þ) > sin = > (4.3.10)

This means that for any impulse disturbance the system reacts with a sinusoidal
oscillation (governed by =), exponentially damped (by the rate set by 5 ). Such behavior
is typical for an unconditionally stable system. If we move the poles to the imaginary
axis (5 œ !) so that ="b œ 4 and =#b œ −4 (again, = œ "), then an impulse excites the
system to a continuous sine wave:

0 Ð>Ñ œ sin = > (4.3.11)

If we push the poles further to the right side of the = plane, so that ="c œ !Þ)  4
and =#c œ !Þ) − 4, keeping = œ ", the slightest impulse disturbance, or even just the
system's own noise, excites an exponentially rising sine wave:

0 Ð>Ñ œ e!Þ) > sin = > (4.3.12)

The poles on the imaginary axis are characteristic of a sine wave oscillator, in
which we have the active components (amplifiers) set to make up for (and exactly
match) any energy lost in resistive components. The poles on the right side of the
=-plane also result in oscillations, but there the final amplitude is limited by the system
power supply voltages. Because the active components provide much more energy than
the system is capable of dissipating thermally, the top and bottom part of the waveform
will be saturated, thus limiting the energy produced. Since we are interested in the
design of amplifiers and not of oscillators, we shall not use the last two kinds of poles.
Let us return to the Butterworth poles. We want to find the general expression
for 8 poles on the left side of the =-plane. A general expression for a pole =; , derived
from Eq. 4.3.8 is:
=; œ cos );  4 sin ); (4.3.13)
where:
); œ 1 Ð"  #;Ñ Î # 8 (4.3.14)


The poles with the angle:


1Î# < ); < $ 1Î# (4.3.15)
lie in the left side of the =-plane. If we multiply Eq. 4.3.8 by 4, we rotate it by 1Î# and
thus for the first 8 poles we achieve the condition expressed in Eq. 4.3.13:
=; œ − sin c1 Ð"  #;ÑÎ# 8d  4 cos c1 Ð"  #;ÑÎ# 8d (4.3.16)
The parameter ; is an integer from !, ", á , Ð8  "Ñ. We would prefer to have
the poles starting with " and ending with 8. To do so, we introduce a new parameter
5 œ ;  " and consequently ; œ 5 − ". With this, we arrive at the final expression for
Butterworth poles:

=5 œ 55  4 =5 œ − sin cÐ# 5 − "Ñ 1Î# 8d  4 cos cÐ# 5 − "Ñ 1Î# 8d (4.3.17)

where 5 is an integer from " to 8. As shown in Fig. 4.3.2 (for 8 œ "–&), all these poles
lie on a semicircle with the radius < œ " in the left half of the =-plane:


Fig. 4.3.2: Butterworth poles for the system order 8 œ "–&.

The numerical values of the poles for systems of order 8 œ "–"!, together with
the corresponding angle ), are listed in Table 4.3.1. Obviously, if 8 is even the system
has complex conjugate pole pairs only. If 8 is odd, one of the poles is real, and in
the normalized presentation its value is =" œ =Î=H œ −". In the non-normalized
form, the value of the real pole is equal to −=H . Since =H is the radius of the circle on
which all the poles lie, we can calculate the upper half power frequency also from any
pole (for Butterworth poles only!):

=H œ l=i l œ É5i#  =i# (4.3.18)

or, when one of the poles (=" œ 5" ) is real:

=H œ l5" l (4.3.19)
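Eq. 4.3.17 is easy to evaluate directly; the Python function below (an added illustration) generates the normalized poles and confirms that they all have unit magnitude and negative real parts, matching Table 4.3.1:

```python
import math

def butterworth_poles(n):
    """Normalized Butterworth poles from Eq. 4.3.17 (omega_H = 1):
    s_k = -sin((2k-1)*pi/2n) + j*cos((2k-1)*pi/2n), k = 1..n."""
    return [complex(-math.sin((2 * k - 1) * math.pi / (2 * n)),
                     math.cos((2 * k - 1) * math.pi / (2 * n)))
            for k in range(1, n + 1)]

# n = 4: all poles lie on the unit circle in the left half plane,
# matching the n = 4 rows of Table 4.3.1 (-0.9239, -0.3827, ...)
for s in butterworth_poles(4):
    print(round(s.real, 4), round(s.imag, 4), round(abs(s), 4))
```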


4.3.1 Frequency Response

The normalized frequency response magnitude plots, expressed by Eq. 4.3.2,
with =H œ " and for 8 œ "–"!, are drawn in Fig. 4.3.3. Evidently the passband’s
flatness increases with increasing 8.

Fig. 4.3.3: Frequency response magnitude of 8th -order system with Butterworth poles, 8 œ "–"!.

We can write the frequency response of an amplifier with Butterworth poles of
order, say, 8 œ &, in three different ways. The general expression with poles is:

J& Ð=Ñ œ a−"b& =" =# =$ =% =& Î cÐ= − =" ÑÐ= − =# ÑÐ= − =$ ÑÐ= − =% ÑÐ= − =& Ñd (4.3.20)

with = œ 4 =Î=H and =i œ 5i  4 =i (the values of 5i and =i are listed in Table 4.3.1).
By multiplying all the expressions in parentheses, we obtain:
J& Ð=Ñ œ +! Î a=&  +% =%  +$ =$  +# =#  +" =  +! b (4.3.21)

where:
+% œ −a="  =#  =$  =%  =& b
+$ œ =" =#  =" =$  =" =%  =" =&  =# =$  =# =%  =# =&  =$ =%  =$ =&  =% =&
+# œ −a=" =# =$  =" =# =%  =" =# =&  â  =$ =% =& b
+" œ =# =$ =% =&  =" =$ =% =&  =" =# =% =&  =" =# =$ =&  =" =# =$ =%
+! œ −=" =# =$ =% =& (4.3.22)


If we use the normalized poles with the numerical values listed in Table 4.3.1 to
calculate the coefficients +! á +% , we obtain:
J& Ð=Ñ œ " Î a=&  $.#$'" =%  &.#$'" =$  &.#$'" =#  $.#$'" =  "b (4.3.23)
For the magnitude only, by applying Eq. 4.3.2, we have:
lJ& Ð=Ñl œ " Î É"  a=Î=H b"! (4.3.24)

The reason we took particular interest in the function with the normalized
numerical values of order 8 œ & is that in Sec. 4.5 we will compare it with the
function having Bessel poles of the same order.
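Multiplying out the pole factors of Eq. 4.3.20 reproduces the coefficients of Eq. 4.3.23; the plain-Python polynomial multiplication below (an added numerical check) does exactly that for 8 œ &:

```python
import math

# Multiplying out (s - s1)(s - s2)...(s - s5) for the n = 5 Butterworth
# poles of Eq. 4.3.17 recovers the coefficients of Eq. 4.3.23.
# Coefficients are kept in descending powers of s.
poles = [complex(-math.sin((2 * k - 1) * math.pi / 10),
                  math.cos((2 * k - 1) * math.pi / 10))
         for k in range(1, 6)]

coeffs = [1 + 0j]
for p in poles:
    new = [0j] * (len(coeffs) + 1)
    for i, c in enumerate(coeffs):
        new[i] += c          # existing term multiplied by s
        new[i + 1] -= c * p  # existing term multiplied by (-p)
    coeffs = new

# Imaginary parts cancel, leaving the real Butterworth polynomial:
print([round(c.real, 4) for c in coeffs])
# -> [1.0, 3.2361, 5.2361, 5.2361, 3.2361, 1.0]
```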

4.3.2 Phase Response

The general expression for the phase angle is:


: œ !8iœ" arctan cÐ=Î=H − =i Ñ Î 5i d (4.3.25)

For an odd number of poles the imaginary part of the first pole =" œ !. For the
remaining poles or in the case of even 8, we enter the complex conjugate pair
components: =i,i+" œ 5i „ 4 =i . The phase response plots are drawn in Fig. 4.3.4. By
comparing it with Fig. 4.1.4 we note that Butterworth poles result in a much steeper
phase slope near the system’s cut off frequency at = œ =H (which is even more evident
in the envelope delay).
Fig. 4.3.4: Phase angle of 8th -order system with Butterworth poles, 8 œ "–"!.


4.3.3 Envelope Delay

We obtain the expressions for envelope delay by making a frequency derivative
of Eq. 4.3.25:
7e =H œ !8iœ" 5i Î ’5i#  Ð=Î=H − =i Ñ# “ (4.3.26)

The envelope delay plots for 8 œ "–"! are shown in Fig. 4.3.5. Owing to the
steeper phase shift, the curves for 8 > " dip around the system cut off frequency. Those
frequencies are delayed more than the rest of the spectrum, thus revealing the system
resonance. Therefore we expect that amplifiers with Butterworth poles will exhibit an
increasing amount of ringing in the step response, a property not acceptable in pulse
amplification.
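The dip near cutoff is visible directly from Eq. 4.3.26; the Python snippet below (an added illustration, normalized to =H œ ") compares the delay at DC with the delay at the cutoff frequency for a 5th order system:

```python
import math

def butterworth_poles(n):
    # Eq. 4.3.17, normalized so that omega_H = 1
    return [complex(-math.sin((2 * k - 1) * math.pi / (2 * n)),
                     math.cos((2 * k - 1) * math.pi / (2 * n)))
            for k in range(1, n + 1)]

def envelope_delay(n, w):
    """Normalized envelope delay tau_e * omega_H of Eq. 4.3.26; since
    the sigma_i are negative the delay comes out negative, and it dips
    (grows in magnitude) near the cutoff w = 1, as in Fig. 4.3.5."""
    return sum(p.real / (p.real ** 2 + (w - p.imag) ** 2)
               for p in butterworth_poles(n))

# 5th order system: delay at DC vs. the deeper delay near cutoff
print(envelope_delay(5, 0.0), envelope_delay(5, 1.0))
```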
Fig. 4.3.5: Envelope delay of 8th -order system with Butterworth poles, 8 œ "–"!.

4.3.4 Step Response

Since we have 8 non-repeating poles we start with the frequency function in the
form which is suitable for the _" transform:
J Ð=Ñ œ a−"b8 =" =# â =8 Î cÐ= − =" ÑÐ= − =# Ñ â Ð= − =8 Ñd (4.3.27)

We multiply this by the unit step operator "Î= and obtain:


KÐ=Ñ œ a−"b8 =" =# â =8 Î c= Ð= − =" ÑÐ= − =# Ñ â Ð= − =8 Ñd (4.3.28)


To obtain the step response in the time domain we use the _" transform:
gÐ>Ñ œ _" ˜KÐ=Ñ™ œ !8iœ" resi š a−"b8 =" =# â =8 e=> Î c= Ð= − =" ÑÐ= − =# Ñ â Ð= − =8 Ñd › (4.3.29)

It would take too much space to list the complete analytical calculation for
systems with 1 to 10 poles. Some examples can be found in the Appendix 2.3. Here we
shall use the computer routines, which we develop and discuss in detail in Part 6. The
plots for 8 œ "–"! are shown in Fig. 4.3.6.
The plots confirm our expectation that amplifiers with Butterworth poles are not
suitable for pulse amplification. The main advantage of Butterworth poles is the flat
frequency response (MFA) in the passband (evident from the plots in Fig. 4.3.3). For
measuring sinusoidal signals in a wide range of frequencies, i.e., in an electronic
voltmeter, Butterworth poles offer the best solution.
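For distinct poles, the residue sum of Eq. 4.3.29 can be evaluated numerically in a few lines; the Python sketch below (an added illustration of the method, not the book's own Part 6 routines) computes the Butterworth step response and shows the overshoot growing with the order:

```python
import cmath, math

def butterworth_step(n, t):
    """Step response of an n-th order Butterworth system, obtained by
    summing the residues of Eq. 4.3.29 (all poles distinct)."""
    poles = [complex(-math.sin((2 * k - 1) * math.pi / (2 * n)),
                      math.cos((2 * k - 1) * math.pi / (2 * n)))
             for k in range(1, n + 1)]
    K = 1.0 + 0j                       # K = (-1)^n * s1 * s2 * ... * sn
    for p in poles:
        K *= -p
    g = 1.0 + 0j                       # residue at s = 0 equals 1
    for p in poles:
        d = p
        for q in poles:
            if q != p:
                d *= (p - q)
        g += K * cmath.exp(p * t) / d  # residue at the pole p
    return g.real

# The response settles to 1; the overshoot grows with n (cf. Fig. 4.3.6)
print(butterworth_step(2, 20.0))
print(max(butterworth_step(5, 0.01 * i) for i in range(3000)))
```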

Fig. 4.3.6: Step response of 8th -order system with Butterworth poles, 8 œ "–"!.


Table 4.3.1: Butterworth Poles

Order 8     5 [radÎ=]           = [radÎ=]      ) [°]

 "    −".!!!!          !.!!!!     ")!
 #    −!.(!(" „ !.(!("    ")! … %&.!!!!
 $    −".!!!!          !.!!!!     ")!
      −!.&!!! „ !.)''!    ")! … '!.!!!!
 %    −!.*#$* „ !.$)#(    ")! … ##.&!!!
      −!.$)#( „ !.*#$*    ")! … '(.&!!!
 &    −".!!!!          !.!!!!     ")!
      −!.)!*! „ !.&)()    ")! … $'.!!!!
      −!.$!*! „ !.*&""    ")! … (#.!!!!
 '    −!.*'&* „ !.#&))    ")! … "&.!!!!
      −!.(!(" „ !.(!("    ")! … %&.!!!!
      −!.#&)) „ !.*'&*    ")! … (&.!!!!
 (    −".!!!!          !.!!!!     ")!
      −!.*!"! „ !.%$$*    ")! … #&.("%$
      −!.'#$& „ !.()")    ")! … &".%#)'
      −!.###& „ !.*(%*    ")! … ((."%#*
 )    −!.*)!) „ !."*&"    ")! … "".#&!!
      −!.)$"& „ !.&&&'    ")! … $$.(&!!
      −!.&&&' „ !.)$"&    ")! … &'.#&!!
      −!."*&" „ !.*)!)    ")! … ().(&!!
 *    −".!!!!          !.!!!!     ")!
      −!.*$*( „ !.$%#!    ")! … #!.!!!!
      −!.(''! „ !.'%#)    ")! … %!.!!!!
      −!.&!!! „ !.)''!    ")! … '!.!!!!
      −!."($' „ !.*)%)    ")! … )!.!!!!
"!    −!.*)(( „ !."&'%    ")! … *.!!!!
      −!.)*"! „ !.%&%!    ")! … #(.!!!!
      −!.(!(" „ !.(!("    ")! … %&.!!!!
      −!.%&%! „ !.)*"!    ")! … '$.!!!!
      −!."&'% „ !.*)((    ")! … )".!!!!


4.3.5 Ideal MFA Filter; Paley–Wiener Criterion

The following discussion will be given in an abridged form, since a complete
derivation would divert us too much from the discussion of amplifiers. We are
interested in designing an amplifier with the ideal frequency response, maximally flat in
the passband and zero outside, as in Fig. 4.3.7 (shown also for negative frequencies),
expressed as:
EÐ=Ñ œ " for l=Î=H l < " à EÐ=Ñ œ ! for l=Î=H l > " (4.3.30)

Fig. 4.3.7: Ideal MFA frequency response.

For the time being we assume that the function EÐ=Ñ is real, and consequently it
has no phase shift. At the instant > œ ! we apply a unit step voltage to the input of the
amplifier (multiply Ea=b by the unit step operator "Î=). By applying the basic formula
for the _" transform (Part 1, Eq. 1.4.4), the output function in the time domain is the
integral of the sinÐ>ÑÎ> function [Ref. 4.2]:
gÐ>Ñ œ Ð"Î1Ñ ( Ðsin >Î>Ñ .> (integrated from −_ to >) (4.3.31)

The normalized plot of this integral is shown in Fig. 4.3.8. Here we have 50% of
the signal amplitude at the instant > œ !. Also, there is some response for > < !, before
we applied any step voltage to the amplifier input, which is impossible. Any physically
realizable amplifier would have some phase shift and an envelope delay, therefore the
step response would be shifted rightwards from the originÞ However, an infinite phase
shift and delay would be needed in order to have no response for time > < !.


Fig. 4.3.8: Step response of a network having the ideal frequency response of Fig. 4.3.7.
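The integral of Eq. 4.3.31 can be written as gÐ>Ñ œ "Î#  SiÐ>ÑÎ1, with Si the sine integral; the Python sketch below (an added numerical illustration using simple trapezoidal quadrature) shows the 50% amplitude at > œ ! and the nonzero pre-response:

```python
import math

def Si(t, steps=2000):
    """Sine integral by the trapezoidal rule; sin(x)/x -> 1 at x = 0."""
    if t == 0.0:
        return 0.0
    h = t / steps
    f = lambda x: 1.0 if x == 0.0 else math.sin(x) / x
    s = 0.5 * (f(0.0) + f(t)) + sum(f(i * h) for i in range(1, steps))
    return s * h

def ideal_step(t):
    """Eq. 4.3.31: step response of the ideal MFA filter of Fig. 4.3.7,
    g(t) = 1/2 + Si(t)/pi; note the nonzero response for t < 0."""
    return 0.5 + Si(t) / math.pi

print(ideal_step(0.0))    # exactly 0.5 at t = 0
print(ideal_step(-2.0))   # small but nonzero before the step
```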


What we would like to know is whether there is any phase response, linear or
not, which the amplifier should have in order to suit Eq. 4.3.30 without any response for
time >  !. The answer is negative and it was proved by R.E.A.C. Paley and N. Wiener.
Their criterion is given by an amplitude function [Ref. 4.2]:

( ¸log EÐ=Ѹ Î Ð"  =# Ñ .= < _ (integrated from −_ to _) (4.3.32)

Outside the range −=H < = < =H , EÐ=Ñ œ !, as required by Eq. 4.3.30, but the
magnitude in the numerator is infinite ( | log ! | œ _ ); therefore the condition expressed
by Eq. 4.3.32 is not met. Thus it is not possible to make an amplifier with a continuous
infinite attenuation in a certain frequency band (it is, nevertheless, possible to have an
infinite attenuation, but at distinct frequencies only). As we can derive from Eq. 4.3.32,
the problem is not the steepness of the frequency response curve at =H in Fig. 4.3.7, but
the requirement for an infinite attenuation everywhere outside the defined passband
=H  =  =H .
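The behavior of the Paley–Wiener integral can be explored numerically. A minimal sketch (Python; the substitution and all names are ours): with a floor of ε outside the passband the integral stays finite, growing only like |ln ε|, while a true zero makes the integrand infinite:

```python
import math

def pw_integral(eps, n=100000):
    """Paley-Wiener integral of Eq. 4.3.32 for a brick-wall response with
    A(w) = 1 inside |w| < 1 and A(w) = eps outside.  The substitution
    w = tan(u) turns it into the plain integral of |ln A(tan u)| over
    (-pi/2, pi/2), evaluated here by the midpoint rule."""
    a = -math.pi / 2
    h = math.pi / n
    total = 0.0
    for k in range(n):
        u = a + (k + 0.5) * h
        A = 1.0 if abs(math.tan(u)) < 1.0 else eps
        total += abs(math.log(A)) * h
    return total

for eps in (1e-2, 1e-6, 1e-12):
    print(eps, pw_integral(eps))   # finite, growing like |ln eps| * pi / 2
# For eps -> 0 the integrand |log 0| becomes infinite and the criterion
# fails: zero transmission over a whole band is unrealizable.
```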
If we allow that outside the passband EÐ=Ñ œ %, no matter how small % is, such
a frequency response is possible to achieve. In this case the corresponding phase
response must be [Ref. 4.2]:
"=
:Ð=Ñ œ ln k%k † ln º º (4.3.33)
"=

However, such an amplifier would still have a step response very similar to that
in Fig. 4.3.8, except that it would be shifted rightwards and there would be no response
for >  !. This is because we have almost entirely (down to %) and suddenly cut the
signal spectrum above =H . The overshoot is approximately * %. We have met a similar
situation in Part 1, Fig.1.2.7.a,b in connection with the square wave when we were
discussing the Gibbs’ phenomenon [Ref. 4.2].
Some readers may ask why the step response overshoot of some
systems with Butterworth poles in Fig. 4.3.6 exceeds 9%. The reason is the
corresponding non-linear phase response, resulting in a peak in the envelope delay, as
shown in Fig. 4.3.5. This is characteristic not just of Butterworth poles, but of any
pole pattern in which the magnitude and phase change even more steeply at cutoff,
e.g., Chebyshev Type I and elliptic (Cauer) systems.
We shall use Eq. 4.3.32 again when we discuss the possibility of obtaining
an ideal Gaussian response of the amplifier.


4.4 Derivation of Bessel Poles for MFED Response

If we want to preserve the waveform shape, the amplifier must pass all its
frequency components with the same delay. From the requirement for a constant delay
we can derive the system poles. The frequency response function having a constant
delay X [Ref. 4.8, 4.9] is of the form:

J Ð=Ñ œ e =X (4.4.1)

Let us normalize this expression by choosing a unit delay, X œ ". It is possible
to approximate e= by a rational function, where the denominator is a polynomial and
all its roots (the poles of J a=b) lie in the left half of the =-plane. In this case the
denominator is a so called Hurwitz polynomial [Ref. 4.10]. The approximation then
fulfils the constant delay condition expressed by Eq. 4.4.1 to a certain accuracy only up
to a certain frequency. The higher the polynomial degree, the higher is the accuracy.
We can write e= also by using the hyperbolic sine and cosine functions:
"
" sinh =
J Ð=Ñ œ œ (4.4.2)
sinh =  cosh = cosh =
"
sinh =
Both hyperbolic functions can be expressed with their corresponding series:

=# =% =' =)
cosh = œ "     â (4.4.3)
#x %x 'x )x
=$ =& =( =*
sinh = œ =     â (4.4.4)
$x &x (x *x
With these suppositions and using ‘long division’ we obtain:
sinh = " "
œ  (4.4.5)
cosh = = $ "

= & "

= ( "

= *

=
With a successive multiplication we can simplify this continuous fraction into a
simple rational function. By truncating the fraction at *Î= we obtain the following
approximation:
sinh = "& =%  %#! =#  *%&
¸ & (4.4.6)
cosh = =  "!& =$  *%& =
Now we put this and Eq. 4.4.4 into Eq. 4.4.2 and perform the suggested division
by sinh =. A normalized expression, where J Ð=Ñ œ " if = œ ! is obtained by multiplying
the numerator by *%&. With these operations we obtain:
*%&
J Ð=Ñ œ e= ¸ (4.4.7)
=&  "& =%  "!& =$  %#! =#  *%& =  *%&


The poles of this equation are the roots of the denominator:


=",# œ  $Þ$&#! „ 4 "Þ(%#(
=$,% œ  #Þ$#%( „ 4 $Þ&("!
=& œ  $Þ'%'(

A critical reader might ask why we have taken such a circuitous route to arrive at
this result instead of deriving it straight from the Maclaurin series:

" "
e= œ ¸ (4.4.8)
e= =# =$ =% =&
"=   
#x $x %x &x

In this case the roots are:


=",# œ "Þ'%*' „ 4 "Þ'*$'
=$,% œ !Þ#)*) „ 4 $Þ"#)$
=& œ #Þ")!'

and the roots =$,% lie in the right half of the =-plane. Therefore the denominator of
Eq. 4.4.8 is not a Hurwitz polynomial [Ref. 4.10] (a closer investigation would reveal
that the denominator is not a Hurwitz polynomial if its degree exceeds 4, but even for
low order systems it can be shown that the Maclaurin series results in an envelope
delay which is far from being maximally flat). Thus Eq. 4.4.8 describes an unstable
system or an unrealizable transfer function.
Let us return to Eq. 4.4.7, which we express in a general form:

+!
J Ð=Ñ œ (4.4.9)
=8  +8" =8"  â  +# =#  +" =  +!

where the numerical values for the coefficients can be calculated by the equation:

Ð# 8  "Ñx
+3 œ (4.4.10)
#83 3x Ð8  3Ñx
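Eq. 4.4.10 is straightforward to evaluate; a short sketch (Python, the function name is ours) reproduces the denominator coefficients of Eq. 4.4.7 for n = 5:

```python
from math import factorial

def bessel_coeffs(n):
    """Coefficients a_i of the n-th order Bessel polynomial, Eq. 4.4.10:
    a_i = (2n - i)! / (2^(n - i) * i! * (n - i)!),  i = 0 .. n."""
    return [factorial(2 * n - i) // (2 ** (n - i) * factorial(i) * factorial(n - i))
            for i in range(n + 1)]

# n = 5 reproduces the denominator of Eq. 4.4.7:
#   s^5 + 15 s^4 + 105 s^3 + 420 s^2 + 945 s + 945
print(bessel_coeffs(5))   # [945, 945, 420, 105, 15, 1]
```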

We can express the ratio of hyperbolic functions also as:

cosh = N"Î# Ð4 =Ñ


œ (4.4.11)
sinh = 4 N"Î# Ð4 =Ñ

where N"Î# Ð4 =Ñ and 4 N"Î# Ð4 =Ñ are the spherical Bessel functions [Ref. 4.10, 4.11].
Therefore we name the polynomials having their coefficients expressed by Eq. 4.4.10
Bessel polynomials. Their roots are the poles of Eq. 4.4.9 and we call them Bessel
poles. We have listed the values of Bessel poles for polynomials of order 8 œ "–"! in
Table 4.4.3, along with the corresponding pole angles )3 .


We usually express Eq. 4.4.9 in another normalized form which is suitable for
the _" transform:

a  "b8 = " = # = $ â = 8
J Ð=Ñ œ (4.4.12)
Ð=  =" ÑÐ=  =# ÑÐ=  =$ Ñ â Ð=  =8 Ñ

where =" , =# , =$ , á , =8 are the poles of the function J Ð=Ñ.


In Sec. 4.3 we saw that Butterworth poles lie in the left half of the =-plane on an
origin centered unit circle. Since the denominator of Eq. 4.4.12 is also a Hurwitz
polynomial [Ref. 4.10], all the poles of this equation must also be in the left half of the
=-plane. This is evident from Fig. 4.4.1, where the Bessel poles of the order 8 œ "–"!
are drawn. However, Bessel poles lie on ellipses (not on circles). The characteristic of
this family of ellipses is that they all have the near focus at the origin of the complex
plane and the other focus on the positive real axis.


Fig. 4.4.1: Bessel poles for order 8 œ "–"!. Bessel poles lie on a family of ellipses with
one focus at the origin of the complex plane and the other focus on the positive real axis.
The first-order pole is the same as for the Butterworth system and lies on the unit circle.


4.4.1 Frequency Response

A general normalized expression for the frequency response is the magnitude


(absolute value) of Eq. 4.4.12:

=" = # = $ â = 8
¸J Ð=Ѹ œ » (4.4.13)
Ð=  =" ÑÐ=  =# ÑÐ=  =$ Ñ â Ð=  =8 Ñ »
= œ 4=Î=h

where =h œ "ÎVG is the non-peaking stage cut off frequency. If we put the numerical
values for poles =i œ 5i „ 4 =i and = œ 4 =Î=h as suggested, then this formula obtains a
form similar to Part 2, Eq. 2.6.10 (where we had 4 poles only).
The magnitude plots for 8 œ "–"! are shown in Fig. 4.4.2. By comparing this
figure with Fig. 4.3.3, where the frequency response curves for Butterworth poles are
displayed, we note an important difference: for Butterworth poles the upper half power
frequency is always ", regardless of the number of poles. In contrast, for Bessel poles
the upper half power frequency increases with 8.
The reason for the difference is that the derivation of 8 Butterworth poles was
based on #8È" (for magnitude), whilst the Bessel poles were derived from the
condition for a unit envelope delay. This difference prevents any direct comparison of
the bandwidth extension and the rise time improvement between both kinds of poles.
To be able to compare the two types of systems on a fair basis we must normalize the
Bessel poles to the first-order cut off frequency. We do this by recursively multiplying
the poles by a correction factor and recalculating the cut off frequency, until a
satisfactory approximation is reached (see Sec. 4.4.6). Also, a special set of Bessel poles is derived
in Sec. 4.5, allowing us to interpolate between Bessel and Butterworth poles. The
BESTAP algorithm (in Part 6) calculates the Bessel poles in any of the three options.
Fig. 4.4.2: Frequency-response magnitude of systems with Bessel poles for order 8 œ "á "!.


4.4.2 Upper Half Power Frequency

To find the cut off frequency and the bandwidth improvement offered by the
Bessel poles an inversion formula should be developed from Eq. 4.4.13. This inversion
formula would be different for each 8, so there would not be a general solution. Instead,
we shall use a computer program (see Part 6) to calculate the complete frequency
response magnitude and find the cut off frequency from it. Calculated in this way, the
bandwidth improvement factors (b œ =H Î=h for Bessel poles of the order 8 œ "–"! are
listed in the following table:

Table 4.4.1: Relative Bandwidth Improvement with Bessel Poles


8 " # $ % & ' ( ) * "!
(b "Þ!! "Þ$' "Þ(& #Þ"# #Þ%# #Þ(! #Þ*& $Þ") $Þ$* $Þ&*

However, some special circuits can have a bandwidth improvement different
from the one shown in the table for the corresponding order; e.g., T-coil circuits improve
the bandwidth for 8 œ # by exactly twice as much. We shall discuss this in Part 5,
where we shall analyze a two-stage amplifier with 7 staggered Bessel poles, having one
three-pole T-coil and one four-pole L+T circuit, and obtain a total bandwidth
improvement (b œ $Þ&& for the complete amplifier.

4.4.3 Phase Response

The calculation is similar to the phase response for Butterworth poles; however
there we had the normalized upper half power frequency =H œ ", whilst for Bessel
poles =H increases with the order 8, so here we must use =h œ "ÎVG as the reference,
where VG is the time constant of the non-peaking amplifier. Then for the phase
response we simply repeat Eq. 4.3.25 and write =h (instead of =H ):
=
8  =i
=h
:a=b œ " arctan (4.4.14)
iœ"
5i

Fig. 4.4.3 shows the phase plots of Eq. 4.4.14 for Bessel poles for the order
8 œ "–"! (owing to the cutoff frequency increasing with order 8, the frequency scale
had to be extended to see the asymptotic values at high frequencies).
So far we have used a logarithmic frequency scale for our phase response plots.
However, by using a linear frequency scale, as in Fig. 4.4.4, the plots show that the
phase response for Bessel poles is linear up to a certain frequency [Ref. 4.10], which
increases with an increased order 8.


Fig. 4.4.3: Phase angle of the systems with Bessel poles of order 8 œ "á "!.

Fig. 4.4.4: Phase-angle as in Fig. 4.4.3, but in a linear frequency scale. Note the linear
phase frequency-dependence extending from the origin to progressively higher frequencies.


4.4.4 Envelope-delay

Here, too, we take the corresponding formula from Butterworth poles and
replace the frequency normalization =H by =h :
8
5i
7e =h œ " # (4.4.15)
iœ" =
5i#  Œ  =i 
=h

The envelope delay plots are shown in Fig. 4.4.5. The delay is flat up to a certain
frequency, which increases with increasing order 8. This was our goal when we were
deriving the Bessel poles, starting with Eq. 4.4.1. Therefore the name MFED
(Maximally Flat Envelope Delay) is fully justified by this figure. This property is
essential for pulse amplification. Because pulses contain a broad range of frequency
components, all of them, (or in practice, the most significant ones, i.e., those which are
not attenuated appreciably) should be subject to equal time delay when passing through
the amplifier in order to preserve the pulse shape at the output as much as possible.
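The unit delay property can be verified directly from the tabulated poles. A sketch (Python; all names are ours, and the envelope delay is obtained here by numerically differentiating the phase, which is equivalent to the closed form of Eq. 4.4.15):

```python
import cmath

# n = 3 Bessel poles from Table 4.4.3 (derived for unit envelope delay)
poles = [-2.3222, -1.8389 + 1.7544j, -1.8389 - 1.7544j]

def phase(w):
    """Phase of F(jw) = prod(-s_k) / prod(jw - s_k), in radians."""
    return -sum(cmath.phase(1j * w - p) for p in poles)

def env_delay(w, dw=1e-6):
    """Envelope delay -d(phase)/dw by central difference; equivalent to
    the closed form of Eq. 4.4.15."""
    return -(phase(w + dw) - phase(w - dw)) / (2 * dw)

print(env_delay(0.0))   # ~1.000: the unit delay the derivation aimed for
print(env_delay(1.0))   # ~0.996: delay still nearly flat at w = 1
print(phase(0.5))       # ~ -0.500: linear phase with slope -1 (unit delay)
```

The flat unit delay and the linear phase slope are exactly the MFED behavior shown in Fig. 4.4.4 and 4.4.5.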

Fig. 4.4.5: Envelope delay of the systems with Bessel poles for order 8 œ "–"!. Note
the flat unit delay response increasing with system order. This figure demonstrates the
fulfilment of the criterion from which we have started the derivation of MFED.

4.4.4 Step Response

We start with Eq. 4.4.12 and multiply it by the unit step operator "Î= to obtain:
a  "b8 = " = # = $ â = 8
KÐ=Ñ œ (4.4.16)
= Ð=  =" ÑÐ=  =# ÑÐ=  =$ Ñ â Ð=  =8 Ñ


By applying the _" -transform we obtain:


8
a  "b8 =" =# =$ â =8 e=>
gÐ>Ñ œ _" ˜KÐ=Ñ™ œ " res3 (4.4.17)
3œ"
= Ð=  =" ÑÐ=  =# ÑÐ=  =$ Ñ â Ð=  =8 Ñ

By inserting the numerical pole values from Table 4.4.3 for the systems of order
8 œ "–"! we can proceed in the same way as in the examples in Appendix 2.3, but it
would take too much space. Instead, we shall use the routines developed in Part 6 to
generate the plots of Fig. 4.4.6. This diagram is notably different from the step response
plots of Butterworth poles in Fig. 4.3.6. Again, the reason is that for normalized
Butterworth poles the upper half power frequency =H is always one, regardless of the
order 8, consequently the step response always has the same maximum slope, but a
progressively larger delay. The Bessel poles, on the contrary, have progressively steeper
slope, whilst the delay approaches unity. This is also reflected by the improvement in
rise time, listed in Table 4.4.2. Of course, the improvement in rise time is even higher
for peaking circuits using T-coils.
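For readers who want to reproduce Fig. 4.4.6 numerically, the residue sum of Eq. 4.4.17 can be coded directly for simple poles; a Python sketch (names ours), shown here for the n = 3 poles of Table 4.4.3:

```python
import cmath

def step_response(poles, t):
    """Step response via the residue sum of Eq. 4.4.17 (simple poles):
    g(t) = 1 + sum_k a0 * exp(s_k t) / (s_k * prod_{j!=k}(s_k - s_j)),
    where a0 = prod(-s_k) normalizes F(0) = 1; the leading 1 is the
    residue of G(s) = F(s)/s at s = 0."""
    a0 = 1.0 + 0j
    for p in poles:
        a0 *= -p
    g = 1.0 + 0j
    for k, pk in enumerate(poles):
        d = pk
        for j, pj in enumerate(poles):
            if j != k:
                d *= pk - pj
        g += a0 * cmath.exp(pk * t) / d
    return g.real

# n = 3 Bessel poles from Table 4.4.3 (unit envelope delay)
bessel3 = [-2.3222, -1.8389 + 1.7544j, -1.8389 - 1.7544j]
print(step_response(bessel3, 0.0))   # ~0: the response starts from zero
print(step_response(bessel3, 1.0))   # ~0.533: near half amplitude around t = T
print(step_response(bessel3, 10.0))  # ~1: settles to the final value
```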

Fig. 4.4.6: Step response of systems with Bessel poles of order 8 œ "–"!.
Note the 50% amplitude delay approaching unity as the system order increases.

Table 4.4.2: Relative Rise time Improvement with Bessel Poles


8 " # $ % & ' ( ) * "!
(r "Þ!! "Þ$) "Þ(& #Þ!* #Þ$* #Þ') #Þ*$ $Þ") $Þ%" $Þ'!

From Fig. 4.4.2 and 4.4.6 one could draw the false conclusion that the upper half
power frequency increases and the rise time decreases if more equal amplifier stages are
cascaded. This is not true, because all the parameters of systems having Bessel poles are
defined with respect to the single stage non-peaking amplifier, where =h œ "ÎVG .


In the case of a system with 8 Bessel poles this would mean chopping the stray
capacitance of a single amplifying stage into smaller capacitances and separating them
by coils to create 8 poles.
Unfortunately there is a limit in practice because each individual amplifier stage
input sees two capacitances: the output capacitance of the previous stage and its own
input capacitance. Therefore, in a single stage we can have at most four poles (either
Bessel, Butterworth or of any other family).
If we use more than one stage, we can assign a small group of staggered poles
from the 8th -group (from either Table 4.3.1 or Table 4.4.3) to each stage, so that the
system as a whole has the poles as specified by the 8th -group chosen. Then no stage by
itself will be optimized, but the amplifier as a whole will be. More details of this
technique are given in Sec. 4.6 and some examples can be found in Part 5 and Part 7.


Table 4.4.3: Bessel Poles (identical envelope delay)

Order 8 5 [radÎ=] = [radÎ=] ) [°]

"  "Þ!!!! !Þ!!!! ")!

#  "Þ&!!! „ !Þ)''! ")! … $!Þ!!!!

$  #Þ$### !Þ!!!! ")!


 "Þ)$)* „ "Þ(&%% ")! … %$Þ'&#&

%  #Þ)*'# „ !Þ)'(# ")! … "'Þ''*(


 #Þ"!$) „ #Þ'&(% ")! … &"Þ'$#&

&  $Þ'%'( !Þ!!!! ")!


 $Þ$&#! „ "Þ(%#( ")! … #(Þ%'*'
 #Þ$#%( „ $Þ(&"! ")! … &'Þ*$''

'  %Þ#%)% „ !Þ)'(& ")! … ""Þ&%""


 $Þ($&( „ #Þ'#'$ ")! … $&Þ"!(*
 #Þ&"&* „ %Þ%*#( ")! … '!Þ(&!)

(  %Þ*(") !Þ!!!! ")!


 %Þ(&)$ „ "Þ($*$ ")! … #!Þ!()(
 %Þ!(!" „ $Þ&"(# ")! … %!Þ)$"'
 #Þ')&( „ &Þ%#!( ")! … '$Þ'%$*

)  &Þ&)(* „ !Þ)'(' ")! … )Þ)#&(


 &Þ#!%) „ #Þ'"'# ")! … #'Þ')'"
 %Þ$')$ „ %Þ%"%% ")! … %&Þ$!""
 #Þ)$*! „ 'Þ$&$* ")! … '&Þ*#%&

*  'Þ#*(! !Þ!!!! ")!


 'Þ"#*% „ "Þ($() ")! … "&Þ)#*&
 &Þ'!%% „ $Þ%*)# ")! … $"Þ*("&
 %Þ'$)% „ &Þ$"($ ")! … %)Þ*!!(
 #Þ*(*$ „ (Þ#*"& ")! … '(Þ((&$

"!  'Þ*##! „ !Þ)'(( ")! … (Þ"%%(


 'Þ'"&$ „ #Þ'""' ")! … #"Þ&%$!
 &Þ*'(& „ %Þ$)%* ")! … $'Þ$!)&
 %Þ))'# „ 'Þ##&! ")! … &"Þ)(!$
 $Þ"!)* „ )Þ#$#( ")! … '*Þ$""*


4.4.5 Ideal Gaussian Frequency Response

Suppose that we have succeeded in making an amplifier with zero phase shift and
an ideal Gaussian frequency response:
#
KÐ=Ñ œ e = (4.4.18)

the plot of which is shown in Fig. 4.4.7 for both positive and negative frequencies (and,
to acquire a feeling for Bessel systems, compared to the magnitude of a 5th -order system
with modified Bessel poles).

Fig. 4.4.7: Ideal Gaussian (MFED) frequency response K (real only, with no phase shift),
compared to the magnitude of a 5th -order modified Bessel system F&+ (identical cutoff
asymptote, Table 4.5.1). The frequency scale is two-sided, linear, and normalized to
=h œ "ÎVG of the first-order system, which is shown as the reference V.

By examining Eq. 4.4.9 and Eq. 4.4.18 we come to the conclusion that it is
possible to approximate the Gaussian response with any desired accuracy up to a
certain frequency. At higher frequencies, the Gaussian response falls faster than the
approximated response. This is brought into evidence in Fig. 4.4.8, where the same
responses are plotted in a loglog scale.
By applying a unit step at the instant > œ ! to the input of the hypothetical
amplifier having a Gaussian frequency response, the resulting step response is equal to
the so called error function, which is defined as the time integral of the exponential
function of time squared [Ref. 4.2]:
>"

gG a>b œ erfa>b œ
"  ># Î% .>
( e (4.4.19)
# È1
_


Fig. 4.4.8: Frequency response in loglog scale brings into evidence how the ideal Gaussian
response K decreases much more steeply with frequency than the 5th -order Bessel response
F&+. The Bessel system would have to be of infinitely high order to match the Gaussian
response.

Fig. 4.4.9: Step response of a hypothetical system, gK , having the ideal Gaussian frequency
response with no phase shift, as the one in Fig. 4.4.7 and 4.4.8. Compare it with the step
response of a 5th -order Bessel system, gF&+ , with modified Bessel poles, Table 4.5.1, and
envelope delay compensated (7e œ $Þ*% XH ) for minimal difference in the half amplitude
region.


The plot of Eq. 4.4.19, calculated by the Simpson method, is shown in Fig. 4.4.9.
The step response is symmetrical, without any overshoot. However, here too, we have a
response for > < ! as was the case with the ideal MFA amplifier.
If our hypothetical amplifier were to have a linear phase delay, the curve gK in
Fig. 4.4.9 would be shifted rightwards from the origin, but an infinite phase shift would
be required in order to have no response for time > < ! (the same as for Fig. 4.3.8).
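The error-function step response of Eq. 4.4.19 can be reproduced with the same Simpson method mentioned in the text; a minimal Python sketch (names ours):

```python
import math

def gauss_step(t, steps=2000):
    """Step response of the hypothetical zero-phase Gaussian amplifier,
    Eq. 4.4.19: g(t) = 1/(2*sqrt(pi)) * integral of exp(-x^2/4) dx from
    -inf to t.  The (-inf, 0] part is exactly 1/2; the remaining [0, t]
    part is evaluated by Simpson's rule."""
    if t == 0.0:
        return 0.5
    h = t / steps
    f = lambda x: math.exp(-x * x / 4.0)
    s = f(0.0) + f(t)
    for k in range(1, steps):
        s += (4 if k % 2 else 2) * f(k * h)
    return 0.5 + (s * h / 3.0) / (2.0 * math.sqrt(math.pi))

print(gauss_step(0.0))                               # 0.5 at t = 0
print(round(gauss_step(2.0) + gauss_step(-2.0), 6))  # 1.0: perfectly symmetric
print(gauss_step(-2.0))                              # ~0.0786: response before the step
```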
By looking back to Eq. 4.4.7, we realize that we would need an infinite number
of terms in the denominator ( œ an infinite number of poles) in order to justify an ‘ œ ’
sign instead of an approximation ( ¸ ). This would mean an infinite number of system
components and amplifying stages, and therefore the conclusion is that we cannot
make an amplifier with an ideal Gaussian response (but we can come very close).
A proof, based on the Paley–Wiener Criterion, can be carried out in the
following way: if we compare the step response in Fig. 4.4.9 with the step response of a
non-peaking multi-stage amplifier in Fig. 4.1.7, for 8 œ "!, there is a great similarity.
Therefore we can ask ourselves if a Gaussian response could be achieved by increasing
the number of stages to some arbitrarily large number (8 p _). By doing so, the phase
response diverges when 8 p _ and it becomes infinite if = p _. Therefore for both
reasons (infinite number of stages and divergent phase response) it is not possible to
make an amplifier with an ideal Gaussian response.

4.4.6 Bessel Poles Normalized to Identical Cutoff Frequency

Because the Bessel poles are derived from the requirement for an identical
envelope delay there is no simple way of renormalizing them back to the same cut off
frequency. However, such a renormalization would be very useful, not only for
comparing the systems with different pole families and equal order, but also for
comparing systems of different order within the Bessel family itself.
What is difficult to do analytically is often easily done numerically, especially if
the actual number crunching is executed by a machine. The normalization procedure
goes by taking the original Bessel poles and finding the system magnitude by Eq. 4.4.13
at the unit frequency (=Î=h œ "). We obtain an attenuation value, say, lJ Ð"Ñl œ + and
we want lJ Ð"Ñl to be "ÎÈ# . The ratio ; œ "Î+È# is the correction factor by which
we multiply all the poles and insert the new poles again into Eq. 4.4.13. We keep
repeating this procedure until l;  "ÎÈ# l  &, with & being an arbitrarily small error;
for practical purposes, a value of & œ !Þ!!" is adequate. In the algorithm presented in
Part 6, this tolerance is reached in only 6 to 9 iterations, depending on system order.
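The renormalization can be sketched in a few lines of Python (all names are ours). Instead of the iterative correction factor described above, the −3 dB frequency is located here by bisection and the poles are divided by it, which is numerically equivalent since Bessel magnitude responses are monotonic:

```python
def magnitude(poles, w):
    """|F(jw)| of an all-pole system normalized to F(0) = 1, Eq. 4.4.13."""
    m = 1.0
    for p in poles:
        m *= abs(p) / abs(1j * w - p)
    return m

def normalize_cutoff(poles, tol=1e-6):
    """Rescale the poles so that |F(j1)| = 1/sqrt(2), i.e. unit cut off
    frequency: find the -3 dB frequency wH by bisection (the magnitude
    decreases monotonically for Bessel poles), then divide all poles by it."""
    lo, hi = 1e-3, 100.0
    target = 1.0 / 2 ** 0.5
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if magnitude(poles, mid) > target:
            lo = mid
        else:
            hi = mid
    wH = 0.5 * (lo + hi)
    return wH, [p / wH for p in poles]

# n = 3 Bessel poles from Table 4.4.3
wH, norm = normalize_cutoff([-2.3222, -1.8389 + 1.7544j, -1.8389 - 1.7544j])
print(round(wH, 4))    # ~1.7557: cf. the improvement factor ~1.75 in Table 4.4.1
print(norm[0])         # ~ -1.3227: the real pole of Table 4.4.4, n = 3
```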
The following graphs were made using the computer algorithms presented in
Part 6 and show the performance of cut off frequency normalized Bessel systems of
order 8 œ "–"!, as in the previous figures.
Fig. 4.4.10 shows the frequency response magnitude; the plots for 8 œ &–* are
missing, since the difference is too small to identify them on such a vertical scale (the
difference in high frequency attenuation becomes significant with higher magnitude
resolution, say, down to 0.001 or more). Fig. 4.4.11 shows the phase, Fig. 4.4.12 the
envelope delay and Fig. 4.4.13 shows the step response. Finally, in Table 4.4.4 we
report the values of Bessel poles and their respective angles for systems with equal cut
off frequency.


Fig. 4.4.10: Frequency response magnitude of systems with normalized Bessel poles of order
8 œ ", #, $, %, and "!. Note the nearly identical passband response — this is the reason why we
can approximate the oscilloscope (multi-stage) amplifier rise time from the cut off frequency,
using the relation for the first-order system: 7< œ !Þ$&Î0T .

Fig. 4.4.11: Phase angle of systems with normalized Bessel poles of order 8 œ "–"!.


Fig. 4.4.12: Envelope delay of systems with normalized Bessel poles of the order 8 œ "–"!.
Although the bandwidth is the same, the delay flatness extends progressively with system
order, already reaching beyond the system cut off frequency for 8 œ &.

Fig. 4.4.13: Step response of systems with normalized Bessel poles of order 8 œ "–"!.
Note the half amplitude slope being almost equal for all systems, indicating an equal
system cut off frequency.


Table 4.4.4: Bessel Poles (equal cut off frequency)

Order 8 5 [radÎ=] = [radÎ=] ) [°]

"  "Þ!!!! !Þ!!!! ")!

#  "Þ"!"( „ !Þ'$'! ")! … $!Þ!!!!

$  "Þ$##( !Þ!!!! ")!


 "Þ!%(& „ !Þ***$ ")! … %$Þ'&#&

%  "Þ$(!" „ !Þ%"!$ ")! … "'Þ''*(


 !Þ**&# „ "Þ#&(" ")! … &"Þ'$#&

&  "Þ&!#% !Þ!!!! ")!


 "Þ$)"! „ !Þ(")! ")! … #(Þ%'*'
 !Þ*&() „ "Þ%("$ ")! … &'Þ*$''

'  "Þ&("' „ !Þ$#!* ")! … ""Þ&%""


 "Þ$)"* „ !Þ*("& ")! … $&Þ"!(*
 !Þ*$!( „ "Þ''"* ")! … '!Þ(&!)

(  "Þ')%& !Þ!!!! ")!


 "Þ'"## „ !Þ&)*$ ")! … #!Þ!()(
 "Þ$(*! „ "Þ"*"( ")! … %!Þ)$"'
 !Þ*!** „ "Þ)$'' ")! … '$Þ'%$*

)  "Þ(&(& „ !Þ#(#* ")! … )Þ)#&(


 "Þ'$(! „ !Þ)##) ")! … #'Þ')'"
 "Þ$($* „ "Þ$))% ")! … %&Þ$!""
 !Þ)*#* „ "Þ**)% ")! … '&Þ*#%&

*  "Þ)&'( !Þ!!!! ")!


 "Þ)!(# „ !Þ&"#% ")! … "&Þ)#*&
 "Þ'&#& „ "Þ!$"% ")! … $"Þ*("&
 "Þ$'(' „ "Þ&'() ")! … %)Þ*!!(
 !Þ)()% „ #Þ"%** ")! … '(Þ((&$

"!  "Þ*#(( „ !Þ#%"' ")! … (Þ"%%(


 "Þ)%#$ „ !Þ(#($ ")! … #"Þ&%$!
 "Þ''"* „ "Þ##"# ")! … $'Þ$!)&
 "Þ$'!) „ "Þ($$' ")! … &"Þ)(!$
 !Þ)'&) „ #Þ#*#( ")! … '*Þ$""*


4.5 Pole Interpolation

Sometimes we desire to design an amplifier, or just a single stage, with a


performance which is somewhere between the Butterworth and Bessel response. We
shall derive the corresponding poles by the pole interpolation procedure which was
described by Y. Peless and T. Murakami [Ref. 4.14].

4.5.1 Derivation of Modified Bessel Poles

In order to be able to interpolate between Butterworth and Bessel poles, the latter
must be modified so that, for both systems of equal order, the frequency response
magnitude will have the same asymptotes in both passband and stopband. This is
achieved if the product of all the poles is equal to one, as in Butterworth systems:
8
a"b8 $=5 œ " (4.5.1)
5œ"

A general expression for the frequency response normalized in amplitude is:


+!
J a=b œ (4.5.2)
= 8  +8" =8"  â  +# =#  +" =  +!
where:
8
+! œ a"b8 $=5 (4.5.3)
5œ"

If we divide both the numerator and the denominator by +! we obtain:


"
J a=b œ (4.5.4)
" 8 +8" 8" +# # +"
=  = â =  ="
+! +o +! +!
Next we introduce another variable = such that:
=8 =
=8 œ or =œ "Î8
(4.5.5)
+! +!
Then Eq. 4.5.4 can be written as:
"
J a=b œ (4.5.6)
=8  ,8" =8"  â  ,# =#  ," =  "
where the coefficients ,i are:
+8"
,8" œ "Î8
+!
+8#
,8# œ #Î8
+!
â
+"
," œ ""Î8 (4.5.7)
+!


The coefficients +5 for the Bessel polynomials are calculated by Eq. 4.4.10.
Then the coefficients ,5 are those of the modified Bessel polynomial, from which we
can calculate the modified Bessel poles of J a=b for the order 8 œ "–"!. These poles are
listed together with the corresponding pole radii < and pole angles ) in Table 4.5.1.
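Eq. 4.5.7 and Table 4.5.1 can be cross-checked without any root finding: compute the b coefficients and verify that the tabulated modified poles satisfy the resulting monic polynomial. A Python sketch (all names are ours):

```python
from math import factorial

def bessel_coeffs(n):
    # a_i of Eq. 4.4.10
    return [factorial(2 * n - i) // (2 ** (n - i) * factorial(i) * factorial(n - i))
            for i in range(n + 1)]

def modified_coeffs(n):
    """Coefficients b_i of Eq. 4.5.7: b_i = a_i / a0^((n - i)/n).  This
    renormalization makes the pole product unity (Eq. 4.5.1), i.e. the
    same high frequency asymptote as the Butterworth polynomial."""
    a = bessel_coeffs(n)
    return [a[i] / a[0] ** ((n - i) / n) for i in range(n + 1)]

# Cross-check Table 4.5.1 for n = 3: the tabulated modified Bessel poles
# must be roots of s^3 + b2*s^2 + b1*s + b0 (with b0 = 1, b3 = 1)
b = modified_coeffs(3)
for p in [-0.9416, -0.7456 + 0.7114j, -0.7456 - 0.7114j]:
    print(abs(p**3 + b[2] * p**2 + b[1] * p + b[0]))   # ~0 (within table rounding)
```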

4.5.2 Pole Interpolation Procedure

At the time of the German mathematician Friedrich Wilhelm Bessel
(1784–1846), there were no electronic filters and no wideband amplifiers to which the
roots of his polynomials could be applied. W.E. Thomson [Ref. 4.9] was the first to use
them and he also derived the expressions required for MFED network synthesis.
Therefore some engineers use the name Thomson poles or, perhaps more correctly,
Bessel–Thomson poles.
In the following discussion we shall interpolate between Butterworth and the
modified Bessel poles. If we were to label both kinds of poles by initials only, confusion
would result in the graphs and formulae. Therefore, to label the modified Bessel poles,
we shall use the subscript ‘T’ in honor of W.E. Thomson.
The procedure of pole interpolation can be explained with the aid of Fig. 4.5.1.


Fig. 4.5.1: Pole interpolation procedure. Butterworth (index B) and Bessel poles
(index T) are expressed in polar coordinates, =a<,)b. The trajectory going through
both poles is the interpolation path required to obtain the transitional pole =I .

We first express the poles in polar coordinates with the well known conversion:

<5 œ É55#  =5# (4.5.8)


and
=5
)5 œ 1  arctan (4.5.9)
55

Here the 1 radians added are required because the arctangent function repeats with a
period of 1 radians, so it does not distinguish the poles in quadrant III from those
in quadrant I, and the same is true for quadrants IV and II.


By using the polar coordinates a pole =5 is expressed as:

=5 œ 55  4=5 œ <5 e4)5 (4.5.10)

In Eq. 4.5.3 the coefficient +! is equal to the product of all poles. Because we
have divided the polynomial coefficients +5 by +! to obtain the coefficients ,5 we have
effectively normalized the product of all poles to one:

8
º # =5 º œ " (4.5.11)
5œ"

and this is now true for both Butterworth and the modified Bessel poles. Therefore we
can assume that there exists a trajectory going through the 5 th Butterworth pole =B5 and
the 5 th Bessel pole =T5 , and each point on this trajectory can represent a pole =I5 which
can be expressed as:

=I5 œ <I5 e4)I5 (4.5.12)

such that the absolute product of all interpolated poles =I is kept equal to one. Then:

<I5 œ <T75 (4.5.13)


and:
)I5 œ )B5  7 a)T5  )B5 b (4.5.14)

The parameter 7 can have any value between ! and ".


If 7 œ ! we have the Butterworth poles and if 7 œ " we have the modified
Bessel poles. By using, say, 7 œ !Þ&, the characteristics of such a network would be
just halfway between those of the Butterworth and the modified Bessel system.
It is obvious that we need to calculate only one half of the poles, say, those with
the positive imaginary value, 55  4 =5 , since the complex conjugate poles 55  4 =5
have the same magnitude, only the sign of their imaginary component is negative. In the
cases with odd 8 the interpolated real pole remains on the real axis ()I5 œ 1) between
the two real poles belonging to the Butterworth and the modified Bessel system.


Table 4.5.1: Modified Bessel Poles


(with HF asymptote identical as Butterworth)

Order 8 5 [radÎ=] = [radÎ=] < ) [°]

"  "Þ!!!! !Þ!!!! "Þ!!!! ")!

#  !Þ)''! „ !Þ&!!! "Þ!!!! ")! … $!Þ!!!!

$  !Þ*%"' !Þ!!!! !Þ*%"' ")!


 !Þ(%&' „ !Þ(""% "Þ!$!& ")! … %$Þ'&#&

%  !Þ*!%) „ !Þ#(!* !Þ*%%% ")! … "'Þ''*(


 !Þ'&(# „ !Þ)$!# "Þ!&)) ")! … &"Þ'$#&

&  !Þ*#'% !Þ!!!! !Þ*#'% ")!


 !Þ)&"' „ !Þ%%#( !Þ*&*) ")! … #(Þ%'*'
 !Þ&*!' „ !Þ*!(# "Þ!)#& ")! … &'Þ*$''

'  !Þ*!*% „ !Þ")&( !Þ*#)# ")! … ""Þ&%""


 !Þ(**( „ !Þ&'## !Þ*((& ")! … $&Þ"!(*
 !Þ&$)' „ !Þ*'"( "Þ"!## ")! … '!Þ(&!)

(  !Þ*"*& !Þ!!!! !Þ*"*& ")!


 !Þ))!! „ !Þ$#"( !Þ*$'* ")! … #!Þ!()(
 !Þ(&#( „ !Þ'&!& !Þ**%) ")! … %!Þ)$"'
 !Þ%*'( „ "Þ!!#& "Þ"")) ")! … '$Þ'%$*

)  !Þ*!*( „ !Þ"%"# !Þ*#!' ")! … )Þ)#&(


 !Þ)%($ „ !Þ%#&* !Þ*%)$ ")! … #'Þ')'"
 !Þ(""" „ !Þ(")( "Þ!""! ")! … %&Þ$!""
 !Þ%'## „ "Þ!$%% "Þ"$#* ")! … '&Þ*#%&

*  !Þ*"&& !Þ!!!! !Þ*"&& ")!


 !Þ)*"" „ !Þ#&#( !Þ*#'# ")! … "&Þ)#*&
 !Þ)"%) „ !Þ&!)' !Þ*'!& ")! … $"Þ*("&
 !Þ'(%% „ !Þ(($" "Þ!#&* ")! … %)Þ*!!(
 !Þ%$$" „ "Þ!'!" "Þ"%&" ")! … '(Þ((&$

"!  !Þ*!*" „ !Þ""%! !Þ*"'# ")! … (Þ"%%(


 !Þ)')) „ !Þ$%$! !Þ*$%" ")! … #"Þ&%$!
 !Þ()$) „ !Þ&(&* !Þ*(#' ")! … $'Þ$!)&
 !Þ'%") „ !Þ)"(' "Þ!$*% ")! … &"Þ)(!$
 !Þ%!)$ „ "Þ!)"$ "Þ"&&) ")! … '*Þ$""*


4.5.3 A Practical Example of Pole Interpolation

Let us calculate the frequency, phase, time delay and step response of a network
with three poles. Three poles are just enough to demonstrate the procedure of pole
interpolation completely. Let us select 7 œ !.&. From Table 4.3.1 we find the following
values for Butterworth poles of order 8 œ $:
="B : <"B œ " )"B œ ")!°
=#B : <#B œ " )#B œ  "#!° (4.5.15)
=$B : <$B œ " )$B œ  "#!°
and in Table 4.5.1, order 8 œ $, we find these values for modified Bessel poles:
="T : <"T œ !Þ*%"' )"T œ ")!°
=#T : <#T œ "Þ!$!& )#T œ  "$'Þ$&° (4.5.16)
=$T : <$T œ "Þ!$!& )$T œ  "$'Þ$&°
Now we interpolate the radii and the pole angles:
<" œ <"T 7 œ !Þ*%"'!Þ& œ !Þ*(!%
<#,$ œ <#T 7 œ "Þ!$!&!Þ& œ "Þ!"&"
)#,$ œ )B  7 a)T  )B b œ "#!°  !Þ& a"$'Þ$&°  "#!°b œ "#)Þ"(&°

With these values we calculate the real and imaginary components of the
transitional Butterworth–Bessel poles (TBT):

    s1   = −0.9704 = σ1
    s2,3 = r2,3 cos θ2,3 ± j r2,3 sin θ2,3
         = 1.0151 cos 128.175° ± j 1.0151 sin 128.175°
         = −0.6274 ± j 0.7980 = σ2 ± j ω2                                  (4.5.18)
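This interpolation is straightforward to verify numerically. The following short Python sketch (not part of the original text; variable names are ours) reproduces Eq. 4.5.17 and 4.5.18 from the tabulated Butterworth and modified Bessel values:

```python
import cmath
import math

m = 0.5                       # interpolation parameter (0 = Butterworth, 1 = Bessel)

# Butterworth complex pair, n = 3: radius 1, angle 120 deg (from the positive real axis)
r_B, theta_B = 1.0, 120.0
# modified Bessel complex pair, n = 3, from Table 4.5.1
r_T, theta_T = 1.0305, 136.35

# geometric interpolation of the radius, linear interpolation of the angle
r = (r_B ** (1.0 - m)) * (r_T ** m)
theta = theta_B + m * (theta_T - theta_B)

# complex pole s2 (s3 is its conjugate)
s2 = r * cmath.exp(1j * math.radians(theta))

# real pole: its radius interpolates as 0.9416^m, the angle stays at 180 deg
s1 = -(0.9416 ** m)
```

Running it gives the TBT poles of Eq. 4.5.18 to four decimal places.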

The relation for the normalized frequency response magnitude is:

    |F(ω)| = 1 / √{ (σ1² + ω²) [σ2² + (ω − ω2)²] [σ2² + (ω + ω2)²] }       (4.5.19)

           = 1 / √{ (0.9704² + ω²) [0.6274² + (ω − 0.7980)²] [0.6274² + (ω + 0.7980)²] }

The magnitude plot is shown in Fig. 4.5.2 (TBT); for comparison, the magnitude
plots with Butterworth poles and modified Bessel poles are also drawn.
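A quick numerical check of Eq. 4.5.19 (a Python sketch using the pole values above; not from the original text) confirms that the response is normalized to unity at DC and rolls off at higher frequencies:

```python
import math

s1m, s2m, w2 = 0.9704, 0.6274, 0.7980   # |sigma1|, |sigma2|, omega2 of the TBT poles

def mag(w):
    """|F(w)| from Eq. 4.5.19."""
    d = (s1m**2 + w**2) * (s2m**2 + (w - w2)**2) * (s2m**2 + (w + w2)**2)
    return 1.0 / math.sqrt(d)

F0 = mag(0.0)           # DC value: the pole radii were chosen so that this is ~1
F1 = mag(1.0)           # response at omega = omega_h
```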
The normalized phase response is calculated as:

    φ = arctan(ω/σ1) + arctan[(ω − ω2)/σ2] + arctan[(ω + ω2)/σ2]

      = arctan[ω/(−0.9704)] + arctan[(ω − 0.7980)/(−0.6274)]
        + arctan[(ω + 0.7980)/(−0.6274)]                                   (4.5.20)
In Fig. 4.5.3 the phase plot of the transitional (TBT) system, together with the
plots for Butterworth and modified Bessel systems are drawn.
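When Eq. 4.5.20 is evaluated numerically, the two-argument arctangent should be used so that each term lands in the correct quadrant (all denominators are negative, since the poles lie in the left half plane). A minimal sketch (ours, not from the original text):

```python
import math

s1, s2, w2 = -0.9704, -0.6274, 0.7980   # sigma1, sigma2, omega2

def phase(w):
    """phi(w) in radians; each term is -arg(jw - s_i) for a pole s_i."""
    return -(math.atan2(w, -s1)
             + math.atan2(w - w2, -s2)
             + math.atan2(w + w2, -s2))

p0 = phase(0.0)        # zero at DC
p_inf = phase(1e6)     # approaches -3*pi/2 (-270 deg), as seen in Fig. 4.5.3
```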


[Fig. 4.5.2: Frequency response magnitude |F(ω)| vs. ω/ωh (log-log scale, 0.1 to 10)
of the transitional Bessel–Butterworth three-pole system (TBT), along with the
modified Bessel (T) and Butterworth (B) responses.]

[Fig. 4.5.3: Phase angle ϕ [°] vs. ω/ωh of the transitional Bessel–Butterworth
three-pole system (TBT), along with the Bessel (T) and Butterworth (B) phase;
all three curves approach −270° at high frequencies.]


The normalized envelope delay is calculated as:

    τe = σ1/(σ1² + ω²) + σ2/[σ2² + (ω − ω2)²] + σ2/[σ2² + (ω + ω2)²]       (4.5.21)

       = −0.9704/(0.9704² + ω²) − 0.6274/[0.6274² + (ω − 0.7980)²]
         − 0.6274/[0.6274² + (ω + 0.7980)²]

The envelope delay plot is shown in Fig. 4.5.4 (TBT), along with the delays for the
Butterworth and the modified Bessel system.
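Eq. 4.5.21 is just as easy to evaluate; the sketch below (ours) computes the low frequency delay, which matches the flat portion of the TBT curve in Fig. 4.5.4 (about −2.25 in normalized units):

```python
s1, s2, w2 = -0.9704, -0.6274, 0.7980   # sigma1, sigma2, omega2

def tau_e(w):
    """Normalized envelope delay, Eq. 4.5.21 (negative values = delay)."""
    return (s1 / (s1**2 + w**2)
            + s2 / (s2**2 + (w - w2)**2)
            + s2 / (s2**2 + (w + w2)**2))

tau0 = tau_e(0.0)      # low frequency (flat) delay, about -2.25
```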

[Fig. 4.5.4: Envelope delay τe·ωh vs. ω/ωh of the transitional Bessel–Butterworth
three-pole system (TBT), along with the Bessel (T) and Butterworth (B) delays.]

The starting point for the step response calculation is the general formula for a
three-pole function multiplied by the unit step operator 1/s:

    G(s) = −s1 s2 s3 / [s (s − s1)(s − s2)(s − s3)]                        (4.5.22)

We calculate the corresponding step response in the time domain by the ℒ⁻¹
transform, summing the residues at all four poles of G(s) e^(st) (the three system
poles plus the pole at s = 0 contributed by the unit step operator):

    g(t) = ℒ⁻¹{G(s)} = Σi resi [G(s) e^(st)]

         = Σi resi { −s1 s2 s3 e^(st) / [s (s − s1)(s − s2)(s − s3)] }     (4.5.23)
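For simple (non-repeated) poles the residue of G(s) e^(st) at a pole p is the limit of (s − p) G(s) e^(st) as s → p, so the whole sum in Eq. 4.5.23 can be evaluated numerically. A Python sketch (ours), using the TBT poles of Eq. 4.5.18:

```python
import cmath

s1 = -0.9704 + 0j
s2 = -0.6274 + 0.7980j
s3 = s2.conjugate()
poles = [0j, s1, s2, s3]          # the three system poles plus the 1/s step pole
num = -s1 * s2 * s3               # numerator of Eq. 4.5.22

def g(t):
    """Step response as the sum of residues of G(s)*exp(s*t) at simple poles."""
    total = 0j
    for i, p in enumerate(poles):
        denom = 1 + 0j
        for j, q in enumerate(poles):
            if i != j:
                denom *= (p - q)  # product of (p - q) over the other poles
        total += num * cmath.exp(p * t) / denom
    return total.real             # imaginary parts cancel in conjugate pairs

g0, g_end = g(0.0), g(20.0)       # must start at 0 and settle to 1
```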


After the sum of the residues is calculated, we insert the poles s1 = σ1 and
s2,3 = σ2 ± j ω2 to obtain (see Appendix 2.3):

    g(t) = 1 − σ1 √{[σ2(σ2 − σ1) − ω2²]² + ω2² (2σ2 − σ1)²}
               / {ω2 [(σ2 − σ1)² + ω2²]} · e^(σ2 t) sin(ω2 t + θ)
             − (σ2² + ω2²) / [(σ2 − σ1)² + ω2²] · e^(σ1 t)                 (4.5.24)

where the angle θ is:

    θ = π + arctan{ −ω2 (2σ2 − σ1) / [σ2(σ2 − σ1) − ω2²] }                 (4.5.25)

By inserting the numerical values of the poles from Eq. 4.5.18, we arrive at the
final relation:

    g(t) = 1 + 1.4211 e^(−0.6274 t) sin(0.7980 t + 2.8811) − 1.3660 e^(−0.9704 t)   (4.5.26)
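As a quick sanity check of Eq. 4.5.26 (a Python sketch; the constants are those derived above), the response must start at zero, settle to unity, and, being a transitional system, overshoot by an amount between the Bessel and the Butterworth values, here a few percent:

```python
import math

def g(t):
    """TBT step response, Eq. 4.5.26."""
    return (1.0
            + 1.4211 * math.exp(-0.6274 * t) * math.sin(0.7980 * t + 2.8811)
            - 1.3660 * math.exp(-0.9704 * t))

g0 = g(0.0)                                    # must be ~0
g_final = g(20.0)                              # must settle to ~1
peak = max(g(0.01 * k) for k in range(3001))   # crude overshoot scan, t in [0, 30]
```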

The plot based on this formula is shown in Fig. 4.5.5, (TBT). By inserting the
appropriate pole values in Eq. 4.5.24 we obtain the plots of Butterworth (B) and
modified Bessel (T) system’s step responses.

[Fig. 4.5.5: Step response g(t) vs. t/T of the transitional Bessel–Butterworth
three-pole system (TBT), along with the Bessel (T) and Butterworth (B) responses.]


4.6 Staggered vs. Repeated Bessel Pole Pairs

In order to compare the performance of an amplifier with staggered (index ‘s’)
vs. repeated Bessel pole pairs (index ‘r’) we need to compare the following two
frequency response functions:

    |Fs(ω)| = | s1 s2 ⋯ sn / [(s − s1)(s − s2) ⋯ (s − sn)] |               (4.6.1)

and:

    |Fr(ω)| = | (s1 s2)^(n/2) / [(s − s1)^(n/2) (s − s2)^(n/2)] |          (4.6.2)

where s = jω and n is an even integer (2, 4, 6, …). For a fair comparison we must
use the poles from Table 4.4.4, the Bessel poles normalized to the same cutoff
frequency.
The plots in Fig. 4.6.1 of these two functions were made by a computer, using
the numerical methods described in Part 6. From this figure it is evident that an
amplifier with staggered poles (as listed in Table 4.4.4 for each n) preserves the
intended bandwidth. On the other hand, the amplifier with the same total number of
poles, but of second order, repeated (n/2)-times, does not: its bandwidth shrinks with
each additional second-order stage. Obviously, for n = 2 the two systems are identical.
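The bandwidth shrinkage is easy to verify numerically. The sketch below (ours) assumes the 2-pole Bessel pair normalized to a unit −3 dB bandwidth, s1,2 ≈ −1.1016 ± j 0.6360 (the poles of s² + 3s + 3 scaled by its own −3 dB frequency, ≈ 1.3617), and finds the −3 dB frequency of k identical cascaded sections by bisection:

```python
import math

# 2-pole Bessel pair, normalized to a -3 dB bandwidth of 1 (assumed values)
sg, wg = 1.1016, 0.6360        # |sigma|, omega of the pair

def mag1(w):
    """Magnitude of one normalized 2-pole Bessel section (unity at DC)."""
    num = sg**2 + wg**2
    return num / math.sqrt((sg**2 + (w - wg)**2) * (sg**2 + (w + wg)**2))

def bw(k):
    """-3 dB frequency of k identical cascaded sections, found by bisection."""
    target = 2.0 ** -0.5
    lo, hi = 0.0, 2.0          # the magnitude is monotone, so bisection is safe
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if mag1(mid) ** k > target:
            lo = mid
        else:
            hi = mid
    return lo

bw1, bw2, bw3 = bw(1), bw(2), bw(3)   # bandwidth shrinks with every added section
```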
[Fig. 4.6.1: Frequency response magnitude vs. ω/ωh (log-log scale) of systems with
staggered poles, compared with systems with repeated second-order pole pairs:
a) 2-pole Bessel reference |F(ω)|;
staggered-pole Bessel |Fs(ω)|: b) 4-pole, c) 6-pole, d) 8-pole, e) 10-pole;
repeated 2-pole Bessel |Fr(ω)|: f) 2×2-pole, g) 3×2-pole, h) 4×2-pole, i) 5×2-pole.
The bandwidth of systems with repeated poles decreases with each additional stage.]

Even if the poles were of a different kind, e.g., Butterworth or Chebyshev poles,
the staggered poles would also preserve the bandwidth, but the system with repeated
second-order pole pairs would not. For the same total number of poles the curves tend
to the same cutoff asymptote (from Fig. 4.6.1 this is not evident, but it would have
been clear if the graphs had been plotted with an increased vertical scale, say, down
to 10⁻⁶).
In the time domain the increase of the rise time is even more evident. To compare
the step responses we take Eq. 4.6.1 and 4.6.2 (without the magnitude sign) and
multiply them by the unit step operator 1/s, obtaining:

    Gs(s) = s1 s2 ⋯ sn / [s (s − s1)(s − s2) ⋯ (s − sn)]                   (4.6.3)

and:

    Gr(s) = (s1 s2)^(n/2) / [s (s − s1)^(n/2) (s − s2)^(n/2)]              (4.6.4)

with n again being an even integer.
By using the ℒ⁻¹ transform, we obtain the step responses in the time domain:

    gs(t) = ℒ⁻¹{Gs(s)} = Σ res { s1 s2 ⋯ sn e^(st) / [s (s − s1)(s − s2) ⋯ (s − sn)] }      (4.6.5)

and:

    gr(t) = ℒ⁻¹{Gr(s)} = Σ res { (s1 s2)^(n/2) e^(st) / [s (s − s1)^(n/2) (s − s2)^(n/2)] }  (4.6.6)
The analytical calculation of these two equations, for n equal to 2, 4, 6, 8 and
10, should be pure routine by now (at least for readers who have followed the
calculations in Part 2 and Appendix 2.3; those who have skipped them will find
plenty of opportunities to revisit them later, when the need arises). In any case,
the job is done more easily by a computer, employing the numerical methods
described in Part 6. The plots obtained are shown in Fig. 4.6.2. The figure is
convincing enough and needs no further comment.
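The rise time penalty can also be demonstrated without computing any residues at repeated poles, since cascading identical stages is a convolution of their impulse responses. A rough numerical sketch (ours; the normalized 2-pole Bessel pair −1.1016 ± j 0.6360 is assumed), showing that the 10–90% rise time of two cascaded sections grows roughly by √2:

```python
import math

# One normalized 2-pole Bessel section with poles sg +/- j*wg;
# its impulse response is h(t) = (r^2/wg) * exp(sg*t) * sin(wg*t).
sg, wg = -1.1016, 0.6360
r2 = sg * sg + wg * wg

dt, N = 0.01, 1500                      # time grid covering 0 .. 15
h = [(r2 / wg) * math.exp(sg * k * dt) * math.sin(wg * k * dt) for k in range(N)]

def rise_time(stages):
    """10-90 % rise time of `stages` identical sections in cascade."""
    y = h[:]
    for _ in range(stages - 1):         # repeated convolution = cascading
        y = [dt * sum(y[j] * h[i - j] for j in range(i + 1)) for i in range(N)]
    g, acc = [], 0.0
    for v in y:                         # integrate impulse -> step response
        acc += v * dt
        g.append(acc)
    t10 = next(i for i, v in enumerate(g) if v >= 0.1) * dt
    t90 = next(i for i, v in enumerate(g) if v >= 0.9) * dt
    return t90 - t10

tr1 = rise_time(1)                      # single section, about 2.2 normalized units
tr2 = rise_time(2)                      # two sections: roughly sqrt(2) times longer
```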

[Fig. 4.6.2: Step response g(t) vs. t/T of systems with staggered poles
(n = 2, 4, 6, 8, 10), compared with systems with repeated second-order pole pairs.
The rise times of systems with repeated poles increase with each additional stage.]


4.6.1 Assigning the Poles For Maximum Dynamic Range

Readers who usually pay attention to details may have noted that we have listed
the poles in our tables (and also in figures) in a particular order. We have combined
them in complex conjugate pairs and listed them by the increasing absolute value of
their imaginary part. Yes, there is a reason for this, beyond pure aesthetics.
In general we can choose any number (m) of the total number (n) of poles in the
system and assign them to any one of the total number (k) of stages. Sometimes a
particular choice is limited by reasons other than gain and bandwidth: in
oscilloscopes, for example, the first stage is a JFET source follower, which provides
the required high input impedance; since it is usually difficult to design an
effective peaking network around a unity gain stage, this stage is almost universally
assigned one real pole. In most other cases the choice is governed mainly by the
gain × bandwidth product available for a given number of stages.
However, the main reason which dictates the particular pole ordering is the
dynamic range. Remember that in wideband amplifiers we are, more often than not, at
the limits of a system’s realizability. If we want to extract the maximum performance
from a system we should limit any overshoot at each stage to a minimum.
If we consider a rather extreme example, by putting the pole pair with the
highest imaginary part in the first amplifier stage, the step response of this stage would
exhibit a high overshoot. Consequently, the maximum amplitude which the system
could handle linearly would be reduced by the amount of that overshoot.
In order to make the argument clearer, let us take a 3-stage, 5-pole system
with Bessel poles (Table 4.4.3, n = 5) and analyze the step response of each stage
separately for two different assignments. In the first case we shall use a reversed pole
assignment: the pair s4,5 will be assigned to the first stage, the pair s2,3 to the second
stage and the real pole s1 to the last stage. In the second case we shall assign the poles
in the preferred order, the real pole first and the pair with the largest imaginary part last.
Our poles have the following numerical values:

    s1   = −3.6467
    s2,3 = −3.3520 ± j 1.7427
    s4,5 = −2.3247 ± j 3.7510
We can model the actual amplifier by three voltage driven current generators,
each loaded by an appropriate RC or RLC network. Since the passive components are
isolated by the generators, each stage response can be calculated separately. We thus
have one first-order and two second-order step response functions:

    g1(t) = Σ res { s1 e^(st) / [s (s − s1)] } = 1 − e^(s1 t)

    g2(t) = Σ res { s2 s3 e^(st) / [s (s − s2)(s − s3)] }
          = 1 + (1/|sin θ2|) e^(σ2 t) sin(ω2 t − θ2)

    g3(t) = Σ res { s4 s5 e^(st) / [s (s − s4)(s − s5)] }
          = 1 + (1/|sin θ4|) e^(σ4 t) sin(ω4 t − θ4)

where s2,3 = σ2 ± j ω2 and s4,5 = σ4 ± j ω4, while θ2 and θ4 are the corresponding
pole angles.
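The per-stage overshoot can be estimated from the pole angles alone: a second-order all-pole stage with poles σ ± jω overshoots its final value by a fraction exp(πσ/ω). The sketch below (our own check, using the pole values listed above) shows why the s4,5 stage overshoots by roughly 13–14%, while the s2,3 stage is practically free of overshoot:

```python
import math

def overshoot(sigma, omega):
    """Fractional first-peak overshoot of a 2nd-order all-pole stage
    with poles sigma +/- j*omega (sigma < 0)."""
    return math.exp(math.pi * sigma / omega)

ov23 = overshoot(-3.3520, 1.7427)   # s2,3 stage: essentially no overshoot
ov45 = overshoot(-2.3247, 3.7510)   # s4,5 stage: about 13-14 %
```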


In the reverse pole order case the first stage, with the pole pair s4,5, is excited by
the unit step input signal. We know that the second-order system from Table 4.4.3 has
an optimal step response; since the imaginary to real part ratio of s4,5 (their tan θ4) is
larger than for the poles of the optimal case, we expect the stage with s4,5 to exhibit
a pronounced overshoot.
In Fig. 4.6.3 we have drawn the response of each stage when it is driven
individually by the unit step input signal (the responses are gain-normalized to allow
comparison). It is evident that the stage with poles s4,5 has a 13% overshoot.

[Fig. 4.6.3: Step response (normalized gain vs. t/T) of each of the three stages
taken individually; the stage with the pole pair s4,5 overshoots by about 13%,
while the s2,3 and s1 stages do not overshoot.]

[Fig. 4.6.4: Step response (normalized gain vs. t/T) of the complete amplifier with
reverse pole order, shown after each stage. Although the second stage had no
overshoot of its own, it overshoots by nearly 5% when processing the v4,5 output
of the first stage.]


In Fig. 4.6.4 the step response of the complete amplifier is drawn, showing the
signal after each stage. Note that although the second stage exhibits no overshoot when
driven by the unit step (Fig. 4.6.3), it will overshoot by nearly 5% when driven by the
output of the first stage, v4,5. And the overshoot would be even higher if we were to put
the s1 stage in the middle.
The dynamic range of the input stage will therefore have to be larger by 13%
and that of the second stage by 5% in order to handle the signal linearly. Fortunately the
maximal input signal is equal to the maximal output divided by the total gain. If we
have followed the rule given by Eq. 4.1.33 and Fig. 4.1.9, the input stage will have only
1/3 of the total system gain, so its output amplitude will be only a fraction of the supply
voltage. On the other hand, the optimal stage gain is rather low, as given by Eq. 4.1.38,
so the dynamic range may become a matter of concern after all.
The circuit configuration which is most critical in this respect is the cascode
amplifier, since there are two transistors effectively in series with the power supply, so
the biasing must be carefully chosen. In traditional discrete circuits, with relatively high
supply voltages, the dynamic range was rarely a problem; the major concern was about
poor linearity for large signals, since no feedback was used. In modern ICs, with lots of
feedback and a supply of 5V or just 3V, the usable dynamic range can be critical.
We can easily prevent this limitation if we use the correct pole ordering, so that
the first stage has the real pole s1 and the last stage the pole pair s4,5. As we can see in
Fig. 4.6.5, the situation improves considerably, since in this case the two front stages
exhibit no overshoot, while the output overshoot is only 0.4%.
In a real amplifier, the pole assignment chosen can be affected by other factors;
e.g., the stage with the largest capacitance will require the poles with the lowest
imaginary part; alternatively, a lower loading resistor and thus lower gain can be chosen
for that stage.

[Fig. 4.6.5: Step response (normalized gain vs. t/T) of the complete amplifier, but
with the correct pole assignment: s1 first, s2,3 second, s4,5 last.]


Résumé of Part 4

The study of this part should have given the reader enough knowledge to acquire
an idea of how multi-stage amplifiers could be optimized by applying inductive peaking
circuits, discussed in Part 2, at each stage.
Also, the merit of using DC over AC coupled multi-stage amplifiers should be
clearly understood.
A proper pole pattern selection is of fundamental importance for the amplifier’s
performance. In particular, for a smooth, low overshoot transient response the envelope
delay extended flatness offered by the Bessel poles provides a nearly ideal solution,
approaching the ideal Gaussian response very quickly: with only 5 poles, the system’s
frequency and step response conform exceptionally well to the ideal, with a difference
of less than 1% throughout most of the transient.
Finally, the advantage of staggered vs. repeated pole pairs should be strictly
considered in the design of multi-stage amplifiers when gain × bandwidth efficiency is
the primary design goal. We hope that the reader has gained awareness of how the
bandwidth, achieved by hard work following the optimizations of each basic
amplifying stage given in Part 3, can easily be lost by a large factor if the stages are
not coupled correctly.
A few examples of how these principles are used in practice are given in Part 5.


P. Starič, E. Margan:

Wideband Amplifiers

Part 5:

System Synthesis And Integration

Any sufficiently advanced technology is indistinguishable from magic.


Arthur C. Clarke
(Profiles of the Future: An Inquiry
into the Limits of the Possible, 1973)
P.Starič, E.Margan System synthesis and integration

Contents ................................................................................................................................. 5.3


List of Figures ........................................................................................................................ 5.4
List of Tables ......................................................................................................................... 5.6

Contents:

5.0 ‘The Product is Greater than the Sum of its Parts’ ....................................................................... 5.7
5.1 Geometrical Synthesis of Inductively Compensated Multi-stage Amplifiers —
A Simple Example ........................................................................................................................ 5.9
5.2 High Input Impedance Selectable Attenuator with a JFET Source Follower .............................. 5.25
5.2.1 Attenuator High Frequency Compensation ................................................................ 5.26
5.2.2 Attenuator Inductance Loops ..................................................................................... 5.37
5.2.3 The ‘Hook-Effect’ ...................................................................................................... 5.41
5.2.4 Improving the JFET Source Follower DC Stability ................................................... 5.43
5.2.5 Overdrive Recovery ................................................................................................... 5.47
5.2.6 Source Follower with MOSFETs ............................................................................... 5.48
5.2.7 Input Protection Network ........................................................................................... 5.51
5.2.8 Driving the Low Impedance Attenuator ..................................................................... 5.53
5.3 High Speed Operational Amplifiers ............................................................................................ 5.57
5.3.1 The Classical Opamp ................................................................................................. 5.57
5.3.2 Slew Rate Limiting .................................................................................................... 5.61
5.3.3 Current Feedback Amplifiers ..................................................................................... 5.62
5.3.4 Influence of a Finite Inverting Input Resistance ........................................................ 5.69
5.3.5 Noise Gain and Amplifier Stability Analysis ............................................................. 5.72
5.3.6 Feedback Controlled Gain Peaking ............................................................................ 5.79
5.3.7 Improved Voltage Feedback Amplifiers .................................................................... 5.80
5.3.8 Compensating Capacitive Loads ................................................................................ 5.83
5.3.9 Fast Overdrive Recovery ............................................................................................ 5.89
5.4 Improving Amplifier Linearity .................................................................................................... 5.93
5.4.1 Feedback and Feedforward Error Correction ............................................................. 5.94
5.4.2 Error Reduction Analysis ........................................................................................... 5.98
5.4.3 Alternative Feedforward Configurations .................................................................. 5.100
5.4.4 Time Delay Compensation ....................................................................................... 5.103
5.4.5 Circuits With Local Error Correction ...................................................................... 5.104
5.4.6 The Tektronix M377 IC ........................................................................................... 5.117
5.4.7 The Gilbert Multiplier .............................................................................................. 5.123
Résumé of Part 5 .............................................................................................................................. 5.129
References ........................................................................................................................................ 5.131


List of Figures:

Fig. 5.1.1: A two-stage differential cascode 7–pole amplifier ........................................................... 5.10


Fig. 5.1.2: The 7 normalized Bessel–Thomson poles and their characteristic circles ....................... 5.11
Fig. 5.1.3: The 3–pole stage realization ............................................................................................ 5.14
Fig. 5.1.4: The 4–pole L+T-coil stage ............................................................................................... 5.15
Fig. 5.1.5: Polar plot of the actual 7 poles and the 2 real poles ......................................................... 5.18
Fig. 5.1.6: Frequency response comparison ...................................................................................... 5.19
Fig. 5.1.7: Step response comparison ................................................................................................ 5.19
Fig. 5.1.8: Two pairs of gain normalized step responses ................................................................... 5.20
Fig. 5.1.9: Two pairs of step responses with different gain ............................................................... 5.21
Fig. 5.1.10: Two pairs of step responses with similar gain ................................................................ 5.21
Fig. 5.1.11: The influence of the real pole on the step response ........................................................ 5.23
Fig. 5.1.12: Making the deflection plates in sections ........................................................................ 5.24
Fig. 5.2.1: A typical conventional oscilloscope input section ............................................................. 5.26
Fig. 5.2.2: The 10:1 attenuator frequency compensation ................................................................... 5.26
Fig. 5.2.3: Attenuator frequency response (magnitude, phase and envelope delay) .......................... 5.31
Fig. 5.2.4: Attenuator step response .................................................................................................. 5.33
Fig. 5.2.5: Compensating the 50 Ω signal source impedance ............................................ 5.34
Fig. 5.2.6: Switching the three attenuation paths (1:1, 10:1, 100:1) .................................................. 5.35
Fig. 5.2.7: Attenuator with no direct path .......................................................................................... 5.36
Fig. 5.2.8: Inductances due to circuit loops .................................................... 5.37
Fig. 5.2.9: The attenuator and the JFET source follower inductance loop damping .......................... 5.40
Fig. 5.2.10: Step response of the circuit in Fig. 5.2.9 ........................................................................ 5.41
Fig. 5.2.11: The ‘Hook effect’ ........................................................................................................... 5.42
Fig. 5.2.12: Canceling the Hook effect in common PCB material ..................................................... 5.42
Fig. 5.2.13: Simple offset trimming of a JFET source follower ........................................................ 5.43
Fig. 5.2.14: Active DC correction loop ............................................................................................. 5.44
Fig. 5.2.15: The principle of separate low pass and high pass amplifiers .......................................... 5.45
Fig. 5.2.16: Elimination of the high pass amplifier ........................................................................... 5.46
Fig. 5.2.17: The error amplifier becomes an inverting integrator ...................................................... 5.46
Fig. 5.2.18: High amplitude step response non-linearity ................................................................... 5.47
Fig. 5.2.19: A typical n-channel JFET structure cross-section and circuit model .............................. 5.48
Fig. 5.2.20: A typical n-channel MOSFET cross-section and circuit model ..................................... 5.50
Fig. 5.2.21: Input protection from electrostatic discharge ................................................................. 5.52
Fig. 5.2.22: Input protection for long term overdrive ........................................................................ 5.52
Fig. 5.2.23: JFET source follower with a buffer ................................................................................ 5.54
Fig. 5.2.24: The low impedance attenuator ....................................................................................... 5.55
Fig. 5.3.1: The classical opamp, simplified circuit ............................................................................ 5.58
Fig. 5.3.2: Typical opamp open loop gain and phase compared to the closed loop gain ................... 5.60
Fig. 5.3.3: Slew rate in an opamp with a current mirror .................................................................... 5.62
Fig. 5.3.4: A current feedback amplifier model derived from a voltage feedback opamp ................. 5.62
Fig. 5.3.5: A fully complementary current feedback amplifier model ............................................... 5.63
Fig. 5.3.6: Cross-section of the Complementary Bipolar Process ..................................................... 5.64
Fig. 5.3.7: The four components of CT .............................................................. 5.64
Fig. 5.3.8: Current feedback model used for analysis ........................................................................ 5.65
Fig. 5.3.9: Current on demand operation during the step response .................................................... 5.68
Fig. 5.3.10: Comparison of gain vs. bandwidth for a conventional and a CF amplifier .................... 5.68
Fig. 5.3.11: Non-zero inverting input resistance ................................................................................ 5.69
Fig. 5.3.12: Actual CFB amplifier bandwidth due to non-zero inverting input resistance .............. 5.72
Fig. 5.3.13: VFB and CFB amplifiers with capacitive feedback ....................................................... 5.73
Fig. 5.3.14: Noise gain definition ...................................................................................................... 5.73
Fig. 5.3.15: An arbitrary example of the phase–magnitude relationship ........................................... 5.74
Fig. 5.3.16: VFB amplifier noise gain derived .................................................................................. 5.75
Fig. 5.3.17: CFB amplifier noise impedance derived ........................................................................ 5.76
Fig. 5.3.18: CFB amplifier and its noise impedance equivalent ........................................................ 5.77
Fig. 5.3.19: Functionally equivalent circuits using VF and CF amplifiers ........................................ 5.78
Fig. 5.3.20: Single resistor bandwidth adjustment for CFB amps ..................................................... 5.79


Fig. 5.3.21: Frequency and step response of the amplifier in Fig. 5.3.20 .......................................... 5.79
Fig. 5.3.22: Improved VF amplifier using folded cascode ................................................................ 5.80
Fig. 5.3.23: Improved VF amplifier derived from a CFB amp .......................................................... 5.81
Fig. 5.3.24: The ‘Quad Core’ structure ............................................................................................. 5.82
Fig. 5.3.25: Output buffer stage with improved current handling ...................................................... 5.83
Fig. 5.3.26: Capacitive load adds a pole to the feedback loop ............................................ 5.83
Fig. 5.3.27: Amplifier stability driving a capacitive load .................................................................. 5.86
Fig. 5.3.28: Capacitive load compensation by inductance ................................................................. 5.86
Fig. 5.3.29: Capacitive load compensation by separate feedback paths ............................................ 5.87
Fig. 5.3.30: Adaptive capacitive load compensation ......................................................................... 5.88
Fig. 5.3.31: Amplifier with feedback controlled output level clipping .............................................. 5.89
Fig. 5.3.32: Amplifier with output level clipping using separate supplies ......................................... 5.90
Fig. 5.3.33: CFB amplifier output level clipping at ZT ..................................................... 5.90
Fig. 5.4.1: Amplifiers with feedback and feedforward error correction ............................................ 5.94
Fig. 5.4.2: Closed loop frequency response of a feedback amplifier ................................................. 5.95
Fig. 5.4.3: Optimized feedforward amplifier frequency response ..................................................... 5.98
Fig. 5.4.4: Grounded load feedforward amplifier ............................................................................ 5.100
Fig. 5.4.5: ‘Error take off’ principle ................................................................................................ 5.102
Fig. 5.4.6: ‘Current dumping’ principle (Quad 405) ....................................................................... 5.103
Fig. 5.4.7: Time delay compensation of the feedforward amplifier ................................................. 5.104
Fig. 5.4.8: Simple differential amplifier .......................................................................................... 5.105
Fig. 5.4.9: DC transfer function of a differential amplifier .............................................................. 5.106
Fig. 5.4.10: Test set up used to compare different amplifier configurations ................................... 5.108
Fig. 5.4.11: Simple differential cascode amplifier used as the reference ......................................... 5.108
Fig. 5.4.12: Frequency response of the reference amplifier ............................................................. 5.109
Fig. 5.4.13: Step response of the reference amplifier ...................................................................... 5.109
Fig. 5.4.14: Improved Darlington .................................................................................................... 5.110
Fig. 5.4.15: Frequency response of the improved Darlington differential amplifier ........................ 5.110
Fig. 5.4.16: Step response of the improved Darlington differential amplifier ................................. 5.110
Fig. 5.4.17: Differential amplifier with feedforward error correction ............................................. 5.111
Fig. 5.4.18: Differential amplifier with double error feedforward ................................................... 5.111
Fig. 5.4.19: Frequency response of the differential amplifier with double feedforward .................. 5.112
Fig. 5.4.20: Step response of the differential amplifier with double feedforward ........................... 5.112
Fig. 5.4.21: Cascomp (compensated cascode) amplifier ................................................................. 5.113
Fig. 5.4.22: Frequency response of the Cascomp amplifier ............................................................. 5.113
Fig. 5.4.23: Step response of the Cascomp amplifier ...................................................................... 5.113
Fig. 5.4.24: Differential cascode amplifier with error feedback ...................................................... 5.114
Fig. 5.4.25: Modified error feedforward with direct error sensing and direct correction ................ 5.114
Fig. 5.4.26: Cascomp evolution with feedback ................................................................................ 5.115
Fig. 5.4.27: Frequency response of the Cascomp evolution ............................................................ 5.115
Fig. 5.4.28: Step response of the Cascomp evolution ...................................................................... 5.115
Fig. 5.4.29: Differential cascode amplifier with output impedance compensator ............................ 5.116
Fig. 5.4.30: Frequency response of the amplifier with output impedance compensator .................. 5.116
Fig. 5.4.31: Step response of the amplifier with output impedance compensator ............................ 5.116
Fig. 5.4.32: Basic wideband amplifier block of the M377 IC ......................................................... 5.118
Fig. 5.4.33: The basic M377 block as a compound transistor ......................................................... 5.119
Fig. 5.4.34: The M377 amplifier with fast overdrive recovery ........................................................ 5.120
Fig. 5.4.35: Simulation of the M377 amplifier frequency response ................................................ 5.121
Fig. 5.4.36: Simulation of the M377 amplifier step response .......................................................... 5.121
Fig. 5.4.37: Gain switching in the M377 amplifier .......................................................................... 5.122
Fig. 5.4.38: Fixed gain with R–2R attenuator switching ................................................................ 5.122
Fig. 5.4.39: The Gilbert multiplier development ............................................................................. 5.124
Fig. 5.4.40: DC transfer function of the Gilbert multiplier .............................................................. 5.126
Fig. 5.4.41: The Gilbert multiplier has almost constant bandwidth ................................................. 5.126
Fig. 5.4.42: Another way of developing the Gilbert multiplier ........................................................ 5.127
Fig. 5.4.43: The four quadrant multiplier ........................................................................................ 5.127
Fig. 5.4.44: The two quadrant multiplier used in M377 .................................................................. 5.128

- 5.5 -
P.Stariè, E.Margan System synthesis and integration

List of Tables:

Table 5.1.1: Comparison of component values for different pole assignments ................................. 5.22
Table 5.3.1: Typical production parameters of the Complementary-Bipolar Process ....................... 5.64


5.0 ‘The Product is Greater than The Sum of its Parts’

... and that can be true in both the mathematical and technological sense! Well,
in math, at least as long as we are dealing with numbers greater than two; but in
technology the goal might not be so straightforward and neither are the means of
achieving it.
Electronics engineering is a difficult and demanding job. Most electronics
engineers pay attention to the input and output constraints imposed by the real world
as a matter of course. Many will also silently nod their head when the components
logistician tells them to use another part instead of the one originally specified, just
because it's cheaper. A number of them will also agree to take it into account when the
marketing manager tells them that the customer wants and expects from the product a
feature which was not foreseen initially as one of the design goals. And almost all will
shrug their shoulders when the chief director announces that the financial resources
for their project have been cut low or even that the project has been canceled. But
almost all will go mad when the mechanical engineer or the enclosure designer
casually stops by and asks if a switch or a pot could be moved from the left side of the
printed circuit board to the right. Fortunately for the electronics engineer, he has on
his side the most powerful argument of all: “Yeah, maybe I could do that, but
probably the final performance would suffer!” Electronics engineering is a difficult
and demanding job, indeed.
In the past 50 years electronics engineers have been delivering miracles at an
ever increasing rate. Not just the general public, but also other people involved within
the electronics industry have become accustomed to this. In the mid 1980s no one ever
asked you if something could be done; instead, the question was ‘for how much and
when?’. Today, no one asks ‘how much’ either — it has to be a bargain (between
brothers) and it had better be ready yesterday!
How many of you could give a name or two of engineers who became rich or
just famous in the last 50 or so years? Let us see: William R. Hewlett was famous, but
that was probably due more to his name in the Hewlett–Packard firm’s title and less to
the actual recognition of his work by the general public. Then there is Ray Dolby,
known for his noise reduction system. Gordon Moore of Intel is known for his ‘law’.
Errr ... who else? Oh, yes, Bill Gates is both rich and famous, but he is more famous
because he is so rich and far less for his own work!
Do you see what we mean? No doubt, many engineers have started a very
profitable business based on their own original circuit ideas. True, most commercially
successful products are the result of a team effort; still, the key solution is often born
in the mind of a gifted individual. But, frankly speaking, is there a single engineer or
scientist who could attract 50,000 delirious spectators to a stadium every Sunday?
OK, 5,000? Maybe Einstein could have been able to do so, but even he was
considered to be ‘a bit nuts’. Well, there you have it, the offer–demand economy.
In this book we have tried to pay a humble tribute to a small number of
amplifier designers. But our readers are engineers themselves, so we are preaching to
the already converted. Certainly we are not going to influence, let alone reverse, any
of those trends.
So, if tomorrow your boss tells you that he will be paying you 5% less, shrug
your shoulders and go back to your drawing board. And do not forget to stop by the
front-panel designer to tell him that he is asking too much!


5.1 Geometrical Synthesis of Inductively Compensated Multi-Stage Amplifiers — A Simple Example

The reader who has patiently followed the discussion presented in previous
chapters is probably eager to see all that theory being put into practice.
Before jumping to some more complex amplifier circuits we shall give a
relatively simple example of a two-stage differential cascode amplifier, with which
we shall illustrate the actual system optimization procedure in some detail,
exploiting the previously developed principles to their full potential.
Since we want to grasp the ‘big picture’ we shall have to leave out some less
important topics, such as negative input impedance compensation, cascode damping,
etc.; these are important for the optimization of each particular stage which, once
optimized, can be idealized to some extent. We have covered that extensively enough
in Part 3, so we shall not explicitly draw the associated components in the schematic
diagram. But, at the end of our calculations, we shall briefly discuss the influence of
those components on the final circuit values.
A two-stage amplifier is a ‘minimum complexity’ system for which the
multi-stage design principles still apply. To this we shall add a 3-pole T-coil and a
4-pole L+T-coil peaking network, discussed in Part 2, as loads of the two stages,
making a total of 7 poles. There is, however, an additional real pole, owed to the Q1
input capacitance and the total input and signal source resistance. As we shall see
later, this pole can be neglected if its distance from the complex plane’s origin is at
least twice as large as that of the system real pole set by 1/(R_a C_a).
Such an amplifier thus represents an elementary example in which everything
that we have learned so far can be applied. The reader should, however, be aware that
this is by no means the ideal or, worse still, the only possibility. At the end of our
calculations, when we shall be able to assess the advantages and limitations offered by
our initial choices at each stage, we shall examine a few possibilities of further
improvement.
We shall start our calculations from the unavoidable stray capacitances and the
desired total voltage gain. Then we shall apply an optimization process, which we like
to refer to as the geometrical synthesis, by which we shall calculate all the remaining
circuit components in such a way that the resulting system will conform to the 7-pole
normalized Bessel–Thomson system. The only difference will be that the actual
amplifier poles will be larger by a certain factor, proportional (but not equal) to the
upper half power frequency ω_H. We have already met the geometrical synthesis in its
basic form in Part 2, Fig. 2.5.3, when we were discussing the 3-pole T-coil circuit. The
name springs from the ability to calculate all the peaking network components from
simple geometrical relations which involve the pole real and imaginary components,
given, of course, the desired pole pattern and a few key component values which can
either be chosen independently or set by other design requirements. Here we are going
to see a generalization of those basic relations applied to the whole amplifier.
It must be admitted that the constant and real input impedance of the T-coil
network is the main factor which allows us to assign so many poles to only two stages.
A cascade of passive 2-pole sections could have been used, but those would load each


other and, as a result, the bandwidth extension factor would suffer. Another possibility
would be to use an additional cascode stage to separate the last two peaking sections,
but another active stage, whilst adding gain, also adds its own problems to be taken
care of. It is, nevertheless, a perfectly valid option.
Let us now take a quick tour of the amplifier schematic, Fig. 5.1.1. We have
two differential cascode stages and two current sources, which set both the transistors'
transconductance and the maximum current available to the load resistors, R_a and R_b.
This limits the voltage range available to the CRT. Since the circuit is differential, the
total gain is double that of each half. The total DC gain is (approximately):

   A_0 = 2 · (R_a R_b)/(R_e1 R_e3)                                    (5.1.1)

The values of R_e1 and R_e3 set the required capacitive bypasses, C_e1/2 and
C_e3/2, to match the transistors' time constants. In turn, this sets the input capacitance
at the base of Q1 and Q3, to which we must add the inevitable C_cb and some strays.
Fig. 5.1.1: A simple 2-stage differential cascode amplifier with a 7-pole peaking system: the
3-pole T-coil inter-stage peaking (between the Q2 collector and the Q3 base) and the 4-pole
L+T-coil output peaking (between the Q4 collector and the vertical plates of the cathode ray
tube). The schematic was simplified to emphasize the important design aspects — see text.

The capacitance C_d should thus consist, preferably, of only the input
capacitance at the base of Q3. If required by the coil ‘tuning’, a small capacitance can
be added in parallel, but that would also reduce the bandwidth. Note that the
associated T-coil L_d will have to be designed as an inter-stage peaking network, as we
have discussed in Part 3, Sec. 3.6, but we can leave the necessary corrections for the end.
The capacitance C_b, owed almost entirely to the CRT vertical plates, is much
larger than C_d, so we expect that R_a and R_b cannot be equal. From this it follows that


it might be difficult to apply equal gain to each stage in accordance with the principle
explained in Part 4, Eq. 4.1.39. Nevertheless, the difference in gain will not be too
high, as we shall see.
Like any other engineering process, geometrical synthesis also starts from
some external boundary conditions which set the main design goal. In this case it is
the CRT’s vertical sensitivity and the available input voltage, from which the total
gain is defined. The next condition is the choice of transistors by which the available
current is defined. Both the CRT and the transistors set the lower limit of the loading
capacitances at various nodes. From these the first circuit component R_b is fixed.
With R_b fixed we arrive at the first ‘free’ parameter, which can be represented
by several circuit components. However, since we would like to maximize the
bandwidth this parameter should be attributed to one of the capacitances. By
comparing the design equations for the 3-pole T-coil and the 4-pole L+T-coil peaking
networks in Part 2, it can be deduced that C_a, the input capacitance of the 3-pole
section, is the most critical component.
With these boundaries set let us assume the following component values:

   C_b = 11 pF   (9 pF of the CRT vertical plates, 2 pF stray)
   C_a = 4 pF    (3 pF from the Q2 C_cb, 1 pF stray)                  (5.1.2)
   R_b = 360 Ω   (determined by the desired gain and the available current)
The pole pattern is, in general, another ‘free’ parameter, but for a smooth,
minimum overshoot transient we must apply the Bessel–Thomson arrangement. As
can be seen in Fig. 5.1.2, each pole (pair) defines a circle going through the pole and
the origin, with the center on the negative real axis.
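This geometry is easy to verify numerically. The following small Python check (our own illustration, not the book's code, using the pole pair s_b from Eq. 5.1.3 below) confirms that the diameter of the circle through a pole and the origin equals σ/cos²θ:

```python
import math

# pole pair s_b of the normalized 7th-order Bessel-Thomson set (Eq. 5.1.3)
sigma, omega = 4.7583, 1.7393      # |real| and |imaginary| parts

# A circle through the origin and the pole, centered on the real axis,
# has its diameter D fixed by: (D/2 - sigma)^2 + omega^2 = (D/2)^2
D_geom = (sigma**2 + omega**2) / sigma

# The same diameter via the pole angle offset from the real axis:
phi = math.atan2(omega, sigma)     # 20.0787 degrees for s_b
D_angle = sigma / math.cos(phi)**2

print(round(D_geom, 4), round(D_angle, 4))  # both 5.3941
```

The two expressions agree because cos²θ = σ²/(σ² + ω²), so σ/cos²θ = (σ² + ω²)/σ.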

[Fig. 5.1.2 plot: the seven normalized poles in the complex plane; each pole (pair)
lies on a circle through the origin of diameter K/(RC), with K = 1 for a single real
pole, K = 2 for series peaking, and K = 4 for T-coil peaking.]
Fig. 5.1.2: The 7 normalized Bessel–Thomson poles. The characteristic circle
of each pole (pair) has a diameter determined by the appropriate RC constant
and the peaking factor K, which depends on the type of network chosen.


The poles in Fig. 5.1.2 bear the index of the associated circuit components and
the reader might wonder why we have chosen precisely that assignment.
In a general case the assignment of a pole (pair) to a particular circuit section
is yet another ‘free’ design parameter. If we were designing a low frequency filter we
could indeed have chosen an arbitrary assignment (as long as each complex conjugate
pole pair is assigned as a pair, a limitation owed to physics, instead of circuit theory).
If, however, the bandwidth is an issue then we must seek those nodes with the
largest capacitances and apply the poles with the lowest imaginary part to those circuit
sections. This is because the capacitor impedance (which is dominantly imaginary) is
inversely proportional both to the capacitor value and the signal frequency.
In this light the largest capacitance is at the CRT, that is, C_b; thus the pole pair
with the lowest imaginary part is assigned to the output T-coil section, formed by L_b
and R_b, therefore acquiring the index ‘b’: s_1b and s_2b.
The real pole is the one associated with the 3-pole stage, where it is set by the
loading resistor R_a and the input capacitance C_a, becoming s_a.
The remaining two pole pairs should be assigned so that the pair with the
larger imaginary part is applied to that peaking network which has a larger bandwidth
improvement factor. Here we must consider that K = 4 for a T-coil, whilst K = 2 for
the series peaking L-section (of the 4-pole L+T-section). Clearly the pole pair with the
larger imaginary part should be assigned to the inter-stage T-coil, L_d, thus they are
labeled s_1d and s_2d. The L-section then receives the remaining pair, s_1c and s_2c.
We have thus arrived at a solution which seems logical, but in order to be sure
that we have made the right choice we should check other combinations as well. We
are going to do so at the end of the design process.
The poles for the normalized 7th -order Bessel–Thomson system, as taken
either from Part 4, Table 4.4.3, or by using the BESTAP (Part 6) routine, along with
the associated angles, are:
   s_a = σ_a = −4.9718                          θ_a = 180°
   s_b = σ_b ± jω_b = −4.7583 ± j1.7393         θ_b = 180° ∓ 20.0787°
   s_c = σ_c ± jω_c = −4.0701 ± j3.5172         θ_c = 180° ∓ 40.8316°
   s_d = σ_d ± jω_d = −2.6857 ± j5.4207         θ_d = 180° ∓ 63.6439°    (5.1.3)
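These normalized poles are the roots of the 7th-order reversed Bessel polynomial. As an independent cross-check (a Python sketch of ours, not the BESTAP routine of Part 6), they can be recomputed with a simple Durand–Kerner root finder:

```python
from math import factorial

def reversed_bessel(n):
    # coefficients of the reversed Bessel polynomial, highest power first:
    # a_k = (2n - k)! / (2^(n-k) k! (n-k)!)
    return [factorial(2*n - k) // (2**(n - k) * factorial(k) * factorial(n - k))
            for k in range(n, -1, -1)]

def roots_durand_kerner(coeffs, tol=1e-12, iters=500):
    # simultaneous iteration for all roots of a monic polynomial
    n = len(coeffs) - 1
    z = [(0.4 + 0.9j) ** k for k in range(n)]   # distinct starting points
    for _ in range(iters):
        znew = []
        for i in range(n):
            p = 0j
            for c in coeffs:                     # Horner evaluation
                p = p * z[i] + c
            d = 1 + 0j
            for j in range(n):
                if j != i:
                    d *= z[i] - z[j]
            znew.append(z[i] - p / d)
        if max(abs(a - b) for a, b in zip(znew, z)) < tol:
            return znew
        z = znew
    return z

poles = roots_durand_kerner(reversed_bessel(7))
```

Sorting the returned roots by real part reproduces the values of Eq. 5.1.3 to four decimal places.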

So, let us now express the basic design equations by the assigned poles and the
components of the two peaking networks.
For the real pole s_a we have the following familiar proportionality:

   s_a = σ_a = D_a = −4.9718 ∝ −1/(R_a C_a)                           (5.1.4)

At the output T-coil section, according to Part 2, Fig. 2.5.3, we have:

   D_b = σ_b/cos²θ_b = −4.7583/0.8821 = −5.3941 ∝ −4/(R_b C_b)        (5.1.5)

For the L-section of the L+T output network, because the T-coil input
impedance is equal to the loading resistor, we have:

   D_c = σ_c/cos²θ_c = −4.0701/0.5725 = −7.1094 ∝ −2/(R_b C_c)        (5.1.6)

And finally, for the inter-stage T-coil network:

   D_d = σ_d/cos²θ_d = −2.6857/0.1970 = −13.6333 ∝ −4/(R_a C_d)       (5.1.7)

From these relations we can calculate the required values of the remaining
capacitances, C_c and C_d. If we divide Eq. 5.1.5 by Eq. 5.1.6, we have the ratio:

   D_b/D_c = (−4/(R_b C_b)) / (−2/(R_b C_c)) = 2 C_c/C_b              (5.1.8)

It follows that the capacitance C_c should be:

   C_c = (C_b/2)·(D_b/D_c) = (11/2)·(−5.3941/−7.1094) = 4.1730 pF     (5.1.9)

Likewise, if we divide Eq. 5.1.4 by Eq. 5.1.7, we obtain:

   D_a/D_d = (−1/(R_a C_a)) / (−4/(R_a C_d)) = C_d/(4 C_a)            (5.1.10)

Thus C_d will be:

   C_d = 4 C_a·(D_a/D_d) = 4·4·(−4.9718/−13.6333) = 5.8349 pF         (5.1.11)

Of course, for most practical purposes, the capacitances do not need to be
calculated to such precision; a resolution of 0.1 pF should be more than enough. But
we would like to check our procedure by recalculating the actual poles from the
circuit components, and for that purpose we shall need this precision.

Now we need to know the value of R_a. This can be readily calculated from the
ratio D_a/D_b:

   D_a/D_b = (−1/(R_a C_a)) / (−4/(R_b C_b)) = R_b C_b/(4 R_a C_a)    (5.1.12)

resulting in:

   R_a = (R_b C_b/(4 C_a))·(D_b/D_a) = ((360·11)/(4·4))·(−5.3941/−4.9718) = 268.5 Ω    (5.1.13)
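The chain of ratios in Eqs. 5.1.8–5.1.13 is compact enough to restate as a short script (our own Python sketch, not the book's code); it reproduces C_c, C_d and R_a directly from the fixed components of Eq. 5.1.2 and the normalized poles:

```python
C_b, C_a, R_b = 11e-12, 4e-12, 360.0     # fixed values from Eq. 5.1.2

# normalized pole magnitudes (|real|, |imaginary|) from Eq. 5.1.3
poles = {'a': (4.9718, 0.0), 'b': (4.7583, 1.7393),
         'c': (4.0701, 3.5172), 'd': (2.6857, 5.4207)}

def diameter(sigma, omega):
    # characteristic circle diameter |D| = sigma / cos^2(theta)
    return (sigma**2 + omega**2) / sigma

D = {k: diameter(*v) for k, v in poles.items()}

C_c = C_b / 2 * D['b'] / D['c']                  # Eq. 5.1.9  -> 4.173 pF
C_d = 4 * C_a * D['a'] / D['d']                  # Eq. 5.1.11 -> ~5.84 pF
R_a = R_b * C_b / (4 * C_a) * D['b'] / D['a']    # Eq. 5.1.13 -> ~268.5 ohm
```

The small differences in the last digit with respect to the text come only from the 4-digit rounding of the pole values.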


We are now ready to calculate the inductances L_b, L_c and L_d. For the two
T-coils we can use Eq. 2.4.19:

   L_b = R_b² C_b = 360²·11·10⁻¹² = 1.4256 µH                         (5.1.14)

and:

   L_d = R_a² C_d = 268.5²·5.8349·10⁻¹² = 0.4206 µH                   (5.1.15)

For L_c we use Eq. 2.2.26 to obtain the proportionality factor of the RC constant:

   L_c = ((1 + tan²θ_c)/4)·R_b² C_c = 360²·4.1730·10⁻¹²/(4·0.5725) = 0.2362 µH    (5.1.16)
The magnetic coupling factors for the two T-coils are calculated by Eq. 2.4.36:

   k_b = (3 − tan²θ_b)/(5 + tan²θ_b) = (3 − 0.1336)/(5 + 0.1336) = 0.5584    (5.1.17)

and likewise:

   k_d = (3 − tan²θ_d)/(5 + tan²θ_d) = (3 − 4.0738)/(5 + 4.0738) = −0.1183   (5.1.18)
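The inductances and coupling factors follow the same pattern. This Python continuation (again our own sketch, with the component values repeated so that it runs stand-alone) evaluates Eqs. 5.1.14–5.1.18:

```python
# Inductances and coupling factors, Eqs. 5.1.14-5.1.18
R_a, R_b = 268.5, 360.0
C_b, C_c, C_d = 11e-12, 4.1730e-12, 5.8349e-12

# tan^2 of the pole angle offsets from the negative real axis
tan2_b = (1.7393 / 4.7583)**2       # pair assigned to the output T-coil
tan2_c = (3.5172 / 4.0701)**2       # pair assigned to the series L-section
tan2_d = (5.4207 / 2.6857)**2       # pair assigned to the inter-stage T-coil

L_b = R_b**2 * C_b                      # Eq. 5.1.14, T-coil: L = R^2 C
L_d = R_a**2 * C_d                      # Eq. 5.1.15
L_c = (1 + tan2_c) / 4 * R_b**2 * C_c   # Eq. 5.1.16, series peaking section

k_b = (3 - tan2_b) / (5 + tan2_b)       # Eq. 5.1.17
k_d = (3 - tan2_d) / (5 + tan2_d)       # Eq. 5.1.18, negative!
```

Note that L_c uses the angle of the pole pair c, the pair actually assigned to the series peaking L-section.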

Note that k_d is negative. This means that, instead of the usually negative
mutual inductance, we need a positive inductance at the T-coil tap. This can be
achieved by simply mounting the two halves of L_d perpendicular to each other, in
order to have zero magnetic coupling, and then introduce an additional coil, L_e (again
perpendicular to both halves of L_d), with a value of the required positive mutual
inductance, as can be seen in Fig. 5.1.3. Another possibility would be to wind the two
halves of L_d in opposite direction, but then the bridge capacitance C_bd might be
difficult to realize correctly.

Fig. 5.1.3: With the assigned poles and the resulting particular component values the 3-pole
stage magnetic coupling k_d needs to be negative, which forces us to use non-coupled coils
and add a positive mutual inductance L_e. Even with a negative k_d the T-coil reflects its
resistive load to the network input, greatly simplifying the calculation of component values.

The additional inductance L_e can be calculated from the required mutual
inductance given by the negative value of k_d. In Part 2, Eq. 2.4.1–2.4.5, we have
defined the T-coil inductance, its two halves, and its mutual inductance by the
relations repeated in Eq. 5.1.19 for convenience:


   L = L_1 + L_2 + 2 L_M
   L_1 = L_2 = L/(2(1 + k))                                           (5.1.19)
   L_M = k √(L_1 L_2)

Thus, if k = 0 we have:

   L_1d = L_2d = L_d/2 = 0.4206/2 = 0.2103 µH                         (5.1.20)

and:

   L_e = −k_d √((L_d/2)·(L_d/2)) = −k_d (L_d/2) = 0.1183·(0.4206/2) = 0.025 µH    (5.1.21)

If we were to account for the Q3 base resistance (discussed in Part 3, Sec. 3.6)
we would get k_d even more negative and also L_1d ≠ L_2d.
The coupling factor k_b, although positive, also poses a problem: since it is
greater than 0.5 it might be difficult to realize. As can be noted from the above
equations, the value of k depends only on the pole’s angle θ. In fact, the 2nd-order
Bessel system has pole angles of ±150°, resulting in k = 0.5, representing the
limiting case of realizability with conventionally wound coils. Special shapes, coil
overlapping, or other exotic techniques may solve the coupling problem, but, more
often than not, they will also impair the bridge capacitance. The other limiting case,
when k = 0, is reached at the ratio ℑ{s}/ℜ{s} = √3, a situation which occurs when
the pole’s angle θ = 120°.
In accordance with the previous equations we also calculate the value of the two
halves of L_b:

   L_1b = L_2b = L_b/(2(1 + k_b)) = 1.4256/(2·(1 + 0.5584)) = 0.4574 µH    (5.1.22)


Fig. 5.1.4: The 4-pole output L+T-coil stage and its pole assignment.

The last components to be calculated are the bridge capacitances, C_bb and C_bd.
The relation between the T-coil loading capacitance and the bridge capacitance has
been given already in Part 2, Eq. 2.4.31, from which we obtain the following
expressions for C_bb and C_bd:


   C_bb = C_b (1 + tan²θ_b)/16 = 11·(1 + 0.1336)/16 = 0.7793 pF       (5.1.23)

and:

   C_bd = C_d (1 + tan²θ_d)/16 = 5.8349·(1 + 4.0738)/16 = 1.8503 pF   (5.1.24)
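The two bridge capacitances can be checked with the same stand-alone approach (our own sketch, with tan²θ taken directly from the normalized pole pairs):

```python
C_b, C_d = 11e-12, 5.8349e-12            # T-coil loading capacitances
tan2_b = (1.7393 / 4.7583)**2            # tan^2 of the pole angle offsets
tan2_d = (5.4207 / 2.6857)**2

C_bb = C_b * (1 + tan2_b) / 16           # Eq. 5.1.23 -> 0.7793 pF
C_bd = C_d * (1 + tan2_d) / 16           # Eq. 5.1.24 -> 1.8503 pF
```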

This completes the calculation of the amplifier components necessary for the
inductive peaking compensation, thus achieving the Bessel–Thomson system
response. We would now like to verify the design by recalculating the actual pole
values. To do this we return to the relations from which we started, Eq. 5.1.3 to
Eq. 5.1.7, and for the imaginary parts we use the relations in Part 2, Fig. 2.5.3. In order
not to confuse the actual pole values with the normalized values from which we
started, we add an index ‘A’ to the actual poles:
started, we add an index ‘A’ to the actual poles:

" "
5aA œ  œ  œ  *$"Þ" † "!' radÎs (5.1.25)
Va G a #')Þ& † % † "!"#
% cos# )b % † !Þ))#"
5bA œ  œ  œ  )*"Þ! † "!' radÎs
Vb Gb $'! † "" † "!"#
% cos )b sin )b % † !Þ*$*# † !Þ$%$$
=bA œ „ œ „ œ „ $#&Þ( † "!' radÎs
Vb Gb $'! † "" † "!"#
# cos# )c # † !Þ&(#&
5cA œ  œ  œ  ('#Þ# † "!' radÎs
Vb Gc $'! † %Þ"($! † "!"#
# cos )c sin )c # † !Þ(&'' † !Þ'&$)
=cA œ „ œ „ œ „ '&)Þ& † "!' radÎs
Vb Gc $'! † %Þ"($! † "!"#
% cos# )d % † !Þ"*"(
5dA œ  œ  œ  %)*Þ& † "!' radÎs
Va Gd #')Þ& † &Þ)$%* † "!"#
% cos )d sin )d % † !Þ%%$* † !Þ)*'"
=dA œ „ œ „ œ „ "!"&Þ' † "!' radÎs
Va Gd #')Þ& † &Þ)$%* † "!"#
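The claim of Eq. 5.1.26 below, that one common factor scales every pole, can be verified directly. In this Python sketch (ours, with the rounded component values, so the factors agree only to about 0.1 %) each section's actual pole equals the normalized pole times K/(R C |D|), with D the characteristic circle diameter:

```python
R_a, R_b = 268.5, 360.0
C_a, C_b, C_c, C_d = 4e-12, 11e-12, 4.1730e-12, 5.8349e-12

sections = {   # normalized pole, peaking factor K, R, C of each section
    'a': (-4.9718 + 0j,      1, R_a, C_a),
    'b': (-4.7583 + 1.7393j, 4, R_b, C_b),
    'c': (-4.0701 + 3.5172j, 2, R_b, C_c),
    'd': (-2.6857 + 5.4207j, 4, R_a, C_d),
}

scale = {}
for name, (s, K, R, C) in sections.items():
    D = abs(s)**2 / (-s.real)       # characteristic circle diameter
    scale[name] = K / (R * C * D)   # actual pole = normalized pole * scale

sigma_aA = sections['a'][0].real * scale['a']    # about -931e6 rad/s
```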

If we divide an actual (amplifier) pole by the corresponding normalized pole, we get:

   σ_aA/σ_a = (−931.1·10⁶)/(−4.9718) = 187.3·10⁶                      (5.1.26)

and this factor is equal for all other pole components. Unfortunately, from this we
cannot calculate the upper half power frequency of the amplifier. The only way to do
that (for a Bessel system) is to calculate the response for a range of frequencies around
the cut off and then iterate it using the bisection method, until a satisfactory tolerance
has been achieved.
Instead of doing it for only a small range of frequencies we shall, rather, do it
for a three decade range and compare the resulting response with the one we would
get from a non-compensated amplifier (in which all the inductances are zero). Since to
this point we were not interested in the actual value of the voltage gain, we shall make
the comparison using amplitude normalized responses.


The non-compensated amplifier has two real poles, which are:

   s_1N = −1/(R_a (C_a + C_d))   and   s_2N = −1/(R_b (C_b + C_c))    (5.1.27)

Consequently, its complex frequency response would then be:

   F_N(s) = s_1N s_2N / ((s − s_1N)(s − s_2N))                        (5.1.28)

with the magnitude:

   |F_N(ω)| = s_1N s_2N / √((ω² + s_1N²)(ω² + s_2N²))                 (5.1.29)

and the step response:

   g(t) = ℒ⁻¹{ s_1N s_2N / (s (s − s_1N)(s − s_2N)) }
        = 1 + (s_2N/(s_1N − s_2N)) e^(s_1N t) − (s_1N/(s_1N − s_2N)) e^(s_2N t)    (5.1.30)

The rise time is:

   τ_r = 2.2 √(1/s_1N² + 1/s_2N²)                                     (5.1.31)

and the half power frequency:

   f_h ≈ 1/(2π √(1/s_1N² + 1/s_2N²))                                  (5.1.32)
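These relations are easy to evaluate numerically. The following Python sketch (our own, using the component values derived earlier) finds the half power frequency of the non-compensated amplifier by bisection and estimates the 10 %–90 % rise time by direct sampling of Eq. 5.1.30:

```python
import math

R_a, R_b = 268.5, 360.0
C_a, C_b = 4e-12, 11e-12
C_c, C_d = 4.1730e-12, 5.8349e-12

s1 = -1.0 / (R_a * (C_a + C_d))     # Eq. 5.1.27
s2 = -1.0 / (R_b * (C_b + C_c))

def mag(w):                          # |F_N(jw)| of Eq. 5.1.29; mag(0) = 1
    return s1 * s2 / math.sqrt((w*w + s1*s1) * (w*w + s2*s2))

lo, hi = 0.0, 1e10                   # bracket for the -3 dB frequency
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if mag(mid) > 1.0 / math.sqrt(2.0):
        lo = mid
    else:
        hi = mid
f_h = lo / (2.0 * math.pi)           # ~24.6 MHz ("less than 25 MHz")

def step(t):                         # Eq. 5.1.30
    return 1.0 + (s2 * math.exp(s1*t) - s1 * math.exp(s2*t)) / (s1 - s2)

t10 = 0.0
t = 0.0
while step(t) < 0.9:                 # monotonic response, no overshoot
    if step(t) < 0.1:
        t10 = t
    t += 1e-12
rise = t - t10                       # 10%-90% rise time, ~14 ns
```

Note that the quadrature formula of Eq. 5.1.31 slightly underestimates the sampled 10 %–90 % rise time.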

In contrast, the complex frequency response of the 7-pole amplifier is:

   F_A(s) = A_0 (−s_aA s_1bA s_2bA s_1cA s_2cA s_1dA s_2dA) /
            ((s − s_aA)(s − s_1bA)(s − s_2bA)(s − s_1cA)(s − s_2cA)(s − s_1dA)(s − s_2dA))    (5.1.33)

and the step response is the inverse Laplace transform of the product of F_A(s) with
the unit step operator 1/s:

   g(t) = ℒ⁻¹{ (1/s) F_A(s) } = Σ res[ (1/s) F_A(s) e^(st) ]          (5.1.34)
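The residue summation of Eq. 5.1.34 can be carried out numerically without much effort. In this Python sketch (ours; the book's own routines are developed in Part 6) the actual poles are taken as the normalized Bessel poles multiplied by the common scale factor of Eq. 5.1.26, and the gain is normalized to 1:

```python
import cmath

scale = 187.3e6                     # common pole scale factor (Eq. 5.1.26)
norm = [-4.9718 + 0j,
        -4.7583 + 1.7393j, -4.7583 - 1.7393j,
        -4.0701 + 3.5172j, -4.0701 - 3.5172j,
        -2.6857 + 5.4207j, -2.6857 - 5.4207j]
poles = [s * scale for s in norm]

num = 1.0 + 0j
for p in poles:
    num *= -p
num = num.real                      # product of all (-p_k): real, positive

def g(t):
    tot = 1.0                       # residue of the unit step pole at s = 0
    for k, p in enumerate(poles):
        den = p
        for j, q in enumerate(poles):
            if j != k:
                den *= p - q
        tot += (num * cmath.exp(p * t) / den).real
    return tot

ts = [i * 1e-11 for i in range(3000)]        # 0 ... 30 ns, 10 ps steps
gs = [g(t) for t in ts]
peak = max(gs)                               # small Bessel overshoot
t10 = next(t for t, v in zip(ts, gs) if v >= 0.1)
t90 = next(t for t, v in zip(ts, gs) if v >= 0.9)
rise = t90 - t10                             # ~3.8 ns
```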

We shall not attempt to solve either of these functions analytically, since it
would take too much space and, anyway, we have already solved them separately for
the two parts (3rd- and 4th-order) in Part 2. Because the two sections are separated by
an amplifier (Q3, Q4), the frequency response is a simple multiplication of the two
responses. For the step response we now have 8 residues to sum (7 of the system
poles, in addition to the one from the unit step operator). Although lengthy, it is a


relatively simple operation and we leave it as an exercise to the reader. Instead we are
going to use the computer routines, the development of which can be found in Part 6.
In Fig. 5.1.5 we have made a polar plot of the poles for the inductively
compensated 7-pole system and the non-compensated 2-pole system. As we have
learned in Part 1 and Part 2, the farther a pole is from the origin, the smaller is its
influence on the system response. It is therefore obvious that the 2-pole system’s
response will be dominated by the pole closer to the origin, which is the pole of the
output stage, s_2N. The bandwidth of the 7-pole system is, obviously, much larger.


Fig. 5.1.5: The polar plot of the 7-pole compensated system (poles
with index ‘A’) and the 2-pole non-compensated system (index ‘N’).
The radial scale is in units of 10⁹ rad/s; the angle is in degrees.

The pole layout gives us a convenient indication of the system’s performance,


but it is the magnitude vs. frequency response that reveals it clearly. As can be seen in
Fig. 5.1.6, the non-compensated system has a bandwidth of less than 25 MHz. The
compensated amplifier bandwidth is close to 88 MHz, more than 3.5 times larger.
The comparison of step responses in Fig. 5.1.7 reveals the difference in
performance even more dramatically. The rise time of the non-compensated system is
about 14 ns, whilst for the compensated system it is only 3.8 ns, also a factor of 3.5
times better; in addition, the overshoot is only 0.48 %.
Both comparisons show an impressive improvement in performance. But is it
the best that could be obtained from this circuit configuration? After all, in Part 2 we
have seen a similar improvement from just the 4-pole L+T-coil section and we expect
that the addition of the 3-pole section should yield a slightly better result at least.
One obvious way of extending the bandwidth would be to lower the value of
R_b, increase the bias currents, and scale the remaining components accordingly. Then
we should increase the input signal amplitude to get the same output. But this is the
‘trivial’ solution (mathematically, at least; not so when building an actual circuit).


Fig. 5.1.6: The gain normalized magnitude vs. frequency responses of the 7-pole
compensated system, |F_A(f)|, and the 2-pole non-compensated system, |F_N(f)|.
The bandwidth of F_N is about 25 MHz and the bandwidth of F_A is about 88 MHz,
more than 3.5 times larger.

Fig. 5.1.7: The gain normalized step responses of the 7-pole compensated system,
g_A(t), and the 2-pole non-compensated system, g_N(t). The rise time is 14 ns for
g_N(t), but only 3.8 ns for g_A(t). The overshoot of g_A(t) is only 0.48 %.


By a careful inspection of the amplifier design equations, and by comparing them
with the analysis of the two sections in Part 2, we come to the conclusion that the most
serious bandwidth drawback is the high value of the CRT capacitance, which is
much higher than C_a or C_d. But if so, did we limit the possible improvement by
assigning the poles with the lowest imaginary part to the output? Should we not obtain
a better performance if we add more peaking to the output stage?
Since we have put the design equations and the response analysis into a
computer routine, we can now investigate the effect of different pole assignments. To
do so we simply re-order the poles and run the routine again. Besides the pole order
that we have described, let us indicate it by the pole order: abcd (Eq. 5.1.3), we have
five additional permutations: abdc, acbd, adbc, acdb, adcb. The last two
permutations result in a rather slow system, requiring a large inductance for Pb and
large capacitances Gc and Gd . But the remaining ones deserve a look.
In Fig. 5.1.8 we have plotted the four normalized step responses and, because
there are two identical pairs of responses, we have displaced them vertically by a
small offset in order to distinguish them more clearly.

Fig. 5.1.8: The normalized step responses of the four possible combinations of pole
assignments. There are two pairs of responses, here spaced vertically by a small
offset to allow easier identification. One of the two faster responses (labeled ‘abcd’)
is the one for which the detailed analysis has been given in the text.

If the pole pairs sc and sd are mutually exchanged the result is the same as in our
original analysis. But exchanging sb with either sc or sd gives a sub-optimal result.
A closer look at Table 5.1.1 reveals that both of the two slower responses have
Ra = 354 Ω instead of 268 Ω. The higher value of Ra actually means a higher gain, as
can be seen in Fig. 5.1.9, where the original system was set for a gain of A0 ≈ 10, in
contrast with the higher value, A0 ≈ 13. The higher gain results from a different
‘tuning’ of the 3-pole T-coil stage, in accordance with the different pole assignment.


[Figure: step responses with the actual gain vs. t [ns], 0–20 ns; curves abcd, abdc, acbd, adbc]
Fig. 5.1.9: When plotted with the actual gain, the slower responses of Fig. 5.1.8
turn out to be those with a higher value of Ra and therefore a higher gain.

Since our primary design goal is to maximize the bandwidth at a given gain,
let us recalculate the slower system for a lower value of Rb. If Rb = 316 Ω (from the
E96 series of standard values, 0.5 % tolerance), the gain is restored. Fig. 5.1.10 shows
the recalculated responses, labeled ‘acbd’ and ‘adbc’, compared to the responses
obtained with the ‘abcd’ and ‘abdc’ pole assignments.

[Figure: step responses vs. t [ns], 0–20 ns; abcd and abdc with Rb = 360 Ω; acbd and adbc recalculated with Rb = 316 Ω]
Fig. 5.1.10: If the high gain responses are recalculated by reducing Rb from the
original 360 Ω to 316 Ω, the gain is nearly equal in all four cases. However, those
pole assignments which put the poles with the higher imaginary part at the output
stage still result in a slightly slower system.


The difference in rise time between the two pairs is much smaller now;
however, the recalculated pair is still slightly slower. This shows that our initial
assumption of how to achieve maximum bandwidth (within a given configuration)
was not a mere lucky guess.
In Table 5.1.1 we have collected all the design parameters for four of the
six possible pole assignments. The systems in the last two columns have the same
pole assignments as the middle two, but have been recalculated from a lower Rb
value, in order to obtain a total voltage gain nearly equal to that of the first system. From a
practical point of view the first and the last column are the most interesting: the
system represented by the first column is the fastest (as is the second one, but the latter
is difficult to realize, mainly owing to the low Cc value), whilst the last one is only
slightly slower but much easier to realize, mainly owing to a lower magnetic coupling
kb and the non-problematic values of Cc and Cd.

Table 5.1.1

  Rb [Ω]         360      360      360      360      316      316
  pole order:    abcd     abdc     acbd     adbc     acbd     adbc
  A0             9.667    9.667    12.74    12.74    9.817    9.817
  Ra [Ω]         268.5    286.5    353.9    353.9    310.7    310.7
  Cc [pF]        4.173    2.177    2.870    7.249    2.870    7.249
  Cd [pF]        5.838    11.19    14.75    5.838    14.75    5.838
  Cbb [pF]       0.779    0.779    1.201    1.201    1.201    1.201
  Cbd [pF]       1.851    1.222    1.045    1.851    1.045    1.851
  Lb [µH]        1.426    1.426    1.426    1.426    1.098    1.098
  Lc [µH]        0.153    0.080    0.162    0.410    0.125    0.316
  Ld [µH]        0.421    0.807    1.847    0.731    1.423    0.563
  kb             0.558    0.558    0.392    0.392    0.392    0.392
  kd            −0.118    0.392    0.558   −0.118    0.558   −0.118
  ηb             3.57     3.57     2.35     2.35     2.84     2.84
  ηr             3.55     3.55     2.33     2.33     2.81     2.81
Table 5.1.1: Circuit components for 4 of the 6 possible pole assignments. The last two
columns represent the same pole assignments as the middle two, but have been recalculated
for Rb = 316 Ω and nearly equal gain. The first column is the example calculated in the text
and its response is one of the two fastest. The other fast system (second column) is probably
not realizable (in discrete form), because Cc ≈ 2 pF. The last column (adbc) is, on the other
hand, only slightly slower, but probably much easier to realize (lower T-coil coupling and
convenient capacitance values). The bandwidth and rise time improvement factors ηb and ηr
were calculated by taking the non-compensated amplifier responses as the reference.

The main problem encountered in the realization of our original ‘abcd’ system
is the relatively high magnetic coupling factor of the output T-coil, kb. A possible way
of improving this would be to apply a certain amount of emitter peaking to either
the Q1 or Q3 emitter circuit. We would then have a 9-pole system and would have
to recalculate everything. However, the use of emitter peaking results in a negative
input impedance which has to be compensated (see Part 3, Sec. 3.5), and the
compensating network adds more stray capacitance.


A 9-pole system might be more easily implemented if, instead of the 3-pole
section, we were to use another L+T-coil 4-pole network. The real pole could then be
provided by the signal source resistance and the Q1 input capacitance, which we have
chosen to neglect so far. With 9 poles both T-coils can be made to accommodate the
two pole pairs with moderate imaginary part values (because the T-coil coupling
factor depends only on the pole angle θ), so that the system bandwidth could be more
easily maximized. A problem could arise from the low value of some capacitances,
which might become difficult to achieve. But, as is evident from Table 5.1.1, there are
many possible variations (their number increases as the factorial of the number of
poles), so a clever compromise can always be made. Of course, with a known signal
source additional inductive peaking could be applied at the input, resulting in a
total of 11 or perhaps even 13 poles, but then the component tolerances and the
adjustment precision would set the limits of realizability.
Finally, we would like to verify the initial claim that the input real pole s1,
due to the signal source resistance, the base spreading resistance, and the total input
capacitance, can be neglected if it is larger than the system real pole sa. Since the
input pole is separated from the rest of the system by the first cascode stage, it can be
accounted for by simply multiplying the system transfer function by it. In the
frequency response its influence is barely noticeable. In the step response, Fig. 5.1.11,
it affects mostly the envelope delay and the overshoot, while the rise time (in
accordance with the frequency response) remains nearly the same.

[Figure: step responses vs. t [ns], 0–20 ns, for m = 10, 2, 1.1, where s1 = m·sa, s1 = −1/((Rs + rb1)·Cin), sa = −1/(Ra·Ca)]
Fig. 5.1.11: If the real input pole s1 is at least twice as large as the system’s real pole
sa, its influence on the step response can be seen merely as an increased envelope
delay and a reduced overshoot, while the rise time remains nearly identical.

Usually the signal source’s impedance is 50 Ω or less; an input capacitance of
several pF would still ensure that the input pole is high above the system real pole.
However, an oscilloscope needs an input buffer stage and a preamplifier, with variable
gain and attenuation, to adapt the signal amplitude to the required level. This
preamplifier’s output impedance, driving the input capacitance of our amplifier, can
be high enough to force us to account for it. In such cases, as already stated before, it
might become feasible to replace the 3-pole peaking network by another
4-pole L+T-coil network and make the input pole the main system real pole.
As already mentioned, the relatively high capacitance of the CRT vertical
deflection plates is the dominant cause of the amplifier bandwidth limitation.
To avoid this problem the most advanced CRTs from the analog ’scope era
had their deflection plates made in a number of sections, connected externally by
a series of T-coils (see Fig. 5.1.12), thus reducing the capacitance seen by the
amplifier to just a fraction of the original value. At the same time, the T-coils
provided the delay required to match the signal propagation to the electron velocity in
the writing beam (compensating for the electron’s finite travel time along the deflection
plates, as well as some non-negligible relativistic effects! — see Appendix 5.1), thus
aiding better beam control.

[Figure: CRT vertical deflection plates split into sections, interconnected by a chain of T-coils and terminated by Rb to Vcc on both sides; labels: Va, Vg, ka, ld]

Fig. 5.1.12: If the CRT deflection plates are made in a number of sections (usually between 4
and 8), connected by a series of T-coil peaking circuits, the amplifier is effectively
loaded by a much smaller capacitance, allowing the system cutoff frequency to be several times
higher. The T-coils also provide the time delay necessary to keep the deflecting voltage (as seen
by the electrons in the writing beam) almost constant throughout the electron’s travel time across
the deflecting field. For simplicity, only the vertical deflection system is shown, but a similar
circuit could be used for the horizontal deflection, too (such an example can be found in the
1 GHz Tektronix 7104 model; see Appendix 5.1 for further details). Note that, owing to the
increasing distance between the plates, their length should also vary accordingly, in order to
compensate for the reduced capacitance. Fortunately, the capacitance is also a function of the
plates’ width, not just their length and distance, so a well balanced compromise can always be found.


5.2 High Input Impedance Selectable Attenuator with a JFET Source Follower

A typical oscilloscope vertical input must incorporate a number of passive
signal conditioning functions:

1) a selectable 50 Ω / 1 MΩ input resistance, with a low reflection coefficient on the
50 Ω setting;
2) a selectable DC–GND–AC coupling;
3) a selectable 1:1/10:1/100:1 or similar attenuation with a 1 MΩ ∥ 10 pF input
impedance (independent of the selected attenuation; resistance tolerance 0.1 %,
capacitance between 10–20 pF, since the external probes are adjustable, but its
dielectric properties must be constant with frequency and temperature);
4) a 2 kV electrostatic discharge spark gap, able to protect the input from a
human body model discharge (200 pF, 15 kV);
5) a 400 V (DC + peak AC) continuous protection of the delicate input amplifier
at the no-attenuation setting (except for the 50 Ω setting).

To this we must add the following requirements:

6) the upper cut off frequency at least twice the system’s bandwidth;
7) the upper cut off frequency should be independent of any of the above settings;
8) the gain flatness must be kept within 0.5 % from DC to 1/5 of the bandwidth;
9) the protection diodes must survive repeated 1–2 A surge currents with < 1 ns
rise and 50 µs decay; their leakage must be < 100 pA and capacitance < 1 pF.

To preserve a high degree of signal integrity the stray capacitances and
inductances must be kept low throughout the input stage, which means small
components with small soldering pads and traces as short as possible.
In addition, the unity gain JFET buffer stage performance should include:

10) a > 100 MΩ input resistance;
11) a < 1 pF input capacitance;
12) a 50 Ω output resistance, or the ability to drive such loads;
13) a bandwidth at least twice that of the rest of the system;
14) < 5 nV/√Hz input noise density (see [Ref. 5.66] for low noise design);
15) < 0.5 mVpp wideband noise (at 5 mV/div.; important for digitizing ’scopes);
16) gain close to unity, flat within 0.5 %, up to 1/5 of the system bandwidth;
17) low overshoot and undershoot; any resonance up to 10× the system’s
bandwidth should be well damped to reduce ringing;
18) fast step settling, to within 0.1 % of the final signal value;
19) recovery from input overdrive as short as possible;
20) ability to handle signals in a wide range, from 8 mVpp (1 mV/div) up to 40 Vpp
(5 V/div, with input attenuation), with a DC offset of at least ±1/2 screen.

This is an impressive list, indeed. Especially if we consider that for a 500 MHz
system bandwidth the above requirements should be fulfilled up to a 1 GHz bandwidth.


A typical input stage block diagram is shown in Fig. 5.2.1. The attenuator and
the unity gain buffer stage will be analyzed in the following sections.

[Figure: input section block diagram: BNC input with spark gap and optional 50 Ω termination Rt; DC–GND–AC coupling (CAC 33 nF, 1 MΩ); 1:1/10:1/100:1 attenuator (1 MΩ, 15 pF); overdrive protection (Rp 150 kΩ, Cp 1.5 nF, Rd 150 Ω, diode clamps to Vcc and Vee); unity gain buffer Av = +1 with Ro = 50 Ω]
Fig. 5.2.1: A typical conventional oscilloscope input section. All the switches must be high
voltage types, controlled either mechanically or as electromagnetic relays (but other
solutions are also possible, as in [Ref. 5.2]). The spark gap protects against electrostatic
discharge. The Rt 50 Ω resistor is the optional transmission line termination. The 1 MΩ
resistor in the DC–GND–AC selector charges the AC coupling capacitor in the GND
position, reducing the overdrive shock through CAC in the presence of a large DC signal
component. The attenuator is analyzed in detail in Sec. 5.2.1–3. The overdrive protection
limits the input current in case of an accidental connection to 240 Vac with the attenuator
set to the highest sensitivity. The unity gain buffer/impedance transformer is a > 100 MΩ
input, 50 Ω output JFET or MOSFET source follower, analyzed in Sec. 5.2.4 and 5.2.5.

5.2.1 Attenuator High Frequency Compensation

A simple resistive attenuator, like the one in Fig. 5.2.2a, is too sensitive to any
capacitive loading by the following amplifier stage. For oscilloscopes the standard
input resistance Ra = R1 + R2 is 1 MΩ, so for a 10:1 attenuation the resistance
values must be R1 = 900 kΩ and R2 = 100 kΩ. With such values the output
impedance, which equals the parallel connection of both resistances, would be about
90 kΩ. Assuming an amplifier input capacitance of only 1 pF, the resulting system
bandwidth would be only 1.77 MHz [fh = (R1 + R2)/(2π Ci R1 R2)].
Therefore high frequency compensation, as shown in Fig. 5.2.2b, is necessary
if we want to obtain a higher bandwidth. The frequency compensation, however, lowers
the input impedance at high frequencies.

[Figure: a) resistive divider 9R over R, loaded by Ci; b) compensated divider 9R∥C over R∥9C; c) adjustable divider R1 = 900 k∥C1 = 10 pF over R2 = 100 k∥(C2a = 82 pF + C2b = 10–30 pF trimmer)]
Fig. 5.2.2: The 10:1 attenuator; a) resistive: with R = 100 kΩ, a following stage input
capacitance of just 1 pF would limit the bandwidth to only 1.77 MHz; b) compensated: the
capacitive divider takes over at high frequencies, but a 1 pF input capacitance of the following
stage would spoil the division by 1 %; c) adjustable: in practice, the capacitive
divider is trimmed for a perfect step response.
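The 1.77 MHz figure follows directly from the output resistance seen by the input capacitance of the next stage; a quick numeric check (a sketch, using the values quoted in the text):

```python
import math

R1, R2 = 900e3, 100e3   # 10:1 divider, Ra = R1 + R2 = 1 MOhm
Ci = 1e-12              # assumed input capacitance of the following stage

# fh = (R1 + R2)/(2*pi*Ci*R1*R2), i.e. the pole of (R1 || R2) with Ci
fh = (R1 + R2) / (2 * math.pi * Ci * R1 * R2)
print(round(fh / 1e6, 2), "MHz")   # about 1.77 MHz
```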


In general, at DC the signal source is loaded by the total attenuation resistance
Ra; for an attenuation factor A, the values of the resistors R1 and R2 must satisfy the
following equations:

   R1 + R2 = Ra                                                (5.2.1)

The current through the resistive path is:

   ii = vi/(R1 + R2) = vo/R2                                   (5.2.2)

so, from the last two expressions, the attenuation is:

   vo/vi = 1/A = R2/(R1 + R2)                                  (5.2.3)

and the required resistance relation is:

   R1 = (A − 1)·R2                                             (5.2.4)

Thus, for Ra = 1 MΩ and A = 10:

   R1 = 900 kΩ  and  R2 = 100 kΩ                               (5.2.5)

The high frequency compensation consists of a capacitive divider having the
same attenuation factor as the high impedance resistive divider in parallel with it, as in
Fig. 5.2.2b. In order to achieve a precise attenuation, resistors with 0.1 % tolerance are
used, giving a maximum error of 0.2 %. However, capacitors with a comparably tight
tolerance are not readily available, and, even if they were, the layout strays would
dominate. So in practice the capacitive divider is made adjustable, Fig. 5.2.2c.
Trimming of C1 should be avoided, in order to reduce the circuit size (and thus stray
inductances and capacitances); a much better choice is to trim only some 20–30 % of
C2. Care should be taken to connect the variable plate to ground, otherwise the
metal tip of the adjusting screwdriver would modify the capacitance by contact alone.
For a well trimmed attenuator, the capacitive reactance ratio at high
frequencies must match the resistor ratio at DC and low frequencies:

   R1/R2 = XC1/XC2 = (1/jωC1)/(1/jωC2) = C2/C1                 (5.2.6)

which also implies that the two RC time constants must be equal:

   R1·C1 = R2·C2 = τa                                          (5.2.7)

The input impedance now becomes:

   Za = Z1 + Z2 = 1/(1/R1 + jωC1) + 1/(1/R2 + jωC2)

      = R1/(1 + jωC1R1) + R2/(1 + jωC2R2) = (R1 + R2)·1/(1 + jωτa)        (5.2.8)


In the latter expression we have taken Eq. 5.2.7 into account. This is the same as if we
had a single parallel RaCa network:

   Za = Ra·1/(1 + jωCaRa)                                      (5.2.9)

where:

   Ra = R1 + R2   and   Ca = 1/(1/C1 + 1/C2)                   (5.2.10)

By substituting C2 = (A − 1)·C1 the input capacitance Ca relates to C1 as:

   Ca = 1/[1/C1 + 1/((A − 1)·C1)] = [(A − 1)/A]·C1             (5.2.11)

The transfer function can then be calculated from the attenuation:

   F(jω) = 1/A = vout/vin = Z2/(Z1 + Z2)
         = 1/[1 + (R1/R2)·(1 + jωC2R2)/(1 + jωC1R1)]           (5.2.12)

Obviously, the frequency dependence will vanish if the condition of Eq. 5.2.7
is met. However, the transfer function will be independent of frequency only if the
signal’s source impedance is zero (we are going to see the effects of the signal source
impedance a little later).
The transfer function of an unadjusted attenuator (R1C1 ≠ R2C2) has a simple
pole and a simple zero, as can be deduced from Eq. 5.2.12. If we rewrite the
impedances as:

   Z1 = 1/(1/R1 + sC1) = R1·[1/(R1C1)]/[s + 1/(R1C1)] = R1·(−s1)/(s − s1)     (5.2.13)

and

   Z2 = 1/(1/R2 + sC2) = R2·[1/(R2C2)]/[s + 1/(R2C2)] = R2·(−s2)/(s − s2)     (5.2.14)

where s1 and s2 represent the poles of each impedance arm, explicitly:

   s1 = −1/(R1C1)   and   s2 = −1/(R2C2)                       (5.2.15)

the transfer function is then:

   vout/vin = Z2/(Z1 + Z2)
            = [R2·(−s2)/(s − s2)] / [R1·(−s1)/(s − s1) + R2·(−s2)/(s − s2)]   (5.2.16)


By solving the double divisions, the transfer function can be rewritten as:

   vout/vin = −s2R2·(s − s1) / [−s1R1·(s − s2) − s2R2·(s − s1)]              (5.2.17)

We can replace the products siRi by −1/Ci:

   vout/vin = (1/C2)·(s − s1) / [(1/C1)·(s − s2) + (1/C2)·(s − s1)]
            = C1·(s − s1) / [C2·(s − s2) + C1·(s − s1)]
            = C1·(s − s1) / [s·(C1 + C2) − s2C2 − s1C1]
            = [C1/(C1 + C2)] · (s − s1)/[s − (s2C2 + s1C1)/(C1 + C2)]        (5.2.18)

This can be simplified by defining a few useful substitutions: the capacitive divider
attenuation:

   AC = C1/(C1 + C2)                                           (5.2.19)

the system zero:

   sz = s1 = −1/(R1C1)                                         (5.2.20)

and the system pole:

   sp = (s2C2 + s1C1)/(C1 + C2)                                (5.2.21)

Further, the system pole can be rewritten as:

   sp = (s2C2 + s1C1)/(C1 + C2) = −[C2/(R2C2) + C1/(R1C1)]/(C1 + C2)
      = −(1/R2 + 1/R1)/(C1 + C2)
      = −[(R1 + R2)/(R1R2)] · 1/(C1 + C2)                      (5.2.22)

From the system pole we note that the system time constant equals that of the
parallel connection of all four components.
We also define the resistive attenuation as:

   AR = R2/(R1 + R2)                                           (5.2.23)

and then rewrite the system pole as:

   sp = −[(R1 + R2)/R2] · 1/[R1·(C1 + C2)] = −(1/AR) · 1/[R1·(C1 + C2)]      (5.2.24)

With all these substitutions the complex frequency response is:

   F(s) = vout/vin = AC·(s − sz)/(s − sp)                      (5.2.25)

Again, it is obvious that the frequency dependence vanishes if sp = sz.


From F(s) we derive the magnitude:

   M(ω) = |F(jω)| = √[F(jω)·F(−jω)]
        = AC·√{[(jω − σz)(−jω − σz)] / [(jω − σp)(−jω − σp)]}  (5.2.26)

which results in:

   M(ω) = AC·√[(ω² + σz²)/(ω² + σp²)]                          (5.2.27)

The phase angle is the arctangent of the ratio of the imaginary to the real component of
the frequency response F(jω):

   φ(ω) = arctan[ℑ{F(jω)}/ℜ{F(jω)}]
        = arctan{ℑ[(jω − σz)/(jω − σp)] / ℜ[(jω − σz)/(jω − σp)]}            (5.2.28)

First we must rationalize F(jω) by multiplying both the numerator and the
denominator by the complex conjugate of the denominator, (−jω − σp):

   (jω − σz)/(jω − σp) = (jω − σz)(−jω − σp)/[(jω − σp)(−jω − σp)]
                       = (ω² + jωσz − jωσp + σzσp)/(ω² + σp²)

and then we separate the real and imaginary parts:

   (ω² + σzσp)/(ω² + σp²) + j·ω(σz − σp)/(ω² + σp²)

The phase angle is then:

   φ(ω) = arctan[ω(σz − σp)/(ω² + σzσp)]
        = arctan{[(σz − σp)/ω] / [1 + σzσp/ω²]}                (5.2.29)

By using the identity:

   arctan x − arctan y = arctan[(x − y)/(1 + xy)]

we can write:

   φ(ω) = arctan(σz/ω) − arctan(σp/ω)                          (5.2.30)

With σp = σz the phase angle is zero for any ω.


The envelope delay is the frequency derivative of the phase:

   τd = dφ/dω = d/dω [arctan(σz/ω) − arctan(σp/ω)]

      = [1/(1 + (σz/ω)²)]·(−σz/ω²) − [1/(1 + (σp/ω)²)]·(−σp/ω²)             (5.2.31)

So the result is:

   τd = −σz/(ω² + σz²) + σp/(ω² + σp²)                         (5.2.32)

Again, note that for σp = σz the envelope delay is zero.


We have plotted the magnitude, phase, and envelope delay in Fig. 5.2.3. The
plots are made for the matched and two unmatched cases in order to show the
influence of trimming the attenuator by C2 (±10 pF).
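The frequency response of Eq. 5.2.25 is easy to evaluate numerically; a minimal sketch (Python, with the component values of the figure), returning the complex response from which both magnitude and phase follow:

```python
import cmath, math

def F(f, R1=900e3, C1=10e-12, R2=100e3, C2=90e-12):
    # F(jw) = AC*(jw - sz)/(jw - sp), Eq. 5.2.25
    jw = 2j * math.pi * f
    AC = C1 / (C1 + C2)                        # Eq. 5.2.19
    sz = -1.0 / (R1 * C1)                      # Eq. 5.2.20
    sp = -(R1 + R2) / (R1 * R2 * (C1 + C2))    # Eq. 5.2.24
    return AC * (jw - sz) / (jw - sp)

# matched case (R1*C1 == R2*C2): flat 1/10 magnitude, zero phase
H = F(1e6)
print(abs(H), math.degrees(cmath.phase(H)))
# undercompensated (C2 = 100 pF): the HF magnitude tends to C1/(C1+C2) = 1/11
```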
[Figure: three panels vs. f [Hz], 1 Hz–100 MHz: magnitude [dB] (−19 to −21 dB), phase [°] (±3°), and envelope delay τ [µs] (±1 µs), each for C2 = 80, 90, and 100 pF; circuit: R1 = 900 k∥10 pF over R2 = 100 k∥(90 ± 10) pF]
Fig. 5.2.3: The attenuator magnitude, phase, and envelope delay responses for the correctly
compensated case (flat lines), along with the under- and over-compensated cases (C2 trimmed
by ±10 pF). Note that these same figures apply also to oscilloscope passive probe compensation,
demonstrating the importance of correct compensation when making single channel pulse
measurements and two channel differential measurements.


The step response is obtained from F(s) by the inverse Laplace transform,
using the theory of residues:

   ℒ⁻¹{F(s)/s} = (1/2πj) ∮ AC·[(s − sz)/(s·(s − sp))]·e^(st) ds
               = AC·Σ res{[(s − sz)/(s·(s − sp))]·e^(st)}

We have two residues. One is owed to the unit step operator, 1/s:

   res1 = AC· lim(s→0) {s·[(s − sz)/(s·(s − sp))]·e^(st)} = AC·(−sz)/(−sp)

        = AC·[1/(R1C1)] / {(1/AR)·[1/(R1·(C1 + C2))]}

        = AC·AR·(C1 + C2)/C1 = AC·AR·(1/AC)

        = AR = R2/(R1 + R2)                                    (5.2.33)

As expected, the residue for zero frequency (DC) is set by the resistance ratio.
The other residue is due to the system pole, sp:

   res2 = AC· lim(s→sp) {(s − sp)·[(s − sz)/(s·(s − sp))]·e^(st)}
        = AC·[(sp − sz)/sp]·e^(sp t)

        = AC·[1 − AR·(C1 + C2)/C1]·e^(sp t) = AC·(1 − AR/AC)·e^(sp t)

        = (AC − AR)·e^(sp t)                                   (5.2.34)

The result is a time decaying exponential, with the time constant set by the
system pole, sp, and the amplitude set by the difference between the capacitive and
resistive division ratios.
The step response is the sum of both residues:

   f(t) = Σ res = AR + (AC − AR)·e^(sp t)

        = AR + (AC − AR)·e^(−t·(1/AR)·1/(R1·(C1 + C2)))        (5.2.35)


So the explicit result is:

   f(t) = R2/(R1 + R2) + [C1/(C1 + C2) − R2/(R1 + R2)]·e^(−t·(R1 + R2)/(R1R2·(C1 + C2)))   (5.2.36)

When AC = AR, the coefficient of the exponential term is zero, thus:

   f(t) = R2/(R1 + R2)                                         (5.2.37)

The system’s time constant, as we have already seen in Eq. 5.2.24, is:

   τa = −1/sp = [R1R2/(R1 + R2)]·(C1 + C2)                     (5.2.38)

For a well compensated attenuator, the following is true:

   τa = [R1R2/(R1 + R2)]·(C1 + C2) = R1C1 = R2C2               (5.2.39)

We have plotted the step response in Fig. 5.2.4. The plots are made for the
matched and two unmatched cases in order to show the influence of trimming the
attenuator by C2 (±10 pF), as in the frequency domain plots.

[Figure: step responses f(t) vs. t [µs], 0–80 µs, for a) C2 = 80 pF, b) C2 = 90 pF, c) C2 = 100 pF; insets: the divider circuit (R1 = 900 k∥10 pF, R2 = 100 k∥(90 ± 10) pF) and the s-plane positions of the zero sz and the pole sp]
Fig. 5.2.4: The attenuator’s step response for the correctly compensated case, along with
the under- and over-compensated cases (C2 trimmed by ±10 pF). Note that by changing
C2 the system pole also changes, but the system zero remains the same.

Now we are going to analyze the influence of a non-zero source impedance
on the transfer function. Since we can re-use some of the previous results we shall not need to
recalculate everything.
The capacitive divider presents a relatively high output capacitance to the
following amplifier, and the amplifier input capacitance appears in parallel with C2,
changing the division slightly; but that is compensated by trimming.
However, the attenuator input capacitance Ca (Eq. 5.2.10) is smaller than C1
(for an attenuation of A = 10, the input capacitance is Ca = (9/10)·C1). The actual values
of C1 and C2 are dictated mainly by the need to provide a standard value for various
probes (compensated attenuators themselves, too). Historically, values between 10
and 20 pF have been used for Ca. Although small, this load is still significant if the
signal source internal impedance is considered.
High frequency signal sources are designed to have a standardized impedance
of Rg = 50 Ω (75 Ω for video systems). The cable connecting any two instruments
must then have a characteristic impedance of Z0 = 50 Ω, and it must always be
terminated at its end by an equal impedance in order to prevent signal reflections. As
shown in Fig. 5.2.5a and 5.2.5b, the internal source resistance Rg and the termination
resistance Rt form a ÷2 attenuator (neglecting the 1 MΩ of the 10:1 attenuator):

   vo/vg = Rt/(Rg + Rt) = 50 Ω/(50 Ω + 50 Ω) = 1/2             (5.2.40)

Therefore the effective signal source impedance seen by the attenuator is:

   Rge = Rg·Rt/(Rg + Rt) = 2500/100 = 25 Ω                     (5.2.41)

With a 9 pF equivalent attenuator input capacitance (Ca = 0.9·C1) a pole at
sh = −1/(Rge·Ca) is formed, resulting in an fh = 1/(2π·Rge·Ca) = 707 MHz cut off.
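A quick check of Eq. 5.2.41 and the resulting cut off (a sketch, with Ca = 0.9·C1 = 9 pF as in the text):

```python
import math

Rg = Rt = 50.0                    # source and termination resistances
Rge = Rg * Rt / (Rg + Rt)         # effective source resistance, Eq. 5.2.41
Ca = 9e-12                        # equivalent attenuator input capacitance
fh = 1 / (2 * math.pi * Rge * Ca)
print(Rge, round(fh / 1e6))       # 25.0 Ohm, ~707 MHz
```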

[Figure: a) generator Rg = 50 Ω driving a Z0 = 50 Ω line terminated by Rt = 50 Ω into the 9R∥C over R∥9C attenuator; b) equivalent circuit: vg/2 behind Rge = 25 Ω; c) as b), with the compensating resistor Rc = Rge/9 = 2.78 Ω from the attenuator’s lower end to ground]
Fig. 5.2.5: a) When working with a 50 Ω impedance the terminating resistance must match
the generator internal resistance, forming a ÷2 attenuator with an effective output
impedance of 25 Ω; b) with a 9 pF attenuator input capacitance, a HF cut off at 707 MHz
results; c) the cut off of the ÷10 attenuator can be compensated by a 25/9 Ω resistor
between the lower end of the attenuator and ground.

Owing to the high resistances involved, we can neglect the attenuator’s
resistive arms and consider only the equivalent signal source impedance and the
capacitive divider (assuming the attenuator is correctly compensated). If F0(s) is the
attenuator transfer function for a zero signal source impedance (Eq. 5.2.25), the
transfer function for the source impedance Rge = 25 Ω is:

   F1(s) = (1/2) · [1/(Rge·Ca)]/[s + 1/(Rge·Ca)] · F0(s)       (5.2.42)

The resulting pole sh can be compensated by inserting a resistor Rc of
25/9 = 2.78 Ω between the lower attenuator end and ground, as in Fig. 5.2.5c;
with the equivalent input resistance of Rge = 25 Ω, Rc provides the 10:1 division at
the highest frequencies, as required. The effective output resistance at these frequencies is
only 2.5 Ω, so the pole formed by it and the unity gain buffer total input capacitance
of, say, 2 pF would be well beyond the bandwidth of interest (~32 GHz). However, the
bandwidth of the passive part of the input circuitry is impaired by other effects, as we
shall soon see.
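The quoted margin is easy to verify; a sketch (the 2 pF buffer input capacitance is the value assumed in the text):

```python
import math

Ro = 2.5        # effective output resistance of the compensated divider [Ohm]
Cb = 2e-12      # assumed total buffer input capacitance
f = 1 / (2 * math.pi * Ro * Cb)
print(round(f / 1e9, 1), "GHz")   # ~31.8 GHz, well beyond the band of interest
```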
A typical attenuator consists of three switch selectable sections, 1:1, 10:1, and
100:1, as shown in Fig. 5.2.6. This allows us to cover the required input amplitude
range from millivolts to tens of volts (intermediate 2:1 and 5:1 attenuation settings are
usually implemented after the input buffer stage with low value resistors). Because the
attenuator’s compensation is adjustable, the input capacitance changes, so it has to be
‘standardized’ by an additional compensating capacitor to make the input capacitance
of all sections equal.
[Figure: three switched paths: 1) direct, 1 MΩ∥15 pF; 2) 900 k∥10 pF over 100 k∥90 pF, with a 2–8 pF trimmer; 3) 990 k∥10 pF over 10 k∥990 pF, with a 2–8 pF trimmer; common output capacitance Co = 1 pF]

Fig. 5.2.6: The direct and the two attenuation paths are switched at both input and output,
in order to reduce the input capacitance. For low cross-talk, the input and output of each
unused section should be grounded (not shown here). The variable capacitors in parallel
with the two attenuation sections are adjusted so that the input capacitance is equal for all
settings. Of course, other values are possible, e.g., ÷1, ÷20, ÷400 (as in the Tek 7A11), with the
highest attenuation achieved by cascading two ÷20 sections. The advantage is that the
parasitic series inductance of the largest capacitance in the highest attenuation section is
avoided; a disadvantage is that it is very difficult to trim correctly.


Unfortunately, for attenuator settings other than 10:1 the resistive
compensation shown in Fig. 5.2.5 can be very difficult to achieve. For a 100:1
attenuation the resistance required between the attenuator and ground would be
only 0.2525 Ω, and such a low value might be present in the circuit already (in the
form of a ground return trace, if it is not wide enough). Even if we are satisfied with a
1 % tolerance we still need a well controlled design, taking care of every 10 mΩ.
The most critical is the direct (1:1) path. In order to present the same input
load to an external 10:1 probe, this path must have the same input resistance and
capacitance as the higher attenuation paths. However, since it provides no
attenuation, the direct path can not be compensated by a low value resistor.
It is for this reason that many designers avoid using the direct path altogether
and opt for a 2:1 and 20:1 combination instead. Such a circuit, showing also the
resistive compensation, is drawn in Fig. 5.2.7. As a bonus, the amplifier input current
limiting and input overdrive protection circuitry is easier to realize (no 1:1 path),
needing a lower series impedance and thus allowing higher bandwidths.

500k 20pF
1
2-8pF
500k 15-25pF
i o
25 Ω Co
50 Ω
1pF
50 Ω
g
950k 10pF
2
2-8pF
50k 180-200pF

1.3 Ω

Fig. 5.2.7: The attenuator with no direct path, in which the 25 Ω effective source impedance
compensation can be used for both settings. A low ground return path impedance is necessary.

On the negative side, with the ÷2 and ÷20 attenuation the amplifier must
provide another gain of two, making the system optimization more difficult.
Fortunately, for modern amplifiers driving an AD converter the gain requirement is
low, since the converter requires only a volt or two for a full range display; in contrast,
a conventional ’scope CRT requires tens of volts on the vertical deflecting plates.
Thus a gain reduction by a factor of at least 10 (and a similar bandwidth increase!)
works in favor of modern circuits.
Whilst the gain requirements are relaxed, modern sensitive circuits require a
higher attenuation to cover the desired signal range. But obtaining a 200:1 attenuation
can be difficult because of capacitive feedthrough: even 0.1 pF from the input to
the buffer output, together with a non-zero output impedance, can be enough to spoil
the response. If we can tolerate a feedthrough error of one least significant bit of an 8
bit analog to digital converter, the 200:1 attenuator needs an effective isolation
of 20 log10(200 × 2^8) = 94 dB, which is sometimes hard to achieve even at audio
frequencies, let alone at GHz. A cascade of two sections could be the solution.
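The isolation requirement scales with both the attenuation and the converter resolution; the arithmetic as a sketch:

```python
import math

A = 200      # attenuation factor
bits = 8     # ADC resolution; one LSB of feedthrough error tolerated
iso = 20 * math.log10(A * 2**bits)
print(round(iso))   # 94 dB of required feedthrough isolation
```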

5.2.2 Attenuator Inductance Loops

A designer’s life would be easy with only resistances and capacitances to deal
with. But every circuit also has inductance, whether we intentionally put it in or
desperately try to avoid it. As we have learned in Part 2, in wideband amplifiers,
instead of trying to avoid the unavoidable, we rather put the inductance to use by
means of fine tuning and adequate damping.
In Fig. 5.2.8 we have indicated the two inductances associated with the
attenuator circuit. Because of the high voltages involved, the attenuator can not
use arbitrarily small components packed arbitrarily close together. As a consequence,
the circuit will have loop dimensions which can not be neglected and, since the
inductance is proportional to the loop area, the inductance values can be
relatively large (for wideband amplifiers).
As with stray capacitance, the value of stray inductance can not be readily
predicted, at least not to the precision required. Each component in Fig. 5.2.8 will
have its own stray inductances, one associated with the internal component structure
and the other associated with the component leads, the soldering pads, and the PCB
traces. These add to the loop inductance.

[Fig. 5.2.8 schematic: 50 Ω source driving the attenuator (R1·C1 over R2·C2, with R3) through loop inductance L1; second loop with L2, protection network Rp/Cp (2–8 pF), damping resistor Rd, and amplifier input capacitance Co ≈ 2 pF; mutual inductance LM between the loops.]

Fig. 5.2.8: Inductances owed to circuit loops can be modeled as inductors in series with the
signal path. Note that in addition to the two self inductances there is also a mutual inductance
between the two. The actual values depend on the loop’s size, which in turn depends on the size
of the components and the circuit’s layout. Smaller loops have less inductance. Mutual
inductance can be reduced by shielding, although this can increase the stray capacitances.

Nevertheless, it is relatively easy to estimate both loop inductances, at least to
an order of magnitude. Basically, a single loop current I causes a magnetic flux Φ
with a density B within the loop area S, so the self inductance is:

   L = Φ/I = B·S/I = µ·H·S/I = µ0·µr·H·S/I    (5.2.43)


The current I and the magnetic field strength H are proportional: H = I/(2r)
for a single loop, where r is the loop radius. In a linear non-magnetic environment
(with the relative permeability µr = 1) I and B are also proportional, because
B = µH. Furthermore, µ0 is the free space magnetic permeability, also known as the
‘induction constant’, the value of which has been set by the SI agreement about the
ampere: µ0 = 4π·10⁻⁷ [V s A⁻¹ m⁻¹]. This means that a current of 1 A encircling
once a loop area of 1 m² causes a magnetic flux of 1 Vs. Because for a
circular loop S = π·r², our loop inductance equation can be reduced to:

   L = µ0·S/(2r) = µ0·π·r²/(2r) = µ0·π·r/2 = k·r    (5.2.44)

where k = 2π²·10⁻⁷ H/m. The inductance of a 1 m² loop (r = 0.5642 m) is then
≈ 1.14·10⁻⁶ H (the unit of inductance is ‘henry’, after Joseph Henry, 1791–1878;
[H] = [V s A⁻¹]).
As a more practical figure, a loop of 10 cm² (≈ 0.0178 m radius circle) has an
inductance of ≈ 35 nH. This does not look much, but remember that in our circuit the
loop inductance L1 is effectively in series with the signal source and is loaded by the
attenuator’s input capacitance, forming a 2nd-order low pass filter with a cut off
frequency fh = 1/(2π·√(L1·Ca)) ≈ 268 MHz, assuming L1 = 35 nH and Ca = 10 pF.
With such values the step response rings long, since the equivalent signal source
resistance (Rge = 25 Ω) is not high enough to damp the resonance (such damping
would be adequate for L1 < 5 nH).
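These estimates are easy to reproduce. A Python sketch using the circular-loop model of Eq. 5.2.44; the series-RLC damping ratio ζ = (R/2)·√(C/L) is an added check, not taken from the text:

```python
import math

MU0 = 4 * math.pi * 1e-7        # free-space permeability [Vs/(Am)]

def circular_loop_inductance(area_m2):
    """Order-of-magnitude estimate, Eq. 5.2.44: L = mu0*pi*r/2."""
    r = math.sqrt(area_m2 / math.pi)
    return MU0 * math.pi * r / 2

L1 = circular_loop_inductance(10e-4)      # 10 cm^2 loop
Ca = 10e-12                               # attenuator input capacitance [F]

# 2nd-order low-pass cutoff and damping with Rge = 25 ohm in series:
fh = 1 / (2 * math.pi * math.sqrt(L1 * Ca))
zeta = (25 / 2) * math.sqrt(Ca / L1)

print(L1 * 1e9, fh / 1e6, zeta)   # ~35 nH, ~268 MHz, strongly underdamped
```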
The above inductance estimation is based on a circular loop model, whilst our
loops will usually be of a square form (increasing P), with additional stray
inductances owing to the internal geometry of the components (capacitors) and their
leads or just the PCB traces in case surface mounted components are used.
The loop inductances can, of course, be measured. If we replace C1, C2 and R3
(Fig. 5.2.8) by a wire of the same total length, the input resistance and L1 form a high
pass filter, whose cut off frequency can be measured. Next, by removing the wire and
also R3 and replacing Rp, Rd and Co by another wire, we can measure L1 + L2.
Obtaining L2 is then a matter of simple subtraction. Finally, by applying a signal to
the input, shorting R3, and measuring the signal induced in L2, we can calculate the
mutual inductance LM. Note that a thin wire will have a somewhat larger inductance
than a wide PCB trace.
The best way to reduce the loop area (and consequently P) is to use a 3-layer
PCB, and make the middle layer a ‘ground plane’. In addition, using surface mounted
components reduces the circuit size and also allows us to place them on both sides of
the board. However, this technique also increases the stray capacitances and can also
cause reflections if the ‘microstrip’ trace impedances are not well matched to the
circuit. Therefore, a careful PCB design is needed, with wider ground clearance
around sensitive pads and using a material with low εr. The most sensitive node in this
respect is the attenuator output.
Another way of reducing the effect of stray inductance is to employ the same
technique as we did for the low value resistors. This means that the inductance in the
ground path (the signal return path) should not be too small, as it would be in the
ground plane case; rather, the return path inductance should be kept in the same ratio


to L1 as the attenuation ratio. Precision in this respect is difficult, but not
impossible to achieve. Our inductance expression, Eq. 5.2.44, does not show it, but
inductance is also inversely proportional to trace width. Powerful finite element
numerical simulation routines will be required for the job.
However, the same trick can not be used for L2 (no attenuation in this loop!).
Fortunately, as will become clear from the analysis below, the input inductance L1 is
more critical than L2, since the latter is loaded by a much smaller capacitance (Co)
and can be suitably damped with a larger resistance (Rd, which is already in the circuit
because it is required for the FET gate protection).
We shall analyze the attenuator loops by assuming perfectly matched time
constants, R1·C1 = R2·C2, matched also to the other attenuator paths, so that the
variable capacitor in parallel is not needed. Also, we shall replace the two 50 Ω
resistors with a single 25 Ω one, representing the effective signal source resistance Rs
in series with the input, with vi = vg/2. The loop inductances are represented by
discrete components, L1 and L2 in the forward signal paths, as drawn in Fig. 5.2.8.
In the second loop the first thing to note is that C2 is many times larger than
Co (10–500×, depending on the attenuation setting) and the same is true for Cp, which
means that their reactance will be comparably low and can thus be neglected.
Likewise, the resistances R2 and Rp in parallel with these capacitances are large in
comparison with their reactances. What remains is the loop inductance L2 in series
with Rd + R3, driving the amplifier input capacitance Co. If the attenuated input
voltage is vi/A, the output voltage will be:

   vo = (vi/A) · [1/(s·Co)] / [s·L2 + Rd + R3 + 1/(s·Co)]    (5.2.45)
So we have a 2nd-order transfer function:

   F2(s) = vo/vi = (1/A) · [1/(L2·Co)] / [s² + s·(Rd + R3)/L2 + 1/(L2·Co)]    (5.2.46)

Since R3 is fixed and of quite low value, Rd is used to provide the desired damping.
The input loop analysis is similar. Here we have the equivalent source
resistance Rs + R3 in series with L1, driving the equivalent input attenuator
capacitance Ca (Eq. 5.2.9; the attenuator resistance R1 + R2 can be neglected at high
frequencies). At the top of the attenuator we have:

   vi = (vg/2) · [R3 + 1/(s·Ca)] / [s·L1 + Rs + R3 + 1/(s·Ca)]    (5.2.47)

which results in the following second-order transfer function:

   F1(s) = 2·vi/vg = [s·R3/L1 + 1/(L1·Ca)] / [s² + s·(Rs + R3)/L1 + 1/(L1·Ca)]    (5.2.48)


The numerator can be written as:

   s·R3/L1 + 1/(L1·Ca) = [1/(L1·Ca)] · (1 + s·Ca·R3)    (5.2.49)

It is clear that the frequency of the zero, 1/(Ca·R3), is much higher than the
frequency of the pole pair, 1/√(L1·Ca). Also, if L1 ≈ L2 and Ca is at least 5 to 10
times larger than Co, then Ca will dominate the response. Fortunately, as discussed
above, with a clever layout of components and a suitable ground plane, L1 can be
broken into L1a and L1b, so that L1b is in the ground return path. If we can make
L1a = 9·L1b we would achieve an effective inductance compensation in this loop.
We are thus left with the L2 loop and its transfer function, Eq. 5.2.46.
However, this 2nd-order function will be transformed by the pole of the JFET source
follower into a 3rd-order function, owing to its capacitive loading.
Although the inductance is always caused by a current loop, the inductance of
a straight PCB trace can be estimated as some 7–10 nH/cm (length), depending on the
trace width. In [Ref. 5.16] a good empirical approximation is offered:

   L = 0.2·l · [ ln(2·l/(w + h)) + 0.2235·(w + h)/l + 0.5 ]    (5.2.50)

where the trace length l, width w and thickness h are all in mm, resulting in the
inductance in nH (no ground plane in this case!). With surface mounted components,
choosing capacitors with low serial inductance, and using miniature relay switches in
the attenuator, the inductance L2 can be reduced to less than 10 nH, making the pole
(pair) at 1/√(L2·Co) high, compared to the source follower real pole (set by the
damping resistance Rd and the source follower loading capacitance CL; see the JFET
source follower discussion in Part 3, Sec. 3.9). However, by making L2 somewhat
larger, say, 30–50 nH, we can achieve a 3rd-order Bessel pole pattern, improving the
bandwidth and reducing the rise time. In Fig. 5.2.9 we see the attenuator circuit of the
A = 10 section, followed by a JFET source follower.
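Eq. 5.2.50 drops straight into code; the example trace dimensions below are assumptions for illustration, not values from the text:

```python
import math

def trace_inductance_nH(l, w, h):
    """Empirical straight-trace inductance, Eq. 5.2.50 [Ref. 5.16].
    l, w, h are the trace length, width and thickness in mm;
    result in nH, valid with no ground plane underneath."""
    return 0.2 * l * (math.log(2 * l / (w + h)) + 0.2235 * (w + h) / l + 0.5)

# A hypothetical 10 mm long, 1 mm wide trace in 0.035 mm (1 oz) copper:
L_trace = trace_inductance_nH(10.0, 1.0, 0.035)
print(round(L_trace, 1))   # ~7 nH, consistent with the 7-10 nH/cm figure
```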
[Fig. 5.2.9 schematic: ÷10 attenuator (R1 = 900k, C1 = 10 pF; R2 = 100k, C2 = 90 pF; R3 = 2.74 Ω) with L1a = 3.3 nH, L1b = 0.37 nH and L2 = 0–50 nH; Rd = 180 Ω, protection diodes D1/D2, source follower JFET1 and current source JFET2 (2N5911, Rss = 50 Ω, Css = 800 pF), loaded by CL = 3.3 pF and RL = 12k.]

Fig. 5.2.9: The attenuator and the source follower JFET1 (JFET2 acts as a constant current
source bias for JFET1). The input loop inductance L1 should be low, but the attenuation
can be compensated (by L1b). The inductance L2 of the second loop can be ‘tuned’ and damped
by an appropriate value of Rd to provide a Bessel step response, as seen in Fig. 5.2.10.


Note that here we have not drawn the protecting components Cp and Rp, but
since a 325 V (peak value of the 230 V AC mains) at the input results in 32.5 V at
the attenuator output, these components are absolutely necessary. Also, Cp should be a
high voltage type (500 V), in order to survive the 325 V in the direct path (and still
163 V for a ÷2 attenuator); therefore, it will be of larger dimensions, so its internal
serial inductance will have to be taken into account.
Note also that for high bandwidth a low value of Co must be ensured. Since
the negative input impedance compensation network (as in Part 3, Sec. 3.9), as well as
Rd, D1, D2, CGD, and CL are present at the vo node, Co will tend to be high.
We have analyzed the step response in Fig. 5.2.10 for two values of L2 (10 and
50 nH; Rd has been chosen for a correct Bessel damping).
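If the L2 loop is treated as a pure 2nd-order section (Eq. 5.2.46), the total series resistance giving a 2nd-order Bessel response (Q = 1/√3) is R = √(3·L2/Co). This is only a sketch: it deliberately ignores the source-follower pole that makes the real system 3rd-order, and the values Co = 2 pF and R3 = 2.74 Ω are assumptions read off the figures:

```python
import math

def bessel_damping_rd(L2, Co, R3):
    """Damping resistor for a 2nd-order Bessel (Q = 1/sqrt(3)) response
    of Eq. 5.2.46: Rd + R3 = sqrt(3*L2/Co). The follower pole, which
    turns the loop into a 3rd-order system, is ignored here."""
    return math.sqrt(3 * L2 / Co) - R3

Co = 2e-12      # amplifier input capacitance [F] -- assumed
R3 = 2.74       # fixed series resistor [ohm]   -- from Fig. 5.2.9

for L2 in (10e-9, 50e-9):
    print(L2 * 1e9, "nH ->", round(bessel_damping_rd(L2, Co, R3), 1), "ohm")
```

A larger L2 thus calls for a proportionally larger (in square root) Rd, which is why Rd and L2 are tuned together.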

[Fig. 5.2.10 plot: step responses vo and vL for L2 = 10 nH (curve 1) and L2 = 50 nH (curve 2), normalized amplitude 0–1.2, time scale 0–4 ns.]
Fig. 5.2.10: Step response of the circuit in Fig. 5.2.9. With a low L1, a correctly damped
L2, and a good JFET, a 350 MHz bandwidth (vL2 rise time ≈ 1 ns) can be easily achieved.
The source follower gain is a little less than one. vo and vL are drawn for the two L2 cases.

5.2.3 The ‘Hook–Effect’

The discussion about high impedance attenuators would not be complete
without mentioning the so called ‘hook–effect’ (Fig. 5.2.11). The name springs from
the shape of the step response signal, which, owing to a sag in the 10–300 kHz region,
resembles a hook at slower time base values. The effect is caused by the frequency
dependent relative permittivity, εr, of the PCB material (the standard glass epoxy FR4,
FR stands for ‘flame resistant’, has an average εr = 4.5, but it changes considerably
with frequency and temperature).
The capacitance in farads of a parallel plate capacitor is expressed as:

   C = ε0·εr·S/d    [F] = [As/V]    (5.2.51)

where S is the plate area [m²], d is their distance [m], ε0 = 8.85×10⁻¹² [As/Vm] is
the permittivity of the free space (vacuum) and εr is the relative permittivity of the


dielectric between the plates. A pad on the PCB thus has some small stray capacitance
towards the ground (large if a ground plane is used). This capacitance changes with
frequency proportionally with εr. Also, the material is porous and the fibres are long,
extending to the edge of the board, allowing moisture in (water εr = 80), which
causes long term changes. The problem is not specific to this material only, it is
encountered with all traditional PCB materials (as well as many other insulators).
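Eq. 5.2.51 makes it easy to estimate the stray capacitance of a PCB pad; the pad size below is a hypothetical example, not taken from the text:

```python
EPS0 = 8.85e-12     # permittivity of free space [As/Vm]

def plate_capacitance(eps_r, area_m2, d_m):
    """Parallel-plate capacitance, Eq. 5.2.51: C = eps0 * eps_r * S / d."""
    return EPS0 * eps_r * area_m2 / d_m

# A 5 x 5 mm pad over a ground plane on 1.5 mm FR4 (eps_r ~ 4.5):
C_pad = plate_capacitance(4.5, 25e-6, 1.5e-3)
print(C_pad * 1e12, "pF")   # a fraction of a pF

# A few-percent change of eps_r with frequency moves C_pad by the
# same few percent -- the mechanism behind the 'hook' discussed next.
```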
[Fig. 5.2.11 plot and schematic: step response sag for C2 min and C2 max over 0–180 µs; attenuator R1 = 900k/C1 = 10 pF and R2 = 100k/C2 = 90 pF, with stray capacitances CPCB1, CPCB2 = 1–3 pF embedded in the frequency dependent ε(ω) of the board.]
Fig. 5.2.11: The ‘hook–effect’ is most noticeable in the frequency range 10–300 kHz. Because
the relative permittivity, εr, of a common PCB material is not exactly constant with frequency,
the high impedance attenuator will exhibit a hook in its step response, which can not be trimmed
out by the usual adjustment of C2. The CPCB stray capacitance can vary by some 10–30 %,
depending on the actual topology involved. Since C1 is small, it is affected by a few percent. The
lower attenuator leg is less affected, due to a larger value of C2.

To solve this problem, a special Teflon® based material is used for the instrument
front end, but it is expensive and not readily available. If it can not be obtained, one
possible solution could be to implement two large pads on a two sided PCB, in
parallel to C1 and C2, with their areas in the same ratio as the attenuation factor
required [Ref. 5.67]. Then the effect would be equally present in both legs, canceling
out the hook, Fig. 5.2.12. Even some trimming can be done by drilling small holes in
the larger pad pair (in contrast to cutting a pad corner, drilling removes the dielectric,
thus lowering both the pad area A and ε).

Fig. 5.2.12: Canceling the hook–effect in the common PCB material is achieved by
intentionally adding two capacitances in the form of large PCB pads, with areas in the same ratio as
required by the attenuation (since the area is proportional to the square of the linear dimensions,
for a 9:1 capacitance ratio, a 3:1 dimension ratio is needed). Trimming is possible by drilling
small holes in the larger pad.


The main problem with this solution is that the use of external probes will
expose the hook again, although to a lesser extent (owing to the large capacitance of
the probe compensation).

5.2.4 Improving the JFET Source Follower DC Stability

The DC performance of a JFET source follower is far from perfect. Even if we
use a dual JFET in the same case and on the same substrate, i.e., the Siliconix 2N5911
as in Fig. 5.2.9, their characteristics will not match perfectly. The 2N5911 data sheet
states a VGS mismatch of 10 mV maximum and a temperature drift of 20 µV/K. The
circuit in Fig. 5.2.13 offers moderate DC stability; the resistor RT is trimmed for a
zero vL to vin DC offset.

[Fig. 5.2.13 schematic: source follower fed from the attenuator, with Rss = 50 Ω, Css = 800 pF, load CL = 3.3 pF and RL = 12k; the Vofs trimming network consists of RT = 100 Ω with 510 Ω, 62 Ω and 10 nF.]
Fig. 5.2.13: Simple offset trimming of a JFET source follower.

Traditionally, oscilloscopes have a ‘vertical position’ control on the front
panel (one for each channel), which is adjusted by the user in accordance with the
particular measurement conditions, which differ from one situation to another, so the
offset and drift (if not too high) are not of particular concern.
However, in modern instrumentation some additional features are becoming
important, such as automated measurement, where we can not rely on the presence of
a human operator to make adjustments every so often. Also it is not uncommon to find
digital oscilloscopes with an 8 bit resolution (1:256) at high sampling rates, but
capable of 12 bit (1:4096) or even 16 bit (1:65536) resolution at low speed, and in
digital equipment it is expected that DC errors are of the order of ±1 LSB.
By trimming the current source, we reduce the DC offset, but the temperature
drift will remain. The gate current of a JFET, although normally in the < 100 pA
range, is also temperature dependent and approximately doubles with every 10 K. The
source follower input sees an attenuation dependent source resistance (from 1 MΩ to
10 kΩ), so an additional offset component will be present owing to the gate current
and the attenuator output impedance. A typical oscilloscope input has a maximum
sensitivity of 5 mV/div., or a 40 mV full screen range; the 1 LSB resolution for an 8
bit sampling is 40/256 ≈ 0.156 mV, therefore the simple trimming circuit is
inadequate for digital equipment, and an active offset compensation technique is
required to keep the DC error below some 200 µV.
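The resolution arithmetic, spelled out (Python; the 40 mV full-screen range is from the text):

```python
# 1 LSB voltage resolution for the quoted oscilloscope front end:
full_range_mV = 40.0                    # 8 divisions x 5 mV/div

lsb_8bit_mV = full_range_mV / 2**8      # 8-bit sampling
lsb_12bit_mV = full_range_mV / 2**12    # 12-bit low-speed mode

print(lsb_8bit_mV, lsb_12bit_mV)
# The 12-bit case explains why a DC error budget of ~200 uV,
# rather than the 8-bit ~156 uV, is the harder target.
```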


Basically, there are three ways of achieving a low DC error, each having its
own advantages and drawbacks. While DC performance is not of primary interest in
this book, it should be implemented so that the high frequency performance is preserved.
The first technique is suitable for microprocessor controlled equipment, where
the input can be temporarily switched to ground, the offset measured, and the error
either adjusted by a digital to analog converter or subtracted from the sampled signal
in memory. But this operation should not be repeated too often or take a considerable
amount of time, otherwise the equipment would be missing valid trigger events or,
worse still, introduce errors by loading and unloading the signal source with the
instrument’s input impedance. This is a rather inelegant solution and it should be
taken as the last resort only.
A better way, shown in Fig. 5.2.14, is to use a good differential amplifier to
monitor the difference between the Q1 gate and the output, integrate it, and modify
the Q2 current to minimize the offset. Note that this technique works well only while the
input is within the linear range of the JFET; when in the non-linear range or when
overdriven, the integrator will develop a high error voltage, which will be seen as a
long ‘tail’ after the signal returns within the linear range. Also, owing to the presence
of R1 and R2, the attenuator lower arm resistors will need to be readjusted.

[Fig. 5.2.14 schematic: follower Q1 with current source Q2, error amplifier A1 with R1 = 10M, R2 = 1M, R3 = 10M, R4 = 1M, R5 = 10k, C1 = C2 = 10 nF; follower loaded by Rss = 50 Ω, Css = 800 pF, CL = 3.3 pF, RL = 12k.]
Fig. 5.2.14: Active DC correction loop. The amplifier A1 amplifies and integrates the
difference between the Q1 gate and the output, driving through R5 the source of Q2 and
modifying its current to minimize the offset. The resulting offset is equal to the offset of A1,
multiplied by the loop gain (1 + R3/R4). The differential amplifier with a very low offset will
usually have its input bias current much larger than the JFET input current, therefore resistors
R2 and R4 provide a lower impedance to ground. C2 is the integration capacitor, whilst C1
provides an equal time constant to the non-inverting input. The feedback divider, R3 and R4,
should be altered to compensate for the system gain being slightly lower than one (this is achieved
by adding a suitably low value resistor in series with R4). For a low error the amplifier A1 must
have a high common mode rejection up to the frequency set by C2 and R3||R4.

But the most serious problem is owed to the amplifier A1: in order to
minimize the system offset it should have both a low offset and a low input bias current
itself. Although A1 can be a low bandwidth device, the low input error requirements
can easily put us back to where we started from.
The example in Fig. 5.2.14 is relatively simple to implement. However, for a
low error we must keep an eye on several key parameters. Ideally we would like to get


rid of the resistor R2 (and R4) to avoid the DC path to ground, because it alters the
attenuator balance.
Unfortunately, the input common mode range of the error amplifier is limited
and, more importantly, amplifiers with a low DC offset are usually made with bipolar
transistors at the input, so their input bias current can be in the nA range, much higher
than the JFET gate’s leakage (< 20 pA). The bias current would then introduce a high
DC offset over R1 (and R3). Here R2 and R4 come to the rescue, by conducting the
larger part of the bias current to ground through their lower resistance. On the other hand,
the amplifier input offset voltage is then effectively amplified by the DC loop gain,
1 + R3/R4. The amplifier is selected so that the total offset error is minimized:

   Vofs = (1 + 4·ΔR/R) · [ VA1ofs·(R3 + R4)/R4 + IA1ofs·(R1·R2)/(R1 + R2) ]    (5.2.52)

where ΔR/R is the nominal resistor tolerance and VA1ofs and IA1ofs are the amplifier’s
voltage and current input offsets, respectively.
An industry standard amplifier, the OP-07, has Vofs = 30 µV and Iofs = 0.4 nA
typical, so by taking the resistor values as in Fig. 5.2.14 (with a 1 % tolerance), we can
estimate the typical total system offset to be within ±728 µV, which is slightly larger
than we would like. The offset can be reduced by using a chopper stabilized amplifier,
such as the Intersil ICL-7650 or the LTC-1052 from Linear Technology, which have
a very low voltage offset (< 5 µV) and low current offset (< 50 pA), but their switching
noise must be filtered at the output; also their input switches are very delicate and
must be well protected from over-voltage. Therefore, we can not do without R2 and
R4 and consequently the attenuator must be corrected by increasing the lower
resistance appropriately. See [Ref. 5.2] for more examples of such solutions.
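As a ballpark check of the quoted ±728 µV, Eq. 5.2.52 can be evaluated with the OP-07 typical data and the resistor values of Fig. 5.2.14. The exact weighting of the resistor-tolerance term is an assumption of this sketch, so the result only needs to land in the same neighborhood:

```python
# Offset budget of the correction loop in Fig. 5.2.14 (Eq. 5.2.52):
R1, R2, R3, R4 = 10e6, 1e6, 10e6, 1e6   # [ohm]
tol = 0.01                              # 1 % resistors
V_ofs_A1 = 30e-6                        # OP-07 typical offset voltage [V]
I_ofs_A1 = 0.4e-9                       # OP-07 typical offset current [A]

V_ofs = (1 + 4 * tol) * (V_ofs_A1 * (R3 + R4) / R4        # loop gain term
                         + I_ofs_A1 * R1 * R2 / (R1 + R2)) # bias over R1||R2
print(round(V_ofs * 1e6), "uV")   # on the order of 700 uV
```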
The third technique involves separate low pass and high pass amplifier paths
and summing their outputs.
The example in Fig. 5.2.15 is made on the assumption that the sum of the two
outputs restores the original signal in both phase and amplitude. As the readers who
have tried to build loudspeaker crossover networks will know from experience, this
can be done correctly only for simple, first-order RC filters (with just two paths; for
higher order filters a third, band pass path is necessary).

[Fig. 5.2.15 block diagram: the input vi is split into a low pass path (A1, output vi1) and a high pass path (A2, output vi2), summed by A3 into vo.]

Fig. 5.2.15: The principle of separate low pass and high pass amplifiers.

Here the main problem is with the input of the low pass amplifier A1, which
must have an equally low input bias current as the high pass A2, but should also have
a very low voltage offset. Although in A1 we don't need to worry about the high


frequency response, we are essentially again at the start, since JFETs and MOSFETs,
which have low input current, have a high offset voltage, and vice versa for the BJTs.
But we can combine Fig. 5.2.14 and 5.2.15 and, by putting the RC network in
front of the source follower, we can eliminate the amplifier A2. Fig. 5.2.16 shows a
possible implementation.
[Fig. 5.2.16 schematic: input network R1 = 900k/C1 = 330 pF and R2 = 100k/C2 = 10 nF; error amplifier A1 with R3 = 900k, R4 = 100k, R5 = 2.7k, R6 = 300 Ω, correcting the Q1 gate through R7 = 10M; follower Q1/Q2 with Rss = 50 Ω, Css = 800 pF, CL = 3.3 pF, RL = 12k.]

Fig. 5.2.16: With this configuration we can eliminate the need for a separate high pass
amplifier. The DC correction is now applied to the Q1 gate through R7. The error integrating
amplifier A1 must have a gain of 10 in order to compensate for the 1 + R1/R2 and 1 + R3/R4
attenuation. Resistors R1 and R2 now provide the 1 MΩ input impedance for all attenuation
settings, and this requires the compensated attenuators in front to be corrected accordingly.

Furthermore, instead of using a single differential amplifier we can invert the
output by another low offset amplifier and rearrange the error amplifier into an
inverting integrator, as in Fig. 5.2.17. We can also self bias Q1 by bootstrapping the
resistor R7. This increases the input impedance by a very large factor, allowing us to
reduce C1 and thus further limit the current under overload conditions. However, be
aware of the possibility of leakage currents from the protection diodes and the JFET
gate itself, now that its DC input impedance has been increased.

[Fig. 5.2.17 schematic: bootstrapped R7 = 10M with C1 = 20 pF and C2 = 10 nF, R1 = R2 = 1M; inverting amplifier A2 with R3 = R4 = 100k and inverting integrator A1 with R5 = 10k; follower Q1/Q2 with Rss = 50 Ω, Css = 800 pF, CL = 3.3 pF, RL = 12k.]
Fig. 5.2.17: By inverting the output the error amplifier becomes an inverting
integrator and the offset correction is independent of the attenuator settings.
The bootstrapping of R7 produces an effective input resistance of about 2.4 GΩ.


Of course, now the DC error correction path must be returned to the current
source Q2. The input resistor R1 must be increased to 1 MΩ, since now the input of
A1 is at the virtual ground; likewise R2 must be equal to R1. Note that both A1 and
A2 offsets add to the final DC error.
Further evolution of this circuit is possible by combining DC gain switching
(R2 or R4 adjusting) with input attenuation. A very interesting result has been
described in [Ref. 5.2], where also all input relays have been eliminated (using 3
source followers with the switching at their supply voltages by PIN diodes).

5.2.5 Overdrive Recovery

The integration loop will reduce the DC error only if the output follows the
input. However, under a hard overdrive the JFET will saturate and the integrator will
build up a charge proportional to the input overdrive amplitude and duration. When
the overdrive is removed, the loop will re-establish the original DC conditions, but
with the integration time constant, so the follower will exhibit a very long ‘tail’.
This is one of the most annoying properties of modern instrumentation,
because we often want to measure the settling time of an amplifier, and a convenient
specification is the time from the start of the transient to within 0.1 % of the final value.
With a good old analog ’scope we would simply increase the vertical sensitivity and
adjust the vertical position so that the final signal level is within the screen range. But
with modern DC compensated circuits this is not possible, and in order to avoid the
post-overdrive tail we must use a specially built external limiter, [Ref. 5.6], to keep
the input signal within the linear range of the ’scope. The quality and speed of this
limiter will also influence the measurement.
Note that simple follower circuits, like the one in Fig. 5.2.13, would also
exhibit a small but noticeable post-overdrive tail, mainly owed to thermal effects.
Also, high amplitude step response can be nonlinear, as shown in Fig. 5.2.18, owing to
the variation of the JFET gate to channel capacitance with voltage (Eq. 5.2.18–19), but
the time constant involved here is relatively small.

Fig. 5.2.18: Large signal step response (but still well below overdrive) is nevertheless
nonlinear, caused by the variation of the JFET gate–drain capacitance with voltage.


5.2.6 Source Follower with MOSFETs

For a very long time, ever since semiconductors replaced electronic tubes in
instrumentation, JFETs were the only components used for the source follower input
section. Even today, JFETs outshine all other components in all performance aspects
but one — sheer speed. Unfortunately, the BJT input impedance is much too low for the
1 MΩ required. And MOSFETs, although having a higher DC input resistance than
JFETs, can have (depending on their internal geometry) a higher input leakage current,
are notoriously noisy, and their gate is easily damaged by overdrive.
If, however, we are ready to accept the design challenge to help the MOSFET
by external circuitry, we might be rewarded with a faster follower. Also MOSFETs
lend themselves nicely to integration, and this is where the experience gained from the
design of high speed digital circuits can help. Circuit area reduction minimizes the
stray capacitance and inductance, and new IC processing and semiconductor materials
(e.g., GaAs, SiGe) increase charge mobility.
Note that for source follower applications a depletion type MOSFET is needed
in order to achieve the required drain–source conductance with zero gate–source
voltage. With appropriate doping, the supply voltage can be reduced to only 2 or 3 V
(in contrast to several tens of volts required by conventional circuits), whilst retaining
good high frequency operation. This also reduces the power dissipation and, more
importantly, with low system supply voltage, the need for voltage gain is lower.
As with BJTs and JFETs, the parasitic capacitances of MOSFETs are also
voltage dependent, but only partially, as will become evident from the following
comparison with JFETs.

[Fig. 5.2.19 drawing: a) n-channel JFET cross-section with source, gate and drain, showing the bias-depleted region; b) Vgs–Id transfer characteristic with Idss and Vp; c) circuit symbol; d) equivalent model with Cgd, Cgs, the controlled source gm·Vgs and the output resistance ro.]
Fig. 5.2.19: a) A typical n-channel JFET structure cross-section under the bias condition.
The p-type substrate is in contact with the p-type gate. The n-type channel is formed
between the source and the drain. The bias voltage depletes the channel. b) The Vgs–Id
characteristic. c) The symbolic circuit. d) The equivalent circuit model.


The JFET capacitances Cgd and Cgs are voltage dependent:

   Cgd = Cgd0·(1 + Vgd/Vbi)^(−nJ)    (5.2.53)

   Cgs = Cgs0·(1 + Vgs/Vbi)^(−nJ)    (5.2.54)

where:
   nJ is the junction grading coefficient (1/2 for abrupt and 1/3 for graded
   junctions; most JFETs are built with a graded junction);
   Vbi = (kB·T/qe)·ln(NA·ND/ni²) is the intrinsic zero bias built-in potential;
   NA is the acceptor doping density in p-type material (≈ 10²¹ atoms/m³);
   ND is the donor doping density in n-type material (≈ 10²² atoms/m³);
   ni is the intrinsic Si charge density (1.5×10¹⁶ electrons/m³ at 300 K).

The built-in potential Vbi relates to the JFET pinch off voltage parameter VP as:

   VP = [qe·NA·a²/(2·εSi)]·(1 + NA/ND) − Vbi    (5.2.55)

where:
   a is the channel thickness (≈ 2×10⁻⁶ m);
   εSi is the silicon dielectric permittivity (= 1.04×10⁻¹⁰ F/m).

With the typical values above, Vbi ≈ 0.64 V and VP ≈ 3.4 V.
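The built-in potential is easy to verify numerically (Python sketch, with the physical constants and the doping densities quoted above):

```python
import math

kB = 1.38e-23        # Boltzmann constant [J/K]
qe = 1.6e-19         # elementary charge [As]
T  = 300.0           # temperature [K]
NA = 1e21            # acceptor doping density [1/m^3]
ND = 1e22            # donor doping density [1/m^3]
ni = 1.5e16          # intrinsic Si carrier density [1/m^3]

# Intrinsic zero-bias built-in potential (definition given in the text):
V_bi = (kB * T / qe) * math.log(NA * ND / ni**2)
print(round(V_bi, 2), "V")   # ~0.63 V, matching the quoted 0.64 V
```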


In integrated circuits, a JFET would also have a gate to substrate capacitance
Cgss, which, accounting for an abrupt junction, can be expressed as:

   Cgss = Cgss0·(1 + Vgss/Vbi)^(−1/2)    (5.2.56)

A typical zero bias range of values for these capacitances is:

   Cgd0 = 0.3–1 pF
   Cgs0 = 1–4 pF
   Cgss0 = 4–8 pF

So a JFET with a transconductance gm = 2.5×10⁻³ A/V and a total gate
capacitance CT = Cgd + Cgs + Cgss = 4 pF (under appropriate bias) would yield a cut
off frequency:

   fT = (1/2π)·(gm/CT) = 2.5×10⁻³/(6.28 × 4×10⁻¹²) ≈ 100 MHz    (5.2.57)
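A quick check of Eq. 5.2.57 (Python, values from the text):

```python
import math

gm = 2.5e-3          # JFET transconductance [A/V]
CT = 4e-12           # total gate capacitance under bias [F]

# Cutoff frequency, Eq. 5.2.57: fT = gm / (2*pi*CT)
fT = gm / (2 * math.pi * CT)
print(round(fT / 1e6), "MHz")   # ~100 MHz
```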

For MOSFETs, the situation is slightly different. Fig. 5.2.20 shows a typical n-
channel MOS transistor cross-section and the equivalent circuit model.


[Fig. 5.2.20 drawing: a) n-channel MOSFET cross-section with n+ source and drain regions in a p-type substrate (body B), metal gate over a thin SiO2 layer, bias-depleted region and bias-induced n-type channel; b) Vgs–Id characteristic with Idss and Vgsoff; c) circuit symbol; d) equivalent model with Cgd, Cgs, Cgb, Csb, Cdb, the sources gm·Vgs and gmb·Vsb, and ro.]
Fig. 5.2.20: a) A typical n-channel MOSFET structure’s cross-section under bias condition.
Two heavily doped n+ regions (source and drain) are manufactured on a p-type substrate and a
metal gate covers a thin insulation layer, slightly overlapping the n+ regions. The bias voltage
depletes a thick region in the substrate, within which an n-type channel is induced between the
source and the drain. b) The Vgs–Id characteristic. c) The symbolic circuit. d) The equivalent
circuit model has two current sources, one owed to the usual mutual transconductance gm and
the gate–source voltage Vgs; the other is owed to the so called ‘body effect’ transconductance
gmb and the associated source–body voltage Vsb. The gmb is typically an order of magnitude
lower than gm.

From the MOSFET structure cross-section it can be deduced that C_gb is small,
owing to the relatively large depleted region. Ordinarily its value is about 0.1 pF and it
is relatively constant. Likewise, the depletion region capacitances C_sb and C_db are also
small (they are proportional to the source and drain areas), but they are voltage
dependent:

    C_sb = C_sb0 (1 + V_sb/V_bi)^(−1/2)                              (5.2.58)

    C_db = C_db0 (1 + V_db/V_bi)^(−1/2)                              (5.2.59)
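The square-root law of Eq. 5.2.58 and 5.2.59 can be sketched as follows; the zero-bias value used here is an assumption for illustration only, and V_bi = 0.64 V is the value computed earlier:

```python
# Abrupt-junction depletion capacitance: C = C0 * (1 + V/V_bi)**(-1/2).
def depletion_C(C0, V, V_bi=0.64):
    """Return the junction capacitance at reverse bias V (volts), C0 at zero bias."""
    return C0 * (1.0 + V / V_bi) ** -0.5

C_sb0 = 1e-12                      # F, assumed zero-bias value
print(depletion_C(C_sb0, 0.0))     # equals C_sb0 at zero bias
print(depletion_C(C_sb0, 5.0))     # roughly a third of C_sb0 at 5 V reverse bias
```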

The capacitances C_gs and C_gd are owed to the SiO₂ insulation layer between
the gate and the channel. If S_g is the gate area and C_x is the unit area capacitance of
the oxide layer under the gate, then the total capacitance is:

    C_gs0 + C_gd0 = S_g C_x                                          (5.2.60)

Most MOSFETs are built with symmetrical geometry, thus the total zero bias
capacitance is simply split in half. But in the saturation region the channel narrows, so


the drain voltage influence is small, resulting in a nearly constant C_gd whose value is
essentially proportional to the small gate–drain overlapping area. Thus typical C_gd
values range between 0.002 and 0.020 pF.
C_gs is larger, typically some 2/3 of the S_g C_x value, or about 1–2 pF.
Although the g_m of MOSFETs is typically lower than that of JFETs, it is the very small
capacitances, in particular C_gd and C_gb, which are responsible for the wider bandwidth
of a MOSFET source follower. Cutoff frequencies of many GHz are easily achieved.

5.2.7 Input Protection Network

The input protection network is needed for two distinct real life situations. The
first one is the (occasional) electrostatic discharge, the second one is a long term
overdrive.
Imagine a technician sitting on a well insulated chair, wearing woollen or
synthetic clothes and rubber-soled shoes, repairing a circuit on his bench. For a while
he rubs his clothes on the chair by reaching for the schematic, the spare parts, some
tools, etc., thus quickly charging himself up to an average 500 V. Suddenly, he needs
to put a 1:1 ’scope probe somewhere on the rear panel and he stands up, touching the
probe to identify its trace by the characteristic capacitive AC mains pickup. By
standing up, he has increased his average distance from the chair by a large factor, say
30, but the charge on the chair and his clothes remains unchanged. This is equivalent
to charging a parallel-plate capacitor and then increasing the plate distance, so that
the capacitance drops inversely with the distance (Eq. 5.2.51). Because V = Q/C, his
effective voltage would increase 30 times, reaching some 15 kV!
The average capacitance of the human body towards the surroundings of an
average room is about 200 pF. So, when our technician touches the probe tip, he will
discharge the 15 kV of his 200 pF right into the input of the poor ’scope. And such a
barbaric act can be repeated hundreds of times during an average repairing session.
At the instant the probe tip is touched the effective input voltage falls for the
first 5 ns (the propagation delay of the 1 m long probe cable) to a level set by the
resulting capacitive divider α = 1/(1 + C_cable/C_body), so if C_cable = 100 pF/m,
V = α V_body ≈ 10 kV. Here we assume a signal propagation velocity of 0.2 m/ns
(about 2/3 of the speed of light). Also, note that the probe cable is made as a lossy
transmission line (the inner conductor is made of a thin resistive wire, about 50 Ω/m).
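The 10 kV figure follows directly from the capacitive divider; a quick numeric check with the values from the text:

```python
# Capacitive divider between body capacitance and probe cable capacitance.
def divider_ratio(C_body, C_cable):
    """alpha = 1 / (1 + C_cable/C_body), valid for the first few ns of the discharge."""
    return 1.0 / (1.0 + C_cable / C_body)

C_body  = 200e-12   # F, human body towards the room
C_cable = 100e-12   # F, 100 pF/m times 1 m of probe cable
V_body  = 15e3      # V
print(divider_ratio(C_body, C_cable) * V_body)  # 10000.0 -> about 10 kV at the spark gap
```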
After 5 ns the cable capacitance is fully charged and the signal reaches the
spark gap. The spark gap fires, limiting the input voltage to its own breakdown voltage
(1500–2000 V), providing a low impedance path to ground and discharging
C_cable + C_body. Some 25 ns later the voltage falls below the spark threshold.
Now the total capacitance C_cable + C_body + C_in is discharged into the
remaining input resistance. With the attenuator set to the highest sensitivity (1:1), the
input resistance is equal to the 1 MΩ of R_in, in parallel with the series connection of the
damping resistor R_d and one of the protection diodes (depending on the voltage
polarity). The diode must withstand a peak current I_dpk = V_spark/R_d; if R_d = 150 Ω,
then I_dpk = 2000/150 = 13.3 A! Fortunately, the peak current is also lowered by the
loop inductance. The spark discharges the capacitance in less than 30 ns and then the


current falls exponentially as the total capacitance is discharged through R_d, which
lasts another 250 ns. At this time the voltage is lower than V_cc + V_D1 and the
capacitance is discharged through R_in.
In Fig. 5.2.21 we have plotted the first 250 ns of the discharge event, along
with the schematic, showing only the most important circuit components.

Fig. 5.2.21: A human body model of electrostatic discharge into the oscilloscope input. About
5 ns after touching the probe tip the probe cable is charged and the voltage reaches the spark gap.
The spark gap fires and limits the voltage to its firing threshold. The arc provides a low
impedance path discharging the body and cable capacitance until the voltage falls below the firing
threshold (~25 ns). The remaining charge is fed through R_d and one of the protection diodes, until
the voltage falls below V_cc + V_D1 (~250 ns). Finally, the capacitance is discharged through R_in.
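The peak current and the discharge timing quoted above can be reproduced with a simple RC estimate; this is a sketch only, since the loop inductance that softens the actual current peak is ignored:

```python
# Peak diode current and RC discharge time constant of the ESD event.
R_d     = 150.0     # ohm, damping resistor
V_spark = 2000.0    # V, upper spark gap firing threshold
C_total = (200 + 100 + 20) * 1e-12   # F: body + cable + input capacitance

I_dpk = V_spark / R_d
tau   = R_d * C_total
print(I_dpk)        # 13.33... A peak through the protection diode
print(tau * 1e9)    # 48 ns; about five time constants give the ~250 ns of the text
```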

A different situation occurs in case of a long term overdrive. Fig. 5.2.22 shows
the protection network.
Fig. 5.2.22: Input protection network for long term overdrive.

The most severe long term input overdrive occurs when the oscilloscope input
is on its highest sensitivity setting (no attenuation) and the user inadvertently connects
a 1:1 probe to a high voltage DC or AC power supply. A typical highest sensitivity
setting of 2 mV/div or ±8 mV range is brutally exceeded by the 230 V_eff, 650 V_pp AC
mains voltage. Since with a well designed instrument nothing dramatic would happen


(no flash, no bang, no smoke), the user might realize his error only after a while (a few
seconds at best, or several minutes in the evening at the end of a long working day).
The instrument must be designed to withstand such a condition indefinitely.
With component values as in Fig. 5.2.22 the peak current through R_p is:

    I_Rpk = (V_inpk − V_cc)/R_p = (325 − 10)/(150 × 10³) = 2.1 mA

and the peak current through C_p:

    I_Cpk = (V_inpk − V_cc) ω C_p = (325 − 10)(2π × 50 × 1.5 × 10⁻⁹) = 0.15 mA

Of course, I_C leads I_R in phase by π/2, so the total current through R_p is the
vector sum √(I_C² + I_R²), and its value is essentially that of I_R, since the mains
frequency (50–60 Hz) is much lower than the network cutoff, 1/(2π C_p R_p) or 707 Hz.
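These steady-state numbers follow directly from the component values of Fig. 5.2.22:

```python
# Long-term overdrive: currents through R_p and C_p at mains frequency.
from math import pi, hypot

V_inpk, V_cc = 325.0, 10.0    # V, mains peak and supply rail
R_p, C_p     = 150e3, 1.5e-9  # ohm, F
f_mains      = 50.0           # Hz

I_Rpk = (V_inpk - V_cc) / R_p
I_Cpk = (V_inpk - V_cc) * 2 * pi * f_mains * C_p
I_tot = hypot(I_Rpk, I_Cpk)          # vector sum of the two quadrature currents
f_c   = 1 / (2 * pi * C_p * R_p)     # network cutoff

print(I_Rpk * 1e3)   # ~2.1 mA
print(I_Cpk * 1e3)   # ~0.15 mA
print(f_c)           # ~707 Hz
```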
One could easily be unimpressed by such low current values; however, we
must not forget the transient conditions. With abundant help from Mr Murphy, we
shall make the connection at the instant when the mains voltage is at its peak. And
Mr Gauss will ensure a 50% probability that the instantaneous voltage will be above
the effective value. Then the input current is limited by R_d only (2.1 A peak!).
Fortunately the current falls exponentially with the R_d C_p time constant (225 ns), so
the transient is over in about 1 µs.
The value of C_p should not be too low, either; note that it forms a capacitive
divider with the JFET input capacitance and ground strays. If these are about 1.5 pF,
the high frequency gain will be lower than at DC by about 0.1%.
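The 0.1% figure comes out of the capacitive divider formed by C_p and the strays (the 1.5 pF stray value is the one assumed in the text):

```python
# High-frequency gain error of the C_p / stray-capacitance divider.
C_p     = 1.5e-9   # F
C_stray = 1.5e-12  # F, JFET input capacitance plus ground strays (assumed)

hf_gain = C_p / (C_p + C_stray)
print((1 - hf_gain) * 100)   # ~0.1 % drop relative to the DC gain
```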
Note also that all these components must be specified to survive voltage
transients of at least 500 V, so their larger physical dimensions will increase the circuit
size, and as a consequence the parasitic loop inductance and stray capacitances.
Note also that by using a ÷2 basic input attenuation, the 150 kΩ and 1.5 nF can
be safely eliminated because the attenuator takes over their function.

5.2.8 Driving the Low Impedance Attenuator

The high impedance attenuator, discussed in Sec. 5.2.1, is almost exclusively
implemented as a two or three decade switch. The intermediate attenuation and gain
settings of the 1–2–5 sequence vertical sensitivity selector are usually realized in the
stages following the FET source follower. For the highest bandwidth the 1–2–5 attenuator
is designed as a 50 Ω resistive divider and there are some advantages (regarding the
linear signal handling range) if this attenuator is put immediately after the FET source
follower. However, the FET by itself cannot drive such a low impedance load and
additional circuitry is required to help it to do so.
An interesting solution is shown in Fig. 5.2.23, patented by John L. Addis in
1983 [Ref. 5.21].
The input FET Q1 is biased by the constant current source Q2, as we have seen
in Fig. 5.2.13. It is also actively compensated for large signal transient nonlinearity (by
C2 and Q4) and bootstrapped by Q3, which reduces the input capacitive loading by

keeping the voltage at C_gd nearly constant. The output complementary emitter
follower is driven from the Q1 source and R3 (R3 should be equal to R4 to reduce the
DC offset). The Q3 bootstrap is provided by DZ2, which, together with DZ3, also
bootstraps the bias circuit (R9,10 and D3,4) for the bases of Q5 and Q6, lowering in this
way the load on the Q1 source.

Fig. 5.2.23: The FET source follower with a buffer is capable of driving
a low impedance load (such as a 50 Ω 1–2–5 attenuator section).

Bootstrapping increases the DC and low frequency impedance seen by Q1, but
note that its use will make sense at high frequencies only if Q5,6 and Q3 are
substantially faster than the FET itself. Otherwise, the bootstrap circuitry would only
increase the parasitic capacitances and thus increase the rise time.
With enough driving capability made available by Q5 and Q6, the load resistor
R_L can now be realized as the low impedance three step attenuator with a direct path
and two attenuated paths, ÷2 and ÷4. Besides the usual maximum input sensitivity of
5 mV/div, this attenuator will provide the next two settings of 10 and 20 mV/div.
The following lower sensitivity settings (50 mV/div, etc.) are achieved by switching
in the ÷10 and ÷100 sections of the high impedance input attenuator, achieving the
lowest sensitivity of 2 V/div. An external ÷10 probe will decrease this to 20 V/div.
Fig. 5.2.24 shows two possible implementations of the low impedance divider,
having 50 Ω impedance at both input and output. The first attenuator design is based
on the T-type network and the other on the π-type network. If the input signal is a
current, the series 50 Ω in the ÷1 branch can be omitted.



Fig. 5.2.24: The low impedance attenuator (50 Ω input and output) can be
built as a straight T-type ladder or a π-type ladder. If driven by a current
source the series 50 Ω in the ÷1 branch can be omitted.

Here we assume that the input impedance of the following amplifier stage is
high enough and its input capacitance is low enough for the division factor to remain
correct at each setting and the bandwidth does not change. It is also important to keep
the switch capacitive cross-talk low and preserve the nominal impedance by designing
it as a microstrip transmission line.
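Matched fixed attenuators of this kind are usually derived from the textbook single-section pad formulas; the sketch below gives the standard symmetrical T-pad values for a 50 Ω system (note that the ladder of Fig. 5.2.24 combines several sections which load each other, so its element values differ from these single-section results):

```python
# Standard symmetrical T-pad for a matched attenuator of voltage ratio K in a Z0 system.
def t_pad(Z0, K):
    """Return (series arm, shunt arm) resistances; both ports matched to Z0."""
    R_series = Z0 * (K - 1) / (K + 1)
    R_shunt  = 2 * Z0 * K / (K * K - 1)
    return R_series, R_shunt

print(t_pad(50.0, 2.0))   # (16.67, 66.67) ohm for a /2 (6 dB) pad
print(t_pad(50.0, 4.0))   # (30.0, 26.67) ohm for a /4 (12 dB) pad
```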
In addition to the discrete step attenuation, oscilloscopes, as well as other high
speed instruments, often need a continuously variable attenuation (or gain), although
within a restricted range (a range of 0.3 to 1 is often enough). A passive potentiometer
is, of course, an obvious solution and it was used extensively in early days. However,
this potentiometer is usually placed somewhere in the middle of the amplifier and its
control shaft has to be brought to the instrument front panel, which can often be a
mechanical nightmare. Also, its variable impedance causes the bandwidth to vary and
this is a very undesirable property. An electronically controlled amplifier gain with
constant bandwidth would therefore be welcome. We will examine such circuits at the
end of Sec. 5.4.


5.3 High Speed Operational Amplifiers

From about 1980 we have been witnessing both the development of a radically
different operational amplifier topology and a major improvement in complementary
semiconductor devices’ technology, resulting in a steep rise in performance. At the
same time, the market’s hunger for higher bandwidth has been met by a massive
production increase, so that the prices have remained fairly low. With the
accompanying development in digital technology, both in terms of switching speed
and circuit complexity, the techniques which have previously been forbiddingly
expensive and too demanding to realize suddenly became feasible and within reach.
At the turn of the century opamps with a unity-gain bandwidth product of
about 1 GHz or more (such as the Burr–Brown OPA-640) became available at a price
comparable to that of a couple of good discrete high frequency transistors. Add to this
a relatively good DC performance and, using surface mounting devices, a circuit area
of ~1 cm², a low power consumption, and a noise level comparable to the thermal
noise of a 100 Ω resistor, and the advantages are obvious.
Clearly, we have come a long way from the ubiquitous µA741.
However, in order to better evaluate the performance and the design
requirements of the new devices we shall first expose the weak points of the classical
configuration.

5.3.1 The Classical Opamp

The name ‘operational amplifier’ springs from the analog computer era, in
which amplifier blocks could be combined with passive components to perform
various mathematical operations, from simple signal addition and subtraction to
integration and differentiation.
The main performance limitation of early IC opamps was imposed by the
integration technology itself: whilst fairly good NPN transistors could be easily
produced, PNP transistors could be made along with NPN ones only as very slow,
‘lateral’ structures. This has restricted their use to only those parts of the opamp,
where the needed bandwidth, gain, and load could be low. In practical terms, the only
such place in a typical opamp is the so called ‘middle stage level translator’, as shown
in Fig. 5.3.1. Even so, the opamp open loop bandwidth was almost always below 100
Hz, mainly owing to the Miller effect and the need to provide enough phase margin at
low closed loop gain to ensure unconditional system stability.
On the other hand, for general purpose applications, the important parameter
was a high negative feedback factor, used to minimize the circuit performance
variations owed to the transistor parameters (which in the early days were difficult to
control) and instead rely on passive components which could be easily produced with
a relatively tight tolerance. It was this ability to deliver predictable performance by a
simple choice of two feedback resistors which made the opamp a popular and widely
used circuit component. And not only a well defined gain, but also a broad range of
other signal conditioning functions is made possible by combining various passive
and active components in the feedback loop.


The feedback concept is so simple and works so well that too many people
take it for granted; and equally many are surprised to discover that it can cause as
much trouble as the solutions it offers.

Fig. 5.3.1: The classical opamp, simplified. The input differential pair Q1 and Q2
subtract the feedback from the input signal, driving with the difference the ‘level
translator’ stage Q3, which in turn drives the output emitter follower Q4, which
provides a low output impedance. The feedback voltage is derived from the output
voltage v_o by dividing it as β = R_e/(R_e + R_f). If the opamp open loop gain A_o is
much higher than 1/β, the closed loop gain is A_cl ≈ 1/β (see text).

In the (highly simplified) opamp circuit in Fig. 5.3.1 the open loop gain is
equal to the gain of the differential pair Q1 and Q2, multiplied by the gain of the level
translator Q3. The output emitter follower Q4 has unity voltage gain. However, all
three stages have frequency dependent gain, as was explained in Part 3.
Fortunately, the three poles are far apart (all are real) and the poles of the first and the
third stage can be (and usually are) easily set high enough for the amplifier open loop
frequency response to be dominated by the second stage pole (which was in turn
named ‘the dominant pole’).
The dominant pole of the circuit in Fig. 5.3.1 is set by the Q1 collector resistor
R_c and the Miller capacitance C_M:

    ω₀ = −1/(R_c C_M)                                                (5.3.1)
where we have neglected the input resistance of Q3 in parallel with R_c, which we can
do if R_c is small.
C_M appears effectively in parallel with R_c and its value is equal to the Q3
collector-to-base capacitance C_cb, multiplied by the Q3 gain:

    C_M = C_cb (1 + A₃)                                              (5.3.2)

The gain A₃ is set by the Q3 transconductance and the loading resistance, in
the form of the equivalent (shunt) input resistance at the base of Q4:

    A₃ ≈ g_m3 R_b4 = (q_e I₂ / k_B T) R_b4                           (5.3.3)


The equivalent input resistance of Q4 is approximately equal to the amplifier
loading resistance reflected into the Q4 base:

    R_b4 = R_L (1 + β₄)                                              (5.3.4)

where β₄ is the Q4 current gain.


The gain of the input differential pair is set by the Q1 collector load resistor
R_c and the transconductance of both Q1 and Q2 (again neglecting the Q3 input
resistance):

    A₁ ≈ (g_m1 + g_m2) R_c = (q_e I₁ / k_B T) R_c                    (5.3.5)

where we have assumed both transconductances to be equal, owing to the current I₁
being equally divided into I_c1 and I_c2.
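The Miller multiplication of Eq. 5.3.2 is worth a numeric feel; the values below are the illustrative ones used later in Sec. 5.3.2, not data for a specific device:

```python
# Miller capacitance C_M = C_cb * (1 + A3) seen at the Q3 base node.
def miller_C(C_cb, A3):
    return C_cb * (1 + A3)

C_cb = 4e-12   # F, collector-to-base capacitance (illustrative)
A3   = 50      # second-stage voltage gain (illustrative)
print(miller_C(C_cb, A3) * 1e12)   # 204 pF -- the 'about 200 pF' quoted in Sec. 5.3.2
```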
As a result the open loop transfer function can be written as:

    A(s) = A₀ (−ω₀)/(s − ω₀) = A₁ A₃ (−ω₀)/(s − ω₀)                  (5.3.6)
Now we can derive the closed loop transfer function. By considering that the
feedback factor β is set by the feedback resistive divider:

    β = R_e/(R_e + R_f)                                              (5.3.7)

the voltage at the inverting input is equal to the output voltage multiplied by the
feedback factor:

    v_i = v_o β                                                      (5.3.8)

The signal being amplified is the difference between the source voltage and
the voltage provided by feedback:

    Δv = v_s − v_i                                                   (5.3.9)

This voltage is amplified by the amplifier open loop transfer function, A(s), to
give the output voltage:

    v_o = A(s) Δv                                                    (5.3.10)

By considering Eq. 5.3.6 and 5.3.9, we can write:

    v_o = A₀ (−ω₀)/(s − ω₀) · (v_s − v_i)                            (5.3.11)

and, since v_i is a feedback-scaled v_o:

    v_o = A₀ (−ω₀)/(s − ω₀) · (v_s − v_o β)                          (5.3.12)

By rearranging this into:

    v_o [1 + β A₀ (−ω₀)/(s − ω₀)] = A₀ (−ω₀)/(s − ω₀) · v_s          (5.3.13)


we can obtain the explicit expression for v_o:

    v_o = [A₀ (−ω₀)/(s − ω₀)] / [1 + β A₀ (−ω₀)/(s − ω₀)] · v_s      (5.3.14)

or:

    v_o = v_s / [β + (s − ω₀)/((−ω₀) A₀)]                            (5.3.15)

From this last expression it is obvious that if the open loop gain A₀ is very
high the amplifier gain v_o/v_s is reduced to the familiar 1/β, or (R_f + R_e)/R_e.
Likewise, for a finite value of A₀, the frequency dependent part increases, thus
lowering the closed loop gain at higher frequencies.
Take, for example, the µA741 opamp, Fig. 5.3.2: owing to its dominant pole,
the open loop cutoff frequency is at about 10 Hz, whilst the open loop gain at DC is
about 10⁵. The unity gain crossover frequency f₁ is therefore about 1 MHz.
Fig. 5.3.2: A typical opamp open loop gain and phase compared to the closed loop
gain. The dashed lines show the influence of a secondary pole (usually the input
differential stage pole), which, for stability requirements, must be set at or above the
unity gain transition frequency, f₁ = 1 MHz. f_h is the closed loop cutoff frequency.

For a closed loop gain of 10, β = 0.1; since the frequency dependence term is
a ratio, the factor 2π can be extracted and canceled, leaving f₀/(jf + f₀), where f₀ is
the open loop cutoff frequency. By putting this into Eq. 5.3.15, we see that the
amplifier will be making corrections to its own non-linearities by a factor of 10⁴ (80 dB)
at 1 Hz, but only by a factor of 10² (40 dB) at 1 kHz; and at 100 kHz there would be

only 3 dB of feedback, resulting in a 50% gain error. This means that for a source
signal of 0.1 V there would be a Δv of 0.05 V, resulting in an output voltage of
v_o = 0.5 V (instead of the 1 V as at low frequencies). Above the closed loop cutoff
frequency the amplifier has practically no feedback at all.
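The shrinking feedback depth can be tabulated directly from the single-pole model (A₀ = 10⁵, f₀ = 10 Hz, β = 0.1, the µA741-like values used above):

```python
# Feedback depth vs. frequency for a dominant-pole opamp.
A0, f0, beta = 1e5, 10.0, 0.1

def loop_gain_mag(f):
    """|beta * A(f)| for a single-pole open-loop response A(f) = A0*f0/(jf + f0)."""
    return beta * A0 * f0 / (f * f + f0 * f0) ** 0.5

print(loop_gain_mag(1.0))    # ~1e4 -> ~80 dB of feedback at 1 Hz
print(loop_gain_mag(1e3))    # ~1e2 -> ~40 dB at 1 kHz
print(loop_gain_mag(1e5))    # ~1   -> |1 + j*betaA| = sqrt(2), i.e. only ~3 dB
```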
An additional error is owed to the phase shift: at 100 kHz a single pole
amplifier would have its output at a 90° phase lag with respect to the input. An amplifier with
an additional input differential stage pole at 1 MHz would shift the phase by 135°, so
there would be only a 45° phase margin at this frequency and the circuit would be
practically at the edge of closed loop stability. If we were to need this amplifier to
drive a 2 m long coaxial cable (capacitance 200 pF), then, considering the amplifier
output impedance of about 75 Ω, the additional phase shift of 5° would be enough to
turn the amplifier into a high frequency oscillator.

5.3.2 Slew Rate Limiting

The discussion so far is valid for small signal amplification. For large
signals the bandwidth would be much lower than the small signal one. This is owed to
the Miller capacitance causing Q3 to act as an integrator. For a positive input step
larger than 2 k_B T/q_e (plus I₁ R_e1 if the input differential pair has emitter degeneration
resistors), the transistor Q1 will be fully open, while Q2 will be fully closed.
Therefore, the maximum current available to charge C_M will be equal to the tail
current I₁. The voltage across C_M will increase linearly until the input differential
stage comes out of saturation. Consequently, the slew rate limit is:

    SR = dv/dt = I₁/C_M                                              (5.3.16)

Usually I₁ is of the order of 100 µA (or even lower if low noise is the main design
goal). Also, owing to the gain of Q3 the Miller capacitance C_M can be large; say, with
C_cb = 4 pF and A₃ = 50, C_M will be about 200 pF, giving a slew rate SR = 0.5 V/µs.
We know that for a sine wave the maximum slope occurs at the zero crossing, where the
derivative is dv/dt = d(V_p sin ωt)/dt = ω V_p cos ωt; at the zero crossing t = 0 and
cos(0) = 1, so the slew rate equation can be written as:

    SR = ω V_p = I₁/C_M                                              (5.3.17)

For a supply voltage of ±15 V, the signal amplitude just before clipping would
probably be around 12 V, so the maximal full power sine wave frequency would be
f_max = I₁/(2π C_M V_p), or approximately 6.5 kHz only!
The frequency at which the sine wave becomes a linear ramp, with a nearly
equal peak amplitude, is slightly higher: f_r = 1/(4 t_r) = SR/(4 V_p) = 10 kHz (note that
the SR of the circuit in Fig. 5.3.1 is not symmetrical, since C_M is charged by I₁ and
discharged through R_c; in an actual opamp circuit, such as in Fig. 5.3.3, R_c is
replaced by a current mirror, driven by the collector current of Q2, giving a
symmetrical slew rate).
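The slew-rate arithmetic of Eqs. 5.3.16 and 5.3.17 with the values quoted above:

```python
# Slew-rate numbers with I1 = 100 uA, C_M = 200 pF, V_p = 12 V.
from math import pi

I1, C_M, V_p = 100e-6, 200e-12, 12.0

SR    = I1 / C_M                 # V/s
f_max = SR / (2 * pi * V_p)      # full-power sine bandwidth
f_r   = SR / (4 * V_p)           # frequency where the sine degenerates into a ramp

print(SR / 1e6)   # 0.5 V/us
print(f_max)      # ~6.6 kHz
print(f_r)        # ~10.4 kHz
```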


Fig. 5.3.3: Simplified conventional opamp circuit with the current mirror as the active
load to Q1. The second stage is modeled as a Miller integrator with large gain. This
circuit exhibits symmetrical slew rate limiting. The dominant pole ω₀ = g_m/C_M, where
g_m is the differential amplifier’s transconductance and C_M = C_cb (1 + A₃).

5.3.3 Current Feedback Amplifiers

The circuit in Fig. 5.3.1 could be characterized as a ‘voltage feedback’
amplifier, and in the previous analysis we have shown its most important performance
limitations. The circuit in Fig. 5.3.4, instead, is characterized as a ‘current feedback’
amplifier, since the feedback signal is in the form of a current, which, as will become
evident soon, offers several distinct advantages.

Fig. 5.3.4: Current feedback opamp, derived from the voltage feedback opamp (Fig. 5.3.1):
we first eliminate Q2 from the input differential amplifier and introduce the feedback into
the Q1 emitter (low impedance!). Next, we load the Q1 collector by a diode-connected Q2,
forming a current mirror with Q3. Finally, we use very low values for R_f and R_e. The
improvements in terms of speed are two: first, for large signals, the current available for
charging C_cb is almost equal to the feedback current i_fb, eliminating slew rate limiting;
second, C_cb is effectively grounded by the low impedance of Q2, thus avoiding the Miller
effect. A disadvantage is that the voltage gain is provided by Q3 alone, so the loop gain is
lower. Nevertheless, high frequency distortion can be lower than in classical opamps,
because, for the equivalent semiconductor technology, the dominant pole is at least two
decades higher, providing more loop gain for error correction at high frequencies.


The amplifier in Fig. 5.3.4 would still run into slew rate limiting for high
amplitude signals, owing to the fixed bias of the first stage current source I₁. This is
avoided by using a complementary symmetry configuration, as shown in Fig. 5.3.5. Of
course, the complementary symmetry can be used throughout the amplifier, not just in
the first stage.

Fig. 5.3.5: A fully complementary current feedback amplifier model. It consists of four parts:
transistors Q1–4 form a unity gain buffer, the same as Q9–12, with the four current sources
providing bias; Q5,7 and Q6,8 form two current mirrors. In contrast to the voltage feedback
circuit, both of whose inputs are of high impedance, the inverting input of the current feedback
amplifier is a low impedance output of the first buffer. The current flowing in or out of the
emitters of Q3,4 is (nearly) equal to the current at the Q3,4 collectors. This current is reflected
by the mirrors and converted into a voltage at the Q7,8 collectors, driving the output unity gain
buffer. The circuit stability is ensured by the transimpedance Z_T, which can be modeled as a
parallel connection of a capacitor C_T and resistor R_T. The closed loop bandwidth is set by R_f
and the gain by R_e (the analysis is presented later in the text). One of the first amplifiers of
this kind was the Comlinear CLC400.

Perhaps it would be more correct to label the structure in Fig. 5.3.5 as a
‘current on demand’ type of amplifier, owing to the fact that the feedback current,
which is proportional to the input–output error, feeds the dominant pole capacitance.
The larger the error, the larger the current, which practically eliminates the slew rate
limiting. The slew rate will nevertheless be limited by secondary effects, which
will be discussed later, but the maximum current charging C_T is usually much greater
than in conventional amplifiers. Also, C_T is small (not affected by the Miller effect).
Another name often found in the literature is the ‘transimpedance amplifier’,
after the transimpedance equation (Z_T, see the analysis below). For historical
reasons, we shall keep the name used in the section title.
The complementary symmetry nature of the circuit in Fig. 5.3.5 would have
been difficult to realize with the available opamp integration technology of the late
1960s and early 1970s, owing to the different characteristics of PNP and NPN
transistors. A major technological breakthrough, made between 1980 and 1985 at


Comlinear Corporation and Elantec, later followed by Analog Devices, Burr–Brown


and others, enabled PNP transistors to have their f_T almost as high as that of the NPN.
Fig. 5.3.6 shows a typical chip cross-section and Table 5.3.1 presents the typical
values of the most important production parameters improving over the years.

Fig. 5.3.6: Cross-section of the Complementary-Bipolar process.

Process (yr)   VIP1 (1986)    VIP2 (1988)    VIP3 (1994)    VIP10 (2000)   Units
Parameter      NPN    PNP     NPN    PNP     NPN    PNP     NPN    PNP
i_c/i_b        250    150     250    80      150    60      100    55       —
Early V_A      200    60      150    40      150    50      120    40       V
f_T            0.4    0.2     0.8    0.5     3      2.5     9      8        GHz
C_js           2.0    2.2     1.5    1.8     0.5    0.8     0.005  0.007    pF
E width           15             11             2              1            µm
Area           20000          18000           2400           300            µm²
V_ce max          36             36             32             12           V

Table 5.3.1: Typical production parameters of the Complementary Bipolar process [Ref. 5.33]

Although the same technology is now used also for conventional voltage
feedback amplifiers, the current feedback structure offers further advantages which
result in improved circuit bandwidth, as evidenced by the following discussion.
The stability of the amplifier in Fig. 5.3.5 is ensured by the so called
transimpedance network, Z_T, which can be modeled as a parallel R_T C_T network.
Note that the compensating capacitor, C_T (consisting of four parts, C_T1–C_T4), is
effectively grounded, as can be seen in Fig. 5.3.7, since the C_cb of Q9,10 are tied to the
supply voltages directly, whilst the C_cb of Q7,8 are tied to the supply by the low
impedance C–E path of Q5,6.


Fig. 5.3.7: The capacitance C_T consists of four components, all effectively grounded.


Therefore in this configuration the Miller effect is substantially eliminated.
This means that, for the same driving current, this circuit is capable of much higher
bandwidth, compared to the conventional opamp.
Also, owing to the two current mirrors, the current which charges and
discharges C_T is equal to the current injected by the feedback network into the
inverting input (the first buffer output). Since this current is feedback derived, it is
proportional to the input–output error; thus for a fast input voltage step there would
initially be a large input–output error, causing a large current into the inverting input,
and an equally large current will charge C_T, so its voltage, and consequently the
output voltage, will increase fast. As the output voltage increases, the error is
reduced, lowering the current.
To analyze the circuit operation we shall at first assume ideal buffers and
current mirrors, as modeled in Fig. 5.3.8. Later, we shall see how the real circuit
parameters limit the performance.

Fig. 5.3.8: Current feedback amplifier model used for the analysis.

Imagine for a moment that R_f is taken out of the circuit. Essentially this would
be an open loop configuration, the gain of which can be expressed by the ratio of two
resistors, R_T/R_e. The gain is provided by the current mirrors M1,2; their output
currents are summed, so the two mirrors behave like a single stage; consequently the
gain value, compared with that of a conventional opamp, is relatively low (in practice,
maximum gains between 60 and 80 dB are common). It is important to note, however,
that the open loop (voltage) gain does not play such an important role in current
feedback amplifiers. As the name implies, it is more important how the feedback
current is processed.
Let us now examine a different situation: we put back R_f and disconnect R_e.
If there were to be any voltage difference between the outputs of the two buffers, a
current would be forced through R_f, increasing the output current of the first buffer,
A₁. The two current mirrors would reflect this onto the input of the second buffer, A₂,
in order to null the output voltage difference. In other words, the output of the first
buffer A₁ represents an inverting current mode input of the whole system.
If we now reconnect R_e it is clear that the A₁ output must now deliver an
additional current, i_e, flowing to ground. The current increase is reflected by the
mirrors into a higher v_T, so the output voltage v_o would increase, forcing the current


i_f (through R_f) into the A_1 output. Looking from the A_1 output, i_e flows in the
direction opposite to i_f, so the total current i_fb of the A_1 output will be equal to their
difference. Thus with R_e and R_f a classical feedback divider network is formed, but
the feedback signal is a current. As expected, the output of A_2 must now become
(R_f + R_e)/R_e times higher than the output of A_1 to balance the feedback loop.
Deriving the circuit equations is simple. The transimpedance equation
(assuming an ideal unity gain buffer A_2, thus v_T = v_o) is:

    v_o = Z_T i_fb    (5.3.18)

The feedback current (assuming an ideal unity gain buffer A_1, thus v_fb = v_s) is:

    i_fb = v_s/R_e − (v_o − v_s)/R_f = v_s (1/R_e + 1/R_f) − v_o/R_f    (5.3.19)

The closed loop gain (from both equations above) is:

    v_o/v_s = (1 + R_f/R_e) · 1/(1 + R_f/Z_T)    (5.3.20)
We see that the equation for the closed loop gain has two terms, the first one
resulting from the feedback network divider and the second one containing the
transimpedance Z_T and R_f, but not R_e! This is in contrast to what we are used to in
conventional opamps.
If we now replace Z_T by its equivalent network, 1/(s C_T + 1/R_T), then the
closed loop gain Eq. 5.3.20 can be rewritten as:

    v_o/v_s = (1 + R_f/R_e) · 1/(1 + R_f/R_T + s C_T R_f)    (5.3.21)
We can rewrite this to reveal the system’s pole, in the way we are used to:

    v_o/v_s = [(1 + R_f/R_e)/(1 + R_f/R_T)] · [(1 + R_f/R_T)/(C_T R_f)] / [s + (1 + R_f/R_T)/(C_T R_f)]    (5.3.22)

By comparing this with the general single pole system transfer function:

    F(s) = A_0 (−s_1)/(s − s_1)    (5.3.23)

we note that the term:

    s_1 = −(1/(C_T R_f)) (1 + R_f/R_T)    (5.3.24)

is the closed loop pole, which sets the closed loop cutoff frequency: f_h = |s_1|/2π.


We also note that the system closed loop gain is:

    A_0 = (1 + R_f/R_e) / (1 + R_f/R_T)    (5.3.25)

Since R_f is normally much smaller than R_T, the term R_T/R_f represents the
open loop gain, so R_f/R_T represents the closed loop gain error (in analogy with the
finite open loop gain error at DC in classical amplifiers).
If, for example, we have an amplifier with R_T = 300 kΩ and C_T = 2 pF and
we form the feedback with R_f = 330 Ω and R_e = 110 Ω, the amplifier would have a
closed loop bandwidth of about 240 MHz and a gain of 4. Moreover, its loop gain
would be R_T (R_f + R_e)/(R_f R_e) ≈ 3600, and flat up to some 265 kHz, more than three
orders of magnitude higher than the usual 100 Hz in conventional voltage feedback
operational amplifiers.
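The numbers quoted in this example follow directly from the ideal model of Eq. 5.3.20–5.3.25; a short numerical check (a Python sketch, using only the component values given above) reproduces them:

```python
from math import pi

# Recompute the worked example with the ideal CFA model
# (Eq. 5.3.20 - 5.3.25); all values are those quoted in the text.
RT = 300e3   # transimpedance resistance R_T [ohm]
CT = 2e-12   # transcapacitance C_T [F]
Rf = 330.0   # feedback resistor R_f [ohm]
Re = 110.0   # gain-setting resistor R_e [ohm]

Acl = 1 + Rf / Re                       # closed loop gain
s1 = (1 + Rf / RT) / (CT * Rf)          # |s_1|, Eq. 5.3.24
fh = s1 / (2 * pi)                      # closed loop cutoff f_h
loop_gain = RT * (Rf + Re) / (Rf * Re)  # low frequency loop gain
fT = 1 / (2 * pi * RT * CT)             # open loop transimpedance pole

print(f"Acl       = {Acl:.0f}")          # 4
print(f"fh        = {fh/1e6:.0f} MHz")   # ~241 MHz
print(f"loop gain = {loop_gain:.0f}")    # ~3636
print(f"fT        = {fT/1e3:.0f} kHz")   # ~265 kHz
```

Note that since R_f ≪ R_T, the bandwidth is essentially 1/(2π C_T R_f), set by R_f alone.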
Thus, we find that the closed loop bandwidth depends mainly on R_f (but not
on R_e), whilst the gain, once R_f has been chosen, can be set by R_e alone. With
current feedback the amplifier designer has independent control over the two most
important circuit parameters and must only watch for possible second-order effects.
The benefits of the current feedback amplifier are all due to two main points:
a) since there is only one voltage gain stage (the two current mirrors, working
effectively in parallel) and only one internal high impedance node (Z_T),
this structure is inherently a single pole system (since the integration
technology allows the poles of both buffers to be much higher). As a
consequence, the system may always be made unconditionally stable,
whilst not compromising the available system bandwidth;
b) since the feedback is entered in the form of a current the system bandwidth
depends on R_f but not on R_e, so that if R_f is held constant while R_e is
varied, the bandwidth remains (almost) independent of gain. As a bonus
there is practically no slew rate limiting mechanism, because the feedback
current drives a grounded C_T and the larger the input voltage step, the
larger will be the current charging C_T. So the amplifier’s step response
will always look like that of a low pass R_f C_T network for any signal
amplitude up to the clipping level.

It might be interesting to return to Eq. 5.3.19 with the result of Eq. 5.3.21 and
express the current which charges C_T as a function of the input voltage:

    i_CT ≈ i_fb = v_s (1/R_f + 1/R_e) [1 − 1/(1 + R_f/R_T + s C_T R_f)]    (5.3.26)

Therefore we can express i_CT (see the transient response in Fig. 5.3.9) as:

    i_CT ≈ i_fb = v_s (1/R_f + 1/R_e) · s / [s + (1/(C_T R_f))(1 + R_f/R_T)]    (5.3.27)

Fig. 5.3.9: ‘Current on demand’: The step response reveals that the feedback
current is proportional to the difference between the input and output voltage,
essentially a high pass version of the output voltage, as shown by Eq. 5.3.27.
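The ‘current on demand’ behavior is easy to evaluate in the time domain: for a voltage step v_s, the inverse transform of Eq. 5.3.27 gives a feedback current which jumps to v_s(1/R_f + 1/R_e) and then decays exponentially with the closed loop time constant while the output settles. A sketch, using the same illustrative component values as in the previous example:

```python
from math import exp

# Step response sketch of Eq. 5.3.27: the feedback (CT-charging)
# current is an exponentially decaying pulse. Illustrative values.
RT, CT = 300e3, 2e-12
Rf, Re = 330.0, 110.0
vs = 0.1   # 100 mV input step

s1 = (1 + Rf / RT) / (CT * Rf)   # |closed loop pole| [rad/s], Eq. 5.3.24
tau = 1 / s1                     # closed loop time constant (~0.66 ns)
i0 = vs * (1 / Rf + 1 / Re)      # initial (peak) feedback current

for t_ns in (0.0, 0.5, 1.0, 2.0, 4.0):
    t = t_ns * 1e-9
    i_fb = i0 * exp(-t / tau)                        # decaying current
    v_o = (1 + Rf / Re) * vs * (1 - exp(-t / tau))   # settling output
    print(f"t = {t_ns:3.1f} ns   i_fb = {i_fb*1e3:6.3f} mA   vo = {v_o:5.3f} V")
```

The larger the input step, the larger the initial current, which is why there is no slew rate limiting in the classical sense.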

In Fig. 5.3.10 we compare the cut off frequency vs. gain of a voltage feedback
and a current feedback amplifier. The voltage feedback amplifier bandwidth is
inversely proportional to gain; in contrast, the current feedback amplifier bandwidth
is, in principle, independent of gain. This property makes current feedback amplifiers
ideal for wideband programmable gain applications.

Fig. 5.3.10: Comparison of closed loop cut off frequency vs. gain of a conventional and a
current feedback amplifier. The conventional amplifier has a constant GBW product (higher
gain, lower cut off). But the current feedback cut off frequency is (almost) independent of gain.

Of course, a real amplifier will have some second-order effects, which we
have not discussed so far, so its performance will be somewhat less than ideal.


The main causes for non-ideal performance are:


• the small but finite inverting input resistance (non-zero output impedance of
A_1 in Fig. 5.3.8), which causes a voltage error between the non-inverting and
the inverting input and, consequently, a lower feedback current through R_f;
• the non-zero output impedance of A_2, which, combined with the output load,
forms an additional feedback divider;
• the asymmetry and non-linearity of the two current mirrors, which directly
influences the transfer function;
• the finite current gain of the output stage A_2, which, if too low, would allow
the amplifier load to be reflected at the input of A_2 and influence Z_T;
• the secondary poles at high frequencies, owed to the finite bandwidth of the
transistors within the amplifier.

The last four points are equally well known from conventional amplifiers and
their influence is straightforward and easy to understand, so we shall not discuss them
any further. The first point, however, deserves some attention.

5.3.4 Influence of a Finite Inverting Input Resistance

The current feedback amplifier requires a low (ideally zero) impedance at the
inverting input in order to sense the feedback current correctly. This, in addition to the
manufacturing imperfections between the transistors Q_1–Q_4 (Fig. 5.3.11), results in a
relatively high input offset, owed to both DC voltage errors and current errors.
The offset is reduced by using the current mirror technique for the biasing
current sources (Q_a–Q_d), making the currents of Q_1,2 equal. Further reduction is
achieved by adding low value resistors, R_1–R_4 (a value of about 10 r_e is usually
sufficient), to the Q_1–Q_4 emitters. This, however, increases the inverting input resistance.


Fig. 5.3.11: The resistors R_1–R_4 used to balance the inputs are the cause for the non-zero
inverting input resistance, R_3||R_4, modeled by R_o; this causes an additional voltage drop,
reducing the feedback current (see analysis).


A typical circuit of the first buffer implementing DC offset reduction by
current mirror biasing and using emitter degeneration resistors is shown in Fig. 5.3.11;
the equivalent inverting input resistance R_3||R_4 is modeled by R_o. It causes an
additional voltage drop which reduces the feedback current.
For the analysis we assume that the buffer has a unity gain, thus v_o1 ≈ v_s.
Since the feedback current flows through R_o, the voltage at the inverting input, v_fb,
will be lower than v_s by i_fb R_o:

    v_fb = v_s − i_fb R_o    (5.3.28)

By summing the currents at the v_fb node we have:

    (v_o − v_fb)/R_f = v_fb/R_e − (v_s − v_fb)/R_o    (5.3.29)

Note that the last term in this equation is the feedback current from Eq. 5.3.28:

    i_fb = (v_s − v_fb)/R_o    (5.3.30)

and from the transimpedance equation Eq. 5.3.18 the feedback current required to
produce the output voltage v_o is:

    i_fb = v_o/Z_T    (5.3.31)
^T
By substituting v_fb and i_fb in Eq. 5.3.29, we obtain the transfer function:

    v_o/v_s = (1 + R_f/R_e) / [1 + (1 + R_f/R_e) R_o/Z_T + R_f/Z_T]    (5.3.32)

If we express Z_T by its components:

    Z_T = 1 / (1/R_T + s C_T)    (5.3.33)

we obtain:

    v_o/v_s = (1 + R_f/R_e) / {1 + (1 + R_f/R_e) R_o/R_T + R_f/R_T + s C_T [R_f + R_o (1 + R_f/R_e)]}    (5.3.34)

which we re-order in the usual way to separate the DC gain from the frequency
dependent part:


    v_o/v_s = {(1 + R_f/R_e) / [1 + (1 + R_f/R_e) R_o/R_T + R_f/R_T]} ·
              {[1 + (1 + R_f/R_e) R_o/R_T + R_f/R_T] / (C_T [R_f + R_o (1 + R_f/R_e)])} /
              {s + [1 + (1 + R_f/R_e) R_o/R_T + R_f/R_T] / (C_T [R_f + R_o (1 + R_f/R_e)])}    (5.3.35)

Again, a comparison with the general normalized first-order transfer function:

    F(s) = A_0 (−s_1)/(s − s_1)    (5.3.36)

reveals the DC gain:

    A_0 = (1 + R_f/R_e) / [1 + (1 + R_f/R_e) R_o/R_T + R_f/R_T]    (5.3.37)

and the pole:

    s_1 = −[1 + (1 + R_f/R_e) R_o/R_T + R_f/R_T] / (C_T [R_f + R_o (1 + R_f/R_e)])    (5.3.38)

The bandwidth is calculated from the pole s_1:

    f_h = |s_1| / 2π    (5.3.39)

The DC closed loop gain A_0 contains the desired gain A_cl:

    A_cl = 1 + R_f/R_e    (5.3.40)

and an error term ε:

    ε = A_cl R_o/R_T + R_f/R_T    (5.3.41)

which is small since R_T is usually 100 kΩ or higher, whilst R_o is between 5 and 50 Ω
and R_f is between 100 Ω and 1 kΩ; thus, even if A_cl is 100 or more, ε rarely exceeds
10⁻³. The transfer function can therefore be expressed as:

    v_o/v_s = [A_cl/(1 + ε)] · [(1 + ε)/(C_T (R_f + R_o A_cl))] / [s + (1 + ε)/(C_T (R_f + R_o A_cl))]    (5.3.42)


Whilst the gain error ε is small, the bandwidth error can be rather high at high
gain; e.g., if R_f = 1 kΩ and R_o = 10 Ω, with A_cl = 100, the time constant would
double and the bandwidth would be halved. In Fig. 5.3.12 we have plotted the
bandwidth reduction as a function of gain for a typical current feedback amplifier.
Although not constant as in theory, the bandwidth is reduced far less than in voltage
feedback opamps (about 50× less for a gain of 100).

Fig. 5.3.12: The bandwidth of an actual current feedback amplifier is gain dependent, owing to
a small but finite inverting input resistance R_o. The nominal bandwidth of 100 MHz at unity gain
is reduced to 90 MHz at the gain of 10 and to only 50 MHz at the gain of 100 if R_o is 10 Ω.
Nevertheless, this is still much better than in voltage feedback amplifiers.

From these relations we conclude two things: first, both the actual closed loop
gain and bandwidth are affected by the desired closed loop gain A_cl = 1 + R_f/R_e;
second and more important, for a given R_o, we can reduce R_f by R_o A_cl and
recalculate the required R_e to arrive at slightly modified values which preserve both
the desired gain and bandwidth!
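Both effects can be checked numerically from the pole of Eq. 5.3.38. The sketch below uses the component values quoted for Fig. 5.3.12 and then applies the suggested correction (reducing R_f by R_o A_cl and recomputing R_e); a gain of 10 is used for the corrected case, since at A_cl = 100 with these values the correction would drive R_f to zero:

```python
from math import pi

# Bandwidth vs. gain with a finite inverting input resistance Ro
# (Eq. 5.3.38 - 5.3.39); values are those quoted for Fig. 5.3.12.
RT, CT = 300e3, 1.6e-12
Rf, Ro = 1e3, 10.0

def fh(Acl, Rf, Ro):
    """Closed loop cutoff from the pole of Eq. 5.3.38 (Acl = 1 + Rf/Re)."""
    num = 1 + Acl * Ro / RT + Rf / RT
    den = CT * (Rf + Ro * Acl)
    return num / den / (2 * pi)

for Acl in (1, 10, 100):
    print(f"Acl = {Acl:4d}   fh = {fh(Acl, Rf, Ro)/1e6:6.1f} MHz")

# Correction: reduce Rf by Ro*Acl, recompute Re = Rf'/(Acl - 1);
# the time constant CT*(Rf' + Ro*Acl) then returns to CT*Rf.
Acl = 10
Rf_c = Rf - Ro * Acl        # 900 ohm
Re_c = Rf_c / (Acl - 1)     # 100 ohm
print(f"corrected: Rf = {Rf_c:.0f} ohm, Re = {Re_c:.0f} ohm, "
      f"fh = {fh(Acl, Rf_c, Ro)/1e6:6.1f} MHz")
```

The corrected values restore (nearly) the unity gain bandwidth while keeping A_cl = 10.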
Note that in the above analysis we have assumed a purely resistive feedback
path; an additional influence of R_o will show up when we consider the effect of
stray capacitances in the following section.

5.3.5 Noise Gain and Amplifier Stability Analysis

A classical voltage feedback unity gain compensated amplifier (for which any
secondary pole lies above the open loop unity gain crossover) usually remains stable if
a capacitor C_f is added in parallel to R_f, as in Fig. 5.3.13a. Because C_f lowers the
bandwidth, it is often used to prevent problems at and above the closed loop cutoff. In
contrast, a capacitor C_e in parallel with R_e, as shown in Fig. 5.3.13b, would reduce the
feedback at high frequencies, leading to instability.

- 5.72 -
P.Stariè, E.Margan System synthesis and integration

A different situation is encountered with current feedback amplifiers, which
become unstable in the presence of any capacitance in parallel with either resistor of
the feedback loop. Therefore the behavior of the circuit in Fig. 5.3.13a in the case of a
current feedback amplifier is at odds with what we were used to.


Fig. 5.3.13: a) A unity gain compensated voltage feedback amplifier remains stable with
capacitive feedback, whilst in b) it is unstable. In contrast, the stability of a current feedback
amplifier is upset by either a) C_f in parallel with R_f, or b) C_e in parallel with R_e.

Before we attempt to explain the unusual sensitivity of current feedback
amplifiers to capacitively affected feedback, let us introduce an extremely useful
concept, called noise gain. In contrast with the name, the noise gain is not used just to
evaluate the circuit noise, but the circuit stability as well. It can be applied to all kinds
of amplifiers, not just current feedback ones.
The noise, generated by the amplifier input stage, undergoes the full open loop
amplification, so the input stage noise dominates over the noise of other stages.
Therefore a noisy amplifier can be modeled as a noise generator in series with the
input of a noiseless amplifier, as in Fig. 5.3.14, regardless of the actual signal
amplification topology, be it inverting or non-inverting.
Any signal within the amplifier feedback loop is processed in the same way as
the input stage noise. Thus by grounding the signal input and by analyzing the noise
gain within the amplifier and its attenuation in the feedback network, we shall be able
to predict the amplifier behavior.

Fig. 5.3.14: Noise gain definition: A noisy amplifier is modeled as a noise
generator v_N in series with the input of a noiseless amplifier. The inverting signal
gain is v_o/v_2 = −R_f/R_e; the non-inverting signal gain is v_o/v_1 = 1 + R_f/R_e;
the noise gain is v_o/v_N = −(1 + R_f/R_e). The noise generator polarity is
indicated only as a reference for the noise gain polarity inversion.

The noise gain is inverting in phase, but equal in value to the non-inverting
signal gain:

    A_N = v_o/v_N = −(1 + R_f/R_e)    (5.3.43)

For the voltage feedback amplifier the closed loop bandwidth is set by the
unity gain bandwidth frequency f_1 and the noise gain:

    f_cl = f_1/|A_N|    (5.3.44)

If the feedback network is purely resistive the noise gain is independent of
frequency; reactive components (usually capacitances) within the feedback loop will
cause the noise gain to change with frequency.
To see this, we usually draw the asymptotes of the transfer function magnitude
(absolute value) in a log|log Bode plot, with the breakpoints representing the poles and
zeros, each pole adding a slope of −20 dB/10f and each zero a +20 dB/10f. We
then approximate the phase angle at each breakpoint and in the middle of the linear
section (this is a simple and straightforward process if the breakpoints are far apart;
see an arbitrary example in Fig. 5.3.15 illustrating some most common possibilities).
From this we can evaluate the feedback loop phase margin and, consequently, the
amplifier stability.


Fig. 5.3.15: An arbitrary example of the phase angle estimated from the
gain magnitude, its slopes and various breakpoints. If two breakpoints are
relatively close the phase would not actually reach the value predicted from
the slope value, but an intermediate value instead.

In the same manner, along the amplifier open loop gain asymptotes, we draw
the noise gain, as in Fig. 5.3.16, and we look at the crossover point of these two
characteristics. Two important parameters can be derived from this plot: the first is
the amount of gain at the crossover frequency f_c; the second is the relative slope
difference between the two lines at f_c, which also serves as an indication of their
phase difference.
If the available gain at f_c is greater than unity the phase difference determines
the amplifier stability. The feedback can be considered ‘negative’ and the amplifier
operation stable if the loop phase margin is at least 45°; this means that, if a 360°
phase shift is ‘positive’, the maximum phase shift within the feedback loop must
always be less than 315° (if A(f_c) > 1). Since the inverting input provides 180°, the
total phase shift of the remaining amplifier stages (secondary poles) and the feedback
network should never exceed 135°. Note also that a phase margin of 90° or more
results in a smooth transient response; for a phase margin between 90° and 45° an
increasing amount of peaking would result.


Fig. 5.3.16: Voltage feedback amplifier noise gain is derived from the equivalent circuit noise
model (a noise generator in series with the input of a noiseless amplifier). The Bode plot shows
the relationships between the most important parameters (|A(f)| is the open loop gain
magnitude, f_0 is the dominant pole, and f_s is the secondary pole, owed to the slowest internal
stage). The inverse of the feedback attenuation β is the noise gain and it is equivalent to the
amplifier closed loop gain A_cl. Note that the noise gain is flat up to and beyond the open loop
crossover f_c, owed to the zero at 1/(2π R_f||R_e C_f). The amplifier is stable since the noise gain
crosses the open loop gain at a point where their slope difference is 20 dB/10f. If the amplifier
open loop gain were higher, the gain at the secondary pole (at f_s) would be higher than unity and
the slope difference would be 40 dB/10f. Then, the increased phase (135° at f_s and
approaching 180° above), along with the 180° of the amplifier inverting input, would make the
feedback positive (→360°) and the amplifier would oscillate.

Now, let us find the noise gain of the voltage feedback amplifier in Fig. 5.3.16.
Note that while there is some feedback available the amplifier tries to keep the
difference between the inverting and non-inverting input as small as its open loop
gain allows; so, with a high open loop gain, the input voltage difference tends to
zero (plus the DC voltage offset).
Note also that, in order to keep track of the phase inversion by the amplifier,
we have added polarity indicators to the noise generator. If the ‘+’ side of the noise
generator v_N tries to push the inverting input positive, the output voltage must go
negative to compensate it.
With C_f in parallel with R_f we shall have:

    v_o/v_N = −[1 + (R_f/R_e) · (1/(C_f R_f)) / (s + 1/(C_f R_f))]    (5.3.45)

which we can also rewrite as:

    v_o/v_N = −[s + 1/(C_f R_f) + 1/(C_f R_e)] / [s + 1/(C_f R_f)]    (5.3.46)


Here we have a pole at 1/(C_f R_f) and a zero at 1/(C_f R_f) + 1/(C_f R_e) = 1/(C_f (R_f||R_e)).
Eq. 5.3.46 is the noise gain (and also the closed loop gain A_cl; see the two distinct
breakpoint frequencies in Fig. 5.3.16). The inverse of this is the feedback attenuation β.
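The two breakpoints of Eq. 5.3.46 are easy to compute; the sketch below uses illustrative component values (not taken from the text) and shows that the ratio of the zero and pole frequencies equals the low frequency noise gain, i.e. the noise gain falls from 1 + R_f/R_e toward unity between them:

```python
from math import pi

# Noise gain breakpoints for a VFB amplifier with Cf across Rf
# (Eq. 5.3.45 - 5.3.46). Component values are illustrative only.
Rf = 1e3     # feedback resistor [ohm]
Re = 110.0   # gain-setting resistor [ohm]
Cf = 5e-12   # feedback capacitor [F]

AN0 = 1 + Rf / Re                                  # low frequency noise gain
f_pole = 1 / (2 * pi * Cf * Rf)                    # gain starts to fall here
f_zero = (1 / (2 * pi * Cf)) * (1 / Rf + 1 / Re)   # = 1/(2*pi*Cf*(Rf||Re))

print(f"A_N(0) = {AN0:.2f}")
print(f"f_pole = {f_pole/1e6:.1f} MHz")            # ~31.8 MHz
print(f"f_zero = {f_zero/1e6:.1f} MHz")            # ~321 MHz
print(f"f_zero/f_pole = {f_zero/f_pole:.2f}")      # equals A_N(0)
```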
With the open loop gain as shown in Fig. 5.3.16 the amplifier is stable, since
the noise gain crosses over the open loop gain at f_c, where the open loop and closed
loop slope difference is 20 dB/decade, and the associated phase shift is (nearly) 90°.
In addition to the 180° of the amplifier inverting input, the total phase angle is then
270°. The minimum phase margin for a stable amplifier would be 45° and here we
have 90° (360 − 270), so the feedback can still be considered ‘negative’.
However, if the open loop gain were higher (and if the poles remained at the same
frequencies) the gain at f_s (the frequency of the secondary pole) could be greater than
unity. In this case at the crossover of the noise gain and open loop gain the slope
difference would be 40 dB/10f, with the associated phase of 135° at f_s and approaching
180° above. The feedback would become ‘positive’ and, with the gain at f_s greater than
unity, the amplifier would oscillate.
In the case of the current feedback amplifier in Fig. 5.3.17 we first note that
instead of gain our Bode plot shows the feedback impedances and the amplifier
transimpedance, all as functions of frequency.
Intuitively speaking, a capacitance C_f in parallel to R_f would reduce the
impedance of the feedback network at high frequencies, thus also reducing the closed
loop gain. However, intuition is misleading us: since the current feedback system
bandwidth is inversely proportional to the feedback impedance in the f-branch (as
demonstrated by Eq. 5.3.24), the addition of C_f increases the bandwidth. By itself this
would be welcome, but note that at the crossover frequency f_c the slope difference
between the transimpedance Z_T and the ‘noise transimpedance’ (in analogy with
noise gain) is 40 dB/10f, causing a phase shift of 180°. This means that at f_c the
feedback current becomes positive and the amplifier will oscillate.

Fig. 5.3.17: For the current feedback amplifier we draw the impedances, not gain. The feedback
network impedance |Z_fb|, as seen from v_o, is slightly higher than R_f at DC (owing to R_o, the
inverting input resistance) and falls to R_o||R_e at high frequencies; its inverse (about R_f) is the
amplifier noise transimpedance, |Z_N|. The feedback network pole becomes the zero of the noise
transimpedance: s_z = 1/(C_f R_f) (f_z = |s_z|/2π); likewise, the feedback zero becomes the noise
transimpedance pole s_p = 1/(C_f (R_o||R_e)) (f_p = |s_p|/2π). At f_c, the crossover with |Z_T|, the
slope difference is 40 dB/10f and the relative phase angle is 180°; the amplifier will inevitably
oscillate, even if the secondary pole is far away and its Z_T breakpoint is well below R_f. The
dashed line is the transimpedance required for stability, realized by an R in series with C_f.


Fig. 5.3.18: The current feedback amplifier and its ‘noise transimpedance’ equivalent, v_o/i_fb.

We can find the noise transimpedance as simply as we found the noise gain
for voltage feedback amplifiers. By assuming that the feedback current is noise
generated, from the equivalent circuit in Fig. 5.3.18 we calculate the ratio of the
output voltage and the feedback current:

    v_o/i_fb = R_o + R_f (1 + R_o/R_e)    (5.3.47)

By adding a capacitance C_f in parallel with R_f the noise transimpedance becomes:

    v_o/i_fb = R_o + R_f (1 + R_o/R_e) · (1/(C_f R_f)) / (s + 1/(C_f R_f))    (5.3.48)

and this equation is represented by |Z_N| in Fig. 5.3.17.
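The same kind of numerical check for the current feedback case shows why C_f is fatal here: |Z_N| falls by roughly R_f/(R_o||R_e) between its two breakpoints, so it meets |Z_T| on a falling slope, giving the 40 dB/10f slope difference at f_c described above. The values below are illustrative, not taken from the text, and the flattening frequency uses the approximation of Fig. 5.3.17:

```python
from math import pi

# Noise transimpedance breakpoints of a CFA with Cf across Rf
# (Eq. 5.3.47 - 5.3.48, Fig. 5.3.17). Values are illustrative only.
Rf = 1e3     # feedback resistor [ohm]
Re = 110.0   # gain-setting resistor [ohm]
Ro = 10.0    # inverting input resistance [ohm]
Cf = 5e-12   # stray/feedback capacitance [F]

ZN0 = Ro + Rf * (1 + Ro / Re)       # DC value, Eq. 5.3.47
ZN_hf = Ro                          # high frequency asymptote of Eq. 5.3.48
Rpar = 1 / (1 / Ro + 1 / Re)        # Ro || Re
f_z = 1 / (2 * pi * Cf * Rf)        # |Z_N| starts falling (f_z in Fig. 5.3.17)
f_p = 1 / (2 * pi * Cf * Rpar)      # |Z_N| flattens out  (f_p in Fig. 5.3.17)

print(f"Z_N(0)  = {ZN0:.0f} ohm")   # ~1091 ohm
print(f"Z_N(hf) = {ZN_hf:.0f} ohm")
print(f"f_z = {f_z/1e6:.1f} MHz, f_p = {f_p/1e9:.2f} GHz")
```

With a transimpedance pole f_T in the kHz range, |Z_T| crosses this falling |Z_N| well before f_p, hence the oscillation.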
In most practical cases there will be stray capacitances in parallel with both R_f
and R_e, in addition between both inputs, as well as from the non-inverting input
to ground, and also from output to ground. A real world situation can be rather
complicated.
As we have shown, current feedback amplifiers are extremely sensitive to any
capacitance within the negative feedback loop. This means that whole families of
circuits (such as integrators, differentiators, some filter topologies, current amplifiers,
I to V converters, logarithmic amplifiers, etc.) cannot be realized in the same way as
with conventional amplifiers. Fortunately, there are alternative ways of performing the
same functions and some of the most common ones are shown in Fig. 5.3.19 for a
quick comparison:
• an inverting integrator can be implemented using two current feedback (CFB)
opamps, with the bonus of providing both the inverting and non-inverting
configuration within the same circuit;
• a single pole inverting filter amplifier can be implemented by exploiting the
internal capacitance C_T and the external feedback resistor R_f (useful for high
frequency cut off; for lower frequencies a high value of R_f is impractical since it
would cause a large voltage offset, owing to a large input bias current);


• for filters the Sallen–Key non-inverting configuration is recommended for use
with CFB amplifiers. This configuration can be easily cascaded (using second-
and third-order sections) to realize multi-pole high order filters, in the same way
as the ‘multiple feedback’ inverting configuration can be cascaded;
• current sources, such as some digital to analog converters, photo-diodes, etc.,
have a relatively large capacitance in parallel. If a CFB amplifier is used as an
inverting current to voltage converter, the source capacitance must be
compensated. The gain setting resistors R_f and R_e are in series with the
compensation capacitor, thus preventing instability at high frequencies. Also,
these resistors can be used to scale the compensation capacitor value to more
practical values of 10–20 pF, instead of 1 pF or less which would normally be
needed for high speed response.

Fig. 5.3.19: Functionally equivalent circuits with conventional and current feedback amplifiers.
Integrators, filters and current to voltage converters in inverting configurations cannot be achieved
using a single CF amplifier. However, two-amplifier circuits can provide inherent amplifier pole
compensation, which is very important at high frequencies. Filters can be realized in the non-
inverting configuration. And feedback capacitance can be isolated by a resistive divider.


5.3.6 Feedback Controlled Gain Peaking

As we have just seen, the current feedback amplifier is sensitive to capacitive
loading of its inverting input. But stray capacitances are unavoidable and they can
cause significant gain peaking and even oscillations in high speed low gain designs.
Fortunately, the current feedback topology offers a simple way of controlling this by
choosing such values of R_f and R_e which would set the bandwidth and the gain to the
optimum. Although CFB amplifier performance is usually optimized for a specific
value of the bandwidth defining resistance R_f, ample trade-off range is often possible.
However, as we have learned in Part 4, in multi-stage amplifiers it is necessary
to optimize the system as a whole and not just each stage individually. The possibility
of controlling the gain peaking with feedback resistors lends itself nicely to our
purpose. In practice we would have to iteratively adjust both R_f and R_e to obtain the
desired gain, bandwidth, and peaking. What we would like is to be able to adjust the
bandwidth and peaking by a single resistor, without affecting the gain. The circuit in
Fig. 5.3.20 does just that.
The feedback resistors R_f and R_e should be chosen for the gain required, but
with the lowest possible values which would not overload the output stage. Then the
resistor between the feedback divider and the inverting input, R_b, should be adjusted
for the required response, assuming a fixed value of the stray capacitance.

Fig. 5.3.20: This circuit exploits the ability of current feedback amplifiers to
adjust the bandwidth and gain peaking independently of the gain. The price to pay
is the lower slew rate limit. See the frequency and the step response in Fig. 5.3.21.

Fig. 5.3.21: a) Frequency response; b) Step response of the amplifier in Fig. 5.3.20. The closed loop
gain A_cl = 1 + R_f/R_e = 4, R_f = 150 Ω, R_e = 50 Ω, the source resistance R_s = 50 Ω, the stray
capacitances C_s1,2 = 1 pF, the amplifier transcapacitance C_T = 1 pF, while R_b is varied from 150 Ω
for highest to 750 Ω for lowest peaking.


Note, however, that in this way we lose the current on demand property of the
CFB amplifier, since R_b will reduce the slew rate.
In a similar manner as was done for passive circuits in Part 2 and in Sec. 5.1,
the resulting gain peaking can be used to improve the step response of a multi-stage
system. As shown in Fig. 5.3.21, the gain peaking reveals the amplifier resonance,
which decreases with increasing R_b, while the DC gain remains almost unchanged.

5.3.7 Improved Voltage Feedback Amplifiers

The lessons learned from the current feedback technology can be used to
improve conventional voltage feedback amplifiers.
Besides the improved semiconductor manufacturing technology, basically
there are two approaches: one is to take the voltage feedback amplifier and modify it
using the techniques of current feedback to avoid its weak points. One such example
is shown in Fig. 5.3.22. The other way, like the circuit in Fig. 5.3.23, is to take the
current feedback amplifier and modify it to make it appear to the outside world as a
voltage feedback amplifier.
Fig. 5.3.22: The voltage feedback amplifier, improved. The transistors Q_1–Q_4 form a differential
‘folded’ cascode, which drives the current mirror Q_5,6. In this way the input is a conventional high
impedance differential, but the dominant pole compensation capacitor C_c is grounded, eliminating
the Miller effect. This circuit still exhibits slew rate limiting, although at much higher frequencies.
Typical examples of this configuration are Analog Devices’ AD-817 and Burr–Brown’s OPA-640.

The differential folded cascode Q_1–Q_4 and the current mirror Q_5,6 of Fig. 5.3.22
can be modeled by a transconductance, g_m, driven by the input voltage difference,
Δv = v_s − v_fb. Here v_s is the signal source voltage and v_fb is the feedback voltage,
derived from the output voltage v_o and the feedback network divider, R_e/(R_f + R_e).
The current i = Δv g_m drives the output buffer and the capacitance C_c:

    v_o = i · 1/(s C_c) = g_m (v_s − v_fb) · 1/(s C_c) = g_m [v_s − v_o R_e/(R_f + R_e)] · 1/(s C_c)    (5.3.49)

From this we obtain:

    v_o [1 + R_e/(R_f + R_e) · g_m/(s C_c)] = v_s g_m/(s C_c)    (5.3.50)


and, finally, the transfer function:


\frac{v_o}{v_s} = \left( 1 + \frac{R_f}{R_e} \right) \frac{ \dfrac{R_e}{R_f + R_e} \dfrac{g_m}{C_c} }{ s + \dfrac{R_e}{R_f + R_e} \dfrac{g_m}{C_c} }    (5.3.51)
If we compare this with a general first-order amplifier transfer function:

\frac{v_o}{v_s} = A_{cl} \frac{-s_1}{s - s_1}    (5.3.52)

we see that the closed loop gain is, as usual, A_cl = 1 + Rf/Re, whilst the amplifier
closed loop pole is s_1 = -Re gm / [Cc (Rf + Re)], and therefore the cut off frequency is
an inverse function of the closed loop gain, just as in voltage feedback amplifiers.
A similar situation is encountered in Fig. 5.3.23, where the current ib charging
CT is set by the input voltage difference and Rb: ib = Δv/Rb.

[circuit schematic]
Fig. 5.3.23: The basic current feedback amplifier (A1, A3, M1, M2) is improved by
adding another buffer, A2, which presents a high impedance to the feedback divider, Rf
and Re; an additional resistor, Rb, now takes the role of converting the feedback voltage
into a current and provides bandwidth setting. Like the original current feedback amplifier,
this circuit is also (almost) free from slew rate limiting. However, the closed loop
bandwidth is, as in voltage feedback amplifiers, gain dependent. A typical representative
of this configuration is Analog Devices' OP-467.

The output voltage is:


" " Ve "
@o œ 3b œ ?@ œ Œ@s  @o  (5.3.53)
= GT = G T Vb Vf  V e = GT V b

so the transfer function is:


Ve "

@o Vf Vf  V e G T Vb
œ Œ"   (5.3.54)
@s Ve Ve "
= †
Vf  V e G T Vb

The closed loop gain is the same as in the previous case, whilst the pole is
s_1 = -Re / [CT Rb (Rf + Re)], so the closed loop cutoff frequency is again an inverse
function of the closed loop gain.


One of the important parameters in integrated circuit design is the available


bandwidth vs. quiescent current. The average technology achievement around the year
1990 was about 10 MHz/mA and around the year 2000 it was already about
100 MHz/mA; the figure is steadily rising. With the ever increasing number of
transistors on a silicon chip it is important to keep this value high. The
implementation of structures which convey the supply current efficiently to the signal
helps to reduce the waste of power.
An example, named the 'Quad Core' [Ref. 5.32], is shown in Fig. 5.3.24. This
is an interesting combination of the circuits in Fig. 5.3.22 and 5.3.23, where the two input
buffers, formed by Q1,2 and Q3,4, combine their currents by the current mirrors, Q5-8,
driving the following gain stage Q9,10 in a differential push-pull mode.

[circuit schematic]
Fig. 5.3.24: An interesting combination of the circuits in Fig. 5.3.22 and 5.3.23 is the so
called 'Quad Core' structure [Ref. 5.32]. Here both the inverting and the non-inverting
input buffer currents are combined by the current mirrors to drive the Ccb of Q9,10. The
non-labeled transistors provide the Vbe compensation for Q1-4. Typical representatives
are Analog Devices' AD-8047, AD-9631, AD-8041 and others.

The current available to charge the Ccb capacitances of Q9,10 is set by the
input voltage difference and R5. This current is effectively doubled by the input
structure, thus increasing the bandwidth, the loop gain, and the linearity. A further
bandwidth improvement is achieved by the low impedance of Q7,8, which are
practically fully open and so provide a tight control of the Q9,10 base voltages,
reducing the Miller effect considerably. The circuit behaves as a voltage feedback
amplifier with the advantages of low offset and high loop gain, and with a bandwidth
and slew rate limiting close to those of current feedback amplifiers.
The output buffer stage can also be improved for greater current handling
efficiency. An example is shown in Fig. 5.3.25.
Here the collectors of Q2,3 and Q1,4 are summed and mirrored by Q5,7 and
Q6,8, respectively, and finally added to the output load current. With appropriate bias
this scheme allows a reduction of the quiescent power supply current to just one third


of the conventional buffer, whilst not compromising the full power bandwidth. At the
same time, the circuit has a comparable loading capability and offers better linearity.
[circuit schematic]
Fig. 5.3.25: Output buffer stage with improved current handling.

5.3.8 Compensating Capacitive Loads

Another very important property of high speed opamps is their ability to drive
capacitive loads. In the amplifier of Fig. 5.3.26 the capacitive load CL, together with
the non-zero output resistance Ro, adds a high frequency pole within the feedback
loop. The feedback becomes frequency dependent and the phase margin is lowered,
thus compromising the stability.

[circuit schematic]

Fig. 5.3.26: Owing to the non-zero output impedance a capacitive load adds another pole
within the feedback loop. If the closed loop gain is too low the resulting increase in phase can
make the feedback positive at high frequencies, instead of negative, destabilizing the amplifier.

The load voltage vL can be expressed as a function of the internal voltage vo
(seen when no load is present), the factor D_R, and the frequency dependence:

v_L = v_o D_R \frac{-s_L}{s - s_L}    (5.3.55)


Here D_R is the resistive divider formed by the output resistance Ro, the load
resistance RL and the total resistance of the feedback divider Rf + Re:

D_R = \frac{ \dfrac{R_L (R_f + R_e)}{R_L + R_f + R_e} }{ R_o + \dfrac{R_L (R_f + R_e)}{R_L + R_f + R_e} }    (5.3.56)
The pole s_L is formed by the load capacitance CL and the equivalent resistance seen
by it, Rq, whilst ω_L is the corresponding cut off frequency:

s_L = -\frac{1}{R_q C_L}; \qquad \omega_L = |s_L|    (5.3.57)
Rq is simply the parallel combination of all the resistances at the output node:

R_q = \frac{1}{ \dfrac{1}{R_o} + \dfrac{1}{R_L} + \dfrac{1}{R_f + R_e} }    (5.3.58)
The internal output voltage, vo, is a function of the input voltage difference, Δv, and
the amplifier open loop gain A, which, in turn, is also a function of frequency, A(s):

v_o = A(s) \, \Delta v    (5.3.59)

The input voltage difference is, of course, the difference between the signal source
voltage and the output (load) voltage, attenuated by the feedback resistors:
\Delta v = v_s - v_L \frac{R_e}{R_f + R_e}    (5.3.60)
The open loop gain A(s) is defined by the DC open loop gain A_0 and the frequency
dependent term owed to the amplifier dominant pole at the frequency s_0:

A(s) = A_0 \frac{-s_0}{s - s_0}    (5.3.61)
With this in mind, we can express the internal output voltage:

v_o = A_0 \frac{-s_0}{s - s_0} \left( v_s - v_L \frac{R_e}{R_f + R_e} \right)    (5.3.62)

and by inserting this back into Eq. 5.3.55 we have the load voltage:

v_L = A_0 \frac{-s_0}{s - s_0} \left( v_s - v_L \frac{R_e}{R_f + R_e} \right) D_R \frac{-s_L}{s - s_L}    (5.3.63)

which we solve for vL explicitly:

v_L \left[ 1 + \frac{R_e}{R_f + R_e} A_0 D_R \frac{s_0 s_L}{(s - s_0)(s - s_L)} \right] = v_s A_0 D_R \frac{s_0 s_L}{(s - s_0)(s - s_L)}    (5.3.64)


Now we can write the transfer function:

\frac{v_L}{v_s} = \frac{ A_0 D_R \dfrac{s_0 s_L}{(s - s_0)(s - s_L)} }{ 1 + \dfrac{R_e}{R_f + R_e} A_0 D_R \dfrac{s_0 s_L}{(s - s_0)(s - s_L)} }    (5.3.65)

where we separate the closed loop gain term:

\frac{v_L}{v_s} = \frac{R_f + R_e}{R_e} \cdot \frac{ \dfrac{R_e}{R_f + R_e} A_0 D_R \dfrac{s_0 s_L}{(s - s_0)(s - s_L)} }{ 1 + \dfrac{R_e}{R_f + R_e} A_0 D_R \dfrac{s_0 s_L}{(s - s_0)(s - s_L)} }    (5.3.66)
and, by multiplying the numerator and the denominator by (s - s_0)(s - s_L), which
we expand into a polynomial, we obtain the expression for the transfer function.
Clearly it is a second-order function of frequency:

\frac{v_L}{v_s} = \frac{R_f + R_e}{R_e} \cdot \frac{ \dfrac{R_e}{R_f + R_e} A_0 D_R \, s_0 s_L }{ s^2 - s (s_0 + s_L) + s_0 s_L \left( 1 + \dfrac{R_e}{R_f + R_e} A_0 D_R \right) }    (5.3.67)

The product of the poles, s_1 s_2, is a function not just of s_0 and s_L, but also of
the open loop gain A_0 and the closed loop feedback dividers, D_R and Re/(Rf + Re)
(refer to Appendix 2.1 to find the system poles of a 2nd-order function). Since the
output resistance, Ro, is usually much lower than both the load resistance RL and the
feedback resistance Rf + Re, the output divider D_R is usually between 0.9 and 1.
The system's stability is therefore dictated by the amount of loop gain when ω → ω_L.
Thus close to ω_L the loop gain will be higher than 1 either if A_0 is very high, or if ω_L
is relatively low and Rf → 0, that is, if the closed loop gain approaches unity!
This is often counter-intuitive, not just to beginners, but sometimes even to
experienced engineers. Usually, if we want to enhance the amplifier's stability, we
increase the feedback at high frequencies by placing a capacitor in parallel with Rf.
However, in the case of a capacitive load that procedure will turn the amplifier into an
oscillator. In contrast, the stability will improve if we increase the closed loop gain
(increase Rf or decrease Re). This is illustrated in Fig. 5.3.27, where the gain and the
phase are plotted for three values of Rf (∞, 4Re and 0), with the capacitive load such
that the loop gain at f_L ≈ 2·10^4 f_0 is about 3.
Conventional opamp compensation schemes (usually consisting of a resistor
or a series RC network connected between the two inputs, thus increasing the noise
gain without affecting the signal gain) increase the stability at the expense of either
the gain, or the bandwidth, or both! Conventional compensation should be used only as
a last resort, when the circuit must cope with an unknown load capacitance which can
vary considerably.
In fixed applications, when the capacitive load is known, or is within a narrow
range, it is much better to compensate the load by inductive peaking, as we have seen
in Part 2. The simplest approach is shown in Fig. 5.3.28, where the load impedance
appears real to the amplifier output, so that the feedback is not frequency dependent.


[Bode plot: gain magnitude and phase vs. normalized frequency f/f0 for a) Rf = ∞, b) Rf = 4Re, c) Rf = 0]
Fig. 5.3.27: An amplifier driving a capacitive load can become unstable if its closed loop gain is
decreased too much: a) with no feedback and b) with a gain of 5 the amplifier is stable, although
in the latter case there is already a pronounced peaking; whilst in c), with the closed loop gain
reduced to 1, the peaking is very high, the phase goes over 360°, and oscillation is excited at
the frequency at which the phase equals 360°.

[circuit schematic: compensation inductance Lc = RL²·CL with parallel damping resistor Rd = RL, in series with the load CL ∥ RL]

Fig. 5.3.28: Capacitive load compensation which makes the load appear
real and equal to RL to the opamp. This works for fixed load impedances.

Here the inductance Lc and its parallel damping resistance, Rd, are chosen so
that Rd = RL and Lc = RL²·CL, and the amplifier sees an impedance equal to RL from
DC up to the frequency at which the coil stray capacitance starts to influence the
compensation. With a careful inductance design the frequency at which this happens
can be much higher than the critical amplifier frequency.
However, even with such compensation the bandwidth can be lower than
desired, since the compensation network forms a low pass filter with the load, and the
value of the inductance is influenced by both the load resistance RL and the load
capacitance CL.


The filter transfer function is:

\frac{v_L}{v_F} = \frac{ s \dfrac{1}{R_L C_L} + \dfrac{1}{L_c C_L} }{ s^2 + s \dfrac{2}{R_L C_L} + \dfrac{1}{L_c C_L} }    (5.3.68)

The cut off frequency is ω_h = 1/√(Lc CL) = 1/(RL CL), and that is much lower
than ω_L of Eq. 5.3.57, which would apply if the amplifier could be made stable by
some other means. If RL and CL can be separated, it is possible to build a 2-pole
series peaking or a T-coil peaking circuit, tuned to form a 3-pole system together with the
pole associated with the amplifier closed loop cut off. This procedure is similar to the
one described in Part 2 and also in Sec. 5.1, so we leave it as an exercise for the reader.
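The design equations for Fig. 5.3.28 can be collected in a short sketch (the 100 Ω / 50 pF load is just an assumed example, not taken from the text):

```python
from math import pi, sqrt

def compensate(RL, CL):
    """Compensation values for Fig. 5.3.28: Rd = RL, Lc = RL^2*CL.
    Also returns the resulting cutoff fh = 1/(2*pi*RL*CL)."""
    Rd = RL
    Lc = RL ** 2 * CL
    fh = 1 / (2 * pi * RL * CL)
    return Rd, Lc, fh

Rd, Lc, fh = compensate(100.0, 50e-12)   # assumed load: 100 ohm, 50 pF
print(Rd, Lc * 1e9, round(fh / 1e6, 1))  # Rd [ohm], Lc [nH], fh [MHz]

# the two cutoff expressions in the text agree: 1/sqrt(Lc*CL) = 1/(RL*CL)
assert abs(1 / sqrt(Lc * 50e-12) - 1 / (100.0 * 50e-12)) < 1e-3
```

For this assumed load the compensation needs a 500 nH coil, and the resulting bandwidth is about 32 MHz.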
Another compensation method, shown in Fig. 5.3.29, is to separate the AC and
DC feedback paths by a small resistance Rc in series with the load and a feedback
bridging capacitance Cc:

[circuit schematic: AC feedback taken through Cc directly from the amplifier output, DC feedback taken from the load behind Rc; design condition Cc·Rf ≥ Rc·CL]

Fig. 5.3.29: The capacitive load is separated from the AC feedback path by a small
resistor Rc in series with the output; owing to the capacitance Cc this type of
compensation can be applied only to voltage feedback, unity gain compensated amplifiers.

This type of compensation can be very effective, since very small values of Rc
can be used (5-15 Ω or so), lowering the bandwidth only slightly; however, owing to the
feedback bridging capacitance Cc, it can be applied only to voltage feedback unity
gain compensated amplifiers; it can not be used with current feedback amplifiers.
A more serious problem is the fact that, in some applications, the load
capacitance would vary considerably; for example, some types of fast AD converters
have their input capacitance code dependent (thus also signal level dependent!). It is
therefore desirable to design the amplifier output stage with the lowest possible output
resistance and employ a compensation scheme which would work for a range of
capacitance values.
Fig. 5.3.30 shows the implementation found in some CFB opamps, where the
compensation network, formed by Cc and Rc, is in parallel with the output buffer.
With a high impedance load the voltage drop on the output resistance Ro is small and
the current through the compensation network is low; but with a capacitive or
other low impedance load the output current causes a high voltage drop on Ro, and
consequently the current through Cc increases. Effectively the series combination of


Cc and CL is added in parallel with CT, thus lowering the system open loop bandwidth
in proportion to the load and keeping the loop stable.
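A numeric sketch of this effect (RT, CT and Cc are assumed example values, not taken from any datasheet):

```python
from math import pi

RT = 1e6      # transimpedance node resistance [ohm], assumed
CT = 2e-12    # transimpedance node capacitance [F], assumed
Cc = 5e-12    # compensation capacitance [F], assumed

def open_loop_bw(CL):
    """Open loop bandwidth of the transimpedance node when the series
    Cc-CL combination appears in parallel with CT (see text)."""
    C_eff = CT + (Cc * CL / (Cc + CL) if CL > 0 else 0.0)
    return 1 / (2 * pi * RT * C_eff)

print(round(open_loop_bw(0.0) / 1e3, 1))      # no capacitive load [kHz]
print(round(open_loop_bw(100e-12) / 1e3, 1))  # 100 pF load [kHz]
```

With these values a 100 pF load cuts the open loop bandwidth of the transimpedance node to roughly a third, which is exactly the automatic loop-gain reduction described above.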

[circuit schematic and impedance plot: the dynamic compensation network Rc-Cc lowers |ZT| above f = 1/(2π Ro CL)]

Fig. 5.3.30: The output buffer with a finite output resistance Ro would, when driving a capacitive
load CL, present an additional pole within the feedback loop (taken from vo), which could
compromise the amplifier stability. The compensation network, formed by a series connection of Cc
and Rc, draws part of the feedback current to the output node, effectively increasing CT in
proportion to the load, reducing the transimpedance and preserving the closed loop stability.

In Fig. 5.3.30 we have three RC impedances: Z_T is the CFB amplifier
transimpedance, consisting of RT and CT in parallel; Z_c is the compensation
impedance, consisting of Rc and Cc in series; and Z_L is the load impedance, consisting
of RL and CL in parallel. The output unity gain buffer is assumed to be ideal and the
real circuit is modeled by its output resistance Ro.
If vT is the buffer input voltage, the output voltage vo will be lower by io Ro, where io
is the output current. Since Z_c is connected between the buffer input and the load, the
voltage across Z_c is equal to io Ro, so:

@T  @ o œ 3 o V o (5.3.69)
@T  @o 3o V o
3c œ œ (5.3.70)
^c ^c
The transimpedance Z_T is driven by the feedback current i_fb; the voltage v_T, which in
the ideal case would be equal to i_fb Z_T, is now lower, because part of the current is
taken by the compensation network Z_c:

v_T = (i_{fb} - i_c) Z_T    (5.3.71)

With a few simple substitutions we obtain:

v_o = i_{fb} Z_T - i_o R_o \left( \frac{Z_T}{Z_c} + 1 \right)    (5.3.72)
^c

This equation shows that the original transimpedance equation (Eq. 5.3.18) is
modified by the output current and the Z_T/Z_c ratio. Thus a capacitive load, which
would draw high currents at high frequencies (or during a step transition), will
automatically lower the system open loop cut off frequency. Consequently the gain at
high frequencies is reduced so that the closed loop crossover remains well below the
secondary pole (created by Ro and CL).
Note that the distortion at high frequencies of the compensated amplifier will
be worse than that of a non-compensated one.


5.3.9 Fast Overdrive Recovery

The ability to resume linear signal amplification after a prolonged large


overdrive is one of the most important oscilloscope design considerations. The
average oscilloscope user is often tempted to zoom in on a small detail of a relatively
large signal to inspect or debug the performance of electronic equipment.
With the input sensitivity turned high, depending on the attenuation and gain
settings, various stages of a multi-stage amplifier can be overdriven into their non-
linear region or even saturated, whilst others remain within their linear region; at
some settings it is the output stage that is overdriven first, at others the input stage
(this is because the input attenuator is always varied in steps of ×10, whilst the
following attenuation and gain steps are usually smaller, ×2 or ×2.5). When
overdriven, different amplifier stages will have different electrical and thermal
histories, so that when the signal falls back within the linear range the circuit will not
re-balance immediately, but will take some time before it regains its original
accuracy, often with many different time constants, characterized by relatively long
rising or falling ‘tails’.
For decades, Tektronix oscilloscopes excelled in fast recovery, far ahead of all their
competitors. Although the problem of overdrive recovery has never been easy to
solve, in the old days of electronic tubes and discrete transistors the power supply
voltage was always very high, allowing ample signal range. With modern ICs, having
several transistors one on top of another and with ever decreasing power supply
voltages, the useful linear range is often only a few volts. Therefore, special local
limiting circuitry must be added to smoothly switch in and out with overdrive.
The overdrive problem is even more pronounced in many modern fast ADCs
(e.g., the ‘flash type’, especially those with a two-stage pipeline architecture) whose
input voltage range is only 1–2 V. When overdriven, they become slower; or they can
generate large error codes, even if the overdrive level is just a few least significant
bits (LSBs) above maximum. Such ADCs require a well controlled clipping amplifier
to drive their input. With a voltage feedback opamp, the solution could be simple, by
adding diodes shunting the feedback loop. Precise levels with sharp knees are needed,
so a Schottky diode bridge with a biased Zener diode is often used. Fig. 5.3.31 shows
one such possible circuit.
[circuit schematic: opamp with a Schottky diode bridge and biased Zener diode in the feedback loop, bias resistors R1 and R2]
Fig. 5.3.31: The output level clipping is more precise if a biased Zener diode in a
Schottky diode bridge is controlling the feedback. However, this circuit can be
used only with voltage feedback unity gain compensated amplifiers.


Current feedback opamps do not like changes in feedback impedance,


therefore a different output clipping scheme is used, having separate supply voltages
for the output stage, as in Fig. 5.3.32. However, when the output reaches the level of
its supply voltage the input stage loses feedback, so the input signal can overdrive the
input differential stage. Usually we can prevent this by adding two anti-parallel
Schottky diodes between the two inputs. Unfortunately, this would also increase the
input capacitance.
[circuit schematic: current feedback amplifier whose output stage is supplied from separate, lower clipping voltages Vcp and Vcn]
Fig. 5.3.32: The output buffer with a separate lower supply voltage can be used for
output signal clipping with current feedback amplifiers. Since feedback is lost during
clipping, the input stage must be protected from overdrive by anti-parallel Schottky
diodes, which, inevitably, increase the input capacitance.

Another solution, often used with the current feedback topology, is realized by
limiting the voltage at the internal CT node, using two normally closed voltage
followers, as shown in Fig. 5.3.33. The addition of the voltage limiters increases the
total capacitance at the CT node, so, all other things being equal, limiting amplifiers
tend to be slower. But with careful design the bandwidth can still be quite high.

[circuit schematic: current feedback amplifier with clipping transistors Q5-Q8 at the CT node, clipping levels set by VcH and VcL]
Fig. 5.3.33: Output signal clipping by limiting the internal CT node of a current feedback amplifier.
The transistors Q5-8 are normally reverse biased. For positive voltages Q6,8 (Q5,7 for negative) start
conducting only when the voltage at CT reaches VcH (VcL).


Transistors Q1-4 form the two current mirrors, which reflect the feedback
current i_fb from the inverting input (the first buffer output) into the transimpedance
node (at CT). Transistors Q5 and Q7 are normally reverse biased, and so are the B-E
junctions of Q6 and Q8. The transistors Q5,7 start to conduct when the CT voltage
(and consequently the output voltage vo) falls below VcL. Likewise, the transistors
Q6,8 conduct when the CT voltage exceeds VcH. When either of these transistors
conducts, it diverts the mirrored current i_fb to one of the supplies. Note that the
voltages which set the clipping levels can be as high as the supply voltage. Also, they
can be adjusted independently, as long as VcH > VcL. Since only two transistors at a
time are required to switch on or off, the limiting action, as well as the recovery from
limiting, can be very fast.
The most common misconception of overdrive recovery, even amongst more
experienced engineers, is the belief that short switching times can be achieved only if
the transistors are prevented from being saturated by artificially keeping them within a
linear signal range. It often comes as a surprise if this does not solve the problem or,
in some cases, can make it even worse.
It is true that adding Schottky diodes to a TTL gate makes it faster than
ordinary TTL, and the inherently non-saturating ECL is even faster. Fast recovery is
ultimately limited by the so called minority carrier storage time within the
semiconductor material, and it depends on the type and concentration of dopants
which determine the mobility of minority charge carriers. However, in analog circuits
the problem is radically different from digital circuits, since we are interested in not just
how quickly the output will start to catch up with the input, but rather how quickly it
will follow the input to within 0.1 %, or even 0.01 %. In current state of the art
circuitry, the best recovery times are < 5 ns for 0.1 % error and < 25 ns for 0.01 %.
In this respect it is the thermal ‘tails’ that are causing trouble. Wideband
circuits need more power than conventional circuits, so good thermal balance is
critical. Careful circuit design is required in order to keep temperature differences
low, both during linear and non-linear modes of operation.
To some extent, we have been dealing with thermal problems in Part 3. There
we were discussing two-transistor circuits, such as the cascode and the differential
amplifier. But the problem in multi-transistor circuits is that, even if it is differentially
or complementary symmetrical, only one or two transistors will be saturated during
overdrive, the rest of the circuit will either remain linear or cut off, which in this last
case means cooling down. In saturation there is a low voltage across the device
(usually a few tens of mV), so, even if the current through it is large, the power
dissipation is low. Inevitably this results in different thermal histories of different
parts of the circuit.
In integrated circuits the temperature gradients across the die are considerably
lower than those between transistors in discrete circuits, but complex circuits can be
large and heat conduction can be limited, so designers tend to reduce the power of
auxiliary circuitry which is not essential for high speed signal handling. Therefore, hot
spots can still exist and can cause trouble. Another important factor is gain switching
and DC level adjustment, which must not affect the thermal balance, either because of
amplifier working point changes or because of the control circuitry.


Circuits which rely on feedback for error correction can be inherently less
sensitive to thermal effects (except for the input differential transistor pair!).
However, the feedback stability, or, more precisely, the stability in the absence of
feedback during overdrive, can cause identical or even worse problems. If there is insufficient damping
during the transition from saturation back to the linear range, long ringing can result,
impairing the recovery time. Such problems are usually solved by adding normally
reverse biased diodes, which conduct during the saturation of a nearby transistor,
allowing the remaining part of the circuit some control over critical nodes.
We will review a few possible solutions in the following section.


5.4 Improving Amplifier Linearity

The discussion of modern wideband amplifiers would not be complete without


considering their linearity. In older wideband instrumentation a non-linearity of 1%
was considered adequately low. In oscilloscopes this figure was comparable to the
width of the luminous trace on the CRT screen.
In modern digitizing oscilloscopes the vertical resolution is limited by the
resolution of the analog to digital converter (ADC), which for high sampling rate
systems is rarely better than 8 bits (1:2^8, or ~0.4 %). At lower sampling rates the
resolution can be 10 bits (1:2^10, or ~0.1 %) or 12 bits (1:2^12, or ~0.025 %). However, in
such cases the digital display's resolution limits the readout accuracy (some new
digital oscilloscopes have LCD screens with 1024 horizontal by 768 vertical pixels).
Nevertheless, the linearity issue is still important, because digital systems are
often used when additional signal processing is required, either by the digital signal
processor (DSP) within the ’scope itself or by an external computer to which the
sampled signal is transferred. This processing can range from simple two-channel
signal subtraction, addition, multiplication, etc., to averaging, convolution, or spectral
analysis by fast Fourier transform (FFT). Note that spectral analysis, even if
performed on 8 bit data, can offer an apparent 80 dB range (0.01 % resolution) if the
signal in memory is long (1 MB signal acquisition length is not uncommon!) and if the
FFT is performed on a large number of waveforms. It is therefore important to
preserve all the linearity which the amplifier and the digitizer can offer.
At low frequencies the simplest way of improving linearity is by applying
some sort of local corrective (negative) feedback at each amplifying stage, as we have
seen in Sec. 5.2. But at high frequencies the feedback can give more trouble than it
can solve, owing to multiple high frequency poles and the total phase and time delay
within the loop. Pole–zero compensation and phase correction can be used to a certain
extent, but ultimately the amplifier’s time delay sets the limit. With feedback the error
can be reduced only and never eliminated, since the error reduction is proportional to
the loop gain, which can not be infinite, and it also falls with frequency. So the error is
small at low frequencies, increasing to its full value at the closed loop cut off.
At the highest frequencies the only error correction technique which can be
made to work is the so called error ‘feedforward’. This technique involves taking the
driving signal error from an earlier circuit stage and then subtracting it from the output
signal, so that the error is effectively canceled.
For the younger generation of engineers it is perhaps interesting to mention the
historical perspective of feedback and feedforward error correction. Feedback is an
omnipresent concept today, but it was not always so! In fact, both the feedback and
the feedforward concepts were invented by Harold S. Black [Ref. 5.39 and 5.40].
However, feedforward was invented in 1923 and feedback in 1927, some 4 years later.
While the patent for feedforward was granted in 1928, the feedback was such a
“strange and counterintuitive” concept that the patent was granted almost 10 years
after the application, in 1937! In spite of the fact that the feedback concept was
invented later and was slow to catch on, once it did it soon became the
preferred method of error reduction, mostly owing to the work of H. Nyquist (1889–
1976) and H.W. Bode (1905–1982). On the other hand, feedforward was almost


forgotten, then ‘reinvented’ from time to time, only to be rediscovered by the broader
engineering community in 1975, when the Quad 405 audio power amplifier came onto
the market [Ref. 5.42 and 5.43]. Following the presentation article by the 405’s
designer, Peter J. Walker, the idea was refined and generalized by a number of
authors, among the first by J. Vanderkooy and S.P. Lipshitz [Ref. 5.44].
Later M.J. Hawksford [Ref. 5.46] showed that between the two extremes (pure
feedback on one end and pure feedforward on the other) there is a whole spectrum of
solutions combining both concepts. Moreover, such solutions can be applied both at
system level (like the Quad 405 itself or similar solutions, as in [Ref. 5.48]) or down
to the transistor level (as in [Ref. 5.47]).
It is interesting that while there were several examples of feedforward
application in the field of RF communications (some even before 1975), most of the
theoretical work was done with audio power amplification in mind. Apparently it took
some time before the designers of high speed circuits grasped the full potential and the
inherent advantages of the feedforward error correction techniques. In a way, this
situation has not changed much, for even today we meet feedforward error correction
mostly in RF communications systems and top level instrumentation. At the IC level
we find feedforward only as a method of phase compensation (bypassing a slow inter-
stage, not error correction), mostly in older opamps. From 1985 the advance in
semiconductor processing has been extremely fast, discouraging amplifier designers
from seeking more ‘exotic’ circuit topologies.
Before we examine the benefits of the feedforward technique for wideband
amplification we shall first compare the feedback and feedforward principles from the
point of view of error correction.

5.4.1 Feedback and Feedforward Error Correction

Fig. 5.4.1 shows the comparison of amplifiers with feedback and feedforward
error correction. The feedforward amplifier is shown in its simplest form — later we
shall see other possible realizations of the same principle.
[block diagrams: a) feedback — deleted; a) feedback amplifier A(s) with attenuator β formed by Rf and Re, driving load ZL; b) feedforward with main amplifier A1(s) and auxiliary amplifier A2(s), whose outputs combine at the load ZL]
Fig. 5.4.1: Comparison of amplifiers with feedback and feedforward error correction. The
feedback amplifier has excess gain, A(s), which is reduced to the required level by taking the
output voltage, suitably attenuated (β), to the inverting input of the differential amplifier (hence the
name 'negative feedback'). The feedforward case, in its simplest form, has two amplifiers: the
main amplifier A1(s) is assisted by the auxiliary amplifier A2(s), which takes the difference
between the input voltage and the attenuated main amplifier output forward to the load.


The analysis of the feedback amplifier has already been presented in Sec. 5.3,
but we shall repeat some expressions in order to correlate them with the feedforward
amplifier. From Fig. 5.4.1a we can write:

v_o = (v_s - \beta v_o) \cdot A(s)    (5.4.1)

where " œ Ve ÎaVf  Ve b. By solving for @o we have:


Ea=b
@o œ @s (5.4.2)
"  " Ea=b

The fraction on the right hand side is the amplifier closed loop gain G_fb; it can be
rewritten in a form from which it is easily seen that the gain can be
approximated by 1/β if A(s) is large:

G_{fb} = \frac{A(s)}{1 + \beta A(s)} = \frac{1}{\beta + \dfrac{1}{A(s)}} \approx \frac{1}{\beta} = 1 + \frac{R_f}{R_e} \quad \text{for } A(s) \to \infty    (5.4.3)

Of course, this final simplification is valid at low frequencies only. Since EÐ=Ñ is
finite and falls with frequency owing to the amplifier dominant pole =! :
 =!
Ea=b œ E! (5.4.4)
=  =!
the closed loop transfer function must also have a pole, but at a frequency =h at which
EÐ=h Ñ œ "Î" . Since 0h œ l=h lÎ#1 and 0! œ l=! lÎ#1 we can write:
" 0!
œ E! (5.4.5)
" 0h  0!

and, considering that " E! ¦ ", the closed loop cut off frequency is:
E!
0h œ 0! a" E!  "b ¸ 0! " E! œ 0! (5.4.6)
Kfb
So the closed loop cut off frequency of a voltage feedback amplifier is inversely
proportional to the closed loop gain.
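The gain–bandwidth trade-off of Eq. 5.4.6 is easy to check numerically. The sketch below uses a single-pole open loop model with arbitrary illustrative values for A_0, f_0 and β (they come from no particular design) and verifies that the closed loop response is 3 dB down exactly at f_h = f_0(βA_0 + 1):

```python
# Single-pole open loop model and its closed loop response.
# A0, f0 and beta are arbitrary illustrative values, not from the text.
A0   = 1e5     # open loop DC gain
f0   = 10.0    # dominant pole frequency [Hz]
beta = 0.1     # feedback attenuation, so G_fb ~ 1/beta = 10

def A(f):
    # open loop gain A(f) = A0 / (1 + j*f/f0)
    return A0 / (1 + 1j * f / f0)

def G_fb(f):
    # closed loop gain, Eq. 5.4.2
    a = A(f)
    return a / (1 + beta * a)

G0 = abs(G_fb(0.0))          # DC closed loop gain, ~ 1/beta
fh = f0 * (beta * A0 + 1)    # closed loop cut off, Eq. 5.4.6
print(G0, fh, abs(G_fb(fh)) / G0)   # last ratio ~ 0.707 (the -3 dB point)
```

Raising the closed loop gain (lowering β) moves f_h down in exact proportion, which is the trade-off stated in the text.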
[Fig. 5.4.2: log–log plot of the open loop magnitude |A(f)|, A(f) = A_0 f_0/(j f + f_0),
and of the closed loop response G_fb(f) = (1/β)·f_h/(j f + f_h), with
f_h = f_0 (β A_0 + 1), 1/β = (R_f + R_e)/R_e, and the transition frequency f_c
where |A(f)| = 1]

Fig. 5.4.2: For a voltage feedback amplifier the closed loop frequency response G_fb(f)
depends on the open loop gain A_0, its dominant pole f_0 and the feedback attenuation factor β.
The transition frequency f_c is equal to the amplifier gain–bandwidth product (but only if the
amplifier does not have a secondary pole close to or lower than f_c).


For the feedforward amplifier, Fig. 5.4.1b, we must first realize that the load
voltage, v_o, is the difference of the output voltages of the individual amplifiers:

    v_o = v_1 − v_2                                                       (5.4.7)

Since the main amplifier output is:

    v_1 = v_s A_1(s)                                                      (5.4.8)

and the auxiliary amplifier output is:

    v_2 = (β v_1 − v_s) · A_2(s)                                          (5.4.9)

it follows that the system output is:

    v_o = v_s A_1(s) − (β v_1 − v_s) · A_2(s)
        = v_s [A_1(s) − β A_1(s) A_2(s) + A_2(s)]                         (5.4.10)

So by temporarily neglecting the frequency dependence, the closed loop gain
of the feedforward amplifier is:

    G_ff = A_1 + A_2 − β A_1 A_2                                          (5.4.11)

The following reasoning is the most important point in feedforward amplifier
analysis; it is probably difficult to foresee, but once seen it becomes all too
obvious. Let us say that, within the frequency range of interest, we can achieve:

    β A_1 = 1                                                             (5.4.12)

Then:

    G_ff = 1/β + A_2 − β (1/β) A_2 = 1/β                                  (5.4.13)

So, whatever the actual value of the auxiliary amplifier gain A_2, the system’s
gain G_ff will be equal to 1/β if we can make A_1 = 1/β. Note that we have not
required either of the two gains to be very high, as we were forced to for feedback
amplifiers; therefore this result is achieved without any approximation! True, if A_1 is
frequency dependent and β is not, at high frequencies Eq. 5.4.12 would not hold and,
consequently, Eq. 5.4.13 would not be so simple.
However, when A_1 starts to fall with frequency the factor −β A_1 A_2 is also
reduced by the same amount, and the appropriate part of A_2 compensates the loss.
This remains so as long as the gain A_2 stays constant with frequency (and even
beyond its own cut off, provided that there still is enough gain for correction!).

Thus feedforward (in principle) achieves the dream goal:


a zero error, high cut off frequency gain, using non-ideal amplifiers!

We shall, of course, still have to deal with component tolerances, temperature


dependences, uncontrollable strays, time delays, etc., but with a manageable effort all
these factors can be minimized, or, at least, kept below some predictable limit.
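The exactness of Eq. 5.4.13 can be demonstrated with a few lines of arithmetic. In the sketch below (β = 0.1 is an arbitrary illustrative choice) the composite gain stays at precisely 1/β while the auxiliary gain A_2 is varied wildly:

```python
# Eq. 5.4.11 evaluated under the input balance condition beta*A1 = 1:
# the composite gain is 1/beta regardless of A2 (illustrative values).
beta = 0.1
A1 = 1 / beta                        # input balance, Eq. 5.4.12

for A2 in (2.0, 5.0, 17.3, 1000.0):  # wildly different auxiliary gains
    G_ff = A1 + A2 - beta * A1 * A2  # Eq. 5.4.11
    print(A2, G_ff)                  # G_ff = 10.0 in every case
```

No approximation is involved: the A_2 terms cancel identically, which is the algebraic core of the feedforward idea.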


If you now think that there are no more surprises with feedforward amplifiers,
consider the following points:
Eq. 5.4.11 is, in a sense, symmetrical, thus G_ff = 1/β (as in Eq. 5.4.13) can be
achieved also if we decide to make:

    β A_2 = 1                                                             (5.4.14)

However, the advantage of making β A_1 = 1 is that the input signal is
canceled at the auxiliary amplifier differential input (remaining as a common mode
signal only). In contrast with the ‘output balance’ condition represented by Eq. 5.4.14,
Eq. 5.4.12 represents the so called ‘input balance’ condition, in which the auxiliary
amplifier needs only a very low level amplitude swing at low frequencies (but rising
to 1/2 of the full amplitude and higher at the A_1 cut off and beyond, respectively). It
therefore processes only the errors of the main amplifier, canceling them at the load
and leaving only those of the auxiliary amplifier; as errors in processing the main
amplifier error, those are secondary errors only.
It is also possible to make both gains equal to 1/β:

    A_1 = A_2 = 1/β                                                       (5.4.15)

achieving in this way both input and output node balance.

In some instances there are good reasons to make A_1 larger than 1/β,
achieving also gain correction (in this case, the auxiliary amplifier will have to partly
handle the input signal, too). Of course, in all these cases the gain matching has to be
very precise, since in feedforward amplifiers the error cancellation is additive, as can
be deduced from Eq. 5.4.11, not multiplicative as in feedback amplifiers (Eq. 5.4.2).
Another nice property of the feedforward circuit is that it is feedback-free, so
there are no loops causing potential instability, and consequently no stability criterion
to satisfy. As a disadvantage, the output impedance is not lowered by the feedforward
action (as it is in feedback amplifiers), so for high amplifier power efficiency it has to
be made as low as possible within the frequency range of interest.
Let us return to the frequency dependence of the feedforward system gain. By
defining:

    A_1(s) = A_01 · (−s_1)/(s − s_1)   and   A_2(s) = A_02 · (−s_2)/(s − s_2)   (5.4.16)

the system gain is:

    G_ff = A_01 (−s_1)/(s − s_1) + A_02 (−s_2)/(s − s_2)
           − β A_01 A_02 · s_1 s_2 / [(s − s_1)(s − s_2)]                 (5.4.17)

where A_01 and A_02 are the main and auxiliary amplifier DC gain, respectively. By
choosing A_01 = 1/β we get:

    G_ff = (1/β) · (−s_1)/(s − s_1) + A_02 · (−s_2)/(s − s_2) · [1 − (−s_1)/(s − s_1)]

         = (1/β) · (−s_1)/(s − s_1) + A_02 · (−s_2)/(s − s_2) · s/(s − s_1)   (5.4.18)


Note that the auxiliary amplifier gain is effectively multiplied by the high pass
version of the main amplifier’s frequency dependence. If we now decide to make
A_02 = 1/β also, we obtain:

    G_ff = (1/β) [ (−s_1)/(s − s_1) + (−s_2)/(s − s_2) · s/(s − s_1) ]

         = (1/β) · (−s_1)/(s − s_1) · [ 1 + (s_2/s_1) · s/(s − s_2) ]     (5.4.19)

The question is: is it desirable to make s_1 = s_2? Let us see:

    G_ff = (1/β) [ (−s_1)/(s − s_1) + (−s_1) s/(s − s_1)² ]               (5.4.20)

The second fraction represents a second order band pass response, which will add
some gain peaking and extend the bandwidth by a factor of almost 3:

[Fig. 5.4.3: log–log magnitude plot of G_ff, A_1 and A_2 versus f/f_1, with the −3 dB
level marked; the composite G_ff response extends well beyond the common cut off
of A_1 and A_2]

Fig. 5.4.3: The feedforward amplifier bandwidth G_ff is highest (and optimized in
the sense of the lowest gain × bandwidth requirement of the auxiliary amplifier) if both
amplifiers have the same bandwidth and gains equal to 1/β. In this figure
1/β = 10, A_01 = 10, and A_02 = 11 (in order to distinguish A_2 from A_1 more
easily). If s_1 ≠ s_2 the G_ff bandwidth will be lower.
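A quick numerical scan of Eq. 5.4.20 shows the extent of this bandwidth extension. In the sketch below (normalized, illustrative values: f_1 = 1, 1/β = 10) the −3 dB point of the composite response lands at about 2.5 f_1; the exact factor works out to √(3 + √10) ≈ 2.48, in line with the ‘almost 3’ of the text:

```python
# Scan Eq. 5.4.20 for its -3 dB frequency (normalized, illustrative values).
import math

beta = 0.1
f1 = 1.0   # common cut off frequency of A1 and A2, normalized to 1

def G_ff(f):
    # Eq. 5.4.20 with s = j*2*pi*f and the left half plane pole s1 = -2*pi*f1
    s, s1 = 2j * math.pi * f, -2.0 * math.pi * f1
    return (1/beta) * (-s1 / (s - s1) + (-s1) * s / (s - s1)**2)

G0 = abs(G_ff(0.0))
f = 0.0
while abs(G_ff(f + 0.001)) > G0 / math.sqrt(2):   # walk up to the -3 dB point
    f += 0.001
print(f)   # about 2.48 * f1
```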

5.4.2 Error Reduction Analysis

In order to analyze the error reduction by both feedback and feedforward
action we have to determine the sensitivity of the system gain to the amplifier gain
variations. Generally, the sensitivity of some system property, let us call it P, to
variations of some subsystem parameter x is expressed as the x fraction of P,
multiplied by the partial derivative of P with respect to x:

    S_x^P = (x/P) · ∂P/∂x                                                 (5.4.21)

and it represents the relative change in P for a unit relative change in x.
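For transfer functions too awkward to differentiate by hand, the partial derivative in Eq. 5.4.21 can be replaced by a finite difference. The helper below (an illustrative sketch, with β = 0.1 assumed) reproduces the feedback-amplifier result S = 1/(1 + βA) of Eq. 5.4.22:

```python
# Numerical evaluation of the sensitivity definition, Eq. 5.4.21,
# by a central finite difference (illustrative values only).

def sensitivity(P, x):
    dx = x * 1e-5                       # small relative step
    dP = (P(x + dx) - P(x - dx)) / (2 * dx)
    return x / P(x) * dP

beta = 0.1
G_fb = lambda A: A / (1 + beta * A)     # closed loop gain, Eq. 5.4.3

for A in (10.0, 100.0, 1e4):
    print(A, sensitivity(G_fb, A), 1 / (1 + beta * A))   # the two agree
```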


For the feedback amplifier we want to know the influence of variations of the
amplifier open loop gain A on the closed loop gain G_fb, as defined by Eq. 5.4.3:

    S_A^Gfb = (A/G_fb) · ∂G_fb/∂A
            = (1 + βA) · ∂/∂A [A/(1 + βA)]
            = (1 + βA) · [1/(1 + βA) − βA/(1 + βA)²]
            = 1 − βA/(1 + βA)
            = 1/(1 + βA)  →  0   for A → ∞                                (5.4.22)

This means that the gain sensitivity is low only if A is very high. We also want to
know the influence of variations of the feedback attenuation β:

    S_β^Gfb = (β/G_fb) · ∂G_fb/∂β
            = [β (1 + βA)/A] · [−A²/(1 + βA)²]
            = −βA/(1 + βA)  →  −1   for A → ∞                             (5.4.23)

In the case of the feedforward amplifier, the influence of A_1 and A_2 on the
system gain, as well as the influence of β, using Eq. 5.4.11 for G_ff, is:

    S_A1^Gff = (A_1/G_ff) · ∂G_ff/∂A_1 = A_1 (1 − βA_2) / [A_1 (1 − βA_2) + A_2]
             = 1 − βA_2   if βA_1 = 1
             = 0          if βA_2 = 1                                     (5.4.24)

    S_A2^Gff = (A_2/G_ff) · ∂G_ff/∂A_2 = A_2 (1 − βA_1) / [A_2 (1 − βA_1) + A_1]
             = 0          if βA_1 = 1
             = 1 − βA_1   if βA_2 = 1                                     (5.4.25)

    S_β^Gff = (β/G_ff) · ∂G_ff/∂β = −βA_1 A_2 / (A_1 + A_2 − βA_1 A_2)
            = −βA_2   if βA_1 = 1
            = −βA_1   if βA_2 = 1
            = −1      if A_1 = A_2 = 1/β                                  (5.4.26)

It is evident that the second condition in Eq. 5.4.24 and the first in Eq. 5.4.25,
as well as the third condition in Eq. 5.4.26, are the same as for the feedback amplifier.
However, remember that for the feedback amplifier the results belong to the ideal case
for which A → ∞, so in practice they can be approximated only, whilst for the

feedforward amplifier they can be realized without any approximation (but within the
limits of the specified component tolerances).
In a similar way we can determine the error reduction. For the feedback
amplifier we have:

    ε_fb/ε_A = 1/(1 + βA)  →  0   for A → ∞                               (5.4.27)

and again, zero distortion is achievable only in the idealized case of infinite gain.
In contrast, for the feedforward amplifier we have:

    ε_ff/ε_A1 = 1 − βA_2 = 0   when βA_2 = 1                              (5.4.28)

and this extraordinary result can be realized (not merely approximated!) to whatever
degree of precision we are satisfied with (accounting also for the technology cost).
Also, we must not forget that the open loop gain of the feedback amplifier
decreases with frequency, so, for a given A_0, the theoretically achievable maximum
error reduction, 1/(1 + βA_0), is obtained only from DC up to the frequency of the
dominant pole, f_0; beyond f_0 the error increases proportionally with frequency.
In contrast, feedforward amplifiers offer the same level of error reduction from
DC up to the full feedforward system bandwidth and even beyond!
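The contrast between Eq. 5.4.27 and Eq. 5.4.28 is easy to tabulate. In the illustrative sketch below (single-pole open loop model, arbitrary example values for A_0, f_0 and β), the feedback residual error grows with frequency, while the balanced feedforward residual stays at zero at any frequency:

```python
# Residual (uncorrected) error fraction vs. frequency for both schemes.
# A0, f0 and beta are arbitrary illustrative values.
beta, A0, f0 = 0.1, 1e4, 1.0

for f in (0.1, 1.0, 10.0, 100.0, 1000.0):
    A = A0 / (1 + 1j * f / f0)          # open loop gain with dominant pole f0
    err_fb = abs(1 / (1 + beta * A))    # Eq. 5.4.27, degrades above f0
    err_ff = 1 - beta * (1 / beta)      # Eq. 5.4.28 with beta*A2 = 1 held exactly
    print(f, err_fb, err_ff)            # err_ff stays 0 at all frequencies
```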

5.4.3 Alternative Feedforward Configurations

The main drawback of the feedforward amplifier in Fig. 5.4.1b is the ‘floating’
load (between the outputs of both amplifiers); in most cases we would prefer a ground
referenced load, instead.
We have already noted that the output impedance of the main amplifier is not
reduced by feedforward action; in fact, it does not need to be low in order to achieve
effective error canceling. This leads to the idea of summing passively the two outputs,
with that of the auxiliary amplifier inverted in phase:

[Fig. 5.4.4: schematic of the grounded load feedforward amplifier: the main amplifier
A_1(s) and the auxiliary amplifier A_2(s) are passively summed into the common
load Z_L through the output impedances Z_4 and Z_3; the β divider (R_f, R_e) taps
the main amplifier output]

Fig. 5.4.4: Grounded load feedforward amplifier. Note the inverted input polarity of the
auxiliary amplifier, compared with the circuit in Fig. 5.4.1b. This allows passive signal
summing over the output impedances Z_3 and Z_4.


The impedances Z_3 and Z_4 should be lower than Z_L, but this condition is
dictated mainly by amplifier power efficiency, not by the summing process. We have
labeled them Z_3 and Z_4 in accordance with tradition (emerging from the Quad 405
circuit, see Fig. 5.4.6) and also in accordance with a general form, in which Z_1 and
Z_2 appear as feedback components to A_1 and A_2. As implied by the Z symbol,
these impedances can also be complex. Following the Quad 405 labeling, the main
amplifier feeds the load through Z_4 and the auxiliary amplifier through Z_3.
The output voltage v_o can be calculated by summing the two output currents:

    i_1 = (v_1 − v_o)/Z_4   and   i_2 = (v_2 − v_o)/Z_3                   (5.4.29)

so that:

    v_o = Z_L (i_1 + i_2)                                                 (5.4.30)

which results in:

    v_o = Z_L / [Z_L + Z_3 Z_4/(Z_3 + Z_4)] · [v_1 Z_3/(Z_3 + Z_4) + v_2 Z_4/(Z_3 + Z_4)]   (5.4.31)

By extracting the common attenuation factor, a:

    a = Z_L / [Z_L + Z_3 Z_4/(Z_3 + Z_4)] · Z_3/(Z_3 + Z_4)               (5.4.32)

the system’s gain is:

    G_ff = v_o/v_s = a [A_1 + (A_2 − β A_1 A_2) · Z_4/Z_3]
         = a/β   when β A_1 = 1 or β A_2 = Z_3/Z_4                        (5.4.33)

Because of the passive summing, the correct balance condition and error
canceling for this circuit are achieved when the two output voltages are in the inverse
ratio of the impedances:

    v_2/v_1 = Z_3/Z_4                                                     (5.4.34)

If we assume that the output balance has been achieved, then:

    v_2 = v_1 β A_2                                                       (5.4.35)

Combining this with the balance condition of Eq. 5.4.34:

    v_1 β A_2 / v_1 = Z_3/Z_4   ⟹   β A_2 = Z_3/Z_4                       (5.4.36)

On the other hand, the input balance condition, β A_1 = 1, because of
Eq. 5.4.34 and 5.4.36, results in the gain ratio equal to the impedance ratio:

    A_2/A_1 = Z_3/Z_4                                                     (5.4.37)


The auxiliary amplifier will, under this condition, draw considerable current,
even without the load. To reduce the current demand, we have to give up the input
balance condition. If we set v_2 = v_1 then there would be no current if Z_L = ∞, and
by choosing Z_3 ∥ Z_4 ≪ Z_L the current demand is reduced for the nominal load:

    v_2 = A_2 (v_1/A_1 − β v_1) = v_1                                     (5.4.38)

and this means that:

    A_2/A_1 = β A_2 + 1                                                   (5.4.39)

which, considering Eq. 5.4.36, results in:

    A_2/A_1 = Z_3/Z_4 + 1                                                 (5.4.40)

and this should be compared with the ‘simple’ balance condition in Eq. 5.4.37.
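The collapse of Eq. 5.4.33 to a/β under the balance condition can be verified numerically. In the sketch below all component values are arbitrary illustrative numbers; with βA_2 = Z_3/Z_4 held, the composite gain is a/β no matter what the main amplifier gain A_1 does:

```python
# Check of Eq. 5.4.33 (grounded load passive summing): with the balance
# beta*A2 = Z3/Z4 the composite gain equals a/beta for any main gain A1.
# All component values are arbitrary illustrative numbers.
beta, Z3, Z4, ZL = 0.1, 5.0, 50.0, 8.0
A2 = Z3 / (beta * Z4)                     # balance condition, Eq. 5.4.36

P = Z3 * Z4 / (Z3 + Z4)                   # Z3 parallel Z4
a = ZL / (ZL + P) * Z3 / (Z3 + Z4)        # common attenuation, Eq. 5.4.32

for A1 in (3.0, 10.0, 42.0):
    G = a * (A1 + (A2 - beta * A1 * A2) * Z4 / Z3)   # Eq. 5.4.33
    print(A1, G, a / beta)                # G equals a/beta in every case
```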
Another configuration, known as the ‘error take off’, by Sandman [Ref. 5.49],
is shown in Fig. 5.4.5. Here both the main and the auxiliary amplifier are of the
negative feedback type; however, the auxiliary amplifier senses both the distortion
and the gain error from the main amplifier feedback input and delivers it to the load in
the same feedforward passive summing manner.
[Fig. 5.4.5: schematic of the ‘error take off’ circuit: the feedback amplifier A_1(s)
(input impedance Z_i, feedback Z_1) drives the load Z_L through Z_4; the auxiliary
feedback amplifier A_2(s) (gain set by Z_2 and R_e) takes the residual error Δv from
the inverting input of A_1 and drives the load through Z_3]

Fig. 5.4.5: ‘Error take off’ principle. The error Δv of the main amplifier, left by feedback,
is taken by the auxiliary amplifier and fed forward to the load, where it is passively
summed with the main output. The impedances Z_1 to Z_4 form the balancing bridge.

With an ideal main amplifier the voltage at its inverting input would be at a
(virtual) ground potential; any signal Δv present at this point represents an attenuated
version of the main amplifier error ε (gain error and distortion). If adequately
amplified, it can be added to the main output to cancel the error at the load:

    Δv = β ε = ε · Z_i/(Z_i + Z_1)                                        (5.4.41)

To achieve effective error cancellation, we must set:

    (Z_2/R_e) · Z_i/(Z_i + Z_1) = Z_3/Z_4                                 (5.4.42)


A variation of this circuit is shown in Fig. 5.4.6, which actually represents the
original Quad 405 ‘current dumping’ amplifier configuration, and we now see how it
follows from both the ‘error take off’ circuit and the pure feedforward circuit. If the
balance condition is achieved, A_2 must compensate whatever amount of error appears
at the A_1 output. It is important to realize that the input signal of A_1 can be taken
from any suitable point within the circuit (the only condition is that it should,
preferably, not be out of phase); the A_2 output represents just such a convenient
point. The balance condition for the 405 is:

    Z_3/Z_4 = Z_2/Z_1                                                     (5.4.43)

and the main amplifier error is canceled. Again, the impedances Z_n can be real or
complex, whichever combination satisfies this equation.

[Fig. 5.4.6: schematic of the Quad 405 ‘current dumping’ configuration: the main
amplifier A_1(s) takes its input v_x from the output of the auxiliary amplifier A_2(s)
and drives the load Z_L through Z_4; A_2(s) (input resistor R_i, feedback Z_2) drives
the load through Z_3; Z_1 closes the balancing bridge]

Fig. 5.4.6: Current dumping: by taking the main amplifier input signal (v_x) from the
auxiliary amplifier (v_2), we obtain the original Quad 405 amplifier configuration.
Although it is effective in error cancellation, its main disadvantages are the requirement
for a large voltage at the auxiliary amplifier output and a relatively low cut off frequency. In
the Quad 405, Z_1 and Z_3 are resistors, Z_2 is a capacitor and Z_4 is an inductor.

A disadvantage of the current dumping scheme is that the auxiliary amplifier,
although supplying relatively low current, must supply all the output voltage plus the
error term; in such a condition the auxiliary amplifier’s error can be rather high, and
although it is a second order error it can be significant. Also, in the classical 405
realization Z_2 (the feedback impedance of the auxiliary amplifier) is a capacitance
(compensating the inductance Z_4), which results in a relatively slow system (high
speed is not an issue at audio frequencies).

5.4.4 Time Delay Compensation

Let us return to the basic feedforward amplifier of Fig. 5.4.1b. It is clear that if
both the main and the auxiliary amplifier have limited bandwidth, time delays are
inevitable. In order to work properly the feedforward scheme must include some time
delay compensation, otherwise error cancellation close to and beyond the system cut
off frequency would not occur.
For two sinusoidal signals a small time delay between them is transformed
into a small amplitude error when summed and it might even be possible to


compensate it by altering the balance condition. However, for a square wave or a step
function, large spikes, equal to the full amplitude difference, would result, and these
cannot be corrected by the balance setting components. Moreover, these spikes can
overdrive the input of the auxiliary amplifier and saturate its output; error correction
in such conditions is inoperative. Thus for high speed amplification some form of
time delay compensation is mandatory.
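The spike mechanism is simple to visualize: summing a step with its slightly delayed, inverted replica (as an uncompensated feedforward system effectively does) leaves a full-amplitude residue lasting as long as the delay mismatch. The sketch below uses purely illustrative numbers (times in ns):

```python
# Residual spike left when a step and its delayed replica are subtracted.
# Times in ns; the 100 ps mismatch is an arbitrary illustrative value.

def step(t, t0=0.0):
    return 1.0 if t >= t0 else 0.0

delay = 0.1                                   # 100 ps delay mismatch
times = [i * 0.01 for i in range(200)]        # 0 .. 2 ns
residue = [step(t) - step(t, delay) for t in times]
print(max(residue))                           # 1.0 -> full amplitude spike
```

No static balance adjustment can remove this residue, since it is zero everywhere except during the mismatch interval; only a matching delay can.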
In Fig. 5.4.7 we see the general principle of time delay compensation. Since each
amplifier has its own time delay acting on a different summing node, at least two
separate time delay circuits are needed. Here, τ_1 compensates the delay of the main
amplifier, allowing the auxiliary amplifier input to form the difference between the
input and the β-attenuated signal with the correct phase. Likewise, τ_2 does so for the
auxiliary amplifier delay, for correct summing at the output.

1
A1( s ) τ2

β Rf

τ1 Re o ZL
s

2
∆ A2( s )

Fig. 5.4.7: Time delay compensation principle for the basic feedforward amplifier: 7"
compensates the delay introduced by the main amplifier and 7# compensates the delay
introduced by the auxiliary amplifier.

5.4.5 Circuits With Local Error Correction

In this section we shall briefly discuss a few simple circuits in which we shall
employ our knowledge of feedback and feedforward error correction. We shall
demonstrate how the same technique, which was developed at the system level, can
also be applied at the local (subsystem) level. Local error correction often gives better
results, since the linearization of individual amplifier stages lowers the requirements
or indeed completely eliminates the need for system level correction. In many
applications, such as oscilloscopes and adaptable data acquisition instrumentation,
system level correction can be difficult to implement, owing to variable inter-stage
conditions (variable attenuation, gain, DC level setting, trigger pick up delays, etc.).
The most interesting circuits to which the error correction is applied are the
differential amplifier and the differential cascode amplifier. We have discussed them
briefly in Part 3, Sec. 3.7. Here we shall review the analysis with the emphasis on
their non-linearity. The dominant non-linearity mechanism is the familiar exponential
dependence of I_e on V_be of a single transistor:

    I_e = I_s (e^(q_e V_be / k_B T) − 1)                                  (5.4.44)


Here I_s is the saturation current (about 10⁻¹⁴ A in silicon transistors,
depending on the dopant concentrations in the p–n junctions); the remaining symbols
have the usual meaning (see Part 3, Sec. 3.1). Under normal operating conditions the
DC emitter bias current I_e0 exceeds I_s by a factor of at least 10¹¹, simplifying
Eq. 5.4.44 to:

    I_e ≈ I_s e^(q_e V_be / k_B T)       for I_e0 ≫ I_s                   (5.4.45)

For small signals, not altering the junction temperature considerably, we can assume
V_T = k_B T/q_e to be constant. So if we also neglect the dependence of the current gain
and I_s on temperature and biasing, as well as the internal resistance and capacitance
variations with the signal, we can express the non-linearity in the form of the internal
emitter resistance:

    r_e = ∂V_be/∂I_e                                                      (5.4.46)

For the differential pair of Fig. 5.4.8 the effective resistance seen by their
base–emitter junctions is the sum:

    r_ed = ∂V_be1/∂I_e1 + ∂V_be2/∂I_e2                                    (5.4.47)

which, since one increases and the other decreases with the signal, varies much less
over the much larger input signal range than in the single transistor case.

[Fig. 5.4.8: schematics of the differential pair: supply V_cc, collector resistors R_c,
outputs V_c1, V_c2 with currents I_c1, I_c2, inputs V_b1, V_b2 at Q_1, Q_2, tail
current source I_e0 to V_ee; version b) adds the emitter degeneration resistors R_e]

Fig. 5.4.8: a) A simple differential amplifier, showing the voltages and currents
used in the analysis. b) The same, but with the emitter degeneration resistors R_e.

For the amplifier in Fig. 5.4.8a we must first realize that the differential input
voltage is equal to the difference of the two V_be junction voltages:

    V_di = V_b1 − V_b2 = V_be1 − V_be2                                    (5.4.48)

We calculate the V_be junction voltages from Eq. 5.4.45:

    V_be = V_T ln (I_e/I_s)                                               (5.4.49)


In an integrated circuit, I_s1 ≈ I_s2, so the ratio of emitter currents is:

    I_e1/I_e2 = e^(V_be1/V_T) / e^(V_be2/V_T) = e^((V_be1 − V_be2)/V_T) = e^(V_di/V_T)   (5.4.50)

The collector current I_c = α_F I_e; also, the sum of emitter currents must be equal to
the bias provided by the constant current source, I_e0. Thus:

    I_c1 + I_c2 = α_F (I_e1 + I_e2) = α_F I_e0                            (5.4.51)

From the last two equations we obtain:

    I_c1 = α_F I_e0 / (1 + e^(−V_di/V_T))   and   I_c2 = α_F I_e0 / (1 + e^(V_di/V_T))   (5.4.52)

The collector voltage is equal to the supply voltage less the potential drop R_c I_c:

    V_o1 = V_cc − R_c I_c1   and   V_o2 = V_cc − R_c I_c2                 (5.4.53)

Therefore the differential output voltage will follow a hyperbolic tangent function of
the input differential voltage:

    V_do = V_o1 − V_o2 = R_c (I_c2 − I_c1)

         = α_F R_c I_e0 [1/(1 + e^(V_di/V_T)) − 1/(1 + e^(−V_di/V_T))]

         = α_F R_c I_e0 tanh (−V_di/2V_T)                                 (5.4.54)

[Fig. 5.4.9: plot of the normalized DC transfer function V_do/(α_F R_c I_e0) versus
V_di/V_T over the range −8 … +8: curve a) without and curve b) with emitter
degeneration]

Fig. 5.4.9: a) The DC transfer function of the differential amplifier in Fig. 5.4.8a: the
input differential voltage is normalized to V_T and the output is normalized to α_F R_c I_e0.
b) With the emitter degeneration, as in Fig. 5.4.8b, the transfer function is more linear,
but at the expense of the gain (lower slope).

The system gain is represented by the slope of the plot, which, for V_di ≈ 0, has the
magnitude:

    |V_do/V_di| = α_F R_c I_e0 / 2V_T = α_F R_c I_e0 / (r_e I_e0) = α_F R_c / r_e   (5.4.55)

where r_e = 2V_T/I_e0 is the internal emitter resistance of each transistor at its bias
current I_e0/2.
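The tanh law of Eq. 5.4.54 follows directly from the two collector currents of Eq. 5.4.52, and the two forms can be checked against each other numerically. The sketch below uses assumed example values for α_F, R_c and I_e0, with V_T ≈ 26 mV at room temperature:

```python
# DC transfer of the undegenerated differential pair: the raw current split
# of Eq. 5.4.52 versus the closed tanh form of Eq. 5.4.54 (example values).
import math

VT = 0.026                          # thermal voltage, ~26 mV at room temperature
aF, Rc, Ie0 = 0.99, 100.0, 0.010    # assumed example values

def Vdo(Vdi):
    # collector currents from Eq. 5.4.52 and their difference
    Ic1 = aF * Ie0 / (1 + math.exp(-Vdi / VT))
    Ic2 = aF * Ie0 / (1 + math.exp(+Vdi / VT))
    return Rc * (Ic2 - Ic1)

for Vdi in (-0.1, -0.01, 0.0, 0.01, 0.1):
    closed = -aF * Rc * Ie0 * math.tanh(Vdi / (2 * VT))
    print(Vdi, Vdo(Vdi), closed)    # the two columns agree
```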


Now, in wideband amplifier applications, the emitters are usually
‘degenerated’ by the addition of external resistors R_e. The degeneration resistor acts
as a local current feedback, extending the linear part of the DC transfer function to
I_e0 R_e + V_T, instead of just V_T. Of course, this reduces the gain to R_c/(R_e + r_e).
By considering one half of the differential pair and accounting for the bias
current I_e0 and the signal current i flowing into R_e, the input voltage can be
expressed from Eq. 5.4.44 as:

    V_in = (i + I_e0) R_e + V_T ln [(i + I_e0)/I_s]                       (5.4.56)

and by differentiating this we obtain:

    ∂V_in = R_e ∂i + V_T/(i + I_e0) ∂i                                    (5.4.57)

We separate the linear and non-linear components:

    ∂V_in = (R_e + V_T/I_e0) ∂i − V_T i/[I_e0 (i + I_e0)] ∂i              (5.4.58)
              (linear)             (non-linear)

We define the loading factor x as the ratio of the signal current to the bias current:

    x = i/I_e0                                                            (5.4.59)

So we can express the incremental non-linearity (INL) factor N as the ratio of the
non-linear gain component to the linear one:

    N(x) = x/(1 + x) · V_T/(V_T + I_e0 R_e)                               (5.4.60)

Generally N can be (and usually is) a function of many variables, not just one.
In a similar way we can derive the INL for the differential pair, where the
differential input voltage is:

    V_in = i R_e + (V_be1 − V_be2) = i R_e + V_T ln [(I_e0 + i)/(I_e0 − i)]   (5.4.61)

The linear and non-linear components are:

    ∂V_in = (R_e + 2V_T/I_e0) ∂i + 2V_T i²/[I_e0 (I_e0² − i²)] ∂i         (5.4.62)

and the INL:

    N(x) = x²/(1 − x²) · 2V_T/(2V_T + I_e0 R_e)                           (5.4.63)
This expression can be used to estimate the amount of error for a given signal
and bias current, which an error correction scheme attempts to suppress.
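Eq. 5.4.63 gives a quick feel for how much residual distortion is left for a correction scheme to handle. The sketch below tabulates N for a few loading factors and degeneration values; the bias and resistor values are assumed examples:

```python
# Incremental non-linearity of the degenerated differential pair, Eq. 5.4.63.
# Bias current and degeneration resistor values are assumed examples.
VT  = 0.026      # thermal voltage [V]
Ie0 = 0.010      # 10 mA bias current

for Re in (0.0, 10.0, 50.0):          # emitter degeneration [ohm]
    for x in (0.1, 0.5, 0.9):         # loading factor x = i/Ie0, Eq. 5.4.59
        N = x**2 / (1 - x**2) * 2*VT / (2*VT + Ie0*Re)   # Eq. 5.4.63
        print(Re, x, round(N, 4))
```

As expected, N grows rapidly as the signal current approaches the bias current (x → 1), while degeneration scales the whole table down by 2V_T/(2V_T + I_e0 R_e).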
In the following pages we are going to show a collection of differential
amplifier circuits, employing some form of error correction, either feedback,
feedforward or both. We are also going to present their frequency and time domain


performance to compare how the bandwidth has been affected as a result of increased
circuit complexity (against a simple cascode amplifier).
For a fair comparison all circuits have been arranged to suit the test setup
shown in Fig. 5.4.10; the amplifiers were set for a gain of 2, using the same type of
transistors (BF959) and biased by a 10 mA current source. The input signal was
modeled by a 10 mA step driven current source, loaded by two 50 Ω ∥ 1 pF networks.
An equal network was used as the output load. Finally, all circuits were ‘tuned’ for a
Bessel system response (MFED), using only capacitive emitter peaking (of course,
inductive peaking can be used in the final design). Note that this setup offers only a
relative indication of what can be achieved, not a final optimized design.
[Fig. 5.4.10: test circuit: a 10 mA step current source i(s), loaded by two
50 Ω ∥ 1 pF networks, drives the amplifier under test (input Δv_i, gain A = 2); an
equal 50 Ω ∥ 1 pF network loads the output Δv_o]
Fig. 5.4.10: Test set up used to compare different amplifier configurations.

[Fig. 5.4.11: schematic of the reference differential cascode: input pair Q_1, Q_2 with
emitter resistors R_e and peaking capacitor C_e, tail current 2I_e0 to V_ee; cascode
pair Q_3, Q_4 biased at V_bb through the base networks R_b, C_b; output currents
I_e0 ∓ i_o]
Fig. 5.4.11: This simple differential cascode amplifier, employing no error correction, is
used in the test set up circuit of Fig. 5.4.10, representing the reference against which all
other amplifiers are compared. The emitter peaking and base impedance are adjusted for a
MFED response.

The simple differential cascode amplifier of Fig. 5.4.11, employing no error


correction, represents a reference against which all other amplifiers will be compared.
The U",# emitter peaking capacitor Ge and the U$,% base network Vb Gb are adjusted


for a MFED response. Fig. 5.4.12 and 5.4.13 show the frequency domain and time
domain responses, respectively.

[Fig. 5.4.12: plots of the gain |v_o/v_i| in dB, the phase φ in degrees and the
envelope delay τ in ns, versus frequency from 1 MHz to 10 GHz]
Fig. 5.4.12: Frequency domain performance of a simple differential cascode amplifier of
Fig. 5.4.11 (no error correction) used in the test set up circuit of Fig. 5.4.10. This will be
used as the reference for all other circuits. The bandwidth achieved is a little over 400 MHz.

[Fig. 5.4.13: step responses Δv_o and −Δv_i in mV, versus time from 0 to 4 ns]

Fig. 5.4.13: Time domain performance of a simple differential cascode amplifier of
Fig. 5.4.11 (no error correction) used in the test set up circuit of Fig. 5.4.10. This will be
used as the reference for all other circuits. The input voltage indicates the input
impedance dynamics in the first 100 ps and up to 1.5 ns. Note also the small undershoot at
the output, due to the cross-talk via C_bc of Q_1,2. The output rise time is less than 1 ns.

The first circuit to be compared with the reference is shown in Fig. 5.4.14. The
circuit is due to C.R. Battjes [Ref. 5.18] and is functionally a Darlington connection
(Q_1, Q_2), improved by the addition of Q_3. Used as the differential input stage of a
cascode amplifier, it enhances the input characteristics and increases both the output
current handling and the bandwidth. At first glance it may seem that the diode
connected Q_3 (shorted collector and base) can not do much. However, it allows Q_1 to
carry a current much larger than the Q_2 base current, delivering it to the resistance R_e
and lowering the impedance seen by the base of Q_2, thus extending the bandwidth. The
compound device has about twice the current gain of a single transistor.


[Fig. 5.4.14: a) schematic of the improved Darlington (Q_1, Q_2, diode connected
Q_3, emitter resistor R_e), with the input and output currents expressed in terms of
the current gain β; b) schematic of the differential cascode input stage built from two
such compound devices (Q_1–Q_3 and Q_5–Q_7), cascode transistors Q_4, Q_8
biased at V_bb, emitter peaking C_e, emitter resistors R_e, tail current 2I_e0]

Fig. 5.4.14: a) Improved Darlington. b) Used as the input differential stage of the
cascode amplifier — see the performance in the following figures.

[Fig. 5.4.15: plots of the gain |v_o/v_i| in dB, the phase φ in degrees and the
envelope delay τ in ns, versus frequency from 1 MHz to 10 GHz]
Fig. 5.4.15: Frequency domain performance of the differential cascode amplifier using
the circuit of Fig. 5.4.14b. The bandwidth is about 560 MHz. Note the input voltage
changing slope above 2 GHz.

[Fig. 5.4.16: step responses Δv_o and −Δv_i in mV, versus time from 0 to 4 ns]
Fig. 5.4.16: Time domain performance of the differential cascode amplifier of Fig. 5.4.14b.
The undershoot has increased, but the rise time is less than 0.7 ns.


In Fig. 5.4.17, Q_1 and Q_2 form the differential amplifier, whose error current i_1
is sensed at the resistor R_1 and is available at the collector of Q_3 for summing with
the output current i_2 (error feedforward) further on in the circuit.
[Fig. 5.4.17: schematic: differential pair Q_1, Q_2 with the error sensing resistor R_1;
transistors Q_3, Q_4 carry the error current i_1 and the signal current i_2 through
R_2 and R_3; bias currents I_1, I_2 from V_ee]

Fig. 5.4.17: A simple differential amplifier with feedforward error correction. Accurate
matching of transistors is required only for DC error reduction, not for the feedforward
linearization. Here i_2 is the differential current, whilst the error current, i_1, sensed at R_1,
is available at the Q_3 collector to be added to the output current further on in the circuit.

Two such circuits can form a differential amplifier, employing a double error
feedforward correction, as shown in Fig. 5.4.18. The error currents can now be
summed directly with the output currents, without further processing.
However, the main problem with this linearization technique is that R_1 must
be relatively high for suitable error sensing, so it can reduce the bandwidth
considerably. In part the bandwidth can be improved by adding precisely matched
capacitors in parallel to both R_2 and R_3 (emitter peaking), but then the input
impedance can become negative and should be compensated accordingly. This
negative input impedance compensation is easily implemented at the ±v_i inputs, but,
by adding it to the inputs connected to R_1, the error sensing will be affected, since a
part of i_1 would flow into the compensating networks, reducing the error correction
at high frequencies.

[Fig. 5.4.18: schematic: two stages from Fig. 5.4.17 combined, with the error sensing
resistor R_1 shared between the ±v_i halves, emitter resistors R_2, R_3, and the
output currents I_o ± i_o summing the signal and error currents]

Fig. 5.4.18: Two circuits from Fig. 5.4.17 can form a differential amplifier with a double
error feedforward directly summed with the output.


[Fig. 5.4.19: plots of the gain |v_o/v_i| in dB, the phase φ in degrees and the
envelope delay τ in ns, versus frequency from 1 MHz to 10 GHz]

Fig. 5.4.19: Frequency domain performance of the circuit from Fig. 5.4.18. The bandwidth
can be high (560 MHz), but for suitable error sensing the required high value of R_1 would
compromise it. The plot of v_i indicates the negative input impedance at high frequencies,
which would need additional compensation networks at both inputs and at R_1.

[Fig. 5.4.20: step responses Δv_o and −Δv_i in mV, versus time from 0 to 4 ns]

Fig. 5.4.20: Time domain performance of the circuit in Fig. 5.4.18. Note the jump of v_i in
the first 100 ps and a pronounced undershoot in Δv_o.

A very interesting circuit, known as the ‘Cascomp’ (compensated cascode),
shown in Fig. 5.4.21, was invented by Patrick Quinn [Ref. 5.53–54]. Here the usual
differential cascode amplifier, Q_1–Q_4, is enhanced by indirect error sensing and
feedforward error correction. The error, generated at the emitters of Q_1,2, is also
available at the emitters of Q_3,4, where it appears almost identical (owing to the same
bias and signal currents and thus similar emitter resistances). Sensed by Q_5,6 and
amplified adequately, the error is subtracted from the Q_3,4 collector currents. The
addition of a further common base stage, Q_7,8, lowers the summing node impedance,
which increases the summing precision, and at the same time improves the bandwidth
of the error sensing amplifier, Q_5,6. Since the error signal voltage at the Q_3,4 emitters
is low (several mV, or so), the emitter resistances R_e2 of the error amplifier Q_5,6 can
also be very low, or even eliminated completely, without degrading the linearity.


Fig. 5.4.21: The ‘Cascomp’ amplifier employs indirect error sensing and error feedforward.

Fig. 5.4.22: Frequency domain performance of the ‘Cascomp’ amplifier. The
bandwidth is about 500 MHz, but note also the high flatness of the envelope delay,
right up to the bandwidth limit.

Fig. 5.4.23: Time domain performance of the ‘Cascomp’ amplifier. The rise time is
about 0.7 ns and the initial undershoot is very low.


Fig. 5.4.24 shows a similar circuit, but with feedback error correction. The
error signal is taken at the same point as before, but its amplified version is applied to
the emitters of the input differential pair Q1,2. Unfortunately, in spite of its attractive
concept this configuration is not suitable for high frequencies, since capacitive emitter
peaking cannot be used (a capacitance in parallel with R1 would short the auxiliary
amplifier outputs, reducing the amount of error correction at high frequencies); thus
the bandwidth is only about 180 MHz. But when bandwidth is not the primary design
requirement this amplifier can be a valid choice. We shall not plot its performance.

Fig. 5.4.24: Differential cascode amplifier with indirect error sensing and
error feedback. Unfortunately this configuration tends to be rather slow.

The circuit in Fig. 5.4.25 represents a modification of the feedforward error
correction in which the correction current is summed at the same point where the
error was generated. This configuration requires reasonably well matched devices
with a high current gain β. Transistors Q1,2 form the differential pair, Q5,6 form the
error sensing amplifier, and Q3,4 provide an additional Vbe voltage to enhance the error
amplifier dynamic range.

Fig. 5.4.25: A possible modification of error feedforward, employing direct
error sensing and direct feedforward error correction.


Fig. 5.4.26: Another evolution of the Cascomp is realized by feedback derived error
sensing and feedforward error correction. The junction of Ra and Rb is a ‘virtual
ground’ at which the error of the Q1,2 pair is sensed and amplified by the auxiliary
amplifier Q7,8. The error is subtracted from the output current at the emitters of Q5,6.

Fig. 5.4.27: Frequency domain performance of the Cascomp evolution amplifier. The
bandwidth is a little over 400 MHz.

Fig. 5.4.28: Time domain performance of the Cascomp evolution amplifier.


Fig. 5.4.29: This ‘output impedance compensation’, also patented by Pat Quinn, has
direct error sensing and direct feedforward error correction, performed by Q5–8.

Fig. 5.4.30: Frequency domain performance of the amplifier in Fig. 5.4.29. The
bandwidth is about 400 MHz.

Fig. 5.4.31: Time domain performance of the amplifier in Fig. 5.4.29.


5.4.6 The Tektronix M377 IC

In this section we shall have a brief discussion of the M377 IC, made by
Tektronix for their 11 000 series oscilloscope, later also employed in many other
models. The M377 was described in [Ref. 5.50 and 5.51] by its designer John Addis.
When it was designed, back in the mid-1980s, this integrated circuit started a
revolutionary trend in oscilloscope design which is still evolving today, in the first
decade of the XXI century.
need for manual adjustments as much as possible. Classical oscilloscopes, made with
discrete components, required a lot of manual adjustment; for example, the Tektronix
7104 oscilloscope with its two single-channel plug-ins required 32 manual
adjustments for correcting the thermal effects only, many of which needed iterative
corrections in combination with one or even several other settings. This
caused long calibration periods and a lot of bench testing by experienced personnel,
increasing the production cost considerably. In contrast, the M377 needs only one
manual adjustment (for optimizing the transient response), whilst all other calibration
procedures are performed at power–up by a microprocessor, which varies the settings
using several DC voltage control inputs. Of course, the calibration can also be done
upon the user’s request, by pressing a push button on the front panel. Some settings,
such as the high impedance attenuator calibration, are done in production by laser
trimming of the components.
The elimination of almost all electromechanical adjustments and their
replacement by electronic controls resulted in circuit miniaturization, reducing strays
and parasitics, thus improving bandwidth. But it also increased the circuit complexity
(the number of active devices) and density, resulting in higher power dissipation, and
consequently higher temperatures, requiring careful thermal compensation.
The first step in this direction was the ‘Cascomp’, [Ref. 5.53–5.54], which we
have met already in Fig. 5.4.21. The first instrument to use the Cascomp was the
Tektronix 2465 oscilloscope. Although it represented a significant improvement in
precision over a simple cascode differential pair, it also had some limitations. First,
the addition of the error amplifier, Q5,6, helps to reduce both the nonlinearity and, if
the operating points are chosen carefully, also the thermally induced tails. But since
both the main and the error amplifier have nearly equal gain, the system gain cannot
be high, otherwise the error amplifier’s nonlinearity would show up.
Another problem is the stack of three transistor pairs in the signal path, which
results in an increased gain sensitivity to α variations. The gain, as a function of
temperature, increases as α³, about 225 ppm/K for a Cascomp using transistors with
β = 80, compared to some 150 ppm/K of the standard cascode. But a standard
cascode also has a counteracting temperature dependent gain term, ≈ −185 ppm/K
(at Ie = 20 mA, Tj = 60 °C and Re = 40 Ω), owing to the dynamic emitter resistance
re, which in the Cascomp is compensated. To some degree the α effects can be
compensated by adding resistors Rb3,4 to the bases of Q3,4, but these resistors can
never be made large enough, owing to the transient response requirement for a low base
resistance (see Part 3, Fig. 3.4.6 and the associated analysis).
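The α³ gain sensitivity can be checked with a quick numerical sketch; the β temperature coefficient used below (about +0.6 %/K) is an assumed typical value, not a figure from the text:

```python
# Rough check of the alpha^3 gain tempco of the Cascomp (three transistor
# pairs stacked in the signal path) against the alpha^2 of a two-pair
# standard cascode. The beta tempco of +0.6 %/K is an assumed typical
# value, used for illustration only.

beta = 80.0                          # quoted current gain
d_beta_rel = 6.0e-3                  # assumed relative beta change per kelvin

# alpha = beta/(beta + 1), so d(alpha)/alpha = (d(beta)/beta)/(beta + 1)
d_alpha_rel = d_beta_rel / (beta + 1.0)

cascomp_ppm = 3.0 * d_alpha_rel * 1e6   # gain goes as alpha^3
cascode_ppm = 2.0 * d_alpha_rel * 1e6   # two junctions in a standard cascode

print(round(cascomp_ppm), round(cascode_ppm))   # 222 148, close to the quoted 225 and 150
```

With this assumed β drift the numbers land within a few ppm/K of the values quoted above, which suggests the quoted figures indeed come from the α³ (respectively α²) scaling.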
The three transistor pair stack (not forgetting the current source, I1)
further disadvantages the IC against the discrete design, since for a given supply
voltage the linear output voltage range is severely reduced. Also, any level shifting
back down to the negative supply voltage, needed by an eventual subsequent stage,
requires a greater level shift than in a conventional cascode.
Finally, the Cascomp has a limited ability to handle overdrive signals. The
emitters of Q3,4 do not ‘see’ the whole signal during overdrive, thus the error
amplifier signal is clipped off at the peaks, and as a result the main and the error amplifier
experience different thermal histories. Additional circuitry is needed to ensure correct
input signal clipping and acceptable thermal behavior.
All these limitations dictated a different approach in the M377 IC. The basic
wideband amplifier block is shown in Fig. 5.4.32 and an improved differential version
in Fig. 5.4.33. This is a feedback amplifier, which can be viewed (oversimplified) as a
compound transistor, with the Q1 base, the Q3 emitter and the Q3 collector
representing the base, emitter and collector of the compound device, respectively.
Compared with a single transistor operating at the same current, such a compound
transistor has a greater gm and β, and also much better linearity.
Fig. 5.4.32: a) The M377 IC main amplifier block, basic scheme. b) Dominant pole
compensation. c) Inductive peaking. d) Inductance created by the Q5 emitter impedance.


Fig. 5.4.33: a) The basic wideband amplifier block of the M377 IC can be viewed as a
compound transistor, in which the Q1 base, the Q3 emitter, and the Q3 collector
represent the base, emitter and collector of the compound device. b) Two such blocks
form a differential amplifier. An improved design results if Q3 is of a Darlington
configuration. To a high precision the output current is io = vi/Re3. The impedance Zc
is the compensation shown in Fig. 5.4.32d.

However, the α of Q3 is improved neither by Q1 nor by Q2. This could be
corrected if, for example, the Q1 collector were connected to the Q3 emitter,
which, besides improving α, would also bootstrap the collector of Q1, lowering the
input capacitance. Unfortunately, the low collector voltage and, consequently, the
operating point of Q1 would reduce its cutoff frequency. Also, owing to feedback
through the collector–base capacitance Ccb of Q1, such a circuit could have a negative
input impedance at very high frequencies, compromising stability. The best solution
for a high fT (≈ 1 GHz in the M377) is to use a Darlington for Q3.
One of the most important parameters of a high speed amplifier is the
overdrive recovery time. Following the ever increasing requirements for speed and
precision, we usually specify the time (in ns) needed to settle to, say, 0.1% or even
0.01% of the final level, following a relatively long overdrive period and an
overdrive of 1.2–1.5 times the maximum level. Fig. 5.4.34 shows the configuration
used in the M377, which recovers to 0.04% in just 6 ns after a 2 V overdrive; recovery
to 0.01% takes about 25 ns. The current sources and Schottky diodes, added to the
original circuit of Fig. 5.4.33b, allow separate feedback paths under overdrive. It is
important to realize that in this circuit, if not compensated, only the half with the
negative input voltage would be overdriven. The positive part then takes all the
current provided by the I3 source. Under this condition D5 and D6 cut off, while D4
conducts owing to I2 and I4, allowing the feedback of the lower circuit half to remain
operative, preserving the delicate thermal balance.
Fig. 5.4.34: The M377 amplifier with the components for high speed overdrive recovery.

The frequency domain and time domain responses of the circuit in Fig. 5.4.34
are shown in Fig. 5.4.35 and 5.4.36, respectively. Note that the compensating
impedance Zc was adjusted to the needs of the transistors used for circuit simulation
(BF959, as in all previous circuits, thus allowing comparison); therefore the graphs do
not represent the true M377 performance capabilities.
Although it can be argued that the simulation has been performed using a
simplified basic version of the circuit, there are a few points to note which are
nevertheless valid. First, there is the potential instability problem (owing to feedback),
indicated by the phase plot turning upward and the envelope delay going positive
above some 4 GHz. If proper care is not taken, especially in compensating the parasitics
and strays in an IC environment, the step response might display some initial
waveform aberrations, or even ringing and oscillations.
Another point of special attention is the parasitic capacitance to the substrate
of the Schottky diodes. Being within the feedback loop, these capacitances can be
troublesome. A proper forward bias for low impedance is needed to move those
unwanted poles (transfer function zeros) well above the cutoff frequency. A high bias
would result in high temperatures, which, in a densely packed IC such as this one, can
be problematic. Also, since noise increases with temperature, the bias cannot be as
high as one would like. The 0.5 mA bias offers a good compromise.


As an advantage, judging by the constant slope of the input voltage plot, the
circuit input impedance is well behaved, so the loading of a previous stage should
not be critical. Likewise, the active inductive peaking, realized by the base resistance
Rb and transformed into an inductance at the Q5 emitter (as shown in Fig. 5.4.32d),
offers a simple way of adjusting the frequency compensation network.

Fig. 5.4.35: Frequency domain performance of the amplifier in Fig. 5.4.34. The
bandwidth of the simulated circuit is about 500 MHz, but this is owing to the transistor
used (BF959) and the frequency compensation network adjusted accordingly;
therefore the graph is not representative of the actual M377 IC performance.

Fig. 5.4.36: Time domain performance of the amplifier in Fig. 5.4.34. The comment in
the caption of Fig. 5.4.35 also holds here.

Now take a close look at the circuit in Fig. 5.4.34; in particular, the diode pairs
D2,3 and D5,6, the resistors Re3,6 and the current source I3. If another such block is
added in parallel (with different values of the resistors Re3,6), and if the current sources
are switched on one at a time, gain switching in steps can be achieved. The
bandwidth would change only slightly with switching. Fig. 5.4.37 shows such a circuit
with two gain values, but three or more can easily be added.
Another way of changing the vertical sensitivity is to use a fixed gain
amplifier and switch the attenuation at its output, as shown in Fig. 5.4.38. In this way
the bandwidth is preserved, but attenuation switching has its own weak points, such
as a reduced signal range and higher noise at higher attenuation.
As a point of principle, switching the amplifier gain is preferred to fixed gain
with switched attenuation. Although it alters the bandwidth, gain switching preserves
the signal’s dynamic range at all settings, whilst the system with fixed gain and an
attenuator will have a comparable dynamic range only with no attenuation; at any
other attenuation setting the dynamic range would be proportionally reduced. Also,
gain switching systems will preserve the same noise level, whilst the fixed gain
systems will have the lowest signal to noise ratio at maximum attenuation.
Fig. 5.4.37: Gain switching in steps was achieved by adding one or more emitter current
sources (I1, I2, …) with appropriate resistor values and Schottky diode pairs, and switching on
one current source at a time.
Fig. 5.4.38: With an R–2R network the attenuation can be switched in steps by applying a
positive DC voltage Vb2,3,4 to Q2,6, Q3,7, Q4,8, one pair at a time; at each collector the load is
R || 2R = 2R/3. With Q2,6 on the gain is A = 2R/3Re, with Q3,7 on A = R/3Re, and with Q4,8
on A = R/6Re, effectively halved by each step. A similar circuit was used in the Tek 2465.
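The halving of the gain at each step of the R–2R ladder can be verified numerically; the values of R and Re below are arbitrary illustration choices:

```python
# Sketch of the R-2R collector load attenuator of Fig. 5.4.38.
# R and Re are arbitrary illustration values, not from the text.

def parallel(a, b):
    return a * b / (a + b)

R, Re = 300.0, 100.0

load = parallel(R, 2 * R)            # each collector sees R || 2R = 2R/3

gains = [2 * R / (3 * Re),           # Q2,6 on:  A = 2R/3Re
         R / (3 * Re),               # Q3,7 on:  A = R/3Re
         R / (6 * Re)]               # Q4,8 on:  A = R/6Re

steps = [gains[i] / gains[i + 1] for i in range(len(gains) - 1)]
print(load, gains, steps)   # 200.0 [2.0, 1.0, 0.5] [2.0, 2.0] -- halved per step
```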


For small gain or attenuation changes of, say, 1:4, as commonly found in
oscilloscope amplifiers, these differences can be small. However, in the M377 the gain
switching had a 50:1 range. With such a high gain range the frequency compensation
needed to be readjusted (for the highest gain no compensation was needed).

5.4.7 The Gilbert Multiplier

A similar problem of compensation readjustment as a function of gain is
encountered with a continuously variable gain. Although used only occasionally, a
continuously variable gain is a standard feature of almost all oscilloscopes and no
manufacturer dares to exclude it, even in digital instruments (although there it is done
in very small steps).
In older analog instruments a simple potentiometer was used. This worked
well up to some 20 MHz. For higher frequencies the ‘pot’ size and the variable
impedance at the slider presented major difficulties, even if the required gain
change was within a relatively small range, ordinarily about 3:1. At Tektronix an
ingenious wire pot was used, having a bifilar winding to cancel the inductance, but
its parasitic capacitance, which also varied with the setting, caused too much
cross-talk at higher frequencies. Finally, there was also the mechanical problem of
placing the pot at the correct point in the circuit and still being able to bring its shaft
out to the front panel, aligned with the main attenuator switch.
A much more elegant choice is to use some sort of electronic gain control, by
using either a voltage or a current controlled amplifier (VCA or ICA). Such an
amplifier modulates (ideally) only the signal amplitude, a process which can be
mathematically described as multiplication of a signal by a DC voltage; thus we often
refer to those amplifiers as multipliers or modulators. Of course, electronic gain
control has its own problems and great care is needed to make it linear enough and
fast enough, as well as not too noisy. But it solves the problem of mechanical pot
placement, since it now has to handle only a DC control signal, so the pot can be
placed at any convenient place. In digital systems, the pot is replaced by a digital to
analog converter (DAC; in lower speed instruments, a multiplying DAC can be used
to attenuate the signal directly, replacing the VCA altogether).
Oscilloscopes do not need to exploit the full modulation range, as RF
modulators normally do. In contrast to RF modulators, which are four–quadrant
devices (both the carrier and the modulation are AC signals), the gain control in
oscilloscopes needs to work in two quadrants only (AC signal and DC control); four
quadrants would allow simple gain inversion, but this is more accurately done by a
switch. Therefore the modulation cross-talk or the common mode rejection at HF is
not an issue. On the other hand, DC stability is important since it directly affects
measurement accuracy. Whilst RF modulators operate over a limited frequency range,
for oscilloscopes the wideband gain flatness at all gain settings is also very important.
The simple differential amplifier in Fig. 5.4.8a can perform the variable gain
control by varying the emitter current. If we assume that the modulation voltage is
vM = VM − Vbe − Vee, the modulation current is:

    2 Ie = vM / Re                                                    (5.4.64)


By inserting this into the gain equation of the differential amplifier, the multiplication
function results:

    vo = vbb · 2R/(2 re) = vbb · R Ie/VT = vbb · vM · R/(2 Re VT)     (5.4.65)

Unfortunately the bandwidth is also proportional to the emitter current, and with the
usual values of R and the stray capacitances the dominant pole at low currents can be
very low. In addition, the output common mode level also changes with the current.
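Eq. 5.4.64–65 can be illustrated with a short numerical sketch; the component values used below are arbitrary illustration choices:

```python
# Sketch of Eq. 5.4.64-65: the differential pair gain is proportional
# to the modulation current. Component values are illustration choices.

VT = 0.026                       # thermal voltage at room temperature [V]
R = 500.0                        # collector load [ohm]
Re = 1000.0                      # current source emitter resistor [ohm]

def gain(vM):
    Ie = vM / (2.0 * Re)         # Eq. 5.4.64: 2*Ie = vM/Re
    re = VT / Ie                 # dynamic emitter resistance
    return 2.0 * R / (2.0 * re)  # Eq. 5.4.65: A = R/re = R*Ie/VT

# doubling the control voltage doubles the gain
# (and, unfortunately, the dominant pole moves with it as well)
print(gain(1.0), gain(2.0))
```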
Almost all wideband multipliers are based on one of the variations of the basic
circuit, now known as the Gilbert multiplier, after its inventor Barrie Gilbert (see
[Ref. 5.56–5.62]). The circuit development can be followed from Fig. 5.4.39a by
noting that if the output is to be a linear function the input has to be nonlinear.

Fig. 5.4.39: The Gilbert multiplier development. a) The gain of an ordinary
differential amplifier is proportional to the emitter current, but so is the bandwidth.
Also, for a linear output a nonlinear input is required. b) Inverting the function by
linearizing the gm stage and loading it by simpler p–n junctions gives a nonlinear
output, such as required by the circuit in a) to give a linear output. c) The simple
combination of b) and a) gives the so called ‘translinear’ stage.


Starting from the exponential Ie–Vbe relationship:

    Ie = Is (e^(Vbe/VT) − 1)                                          (5.4.66)

and considering that the intrinsic current Is ≈ 10⁻¹⁴ A, then even for currents as low
as 1 nA we can say that Ie ≫ Is; so we are not making a big mistake if we use:

    Ie ≈ Is e^(Vbe/VT)                                                (5.4.67)

Since the differential pair was made by the same IC process, we can expect that the
devices will be reasonably well matched, so Is1 ≈ Is2, and their temperature
dependence will also be well matched if the transistors are at the same temperature:

    Ie1/Ie2 = (Is1/Is2) e^((Vbe1 − Vbe2)/VT) ≈ e^((Vbe1 − Vbe2)/VT)   (5.4.68)

In order to achieve a linear output current the expected input voltage should follow the
logarithmic function:

    vbb = ΔVbe = VT ln(Ie1/Ie2)                                       (5.4.69)
If ie is the modulation component superimposed on the DC current Ie0, we can write:

    Ie1 = Ie0 + ie   and   Ie2 = Ie0 − ie                             (5.4.70)

Let x be the AC to DC component ratio:

    x = ie / Ie0                                                      (5.4.71)

Then the currents are:

    Ie1 = Ie0 (1 + x)   and   Ie2 = Ie0 (1 − x)                       (5.4.72)

and, returning to Eq. 5.4.69, we obtain the required nonlinear input function:

    vbb = VT ln [Ie0 (1 + x) / (Ie0 (1 − x))] = VT ln [(1 + x)/(1 − x)]   (5.4.73)
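The logarithmic predistortion of Eq. 5.4.73 can be checked numerically; VT and Ie0 below are illustration values:

```python
import math

# Sketch of Eq. 5.4.69-73: the nonlinear input voltage which makes the
# differential pair output current a linear function of x. VT and Ie0
# are illustration values.

VT = 0.026           # thermal voltage [V]
Ie0 = 1.0e-3         # quiescent emitter current [A]

def vbb(x):
    # Eq. 5.4.73: vbb = VT * ln((1 + x)/(1 - x))
    return VT * math.log((1.0 + x) / (1.0 - x))

# cross-check against Eq. 5.4.69 with Ie1 = Ie0*(1+x), Ie2 = Ie0*(1-x):
x = 0.5
direct = VT * math.log((Ie0 * (1.0 + x)) / (Ie0 * (1.0 - x)))
print(vbb(x), direct)    # both about 28.6 mV; note that Ie0 cancels out
```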

How can this nonlinear relationship at the differential amplifier’s input be
realized? By inverting the function of the Fig. 5.4.39a circuit, as in Fig. 5.4.39b:
relatively large emitter degeneration resistors provide a linear voltage to current
conversion, and loading the collectors by p–n junctions similar to the base–emitter
junctions produces exactly the nonlinear relationship which the original circuit needs
at its input to produce a linear output.
Fig. 5.4.39c is thus a simple combination of the b) and a) circuits, but with some
very interesting properties. First, it is compensated and thus quite linear within the
entire input range (−1 ≤ x ≤ +1). Also, the vbb swing is small (less than VT) owing
to the very low impedance (≈ VT/Ie) of Q3,4, which means that the charging and
discharging of stray capacitances is minimal, so the bandwidth limit is set by the
collector impedances of Q5,6 and the transistors’ fT. Finally, the gain is entirely
‘current mode’, with low temperature dependence (both VT and Is cancel in the
expression for vo), and the gain control is set by the current ratio Ie2/Ie1.
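A small numerical model of the complete translinear cell (both stages of Fig. 5.4.39c) illustrates the cancellation; the bias currents below are illustration values:

```python
import math

# Sketch of the translinear cell of Fig. 5.4.39c. The input stage produces
# vbb = VT*ln((1+x)/(1-x)) (Eq. 5.4.73); the output pair converts it back
# into Ie2*(1 +/- x), so the current gain is exactly Ie2/Ie1, with VT and
# Is cancelling out. Bias currents are illustration values.

VT = 0.026
Ie1 = 0.5e-3         # input stage bias current
Ie2 = 2.0e-3         # output stage bias current

def output_currents(x):
    vbb = VT * math.log((1.0 + x) / (1.0 - x))   # first stage
    r = math.exp(vbb / VT)                       # second stage: ic1/ic2 ratio
    ic1 = 2.0 * Ie2 * r / (1.0 + r)
    ic2 = 2.0 * Ie2 / (1.0 + r)
    return ic1, ic2

ic1, ic2 = output_currents(0.3)
signal_out = (ic1 - ic2) / 2.0    # = x * Ie2: linear in x
signal_in = 0.3 * Ie1             # = x * Ie1
print(signal_out / signal_in)     # current gain = Ie2/Ie1 = 4
```

Note that VT appears in both stages only to cancel out, which is precisely why the bandwidth and linearity are so well behaved over the gain range.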


Fig. 5.4.40: DC transfer function of the Gilbert multiplier of Fig. 5.4.39, for 2Ie1 = 4 mA
and modulation current 2Ie2 = IM = 0.4–4 mA. The signal source (vs) range is ±0.5 V.

Fig. 5.4.41: The Gilbert multiplier bandwidth is almost constant over the 10:1 modulation
current range.

Another way of developing this ‘translinear gain cell’ can be followed by
observing Fig. 5.4.42. Two current mirrors, with the current gain proportional to the
emitter area A, can be interconnected by breaking the two emitters carrying the
mirrored currents and biasing them by a current source, thus forming a differential
amplifier, whose input nonlinearity is compensated by the nonlinearity of the
remaining two transistors.

Fig. 5.4.42: Another way of developing the Gilbert multiplier is by interconnecting two
current mirrors into a differential amplifier, whose input nonlinearity is compensated by the
nonlinearity of the two grounded transistors.

Once the basic multiplier is developed it is relatively easy to construct a four
quadrant multiplier, Fig. 5.4.43, by adding another differential pair with inputs in
parallel and the collectors cross-coupled.

Fig. 5.4.43: A four quadrant multiplier developed from the previous circuit. The gain is
controlled by current biasing the compensation transistors. Four quadrant operation results
from the fact that the cross-coupled collector currents cancel out if the two tail currents are
equal, and the third differential amplifier allows the tail currents to be distributed
symmetrically about the mid-bias value. Thus both the input and the control can be AC
signals and can also be mutually exchanged, without compromising the bandwidth or the linearity.


A further differential amplifier can be used to perform the voltage to current
conversion, splitting the current source (4Ie) into the required signal currents. The
gain is now controlled by biasing the compensation transistors with current.
By cross-coupling the collectors of the two differential pairs we have achieved
effective output current cancellation if the two control currents are equal and the two
emitter currents are equal. Varying any pair of currents about this mid–point changes
the polarity as well as the gain of the multiplier for the other input. Thus, a nice
byproduct of the four quadrant configuration is that the ‘signal’ and the ‘control’ inputs
can be mutually exchanged, without compromising the bandwidth or the linearity.
Returning to the M377 design, where only a two–quadrant multiplication is
needed, the multiplier in Fig. 5.4.44 represents the circuit used. It is almost identical
to the four–quadrant multiplier, except that the collectors are not cross-coupled and
the output is taken from a single pair. The differential circuit symmetry was retained
merely because of its good thermal balance and DC stability. For the same reason, all
four collectors must be equally loaded.

Fig. 5.4.44: Two quadrant multiplication is sufficient for the oscilloscope continuously
variable gain control; however, the same differential symmetry of the four quadrant multiplier
has been retained for the M377 because of good thermal balance and DC stability.

The multiplier circuits shown represent the basic linearization principle. In
actual implementations a number of additional linearization techniques are used,
most of them also patented¹ by Barrie Gilbert while he was at Tektronix, and some
later at Analog Devices [Ref. 5.56–5.61].

¹ It might be interesting to note that Barrie Gilbert published an article [Ref. 5.56] describing his
multiplier before Tektronix applied for a patent. Motorola quickly seized the opportunity and started
producing it (as the MC1495). Tektronix claimed priority and Motorola admitted it, but
nevertheless continued the production, since once published the circuit was in the ‘public domain’.
Barrie’s misfortune gave the opportunity to many generations of electronics enthusiasts (including the
authors of this book) to play with this little jewel and use it in many interesting applications. Thanks, Barrie!


Résumé of Part 5

In this part we have briefly analyzed some of the most important aspects of
system integration and system level performance optimization, with a special
emphasis on system bandwidth.
We have described the transient response optimization by a pole assignment
process called ‘geometrical synthesis’ and showed how it can be applied using
inductive peaking. We have discussed the problems of input signal conditioning, the
linearization and error reduction and correction techniques, employing the feedback
and feedforward topologies at either the system level or at the local, subsystem level.
We have also revealed and compared some aspects of designing wideband amplifiers
using discrete components and modern IC technology.
On the other hand, we have said very little about other important topics in
wideband instrumentation design, such as adequate power supply decoupling,
grounding and shielding, signal and supply path impedance control by strip line and
microstrip transmission line techniques, noise analysis and low noise design, and the
parasitic impedances of passive components. But we believe that those subjects are
extensively covered in the literature, some of it also cited in the references, so we
have tried to concentrate on the bandwidth and transient performance issues.
We have also said nothing about high sampling rate analog to digital
conversion techniques, now already established as the essential ingredient of modern
instrumentation. While there are many books discussing AD conversion, most of
them are limited to descriptions of applying a particular AD converter, or, at most, to
compare the merits of one conversion method against others. Only a few of them
discuss ADC circuit design in detail, and even fewer the problems of achieving top
sampling rates for a given resolution, either by an equivalent time or a real time
sampling process, time interleaving of multiple converters, combining analog and
digital signal processing and other techniques, which today (first decade of the XXI
century) allow the best systems to reach sampling rates of up to 20 GSps (Giga
Samples per second) and bandwidths of up to 6 GHz.
Just like many other books, this one, too, ends just as it has become most
interesting. (The reader might ask whether there is really nothing more to say, or
whether the authors simply ran out of ideas; since most of the circuits presented
are not of our own origin, and electronics certainly is an art of infinite variations,
we the authors can, one hopes, be spared the blame.) As already said in the Foreword, the
most difficult thing when writing about an interesting subject is not what to include,
but what to leave out. Although we discuss the effects of signal sampling in Part 6
and a few aspects of efficient system design combining analog and digital technology
in Part 7, this book is about amplifier design, so we leave the fast ADC circuit design
discussion for another opportunity.

- 5.129 -
P. Starič, E. Margan    System synthesis and integration

References:

[5.1] D.L. Feucht, Handbook of Analog Circuit Design,
      Academic Press, San Diego, 1990
      See also the latest CD version at <http://www.innovatia.com/>

[5.2] S. Roach, Signal Conditioning in Oscilloscopes and the Spirit of Invention,
      (J. Williams, [Editor], The Art and Science of Analog Circuit Design, Part 2)
      Butterworth–Heinemann, Boston, 1995

[5.3] P.R. Gray & R.G. Meyer, Analysis and Design of Analog Integrated Circuits,
      John Wiley, New York, 1969

[5.4] J. Williams, Composite Amplifiers,
      Linear Technology, Application Note AN-21, July 1986

[5.5] P. Starič, Wideband JFET Source Follower,
      Electronic Engineering, August 1992

[5.6] J. Williams, Measuring 16-bit Settling Times: the Art of Timely Accuracy,
      EDN Magazine, Nov. 19, 1998, <http://www.ednmag.com/>

[5.7] W. Kester, High Speed Design Techniques,
      Analog Devices, 1996, <http://www.analog.com/>

[5.8] A.D. Evans (Editor), Designing with Field Effect Transistors,
      Siliconix, McGraw-Hill, 1981

[5.9] J.E. Lilienfeld, Method and Apparatus for Controlling Electric Current,
      US Patent 1 745 175, Jan. 28, 1930
      (the FET patent, apparently predating the BJT patent by about 23 years)

[5.10] FET Design Catalog,
      Siliconix, 1982, <http://www.siliconix.com/>

[5.11] IC Applications Handbook,
      Burr-Brown, 1994, <http://www.burr-brown.com/>, <http://www.ti.com/>

[5.12] A New Approach to OpAmp Design,
      Comlinear Corporation, Application Note AN 300-1, March 1985

[5.13] S. Franco, Design with Operational Amplifiers and Analog ICs,
      McGraw-Hill, 1988

[5.14] F.A. Muller, High Frequency Compensation of RC Amplifiers,
      Proceedings of the I.R.E., August 1954, pp. 1271–1276

[5.15] B. Orwiller, Vertical Amplifier Circuits,
      Tektronix, Inc., Beaverton, Oregon, 1969

[5.16] F.W. Grover, Inductance Calculations (Reprint),
      Instrument Society of America, Research Triangle Park, N.C. 27709, 1973

[5.17] J. Williams, High Speed Amplifier Techniques, Application Note AN-47,
      Linear Technology, March 1985, <http://www.lt.com/>

[5.18] C.R. Battjes, Monolithic Wideband Amplifier,
      US Patent 4 236 119, Nov. 25, 1980 ²

² Note: For US patents go to <http://www.uspto.gov/> and type the patent number into the search pad.
  Patent figures are in TIFF graphics format, so TIFF viewer software is recommended (links for
  downloading and installation are provided within the USPTO web pages).

[5.19] J.L. Addis, P.A. Quinn, Broadband DC Level Shift Circuit With Feedback,
      US Patent 4 725 790, Feb. 16, 1988

[5.20] J.L. Addis, Precision Differential Amplifier Having Fast Overdrive Recovery,
      US Patent 4 714 896, Dec. 22, 1987

[5.21] J.L. Addis, Buffer Amplifier,
      US Patent 4 390 852, Jun. 28, 1983

[5.22] J.L. Addis, Feedbeside Correction Circuit for an Amplifier,
      US Patent 4 132 958, Jan. 2, 1979

[5.23] T.C. Hill, Differential Amplifier with Dynamic Thermal Balancing,
      US Patent 4 528 516, July 9, 1985

[5.24] J. Woo, Wideband Amplifier with Active High Frequency Compensation,
      US Patent 4 703 285, Oct. 27, 1987

[5.25] T. Wakimoto, Y. Azakawa, Wideband Amplifier,
      US Patent 4 885 548, Dec. 5, 1989

[5.26] H. Weber, A Method of Predicting Thermal Stability,
      Motorola, Application Note AN-128

[5.27] R. Ivins, Measurement of Thermal Properties of Semiconductor Devices,
      Motorola, Application Note AN-226

[5.28] B. Botos, Nanosecond Pulse Handling Techniques in IC Interconnections,
      Motorola, Application Note AN-270

[5.29] Field Effect Transistors In Theory and Practice,
      Motorola, Application Note AN-211A

[5.30] N. Freyling, FET Differential Amplifier,
      Motorola, Application Note AN-231

[5.31] R.W. Anderson, S-Parameter Techniques for Faster, More Accurate Network Designs,
      Hewlett-Packard, Application Note AN-95-1

[5.32] R. Gosser, Wideband Transconductance Generator (Quad-Core Amplifier),
      US Patent 5 150 074, Sep. 22, 1992

[5.33] New Complementary Bipolar Process,
      National Semiconductor, Application Note AN-860

[5.34] J. Bales, A Low Power, High Speed, Current Feedback OpAmp with a Novel Class AB
      High Current Output Stage, IEEE Journal of Solid-State Circuits, Vol. 32, No. 9, Sept. 1997

[5.35] Development of an Extensive SPICE Macromodel for Current Feedback Amplifiers,
      National Semiconductor, Application Note AN-840, July 1992

[5.36] Topics on Using the LM6181 – A New Current Feedback Amplifier,
      National Semiconductor, Application Note AN-813, March 1992

[5.37] T.T. Regan, Designing with a New Super Fast Dual Norton Amplifier,
      National Semiconductor, Application Note AN-278, Sept. 1981

[5.38] Simulation SPICE Models for Comlinear OpAmps,
      National Semiconductor, Application Note OA-18, May 2000

[5.39] H.S. Black, Translating System,
      US Patent 1 686 792, Oct. 9, 1928

[5.40] H.S. Black, Wave Translation System,
      US Patent 2 102 671, Dec. 21, 1937

[5.41] H.S. Black, Inventing the Negative Feedback Amplifier,
      IEEE Spectrum, Vol. 14, pp. 55–60, Dec. 1977

[5.42] P.J. Walker, Current Dumping Audio Amplifier,
      Wireless World, Vol. 81, pp. 560–562, Dec. 1975

[5.43] P.J. Walker, M.P. Albinson, Distortion-Free Amplifiers,
      US Patent 3 970 953, Jul. 20, 1976

[5.44] J. Vanderkooy, S.P. Lipshitz, Feedforward Error Correction in Power Amplifiers,
      Journal of the Audio Engineering Society, Vol. 28, No. 1/2, pp. 2–16, Jan./Feb. 1980

[5.45] M.J. Hawksford, Distortion Correction in Audio Power Amplifiers,
      Journal of the Audio Engineering Society, Vol. 29, No. 1/2, pp. 27–30, Jan./Feb. 1981

[5.46] M.J. Hawksford, Distortion Correction Circuits for Audio Amplifiers,
      Journal of the Audio Engineering Society, Vol. 29, No. 7/8, pp. 503–510, Jul./Aug. 1981

[5.47] M.J. Hawksford, Low Distortion Programmable Gain Cell Using Current Steering Cascode
      Topology, Journal of the Audio Engineering Society, Vol. 30, No. 11, pp. 795–799, Nov. 1982

[5.48] S. Takahashi, S. Tanaka, Design and Construction of a Feedforward Error Correction
      Amplifier, Journal of the Audio Engineering Society, Vol. 29, pp. 31–37, Jan./Feb. 1981

[5.49] A.M. Sandman, Reducing Amplifier Distortion by Error Add-On,
      Wireless World, Vol. 79, p. 32, Jan. 1973

[5.50] J.L. Addis, Versatile Analogue Chip for Oscilloscope Plug-Ins,
      Electronic Engineering, Aug. 1988 (Part I), Sept. 1988 (Part II)

[5.51] J.L. Addis, Good Engineering and Fast Vertical Amplifiers,
      (J. Williams, [Editor], Analog Circuit Design, Art, Science and Personalities, Part 1)
      Butterworth–Heinemann, Boston, 1991

[5.52] C.R. Battjes, Who Wakes the Bugler?,
      (J. Williams, [Editor], The Art and Science of Analog Circuit Design, Part 2)
      Butterworth–Heinemann, Boston, 1995

[5.53] P.A. Quinn, Feed-Forward Amplifier,
      US Patent 4 146 844, Mar. 27, 1979

[5.54] P.A. Quinn, Feed-Forward Amplifier,
      US Patent 4 146 844 (Reissue 31 545), Mar. 27, 1984

[5.55] P.A. Quinn, Differential Impedance Neutralization Circuit,
      US Patent 4 692 712, Sep. 8, 1987

[5.56] B. Gilbert, A Precise Four-Quadrant Multiplier with Subnanosecond Response,
      IEEE Journal of Solid-State Circuits, pp. 365–373, Dec. 1968

[5.57] B. Gilbert, Multiplier Circuit,
      US Patent 4 156 283, May 22, 1979

[5.58] B. Gilbert, High Accuracy Four-Quadrant Multiplier Also Capable of Four-Quadrant
      Division, US Patent 4 586 155, April 29, 1986

[5.59] B. Gilbert, Synchronous Logarithmic Amplifier,
      US Patent 5 298 811, March 29, 1994

[5.60] B. Gilbert, Single Supply Analog Multiplier,
      US Patent 6 074 082, June 13, 2000

[5.61] B. Gilbert, Circuit Having Dual Feedback Multipliers,
      US Patent 6 456 142, September 24, 2002

[5.62] B. Gilbert, Where Do Little Circuits Come From?,
      (J. Williams, [Editor], Analog Circuit Design, Part 1)
      Butterworth–Heinemann, Boston, 1991

[5.63] P. Gamand, C. Caux, Semiconductor Device Comprising a Broadband and High Gain
      Monolithic Integrated Circuit for a Distributed Amplifier,
      US Patent 5 386 130, January 31, 1995

[5.64] K. Nakahara, Y. Sasaki, Distributed Amplifier and Bidirectional Amplifier,
      US Patent 5 414 387, May 9, 1995

[5.65] R.W. Chick, Non-Uniformly Distributed Power Amplifier,
      US Patent 5 485 118, January 16, 1996

[5.66] C.D. Motchenbacher, F.C. Fitchen, Low Noise Electronic Design,
      John Wiley, New York, 1973, ISBN 0-471-61950-7

[5.67] E. Margan, RC Attenuator Distortion,
      Electronics World + Wireless World, Sept. 1992, Circuit Ideas, p. 765

[5.68] E. Margan, Amplifier Instability,
      Electronics World + Wireless World, Apr. 1998, pp. 311–312

P. Starič, E. Margan:

Wideband Amplifiers

Part 6:

Computer Algorithms for Analysis and Synthesis


of Amplifier–Filter Systems

If you search for something long enough,


you will certainly discover something else.
(Erik’s First Amendment to Murphy’s Law
applied to scientific research)
P. Starič, E. Margan    Computer Algorithms

How It All Began

Ever since I first heard of ‘electronic brains’, back in the early 1960s, I had been
asking all sorts of people how those things worked, but I never received any real
answer, with the exception of one, coming from the medical profession: no one
understood biological brains either, so how could I expect to understand the
electronic ones? Much later I discovered that a lot of people did not like using their
brains at all, and they were doing just fine without them, thank you!
In the autumn of 1974, as a student, I had limited access to an IBM-1130
machine while attending a course on Fortran. It was not exactly a top model of punched
card technology, but it could be used for many purposes other than calculating the
monthly wages of the University personnel. As a beginner, it took me nearly three
months to program and run a simple 1 - e^(-t/RC) response. I had just heard of Moore’s
law, but I guessed he was exaggerating; since I could do the same job with a slide rule
in a little less than four hours, I expected that within my professional lifetime I
would never need computing power that sophisticated. How wrong I was!
In the spring of 1975 my father bought me an HP-29C, a programmable pocket
calculator with ‘scientific’ functions: sines, cosines, logs, exps and all that jazz. And in
addition to the four stack registers it had some 96 program registers of ‘continuous’
memory (CMOS, thus the ‘-C’ suffix). Many of my colleagues had similar toys, too, but
while most of them were playing the then very popular ‘Moon landing simulator’ (in
which you typed in the amount of fuel to burn at each step and in response it displayed
your speed and altitude — and a flashing display on crashing), I was busy programming
the 0.5 dB tolerance frequency response of an RIAA phonograph correction network,
optimized to standard E-12 R and C values. The next summer I made a preamp
based on those calculations and inserted it between a 5-transistor power amp and my
new Transcriptor’s ‘Skeleton’ turntable with the ‘Vestigal’ tonearm and a Sonus ‘Blue
Label’ pickup. It worked perfectly and sounded beautiful.
In the late seventies I was totally devoted to audio; however, I had good
relations with the local Motorola representative, who kept supplying me with the latest
data sheets and application notes, so I simply could not have missed the microprocessor
revolution. But I was still using digital chips in the same way as analog ones: once
the hardware was built, its function was determined once and for all; I never thought
of it as something programmable or adaptable to different tasks.
It was only in the early 1980s, with the first PCs, that I really began to devote a
substantial part of my working time to programming, and even that was more out of
necessity than desire. I was working on the signal sampling section for a digital
oscilloscope project, so I had to know all the interactions with the microprocessor. I
was also busy drawing printed circuit boards with one of the first CAD programs,
Wintek’s ‘smArtwork’. Some time later I received an early demo version of the
Spectrum Software ‘MicroCAP’ circuit simulator, and then ‘PC-Matlab’ by The
MathWorks, Inc.
Then, one day Peter came to my lab and asked me if I could do a little circuit
simulation for him. This turned out to be the start of a long friendship, and one of
its results is now in front of you.


Contents .................................................................................................................................... 6.3


List of Figures ........................................................................................................................... 6.4
List of Routines ......................................................................................................................... 6.4

Contents:

6.0. Aim and motivation .......................................................................................................................... 6.5


6.1. LTIC System Description — A Short Overview .............................................................................. 6.7
6.2. Algorithm Syntax And Terminology .............................................................................................. 6.11
6.3. Poles And Zeros ............................................................................................................................. 6.13
6.3.1. Butterworth Systems ..................................................................................................... 6.15
6.3.2. Bessel–Thomson Systems ............................................................................................. 6.17
6.4. Complex Frequency Response ....................................................................................................... 6.21
6.4.1. Frequency Dependent Response Magnitude ................................................................. 6.22
6.4.2. Frequency Dependent Phase Shift ................................................................................ 6.28
6.4.3. Frequency Dependent Envelope Delay ......................................................................... 6.31
6.5. Transient Response by Fourier Transform ..................................................................................... 6.35
6.5.1. Impulse Response, Using FFT ...................................................................................... 6.36
6.5.2. Windowing ................................................................................................................... 6.42
6.5.3. Amplitude Normalization ............................................................................................. 6.44
6.5.4. Step Response ............................................................................................................... 6.45
6.5.5. Time Scale Normalization ............................................................................................ 6.46
6.5.6. Calculation Errors ......................................................................................................... 6.51
6.5.7. Code Execution Efficiency ........................................................................................... 6.55
6.6. Transient Response from Residues ................................................................................................ 6.57
6.7. Simple Application Example ......................................................................................................... 6.61
Résumé of Part 6 ................................................................................................................................... 6.63

References ............................................................................................................................................. 6.65


List of Figures:

Fig. 6.3.1: 5th-order Butterworth poles in the complex plane ................................................ 6.16
Fig. 6.3.2: Bessel–Thomson poles of systems of 2nd- to 9th-order ........................................ 6.20
Fig. 6.4.1: 5th-order Butterworth magnitude over the complex plane .................................... 6.23
Fig. 6.4.2: 5th-order Butterworth complex response vs. imaginary frequency ....................... 6.24
Fig. 6.4.3: 5th-order Butterworth Nyquist plot ....................................................................... 6.25
Fig. 6.4.4: 5th-order Butterworth magnitude vs. frequency in linear scale ............................ 6.26
Fig. 6.4.5: 5th-order Butterworth magnitude in loglog scale .................................................. 6.27
Fig. 6.4.6: 5th-order Butterworth phase, modulo 2π .............................................................. 6.28
Fig. 6.4.7: 5th-order Butterworth phase, unwrapped .............................................................. 6.29
Fig. 6.4.8: 5th-order Butterworth envelope delay .................................................................. 6.32
Fig. 6.5.1: 5th-order Butterworth impulse- and step-response ............................................... 6.35
Fig. 6.5.2: Impulse response in time- and frequency-domain ................................................ 6.37
Fig. 6.5.3: The ‘negative frequency’ concept explained ........................................................ 6.39
Fig. 6.5.4: Using ‘window’ functions to improve calculation accuracy ................................ 6.43
Fig. 6.5.5: 1st-order system impulse response calculation error ............................................ 6.52
Fig. 6.5.6: 1st-order system step response calculation error .................................................. 6.52
Fig. 6.5.7: 2nd-order system impulse response calculation error ........................................... 6.53
Fig. 6.5.8: 2nd-order system step response calculation error ................................................ 6.53
Fig. 6.5.9: 3rd-order system impulse response calculation error ........................................... 6.54
Fig. 6.5.10: 3rd-order system step response calculation error ................................................ 6.54
Fig. 6.5.11: Step-response of Bessel–Thomson systems of order 2 to 9 ............................... 6.55
Fig. 6.7.1: Pole layout of 5th-order Butterworth and Bessel–Thomson systems .................... 6.61
Fig. 6.7.2: Magnitude of 5th-order Butterworth and Bessel–Thomson systems .................... 6.62
Fig. 6.7.3: Step-response of 5th-order Butterworth and Bessel–Thomson systems ............... 6.62

List of routines:

BUTTAP — Butterworth poles ............................................................................................................ 6.16


BESTAP — Bessel–Thomson poles ................................................................................................... 6.19
PATS — Polynomial value at s ...................................................................................... 6.21
FREQW — Complex frequency response at ω ................................................................... 6.21
EPHD — Eliminate phase discontinuities ....................................................................................... 6.29
PHASE — Unwrapped phase from poles and zeros .......................................................................... 6.33
GDLY — Group delay from poles and zeros .................................................................................. 6.34
TRESP — Transient response from frequency response, using FFT ................................................ 6.49
ATDR — Transient response from poles and zeros, using residues ................................................ 6.59


6.0 Aim and Motivation

The analysis and synthesis of electronic amplifier–filter systems using symbolic


calculus is clearly very labor intensive, as we have seen in the previous parts of the
book. The introduction of computers opened up the possibility of solving such
problems numerically. The solutions can then, of course, only be approximate, but it is
always possible to trade calculation time for accuracy.
Unfortunately, commercially available computer programs are not well suited to
the job from the system designer’s point of view. Most of the so called CAD/CAE
programs require actual specifications of active devices and passive components before
the circuit’s simulation can start. The better the program, the more complex are the
device models used. Also, these programs will perform circuit analysis only, so the user
is left on his own for circuit synthesis and configuration selection, as well as for
modeling the influence of parasitic components, which in turn depend on the
particular circuit topology and devices used.
When designing a system, we usually like to start from some ideal performance
goals, so we would prefer to use small and flexible algorithms which would allow us to
quickly calculate and compare the required responses of several systems, before we
decide what kind of system to use and the technology to realize it. As this book deals
with amplifiers, the emphasis will be on the behavior of linear, time invariant, causal
systems (LTIC), especially in the time domain.
The algorithms developed are based mostly on analytically derived formulae
and procedures which can be easily implemented as numerical routines in any computer
language. Some of these formulae have already been presented and discussed in
previous parts, but are repeated here in order to put the computer algorithms into an
adequate perspective. The algorithms have been written for a special maths oriented
programming environment, named Matlab™ [Ref. 6.1, 6.2, 6.3]. Matlab has a large set
of pre-programmed mathematical, logical, and graphical functions and allows the user
to create macro functions of his own (saved as ‘*.M’ files). Over the years Matlab
has grown in popularity and has become one of the five most popular problem solving
environments, setting standards in data analysis and control software. Also, its syntax,
similar to that of the ‘C’ programming language, allows easy implementation in
embedded systems, so we feel its use here is justified. For the sake of readability we
have tried to write the routines and the examples using mainly the basic Matlab
instructions, adding as many comments as possible.


6.1 LTIC System Description — A Short Overview

Assume x(t) to be the signal presented at the input of a linear, time invariant,
causal (LTIC) system which has n dominant energy storing (reactive) components (or,
briefly, an nth-order system). The system output in the time domain, y(t), may be
expressed by a linear differential equation with constant coefficients:

      n                  m
      Σ  b_i y^(i)(t) =  Σ  a_j x^(j)(t)                              (6.1.1)
     i=0                j=0

where the coefficients a_j and b_i are derived from the system’s time constants, whilst
y^(i) and x^(j) are the ith and jth derivatives of the output and input signal, as
required by the system’s order. From the theory of differential equations we know that
the solution of Eq. 6.1.1, given the initial conditions y(0), y'(0), y''(0), ...,
y^(n-1)(0), is of the form:

      y(t) = y_h(t) + y_f(t)                                          (6.1.2)

Here y_h(t) is the solution of the homogeneous differential equation (in which x(t)
and all its derivatives are zero), whilst y_f(t) is the particular solution for x(t),
which means that y_f(t) = y_f{x(t)}. From circuit theory we know that y_h(t) represents
the natural (also free, impulse, transient, or relaxation from the initially energized
state) response and y_f(t) represents the forced (also particular, final, steady state)
response.
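To make the split of Eq. 6.1.2 concrete, a 1st-order instance of Eq. 6.1.1 can be
integrated numerically and compared against the closed-form sum of its homogeneous and
forced solutions. The following Python sketch is our own illustration (the routines in
this part of the book are written in Matlab), with arbitrarily chosen coefficients:

```python
import math

# A 1st-order instance of Eq. 6.1.1:  b1*y'(t) + b0*y(t) = a0*x(t).
# With a unit step input x(t) = 1 for t >= 0 and y(0) = 0, the forced
# solution is y_f = a0/b0, the homogeneous one y_h = -(a0/b0)*exp(-t*b0/b1),
# so the complete response is y(t) = (a0/b0)*(1 - exp(-t*b0/b1)).
b1, b0, a0 = 1.0, 2.0, 2.0       # example coefficients (time constant b1/b0)
dt, N = 1.0e-4, 20000            # integration step and number of steps

y, y_num = 0.0, []
for n in range(N):
    y += dt * (a0 * 1.0 - b0 * y) / b1   # forward-Euler step of the ODE
    y_num.append(y)

t = 1.0
y_exact = (a0 / b0) * (1.0 - math.exp(-t * b0 / b1))
err = abs(y_num[int(t / dt) - 1] - y_exact)
```

With this step size the forward-Euler result agrees with the analytic y_h + y_f
solution to better than 10^-3 at t = 1 s, trading calculation time for accuracy just
as discussed in Sec. 6.0.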
Knowing that y_f(t) is a description of the output signal at a time very distant from
the initial disturbance, when the system has regained a new state of (static or
dynamic) balance, we can define the system transfer function F(s) from y_f(t) and x(t):

      F(s) = y_f(t) / x(t)     where x(t) = e^(st)                    (6.1.3)

Such an input signal has been chosen merely because it still retains its
exponential form when differentiated, i.e.:

      x^(n)(t) = s^n e^(st)                                           (6.1.4)

For the same reason y_f(t) is expected to be of the form:

      y_f(t) = A e^(st)                                               (6.1.5)

Eq. 6.1.1 may now be rewritten as:

      (b_n s^n + b_(n-1) s^(n-1) + ... + b_1 s + b_0) A e^(st) =
            = (a_m s^m + a_(m-1) s^(m-1) + ... + a_1 s + a_0) e^(st)  (6.1.6)

Then A must be:

      A = (a_m s^m + a_(m-1) s^(m-1) + ... + a_1 s + a_0) /
          (b_n s^n + b_(n-1) s^(n-1) + ... + b_1 s + b_0)             (6.1.7)
Returning to Eq. 6.1.3 we can now define the system transfer function as a
rational function of s:

      F(s) = y_f(t)/x(t) = A e^(st)/e^(st)
           = (a_m s^m + a_(m-1) s^(m-1) + ... + a_1 s + a_0) /
             (b_n s^n + b_(n-1) s^(n-1) + ... + b_1 s + b_0)          (6.1.8)

Instead of finding y_f(t), it is much easier in practice to find F(s) first, since the
coefficients a_i and b_i are derived from the system’s time constants. The system’s
time domain response is then found from F(s). From algebra we know that F(s) can also
be expressed as a function of its poles and zeros. An nth-order polynomial can be
expressed as a product of terms containing its roots r_k:

               n            n
      P_n(s) = Σ a_i s^i =  Π (s - r_k)                               (6.1.9)
              i=0          k=1

The value of this product is zero whenever s assumes the value of a root r_k.
Therefore we can rewrite Eq. 6.1.8 as:

             (s - z_1)(s - z_2) ... (s - z_(m-1))(s - z_m)
      F(s) = ---------------------------------------------            (6.1.10)
             (s - p_1)(s - p_2) ... (s - p_(n-1))(s - p_n)

Here the roots of the polynomial in the numerator are the system’s zeros, z_j, and the
roots of the polynomial in the denominator are the system’s poles, p_i.

We shall have this form in mind whenever a system is specified, because we
shall always start the design by specifying some optimum pole–zero pattern as the
design goal and then work towards the required system’s time constants.
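As a preview of how Eq. 6.1.10 turns into code (the routines developed later in this
part are in Matlab; the sketch below is our own Python stand-in, not the book’s FREQW
routine), the complex response at s = jω is simply a product of the zero terms divided
by a product of the pole terms:

```python
import cmath

def freq_response(zeros, poles, w):
    """Evaluate F(s) of Eq. 6.1.10 at s = jw for each angular frequency in w,
    assuming a unity leading coefficient."""
    result = []
    for wk in w:
        s = 1j * wk
        num = 1.0 + 0j
        for z in zeros:
            num *= (s - z)
        den = 1.0 + 0j
        for p in poles:
            den *= (s - p)
        result.append(num / den)
    return result

# Example: a 2nd-order all-pole system with the Butterworth pole pair at
# exp(+-j*3*pi/4) on the unit circle (no zeros).
poles = [cmath.exp(1j * cmath.pi * 3 / 4), cmath.exp(-1j * cmath.pi * 3 / 4)]
F = freq_response([], poles, [0.0, 1.0])
# |F| is 1 at w = 0 and 1/sqrt(2) at the cutoff w = 1
```

This is the numerical core on which the magnitude, phase, and envelope-delay
calculations of Sec. 6.4 are built.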
The system’s time domain equivalent of F(s), labeled f(t), is the system’s
impulse response:

      y_h(t) = f(t) |  x(t) = δ(t)                                    (6.1.11)

where δ(t) is the Dirac function (the infinitesimal time limit of the unit area
impulse).

The response to an arbitrary input signal may then be found by convolving the
input signal with the system’s impulse response (for convolution see Part 1, Sec. 1.15;
see also the VCON routine in Part 7, Sec. 7.2).
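In sampled form the convolution integral becomes a sum. The following Python sketch is
a simplified stand-in of ours for the idea (the VCON routine referred to above is the
book’s Matlab version):

```python
import math

def convolve(x, h, dt):
    """Discrete approximation of the convolution integral
    y(t) = integral of x(tau) * h(t - tau) d(tau), sampled with step dt."""
    y = [0.0] * (len(x) + len(h) - 1)
    for i, xi in enumerate(x):
        for j, hj in enumerate(h):
            y[i + j] += xi * hj * dt
    return y

# Example: a unit step input convolved with the impulse response of a
# 1st-order system, h(t) = (1/T)*exp(-t/T), approximates the familiar
# 1 - exp(-t/T) step response.
T, dt, N = 1.0, 0.005, 1000
h = [math.exp(-k * dt / T) / T for k in range(N)]
x = [1.0] * N
y = convolve(x, h, dt)
step_at_T = y[int(T / dt)]   # y(t = T), close to 1 - 1/e
```

The O(N²) double loop is written for clarity; as noted below, the Laplace (and
Fourier) transforms reduce this convolution to a simple multiplication.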
It is owed to Oliver Heaviside (1850–1925, [Ref. 6.4–6.8]), who pioneered the
transform theory, that we solve differential equations through the use of the Laplace
transform¹.

The transform is applied to the time variable t through a single time domain
integration, producing a new variable s, whose dimension is t^(-1) (frequency). In the
frequency domain the nth-order differential equation is reduced to an nth-order
polynomial, whilst the convolution is reduced to a simple multiplication. Once solved
(using simple algebra), the result is transformed back to the time domain.

¹ Apparently, Heaviside developed his ‘operational calculus’ in the 1890s independently of Laplace.
Although useful and giving results in accordance with practice, his method was considered unorthodox
and suspect for quite a while, and only in the mid 1930s was it realized that the theoretical basis for
his work could be traced back to Laplace. Interestingly, he also developed, amongst many other things,
the method of compensating dominantly capacitive telegraph lines by inductive peaking.

Let J Ð=Ñ represent the Laplace transform of 0 Ð>Ñ. Then:


+_
J Ð=Ñ œ _e0 a>bf œ ( 0 Ð>Ñ e=> .> (6.1.12)
_

where _ef denotes the Laplace operator as defined by the integral (see Part 1, Sec.1.4).
Actually, the integration is usually made from > œ ! and not from _, in order
to preserve the response’s causality (i.e., something happens only after closing the
switch). This limitation is caused by the term e=> which for >  ! would not integrate
to a finite value unless 0 Ð>Ñ œ ! for > Ÿ !. Such restriction is readily accomplished if
we modulate the input signal by closing a switch at > œ !. Mathematically, this can be
expressed by multiplying 0 Ð>Ñ by 2Ð>Ñ — the Heaviside’s unit step function. In our case
this is not necessary, since for calculation of the transient response we consider such
input signals which satisfy the convergence condition by definition. Also we shall
always assume that the system under investigation was powered up for a time long
enough to settle down, so we can safely say that all initial conditions are zero (or an
additive constant at worst).
Physically, by multiplying the time domain function by e=> in Eq. 6.1.12 we
have canceled the rotation of the phasor e=> at that particular frequency (=), allowing the
function to integrate to some finite value (see Part 1, Sec.1.2). At other frequencies the
phasors will continue to rotate, integrating eventually to zero. By doing so for all
frequencies we produce the frequency domain equivalent of 0 Ð>Ñ. This same process is
going on in a sweeping filter spectrum analyzer; the only difference is that in our case
an infinitely narrow filter bandwidth is considered. Indeed, such bandwidth takes an
infinitely long energy build up time, thus the integration must also last infinitely long
and be performed in infinitely small steps.
The inverse transform process is defined as:
5 4_
"
0 Ð>Ñ œ _" eJ a=bf œ ( J Ð=Ñ e=> .= (6.1.13)
#14
5 4_

where 5 is an arbitrarily chosen real valued positive constant for which the inversion
solution exists (this restriction is required for functions which do not decay to zero in
some finite time and therefore the integral would not converge, e.g., the unit step).
Eq. 6.1.1 can now be written as:

CÐ>Ñ œ _" šJ Ð=Ñ † _eBÐ>Ñf› (6.1.14)

Note that for transient response calculation, BÐ>Ñ (the time domain input
function), is either the Dirac function (or $Ð>Ñ — the unity area impulse) or the
Heaviside function (or 2Ð>Ñ — the unity amplitude step). In these two cases
_eBÐ>Ñf œ \Ð=Ñ is either " (the transform of the unity area impulse), or "Î= (the
transform of the unity amplitude step), as we have already seen in Part 1.
Eq. 6.1.14 has been used extensively in previous parts to calculate the transient
responses analytically. However, for calculation of the frequency response, we are
interested only in that part of the transformed function which is a function of a purely

-6.9-
P. Stariè, E. Margan Computer Algorithms

imaginary = and therefore a special case of J Ð=Ñ, that is J Ð4=Ñ. It is thus interesting to
examine the possibility of calculating the transient response using the inverse Fourier
transform (a special case of the inverse Laplace transform) of the system frequency
response. We have already seen in Part 1 that the only difference between the Laplace
and Fourier transforms is that = is replaced by 4=, which is the same as making 5 , the
real part of =, equal to zero.
Eq. 6.1.11 shows that the time domain equivalent of the system’s frequency
response F(s) is f(t) resulting from the excitation by δ(t), or, in words, the system
impulse response. Since the impulse response of any system (except the conditionally
stable systems, as well as the oscillating or regenerating systems) decays to zero after
some finite time, we do not have to take the special precautions (needed in the inverse
Laplace transform) to make the integral converge; instead we can use the inverse
Fourier transform of F(jω) to calculate the impulse response:

$$f(t) = \mathcal{F}^{-1}\{F(j\omega)\} \tag{6.1.15}$$

However, the Fourier transform of the unity amplitude step does not
converge, so we shall have to use an additional procedure to calculate the step
response.
It is possible to put Eq. 6.1.1 into numerical form [Ref. 6.20, 6.21]. Whilst there
are ways of using Eq. 6.1.14 in numerical form [Ref. 6.22, 6.23], we shall rather
concentrate on Eq. 6.1.15, since the Fast Fourier Transform algorithm (FFT, [Ref. 6.16–
6.19]), which we are going to use, offers some very distinct advantages. In addition, we
shall develop an algorithm based on the residue theory (Part 1, Sec. 1.9); the details are
given in Sec. 6.6.
Another point to consider, known from modern filter theory, is that optimized
high order systems are difficult to realize in direct form, because the ratio of the
smallest to the largest time constant quickly falls below component tolerances as the
system’s order is increased. Butterworth [Ref. 6.11] has shown that optimum system
performance is more easily met by a cascade of low order systems (several of second
order and only one of third order, if n is odd) separated by amplifiers. As a bonus, such structures will
satisfy the gain–bandwidth product requirement more easily. So in practice we shall
rarely need to solve high order system equations, usually only at the system integration
level.
The formulae presented above will be used as the starting point in algorithm
development. We shall develop the algorithms for calculating the system poles for a
desired system order, the complex frequency response, the magnitude and phase
response, the group (envelope) time delay, the impulse response, the step response, and
the numerical convolution. Those algorithms can, of course, be written to solve only
our particular class of problems. It is wise, however, to write them to be as universally
applicable as possible, in spite of losing some algorithm efficiency, to suit eventual
future needs.


6.2 Algorithm Syntax And Terminology


Readers who have not used Matlab or other similar software before will
probably have some difficulties in understanding the algorithm syntax and the
operations implied. Here is some of the syntax and terminology used throughout Part 6:
matrix       An array of data, organized in m rows and n columns.
             The operations involving matrices follow the standard matrix calculation rules.

vector       A single row or single column matrix; either m or n is equal to 1.

scalar       A single element matrix; both m and n are equal to 1.

size, length Matrix dimensions; if M is an m by n matrix then [m,n]=size(M) returns the
             number of rows in m and the number of columns in n. Likewise, max(size(M))
             returns m or n, whichever is greater. For vectors, max(size(V)) is the length,
             or the total number of elements in V.
submatrix A smaller matrix contained inside a larger one.
A=V(k) is the kth element of the vector V (k is the index).
A=V(k) is the same as A=V(round(k)) if k is non-integer.
B=M(:,k) is the kth column of M.
C=M(j:k,h:i) is the matrix of hth to ith elements from the jth to kth row.
+ - * / Operations involving matrices of ‘compatible’ dimensions. A scalar can be added to,
subtracted from, can multiply or divide a matrix of any dimension. Two matrices can
be added or subtracted if they have the same dimensions. Multiplication A*B
requires that the number of rows in B is equal to the number of columns in A.
Division A/B requires an equal number of columns in A and B.
.* ./ The dot before the operation specifies element by element multiplication and
division. The matrix containing the inverse values of the elements in matrix A can be
calculated as: 1 ./A (note the space between 1 and the dot).

^ .^ Powers: A.^n is a matrix with each element of A raised to the power of n;


n.^A is a matrix of n to the power of each element of A;
A^n is possible if A is a square matrix (equal number of rows and columns);
exp(A) is e^A, where e = 2.71828...;
A^B, if both A and B are matrices, is an error.
: Colon. Indicates range.
(1:5) is a vector [1,2,3,4,5]; V(1:2:N-1)denotes all odd elements of V.
A(:) is all elements of A in a single column.
= Equal (right to left assignment);
[output arguments]=function(input arguments); or:
[out]=[in1]operation[in2]; i.e.: y=sin(w*t+phi); c=a*b/(a+b);
== Identically equal (relation); if A==0, do something, end.

> >= < <= greater, greater or equal, smaller, smaller or equal;

& | ~ ~= logical operators: ‘and’, ‘or’, ‘not’, ‘not equal’.

; Semicolon, logical end of command line. For matrices it indicates end of a row.

2 + 3j Complex numbers: 2+3j or 2+j*3 or 2+3*sqrt(-1);


Most Matlab operations can deal with matrices containing complex elements.
% Characters following % are ignored by Matlab. Used for comments.


6.3 Poles and Zeros

It is beyond the scope of this text to cover all the background of system
optimization theory. Let us just mention the most important optimization criteria for
each major system family. For the same bandwidth and system order n:
• the Butterworth family
is optimal in the sense of having a maximally flat pass band magnitude;
• the Bessel–Thomson family
is optimal in the sense of having a maximally flat group (envelope) delay
and, consequently, a maximally steep step response with minimum
overshoot;
• the Chebyshev family
is optimal in the sense of having a maximally steep pass band to stop
band transition, at the expense of some specified pass band ripple;
• the Inverse Chebyshev family
is optimal in the sense of having a maximally steep stop band to pass
band transition, at the expense of some specified stop band ripple;
• the Elliptic (Cauer) family
is optimal in the sense of having a maximally steep pass band to stop
band transition, at the expense of some specified pass band and stop
band ripple.

It must be pointed out, however, that some system families, the Bessel–
Thomson family in particular, can be realized in practice more easily than others, owing
to the lower ratio of the largest to the smallest system time constant for a given system
order, the maximum usable ratio being limited by component tolerances. Also, low
order systems can be realized more easily than high order systems. We shall have to
keep these things in mind when denormalizing the system to the actual upper frequency
limit and deciding the number of stages used to achieve the total amplification factor.
The trouble is that during the design process we select the system poles and
zeros in accordance with certain circuit simplifications, useful for speeding up the
analysis. But the implementation of the poles and zeros in an actual amplifier is more a
matter of practical know-how than of rigorous theory. This is particularly true if
we are pushing the performance to the limits of realizability, since in these conditions
the component stray reactances must be taken into account when specifying the system
time constants. Luckily for the amplifier designer, for the same component and layout
strays the Bessel–Thomson system yields the highest system bandwidth, with the bonus
of an optimal transient response. This is also true for ‘feedback stabilized’ systems,
since the large phase margin offered by this system family aids system stability; also,
feedback induced Q-factor enhancement at high frequencies lowers the required
imaginary versus real part ratio of the response shaping component impedances even
further.


In this text we shall only deal with the Butterworth [Ref. 6.11] and Bessel–
Thomson [Ref. 6.12, 6.13] low pass systems for calculation of poles, since these are
required in wideband amplifier design. If needed, Chebyshev, inverse Chebyshev and
elliptic (Cauer) functions are provided in the Matlab Signal Processing Toolbox, as well
as the low pass to high pass, band pass and band stop transform algorithms. The
toolbox also contains many other useful algorithms, such as RESIDUE, ROOTS, etc.,
which will not be considered here (see [Ref. 6.1, 6.2, 6.19]).
In order to be able to compare the performance of different systems on a fair
basis we must specify some form of system standardization:
a) all systems will have the pole values normalized for an upper half power
angular frequency ω_h of 1 radian per second (equivalent to the cycle
frequency f_h = 1/2π [Hz]). This leads to the use of a normalized
frequency vector, implying that whenever we write either f or ω, we shall
actually mean f/f_h or ω/ω_h, respectively.
Please note that this can sometimes cause a bit of confusion, since f/f_h
is the same as ω/ω_h; but ω = 2πf, so we should keep an eye on the factor 2π,
especially when denormalizing the poles to the actual system upper half power
frequency.
The frequency response is calculated as a function of ω, since the poles
and zeros are mapped in terms of s = σ + jω, where both the real and
imaginary part are measured in [rad/s], but we usually plot it as a function of f
(in [Hz]). If the values of the poles are normalized we can use the same
normalized frequency vector to calculate and plot the frequency domain
functions. Therefore, to plot the magnitude and phase responses vs. frequency
we shall not have to divide the frequency vector (of a length usually between
100 and 1000 elements) by 2π.
Since Matlab will not accept the symbol ω as a valid name for a variable,
we shall replace it by w=2*pi*f in our routines.
b) all systems will have their DC gain (at ω = 0) normalized to A₀ = 1
(throughout this text we shall consider low pass systems only). Nevertheless, we
shall try to provide the correct gain treatment in the general case, in order to
broaden the applicability of our algorithms.

Of course, to extract the actual system component values, as well as to scale the
various frequency and time domain responses to comply with the desired upper
frequency f_h, the poles and zeros will have to be denormalized (multiplied) by 2πf_h.
Also, each response will have to be scaled by the required gain factor.
E.g., for a simple current driven shunt RC system, the normalized pole,
s_1n = −1/(R_1n C_1n) = −1, is first denormalized to the value of the desired bandwidth,
s_1 = s_1n · 2πf_h. From s_1 we get the new component values, R_1 C_1 = 1/(2πf_h). Finally,
we multiply R_1 by the gain factor, R = A·R_1, to obtain the desired output voltage from
the available input current, that is v_o = R·i_i, and then reduce C_1 by the same amount,
C = C_1/A. If C is a stray capacitance it cannot be reduced below the limit imposed by
the circuit topology. Then we must work backwards by first finding the R which would
give the desired bandwidth, and then determine the input current which will give the
required output voltage.
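This denormalization arithmetic can be sketched numerically (a Python sketch; the values f_h = 100 MHz and C = 2 pF are assumed examples, not taken from the text):

```python
import math

fh = 100e6    # desired upper half power frequency [Hz] (assumed example)

# Denormalized time constant of the shunt RC system: R1*C1 = 1/(2*pi*fh)
tau = 1 / (2 * math.pi * fh)

# Working backwards from an assumed stray capacitance limit C = 2 pF:
C = 2e-12
R = tau / C          # the R which gives the desired bandwidth
vo = 1.0             # required output voltage [V] (assumed)
ii = vo / R          # input current needed for that output voltage

print(round(R, 1))   # -> 795.8 (ohm)
```

With the capacitance fixed by the layout, the gain is thus set entirely by the available input current, exactly as described above.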


6.3.1 Butterworth Systems

Butterworth systems ([Ref. 6.11], see also Part 4, Sec. 4.3) are optimal in the
sense that all the derivatives of the frequency response are zero at the complex plane
origin, resulting in a maximally flat magnitude response. The normalized squared
magnitude response of an n-th order Butterworth system is:

$$F^2(\omega) = \frac{1}{1 + \left(\omega^2\right)^n} \tag{6.3.1}$$
This can be rewritten as:

$$F(s)\,F(-s) = \frac{1}{1 + \left(-s^2\right)^n} \tag{6.3.2}$$

This is an all-pole system, since F²(s) → ∞ whenever:

$$1 + \left(-s^2\right)^n = 0 \tag{6.3.3}$$

The roots of Eq. 6.3.3 are:

$$s = (-1)^{1/2n} \tag{6.3.4}$$

This can be solved using the DeMoivre form:

$$(-1)^{1/2n} = \cos\frac{\pi + 2k\pi}{2n} + j\,\sin\frac{\pi + 2k\pi}{2n} \tag{6.3.5}$$

where k = 0, 1, 2, …, n − 1.
If, owing to the Hurwitz stability requirement, we associate the poles in the
left half of the complex plane with F(s), then:

$$F(s) = \frac{k_0}{\prod\limits_{k=1}^{n}\left(s - s_k\right)} \tag{6.3.6}$$

where the s_k are found from the expression of Eq. 6.3.5 in the exponential form:

$$s_k = e^{\,j\pi\left(\frac{1}{2} + \frac{2k-1}{2n}\right)} \qquad \text{for } k = 1, 2, 3, \ldots, n \tag{6.3.7}$$

and:

$$k_0 = \prod_{k=1}^{n}\left(-s_k\right) \tag{6.3.8}$$

In the general (non-normalized) case, ω_h = k₀^{1/n}.
A Butterworth system is completely specified by the system order n. It is
normalized so that it has its half power bandwidth limit at the unit frequency:

$$F(j)\,F(-j) = F^2(1) = \frac{1}{2} \qquad\Longrightarrow\qquad \left|F(1)\right| = \frac{1}{\sqrt{2}} \tag{6.3.9}$$
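These pole formulas are easy to verify numerically; the following Python sketch checks Eq. 6.3.7 and Eq. 6.3.8 for n = 5 (the order is chosen to match the later examples):

```python
import cmath, math

n = 5
# Eq. 6.3.7: s_k = exp(j*pi*(1/2 + (2k - 1)/(2n))), k = 1 ... n
p = [cmath.exp(1j * math.pi * (0.5 + (2 * k - 1) / (2 * n)))
     for k in range(1, n + 1)]

# All poles lie on the unit circle, in the left half plane (Hurwitz):
print(all(abs(abs(pk) - 1.0) < 1e-12 for pk in p))   # -> True
print(all(pk.real < 0 for pk in p))                  # -> True

# Eq. 6.3.8: k0 = prod(-s_k); for the normalized poles k0 = 1
k0 = 1.0 + 0j
for pk in p:
    k0 *= -pk
print(abs(k0 - 1) < 1e-12)                           # -> True
```

This is the same computation that the BUTTAP listing below performs in Matlab with exp, prod and real.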


Eq. 6.3.7 and Eq. 6.3.8 are implemented in the Matlab Signal Processing Toolbox
function called BUTTAP (an acronym for BUTTerworth Analog Prototype):

function [z,p,k] = buttap(n)


%BUTTAP Butterworth analog low pass filter prototype.
% [z,p,k] = buttap(n) returns the zeros, poles, and gain
% for the n-th order normalized prototype Butterworth analog
% low pass filter. The resulting filter has n poles on the
% unit circle in the left half plane, and no zeros.
%
% See also BUTTER, CHEB1AP, and CHEB2AP.

% J.N. Little and J.O. Smith 1-14-87


% Revised 1-13-88 LS
% (c) Copyright 1987, 1988, by The MathWorks, Inc.

% Poles are on the unit circle in the left-half plane.


z = [];
p = exp(sqrt(-1)*(pi*(1:2:2*n-1)/(2*n) + pi/2)).';
k = real(prod(-p));

As an example see the complex plane layout of the poles of a 5th -order
Butterworth system in Fig. 6.3.1.
For a desired attenuation a = 1/A at some chosen ω_a > ω_h we can calculate
the required system order:

$$n = \frac{\log_{10}\left(A^2 - 1\right)}{2\,\log_{10}\dfrac{\omega_a}{\omega_h}} \tag{6.3.10}$$

and round it to the first higher integer.
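For instance, Eq. 6.3.10 can be evaluated as follows (a Python sketch; the values A = 100, i.e. 40 dB, at ω_a = 10 ω_h are assumed examples):

```python
import math

A  = 100.0   # required attenuation factor at w_a (assumed example: 40 dB)
wr = 10.0    # assumed ratio w_a / w_h

# Eq. 6.3.10, rounded to the first higher integer:
n = math.ceil(math.log10(A**2 - 1) / (2 * math.log10(wr)))
print(n)     # -> 2
```

A second-order Butterworth system is thus already sufficient for 40 dB of attenuation one decade above the half power frequency.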

[Figure 6.3.1: the poles s₁ … s₅ on the unit circle, plotted over ℜ{s} and ℑ{s} axes from −2 to 2]

Fig. 6.3.1: The 5th-order Butterworth system poles in Cartesian
coordinates of the complex plane. The left half of the figure is
the s domain over which the magnitude in Fig. 6.4.1 is plotted.


6.3.2 Bessel–Thomson Systems

Bessel–Thomson systems (see [Ref. 6.12, 6.13]; see also Part 4, Sec. 4.4) are
optimal in the sense that all the derivatives of the group (envelope) time delay response
are zero at origin, which results in a maximally flat group delay. This means that all the
relevant frequencies pass through the system with equal time delay, resulting in a
transient response with a minimal overshoot. In the complex frequency plane a system
with pure time delay is represented by:

$$F(s) = e^{-sT} \tag{6.3.11}$$

We first normalize this by making T = 1. Then we expand e^{−s} as a polynomial.
However, if this is done using the Taylor series expression for e^x (and if the polynomial
degree exceeds 4), the resulting polynomial would not meet the Hurwitz stability
criterion, because some of the poles would be in the right half of the complex plane.
But there is another expression for e^{−s} which we can use:
$$e^{-s} = \frac{1}{\sinh s + \cosh s} = \frac{\dfrac{1}{\sinh s}}{1 + \dfrac{\cosh s}{\sinh s}} \tag{6.3.12}$$
The Taylor series for the hyperbolic sine function has only odd powers of s and that for
the hyperbolic cosine only even powers. When we divide these polynomials (using long
division) the poles of the resulting polynomial meet the stability criterion. If we express
this as a partial fraction expansion, truncated at the n-th fraction, an n-th order Bessel–
Thomson system results. This can be expressed as:
$$F(s) = \frac{c_0}{B_n(s)} \tag{6.3.13}$$

where B_n(s) is an n-th order Bessel polynomial:

$$B_n(s) = \sum_{k=0}^{n} c_k\, s^k \tag{6.3.14}$$

and each B_n(s) satisfies one of the following relations:

$$B_0(s) = 1$$
$$B_1(s) = s + 1$$
$$B_n(s) = (2n-1)\,B_{n-1}(s) + s^2 B_{n-2}(s) \tag{6.3.15}$$

The coefficients c_k of the resulting polynomial can be calculated as:

$$c_k = \frac{(2n-k)!}{2^{(n-k)}\;k!\,(n-k)!} \qquad k = 0, 1, 2, \ldots, n-1, n \tag{6.3.16}$$
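Eq. 6.3.16 and the recursion of Eq. 6.3.15 can be cross-checked with a short Python sketch; for n = 3 both should reproduce the Bessel polynomial B₃(s) = s³ + 6s² + 15s + 15:

```python
from math import factorial

def bessel_coeffs(n):
    # c_k, k = 0 ... n, of the Bessel polynomial B_n(s) (Eq. 6.3.16);
    # the result is always an exact integer, hence the // division
    return [factorial(2*n - k) // (2**(n - k) * factorial(k) * factorial(n - k))
            for k in range(n + 1)]

# n = 3 gives B3(s) = s^3 + 6s^2 + 15s + 15:
print(bessel_coeffs(3))            # -> [15, 15, 6, 1]  (c_0 first)

# The same coefficients must satisfy the recursion of Eq. 6.3.15,
# B_3(s) = 5*B_2(s) + s^2*B_1(s), here in ascending-power form:
b3 = [0, 0, 0, 0]
for i, c in enumerate(bessel_coeffs(2)):   # (2n - 1) * B_{n-1}
    b3[i] += 5 * c
for i, c in enumerate(bessel_coeffs(1)):   # s^2 * B_{n-2}
    b3[i + 2] += c
print(b3 == bessel_coeffs(3))      # -> True
```

The BESTAP routine below performs the same factorial computation in vectorized Matlab form, using CUMPROD instead of a factorial function.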

The function which will calculate the Bessel polynomial coefficients using
Eq. 6.3.16 will be called BESTAP (this stands for BESsel–Thomson Analog Prototype,
but the name is also in good agreement with the best time domain response of this
system family). Within this function the system poles are extracted using the ROOTS
function in Matlab. This works well up to n = 24; for higher orders the ratio of c_n to c_0
is so high that the computer numerical resolution (‘double precision’ or 16 significant
digits) is exceeded, but this is not a severe limitation because in most circuit
configurations the 1% component tolerances will limit system realizability to about
n = 13 (assuming a 6-stage system, for which the highest reactive component value
ratio is about 12:1). But if needed, we can always calculate the frequency response from
the polynomial expression, using the coefficients c_k directly in Eq. 6.1.8, instead of
using Eq. 6.1.10, as in the Matlab POLYVAL and FREQS routines.
Bessel–Thomson system poles are found in the left half of the complex plane on
a family of ellipses, having a nearer focus at the complex plane origin and the other
focus on the positive part of the real axis (see Fig. 6.3.2).
The poles calculated in this way define a family of systems with equal envelope
delay (normalized to 1 s). This results in a progressively larger bandwidth and smaller
rise time for each higher n (see Fig. 6.5.11). In addition, two other normalizations of the
Bessel–Thomson system are possible.
One is to make the asymptote of the magnitude roll off slope the same as it is
for the Butterworth system of equal order (this is useful for calculating transitional
Bessel to Butterworth systems, as we have seen in Part 4, Sec. 4.5.3). If ω_a is to become
the half power cut off frequency of the new system:

$$\left|F(\omega_a)\right| = \frac{c_0}{\omega_a^{\,n}} = 1 \qquad\Longrightarrow\qquad \omega_a = c_0^{1/n} \tag{6.3.17}$$

In this case, with the roots of B_n(s) divided by c_0^{1/n}, the envelope delay will be
equal to c_0^{1/n}, instead of 1, and the system bandwidth will be smaller for each higher n.
The other is to have equal bandwidth for any n, possibly normalized to 1 rad/s,
as is the Butterworth family; in this way we would be able to compare different systems
on a fair basis. Unfortunately there is no simple way of matching the Bessel–Thomson
system bandwidth to that of a Butterworth system of the same order. To achieve this we
have to recursively multiply the poles by a correction factor proportional to the
bandwidth ratio, until a satisfying approximation is reached (the values of poles
modified in such a way for systems of order 2 to 10 are shown in Part 4, Table 4.5.1).
The while loop at the end of the BESTAP routine has a tolerance of 0.0001 and it was
experimentally found to match in only 8 to 12 loop iterations, depending on n; this
tolerance is satisfactory for most practical purposes, but the reader can easily change it
to suit his needs.
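The iterative correction can be sketched in Python (shown for n = 2, starting from the unit-delay poles of B₂(s) = s² + 3s + 3; the helper mag_at_1 stands in for the abs(freqw(P,1)) call of the BESTAP listing below):

```python
import math

# Unit-delay 2nd-order Bessel poles: the roots of s^2 + 3s + 3
p = [(-3 + 1j * math.sqrt(3)) / 2, (-3 - 1j * math.sqrt(3)) / 2]

def mag_at_1(poles):
    # |F(j1)| of an all-pole system, normalized so that F(0) = 1
    F = 1.0 + 0j
    for pk in poles:
        F *= -pk / (1j - pk)
    return abs(F)

y3 = 1 / math.sqrt(2)               # the -3 dB reference
y = mag_at_1(p)
loops = 0
while abs(1 - y3 / y) > 0.0001:     # the same 0.0001 tolerance as BESTAP
    p = [pk * (y3 / y) for pk in p] # iterative pole scaling
    y = mag_at_1(p)
    loops += 1

print(loops)          # a few iterations (of the order of 8 to 12)
print(abs(p[0]))      # ~1.272, the bandwidth-normalized pole radius
```

The converged pole radius agrees with the bandwidth-normalized second-order values listed in Part 4, Table 4.5.1.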
All these three normalization options (group delay, asymptote, and bandwidth)
are provided by the BESTAP routine by entering, besides the system order n,
an additional input argument in the form of a single character string:
'n' for 1 rad/s cutoff frequency normalization (the default),
't' for unit time delay and
'a' for the same attenuation slope asymptote as Butterworth system of equal order.
As in the BUTTAP routine, three output variables are returned. But the number
of arguments returned by BESTAP can be either 3, 2, or just 1. If all three output
arguments are requested, the zeros are returned in z, the poles in p, and the non-


normalized system gain is returned in the output variable k. Since there are no zeros in
this family of systems, an empty matrix is returned in z.
With just two output arguments, only z and p are returned.
When only one output argument is specified, instead of having an empty matrix
returned in z, which would not be very useful, we have decided to return the Bessel
polynomial coefficients c_k. Note that for the n-th order system there are n + 1
coefficients, from c_n to c_0. The system gain normalization is achieved by dividing
each coefficient by c_0, that is, the last one in the vector c, i.e., c=c/c(n+1). The
coefficients are scaled as for the 't' option (equal envelope delay); other options are
then ignored. But, if necessary, we can always calculate the polynomial coefficients for
those cases from the poles, by invoking the POLY routine, i.e., c=poly(p).

function [z,p,k]=bestap(n,x)
%BESTAP BESsel-Thomson Analog Prototype.
% Returns the zeros z, poles p and gain k of the n-th order
% Bessel-Thomson system. This is an all-pole system, so an
% empty matrix is returned in z. The poles are calculated for
% a maximally flat envelope (group) delay.
%
% Call : [z,p,k]=bestap(n,x);
% where :
% n is the system order
% x is a single-character string, making the poles:
% 'n' - normalized to a cutoff of 1 rad/s (default);
% 'a' - normalized to have the same attenuation
% asymptote as a Butterworth system of same n;
% 't' - scaled for a group-delay of 1s.
% k is the non-normalized system DC gain.
% p are the poles (length-n column vector)
% z are the zeros (no zeros, empty matrix returned)
% With only one output argument :
% c=bestap(n);
% the n+1 coefficients of the system polynomial are returned,
% scaled as in the 't' option, ignoring other options.

% Author : Erik Margan, 881012, Free of copyright !

if nargin == 1 % nargin is the number of input arguments


x='n'; % by default, normalize to 1 rad/s cutoff
end
z=[ ]; % no zeros
if n == 1
if nargout == 1
c=[1, 1]; % first-order system coefficients
else
p=-1; % first-order pole
k=1; % gain
return % end execution of this routine
end
else
% find the Bessel polynomial coefficients
% from factorials :
% 0!=1 by definition, the rest is calculated
% by CUMPROD (CUMulative PRODuct)
fact=[1, cumprod(1:1:2*n)];
binp=2 .^(0:1:n); % a vector of binary powers
c=fact(n+1:1:2*n+1)./(binp.*fact(1:1:n+1).*fact(n+1:-1:1));
% c is a vector of polynomial coefficients,
% c(1) is at s^n, c(n+1) is at s^0
end


if nargout == 1 % nargout is the number of output arguments


z=c; % the coefficients of the Bessel polynomial
return % end execution of this routine
end

c=c/c(n+1); % Normalize system gain to 1 at DC

if x == 'a' | x == 'A' % | means logical OR


% Normalize to Butterworth asymptote
g=c(1) .^((n:-1:0)/n); % c(1) is the coefficient at s^n
c=c./g; % Normalize gain
end

p=roots(c); % ROOTS extracts poles from coefficients

if x == 'n' | x == 'N'
% Bandwidth normalization to 1 rad/s results in
% progressively greater envelope delay for increasing n
P=p; % copy the poles to P
% Reference (-3 dB point)
y3=1/sqrt(2);
y=abs(freqw(P,1)); % attenuation at 1 rad/s (see FREQW)
while abs( 1 - y3/y ) > 0.0001
P=P*(y3/y); % Make iterative corrections
y=abs(freqw(P,1));
end
p=P; % copy P back to p
end

k=real(prod(-p)); % non-normalized system gain

[Figure 6.3.2: pole loci in the complex plane, ℜ{s} from −6 to 18 and ℑ{s} from −10 to 10, with the curves labeled by the system order 1 … 9]

Fig. 6.3.2: Complex plane pole map for unit group delay Bessel–
Thomson systems of order 2–9 (with the first-order reference).


6.4 Complex Frequency Response

In Matlab the frequency response is calculated with the help of two routines,
POLY and FREQS: POLY converts the polynomial roots into the coefficients which
FREQS then requires as its input argument. We shall use two different routines: PATS,
which calculates the product of terms containing the polynomial roots at each value
of s, according to Eq. 6.1.9, and FREQW, which returns the complex frequency
response F(s), after Eq. 6.1.10, given the zeros, poles and the normalized frequency
vector. FREQW calls PATS as a subroutine.

function P=pats(R,s)
% PATS Polynomial (product form) value AT S.
% P=pats(R,s) returns the values of the product of terms,
% containing n-th order polynomial roots R=[r1, r2,.., rn],
% for each element of s.
% Values are calculated according to the formula :
%
% P(s)=(s-r1)*(s-r2)*...*(s-rn) / (((-1)^n)*(r1*r2*...*rn))
%
% and are normalized so that P(0)=1.
% PATS is used by FREQW. See also POLY and FREQS.

% Author : Erik Margan, 881110, Free of copyright !

[m,n]=size(s);
P=ones(m,n); % A matrix of all ones, same dimension as s
nr=max(size(R)); % number of elements in R
for k=1:nr
if R(k) == 0
P=P.*s; % Multiply, but prevent from dividing by 0.
else
P=P.*(s-R(k))/(-R(k));
end
end
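The normalization performed by PATS can be mirrored in Python (a sketch; the test roots −1 and −2 are an arbitrary example):

```python
def pats(roots, s):
    # P(s) = prod(s - r_k) / prod(-r_k), normalized so that P(0) = 1;
    # a root at the origin contributes a plain factor s instead,
    # to prevent a division by zero (as in the Matlab PATS routine).
    P = 1.0 + 0j
    for r in roots:
        P = P * s if r == 0 else P * (s - r) / (-r)
    return P

r = [-1.0, -2.0]
print(abs(pats(r, 0)))      # -> 1.0  (normalized DC value)
print(abs(pats(r, -1.0)))   # -> 0.0  (the product vanishes at a root)
```

The FREQW routine below then forms the ratio of two such products, one for the zeros and one for the poles.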

function F=freqw(z,p,w)
% FREQW returns the complex frequency response F(jw) of the system
% described by the zeros (vector z=[z1,z2,...,zm]) and the
% poles (vector p=[p1,p2,...,pn]).
% Call : F=freqw(z,p,w);
% w is the frequency vector; can be real, imaginary or complex.
% F=freqw(p,w) assumes a system with poles only.
% FREQW uses PATS. See also FREQS and FREQZ.

% Author : Erik Margan, 881110, Free of copyright !

if nargin == 2 % nargin returns the number of input arguments


w=p; p=z; z=[ ]; % assume a system with poles only
end
for k=1:max(size(p))
if real(p(k)) >= 0
disp('WARNING : This is not a Hurwitz-type system!')
end
end
if ~any(imag(w))
w=sqrt(-1)*w; % if w is real, assume it to be imaginary
end
if isempty(z)
F=1 ./pats( p, w ) ;
else
F=pats( z, w )./pats( p, w ) ;
end


6.4.1 Frequency Dependent Response Magnitude

The absolute value of the complex frequency response is called the magnitude; it is
calculated as the square root of the product of F(ω) with its own complex conjugate:

$$\left|F(\omega)\right| = \sqrt{F(\omega)\,F^{*}(\omega)} = \sqrt{\Big(\Re\{F(\omega)\} + j\,\Im\{F(\omega)\}\Big)\Big(\Re\{F(\omega)\} - j\,\Im\{F(\omega)\}\Big)}$$

$$= \sqrt{\Big(\Re\{F(\omega)\}\Big)^{2} + \Big(\Im\{F(\omega)\}\Big)^{2}} = M(\omega) \tag{6.4.1}$$

Assuming a sinusoidal input signal, the magnitude represents the output to input
ratio of the peak signal value at that particular frequency. In practice, when we talk
about the system’s ‘frequency response’, we usually mean ‘the frequency dependent
magnitude’, Q Ð=Ñ. The magnitude contains no phase information.
We can calculate the magnitude by any of the following Matlab basic functions:

M=sqrt( (real(F)).^2 + (imag(F)).^2 ); % or :


M=sqrt( F .* conj(F) ); % or :
M=sqrt( F .* ( F' ) ); % or :
M=abs(F); % abs --> absolute value

We shall use the ABS command, not just because it is easy to type in, but
because it executes much faster when there is a large amount of data to process.
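The equivalence of these magnitude computations is easy to verify numerically as well (a Python sketch, reusing the Butterworth pole formula of Eq. 6.3.7 with n = 5, the order used in the examples below):

```python
import cmath, math

# Butterworth poles (Eq. 6.3.7), n = 5:
n = 5
p = [cmath.exp(1j * math.pi * (0.5 + (2 * k - 1) / (2 * n)))
     for k in range(1, n + 1)]

def F(s):
    # all-pole frequency response, normalized so that F(0) = 1
    val = 1.0 + 0j
    for pk in p:
        val *= -pk / (s - pk)
    return val

w = 0.5
Fw = F(1j * w)
M  = abs(Fw)                                # magnitude, as abs() computes it
M2 = math.sqrt(Fw.real**2 + Fw.imag**2)     # magnitude from Eq. 6.4.1
print(abs(M - M2) < 1e-12)                  # -> True
# For Butterworth, M(w) = 1/sqrt(1 + w^(2n)):
print(abs(M - 1 / math.sqrt(1 + w**(2 * n))) < 1e-12)   # -> True
```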
In order to acquire a better understanding of what we are doing, let us write an
example for a 5th -order Butterworth system. In the Matlab command window we write:

[z,p]=buttap(5); ↵ % Note: a command line is executed by "ENTER"

If we now type:

z ↵ answer: []

p ↵ answer: -0.3090 + 0.9511i


-0.3090 - 0.9511i
-0.8090 + 0.5878i
-0.8090 - 0.5878i
-1.0000 + 0.0000i

Since there are no zeros an empty matrix (shown by square brackets) is returned
in z. A 5-element column vector with complex conjugate pole values is returned in p.
Let us plot these poles in the complex plane using Cartesian coordinates:

plot( real(p), imag(p), '*' ), axis([-2,2,-2,2]); ↵


% see the result in Fig.6.3.1.

and the result would look as in Fig. 6.3.1 (for clarity, the distance from the origin and
the unit circle are also shown there, both needing extra ‘plot’ operations, not written in
the example above). From now on we shall not write the ENTER character explicitly.


The system magnitude as a function of the complex frequency s has a very


interesting 3D shape and it is instructive to have a closer look at it:

[z,p]=buttap(5); % 5th-order Butterworth poles


r=(-2:1/20:0); % real frequency vector
w=(-2:1/20:2); % imaginary frequency vector
[x,y]=meshgrid(r,w); % make the complex domain grid
M=abs(1 ./pats(p,x+j*y)); % magnitude due to poles in x+j*y domain
for m=1:max(size(M))
n=find(M(m,:)>12 ); % find magnitude > 12
M(m,n)=12; % limit magnitude to 12 for plot
end
waterfall(x',y',M') % waterfall plot of magnitude in 3-D
% prime(') aligns plots along jw-axis
axis([-2,0,-2,2,0,12]) % set axes limits
view(50,25) % view(azimuth,elevation) set view angle
% add axes labels (Matlab-V format):
xlabel( '{\it\sigma} = \Re\{{\its}\}', 'FontSize', 10 )
ylabel( '{\itj\omega} = {\itj}\Im\{{\its}\}', 'FontSize', 10 )
zlabel( '{\itM}({\its}) = |{\itF}({\its})|', 'FontSize', 10 )
% see the result in Fig.6.4.1.

Fig. 6.4.1 has been created using the Matlab WATERFALL function and shows
the 3D magnitude of the 5th -order Butterworth system over a limited = domain in the
complex plane. The = domain here is the same as the left half of Fig. 6.3.1. Over the
poles, the magnitude would extend to infinity, so we have had to limit the height of the
plot in order to show the low level features in more detail.

[Figure 6.4.1: waterfall plot of M(s) = |F(s)| over the left half plane, with the height clipped at 12 and |F(jω)| visible along the jω axis; the axes are σ = ℜ{s} and jω = jℑ{s}]

Fig. 6.4.1: The 5th-order Butterworth system magnitude, plotted over the same s domain
as the shaded left half of Fig. 6.3.1. The surface represents |F(s)|, but limited in height in
order to reveal the low level details. Its shape above the jω axis is |F(jω)| = M(ω).


Now, we have intentionally limited the s domain to just the left half of the
complex plane (where the real part is either zero or negative). This highlights the shape
of the plot along the imaginary axis, which is — guess what? — M(ω).
Looking at those lines parallel to the imaginary axis we can see what would
happen if the poles were moved closer to that axis: the magnitude would exhibit a
progressively pronounced peak. Such is the consequence of lowering the real part of the
poles. Since the negative real part is associated with energy dissipative (resistive)
components, it is clear that its role is to suppress resonance. But when we design an
oscillator we need to compensate any energy lost in the parasitic resistances of the
reactive components by an active regeneration (‘negative resistance’, or a positive real
part) in order to set the system poles (usually just one pair for oscillators) exactly on the
imaginary axis.
What is interesting to note is the mirror-like symmetry about the real axis, owed
to the complex conjugate nature of the Laplace space. Here we see at work the concept
of ‘negative frequency’, which will be discussed later in Sec. 6.5, dealing with the
Fourier transform inversion. This symmetry property will allow us to greatly improve
the inverse transform algorithm efficiency.
It is also instructive to see the complex frequency response F(jω) in 3D:

w=(-3:0.01:3); % 601 frequencies, -3 to 3, in 0.01 increment


F=freqw(z,p,w); % 601 points of complex frequency response
plot3(w,real(F),imag(F))% 3D plot of the Im and Re part of F(jw)
view(65,15); % view angle, azimuth 65deg., elevation 15deg.
% see the result in Fig.6.4.2.

[Figure 6.4.2: 3D curve of F(jω) with its three projections; the axes are ω, ℜ{F(jω)} and ℑ{F(jω)}, with the DC point F(0) = 1 marked on the real axis]

Fig. 6.4.2: The complex 3D plot of F(jω). The response phasor rotates clockwise, going
from negative to positive frequency. The distance from the frequency axis is the magnitude.
The circle on the real axis marks the DC response point. The Nyquist plot (see Fig. 6.4.3)
usually shows only the ω ≥ 0 part, viewing in the −jω direction. The three projections are
plotted to help those readers who do not have access to Matlab to visualize the shape.


Fig. 6.4.2, which has been created using the Matlab PLOT3 function, shows
J Ð4=Ñ with the phase angle twisting about the 4= axis and the magnitude as the
distance from the 4= axis. The circle marker denotes the point where J Ð4=Ñ crosses the
real axis at zero frequency — the DC system gain normalized to 1.
Whilst the Fig. 6.4.1 waterfall plot shape was relatively easy to interpret and
‘feel’, the 3D curve shape is somewhat less clear. In Matlab one can use the
view(azimuth,elevation) command to see the graph from different viewing angles.
In Fig. 6.4.2, view(65,15) was used. In more recent versions of Matlab the user can
even select the viewing point by the ‘mouse’. To help the imagination of readers
without access to Matlab we have also plotted the three ‘shadows’.
Regarding the symmetry, F(−jω) is not a mirror image of F(jω) — unlike
M(ω) — because the phasor preserves its sense of rotation (clockwise, negative by
definition for any system with poles on the left) throughout the jω axis. But if folded
about the real axis the shape would match.
As a result of this symmetry, F(jω) can be plotted using only the ω ≥ 0 part of
the axis, without any loss of information. The Nyquist plot [Ref. 6.9] shows both the
magnitude and the phase angle on the same graph:

w=(0:0.01:3); % 301 frequencies, 0 to 3, increment 0.01


F=freqw(z,p,w); % 301 points of complex frequency response
axis('square'); % the plot axes will have a 1:1 aspect ratio
plot(real(F),imag(F)) % plot the imaginary part versus real part

The result should look like Fig. 6.4.3. The view is as if we were looking at Fig. 6.4.2
along the jω axis in the opposite direction (from +∞ towards the origin).
Fig. 6.4.3: The Nyquist plot of the 5th-order Butterworth system frequency response. The
frequency axis is reduced to a single point projection at the origin and the frequency is a
parameter, increasing with the phase angle, from the DC point on the real axis, through the
half power bandwidth point at [−0.5, 0.5j], to infinity at the origin.


However, in a Nyquist plot it is difficult to see the frequency dependence of the
magnitude and phase, unless we intentionally mark a few chosen frequencies on the
plot, as was done in Fig. 6.4.3 in order to make orientation easier. Thus it has
become a standard practice to make separate plots of magnitude and phase as functions
of frequency, as introduced by Bode [Ref. 6.10]. Again, exploiting the symmetry, we
can plot only the ω ≥ 0 part without losing any information:

% following the previous example:


M=abs(F); % 301 points of magnitude
plot(w,M) % display M versus w, see Fig.6.4.4.

This should look like Fig. 6.4.4. The special point on the graph is the magnitude
at the unit frequency — its value is 1/√2, or 0.707, and since power is proportional to
the magnitude squared, this is the system’s half power cut off frequency.

Fig. 6.4.4: The magnitude vs. frequency plot in a linear scale. The
characteristic point is the half power bandwidth.
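The half power value is easy to verify numerically. Here is a NumPy sketch for readers without Matlab (the pole formula for the normalized nth-order Butterworth system is standard; the variable and function names are our own):

```python
import numpy as np

n = 5
k = np.arange(1, n + 1)
# left-half-plane Butterworth poles on the unit circle
p = np.exp(1j * np.pi * (2 * k + n - 1) / (2 * n))

def mag(w):
    # all-pole magnitude, normalized so that F(0) = 1
    return np.abs(np.prod(-p) / np.prod(1j * w - p))

print(round(mag(0.0), 4))   # -> 1.0
print(round(mag(1.0), 4))   # -> 0.7071
```

At ω = 1 the magnitude is exactly 1/√(1 + ω^(2n)) = 1/√2, confirming the half power point.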

It has also become a standard practice to enhance the stop band detail by using
either the log M vs. log ω or the semilog dB(M) vs. log ω plot scale (Fig. 6.4.5):

w=logspace(-1,1,301); % frequency, 301 points, equally spaced in


% log-scale from 0.1 to 10
F=freqw(z,p,w); % 301 points of complex frequency response
M=abs(F); % 301 points of magnitude
semilogx(w,20*log10(M))% display M in dB versus log-scaled w
% see the result in Fig.6.4.5.


By using the loglog scale or a linear dB vs. log frequency scale we can quickly
estimate the system order, since the slope (for all pole systems) is simply n times the
first-order system slope (n × 20 dB per frequency decade).

Fig. 6.4.5: Bode plot of the 5th-order Butterworth system magnitude, as in Fig. 6.4.4, but
in a linear dB vs. log frequency scale. In such a scale, all-pole systems have an
asymptotically linear attenuation slope, proportional to the system order (a factor of 10^n
per frequency decade, or n × 20 dB/decade). The marked −3 dB reference point is the same
half power cut off frequency point as in Fig. 6.4.4.
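The order estimate from the stop band slope can be sketched numerically (NumPy; the Butterworth pole formula is standard, the names are ours): between two frequencies far above the cut off, the measured slope approaches n × (−20 dB/decade).

```python
import numpy as np

n = 5
k = np.arange(1, n + 1)
p = np.exp(1j * np.pi * (2 * k + n - 1) / (2 * n))  # 5th-order Butterworth poles

def mag(w):
    # all-pole magnitude, normalized so that F(0) = 1
    return np.abs(np.prod(-p) / np.prod(1j * w - p))

w1, w2 = 100.0, 1000.0   # both far above the cut-off frequency
slope = 20 * np.log10(mag(w2) / mag(w1)) / np.log10(w2 / w1)
print(round(slope))      # -> -100, i.e. n * (-20 dB/decade)
```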


6.4.2 Frequency Dependent Phase Shift

The phase response is calculated from the complex frequency response as the
arctangent of the ratio of the imaginary and real parts of F(ω):

    φ(ω) = arctan [ ℑ{F(ω)} / ℜ{F(ω)} ]                    (6.4.2)

Using the previously calculated F(jω) we can write this as:

phi=atan(imag(F)./real(F)); % 301 samples of phase of F(jw)

Note that the Matlab arctangent function is called atan. However, Matlab also
has a built-in command named ANGLE, which effectively implements Eq. 6.4.2, so:

phi=angle(F); % phase response, modulo 2pi ;


semilogx(w,phi); % show phi in radians vs. log-scaled w ;


Fig. 6.4.6: The phase angle vs. frequency plot of the 5th-order Butterworth system.
The circularity of trigonometric functions, defined within the range of ±π radians, is
the cause of the discontinuous phase vs. frequency relationship.

Clearly this is a circular function of modulo 2π radians. For systems of 3rd or
greater order the phase will rotate by more than 2π, so there will be jumps from −π to
+π in the phase graph, as in Fig. 6.4.6, and we must ‘unwrap’ it (roll the ‘cylinder’
along the φ axis) in order to get a continuous function. Matlab has the ADDTWOPI
(older) and UNWRAP (newer) routine, but both perform irregularly for φ < −4π,
especially for systems with zeros. The following EPHD routine works correctly.


function q=ephd(phi)
% EPHD Eliminate PHase Discontinuities.
% Outperforms UNWRAP and ADDTWOPI for systems with zeros.
% Use : q=ephd(phi);
% where :
% phi --> input phase vector in radians ( range: -pi>=phi>=pi );
% q --> output phase vector, "unwrapped";
% If phi is a matrix, unwrapping is performed down each column.

% Author : Erik Margan, 890505, Free of copyright !

[r,c]=size(phi);
if min(r,c) == 1
phi=phi(:); % column-wise orientation
c=1;
end
q=diff(phi); % differentiate to detect discontinuities
% compensate for one element lost in diff and round the steps:
q=[zeros(1,c); pi*round(q/pi)];
q=cumsum(q); % integrate back by cumulatively summing
q=phi-q; % subtract the correcting values
if r == 1
q=q.'; % restore orientation
end

The ‘trick’ used in the EPHD routine is to first differentiate the phase, in order
to find where the discontinuities are and determine how large they are, then normalize
them by dividing by π, round this to integers, multiply back by π, integrate back to
obtain the corrections, and subtract the corrections from the original phase vector.
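The same differentiate–round–integrate idea can be sketched in NumPy for readers without Matlab (the function name is ours):

```python
import numpy as np

def ephd(phi):
    # the EPHD trick: differentiate to detect the wrap-around jumps,
    # keep only the ~pi-sized steps, integrate back and subtract
    d = np.diff(phi)
    steps = np.pi * np.round(d / np.pi)
    corr = np.concatenate(([0.0], np.cumsum(steps)))
    return phi - corr

# a descending phase ramp, folded into (-pi, pi] as angle() would do:
phi_true = -np.linspace(0, 4 * np.pi, 200)
wrapped = np.angle(np.exp(1j * phi_true))
print(np.allclose(ephd(wrapped), phi_true))   # -> True
```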
Following our 5th -order Butterworth example, we can now write:

alpha=ephd(phi); % unwrapped ;
semilogx(w,180*alpha/pi) % show alpha in degrees vs. log-scaled w;

Fig. 6.4.7: Bode plot of the ‘unwrapped’ phase, in a linear degrees vs. log frequency scale.


Plotting the phase in linear degrees vs. log scaled frequency reveals an
interesting fact: the system exhibits a 90° phase shift for each pole, so a 450° total phase
shift for the 5th-order system. Also, the phase shift at the cut off frequency ω_h is exactly
one half of the total phase shift (which is reached at ω ≫ ω_h).
Another important fact is that stable systems (those with poles on the left half of
the complex plane) will always exhibit a negative phase shift, whatever the system
configuration (low pass or high pass, inverting or non-inverting). If you ever see a
phase graph with a positive slope, first inspect the system gain in that frequency
region. If it is 0.1 or higher, that is a cause for major concern (that is, if your intention
was not to build an oscillator!).


6.4.3 Frequency Dependent Envelope Delay

The envelope delay is defined as the derivative of the phase with respect to frequency:

    τ_d(ω) = dφ(ω)/dω                    (6.4.3)

(note: φ must be in radians!). Now it becomes evident why we have had to ‘unwrap’ the
circular phase function: each 2π discontinuity would, when differentiated, produce a
very high, sharp spike in the envelope delay.
Numerical differentiation can be performed by simply taking the difference of
each pair of adjacent elements for both the phase and the frequency vector:

dphi=phi(2:1:301)-phi(1:1:300);
dw=w(2:1:301)-w(1:1:300);

But Matlab has a built in command called DIFF, so let us use it:

tau=diff(phi)./(diff(w));

By doing so we run into an additional problem. Numerical differentiation
assigns a value to each difference of two adjacent elements, so if we started from N
elements the differentiation will return N − 1 differences. Since each difference is
assigned to the interval between two samples, instead of to the samples themselves, this
results in a half interval delay when the result is displayed against the original
frequency vector w.
If we have low density data we should compensate for this by redefining w. For a
linearly scaled frequency vector we would simply take the arithmetic mean,
w=(w(2:1:N)+w(1:1:N-1))/2. But for a log-scaled frequency vector, a geometric
mean (the square root of a product) is needed, as in the example below:

w1=sqrt(w(1:1:300).*w(2:1:301));
semilogx(w1,tau) % see the result in Fig.6.4.8.
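The geometric-mean midpoint can be checked on a single-pole example in NumPy (a sketch of our own; the analytic envelope delay of F = 1/(1 + jω) is −1/(1 + ω²)):

```python
import numpy as np

w = np.logspace(-1, 1, 301)            # log-spaced frequency vector
phi = -np.arctan(w)                    # phase of F = 1/(1 + jw)
tau = np.diff(phi) / np.diff(w)        # numerical derivative, N - 1 points
w1 = np.sqrt(w[:-1] * w[1:])           # geometric-mean midpoint frequencies
err = np.max(np.abs(tau - (-1 / (1 + w1**2))))
print(err < 1e-3)                      # -> True
```

The numerical delay, displayed at the geometric midpoints, matches the analytic value to well under the plotting resolution.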

Note that the values in variable tau are negative, reflecting the fact that the
system output is delayed in time. Since we call this response a ‘delay’ by definition, we
could use the absolute value. However, we prefer to keep the negative sign, because it
also reflects the sense of the phase rotation (see Fig. 6.4.3 and 6.4.7). An upward
rotating phase (or a counter-clockwise rotation in the Bode plot of the complex
frequency response) would imply a positive time delay or output before input and,
consequently, an unstable or oscillatory system.


Fig. 6.4.8: The envelope (group) delay vs. frequency of the 5th-order Butterworth
system. The delay is largest at the frequencies where the phase has the greatest slope.

So far we have derived the phase and group delay functions from the complex
response to imaginary frequency. There are times, however, when we would like to
save either processing time or memory requirement (as in embedded instrumentation
applications). It is then advantageous to calculate the phase or the group delay directly
from the system poles (and zeros, if any) and the frequency vector.
The phase influence of a single pole p_k can be calculated as:

    φ_k(ω, p_k) = arctan [ (ω − ℑ{p_k}) / ℜ{p_k} ]             (6.4.4)

The influence of a zero is calculated in the same way, but with a negative sign.
The total system phase shift is equal to the sum of all the individual phase shifts of the
n poles and m zeros:

    φ(ω) = Σ_{k=1}^{n} φ_k(ω, p_k) − Σ_{i=1}^{m} φ_i(ω, z_i)    (6.4.5)

But owing to the inherent complex conjugate symmetry of poles and zeros, only
half of them need to be calculated and the result is then doubled. If the system order is
odd, the real pole is summed just once; the same is true for any real zero. This, of
course, requires some sorting of the system poles and zeros, but sorting is
performed much more quickly than the multiplication with ω, which is usually a lengthy vector.
If we are interested in getting data for a single frequency, or just two or three
characteristic points, then it might be faster to skip the sorting and calculate with all poles
and zeros. In Matlab, poles and zeros are already returned sorted. See the PHASE
routine, in which Eq. 6.4.4 and 6.4.5 were implemented.


Note also that the PHASE routine returns the ‘unwrapped’ phase
directly, so we do not have to resort to the EPHD routine.

function phi=phase(z,p,w)
% PHASE returns the phase angle of the system specified by the zeros
% z and poles p for the frequencies in vector w :
%
% Call : phi=phase(z,p,w);
%
% Instead of using angle(freqw(z,p,w)) which returns the phase
% in the range +/-pi, this routine returns the "unwrapped" result.
% See also FREQW, ANGLE, EPHD and GDLY.

% Author: Erik Margan, 890327, Last rev.: 980925, Free of copyright!

if nargin == 2
w = p ;
p = z ;
z = []; % A system with poles only.
end
if any( real( p ) > 0 )
disp('WARNING : This is not a Hurwitz-type system !' )
end
n = max( size( p ) ) ;
m = max( size( z ) ) ;
% find w orientation to return the result in the same form.
[ r, c ] = size( w ) ;
if c == 1
w = w(:).' ; % make it a row vector.
end
% calculate phase angle for each pole and zero and sum it columnwise.
phi(1,:) = atan( ( w - imag( p(1) ) ) / real( p(1) ) ) ;
for k = 2 : n
phi(2,:) = atan( ( w - imag( p(k) ) ) / real( p(k) ) ) ;
phi(1,:) = sum( phi ) ;
end
if m > 0
for k = 1 : m
phi(2,:) = atan( ( imag( z(k) ) - w ) / real( z(k) ) ) ;
phi(1,:) = sum( phi ) ;
end
end
phi( 2, : ) = [] ; % result is in phi(1,:)
if c == 1
phi = phi(:) ; % restore the form same as w.
end

A similar procedure can be applied to the group delay. The influence of a single
pole p_k is calculated as:

    τ_k(ω, p_k) = ℜ{p_k} / [ (ℜ{p_k})² + (ω − ℑ{p_k})² ]        (6.4.6)

As for the phase, the total system group delay is the sum of the delays of all
poles and zeros:

    τ_d(ω) = Σ_{k=1}^{n} τ_k(ω, p_k) − Σ_{i=1}^{m} τ_i(ω, z_i)   (6.4.7)


Again, owing to the complex conjugate symmetry, only half of the complex
poles and zeros need to be taken into account and the result doubled; the delay of
any real pole or zero is then added to it. The GDLY (Group DeLaY) routine
implements Eq. 6.4.6 and 6.4.7.

function tau=gdly(z,p,w)
% GDLY returns the group (envelope) time delay for a system defined
% by zeros z and poles p, at the chosen frequencies w.
%
% Call : tau=gdly(Z,P,w);
%
% Although the group delay is defined as a positive time lag,
% by which the system response lags the input, this routine
% returns a negative value, since this reflects the sense of
% phase rotation with frequency.
%
% See also FREQW, PATS, ABS, ANGLE, PHASE.

% Author: Erik Margan, 890414, Last rev.: 980925, Free of copyright!

if nargin == 2
w=p;
p=z;
z=[]; % system has poles only.
end
if any( real( p ) > 0 )
disp( 'WARNING : This is not a Hurwitz type system !' )
end
n=max(size(p));
m=max(size(z));
[r,c]=size(w);
if c == 1
w=w(:).' ; % make it a row vector.
end
tau(1,:) = real(p(1)) ./(real(p(1))^2 + (w-imag(p(1))).^2);
for k = 2 : n
tau(2,:) = real(p(k)) ./(real(p(k))^2 + (w-imag(p(k))).^2);
tau(1,:) = sum( tau ) ;
end
if m > 0
for k = 1 : m
tau(2,:)=-real(z(k)) ./(real(z(k))^2 + (w-imag(z(k))).^2);
tau(1,:) = sum( tau ) ;
end
end
tau(2,:) = [] ;
if c == 1
tau = tau(:) ;
end
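Eq. 6.4.4–6.4.7 can also be sketched compactly in NumPy for readers without Matlab (the function names are ours; a zero would enter each sum with the opposite sign). Here they are checked on a single real pole at s = −1, for which the phase is −arctan(ω) and the delay is −1/(1 + ω²):

```python
import numpy as np

def phase_p(p, w):
    # Eq. 6.4.4/6.4.5, poles only
    return sum(np.arctan((w - pk.imag) / pk.real) for pk in p)

def gdly_p(p, w):
    # Eq. 6.4.6/6.4.7, poles only; negative, as discussed above
    return sum(pk.real / (pk.real**2 + (w - pk.imag)**2) for pk in p)

p = [complex(-1.0, 0.0)]              # single real pole at s = -1
w = np.array([0.0, 1.0])
print(np.allclose(phase_p(p, w), [0.0, -np.pi / 4]))   # -> True
print(np.allclose(gdly_p(p, w), [-1.0, -0.5]))         # -> True
```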


6.5 Transient Response by Fourier Transform

There are several methods for time domain response calculation. Three of these
that are interesting from the system designer’s point of view, including the FFT method,
were compared for efficiency and accuracy in [Ref. 6.23]. Besides the high execution
speed, the main advantage of the FFT method is that we do not even have to know the
exact mathematical expression for the system frequency response, but only the graph
data (i.e. if we have measured the frequency and phase response of a system). Although
the method was described in detail in [Ref. 6.23] we shall repeat here the most
important steps, to allow the reader to follow the algorithm development.
There are five difficulties associated with the discrete Fourier transform that we
shall have to solve:
a) the inability to transform some interesting functions (e.g., the unit step);
b) the correct treatment of the DC level in low pass systems;
c) preserving accuracy with as little spectral information input as possible;
d) finding to what extent our result is an approximation owing to the finite spectral density;
e) equally important, estimating the error owing to the finite spectral length.

Fig. 6.5.1: The impulse and step response of the 5th -order Butterworth system. The
impulse amplitude has been normalized to represent the response to an ideal,
infinitely narrow, infinite amplitude input impulse. The impulse response reaches the
peak value at the time equal to the envelope delay value at DC; this delay is also the
half amplitude delay of the step response. The step response first crosses the final
value at the time equal to the envelope delay maximum. Also the step response peak
value is reached when the impulse response crosses the zero level for the first time.
If the impulse response is normalized to have the area (the sum of all samples) equal
to the system DC gain, the step response would be simply a time integral of it.
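The caption’s last remark can be checked with a small NumPy sketch (a single-pole example of our own, impulse response e^−t): scaling the impulse-response samples so that their sum equals the DC gain makes the step response a simple running sum.

```python
import numpy as np

dt = 0.001
t = np.arange(0, 10, dt)
h = np.exp(-t) * dt            # impulse response samples, area ~ 1 (DC gain)
g = np.cumsum(h)               # step response, ~ 1 - e^-t
print(abs(g[-1] - 1.0) < 1e-3) # -> True (settles to the final value)
```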


6.5.1 Impulse Response, Using FFT

The basic idea behind this method is that the Fourier transform is a special case
of the more general Laplace transform, and the Dirac impulse function is a special type
of signal for which the Fourier transform solution always exists. Comparing Eq. 1.3.8
and Eq. 1.4.3 and taking into account that s = σ + jω, we see:

    ℒ{f(t)} = ℱ{f(t) e^(−σt)}                    (6.5.1)

Since the complex plane variable s is composed of two independent parts (real
and imaginary), F(s) may be treated as a function of two variables, σ and ω. This
can be most easily understood by looking at Fig. 6.4.1, in which the complex frequency
response (magnitude) of a 5-pole Butterworth function is plotted as a 3D function over
the Laplace plane.
In that particular case we had:

    F(s) = s₁s₂s₃s₄s₅ / [(s − s₁)(s − s₂)(s − s₃)(s − s₄)(s − s₅)]    (6.5.2)

where s₁–s₅ have the same values as in the example at the beginning of Sec. 6.4.1.
When the value of s in Eq. 6.5.2 comes close to the value of one of the poles
s_i, the magnitude |F(s)| increases, becoming infinitely large for s = s_i.
Let us now introduce a new variable p such that:

    p = s|_(σ=0)     or:     p = jω              (6.5.3)

This has the effect of slicing the F(s) surface along the imaginary axis, as we
did in Fig. 6.4.1, revealing the curve on the surface along the cut, which is |F(jω)|, or
in words: the magnitude M(ω) of the complex frequency response. As we have
indicated in Fig. 6.4.5, we usually show it in a loglog scaled plot. However, for transient
response calculation a linear frequency scale is appropriate (as in Fig. 6.4.2), since we
need the result of the inverse transform in linear time scale increments.
Now that we have established the connection between the Laplace transformed
transfer function and its frequency response, we have another point to consider:
conventionally, the Fourier transform is used to calculate waveform spectra, so we need
to establish the relationship between a frequency response and a spectrum. We must
also explore the effect of taking discrete values (sampling) of the time domain and
frequency domain functions, and see to what extent we approximate our results by
taking finite length vectors of finite density sampled data. Those readers who would
like to embed the inverse transform in a microprocessor controlled instrument will have
to pay attention to amplitude quantization (finite word length) as well, but in Matlab
this is not an issue.
We have examined the Dirac function δ(t) and its spectrum in Part 1, Sec. 1.6.6.
Note that the spectral components are separated by Δω = 2π/T, where T is the
impulse repetition period. If we allow T → ∞ then Δω → 0. Under these conditions we
can hardly speak of discrete spectral components, because the spectrum has become
very dense; we rather speak of spectral density. Also, instead of the magnitude of
individual components we speak of the spectral envelope, which for δ(t) is essentially
flat.
However, if we do not have an infinitely dense spectrum, then Δω is small but
not 0, and this merely means that the impulse repeats after a finite period T = 2π/Δω
(this is the mathematical equivalent of testing a system by an impulse of a duration
much shorter than the smallest system time constant and of a repetition period much
larger than the largest system time constant).
Now let us take such an impulse and present it to a system having a selective
frequency response. Fig. 6.5.2 shows the results both in the time domain and the
frequency domain (magnitude). The time domain response is obviously the system
impulse response, and its equivalent in the frequency domain is a spectrum, whose
density is equal to the input spectral density, but with the spectral envelope shaped by
the system frequency response. The conclusion is that we only have to sample the
frequency response at some finite number of frequencies and perform a discrete Fourier
transform inversion to obtain the impulse response.

Fig. 6.5.2: Time domain and frequency domain representation of a 5-pole Butterworth system
impulse response. The spectral envelope (only the magnitude is shown here) of the output is
shaped by the system frequency response, whilst the spectral density remains unchanged. From
this fact we conclude that the time domain response can be found from a system frequency
response using inverse Fourier transform. The horizontal scale is the number of samples (128 in
the time domain and 64 in the frequency domain — see the text for the explanation).


If we know the magnitude and phase response of a system at some finite number
of equally spaced frequency points, then each point represents:

    F_i = M_i cos(ω_i t + φ_i)                   (6.5.4)

As the contribution of frequency components which are attenuated by more
than, say, 60 dB can be neglected, we do not have to take into account an infinitely large
number of frequencies, and the fact that we do not have an infinitely dense spectrum
merely means that the input impulse repeats in time. By applying the superposition
theorem, the output is then equal to the sum of all the separate frequency components.
Thus for each time point the computer must perform the addition:

    f(t_k) = Σ_{i(ω_min)}^{i(ω_max)} M_i cos(ω_i t_k + φ_i)      (6.5.5)

Eq. 6.5.5 is the discrete Fourier transform, with the exponential part
expressed in trigonometric form. However, if we were to plot the response calculated
by Eq. 6.5.5, we would see that the time axis is reversed, and from the theory of
Fourier transform properties (the symmetry property, [Ref. 6.14, 6.15, 6.18]) we know that
the application of two successive Fourier transforms returns the original function, but
with the sign of the independent variable reversed:

    ℱ{ℱ{f(t)}} = ℱ{F(jω)} = f(−t)               (6.5.6)

or, more generally:

    f(t) → F(jω) → f(−t) → F(−jω) → f(t)        (6.5.7)

where each forward arrow denotes ℱ and each step can be reversed by ℱ⁻¹.

The main drawback in using Eq. 6.5.5 is the high total number of operations,
because there are three input data vectors of equal length (ω, M, φ) and each contributes
to every time point result. It seems that greater efficiency might be obtained by using
the input frequency response data in the complex form, with the frequency vector
represented by the index of the F(jω) vector.
Now F(jω) in its complex form is a two sided spectrum, as was shown in
Fig. 6.4.3, and we are often faced with only a single sided spectrum. It can be shown
that a real valued f(t) will always have F(jω) symmetrical about the real axis σ. Thus:

    F_N(−jω) = F_P*(jω)                          (6.5.8)

F_N and F_P are the ω < 0 and ω > 0 parts of F(jω), with their inverse
transforms labeled f_N(t) and f_P(t). Note that F_P* is the complex conjugate of F_P.
This symmetry property follows from the definition of the negative frequency
concept: instead of having a single phasor rotating counter-clockwise (positive by
definition) in the complex plane, we can always have two half amplitude phasors
rotating in opposite directions at the same frequency (as we have already seen drawn in
Part 1, Fig. 1.1.1; for vector analysis see Fig. 6.5.3). We can therefore conclude that the


inherent conjugate symmetry of the complex plane allows us to define ‘negative
frequency’ as a clockwise rotating, half amplitude phasor, being the complex conjugate
of the usual counter-clockwise (positive by definition) rotating (but now also half
amplitude) phasor. And this is not just a fancy way of making simple things complex,
but rather a direct consequence of our dislike of the sine–cosine representation and
our preference for the complex exponential form, which is much simpler to handle
analytically.
One interesting aspect of the negative frequency concept is the Shannon
sampling theorem: for a continuous signal, sampled with a frequency f_s, all the
information is contained within the frequency range between 0 and f_s/2, because the
spectrum from f_s/2 to f_s is a mirror image; the spectrum is thus symmetrical about f_s/2,
the Nyquist frequency. Therefore a frequency equal to f_s cannot be distinguished from
a DC level, and any frequency f between f_s/2 and f_s cannot be distinguished from f_s − f.
But please note that this ‘negative frequency’ does not necessarily imply
‘negative time’, since negative time is defined as the time before some arbitrary
instant t = 0 at which the signal was applied. In contrast, the negative frequency
response is just one half of the full description of the t ≥ 0 signal.
However, those readers who are going to explore the properties of the Hilbert
transform will learn that this same concept can be extended to the t < 0 signal region,
but this is beyond the scope of this text.
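The aliasing statement above is easy to demonstrate with a short NumPy sketch (the numbers are our own): a cosine at f and a cosine at f_s − f, both sampled at f_s, produce identical sample sets.

```python
import numpy as np

fs = 100.0                 # sampling frequency
f = 18.0                   # tone below fs/2
n = np.arange(32)          # sample indices
x1 = np.cos(2 * np.pi * f * n / fs)
x2 = np.cos(2 * np.pi * (fs - f) * n / fs)   # its alias above fs/2
print(np.allclose(x1, x2))                   # -> True
```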


Fig. 6.5.3: As in Part 1, Fig. 1.1.1, but from a slightly different perspective: a) the real signal
instantaneous amplitude a(t) = A sin φ, where φ = ωt; b) the real part of the instantaneous signal
phasor, ℜ{A⃗} = a⃗ = A sin φ, can be decomposed into two half amplitude, oppositely rotating,
complex conjugate phasors, (A/2j)e^(jφ) − (A/2j)e^(−jφ). The second term has rotated by
−φ = −ωt and, since t is obviously positive (see the a) graph), the negative sign is attributed to ω;
thus, clockwise rotation is interpreted as a ‘negative frequency’.


Eq. 6.5.8 can thus be used to give:

    F(jω) = F_P(jω) + F_P*(−jω)                  (6.5.9)

but from Eq. 6.5.7 we have:

    ℱ⁻¹{F_P*(−jω)} = f_P*(t)                     (6.5.10)

hence using Eq. 6.5.9 and Eq. 6.5.10 and taking into account the cancellation of the
imaginary parts, we obtain:

    f(t) = f_P(t) + f_P*(t) = 2 ℜ{f_P(t)}        (6.5.11)

Eq. 6.5.11 means that if F_P(jω) is the ω ≥ 0 part of the Fourier transformed
real valued function f(t), its Fourier transform inversion f_P(t) will be a complex
function whose real part is equal to f(t)/2. Summing the complex conjugate pair
results in a doubled real valued f(t). So by Eq. 6.5.10 and Eq. 6.5.11 we can calculate
the system impulse response from just one half of its complex frequency response
using the forward Fourier transform (not the inverse!):

    f(t) = 2 ℜ{ [ ℱ{F_P*(jω)} ]* }               (6.5.12)

Note that the second (outer) complex conjugate is here only to satisfy
mathematical consistency — in the actual algorithm it can be safely omitted, since only
the real part is required.
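Eq. 6.5.12 can be sketched numerically in NumPy (the scaling, the half-weight of the DC sample, and the variable names are our own assumptions, for readers without Matlab): the impulse response is recovered from the single-sided spectrum alone, through a forward FFT of its conjugate. We test it on F(jω) = 1/(1 + jω), whose impulse response is e^(−t):

```python
import numpy as np

N = 8192
dw = 0.05
w = dw * np.arange(N)                    # single-sided frequency vector
F = 1.0 / (1.0 + 1j * w)
F[0] = 0.5                               # half-weight DC sample (trapezoid-like)
# forward FFT of the conjugated single-sided spectrum, per Eq. 6.5.12;
# the factor dw/pi absorbs the 2 and the 1/(2 pi) of the inverse integral
f = (dw / np.pi) * np.real(np.fft.fft(np.conj(F)))
t = 2 * np.pi * np.arange(N) / (N * dw)  # matching time vector
m = np.argmin(np.abs(t - 1.0))           # sample nearest t = 1
print(abs(f[m] - np.exp(-t[m])) < 0.01)  # -> True
```

The residual error comes from the finite spectral length (truncation at ω_max = N·Δω) and density, exactly the approximations discussed at the beginning of Sec. 6.5.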
As the operator ℱ{} in Eq. 6.5.12 implies integration, we must use the discrete
Fourier transform (DFT) for computation. The DFT can be defined by decomposing
the Fourier transform integral into a finite sum of N elements:

    F(k) = (1/N) Σ_{i=0}^{N−1} f(i) e^(−j2πik/N)      (6.5.13)

That would mean going again through a large number of operations, comparable to
Eq. 6.5.5. Instead we can apply the Fast Fourier Transform (FFT) algorithm and,
owing to its excellent efficiency, save the computer a great deal of work.
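Eq. 6.5.13 can be written out directly and checked against a library FFT; here is a NumPy sketch (np.fft.fft uses the same kernel, but without the 1/N factor):

```python
import numpy as np

N = 16
f = np.random.rand(N)                          # arbitrary real test signal
k = np.arange(N)
i = np.arange(N)
W = np.exp(-2j * np.pi * np.outer(k, i) / N)   # DFT matrix, per Eq. 6.5.13
F_dft = (W @ f) / N                            # direct, O(N^2) evaluation
print(np.allclose(F_dft, np.fft.fft(f) / N))   # -> True
```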
Cooley and Tukey [Ref. 6.16] have shown that if N = 2^B, with B an integer, there is
a smart way to use Eq. 6.5.13, owing to the periodic nature of the Fourier transform.
If Eq. 6.5.13 is expressed in matrix form, the matrix which represents its
exponential part can be divided into its even and odd parts, and the even part can be
assigned to N/2 elements. The remaining part can then also be divided as before, and
the same process can be repeated over and over again, so that we end up with
log₂N individual sub-matrices. Furthermore, it can be shown that these
sub-matrices contain only two non-zero elements, one of which is always unity (1 or j).

Therefore multiplying by each of the factor matrices requires only N complex
multiplications.
Finally (or firstly, depending on whether we are using the ‘decimation in
frequency’ or the ‘decimation in time’ algorithm), we rearrange the data by writing the
position of each matrix element in binary form and reordering the matrix by
reversing the binary digits (this operation is often referred to as ‘reshuffling’).
The total number of multiplications is thus N log₂N, instead of the N² required
to multiply in one step. The other operations are simple and fast to execute (addition,
change of sign, change of order). Thus in the case of B = 10, N = 1024, and
N² = 1048576 whilst N log₂N = 10240, so a reduction of the required number of
multiplications by two orders of magnitude has been achieved.
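The operation counts quoted above are quickly verified in plain Python:

```python
from math import log2

B = 10
N = 2 ** B                 # 1024 points
print(N ** 2)              # -> 1048576 (direct, one-step DFT)
print(int(N * log2(N)))    # -> 10240  (FFT)
```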
Matlab has a command named ‘FFT’ which uses the ‘radix-2’ type of algorithm
and we shall use it as it is. Those readers who would like to implement the FFT
algorithm for themselves can find the detailed treatment in [Ref. 6.16, 6.17 and 6.19].
A property of the FFT algorithm is that it returns the spectrum of a real valued
signal as folded about the Nyquist frequency (one half of the frequency at which the
signal was sampled). As we have seen in Fig. 6.5.2, if we have taken 128 signal
samples, the FFT returns the first 64 spectral components for ω = 0, 1, 2, …, 63, but
then the remaining 64 components, which are the complex conjugates of the first ones,
are in reversed order.
This is in contrast to what we were used to in analytical work, as we expect
the complex conjugate part to be on the ω < 0 side of the spectrum. On the other hand,
this is equivalent, since the 128 samples taken in the signal time domain window are
implicitly assumed to repeat, and consequently the spectrum also repeats every 128
samples. So if we use the standard inverse FFT procedure we must take into account all
128 spectral components to obtain only 64 signal samples back. However, note
that Eq. 6.5.12 requires only a single-sided spectrum of N points to return N
points of the impulse response. This is, clearly, an additional two-fold improvement
in algorithm efficiency.
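The folding described above is easy to verify in NumPy (our own sketch): for a real N-point signal, spectral component N − k is the complex conjugate of component k.

```python
import numpy as np

N = 128
x = np.random.rand(N)            # arbitrary real signal
X = np.fft.fft(x)
k = np.arange(1, N // 2)         # skip DC and the Nyquist bin
print(np.allclose(X[N - k], np.conj(X[k])))   # -> True
```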


6.5.2 Windowing

For calculating the transient response a further reduction in the number of
operations is possible through the use of ‘windowing’. Windowing means multiplying
the system response by a suitable window function. We shall use windowing in the
frequency domain; the reason for this we have already discussed when considering
to what extent the DFT is an approximation. Like many others before us, in
particular the authors of the various window functions, we too have found that the
accuracy improves if the influence of the higher frequencies is reduced.
Since the frequency response of a high order system (third-order or greater) falls
off quickly above the cut off frequency, we can take just N = 256 frequency samples,
and after the inverse FFT we still obtain a time domain response with an accuracy equal to
or better than the vertical resolution of the VGA type of graphics (1/400 or 0.25%). And
as the sample density (number of points) of the transient wavefront increases with the
number of stop band frequency samples, it is clear that the smaller the contribution of
the higher frequencies, the greater is the accuracy.
But, in order to achieve a comparable accuracy with 1st- and 2nd-order systems,
we would have to use N₁ = 4096 and N₂ = 1024 frequency samples, respectively.
Thus, since we would like to minimize the length of the frequency vector, low order
systems need to be artificially rolled off at the high end. This can be done by
multiplying the frequency response by a suitable window function, element by element,
as shown in Fig. 6.5.4. The window function used in Fig. 6.5.4 (and also in the TRESP
routine) is a real valued Hamming type of window (note that we need only its right
hand half, since we use a single-sided spectrum; the other half is implicitly used owing
to Eq. 6.5.12).

W=0.54-0.46*cos(2*pi*(N+1:1:2*N)/(2*N)); % right half Hamming window

The physical effect of applying the Hamming window is similar to the
implementation of an additional non-dominant pole of high order at about 12 ωh and a
zero at about 25 ωh. Multiplication in the frequency domain is equivalent to convolution in
the time domain, and vice-versa; however, note that the window function is real, thus no
phase distortion occurs!
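The claim that a real window cannot distort the phase is easy to verify numerically. The book's routines are written in Matlab; the following sketch is our Python/NumPy transcription of the window line above, applied to an assumed 1st-order test response:

```python
import numpy as np

N, m = 256, 8
w = np.arange(N) / m                    # normalized frequency vector
F = 1.0 / (1.0 + 1j * w)                # a 1st-order low pass test response

# right half of a Hamming window, indices N+1 ... 2N as in the Matlab line
k = np.arange(N + 1, 2 * N + 1)
W = 0.54 - 0.46 * np.cos(2 * np.pi * k / (2 * N))

Fw = W * F                              # windowed frequency response

# W is real and positive, so only the magnitude changes:
print(np.allclose(np.angle(Fw), np.angle(F)))   # True
print(round(W[0], 3), round(W[-1], 3))          # 1.0 0.08
```

The window falls from nearly 1 at DC to 0.08 at the last sample, providing the artificial high frequency roll off without touching the phase.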
After extensive experimentation with different types of windows, it was found
that the Hamming window yields the lowest error, most notably at the first few points
of the impulse response. This is understandable, since the FFT of this window has the
lowest spectral side lobes, and the same is true for the inverse transform. For the same
reason the final value error of the 1st- and 2nd-order system will also be reduced.
Note that the first point of the 1st-order impulse response will be in error
anyway, because we have started from a spectrum of finite density, in which the
distance between adjacent spectral components was equal to 1/N. The time domain
equivalent of this is an impulse whose amplitude is finite (N) and whose width is 1/N (so
that the impulse area is equal to 1). This means that the response rise time is not
infinitely short, so its first point will be smaller than expected, the error being
proportional to 1/N.


If the system order and type could be precisely identified, this error might be
corrected by forcing the first point of the 1st-order impulse response to a value equal to
the sampling time period multiplied by the system DC gain, as has been done in the
TRESP routine.
In Sec. 6.5.6 we give a more detailed error analysis for the 1st-, 2nd- and 3rd-order
systems, for both windowed and non-windowed spectra.
[Plot: normalized amplitude vs. frequency w = (0:1:255)/8, showing the Hamming window W, the magnitude abs(F(jw)) and the windowed magnitude abs(W.*F(jw))]

Fig. 6.5.4: Windowing example. The 1st-order frequency response (only the
magnitude is shown on the plot) is multiplied element by element by the Hamming
type of window function in order to reduce the influence of high frequencies
and improve the impulse response calculation accuracy. Note that the window
function is real only, affecting equally the system real and imaginary part; thus
the phase information is preserved and only the magnitude is corrected.


6.5.3 Amplitude Normalization

The impulse response, as returned by Eq. 6.5.12, has the amplitude such that the
sum of all the N values is equal to N times the system gain. Also, if we are dealing
with a low pass system and the first point of its frequency response (the DC component)
is F(0), then the impulse response will be shifted vertically by F(0) (the DC
component is added). Thus we must first cancel this 'DC offset' by subtracting F(0)
and then normalize the amplitude by dividing by N:

   fn(t) = (f(t) - F(0)) / N          (6.5.14)
By default, the TRESP routine (see below) returns the impulse amplitude in the
same way, representing a unity gain system's response. Optionally, we can denormalize
it as if the response were caused by an ideal, infinitely high impulse; then the 1st-order
response starts the exponential decay from a value very close to one, as it should. If the
system's half power bandwidth, ωh, is found at the m+1 element of the frequency
response vector, the amplitude denormalization factor will be:

   A = N / (2πm)          (6.5.15)

The 2π factor comes as a bit of a surprise here. See Sec. 6.5.5 about time scale
normalization for an explanation.
The term m can be entered explicitly, as a parameter. But it can also be derived
from the frequency vector by finding the index at which it is equal to 1, or it can be
found by examining the magnitude and finding the index of the point nearest to the half
power bandwidth value (in both cases the index must be decremented by 1).
Another problem can be encountered with high order systems, which exhibit a
high degree of ringing, e.g., Chebyshev systems of order 8 or greater. If m<8, some
additional ringing is introduced into the time domain response. This ringing results
from the time frame implicitly repeating with a period T = 2π/Δω, where Δω
describes the finite spectral density of the input data. If we have specified the system cut off
frequency too near to the origin of the frequency vector, it would cause a time scale
expansion. Thus overlapping of adjacent responses will introduce distortion if the
impulse response has not decayed to zero by the end of the period T. Therefore the
choice of placing the cut off frequency relative to the frequency vector is a compromise
between the pass band and stop band description. In Matlab, the frequency vector of N
linearly spaced frequencies, normalized to 1 at its m+1 element, can be written as:

N=256; m=8; w=(0:1:N-1)/m;

The variable m specifies the normalized frequency unit. The transient response
of both Butterworth and Bessel systems can be calculated with good accuracy by using
a frequency vector normalized to 1 at its 5th sample (m=4). But by placing the cutoff
frequency at the 9th sample (m=8) of a frequency vector of length 256, an acceptably
low error will be achieved even for a 10th -order Chebyshev system. For higher order,
high ringing systems one will probably need to increase the frequency vector to 512 or
1024 elements in order to prevent time window overlapping.
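The bookkeeping of this normalization can be made explicit in a short Python/NumPy transcription (our substitution for Matlab; note the zero-based indexing, so the Matlab element m+1 becomes index m), using the same N and m as in the text:

```python
import numpy as np

N, m = 256, 8
w = np.arange(N) / m              # Matlab: w=(0:1:N-1)/m
print(w[m])                       # 1.0 -- the cutoff sits at the (m+1)-th element

dw = w[1] - w[0]                  # spectral density, delta-omega = 1/m
T = 2 * np.pi / dw                # implicit time domain repetition period
print(dw, round(T, 2))            # 0.125 50.27
```

Increasing m pushes the cutoff closer to the origin of the frequency vector and stretches the implicit period T, which is exactly the trade off described above.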


6.5.4 Step Response

Up to this point we have obtained the impulse response. The step response is
not available directly from the Fourier transform (if the unit step is integrated, the
integral diverges to ∞; this is why we prefer the more general Laplace transform
for analytical work). However, from signal analysis theory we know that the response to
an arbitrary input signal can be found by convolving it with the system's impulse
response (for convolution, see Part 1, Sec. 1.15 and Fig. 1.15.1; see Part 7, Sec. 7.2 for
the numerical algorithm). With the unit step as the input signal, the convolution
reduces to a simple time domain integration of the system impulse response.
But this integration must give us the final step response value equal, or at least
very close, to the system DC gain. So we must use the impulse response normalized to
obtain the sum of all its elements equal to the system’s gain. Numerical integration can
be done by cumulative summing. This means that the first element is transferred
unchanged, the second element is the sum of the first two, the third of the first three,
and so on, up to the last element which is the sum of all elements. The CUMSUM
command in Matlab will return in Y a cumulative sum of vector X like this:

% the CUMulative-SUM example :
Y(1)=X(1);              % transfer the first sample unchanged
for k=2:max(size(X))
Y(k)=Y(k-1)+X(k);       % next sample is the sum of all previous
end

However, inside the Matlab command interpreter, this for-loop executes
considerably more slowly, owing to vector remapping; thus we shall use the built-in
CUMSUM command in the TRESP routine.
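The same cumulative summation exists in NumPy as cumsum; a toy impulse response (made up for illustration only) shows how the running sum approaches the DC gain:

```python
import numpy as np

h = np.array([0.4, 0.3, 0.2, 0.1])   # toy impulse response, area = 1 (unity gain)
s = np.cumsum(h)                     # numerical integration -> step response
print(s)                             # [0.4 0.7 0.9 1. ]
```

The last element equals the sum of all impulse response samples, which is why the impulse response must first be normalized so that this sum equals the system gain.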
An interesting problem arises with the first-order system, as well as any system
with zeros, since the first sample of the impulse response will be non-zero, therefore the
step response will also start from above zero, which is not so in the real world. We can
solve this problem by artificially adding a zero valued first element to the impulse
response vector, but we then have to be careful not to increment the total number of
elements by one. That is because we want to keep the sum of the original impulse
response vector, divided by the number of elements, equal to the system gain (the
increased number of elements affects the normalization).
Also, as we have seen in the envelope delay derived from the phase response,
the numerical differentiation, owing to a finite number of elements taken into account,
results in a one half sample shift. The numerical integration, being an inverse process,
does the same thing, even if it does not increase the number of elements by one. The
integration shift is, however, in the opposite direction.
Both these problems will be treated in the next section dealing with time scale
normalization.


6.5.5 Time Scale Normalization

When we plot a time domain response we want to be able to correlate it to the
system's envelope delay, so we need to specify the time scale normalization factor. This
factor depends on how many samples of the frequency response were used in the FFT
and which sample was at the system's cut off frequency.
We have already seen that a Fourier transformed signal has a spectrum periodic
in frequency, the period being ωs = 2π/Δt, thus inversely proportional to the sampling
period. Also, the spectral density Δω reflects the overall time domain repetition period,
TN = 2π/Δω, which represents the size of the time domain window. Note that when
we specify the single-sided spectrum, we usually do it from 0 up to the Nyquist
frequency ωN = ωs/2 = N·Δω.
Now our frequency vector has been defined as: w=(0:1:N-1)/m. Obviously,
at its m+1 element w(m+1)=1, which is the numerical equivalent of the normalized cut
off frequency ωh = 1. Also, Δω can be found as w(2)-w(1), but w(1)=0 and
w(2)=1/m, so we can say that Δω = ωh/m. Since the Nyquist frequency is
ωN = N·Δω, and the sampling frequency is twice that, ωs = 2N·Δω, the sampling
time interval is Δt = 1/ωs = 1/(2N·Δω) = m/(2N·ωh).

But remember that Eq. 6.5.12 allows us to improve our algorithm, obtaining N
time samples from N frequency samples, a result which we would otherwise get from
2N frequency samples. So we have Δt = m/(N·ωh). Also remember that we have
calculated the frequency response using a normalized frequency vector, ω/ωh, and the
term ωh = 1 was effectively replaced by fh = 1, losing 2π. Our sampling interval
should therefore be:

   Δt = 2πm/N          (6.5.16)
You may have noted that this is exactly the inverse of the amplitude
denormalization factor, Eq. 6.5.15, Δt = 1/A. This is not just a strange coincidence!
Remember Fig. 6.5.2: the amplitude and the width of the input impulse were set so that
A·Δt = N, with N being also the time domain vector's length, so if Δt = 1 then
A = N. For the unity gain system the response must contain the same amount of
energy as the input, thus the sum of all the response values must also be equal to N. Of
course, N is a matter of choice, but once its value has been chosen it is a constant. So it
is only the system bandwidth, set by the factor m, that will determine the relationship
between the response amplitude and its time scale.
Therefore, after Eq. 6.5.16:

dt=2*pi*m/N; % delta-t
t=dt*(0:1:N-1); % normalized length-N time vector
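The reciprocity between A (Eq. 6.5.15) and Δt (Eq. 6.5.16) is worth a quick numerical sanity check; this little Python fragment (our substitution for Matlab) uses the same N and m as in the text:

```python
import numpy as np

N, m = 256, 8
A = N / (2 * np.pi * m)       # amplitude denormalization factor, Eq. 6.5.15
dt = 2 * np.pi * m / N        # sampling time interval, Eq. 6.5.16
print(round(A * dt, 12))      # 1.0 -- dt is indeed 1/A

t = dt * np.arange(N)         # the normalized time vector
print(round(t[-1], 2))        # 50.07, just short of the period T = 2*pi*m
```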

We may check the time normalization easily by calculating the impulse
response of a first-order system and comparing it to the well known reference, the
analytical first-order low pass RC system impulse response, which is just:

   fr(t) = e^(-t/RC)          (6.5.17)


Now, normalizing the time scale means showing it in increments of the system
time constant (RC, 2RC, 3RC, ...). Thus we simply set RC = 1. For the starting
sample at t = 0, the response f(0) = 1, so in order to obtain the response of a unity
gain system excited by a finite amplitude impulse we must denormalize the amplitude
(see Eq. 6.5.15) by 1/A:

   fr(t) = (1/A) e^(-t)          (6.5.18)

z=[]; % no zeros,
p=-1; % just a single real pole
N=256; % total number of samples
m=8; % samples in the frequency unit
w=(0:1:N-1)/m; % the frequency vector
dt=2*pi*m/N; % sampling time interval = 1/A
t=dt*(0:1:N-1); % the time vector
F=freqw(z,p,w); % the frequency response
In=(2*real(fft(conj(F)))-1)/N; % the impulse response
Ir=dt*exp(-t); % 1st-order ref., denormalized
plot( t, Ir, t, In )
title('Ideal vs. windowed response'), xlabel('Time')
plot( t(1:30), Ir(1:30), t(1:30), In(1:30) )
title('Zoom first 30 samples')

In the above example (see the plot in Fig. 6.5.5), we see that the final values of
the normalized impulse response In do not approach zero, and by zooming in on the first
30 points we can also see that the first point is too low and the rest somewhat lower
than the reference. Windowing can correct this:

W=0.54-0.46*cos(2*pi*(N+1:2*N)/(2*N)); % right half Hamming window


Iw=(2*real(fft(conj(F.*W)))-1)/N; % impulse, windowed fr.resp.
plot( t, Ir, t, Iw ), xlabel('Time')
title('Ideal vs. windowed response')
plot( t(1:30), Ir(1:30), t(1:30), Iw(1:30) )
title('Zoom first 30 samples')

This plot fits the reference much better. But the first point is still far too low.
From the amplitude denormalization factor, by which the reference was multiplied, we
know that the correct value of the first point should be 1/A = Δt. So we may force the
first point to this value; but, by doing so, we would alter the sum of all values by
N*(dt-Iw(1)). In order to obtain the correct final value of the step response, the impulse
response requires the correction of all points by 1/(1+(dt-Iw(1))/N), as in the
following example:

% the following correction is valid for 1st-order system only !!!
er1=dt-Iw(1); % the first point error
Iw(1)=dt; % correct first-point amplitude
% note that with this we have altered the sum of all values by er1,
% so we should modify all the values by :
Iw=Iw*(1/(1+er1/N));
Ir=Ir*(1/(1+er1/N));
% the same could also be achieved by : Ix=Ix/sum(Ix);
plot( t(1:30), Ir(1:30), t(1:30), Iw(1:30) ), title('Zoom first 30')
plot( t, (Iw-Ir) ), title('Impulse response error plot')
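For readers without Matlab, the non-windowed part of this experiment can be reproduced in Python/NumPy (the freqw call is replaced by writing out the single-pole response directly; this is our transcription, not the book's listing):

```python
import numpy as np

N, m = 256, 8
w = np.arange(N) / m
F = 1.0 / (1.0 + 1j * w)                  # single real pole at -1, no zeros
dt = 2 * np.pi * m / N                    # sampling interval = 1/A
t = dt * np.arange(N)

In = (2 * np.real(np.fft.fft(np.conj(F))) - 1) / N   # impulse response
Ir = dt * np.exp(-t)                      # denormalized analytic reference

print(round(In.sum(), 6))                 # 1.0 -- the area equals the DC gain
print(In[0] < Ir[0])                      # True -- the first point is too low
print(np.allclose(In[1:40], Ir[1:40], atol=0.02))    # True
```

The sum of all samples lands exactly on the DC gain (this follows from the DFT identity, independently of truncation), while the first point comes out low, just as described above.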


Likewise we can compare the calculated step response. Our reference is then:

   fr(t) = 1 - e^(-t)          (6.5.19)

But if the first-order impulse response is numerically integrated, the value of the
first sample of the step response will be equal to the value of the first sample of the
impulse response instead of zero, as it should be in the case of a low pass LTIC system.

Also, there is an additional problem resulting from numerical integration, which
manifests itself as a one half sample time delay. Remember what we observed
when we derived the envelope delay from the phase: numerical differentiation
assigned each result point to each difference pair of the original data, so that the
resulting vector was effectively shifted left in (log-scaled) frequency by a (geometrical)
mean of two adjacent frequency points, √(ωn·ωn+1). Because we work with a linear
scale, the shift is the arithmetic mean, Δω/2. Since numerical integration is the
inverse process of differentiation, the signal is shifted right. However, whilst the
differentiation vector had one sample less, the numerical integration returns the same
number of samples, not one more.

So, in order to see the actual shape of the error, we have to compensate for this
shift of one half sample. We can do this by artificially adding a leading zero to the
impulse response vector, then cumulatively summing the resulting N + 1 elements, and
finally taking the mean value of this and the version shifted by one sample, as in the
example below, which uses the vectors from above (see the result in Fig. 6.5.6):

Sr=1-exp(-t); % 1st-order step response reference
Sw=cumsum([0, Iw]); % step resp. from impulse + leading zero
% compensate the one half sample delay by taking the mean :
Sw=(Sw(1:N)+Sw(2:N+1))/2;
Sw(1)=0; % correct the first value
plot( t, Sr, t, Sw ), title('Ideal vs. windowed step response')
plot( t(1:50), Sr(1:50), t(1:50), Sw(1:50) )
title('Zoom first 50 samples')
plot( t, (Sw-Sr) ), title('Step response error plot')

This compensation will lower the algorithm's efficiency somewhat, but
considering embedded applications, numerical addition is fast and division by 2 can
be done by shifting bits one binary place to the right, so the operation can still be
performed solely in integer arithmetic.
For the second-order system we can use the reference response which was
calculated in Part 2, Eq. 2.2.37 (see the error plots in Fig. 6.5.7 and 6.5.8.):
[z,p]=buttap(2); % 2nd-order Butterworth system
% the following variables are the same as before:
N=256; m=8; w=(0:1:N-1)/m; dt=2*pi*m/N; t=dt*(0:1:N-1);
F=freqw(z,p,w); % complex frequency response
W=0.54-0.46*cos(2*pi*(N+1:2*N)/(2*N)); % a right-half Hamming window
Iw=(2*real(fft(conj(F.*W)))-1)/N; % impulse resp., windowed fr.resp.
Sw=cumsum([0,Iw]); % numerical integration, step r.
Sw=(Sw(1:N)+Sw(2:N+1))/2; % compensate half sample t-delay
T=sqrt(2); % 2nd-order response constants,
theta=pi/4; % see example: Part 2, Eq.2.2.39.
Sr=1-T*exp(-t/T).*sin(t/T+theta); % the 2nd-order response reference
plot(t,Sr,t,Sw), title('Ideal vs. windowed step response')
plot(t(1:60),Sr(1:60),t(1:60),Sw(1:60)), title('Zoom samples 1-60')
plot( t, (Sw-Sr) ), title('Step response error plot')


Note that the TRESP routine allows us to enter the actual denormalized
frequency vector, in which case all (but the first one) of its elements might be greater
than 1. The normalized frequency unit m is then found from the frequency response, by
checking which sample is closest to abs(F(1))/sqrt(2), and then decremented by
1 to compensate for the frequency vector starting from 0. But in the case of a
denormalized frequency vector we should also denormalize the time scale, by dividing
the sampling interval by the actual upper cut off frequency, which is w(m+1).
To continue with our 5th -order Butterworth example, we can now calculate the
impulse and step response by using the TRESP routine in which we have included all
the above corrections:

[z,p]=buttap(5); % the 5th-order Butterworth system poles
w=(0:1:255)/8; % form a linearly spaced frequency vector
F=freqw(z,p,w); % the frequency response at w
[I,t]=tresp(F,w,'i'); % I : ideal impulse, t : normalized time
S=tresp(F,w,'s'); % S : step response ( time same as for I )
plot(t(1:100),I(1:100),t(1:100),S(1:100))
% plot 100 points of I and S vs. t

The results should look just like Fig. 6.5.1. Here is the TRESP routine:

function [y,t]=tresp(F,w,r,g)
%TRESP Transient RESPonse, using Fast Fourier Transform algorithm.
% Call : [y,t]=tresp(F,w,r,g);
% where:
% F --> complex-frequency response, length-N vector, N=2^B, B=int.
% w --> can be the related frequency vector of F, or it
% can be the normalized frequency unit index, or it
% can be zero and the n.f.u. index is found from F.
% r --> a character, selects the response type returned in y:
% - 'u' is the unity area impulse response (the default)
% - 'i' is the ideal impulse response
% - 's' is the step response
% g --> an optional input argument: plot the response graph.
% y --> the selected system response.
% t --> the normalized time scale vector.

% Author : Erik Margan, 880414, Last rev. 000310, Free of copyright!

% ----------- Preparation and checking the input data ------------


if nargin < 3
r='u'; % select the default response if not specified
end
G=abs(F(1)); % find system DC gain
N=length(F); % find number of input frequency samples
v=length(w); % get the length of w
if v == 1
m=w; % w is the normalized frequency unit or zero
elseif v == N % find the normalized frequency unit
m=find(abs(w-1)==min(abs(w-1)))-1;
if isempty(m)
m=0; % not found, try from the half power bandwidth
end
else
error('F and w are expected to be of equal length !');
end
if m == 0 % find the normalized frequency unit index
m=max(find(abs(F)>=G/sqrt(2)))-1;
end


% check magnitude slope between the 2nd and 3rd octave above cutoff
M=abs(diff(20*log10(abs(F(1+4*m*[1,2])))));
x=3; % system is 3rd-order or higher (>=18dB/2f)
if M < 9
x=1; % probably a 1st-order system (6dB/2f)
elseif M < 15
x=2; % probably a 2nd-order system (12dB/2f)
end

% ----------- Form the window function ---------------------------


if x < 3
W=0.54-0.46*cos(2*pi*(N+1:2*N)/(2*N)); % right half Hamming
F=W.*F; % frequency response windowed
end

% ----------- Normalize the time-scale ---------------------------


A=2*pi*m; % amplitude denormalization factor
dt=A/N; % calculate delta-t
if v == N
dt=dt/w(m+1); % correct for the actual frequency unit
end
t=dt*(0:1:N-1); % form the normalized time scale

% ----------- Calculate the impulse response ---------------------


y=2*real(fft(conj(F)))-G; % calculate iFFT and null DC offset
if x == 1
er1=A*G-y(1); % fix the 1st-point error for 1st-order system
y(1)=A*G;
end
if r == 'u' | r == 'U' | r == 's' | r == 'S'
y=y/N; % normalize area to G
end

% ----------- Calculate the step response ------------------------


if r == 's' | r == 'S'
if x == 1
y=y*(1/(1+er1/N)); % correct 1st-point error
end
y=cumsum([0, y]); % integrate to get the step response
if x > 1
y=(y(1:N)+y(2:N+1))/2; % compensate half sample t-delay
y(1)=0;
else
y=y(1:N);
end
end

% ----------- Normalize the amplitude to ideal -------------------


if r == 'i' | r == 'I'
y=y/A; % denormalize impulse amplitude
end

% ----------- Plot the graph -------------------------------------


if nargin == 4
plot( t, y, '-r' ), xlabel('Time')
if r == 'i' | r == 'I' | r == 'u' | r == 'U'
title('Impulse response')
else
title('Step response')
end
end


6.5.6 Calculation Errors

Whilst the amount of error in low order system impulse responses might seem
small, it would integrate to an unacceptably high level in the step responses if the input
data were not windowed. In Fig. 6.5.5 to 6.5.10 we have computed the difference
between analytically and numerically calculated impulse and step responses of
Butterworth systems, using both the normal and windowed frequency response for the
numerical method. Fig. 6.5.5 and 6.5.6 show the impulse and the step response of the
1st -order system, Fig. 6.5.7 and 6.5.8 show the 2nd -order system and Fig. 6.5.9 and
6.5.10 the 3rd -order system. The error plots, shown within each response plot, were
magnified 10 or 100 times to reveal the details.
The initial value error of the 1st-order system's impulse response, calculated from a
non-windowed frequency response, is about 0.08% and falls off quickly with time.
Nevertheless, it is about 10× higher than for the impulse response calculated from a
windowed response. If the frequency response is not windowed, the impulse response
error eventually integrates to almost 4% of the final value in the step response.
The error plots of the second and higher order systems are much smaller, but
they exhibit some decaying oscillations, independently of windowing. This oscillating
error is inherent in the FFT method and it is owed to the Gibbs' phenomenon (see
Part 1, Sec. 1.2). It can be easily shown that the frequency response of a rectangular
time domain window follows a (sin x)/x curve, and the equivalent holds for the time
response of the frequency domain rectangular window (remember Eq. 6.5.7). Since we
have deliberately truncated the system's frequency response at the Nth sample, we can
think of it as being a product of an infinitely long (but finite density) spectrum with a
rectangular frequency window. This results in a convolution of the system impulse
response with the (sin x)/x function in the time domain, hence the form of the error in
Fig. 6.5.7 to 6.5.10.
This error can be lowered by taking the transform of a longer frequency
response vector, but it can never be eliminated. For example, if N=2048 and we specify
the frequency vector as: w=(0:1:N-1)/8, the error will be 8 times lower than in the
case of a frequency vector with N=256, but the calculation will last more than 23 times
longer (the number of multiplications is proportional to N·log2(N), or 11 times, in
addition to 8 times the number of sums and other operations).
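The operation-count factors quoted above can be checked with a two-line calculation (the 23× figure itself is a measured wall clock ratio, so only the 11× and 8× factors are reproduced here; Python is our substitution for Matlab):

```python
from math import log2

N1, N2 = 256, 2048
print(int(N2 * log2(N2) / (N1 * log2(N1))))   # 11 -- ratio of multiplications
print(N2 // N1)                               # 8  -- ratio of sums and other ops
```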
As we have stated at the beginning, Sec. 6.0, our aim is to make quick
comparisons of the performance of several systems, and on the basis of those decide
which system suits us best. Since the resolution of the computer VGA type of graphics
is more than adequate for this purpose, and the response error in the case of N=256 can
be seen only if compared with the exact response (and even then only as a one pixel
difference), the extra time and memory requirements do not justify the improvement.
A better way of calculating the response more accurately, and directly from the
system poles and zeros, is described in the next section.


[Two plots of normalized amplitude vs. t/T0: the 1st-order impulse responses Ir (reference), In (from the normal frequency response) and Iw (from the windowed one) with the error traces En = abs(In-Ir) and Ew = abs(Iw-Ir), and the corresponding step responses Sr, Sn, Sw with their errors; the error scales are magnified]
Fig. 6.5.5 and 6.5.6: The first 30 points of a 256 sample long 1st-order impulse
and step response vs. the analytically calculated references. The error plots En
and Ew are enlarged 10×. Although the impulse response, calculated from the
normal frequency response, has a relatively small error, it integrates to an
unacceptably high value (4%) in the step response. In contrast, by windowing the
frequency response, both time domain errors are much lower; the step response
final value is in error by less than 0.2%.


[Two plots of normalized amplitude vs. t/T0: the 2nd-order impulse responses Ir, In, Iw with the error traces En = abs(In-Ir) and Ew = abs(Iw-Ir), and the step responses Sr, Sn, Sw with their errors]
Fig. 6.5.7 and 6.5.8: As in Fig. 6.5.5 and 6.5.6, but with 40 samples of a 2nd -order
Butterworth system. The impulse response error for the windowing procedure is higher
at the beginning, but falls off more quickly, therefore the step response final value error
is still much lower (note that the step response error plots are enlarged 100×). The
oscillations in error plots, owed to the Gibbs’ effect, also begin to show.


[Two plots of normalized amplitude vs. t/T0: the 3rd-order impulse responses Ir, In, Iw and step responses Sr, Sn, Sw, with the error traces En = abs(In-Ir) and Ew = abs(Iw-Ir)]

Fig. 6.5.9 and 6.5.10: As in Fig. 6.5.5–8, but with 50 samples of a 3rd -order
Butterworth system. Windowing does not help any longer and produces even
greater error. The dominant error is now owed to the Gibbs’ effect.


6.5.7 Code Execution Efficiency

The TRESP routine executes surprisingly fast. Back in 1987, when these Matlab
routines were developed and the first version of this text was written, I was using a
12 MHz PC with an i286-type processor, an i287 math coprocessor, and an EGA type of
graphics (640×400 resolution, 16 colors). To produce the 10 responses of Fig. 6.5.11,
starting from the system order, finding the system coefficients, extracting the poles,
calculating the complex frequency response, running the FFT to obtain the impulse
response, integrating for the step response and finally plotting it all on a screen, that old
PC took less than 12 seconds. Today (March 2000), a 500 MHz Pentium-III based
processor does it in a few tens of milliseconds (before you can release the ENTER key,
once you have pressed it; although it takes a lot more time for Matlab working under
Windows to open the graph window). And note that we are talking about floating point,
'double precision' arithmetic! Nevertheless, being able to make fast calculations has
become even more important for embedded instrumentation applications, which require
real time data processing and adaptive algorithms.

[Plot: ten normalized step responses vs. normalized time, labeled 1, 2, ... 9]

Fig. 6.5.11: Step responses of Bessel-Thomson systems (normalized to equal
envelope delay), of order 2 to 9, including the 1st-order step response for
comparison. Note the half amplitude delay approaching 1 and the bandwidth
improving (shorter rise time) as the system order increases. The TRESP
algorithm execution speed was tested by creating this figure.


6.6 Transient Response From Residues

The method of calculating the transient response by FFT, presented in Sec. 6.5,
has several advantages over other algorithms. The most important ones are high
execution speed, the possibility of computing from either a calculated complex
frequency response or from a measured magnitude–phase relationship, and the use of
the same FFT algorithm to work both time–to–frequency and frequency–to–time.
Its main disadvantage is the error resulting from the Gibbs’ effect, which
distorts the most interesting part of the time domain response. This error, although
small, can sometimes prevent the system designer from resolving or identifying the
cause of possible second-order effects that are spoiling the measured or simulated
system performance to which the desired ideal response is being compared. In such a
case the designer must have a firm reference, which should not be an approximation in
any sense.
The algorithm presented in this section, named ATDR (an acronym of
'Analytical Time Domain Response'), calculates the impulse and step responses by
following the same analytical method that has been used extensively in the previous
parts of this book. The routine calculates the residues at each system transfer function
pole, and then calculates the final response at the specified time instants. However, the
residues are not calculated by an actual infinitesimal limiting process, so it is not
possible to apply this routine in the most general case (e.g., it fails for systems with
coincident poles), but this restriction is not severe, since all of the optimized system
families are covered properly. Readers who would like to implement a rigorously
universal procedure can obtain the residues calculated by the somewhat longer
RESIDUE routine in Matlab.
In contrast to the FFT method, whose execution time is independent of system
complexity, this method works more slowly for each additional pole or zero.
A nice feature of this method is that the user has a direct control over the time
vector: the response is calculated at exactly those time instants which were specified by
the user. This may be important when making comparison with a measured response of
an actual system prototype.
As we have seen in numerous examples, solved in the previous parts, a general
expression for a residue at a pole :5 of an 8th -order system specified by Eq. 6.1.10 can
be written like this:
8 7
$ Ð:3 Ñ $ Ð:  D4 Ñ
3œ" 4œ"
<5 œ :p:
lim Ð:  :5 Ñ 8 † 7 e:5 > (6.6.1)
5
$ Ð:  :3 Ñ $ ÐD4 Ñ
3œ" 4œ"

Here 8 is the number of poles, 7 is the number of zeros, : is a vector whose


elements are the system’s poles :3 , :5 is the 5 th pole for which the residue <5 is
calculated, D4 are the zeros, whilst 3 and 4 are the indices.

-6.57-
P. Stariè, E. Margan Computer Algorithms

The terms Ð:  :5 Ñ cancel for each 3 œ 5 before limiting. If we now make


: œ :5 , without using the limiting process, the general applicability of Eq. 6.6.1 is lost,
but for all optimized system families (no coincident poles!) this will still be valid.
In the ATDR routine we form a vector Z of length 8, containing the products
over the index 4 of Ð:5  D4 Ñ — one Ð5 th Ñ element of Z for each residue — and divide
these by the product of all transfer function zeros (if there are any).
Next, we form a matrix D of 8 by 8 elements, each element being a product of
Ð:  :3 Ñ. The elements on the diagonal of D will all have zero value
(on the diagonal : œ :3 ), and since we need the products of the remaining terms, we
must eliminate them to avoid multiplying by zero. This results in a D of Ð8  "Ñ rows
by 8 columns matrix.
In Matlab most matrix operations are designed to perform on columns,
producing a single-row vector of results. The same is true for the PROD (‘product’)
command, so prodÐDÑ returns a row of 8 products performed over each column of D.
This row is returned in D, then Z is divided by D, element by element. The resultant
vector is multiplied by the product of all poles to produce the correct scaling factors of
the residues, returned in the vector P.
If a step response is required, P must be divided by the poles :, element by
element. Finally, each element of P multiplies a vector of (complex) exponential
functions of the time vector multiplied by the 5 th pole.
All the residue values at the same time point are summed (in rows) and each
sum is then an element of the real valued result vector (the complex parts cancel to
better than "!"% and are neglected). In the case of the step response all values must be
increased by 1, the value of the residue of the additional pole at the complex plane’s
origin resulting from the input step function transform operator "Î=.
For the impulse response case there are two options: either the result is left as it
is, representing a response ÐimplicitlyÑ normalized in amplitude to the response of the
same system excited by the ideal (infinite amplitude, infinitely narrow) input impulse,
or it can be normalized to represent a unity gain system by dividing each response value
by the sum of all values. This is desirable when calculating convolution, etc., but then
we have to specify the time vector as sufficiently long (in comparison with the
dominant system time constants), to allow the impulse response to decay to a value
close enough to zero, thus avoiding a system gain error.
Here is how the now familiar 5th -order Butterworth system responses can be
calculated using the ATDR routine:

[z,p]=buttap(5); % 5th-order Butterworth zeros and poles


t=(0:1:300)/15; % 301-point t-vector, 15 samples in one t-unit
I=atdr(z,p,t,'i'); % ideal impulse response
S=atdr(z,p,t,'s'); % step response
plot(t,I,t,S) % plot I and S against t

The resulting plot should look the same as in Fig. 6.5.1 (but with a much better
accuracy!).

-6.58-
P. Stariè, E. Margan Computer Algorithms

function y=atdr(z,p,t,q)
%ATDR Analytical Time Domain Response by simplified residue calculus
% (does not work for systems with multiple poles).
% y=atdr(z,p,t) or
% y=atdr(z,p,t,'n') returns the normalized impulse response of a
% unity gain system, specified by zeros z and
% poles p in time t.
% y=atdr(z,p,t,'i') returns the impulse response, denormalized to
% the ieal impulse input.
% y=atdr(z,p,t,'s') returns the step response of the system.
%
% Specify the time as : t=(0:1:N-1)/T, where N is the number of
% desired time domain samples and T is the number of samples in
% the time scale unit, i.e.: t=(0:1:200)/10

% Author : Erik Margan, 891008, Free of copyright !

if nargin==3
q='n' ; % by default, return the unity gain impuse response
end
n=max(size(p)); % find the number of poles
for k=1:n % test for repeating poles
P=p;
P(k)=[ ]; % exclude the pole currently tested
if all(abs(P-p(k)))==0 % is there another such pole ?
error('ATDR cannot handle systems with repeating poles!')
end
end
dc=1; % set low pass system flag
if isempty(z)
Z=1; % no zeros
else
% zeros
if any( abs(z) <1e-6 )
dc=0; % HP or BP system, clear dc flag
end
if all( abs( real( z ) ) < 1e-6 )
z = j * imag( z ) ; % all zeros on imaginary axis
end
Z=ones(size(p)) ;
if dc
for k=1:n
Z(:,k)=prod(p(k)-z)/prod(-z);
end
else
for k = 1:np
for h = 1:nz
if z(h) == 0
Z(k,:) = Z(k,:)*p(k) ;
else
Z(k,:) = Z(k,:)*(p(k)+z(h))/z(h) ;
end
end
end
end
Z=Z(:); % column-wise orientation
end
if n == 1
D=1; % single pole case
else
for k = 1:n
d=p(k)-p; % difference, column orientation
d(k)=[ ]; % k-th element = 0, eliminate it
D(:,k)=d; % k-th column of D

-6.59-
P. Stariè, E. Margan Computer Algorithms

end
if n > 2
D=(prod(D)); % make column-wise product if D is a matrix
end
D=D.'; % column-wise orientation
end
P=prod(-p)*Z./D; % impulse residues
if q == 's'
P=P./p; % if step response is required, divide by p
end
t=t(:).'; % time vector, row orientation
y=P(1)*exp(p(1)*t); % response, first row
for k = 2:n
y=[y; P(k)*exp(p(k)*t)]; % next row
y=sum(y); % sum column-wise, return a single row
end
y=real(y); % result is real only (imaginary parts cancel)
if (q == 's') & ( isempty(z) | dc == 1 )
y=y+1; % if step resp., add 1 for the pole at 0+j0
end
if ( q == 'i' | q == 'n' ) & ( dc == 0 )
y=-diff([0, y]); % impulse response of a high pass system
end
if q == 'n'
y=y/abs(sum(y)); % normalize impulse resp. to unity gain
end

-6.60-
P. Stariè, E. Margan Computer Algorithms

6.7 A Simple Application Example

The algorithms which we have developed allow us now to make a quick


comparison of the performance of two equal bandwidth 5th -order systems, a
Butterworth and a Bessel–Thomson system. We shall compare the pole loci, the
magnitude and the step response. Let us first calculate and plot the poles:

[z1,p1]=buttap(5); % a 5-th order Butterworth zeros and poles


[z2,p2]=bestap(5,'n'); % a 5-th order Bessel system zeros and poles
p1=2*pi*1000*p1; % denormalize the poles to 1kHz
p2=2*pi*1000*p2;
% plot the imag-vs.-real part of poles
plot( real(p1),imag(p1),'*r', real(p2),imag(p2),'*b' )
axis equal square ; % set axes aspect ratio 1:1

× 10 3
2 ℑ{s }
Bessel System

1
Butterworth System

ℜ{s }
0

−1

−2
3
−2 −1 0 1 2 × 10
Fig. 6.7.1: The Butterworth poles (on the unit cycle) and the
Bessel–Thomson poles (on the fitted ellipse). Note that for the
same bandwidth (" kHz) the values of Bessel–Thomson poles are
much larger, but with a lower ratio of the imaginary to the real part.

Let us calculate and plot the frequency responses:

f=logspace(2,4,401); % a log-spaced frequency vector 10^2 - 10^4 Hz


F1=freqw(z1,p1,2*pi*f);% Butterworth frequency response
F2=freqw(z2,p2,2*pi*f);% Bessel-Thomson frequency response
% plot the dB magnitude vs. log-frequency :
semilogx(f/1000,20*log10(abs(F1)),'-r',f/1000,20*log10(abs(F2)),'-b')
ylabel('Magnitude'), xlabel('f [kHz]')

The frequency response plots are shown in Fig. 6.7.2. Note the equal pass band
(3 dB point) and equal slope at high frequencies. However, the Butterworth system
atenuation is an order of magnitude (20 dB) better.

-6.61-
P. Stariè, E. Margan Computer Algorithms

− 3 dB
− 20 5 pole
5 pole
Butterworth Bessel-Thomson
System System
− 40
M [dB]

− 60

−80

−100
0.1 1.0 10.0
f [kHz]
Fig. 6.7.2: Frequency responses of the Butterworth and Bessel–Thomson system. For an
equal cut off frequency (0h œ " kHz), the Butterworth system stop band attenuation is
about an order of magnitude (10× or 20 dB) better than that of the Bessel–Thomson.

Using the same poles and the ATDR routine, we compare the step responses:

t=(0:1e-5:3); % the 3ms time vector, 100 samples/ms.


y1=atdr(z1,p1,t,'s'); % Butterworth step response
y2=atdr(z2,p2,t,'s'); % Bessel-Thomson step response
plot(t*1000,y1,'-r',t*1000,y2,'-b'), xlabel('t [ms]') % see Fig.6.7.3

1.2

1.0
5 pole
Bessel-Thomson
System
0.8
5 pole
Butterworth
System
0.6

0.4

0.2

0
0 0.5 1.0 1.5 2.0 2.5 3.0
t [ms]
Fig. 6.7.3: Step responses of the Butterworth and Bessel–Thomson system. For the same cut off
frequency (1 kHz) the Bessel–Thomson system’s delay is smaller; the overshoot is only 0.4% and
there is no ringing, so settling down to 0.1% occurs within the first 1 ms. Although the rise times
are nearly equal, the Butterworth system is a poor choice if time domain performance is required,
since it settles down to 0.1% only after some 5 ms (but Chebyshev and Elliptic filter systems are
even much worse in this respect).

-6.62-
P. Stariè, E. Margan Computer Algorithms

Résumé of Part 6

The algorithms shown are small, simple, easy to use, and fast in execution. They
are ideal for starting the system’s design from scratch, to specify the design goals, as
well as to provide a reference with which a realized prototype can be compared.
We have shown how the system performance can easily be evaluated by using
the routines developed for Matlab, the prediction of system time domain response in
particular. We also hope that the development and application examples of these
routines offer a deeper insight on how the system should be designed as a whole.
Still, the reader as the future system’s designer is being let down at the most
demanding task of finding the circuitry and hardware that will perform as required, and
engineering experience is the only help here. This book should help to understand how
it might be possible to push the bandwidth up, smooth the transient, and reduce the
settling time. But there are also many other important parameters which must be
carefully considered when designing an amplifier, such as noise, linearity, electrical and
thermal stability, output power, slew rate limiting, the time it takes to recover from
overdrive, etc.
However, these parameters (with the exception of electrical stability) are in
most cases independent of the system pole and zero locations, but are strongly
influenced by the circuit’s topology and by the type of active devices used for the
realization.
Once the design goals have been set and the circuit configuration selected,
performance verification and iterative finalization can then be done using one of the
many CAD/CAE programs available on the market.
To see the numerical convolution routine and calculation examples and an
actual amplifier–filter system design example calculated using the algorithms
developed so far, please turn to Part 7.

-6.63-
P. Stariè, E. Margan Computer Algorithms

References:

[6.1] J.N. Little, C.B. Moller, PC–MATLAB For Students


(containing disks with Matlab program), Prentice–Hall, 1989
[6.2] MATLAB–V For Students (containing CD with Matlab program),
Prentice–Hall, 1998
[6.3] The MathWorks, Inc., <http://www.mathworks.com/>
[6.4] Oliver Heaviside, Electromagnetic Theory,
Chelsea Pub. Co., 3rd edition, 1971, ISBN: 082840237X
[6.5] Oliver Heaviside, Electrical Papers Edition Volume 1,
American Mathematical Society; January 1970, ASIN: 0821828401
[6.6] W. A. Atherton, Pioneers: Oliver Heaviside — Champion of inductance,
Wireless World, August 1987, pp. 789–790
[6.7] Paul J. Nahin, Oliver Heaviside: The Life, Work, and Times of an Electrical Genius
of the Victorian Age, Johns Hopkins Univ. Pr., October 2002, ISBN: 0801869099
[6.8] Douglas H. Moore, Heaviside Operational Calculus; An Elementary Foundation,
American Elsevier Pub. Co., ASIN: 0444000909
[6.9] H. Nyquist, <http://www.ieee.org/organizations/history_center/legacies/nyquist.html>
[6.10] H.W. Bode, <http://www.ieee.org/organizations/history_center/legacies/bode.html>
[6.11] S. Butterworth, On the Theory of Filter–Amplifiers,
Experimental Wireless & The Wireless Engineer, Vol. 7, 1930, pp.536–541
[6.12] W. E. Thomson, Networks With Maximally Flat Delay,
Wireless Engineer, Vol. 29, October 1952, pp.256–263
[6.13] L. Storch, Synthesis of Constant–Time–Delay Ladder Networks Using Bessel Polynomials,
Proceedings of the I.R.E., Vol. 42, 1954, pp.1666–1675
[6.14] O. Follinger
¨ , Laplace– und Fourier–Transformation,
AEG Telefunken (Abt. Verlag), 1982
[6.15] M. O'Flynn, E. Moriarty, Linear Systems: Time–Domain and Transform Analysis,
J. Wiley & Sons, 1987
[6.16] J.W. Cooley & J.W. Tukey, An Algorithm for the Machine Calculation of Complex Fourier
Series, Math. of Computation, Vol. 19, No. 90, April 1965, pp. 297–301
[6.17] Special Issue on the Fast Fourier Transform,
IEEE Transactions on Audio & Electroacoustics, Vol. AU-15, June 1967
[6.18] E.O. Brigham, The Fast Fourier Transform,
Prentice–Hall, 1974
[6.19] A.V. Oppenheim, R.W. Schafer, Digital Signal Processing,
Prentice–Hall, 1975.
[6.20] R.I. Ross, Evaluating the Transient Response of a Network Function,
Proceedings of the IEEE, May 1967, pp.693–694.
[6.21] R.I. Ross, Iterative Transient Response Calculation Procedures that have Low Storage
Requirement, presented at the Second International Symposium on Network Theory,
Herceg-Novi, BiH, 1972
[6.22] J. Vlach, K. Singhal, Computer Methods for Circuit Analysis and Design,
Van Nostrand Reinhold, 1983
[6.23] E. Margan, Calculating the Transient Response,
Elekrotehniški vestnik, Ljubljana, ELVEA2 58, 1991, 1, pp.11–22

-6.65-
P. Starč, E. Margan:

Wideband Amplifiers

Part 7:

Algorithm Application Examples

Any computer program can be reduced by at least one command line.


Any computer program has at least one command line with an error.
...
Any computer program can eventually be reduced
to a single command line, having at least one error.

(Conservative extrapolation of Murphy's Law to computer programming)


P.Starič, E.Margan Algorithm Application Examples

Contents ................................................................................................................................. 7.3


List of Figures ....................................................................................................................... 7.4
List of Routines ..................................................................................................................... 7.4

Contents:

7.0 Introduction .................................................................................................................................. 7.5


7.1 Using Convolution: Response to Arbitrary Input Waveforms ...................................................... 7.7
7.1.1 From Infinitesimal to Discrete Time Integration ......................................................... 7.7
7.1.2 Numerical Convolution Algorithm .............................................................................. 7.8
7.1.3 Numerical Convolution Examples ............................................................................. 7.10
7.2 System Front-End Design Considerations ................................................................................... 7.17
7.2.1 General Remarks ....................................................................................................... 7.17
7.2.2 Aliasing Phenomena In Sampling Systems ................................................................ 7.17
7.2.3 Better Anti-Aliasing With Mixed Mode Filters ......................................................... 7.21
7.2.4 Gain Optimization ..................................................................................................... 7.32
7.2.5 Digital Filtering Using Convolution .......................................................................... 7.33
7.2.6 Analog Filters With Zeros ......................................................................................... 7.34
7.2.7 Analog Filter Configuration ...................................................................................... 7.36
7.2.8 Transfer Function Analysis of the MFB-3 Filter ....................................................... 7.37
7.2.9 Transfer Function Analysis of the MFB-2 Filter ....................................................... 7.41
7.2.10 Standardization of Component Values .................................................................... 7.44
7.2.11 Concluding Remarks ............................................................................................... 7.44
Résumé and Conclusion ..................................................................................................................... 7.45
References ......................................................................................................................................... 7.47
Appendix 7.1: Transfer Function Analysis of the MFB-3 circuits .............................................(CD) A7.1
Appendix 7.2: Transfer Function Analysis of the MFB-2 circuits .............................................(CD) A7.2

- 7.3 -
P.Starič, E.Margan Algorithm Application Examples

List of Figures:

Fig. 7.1.1: Convolution example: Response to a sine wave burst ..................................................... 7.11
Fig. 7.1.2: Checking Convolution: Response to a Unit Step ............................................................. 7.12
Fig. 7.1.3: Convolution example: 2-pole Bessel + 2-pole Butterworth System Response ................ 7.13
Fig. 7.1.4: Input signal example used for spectral domain processing .............................................. 7.14
Fig. 7.1.5: Spectral domain multiplication is equivalent to time domain convolution ...................... 7.14
Fig. 7.1.6: Time domain result of spectral domain multiplication .................................................... 7.15
Fig. 7.2.1: Aliasing (frequency mirroring) in sampling systems ....................................................... 7.18
Fig. 7.2.2: Alias of a signal equal in frequency to the sampling clock .............................................. 7.18
Fig. 7.2.3: Alias of a signal slightly higher in frequency than the sampling clock ............................ 7.19
Fig. 7.2.4: Alias of a signal equal in frequency to the Nyquist frequency ......................................... 7.20
Fig. 7.2.5: Same as in Fig. 7.2.4, but with a 45° phase shift ............................................................. 7.20
Fig. 7.2.6: Spectrum of a sweeping sinusoidal signal follows the Ðsin =ÑÎ= function ..................... 7.21
Fig. 7.2.7: Magnitude of Bessel systems of order 5, 7 and 9, with equal attenuation at 0N .............. 7.23
Fig. 7.2.8: Step response of Bessel systems of order 5, 7 and 9 ........................................................ 7.25
Fig. 7.2.9: Alias spectrum of a 7-pole filter with a higher cut off frequency ................................... 7.26
Fig. 7.2.10: The inverse of the alias spectrum is the digital filter attenuation required .................... 7.27
Fig. 7.2.11: Comparing the poles: 13-pole A+D system and the 7-pole analog only system ............ 7.28
Fig. 7.2.12: Bandwidth improvement of the A+D system against the analog only system ................ 7.30
Fig. 7.2.13: Step response comparison of the A+D system and the analog only system ................... 7.31
Fig. 7.2.14: Envelope delay comparison of the A+D System and the analog only system ................ 7.32
Fig. 7.2.15: Convolution as digital filtering — the actual 13-pole A+D step response ..................... 7.33
Fig. 7.2.16: Complex plane plot of a mixed mode filter with zeros .................................................. 7.35
Fig. 7.2.17: Frequency response of a mixed mode filter with zeros .................................................. 7.35
Fig. 7.2.18: Alias spectrum of a mixed mode filter with zeros .......................................................... 7.35
Fig. 7.2.19: Time domain response of a mixed mode filter with zeros ............................................. 7.36
Fig. 7.2.20: Multiple Feedback 3-pole Low Pass Filter Configuration (MFB-3) .............................. 7.37
Fig. 7.2.21: Multiple Feedback 2-pole Low Pass Filter Configuration (MFB-2) .............................. 7.37
Fig. 7.2.22: Realization of the 7-pole Analog Filter for the 13-pole Mixed Mode System ............... 7.43

List of Routines:

VCON (Numerical Convolution Integration) ..................................................................................... 7.8


ALIAS (Alias Frequency of a Sampled Signal) ................................................................................ 7.19

- 7.4 -
P.Starič, E.Margan Algorithm Application Examples

7.0 Introduction

In Part 6 we have developed a few numerical algorithms that will serve us as


the basis of system analysis and synthesis. We have shown how simple it is to
implement the analytical expressions related to the various aspects of system
performance into compact, fast executing computer code which reduces the tedious
mathematics to pure routine. Of course, a major contribution to this easiness was
provided by the programming environment, a high level, maths–oriented language
called Matlab™ (Ref. [7.1]).
As wideband amplifier designers, we want to be able to accurately predict
amplifier performance, particularly in the time-domain. With the algorithms
developed, we now have the essential tools to revisit some of the circuits presented in
previous parts, possibly gaining a better insight into how to put them to use in our
new designs eventually.
But the main purpose of Part 7 is to put the algorithms in a wider perspective.
Here we intentionally use the term ‘system’, in order to emphasize the high degree of
integration present in modern electronics design, which forces us to abandon the old
paradigm of adding up separately optimized subsystems into the final product;
instead, the design process should be conceived to optimize the total system
performance from the start. As more and more digital processing power is being built
in to modern products, the analogue interface with the real world needs to be given
adequate treatment on the system level, so that the final product eventually becomes a
successful integration of both the analog and the digital world.
Now, we hear some of you analog circuit designers asking in a low voice
“why do we need to learn any of this digital stuff?” The answer is that digital
engineers would have a hard time learning the analog stuff, so there would be no one
to understand the requirements and implications of a decent AD or DA interface. On
the other hand, for an analog engineer learning the digital stuff is so simple, almost
trivial, and it pays back well with better designs and it acquires for you the respect due
from fellow digital engineers.

- 7.5 -
P.Starič, E.Margan Algorithm Application Examples

7.1 Using Convolution: Response to Arbitrary Input Waveforms


7.1.1 From Infinitesimal to Discrete Time Integration
The time-domain algorithms that we have developed in Part 6 gave us the
system response to two special cases of input signal : the unit-area impulse and the
unit-amplitude step. Here we will consider the response to any type of input signal,
provided that its application will not exceed neither the input nor the output system
capabilities. In technical literature this is known as the BIBO-condition1 . And, of
course, we are still within the constraints of our initial LTIC-conditions2 .
As we have seen in Part 1, Sec. 1.14, the system’ time domain response to an
arbitrary input signal can be calculated in two ways:
a) by transforming the input signal to complex frequency domain, multiplying it by
the system transfer function and transforming the result back to the time domain;
b) directly in time domain by the convolution integral.

A short reminder of the convolution integral definition and the transcription


from differential to difference form is in order here. Let BÐ>Ñ be the time domain
signal, presented to the input of a system being characterized by its impulse response
0 Ð>Ñ . The system output can then be calculated by convolving BÐ>Ñ with 0 Ð>Ñ :
>1

CÐ>Ñ œ ( 0 Ð7  >Ñ BÐ>Ñ .> (7.1.1)


>!

where 7 is a fixed time constant, its value chosen so that 0 Ð>Ñ is time reversed.
Usually, it is sufficient to make 7 large enough to allow the system impulse response
0 Ð>Ñ to completely relax and reach the steady state again (not just the first zero-
crossing point!).
If BÐ>Ñ was applied to the system at >0 , then this can be the lower limit of
integration. Of course, the time scale can always be renormalized so that >0 œ !. The
upper integration limit, labeled >1 , can be wherever needed, depending on how much
of the input and output signal we are interested in.
Now, in Eq. 7.1.1 .> is implicitly approaching zero, so there would be an
infinite number of samples between >0 and >1 . Since our computers have a limited
amount of memory (and we have a limited amount of time!) we must make a
compromise between the sampling rate and the available memory length and adjust
them so that we cover the signal of interest with enough resolution in both time and

1
Bounded input p bounded output. This property is a consequence of our choice of basic mathematical
assumptions; since our math tools were designed to handle an infinite amount of infinitesimal
quantities, BIBO is the necessary condition for convergence. However, in the real analog world, we
are often faced with UBIBO requirements (unbounded input), i.e., our instrumentation inputs must be
protected from overdrive. Interestingly, the inverse of BIBO is in widespread use in the computer
world, in fact, any digital computer is a GIGO type of device ( garbage in p garbage out; unbounded!).
2
Linearity, Time Invariance, Causality. Although some engineers consider oscillators to be ‘acausal’,
there is always a perfectly reasonable cause why an amplifier oscillates, even if we fail to recognise it
at first.

- 7.7 -
P.Starič, E.Margan Algorithm Application Examples

amplitude. So if Q is the number of memory bytes reserved for BÐ>Ñ, the required
sampling time interval is:
>1  >0
?> œ (7.1.2)
Q
Then, if ?> replaces .> , the integral in Eq. 7.1.1 transforms into a sum of Q
elements, BÐ>Ñ and CÐ>Ñ become vectors x(n) and y(n), where n is the index of a
signal sample location in memory, and 0 Ð7  >Ñ becomes f(m-n), with m=length(f),
resulting in:
M
y(n) œ " f(m-n)*x(n) (7.1.3)
n=1

Here ?> is implicitly set to 1, since the difference between two adjacent
memory locations is a unit integer. Good book-keeping practice, however,
recommends the construction of a separate time scale vector, with values from >0 to
>1 , in increments of ?> between adjacent values. All other vectors are then plotted
against it, as we have seen done in Part 6.

7.1.2 Numerical Convolution Algorithm

In Part 1 we have seen that solving the convolution integral analytically can be
a time consuming task, even for a skilled mathematician. Sometimes, even if BÐ>Ñ and
0 Ð>Ñ are analytic functions, their product need not be elementarily integrable in the
general case. In such cases we prefer to take the _ transform route; but this route can
sometimes be equally difficult. Fortunately numerical computation of the convolution
integral, following Eq. 7.1.3 , can be programmed easily:

function y=vcon(f,x)
%VCON Convolution, step-by-step example. See also CONV and FILTER.
%
% Call : y=vcon(f,x);
% where: x(t) --> the input signal
% f(t) --> the system impulse response
% y(t) --> the system response to x(t) by convolving
% f(t) with x(t).
% If length(x)=nx and length(f)=nf, then length(y)=nx+nf-1.

% Erik Margan, 861019, Last editing 890416; Free of copyright!


% force f to be the shorter vector :
if length(f) > length(x)
xx=x; x=f; f=xx; % exchange x and f via xx
clear xx
end
nf=length(f); % get the number of elements in x and f
nx=length(x);
f=f(:).'; % organize x and f as single-row vectors
x=x(:).';
y=zeros(2,nx+nf-1); % form a (2)-by-(nx+nf-1) matrix y, all zeros
y(1,1:nx)=f(1)*x; % first row of y: multiply x by f(1)
for k=2:nf % second row: multiply and shift (insert 0)
y(2, k-1:nx+k-1)=[0, f(k)*x];
% sum the two rows column-wise and
y(1,:)=sum(y); % put result back into first row
end % repeat for all remaining elements of f;
y=y(1,:); % the result is the first row only.

- 7.8 -
P.Starič, E.Margan Algorithm Application Examples

To get a clearer view of what the VCON routine is doing , let us write a short
numerical example, using a 6-sample input signal and a 3-sample system impulse
response, and display every intermediate result of the matrix y in VCON:

x=[0 1 3 5 6 6]; f=[1 3 -1]; vcon(x,h);

% initialization - all zeros, 2 rows, 6+3-1 columns:


0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0
% step 1: multiply x by the first sample of f, f(1)=1 and
% insert it into the first row:
0 1 3 5 6 6 0 0
0 0 0 0 0 0 0 0
% step 2: multiply x by the second sample of f, f(2)=3,
% shift it one place to the right by adding a leading zero and
% insert it into the second row:
0 1 3 5 6 6 0 0
0 0 3 9 15 18 18 0
% step 3: sum both rows vertically, put the result in the first row
0 1 6 14 21 24 18 0
0 0 3 9 15 18 18 0
% iterate steps 2 and 3, each iteration using the next sample of f:
0 1 6 14 21 24 18 0
0 0 0 -1 -3 -5 -6 -6

0 1 6 13 18 19 12 -6
0 0 0 -1 -3 -5 -6 -6
% after 2 iterations (because f is only 3 samples long)
% the result is the first row of y:
0 1 6 13 18 19 12 -6
% actually, the result is only the first 6 elements:
0 1 6 13 18 19
% since there are only 6 elements in x, the process assumes the rest
% to be zeros. So the remaining two elements of the result represent
% the relaxation from the last value (19) to zero by the integration
% of the system impulse response f.

% Basically, the process above does the following:


% (note the reversed sequence of f)

0 1 3 5 6 6
-1 3 1
( Æ *) ==> 0 ( Ä +) ==> 0

0 1 3 5 6 6
-1 3 1
( Æ *) ==> 0 1 ( Ä +) ==> 1

0 1 3 5 6 6
-1 3 1
( Æ *) ==> 0 3 3 ( Ä +) ==> 6

0 1 3 5 6 6
-1 3 1
( Æ *) ==> 0 -1 9 5 ( Ä +) ==> 13

% ......... etc.

For convolution Matlab has a function named CONV, which uses a built in
FILTER command to run substantially faster, but then the process remains hidden
from the user; however, the final result is the same as with VCON. Another property
of Matlab is the matrix indexing, which starts with 1 (see the lower limit of the sum

- 7.9 -
P.Starič, E.Margan Algorithm Application Examples

symbol in Eq. 7.1.3), in contrast to most programming languages which use memory
‘pointers’ (base address + offset, the offset of the array’s first element being 0).

7.1.3 Numerical Convolution Examples

Let us now use the VCON routine in a real life example. Suppose we have a
gated sine wave generator connected to the same 5th -order Butterworth system which
we inspected in detail in Part 6 . Also, let the Butterworth system’s half power
bandwidth be 1 kHz, the generator frequency 1.5 kHz, and we turn on the gate in the
instant the signal crosses the zero level. From the frequency response calculations, we
know that the forced response amplitude (long after the transient) will be:

Aout=Ain*abs(freqw(z,p,1500/1000));

where z are the zeros and p are the poles of the normalized 5th -order Butterworth
system; the signal frequency is normalized to the system’s cut off frequency.
But how will the system respond to the signal’s ‘turn on’ transient? We can
simulate this using the algorithms we have developed in Part 6 and VCON:

fh=1000; % system half-power bandwidth, 1kHz


fs=1500; % input signal frequency, 1.5kHz
t=(0:1:300)/(50*fh); % time vector, 20us delta-t, 6ms range
nt=length(t);

[z,p]=buttap(5); % 5th-order Butterworth system


p=2*pi*fh*p; % denormalized system poles
Ir=atdr(z,p,t,'n'); % system impulse-response

d=25; % switch-on delay, 25 time-samples


% make the input signal :
x=[zeros(1,d), sin(2*pi*fs*t(1:nt-d))];

y=vcon(Ir,x); % convolve x with Ir ;

A=nt/(2*pi*fh*max(t)); % denormalize amplitude of Ir for plot

% plot the input, the system impulse response


% and the convolution result :
plot( t*fh, x, '-g', ...
t*fh, [zeros(1,d), Ir(1:nt-d)*A],'-r', ...
t*fh, y(1:nt), '-b')
xlabel('t [ms]')

The convolution result, compared to the input signal and the system impulse
response, is shown in Fig. 7.1.1 .
Note that we have plotted only the first nt samples of the convolution result;
however, the total length of y is length(x)+length(Ir)-1, or one sample less than
the sum of the input signal and the system response lengths. The first length(x)=nt
samples of y represent the system’s response to x, whilst the remaining
length(Ir)-1 samples are the consequence of the system relaxation: since there are
no more signal samples in x after the last point x(nt), the convolution assumes that
the input signal is zero and calculates the system relaxation from the last signal value.


This is equivalent to a response caused by an input step from x(nt) to zero. So if we


are interested only in the system response to the input signal, we simply limit the
response vector to the same length as was the input signal. Also, in the general case
the length of the system’s impulse response vector, nr=length(Ir) , does not have to
be equal to the input signal vector length, nx=nt. In practice, we often make nr << nx ,
but Ir should be made long enough to include the system relaxation to a level very
close to zero, as only then will the sum of all elements of Ir not differ much from the
system gain.
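The statement that the truncated impulse response should still sum close to the system gain is easy to check numerically. Here is a small Python sketch for a unity-gain single-pole (RC) system; the time constant, step and length are arbitrary illustrative values, not taken from the text:

```python
import math

tau = 1.0e-3                   # illustrative time constant, 1 ms
dt = 20.0e-6                   # illustrative sampling step, 20 us
nr = 2000                      # long enough for complete relaxation
# discretized impulse response of a unity-gain single-pole system:
Ir = [(dt / tau) * math.exp(-k * dt / tau) for k in range(nr)]
print(sum(Ir))                 # close to the DC gain of 1
```

The rectangle-rule sum overestimates the gain by roughly dt/(2 tau); shortening nr so that the relaxation tail is cut off would make the sum fall visibly short of the gain.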
There is, however, an important difference between the plot and the calculation
that must be explained. The impulse response which we obtained from the Butterworth
system poles was normalized to represent a unity gain system, since we want to see
the frequency dependence of the output amplitude by comparing the input and output
signals. Thus our system should either have a gain of unity, or the output should be
normalized to the input in some other way (e.g., if the gain was known, we could have
divided the output signal by the gain, or multiplied the input signal). But the unity
gain normalized impulse response would be too small in amplitude, compared to the
input signal, so we have plotted the ideal impulse response.

[Figure 7.1.1 plot: x(t), Ir(t) and y(t) vs. t [ms], 0 to 6 ms]
Fig. 7.1.1: Convolution example: response y(t) to a sine wave x(t) switched on into
a 5th-order Butterworth system, whose impulse response is Ir(t), shown here as the
ideal response (instead of unity gain); both are delayed by the same switch-on time
(0.5 ms). The system responds by phase shifting and amplitude modulating the first
few wave periods, finally reaching the forced (‘steady state’) response.

Can we check whether our routine works correctly?


Apart from entering some simple number sequences as before, we can do this
by entering an input signal for which we have already calculated the result in a
different way, say, the unit step (see Fig. 6.1.11, Part 6). By convolving the impulse
response with the unit step, instead of the sine wave, we should obtain the now known
step response:
% continuing from above:
h=[zeros(1,d), ones(1,nt-d)]; % h(t) is the unit step function
y=vcon(Ir,h); % convolve h with Ir
plot( t*fh, h, '-g', ...
t*fh, [zeros(1,d), Ir(1:nt-d)*A],'-r', ...
t*fh, y(1:nt), '-b')
xlabel('t [ms]')


The resulting step response, shown in Fig. 7.1.2, should be identical to that of
Fig. 6.1.11, Part 6, neglecting the initial 0.5 ms (25 samples) time delay and the
different time scale:

[Figure 7.1.2 plot: h(t), Ir(t) and y(t) vs. t [ms], 0 to 6 ms]
Fig. 7.1.2: Checking convolution: response y(t) of the 5th-order Butterworth
system to the unit step h(t). The system’s impulse response Ir(t) is also shown, but
in its ideal size (not unity gain). Apart from the 0.5 ms (25-sample) time delay and
the time scale, the step response is identical to the one shown in Part 6, Fig. 6.1.11.

We can now revisit the convolution integral example of Part 1, Sec. 1.14,
where we had a unit-step input signal, fed to a two-pole Bessel-Thomson system,
whose output was in turn fed to a two-pole Butterworth system. The commands in the
following window simulate the process and the final result of Fig. 1.14.1 . But this
time, let us use the frequency to time domain transform of the TRESP (Part 6) routine.
See the result in Fig. 7.1.3 and compare it to Fig. 1.14.1g .

[z1,p1]=bestap(2,'t'); % Bessel-Thomson 2nd-order system poles


[z2,p2]=buttap(2); % Butterworth 2nd-order system poles
N=256; % number of samples
m=4; % set the bandwidth factor
w=(0:1:N-1)/m; % frequency vector, w(m+1)=1 ;
F1=freqw(p1,w); % Bessel-Thomson system frequency response
F2=freqw(p2,w); % Butterworth system frequency response

[S1,t]=tresp(F1,w,'s');% step-response of the Bessel-Thomson system


I2=tresp(F2,w,'u'); % unity-gain Butterworth impulse response ;
% both have the same normalized time vector
d=max(find(t<=15)); % limit the plot to first 15 time units
I2=I2(1:d); % limit the I2 vector length to d

% convolution of Bessel-Thomson system step-response with


% the first d points of the Butterworth impulse response :
y=vcon(I2,S1);

A=N/(2*pi*m*max(t)); % amplitude denormalization for I2


% plot first d points of all three responses vs. time :
plot( t(1:d), S1(1:d), '-r',...
t(1:d), I2(1:d)*A, '-g',...
t(1:d), y(1:d), '-b' )
xlabel('Time [s]')


[Figure 7.1.3 plot: S1(t), I2(t) and y(t) vs. t [s], 0 to 15 s]
Fig. 7.1.3: Convolution example of Part 1, Sec. 1.14. A Bessel–Thomson
2-pole system step response S1(t) has been fed to the 2-pole Butterworth
system and convolved with its impulse response I2(t), resulting in the
output step response y(t). Compare it with Fig. 1.14.1g.

The VCON function is a lengthy process. On a 12 MHz AT-286 PC, which


was used for the first experiments back in 1986–7, it took more than 40 s to complete
the example shown above, but even with today’s fast computers there is still a
noticeable delay. The Matlab CONV routine is much faster.
The reader might question the relevance of the total calculation time, since, for
the purpose of a circuit design aid, anything below 1 s should be acceptable (this is
comparable with the user’s reaction time when making a simple go/no-go assessment
of the result). However, imagine an automated optimization program loop, adjusting
the values of some 10–20 circuit components. Such a loop might take hundreds or
even thousands of executions before reaching satisfactory performance criteria, so a
low routine time would be welcome. Moreover, if the routine will eventually be
implemented in hardware, acquiring and processing the signal in real time, a low
routine time is of vital importance. For example, to continuously process a 16 bit
stereo audio stream, divided into 1024 sample chunks, using a 32 word long filter
impulse response, the total routine calculation time should be less than 250 µs.
In some cases, particularly with long signal sequences (N > 1000), it could be
interesting to take the Fourier transform route, numerically.
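A minimal, stdlib-only Python sketch of this spectral route (our own illustration, independent of Matlab) shows that multiplying zero-padded spectra bin by bin and transforming back reproduces the direct convolution; a naive O(N²) DFT stands in for the FFT:

```python
import cmath

def dft(x):
    # naive O(N^2) discrete Fourier transform, for illustration only
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                for n in range(N)) for k in range(N)]

def idft(X):
    # inverse transform; the spectrum here belongs to a real signal,
    # so only the real part of the result is kept
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N)
                for k in range(N)).real / N for n in range(N)]

x = [0, 1, 3, 5, 6, 6]          # the sequences of the worked example
f = [1, 3, -1]
N = len(x) + len(f) - 1         # zero-pad to the linear-convolution length
X = dft(x + [0] * (N - len(x)))
F = dft(f + [0] * (N - len(f)))
y = idft([Xk * Fk for Xk, Fk in zip(X, F)])
print([round(v) for v in y])    # same result as the direct convolution
```

Zero-padding both sequences to the full output length turns the inherently circular DFT convolution into the desired linear one.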
Here is an example using a signal recorded by a nuclear magnetic resonance
imaging (MRI) system. The MRI RF signal is very weak (< 1 mV), so the detection is
noisy and there is some interference from another source. We shall try to clean it by
using a 5th-order Bessel–Thomson digital filter with a unity gain and a 1 MHz cut off:

load R.dat % load the recorded signal from a file "R.dat"


N=length(R); % total vector length, N=2048 samples
Tr=102.4e-6; % total record time 102.4 us
dt=Tr/N; % sampling time interval, 50 ns
t=dt*(0:1:N-1); % time vector reconstruction

% plot the first 1200 samples of the recorded signal


plot(t(1:1200),R(1:1200),'-g')
xlabel('Time [\mus]') % input signal R, first 60 us, see Fig.7.1.4
G=fft(R); % G is the FFT spectrum of R
G=G(1:N/2); % use only up to the Nyquist freq. ( 10 MHz )
f=(0:1:N/2-1)/Tr; % frequency vector reconstructed


[z,p]=bestap(5,'n'); % 5th-order Bessel filter poles


p=p*2*pi*1e+6; % half-power bandwidth is 1 MHz
F=freqw(z,p,2*pi*f); % filter frequency response

% multiplication in frequency is equal to convolution in time:


Y=F.*G; % output spectrum

x=max(find(f<=8e+6)); % plot spectrum up to 8 MHz


M=max(abs(G)); % normalize the spectrum to its peak value
plot( f(1:x), abs(F(1:x)), '-r', ...
f(1:x), abs(G(1:x))/M, '-g', ...
f(1:x), abs(Y(1:x))/M, '-b' )
xlabel('Frequency [MHz]'), ylabel('Normalized Magnitude')
% see Fig.7.1.5

y=2*(real(fft(conj(Y)))-1)/(N/2); % return to time domain

a=max(find(t<=5e-6));
b=min(find(t>=20e-6));
plot( t(a:b), R(a:b), '-g', t(a:b), y(a:b), '-b' )
xlabel('Time [\mus]') % see Fig.7.1.6

[Figure 7.1.4 plot: R(t) vs. t [µs], 0 to 60 µs]
Fig. 7.1.4: Input signal example used for the spectral-domain convolution
example (first 1200 samples of the 2048 total record length)

[Figure 7.1.5 plot: |G(f)|, |F(f)| and |Y(f)| vs. f [MHz], 0 to 8 MHz]
Fig. 7.1.5: The spectrum G(f) of the signal in Fig. 7.1.4 is multiplied by the
system’s frequency response F(f) to produce the output spectrum Y(f). Along
with the modulated signal centered at 560 kHz, there is a strong 2.8 MHz
interference from another source and a high level of white noise (rising with
frequency), both being substantially reduced by the filter.


[Figure 7.1.6 plot: R(t) and y(t) vs. t [µs], 5 to 20 µs]
Fig. 7.1.6: The output spectrum is returned to the time domain as y(t) and is
compared with the input signal R(t), in an expanded time scale. Note the small
change in amplitude, the reduced noise level and the envelope delay (approx.
1/4 period time shift), with little change in phase. The time shift is equal to 1/2
the number of samples of the filter impulse response.

Fig. 7.1.6 illustrates the dramatic improvement in signal quality that can be
achieved by using Bessel filters.
In MRI systems the test object is put in a strong static magnetic field. This
causes the nucleons of the atoms in the test object to align their magnetic spin to the
external field. Then a short RF burst, having a well defined frequency and duration, is
applied, tilting the nucleon spin orientation perpendicular to the static field (this
happens only to those nucleons whose resonant frequency coincides with that of the
RF burst).
After the RF burst has ceased, the nucleons gradually regain their original spin
orientation in a top-like precessing motion, radiating away the excess electromagnetic
energy. This EM radiation is picked up by the sensing coils and detected by an RF
receiver; the detected signal has the same frequency as the excitation frequency, both
being functions of the static magnetic field and the type of nucleons. Obviously the
intensity of the detected radiation is proportional to the number of nucleons having
the same resonant frequency3 .
In addition, since the frequency is field dependent, a small field gradient can be
added to the static magnetic field, in order to split the response into a broad spectrum.
The shape of the response spectral envelope then represents the spatial density of the
specific nucleons in the test object. By rotating the gradient around the object the
recorded spectra would represent the ‘sliced view’ of the object from different angles.
A computer can be used to reconstruct the volumetric distribution of particular atoms
through a process called ‘back-projection’ (in effect, a type of spatial convolution).
From this short description of the MRI technique it is clear that the most vital
parameter of the filter, applied to smooth the recorded signal, is its group delay
flatness. Only a filter with a group delay being flat well into the stop band will be able
to faithfully deliver the filtered signal, preserving its shape both in the time and the
frequency domain, and Bessel–Thomson filters are ideal in this sense. Consequently a
sharper image of the test object is obtained.

3
The 1952 Nobel prize for physics was awarded to Felix Bloch and Edward Mills Purcell for their
work on nuclear magnetic resonance; more info at <http://nobelprize.org/physics/laureates/1952/>.


7.2 System Front–End Design Consideration


7.2.1 General Remarks
The trend in modern instrumentation has definitely been going digital since
the 1970s, benefiting from cheap microprocessor technology, with digitization
implemented as early as possible in the signal processing chain. Likewise, reverting
back to analog occurs only if absolutely necessary and as late as possible. The key
features are precision and repeatability of measurements, and those properties are
often called upon to justify the sacrifice of other system properties (of these, the
system bandwidth is no exception; in fact, it is the first victim in most cases!). In
spite of this digital ‘tyranny’, analog engineers have (for now, at least) been able
to cope quite successfully with it. Actually, over the years they have managed to
stay well in front of both demands and expectations.
Another key word of modern technology is miniaturization and, in connection
with it, ever lower power consumption. So the current technological front
is concentrated on the integration of analog and digital functions on the same IC chip,
using the lowest possible supply voltage and applying clever power management
schemes in both hardware and software.
Of course, digitalization is causing many restrictions as well. In contrast with
analog continuous time systems, digital systems operate in discrete time, on the
transitions of the system clock. And, since there is only a limited amount of memory,
which also has a finite access time, the sampling window is ever shrinking. As if this
were not enough (and in spite of promoting precision!), digital systems are not very
flexible to upgrade; for example, to change from an 8 bit to a 12 bit system, an analog
circuit would have to be improved by 2^4, or 16 times (if it was not that good already!);
in contrast, the digital part would have to be redesigned completely, not just changed.
This is because an increased number of gates and flip-flops change state at each clock
transition, increasing the supply current peaks; also the circuit area is increased,
increasing the length of the interconnections. Both facts increase the RF interference
and the possibility of noise injection back into the delicate analog input stage.
In Part 6, Sec. 6.5 we have already learned a few basic facts about the effects
of signal sampling. We know that a finite sampling density means only that the signal
repeats in time, so the length of the sampling window should be chosen in accordance
with the signal length and the sampling frequency. More difficult to handle is the
problem of finite word length, since it sets the effective system resolution and the
conversion noise level.

7.2.2 Aliasing Phenomena in Sampling Systems


To illustrate further the use of the algorithms developed in Part 6, let us
consider the design requirements of a front–end amplifier driving an analogue to
digital converter (ADC). In theory the minimum required amplifier bandwidth should
be equal to the Nyquist frequency, which is one half of the ADC’s sampling clock
frequency, fN = fc/2. In practice, however, undistorted reconstruction of a periodic
waveform can be achieved only if the signal content above the Nyquist frequency has
been attenuated to levels lower than the ADC resolution. This is known in the
literature as Shannon’s sampling theorem [Ref. 7.2].
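The resolution level that sets the required stop band attenuation follows directly from the converter word length, roughly −6 dB per bit; a quick Python check (stdlib only, our own illustration):

```python
import math

nbits = 12                      # ADC word length
A = 2 ** nbits                  # number of discrete levels, 4096
a = 20 * math.log10(1 / A)      # relative resolution level in dB
print(round(a, 1))              # -72.2 dB, i.e. about -6.02 dB per bit
```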


The purpose of filtering the signal above the Nyquist frequency is to avoid
‘aliasing’. Fig. 7.2.1 shows a typical situation resulting in a signal frequency alias in
relation to the sampling clock frequency.

[Figure 7.2.1 plot: clock C, signal S and alias A vs. t, over 10 clock periods]
Fig. 7.2.1: Aliasing (frequency mirroring). A high frequency signal S, sampled by an ADC
at each rising edge of the clock C of a comparably high frequency, can not be distinguished
from its low frequency alias A, which is equal to the difference between the signal and the
clock frequency, fa = fs − fc. In this figure, fs = (9/10) fc, therefore fa = −(1/10) fc
(yes, a negative frequency! This can be verified by increasing the clock frequency very
slightly and watching the aliased signal apparently moving backwards in time).

The alias frequency fa is simply the difference between the signal frequency fs
and the sampling clock frequency fc :

fa = fs − fc                (7.2.1)

Aliasing can be best understood if we recall a common scene in Western movies,


where the wheels of the stage coach seem to be rotating backwards, while the
horses are being whipped to run wild to escape from the desperados behind. The
perceived frequency of rotation of the wheel is equal to the difference between
the actual rotation frequency and the frequency at which the pictures were taken.

A wheel, rotating at the cycle frequency fw equal to the picture rate fp (or its
integer multiple or sub-multiple, fw = n·fp/m, where m is the number of wheel
arms), would be perceived as stationary. Likewise, if an ADC’s sampling frequency is
equal to the signal frequency (see Fig. 7.2.2), the apparent result is a DC level.
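The alias relation fa = fs − fc can be verified numerically: at the sampling instants t = k/fc, a sine at fs = (9/10) fc is indistinguishable from one at fa = −(1/10) fc. A short Python check (the 10 Hz clock is an arbitrary illustrative value):

```python
import math

fc = 10.0                # sampling clock frequency
fs = 9.0                 # signal frequency, (9/10) fc
fa = fs - fc             # alias frequency: -1.0, negative!

# the two sines agree at every sampling instant t = k/fc :
for k in range(40):
    t = k / fc
    assert abs(math.sin(2 * math.pi * fs * t)
               - math.sin(2 * math.pi * fa * t)) < 1e-9
print(fa)                # -1.0
```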

[Figure 7.2.2 plot: clock C, signal S and DC-level alias A vs. t]
Fig. 7.2.2: Alias of a signal equal in frequency to the sampling clock looks like a DC.


Furthermore, a signal with a frequency slightly higher than the sampling


frequency could not be distinguished from a low frequency equal to the difference of
the two, as in Fig. 7.2.3. Experienced Hi-Fi enthusiasts and car mechanics will surely
remember seeing this, if we remind them of the ‘stroboscope effect’.

[Figure 7.2.3 plot: clock C, signal S and low-frequency alias A vs. t]
Fig. 7.2.3: A signal frequency slightly higher than the sampling frequency aliases
into a low frequency, equal to the difference of the two (but now positive).

Here is the ALIAS routine for Matlab, by which we can calculate the aliasing
for any clock and signal frequency desired.

function fa=alias(fs,fc,phi)
% ALIAS calculates the alias frequency of a sampled sinewave signal.
% Call : fa = alias( fs, fc, phi ) ;
% where: fs is the signal frequency
% fc is the sampling clock frequency
% phi is the initial signal phase shift

% Erik Margan, 920807, Free of copyright!

if nargin < 3
phi = pi/3 ; % signal phase shift re. clk, arbitrary value
end
ofs = 2 ; % clock offset
A = 1/ofs ; % clock amplitude
m = 100 ; % signal reconstruction factor is equal to
% the number of dots within a clock period
N = 1 + 10 * m ; % total number of dots
dt = 1 / ( m * fc ) ; % delta-t for time reconstruction
t = dt * ( 0 : 1 : N ) ; % time vector

fa = fs - fc ; % alias frequency (can be negative!)


clk = ofs + A * sign( sin( 2 * pi * fc *t ) ) ; % clock
sig = sin( 2 * pi * fs * t + phi ) ; % sampled signal
sal = sin( 2 * pi * fa * t + phi ) ; % alias signal

plot( t, clk, '-g',...


t, sig, '-b',...
t, sal, '-r',...
t(1:m:N), sig(1:m:N), 'or')
xlabel( 't' )


Of course, the sampled signal is more often than not a spectrum, either
discrete or continuous, and aliasing applies to a spectrum as well as to discrete
frequency signals. In fact, the superposition theorem applies here, too.
We have noted that a sampled spectrum is symmetrical about the sampling
frequency, because a signal, sampled by a clock with exactly the same frequency,
aliases as a DC level which depends on the initial signal phase shift relative to the
clock. However, something odd already happens at the Nyquist frequency, as can be
seen in Fig. 7.2.4 and Fig. 7.2.5. In both figures the signal frequency is equal to the
Nyquist frequency (1/2 the sampling frequency), but differs in phase. Although the
correct alias signal is equal in amplitude to the original signal, we perceive an
amplitude which varies with phase.
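At the Nyquist frequency the samples fall at t = k/fs, so the sampled values are sin(πk + φ) = (−1)^k · sin(φ), and the apparent amplitude is |sin(φ)|. A Python sketch of this phase dependence (the function name is ours, for illustration):

```python
import math

def apparent_amplitude(phi):
    # peak value of a Nyquist-frequency sine seen through its samples;
    # the samples are sin(pi*k + phi) = (-1)**k * sin(phi)
    samples = [math.sin(math.pi * k + phi) for k in range(100)]
    return max(abs(s) for s in samples)

print(apparent_amplitude(math.radians(10)))  # about 0.17 (Fig. 7.2.4)
print(apparent_amplitude(math.radians(45)))  # about 0.71 (Fig. 7.2.5)
print(apparent_amplitude(math.radians(90)))  # the full amplitude, 1.0
```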

[Figure 7.2.4 plot: clock C, signal S, correct alias A and apparent waveform X vs. t]
Fig. 7.2.4: When the signal frequency is equal to the Nyquist frequency,
there are two samples per period and the correct alias signal is of the same
amplitude as the original signal. However, the perceived alias amplitude is a
function of the phase difference between the signal and the clock. A 10°
phase shift results in a low apparent amplitude, as shown by the X waveform.

[Figure 7.2.5 plot: clock C, signal S, alias A and apparent waveform X vs. t]
Fig. 7.2.5: Same as in Fig. 7.2.4, but with a 45° phase shift.
The apparent amplitude of X is now higher.

In fact, if our ADC were to be sampling a slowly sweeping sinusoidal signal,
its spectral envelope would follow the sin(ωTs)/ωTs function, shown in Fig. 7.2.6,
with the first zero at the Nyquist frequency, the second zero at the sampling frequency
and so on, a zero at every harmonic of the Nyquist frequency.


[Figure 7.2.6 plot: sin(ωTs)/ωTs envelope vs. f/fN, linear scale (left) and logarithmic magnitude with the −20 dB/decade asymptote (right)]

Fig. 7.2.6: The spectrum resulting from sampling a constant amplitude sinusoidal signal
varying in frequency from 0.1 fN to 10 fN follows the sin(ωTs)/ωTs function, where
Ts = 1/fs. The function is shown in the linear vertical scale on the left and in the log of the
absolute value on the right. The first zero occurs at the Nyquist frequency, the second at the
sampling frequency and so on, at every Nyquist harmonic. Note that the effective sampling
bandwidth fh is reduced to about 0.43 fN. The asymptote is the same as for a simple RC low
pass filter, −20 dB per decade, with a cut off at fa = fN/√10.

The aliasing amplitude follows this same sin(ωTs)/ωTs function, from the
Nyquist frequency up. An important side effect is that the bandwidth is reduced to
about 0.43 fN. This can be taken into account when designing an input anti-aliasing
filter, to partially compensate the function pattern below the Nyquist frequency by an
adequate peaking.
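The effective bandwidth quoted above can be recomputed by solving sin(πf/fN)/(πf/fN) = 1/√2 numerically; a bisection sketch in Python (our own check, not from the book's toolbox):

```python
import math

def env(r):
    # sampling envelope sin(pi*r)/(pi*r), with r = f/fN;
    # its first zero falls at r = 1, i.e. at the Nyquist frequency
    x = math.pi * r
    return math.sin(x) / x

# bisect for the -3 dB point, env(r) = 1/sqrt(2):
lo, hi = 0.1, 0.9
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if env(mid) > 1 / math.sqrt(2):
        lo = mid
    else:
        hi = mid
print(round(lo, 3))   # about 0.443, i.e. roughly 0.43-0.44 fN
```

The book rounds this to about 0.43 fN.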

7.2.3 Better Anti-Aliasing With Mixed Mode Filters

By the term ‘mixed mode filter’ we mean a combination of analog and digital
filtering which gives the same result as a single filter having the same total number of
poles. The simplest way to understand the design requirements and optimization, as
well as the advantages of such an approach, is by following an example.
Let us imagine a sampling system using an ADC with a 12 bit amplitude
resolution and a 50 ns time resolution (sampling frequency fs = 20 MHz). The
number of discrete levels resolved by 12 bits is A = 2^12 = 4096; the ADC relative
resolution level is simply 1/A, or, in dB, a = 20 log10(1/A) = −72 dB. According to
the Shannon sampling theorem the frequencies above the Nyquist frequency
(fN = fs/2 = 10 MHz) must be attenuated by at least 2^12 to reduce the alias of the
high frequency spectral content (signal or noise) below the ADC resolution.
As we have just learned, the sin(ωTs)/ωTs shape of the alias spectrum
allows us to relax the filter requirements by a factor of 4.5 (some 13 dB, roughly
2 bits) at the frequency 0.7 fs; for a while we are going to neglect this, leaving it
for the end of our analysis.
Let us also assume a 4 V peak to peak ADC input signal range and let the
maximum required vertical amplifier sensitivity be 5 mV/division. Since oscilloscope
displays usually have 8 vertical divisions, this means 40 mV of input for a full scale
display, or a gain of 100. We would like to achieve the required gain–bandwidth
product with either a two- or a three-stage amplifier. We shall assume a 5-pole filter


for the two-stage amplifier (a 3-pole and a 2-pole stage), and a 7-pole filter for the
three-stage amplifier (one 3-pole stage and two 2-pole stages). We shall also inspect
the performance of a 9-pole (four-stage) filter to see if the higher bandwidth (achieved
as a result of a steeper cut off) justifies the cost and circuit complexity of one
additional amplifier stage.
Now, if our input signal was of a square wave or pulse form, our main
requirement would be a shortest possible ADC ‘aperture’ time and an analogue
bandwidth as high as possible. Then we would be able to recognize the sampled
waveform shape even with only 5 samples per period. But suppose we would like to
record a transient event having the form of an exponentially decaying oscillating
wave, along with lots of broad band noise, something like the signal in Fig. 7.1.4. To
do this properly we require both alias suppression of the spectrum beyond the
Nyquist frequency and preservation of the waveform shape; the latter requirement limits
our choice of filter systems to the Bessel–Thomson family.
Finally, we shall investigate the possibility of improving the system bandwidth
by filtering the recorded data digitally.
We start our calculations from the requirement that any anti-alias filter must
have an attenuation at the Nyquist frequency fN equal to the ADC resolution level.
Since we know that the asymptotic attenuation slope depends on the system order n
(number of poles) as n × 20 dB per decade, we can follow those asymptotes from fN
back up to the maximum signal level; the crossing point then defines the system cut off
frequency fhn for each of the three filter systems.
Since we do not have an explicit relation between the Bessel–Thomson filter
cut off and its asymptote, we shall use Eq. 6.3.10 for Butterworth systems to find the
frequency fa at which the 5-, 7-, and 9-pole Butterworth filters would exhibit the
required A = 2^12 attenuation. By using fh = fN^2/fa we can then find the Butterworth
cut off frequencies. Then, by using the modified Bessel–Thomson poles (those that
have the same asymptote as the Butterworth system of comparable order), we can find
the Bessel–Thomson cut off frequencies which satisfy the no-aliasing requirement.

A=2^12; % ADC resolution limit sets the required attenuation


fs=2e+7; % ADC sampling frequency, 20 MHz
fN=fs/2; % Nyquist frequency, 10 MHz
M=1e+6; % megahertz scale-factor

% the normalized 5-, 7- and 9-pole system asymptotes, all crossing


% the ADC resolution limit, 1/A, at fN, after Eq.6.3.10, will have
% the following cutoff frequencies :

fh5=fN/10^(log10(A^2-1)/(2*5));
fh7=fN/10^(log10(A^2-1)/(2*7));
fh9=fN/10^(log10(A^2-1)/(2*9));
disp(['fh5 = ', num2str(fh5/M), ' MHz'])
disp(['fh7 = ', num2str(fh7/M), ' MHz'])
disp(['fh9 = ', num2str(fh9/M), ' MHz'])
% the disp commands return the following values :

» fh5 = 1.8946 MHz


» fh7 = 3.0475 MHz
» fh9 = 3.9685 MHz
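The three values returned by disp can be cross-checked independently of Matlab; a Python transcription of the same asymptote formula (our own check) reproduces them:

```python
import math

A = 2 ** 12                 # required attenuation, the ADC resolution
fN = 10.0e6                 # Nyquist frequency, 10 MHz
fh = {}
for n in (5, 7, 9):
    # asymptote cutoff frequency after Eq. 6.3.10
    fh[n] = fN / 10 ** (math.log10(A ** 2 - 1) / (2 * n))
    print(n, fh[n] / 1e6)   # 1.8946..., 3.0475..., 3.9685... MHz
```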


We now find the poles and the system bandwidth of the 5-, 7-, and 9-pole
Bessel–Thomson systems, which have their responses normalized to the same
asymptotes as the above Butterworth systems of equal order:

N=601; % number of frequency samples


f=fN*logspace(-2,0,N); % length-N frequency vector, from 2 decades
% below fN to fN, in log-scale
w=2*pi*f; % angular frequency

[z5,p5]=bestap(5,'a'); % Bessel-Thomson asymptote-normalized systems


[z7,p7]=bestap(7,'a');
[z9,p9]=bestap(9,'a');

p5=p5*2*pi*fh5; % Scaling-up the poles by the previously
p7=p7*2*pi*fh7; % calculated cutoff frequencies, so that all
p9=p9*2*pi*fh9; % three responses have 1/A attenuation at fN

M5=20*log10(abs(freqw(p5,w))); % Calculate magnitudes in dB ;


M7=20*log10(abs(freqw(p7,w)));
M9=20*log10(abs(freqw(p9,w)));
% plot magnitudes in dB vs. log frequency
db3=-3.0103; % the -3dB reference level
dbA=20*log10(1/A); % the ADC resolution limit in dB (-72 dB)
semilogx( f/M, M5, '-r',...
f/M, M7, '-g',...
f/M, M9, '-b',...
fN*[0.05, 0.35]/M, [db3, db3], '-k',...
[f(1), f(N)]/M, [dbA, dbA], '-k' )
xlabel( 'f [MHz]' ) ;
ylabel( 'Attenuation [dB]' )

[Figure 7.2.7 plot: attenuation [dB] vs. f [MHz]; −3 dB crossings at f5 = 1.16 MHz, f7 = 1.66 MHz, f9 = 1.97 MHz; ADC resolution line at −72 dB]
Fig. 7.2.7: Magnitude vs. frequency of Bessel–Thomson 5-, 7-, and 9-pole
systems, normalized to the attenuation of 2^-12 (−72 dB) at fN (10 MHz).


Fig. 7.2.7 shows the frequency responses, calculated to have the same
attenuation, equal to the relative ADC resolution level of −72 dB, at the Nyquist
frequency. We now need their approximate −3 dB cut off frequencies:

m=abs(M5-3.0103); % compare the magnitudes with the -3dB level


x5=find(m==min(m)); % and find the index of each frequency limit

m=abs(M7-3.0103);
x7=find(m==min(m));

m=abs(M9-3.0103);
x9=find(m==min(m));

f5=f(x5); f7=f(x7); f9=f(x9); % find the cutoff frequencies

% display cutoff frequencies of the Bessel-Thomson systems :


disp(['f5 = ', num2str(f5/M), ' MHz'])
disp(['f7 = ', num2str(f7/M), ' MHz'])
disp(['f9 = ', num2str(f9/M), ' MHz'])
» f5 = 1.166 MHz
» f7 = 1.660 MHz
» f9 = 1.965 MHz

Note that these values are much lower than the cutoff frequencies of the
asymptotes, owing to the more gradual roll-off of Bessel–Thomson systems. Also,
note that a greater improvement in performance is achieved by increasing the system
order from 5 to 7 than from 7 to 9. We would like to have a confirmation of this fact
from the step responses (later, we shall also see how these step responses would look
when sampled at the actual sampling time intervals).

fs=2e+7; % sampling frequency


t=(0:1:500)/fs; % time vector (to calculate the rise times we need
% a much finer sampling than the actual 50 ns)

S5=atdr(z5,p5,t,'s'); % Step responses


S7=atdr(z7,p7,t,'s');
S9=atdr(z9,p9,t,'s');

% plot the step responses


% and the 0.1 and 0.9 reference levels to compare the rise times :
x10=t([50,130]); x90=t([150,300]); y10=[0.1,0.1]; y90=[0.9,0.9];

plot(t,S5,'-r', t,S7,'-g', t,S9,'-b', x10,y10,'-k', x90,y90,'-k' )


xlabel( 't [us]' )

% calculate the rise times :


x5a=find( abs(S5-0.1) == min( abs(S5-0.1) ) );
x5b=find( abs(S5-0.9) == min( abs(S5-0.9) ) );
x7a=find( abs(S7-0.1) == min( abs(S7-0.1) ) );
x7b=find( abs(S7-0.9) == min( abs(S7-0.9) ) );
x9a=find( abs(S9-0.1) == min( abs(S9-0.1) ) );
x9b=find( abs(S9-0.9) == min( abs(S9-0.9) ) );
Tr5=t(x5b)-t(x5a);
Tr7=t(x7b)-t(x7a);
Tr9=t(x9b)-t(x9a);


[Figure 7.2.8 plot: step responses vs. t [µs]; rise times Tr9 = 178 ns, Tr7 = 213 ns, Tr5 = 302 ns]
Fig. 7.2.8: Step responses of the 5-, 7-, and 9-pole Bessel–Thomson systems, having equal
attenuation at the Nyquist frequency. The rise times are calculated from the number of
samples between 10% and 90% of the final value of the normalized amplitude.

We see that in this case the improvement from order 7 to order 9 is not so high
as to justify the added circuit complexity and cost of one more amplifying stage.
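As a sanity check, the cut off frequencies of Fig. 7.2.7 and the rise times of Fig. 7.2.8 are consistent with the familiar Tr ≈ 0.35/fh rule of thumb for Gaussian-like (and thus also Bessel–Thomson) responses; a quick Python verification using the values read from the figures:

```python
# (fh [Hz], Tr [s]) pairs taken from Fig. 7.2.7 and Fig. 7.2.8:
pairs = [(1.166e6, 302e-9), (1.660e6, 213e-9), (1.965e6, 178e-9)]
for fh, tr in pairs:
    # the product fh * Tr stays close to 0.35 for all three systems
    print(round(fh * tr, 3))
```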
So let us say we are temporarily satisfied with the 7-pole filter system.
However, its 1.66 MHz bandwidth for a 12 bit ADC, sampling at 20 MHz, is simply
not good enough. Even the 1.96 MHz bandwidth of the 9-pole system is rather low.
The question is whether we can find a way around the limitations imposed by the anti-
aliasing requirements?
Most ADC recording systems do not have to show the sampled signal in real
time. To the human eye a screen refreshing rate of 10 to 20 times per second is fast
enough for most purposes. Also many systems are intentionally made to record and
accumulate large amounts of data to be reviewed later. So on most occasions there is
plenty of time available to implement some sort of signal post–processing.
We are going to show how a digital filter can be combined with the analog
anti-aliasing filter to expand the system bandwidth beyond the aliasing limit without
increasing the sampling frequency.
Suppose we could implement some form of digital filtering which would
suppress the alias spectrum below the ADC resolution and then we ask ourselves
what would be the minimum required pass band attenuation of such a filter. The
answer is simple: the filter attenuation must follow the inverse of the alias spectrum
envelope. But if we were to allow the spectrum around the sampling frequency to
alias, our digital filter would need to extend its attenuation characteristic back to DC.
Certainly this is neither practical nor desirableÞ Therefore since 0s œ #0s , our
bandwidth improvement factor, let us call it F , must be lower than #.

- 7.25 -
P.Starič, E.Margan Algorithm Application Examples

So let us increase the filter cut off by F œ "Þ($; the input spectrum would
then contain frequencies only up to F0N , which would alias back down to 0s  F0N ,
in this case #  "Þ($ œ !Þ#(0N . This frequency is high enough to allow the realization
of a not too demanding digital filter.
Let us now study the shape of the alias spectrum which would result from
taking our original 7-pole analog filter, denoted by J(o , and push it up by the chosen
factor F to J(b , as shown in Fig. 7.2.9. The spectrum WA between 0N and F0N is
going to be aliased below the Nyquist frequency into WB .

− 10 F7o F7b
fs fs
B fN =
− 20 2
Attenuation [dB]

− 30

− 40

− 50

− 60 fs − B fN B fN
SB SA
− 70 ADC resolution

− 80
1 10 100
f [MHz]

Fig. 7.2.9: Alias spectrum of a 7-pole filter with a higher cut off frequency. J(o is
our original 7-pole Bessel-Thomson analog filter, which crosses the 12-bit ADC
resolution level of  (# dB at exactly the Nyquist frequency, 0N = 0W Î# œ 10 MHz.
This guaranties freedom from aliasing, but the bandwidth is rather low. If we move
it upwards by a factor F œ "Þ($ to J(b , the spectrum WA will alias into WB . Note the
alias spectrum inversion: 0N remains in its place, whilst F0N is aliased to 0s  F0N .
Note also that the alias spectral envelope has changed in comparison with the
original: in the loglog scale plot a linearly falling spectral envelope becomes curved
when aliased. This change of the spectral envelope is important, since it will allow
us to use a relatively simple filter response shape to suppress the aliased spectrum
below the ADC resolution.

Note that in the log frequency scale the aliased spectrum envelope is not
linear, even if the original one is (as defined by the attenuation characteristic of the
analog filter).
If we flip the spectrum WB up, as in Fig. 7.2.10, the resulting spectral envelope,
denoted by Jrq , represents the minimal attenuation requirement of a digital filter,
which would restore freedom from aliasing.

- 7.26 -
P.Starič, E.Margan Algorithm Application Examples

− 10 Frq
fs
− 20

F7o F7b
Attenuation [dB]

− 30
fN
− 40

− 50

− 60 fs − B fN B fN
SB
− 70 ADC resolution
B
− 80
1 10 100
f [MHz]

Fig. 7.2.10: If we invert the alias spectrum WB the envelope of the resulting
spectrum Jrq represents the minimum attenuation requirement that a digital filter
should have in order to suppress the aliased spectrum below the ADC resolution.

The following procedure shows how to calculate and plot the various elements
of Fig. 7.2.9 and Fig. 7.2.10, and Jrq in particular, starting from the previously
calculated 7-pole Bessel–Thomson system magnitude J(o :

% the following calculation assumes a log frequency scale and


% a linear in dB attenuation.
A=2^12; % number of levels resolved by a 12-bit ADC
a=20*log10(1/A); % ADC resolution, -72dB
Nf=601; % number of frequency samples
f=logspace(6,8,Nf); % frequency vector, 1 to 100 MHz
B=1.73; % chosen bandwidth increase (max. 1.8)
% the original 7-pole filter magnitude crosses a at fN :
F7o=20*log10(abs(freqw(p7, 2*pi*f)));
% F7o shifted up by B to F7b :
F7b=20*log10(abs(freqw(p7*B, 2*pi*f)));

fA=B*fN; % F7b crosses ADC resolution (a) at fA


xn=min(find(f>=fN)); % index of fN in f
xa=min(find(f>=fA)); % index of fA in f
Sa=F7a(xn:xa); % source of the alias spectrum
fa=f(xn:xa); % frequency band of Sa
Sb=F7a(xa:-1:xn); % the alias spectrum, from fs-fA to fN
fb=fs-f(xa:-1;xn); % frequency band of Sb
Frq=a-Sa; % min. required dig.filt. magnitude in dB
fr=fb; % in the same freq. range: fs-fa to fN
M=1e+6; % MHz scale factor
semilogx( f/M,F7o,'-r', f/M,'-b', fa/M,Sa,'-y', fb/M,Sb,'-c',...
fr/M,Frq,'-m', [f(1),f(Nf)]/M,[a,a],'--k',...
[fN,fN],[-72,-5],':k', [fs,fs],[-80,0],':b' )
xlabel('f [MHz]')

- 7.27 -
P.Starič, E.Margan Algorithm Application Examples

As can be seen in Fig. 7.2.10, the required minimum attenuation Jrq is broad
and smooth, so we can assume that it can be approximated by a digital filter of a
relatively low order; e.g., if the analog filter has 7 poles, the digital one could have
only 6 poles. The combined system would then be effectively a 13-pole. Of course,
the digital filter reduces the combined system’s cut off frequency, but it would still be
higher than in the original, non-shifted, analog only version. However, the main
problem is that the cascade of two separately optimized filters has a non-optimal
response and the shape of the transient suffers most. This can be solved by simply
starting from a higher order system, say, a 13-pole Bessel–Thomson. Then we assign
7 of the 13 poles to the analogue filter and 6 poles to the digital one.
The 6 poles of the digital filter must be transformed into appropriate sampling
time delays and amplitude coefficients. This can be done either with ‘z-transform’
mapping, or simply by calculating its impulse response and use it for convolution
with the sampled input signal, as we shall do here.
But note that since now the 7 poles of the analog filter will be taken from a
13-pole system, they will be different from the 7-pole system discussed so far (see a
comparison of the poles in Fig. 7.2.11). Although the frequency response will be
different, the shape of the alias band will be similar, since the final slope is the same
in both cases. Nevertheless, we must repeat the calculations with the new poles.
The question is by which criterion do we choose the 7 poles from the 13. From
Jrq in Fig. 7.2.9 we can see that the digital filter should not cut sharply, but rather
gradually. Such a response could be achieved if we reserve the poles with the lower
imaginary part for the digital filter and assign those with high imaginary part to the
analog filter. But then the analog filter step response would overshoot and ring,
compromising the dynamic range. Thus, the correct design strategy is to assign the
real and every other complex conjugate pole pair to the analog filter, as shown below.

× 10 7
4
ℑ {s}
3

1
ℜ {s}
0

−1

−2

−3

−4
−4 −3 −2 −1 0 1 2 3 4 × 10 7
Fig. 7.2.11: The 13-pole mixed mode system, the analog part marked by ‘×’, the
digital by ‘+’; compared with the original 7-pole analog only filter, marked by ‘*’.
Note the difference in pattern size (proportional to bandwidth!).

- 7.28 -
P.Starič, E.Margan Algorithm Application Examples

The values of the mixed mode filter poles for the analog and digital part are:

» p7a : 1e+7 * -3.0181


-2.8572 - 1.1751i
-2.8572 + 1.1751i
-2.3275 - 2.3850i
-2.3275 + 2.3850i
-1.1637 - 3.7353i
-1.1637 + 3.7353i

» p6d : 1e+7 * -2.9785 - 0.58579i


-2.9785 + 0.58579i
-2.6460 - 1.77250i
-2.6460 + 1.77250i
-1.8655 - 3.02640i
-1.8655 + 3.02640i

Here is a calculation of aliasing suppression using a 13-pole mixed mode filter


by using the above poles:

fN=1e+7; % Nyquist frequency


M=1e+6; % MHz-us conversion factor
fs=2*fn; % sampling frequency
A=2^12; % ADC dynamic range
a=20*log10(1/A); % ADC resolution limit in dB
Nf=601; % number of frequency samples
f=logspace(6,8,Nf); % log-scaled frequency vector, 1 - 100 MHz

% find the frequency normalization factor :


f7=fN/10^(log10(A^2-1)/(2*7));
% the 7-pole analog-only filter, used as a reference :
[z7,p7]=bestap(7,'a');
p7=2*pi*f7*p7; % denormalized poles of the reference
F7o=20*log10(abs(freqw(p7,2*pi*f))); % magnitude of the reference

[z13,p13]=bestap(13,'a'); % the 13th-order Bessel-Thomson system


B=1.73; % chosen bandwidth increase
p13=B*2*pi*f7*p13; % denormalize the 13 poles (f7 same as ref.)
p13=sort(p13); % sort the poles in ascending abs. value
p7a=p13([1,4,5,8,9,12,13]); % analogue filter pole selection
p6d=p13([2,3,6,7,10,11]); % digital-equivalent filter poles

F7a=20*log10(abs(freqw(p7a,2*pi*f))); % analogue system magnitude


F6d=20*log10(abs(freqw(p6d,2*pi*f))); % digital-equivalent magnitude
F13=20*log10(abs(freqw(p13,2*pi*f))); % total a+d system magnitude

xa=max(find(F7a>=a)); % freq. index of F7a crossing at a


xn=max(find(f<=fN)); % Nyquist frequency index
fa=f(xn:xa); % frequency band above Nyquist
Fa=F7a(xn:xa); % response spectrum above Nyquist
fb=fs-f(xa:-1:xn); % alias band
Fb=F7a(xa:-1:xn); % alias spectrum
Frq=a-Fb; % alias suppression requirement
semilogx( f/M, F7o, '-k', ...
f/M, F7a, '-r', ...
f/M, F6d, '-b', ...
f/M, F13, '-g', ...
fa/M, Fa, '-y', ...
fb/M, Fb, '-c', ...
fb/M, Frq, '-m', ...
[f(1),f(Nf)]/M, [a,a],'--k' )
axis([1, 100, -80, 0]);

- 7.29 -
P.Starič, E.Margan Algorithm Application Examples

The result of the above plot operation (semilogx) can be seen in Fig. 7.2.12,
where we have highlighted the spectrum under the analog filter J(a beyond the
Nyquist frequency, its alias, and the inverted alias, which represents the minimum
required attenuation Jrq of the digital filter. Note how the 6-pole digital filter’s
response J6d just touches the Jrq . Probably it is just a coincidence that the bandwidth
increase factor F chosen for the analog filter is equal to È$ (we have arrived at this
value by repeating the above calculation several times, adjusting F on each run). This
process can be easily automated by iteratively comparing the samples of Jrq and J6d ,
and increase or decrease the factor F accordingly.

10
fN fs
0
− 3dB
Frq
−10 f h7
f h13
−20 F13 F7a
F7o
Attenuation [dB]

−30
F6d

− 40

− 50

− 60
Sb Sa
− 70 ADC resolution

− 80
1 10 100
f [MHz]
Fig. 7.2.12: The bandwidth improvement achieved with the 13-pole mixed mode filter.
The J(o is the original 7-pole analog only filter response, as in Fig. 7.2.9. The new
analog filter response J(a , which uses 7 of the 13 poles as shown in Fig. 7.2.11, was
first calculated to have the same 72 dB attenuation at the Nyquist frequency 0N (as
J(o ), and then all the 13 poles were increased by the same factor F œ "Þ($ as before.
The resulting J(a response generates the alias spectrum Wb from its source Wa . The
envelope of the inverted alias spectrum Jrq sets the upper limit for the digital filter’s
response, J6d , required for effective alias suppression. The response J"$ is the total
analog + digital 13-pole system, which crosses the ADC resolution limit at 0R , and
suppresses the alias band below the ADC resolution level, which was the main goal. In
this way the system’s cut off frequency has increased from "Þ'' MHz for J(o to #Þ#
MHz for the J"$ . Thus, a bandwidth improvement of about "Þ$# is achieved (not very
much, but still significant — note that there are ways of improving this further!).

As expected, the digital filter has reduced the system bandwidth below that of
analog filter; however, the analog  digital system’s response J"$ has a cut off
frequency 0h"$ well above the 0h( of the original analog only 7-pole response, the J(o :

07o œ "Þ'' MHz 013 œ #Þ# MHz

Therefore the total bandwidth improvement factor is 013 Î07o œ "Þ$#.

- 7.30 -
P.Starič, E.Margan Algorithm Application Examples

The digital filtering process which we are going to use is actually a


convolution of discrete signal samples with the filtering coefficients which represent
the filter impulse response (sampled) in the time domain. Therefore, if we are going
to implement the digital filter in software, we should pay attention to the number of
samples of the digital filter impulse response. A high number of samples means a
longer time to calculate the convolution. Luckily, the pole selection used will have a
nearly Gaussian, non-oscillating, impulse response with only a small undershoot, so a
suitable digital filter can be made with only 11 samples (see Fig. 7.2.15).
Let us now calculate and compare the two step responses to examine the effect
of bandwidth improvement in time domain:

t=2e-9*(0:1:500); % time vector, 2 ns resolution


M=1e+6;
g7o=atdr(z70,p70,t,'s'); % reference 7-pole system step-response
g13=atdr(z13,p13,t,'s'); % 13-pole step-response

% plot the step responses with the 0.1 and 0.9 reference levels :
plot( t*M, g7o, '-k',...
t*M, g13, '-r',...
t([5, 25])*M, [0.1, 0.1], '-k',...
t([15, 45])*M, [0.9, 0.9], '-k' )
xlabel('t [us]')

1.2
t r7
1.0

0.8

0.6 t r7 = 210 ns
g7o t r13 = 155 ns
0.4 g13

0.2 t r13
0
0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0
t [µ s]
Fig. 7.2.13: Step response comparison of the original 7-pole analog only filter g(o and
the improved 13-pole mixed mode (7-pole analog + 6-pole digital) Bessel–Thomson
filter, g"$ . Note the shorter rise time and longer time delay of g"$ . The resulting
rise time is also better than that of the 9-pole analog only filter (see Fig. 7.2.8).

The greatest improvement, however, can be noted in the group delay; as


shown in Fig. 7.2.14, the mixed mode system more than doubles the linear phase
bandwidth, thus putting a lower constraint on spectrum analysis.

% continuing from previous calculations:


gd7o=gdly(z7,p7,2*pi*f); % group-delay for the original 7-pole and
gd13=gdly(z13,p13,2*pi*f); % the mixed-mode 13-pole system
semilogx( f/M, gd7o*M, '-k', f/M, gd13*M, '-r')

- 7.31 -
P.Starič, E.Margan Algorithm Application Examples

− 0.1
τe
[ µ s] τ e7 τ e13

− 0.2

− 0.3
1 10 100
f [MHz]
Fig. 7.2.14: Envelope delay comparison of the original 7-pole analog only
filter 7e( and the mixed mode 13-pole Bessel–Thomson filter 7e"$ . The 7-pole
analog only filter is linear to a little over 2 MHz, while the 13-pole mixed
mode filter is linear well into the stop band, up to almost 5 MHz, more than
doubling the useful linear phase bandwidth.

The reader is encouraged to repeat all the above calculations also for the 5-
and 9-pole case to examine the dependence of the bandwidth improvement factor on
the analog filter’s slope.
As we have seen, the bandwidth improvement factor is very sensitive to the
steepness of the attenuation slope, so the 9-pole analog filter assisted by an 8-pole
equivalent digital filter may be found to be attractive now, extending the bandwidth
further.

7.2.4 Gain Optimization


We have said that we need a total gain of 100. Since the analog 7-pole filter
can be realized with a three-stage amplifier (one 3-pole stage and two 2-pole stages,
see Fig. 7.2.18), the gain of each stage can be optimized by taking the third root of the
total gain, "!!"Î$ œ %Þ'%"' or "$Þ$ dB (compare this to a two-stage 5-pole filter,
where each stage would need "!!"Î# œ "! or #! dB). The lower gain should allow us
to use opamps with lower 0T and still have a good phase margin to prevent pole
drifting from the optimum (because of the decreasing feedback factor at high
frequencies). This is important, since the required 12 bit precision is difficult to
achieve without local feedback, and the cost of fast precision amplifiers can be high.
As calculated before, for the sampling frequency of 20 MHz the bandwidths
are 1.660 MHz for the 7-pole analog only filter and 2.188 MHz for the 13-pole mixed
mode system. In addition to the 13.3 dB of gain, each amplifier stage will need at least
20 dB of feedback at these frequencies, to prevent the response peaking which would
result from the finite amplifier bandwidth (if it were too low). By accounting for the
amplifier gain roll–off of 20 dBÎdecade we conclude that we need amplifiers with a
unity gain bandwidth of at least 70 MHz for a 9-pole filter and 120 MHz for a 7-pole
filter. Note also that amplifiers with a unity gain bandwidth of 160 MHz would be
required for the two stages of the 5-pole filter with 20 dB of gain per stage and a
system cutoff frequency of only 1.160 MHz!

- 7.32 -
P.Starič, E.Margan Algorithm Application Examples

7.2.5 Digital Filtering Using Convolution


Before we set off to design the analog filter part, let us check the digital
filtering process. If we take the output of the analog filter and convolve it with the
impulse response of the 6-pole equivalent digital filter, we obtain the response of the
composite 13-pole filter. We would also like to see how the step response would look
when sampled at the actual ADC sampling frequency of 20 MHz:

dt=2e-9; % 2 ns time resolution for plotting


Nt=501; % length of time vector
t=dt*(0:1:Nt-1); % time vector
M=1e+6; % microsecond normalization factor
fs=2e+7; % actual ADC sampling frequency (20MHz)
r=1/(fs*dt); % num. of dt's in the sampling period

g7a=atdr(z7,p7a,t,'s'); % step-response of the analog part


h6d=atdr(z7,p6d,t,'n'); % normalized impulse resp. digital part
g13=conv(g7a,h6d); % digital filtering = convolution
g13=g13(1:max(size(t))); % take only length(t) samples

plot( t*M, g7a, '-r',...


t*M(1:r:Nt), g7a(1:r:Nt), 'or',...
t*M, Nt*h6d, '-g',...
t*M(1:r:Nt), Nt*h6d(1:r:Nt), 'og',...
t*M, g13, '-b',...
t*M(1:r:Nt), g13(1:r:Nt), 'ob' )
xlabel('t [us]')

The plot can be seen in Fig. 7.2.15. The dot markers indicate the first 15 time
samples of the analog filter step response, the digital filter impulse response and the
composite mixed mode filter step response.

1.2
1.0

0.8 g 7a
0.6 g 13

0.4 f 6d
11
0.2

− 0.2
0 0.1 0.2 0.3 0.4 0.5 0.6 0.7
t [ µ s]
Fig. 7.2.15 : Convolution as digital filtering for a unit step input. The 7-pole analog
filter step response ga( is sent to the 6-pole equivalent digital filter, having the
impulse response of 0d' . Their convolution results in the step response g"$ of the
effectively 13-pole mixed mode composite filter. The marker dots represent the
actual time quantization as would result from the ADC sampling at 20 MHz. The
impulse response of the digital filter is almost zero after only 11 samples, so the
digital filter needs only the first 11 samples as its coefficients for convolution. Note
that even if the value of the first coefficient is zero, it must nevertheless be used in
order to have the correct response.

- 7.33 -
P.Starič, E.Margan Algorithm Application Examples

Of course, to simulate the actual digitalization, we should have multiplied the


normalized signal by 212 and then rounded it to integers to obtain a vertical range of
4096 discrete levels. However, the best computer monitors have a vertical resolution
of only "Î% of that. If viewed on screen the signal would be seen as if it were digitized
with a 10 bit resolution at best (but with lower quantization noise). On paper, with a
printer resolution of 600 dots per inch, the vertical size of this figure would have to be
6.5 inches (16.5 cm) in order to appreciate the full ADC resolution.
Nevertheless, measurement precision is always of prime importance,
particularly if additional operations are required to extract the information from the
recorded signal. In such a case the digital filtering can help, since it will additionally
suppress the quantization noise, as well as the amplifier’s noise, by a factor equal to
the square root of the number of filter coefficients. Disregarding the analog input
noise for the moment, if the quantization noise of the 12 bit ADC were ± 2 LSBs at
the maximum sampling frequency, its precision would be effectively 10 bits. So, if
the impulse response of our digital filter was 11 samples long the quantization noise
would be averaged by an additional È11 , or about 3.3 times, resulting in an almost 2
bit improvement in precision. Effectively, the internal signal precision in memory
would be restored to nearly 12 bits.
A final note on the step response: we have started the design of the mixed
mode filter because we have assumed that aliasing would be a problem. However,
aliasing is a problem only for periodic signals, not for the step! Nevertheless, we are
interested in the step response for two reasons:
• one is that even if we were not to care for it, our customers and the people using
our instrument would want to know the rise time, the settling time, etc., and, also
very important, our competitors would certainly not spare their means of checking
it if they can show they are better, or just very close but cheaper!
• the other reason is that the step response will give us a direct evidence of the
system phase linearity, a parameter of fundamental importance in spectrum
analysis.

7.2.6 Analog Filters With Zeros


By using a 13-pole mixed mode filtering we have so far achieved a bandwidth
improvement of about 1.37 compared to a 7-pole analog only filter. But even greater
improvements are possible if we use analog filters with zeros. Combining a Bessel
low pass response with zeros in the stop band is difficult but not impossible to
achieve. The zeros must be in pairs and purely imaginary, and they must be placed at
the Nyquist frequency and its first harmonic, the sampling frequency (beyond 0s the
filter attenuation is high, so any aliasing from there will already be below the ADC
resolution). In this way the zeros effectively prevent aliasing down to DC, and they
also modify the filter slope to become very steep near them, allowing us to increase
the bandwidth further still: a factor of up to 1.87 can be achieved.
One such example, calculated from the same initial assumptions as before, is
shown in the following figures. Fig. 7.2.x5 shows the poles and zeros, Fig. 7.2.x6
shows the frequency responses, and Fig. 7.2.x7 shows the alias spectrum in relation to
the filter responses and the resulting alias suppression by the digital filter.

- 7.34 -
P.Starič, E.Margan Algorithm Application Examples

8
× 10
ℑs
analog filter
1 poles
zeros
digital filter
poles
ℜs
0

−1

8
−1 0 1 × 10
Fig. 7.2.16: An example of a similar mixed mode filter, but with the analog
filter using two pairs of imaginary zeros, one pair at the sampling frequency
and the other pair at the Nyquist frequency.
10
fN fs
0
Fa7
− 10 F d6
− 20
Attenuation [dB]

− 30 F13x
− 40
− 50
− 60 F a7z
− 70
− 80
− 90
1 10 100
f [MHz]
Fig. 7.2.17: By adding the zeros the analog filter frequency response in
modified from Ja( to Ja(z . Jd' is the digital filter response and J"$x is the
composite mixed mode response.
10
fN fs
0
Fd6 Sr
− 10
− 20
Attenuation [dB]

− 30
− 40 Fa7z
− 50
− 60
Sb Sa
− 70
− 80 Fd6 × S b
− 90
1 10 100
f [MHz]
Fig. 7.2.18: The spectrum Wa is aliased into Wb . The difference between the ADC
resolution and Wb (both in dB) gives Wr , the envelope of which determines the
minimum attenuation required by Jd6 to suppress Wb below the ADC resolution.

- 7.35 -
P.Starič, E.Margan Algorithm Application Examples

Fig. 7.2.19 shows the time domain responses. Note the influence of zeros on
the analog filter response.

1.2
g a7
1.0 g a7z

0.8

0.6 g 13
f d6
0.4

0.2

0
− 0.2
0 0.1 0.2 0.3 0.4 0.5 0.6 0.7
t [ µ s]
Fig. 7.2.19: The step response of the 7-pole analog filter ga( is modified by the
4 zeros into ga(z . Because the alias spectrum is narrower the bandwidth can be
increased. The mixed mode system step response g"$ has a rise time of less than
150 ns (in contrast with the 180 ns for the case with no zeros).

7.2.7 Analog Filter Configuration

So far we have inspected the system performance in detail, without knowing


the circuit’s actual configuration. Now it is time to select the analogue filter
configuration and calculate the component values needed to comply with the required
performance. To achieve this we have to consider both the system’s accuracy (which
has to be equal or better than the 12 bit ADC resolution) and the gain–bandwidth
product of each stage, which we have already accounted for in part.
In general, any filter topology could be used if only the transfer function is
considered. However for high frequency applications the filter topology which suits
us best is the ‘Multiple–Feedback’ type (MFB), built around operational amplifiers.
MFB uses a phase inverting, virtual ground amplifier with the filtering components in
its feedback loop, as shown in Fig. 7.2.16 and 7.2.18. Without feedback, the 12 bit
precision, offered by the ADC, would be impossible to obtain and preserve in the
analog circuit. As a bonus, this topology also offers lowest distortion.
In fact, the non-inverting configurations, such as the ‘Sallen–Key’ (SK) filter
type or the ‘Frequency Dependent Negative Resistance’ (FDNR) type, must support a
common mode signal as high as the input signal, whilst the filtering is done on the
small differential signal. The mismatch of the inverting and non-inverting opamp
input impedance is inevitable in filters, so the finite and frequency dependent
common mode rejection plays an important role regarding signal fidelity. As a ‘rule of
thumb’ for high speed opamps: invert if you can; if you can not, invert twice!
Another factor, which becomes important in high frequency active filters, is
the impedance of the filtering components; the MFB configuration allows us to use
low resistance and moderate capacitance values, thus minimizing the influence of

- 7.36 -
P.Starič, E.Margan Algorithm Application Examples

strays. In the figures below are the schematic diagrams of a 3-pole and a 2-pole stage.
We shall use the 3+2+2 cascade to implement the 7-pole filter required.

R4 3 R3 2
R2
s
R1 C1
1
o

C3 C2
A

Fig. 7.2.20. The ‘Multiple–Feedback’ 3-pole (MFB-3) low pass filter configuration

R3 2
R2
s
R1 1
C1
o

C2
A

Fig. 7.2.21: The ‘Multiple–Feedback’ 2-pole (MFB-2) low pass filter configurations

7.2.8 Transfer Function Analysis of the MFB-3 Filter

Our task is to find the transfer functions of these two circuits and then relate
the time constants to the poles so that the required component values can be found.
The 3rd -order stage will be the first in the cascade, so let us calculate its
components first. For the complete analysis please refer to Appendix 7.1; here we
write only the resulting transfer function, which in the general case is:
=" =# =$
J a=b œ E!
a=  =" ba=  =# ba=  =$ b
=" =# =$
œ E! (7.2.2)
=$  =# a="  =#  =$ b  = a=" =#  =" =$  =# =$ b  =" =# =$

For the MFB-3 circuit the coefficients at each power of = are:

" V% " V$ V$
a="  =#  =$ b œ Œ"   Œ"    (7.2.3)
G$ V% V$ G# V$ V# V"

" " V$
"  aV$  V% bŒ  
V# V" V#
=" = #  = " = $  = # = $ œ  (7.2.4)
G# G$ V$ V% G1 G 2 V 1 V 3

V$ V% "
=" =# =$ œ Œ"   (7.2.5)
V# V$ G" G # G $ V " V $ V %

- 7.37 -
P.Starič, E.Margan Algorithm Application Examples

and the DC gain E! is:


V#
E! œ  (7.2.6)
V$  V %

By examining the coefficients and the gain we note that we can optimize the
component values by making the resistors V" , V$ , and V% equal:

V" œ V$ œ V% œ V (7.2.7)

The coefficients and the gain equations now take the following form:

# " V
a="  =#  =$ b œ  Œ#   (7.2.8)
G$ V G# V V#

" #V " V
=" =#  =" =$  =# =$ œ Œ$   † (7.2.9)
G# G$ V # V# G1 G2 V # V#

#V "
=" =# =$ œ † (7.2.10)
V# G" G# G$ V$

V#
E! œ  (7.2.11)
#V
To simplify our expressions we shall substitute the term VÎV# by:
V "
Qœ œ  (7.2.12)
V# #E!
so the coefficients can be written as:
# #Q
O# œ a="  =#  =$ b œ  (7.2.13)
G$ V G# V

$  #Q Q
O" œ =" =#  =" =$  =# =$ œ  (7.2.14)
G# G$ V# G1 G2 V#

#Q
O! œ =" =# =$ œ (7.2.15)
G" G# G$ V$

Next we can normalize the resistance ratios and the VG time constants in
order to eliminate the unnecessary variables. After the equations are solved and the
component ratios are found we shall denormalize the component values to the actual
cut off frequency, as required by the poles. We can thus set the normalization factor:
"
R œ (7.2.16)
V
so instead of V we have the normalized resistance VR :

VR œ R V œ " (7.2.17)

- 7.38 -
P.Starič, E.Margan Algorithm Application Examples

Accordingly, the capacitance values are now also normalized, and to


distinguish the new values from the original ones we shall label the capacitors as:
G" G# G$
Ga œ Gb œ Gc œ (7.2.18)
R R R
With these changes we obtain:
# #Q
O# œ  (7.2.19)
Gc Gb

$  #Q Q
O" œ  (7.2.20)
Gb Gc G a Gb

#Q
O! œ (7.2.21)
Ga Gb Gc
We can now express, say, Ga from Eq. 7.2.21:
#Q
Ga œ (7.2.22)
O ! G b Gc
which we insert into Eq. 7.2.20:
$  #Q O! Gc
O" œ  (7.2.23)
Gb Gc #
From this we express Gb :
#a$  #Q b
Gb œ (7.2.24)
#O" Gc  O! Gc#

Now we can eliminate Gb from Eq. 7.2.19:


# #Q
O# œ  (7.2.25)
Gc #a$  #Q b
#O" Gc  O! Gc#

and we remain with only Gc , for which we have a third-order equation:


O" # #a$  #Q bO# %a$  #Q b
Gc$  # Gc  Gc  œ! (7.2.26)
O! a#  Q bO! a#  Q bO!

By substituting the coefficients of this equation with :, ; , and <:


O"
: œ # (7.2.27)
O!

#a$  #Q bO#
;œ (7.2.28)
a#  Q bO!

%a$  #Q b
<œ  (7.2.29)
a#  Q bO!

- 7.39 -
P.Starič, E.Margan Algorithm Application Examples

we can rewrite Eq. 7.2.23 as:

Gc$  : Gc#  ; Gc  < œ ! (7.2.30)

The real general solution for the third-order has been written already in Appendix 2.1:

#È # "  È$ a# :$  * : ;  #( <b :


Gc œ  :  $; sin arctan– 
$ $ * È% < :$  :# ; #  ") : ; <  % ; $  #( <# —Ÿ $

(7.2.31)

By inserting the poles =" , =# , and =$ into the expressions for O! , O" , and O$ ,
and the expression for gain in Q , and then using it all in :, ; , and <, we finally obtain
the value of Gc . The explicit expression would be too long to write here, and, anyway,
it is only a matter of simple substitution, which in a numerical algorithm would not be
necessary. With the value of Gc known we can go back to Eq. 7.2.24 to find the value
of Gb :
"
#Œ$  
E!
Gb œ (7.2.32)
#a=" =#  =" =$  =# =$ bGc  =" =# =$ Gc# ‘

and from Eq. 7.2.20 we express Ga :

" "
Ga œ  † (7.2.33)
E! =" =# =$ Gb Gc

Now that the normalized capacitor values are known, the denormalization
process makes use of the circuit’s cut off frequency, =$h (which, to remind you, is
different from the 7-pole filter cut off =(h , as it is from the total system cut off, ="$h );
=$h relates to O! and the poles as:

O! œ =$$h œ  =" =# =$ (7.2.34)

From =$h we can denormalize the values of V and the capacitors to acquire
some suitable values, such that the opamp of our choice can easily drive its own
feedback impedance and the input impedance of the following stage. For high system
bandwidth, it is advisable to choose V as low as possible, usually in the range
between "&! and (&! H. Let us say that we can set V œ ##! HÞ This means that:

"
R œ (7.2.35)
##!

Accordingly, we denormalize the capacitors:


G" œ R Ga (7.2.36)
G# œ R Gb (7.2.37)
G$ œ R Gc (7.2.38)

- 7.40 -
P.Starič, E.Margan Algorithm Application Examples

The cut off frequency is:

=$h " $ " "


03h œ œ Ê † (7.2.39)
#1 #1 V E! G" G# G$

From the system gain we obtain the value of V# :

V# œ # V E! (7.2.40)

By inserting the first three poles from p7a for =" , =# , and =$ , we obtain the
following component values:

% The poles of the 7th-order analog filter:

p7a: 1e+7 * -3.0181


-2.8572 - 1.1751i
-2.8572 + 1.1751i
-2.3275 - 2.3850i
-2.3275 + 2.3850i
-1.1637 - 3.7353i
-1.1637 + 3.7353i [rad/s]

% The first three poles of p7a are assigned to the MFB-3 circuit:

s1 = -3.0181 * 1e+7 [rad/s]


s2 = ( -2.8572 - 1.1751i ) * 1e+7 [rad/s]
s3 = ( -2.8572 + 1.1751i ) * 1e+7 [rad/s]

% The single stage gain is:

Ao = 100^(1/3) = 4.642

% ------ MFB-3 components: ------------------------------

R=R4=R3=R1 = 220 Ohm R2 = 2042 Ohm

C3 = 102.8 pF C2 = 274.5 pF C1 = 24.9 pF

% The cut off frequency is:

f3h = 4.879 MHz

7.2.9 Transfer Function Analysis of the MFB-2 Filter

The derivation for the two second-order stages, which are both of the form
shown in Fig. 7.3.10, is also reported in detail in Appendix 7.1. Again, here we give
only the resulting transfer function:
V3 "

@o V2 V2 V1 V 3 G 1 G 2
œ  †
@s V3 V3 V3 " V3 "
=2  = Œ   "  †
V1 V2 V3 G 2 V2 V1 V3 G1 G 2
(7.2.41)

- 7.41 -
P.Starič, E.Margan Algorithm Application Examples

By comparing this with the general form:


	H(s) = A₀ · s₁s₂ / [(s − s₁)(s − s₂)] = A₀ · s₁s₂ / [s² − s(s₁ + s₂) + s₁s₂]     (7.2.42)

we find the system gain:


	A₀ = −R₂/R₃                                          (7.2.43)

and the component values are found from the denominator polynomial coefficients, in
which, in order to optimize the component values, we again make R₁ = R₃ = R (in
the design formulas below, A₀ stands for the gain magnitude R₂/R₃):

	s₁s₂ = 1/(A₀R²C₁C₂)                                  (7.2.44)

	−(s₁ + s₂) = (1/(RC₂))·(2 + 1/A₀)                    (7.2.45)

Solving for C₂ and C₁, respectively, we have:

	C₂ = (2 + 1/A₀) / [−R(s₁ + s₂)]                      (7.2.46)

	C₁ = [−(s₁ + s₂)/(s₁s₂)] · 1/[R(2A₀ + 1)]            (7.2.47)

Again we introduce the normalization factor N = 1/R, so that R_N = NR = 1, and
we accordingly normalize the capacitor values:

	Ca = C₁/N        Cb = C₂/N                           (7.2.48)

Then:

	Cb = (2 + 1/A₀) / [−(s₁ + s₂)]                       (7.2.49)

	Ca = [−(s₁ + s₂)/(s₁s₂)] · 1/(2A₀ + 1)               (7.2.50)

Let us say that here, too, we use the same values for R and N as before
(R = 220 Ω, N = 1/220; note however that in general we can use a different value if
for whatever reason we find it more suitable — one such reason can be the preferred
values of capacitors). Thus:

	C₁ = N·Ca        C₂ = N·Cb                           (7.2.51)

The cut off frequency of the MFB-2 circuit is simply:


	f₂h = ω₂h/(2π) = √(s₁s₂)/(2π) = 1/[2πR√(A₀C₁C₂)]     (7.2.52)


From p7a we use the 4th and the 5th pole for the first MFB-2 stage and the 6th
and 7th pole for the second MFB-2 stage. With those we obtain the following
component values:

% The first MFB-2 circuit:


s1 = ( -2.3275 - 2.3850i ) * 1e+7 [rad/s]
s2 = ( -2.3275 + 2.3850i ) * 1e+7 [rad/s]
f2h = 5.304 MHz

------ component values: -----------------------------


Ao = 4.642 R=R3=R1 = 220 ohm R2 = 1021 ohm
C2 = 216.3 pF C1 = 18.5 pF

% The second MFB-2 circuit:


s1 = ( -1.1637 - 3.7353i ) * 1e+7 [rad/s]
s2 = ( -1.1637 + 3.7353i ) * 1e+7 [rad/s]
f2h = 6.227 MHz

------ component values: -----------------------------


Ao = 4.642 R=R3=R1 = 220 ohm R2 = 1021 ohm
C2 = 432.7 pF C1 = 6.7 pF
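Equations 7.2.43 to 7.2.52 translate directly into a short design routine. The following sketch (Python, not the book's Matlab; A₀ taken as the gain magnitude, as in the listings) reproduces the component values of both MFB-2 stages:

```python
import math

def mfb2_design(s1, s2, A0, R):
    """One MFB-2 stage with R1 = R3 = R, per Eq. 7.2.44-7.2.47 and 7.2.52."""
    sum_s  = -(s1 + s2).real     # -(s1 + s2): positive real for a stable pair
    prod_s = (s1 * s2).real      # s1*s2: also positive real
    C2  = (2.0 + 1.0 / A0) / (R * sum_s)              # Eq. 7.2.46
    C1  = (sum_s / prod_s) / (R * (2.0 * A0 + 1.0))   # Eq. 7.2.47
    f2h = math.sqrt(prod_s) / (2.0 * math.pi)         # Eq. 7.2.52
    return C1, C2, f2h

A0, R = 100.0 ** (1.0 / 3.0), 220.0

# First MFB-2 stage (4th and 5th pole of p7a):
C1a, C2a, fa = mfb2_design(complex(-2.3275e7, -2.3850e7),
                           complex(-2.3275e7, +2.3850e7), A0, R)
# Second MFB-2 stage (6th and 7th pole of p7a):
C1b, C2b, fb = mfb2_design(complex(-1.1637e7, -3.7353e7),
                           complex(-1.1637e7, +3.7353e7), A0, R)

print(round(C1a * 1e12, 1), round(C2a * 1e12, 1), round(fa / 1e6, 3))
# → 18.5 216.3 5.304
print(round(C1b * 1e12, 1), round(C2b * 1e12, 1), round(fb / 1e6, 3))
# → 6.7 432.7 6.227
```

The output matches the two component listings above, confirming the decoded design equations.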

We can now finally draw the complete circuit schematic diagram with
component values:

[Schematic of Fig. 7.2.22: the source s drives a unity gain buffer (UGB, R0 = 1 MΩ,
C0 = 12 pF), followed by three inverting MFB filter stages; the output of A3 goes to
the ADC. Component values: A1 (MFB-3): R14 = R13 = R11 = 220 Ω, R12 = 2042 Ω,
C11 = 25 pF, C12 = 275 pF, C13 = 103 pF; A2 (MFB-2): R23 = R21 = 220 Ω,
R22 = 1021 Ω, C21 = 18.5 pF, C22 = 216 pF; A3 (MFB-2): R33 = R31 = 220 Ω,
R32 = 1021 Ω, C31 = 6.8 pF, C32 = 433 pF.]

Fig. 7.2.22: The realization of the 7-pole analog filter for the ADC discussed in the
aliasing suppression example. The input signal is separated from the filter stages by a
high impedance (1 MΩ, 12 pF) unity gain buffer (UGB). The first amplifier stage A1
with a gain of 4.64 is combined with the third-order filter, the components are
calculated from the first three poles taken from the 7-pole analog part of the 13-pole
mixed-mode system. The following two second-order filter stages A2 and A3 have the
same gain as the first stage, whilst their components are calculated from the next two
complex conjugate pairs of poles from the same bunch of 7. All three amplifying stages
are inverting, so the final inversion must be done either at the signal display, the digital
filter routine, or simply by taking the 2’s complement of the ADC’s digital word.


7.2.10 Standardization of Component Values

More often than not, multi-stage filters will have awkward component values,
far from either of the closest preferred standard E12 or E24 values. In addition, the
greater the number of stages, the larger will be the ratio of maximal to minimal
values, forcing the use of components with very tight tolerances.
Fortunately we are not obliged to use exactly the calculated values; indeed, we
are free to adjust them, paying attention that each stage keeps its time constants as
calculated. That is, the capacitors will probably be difficult to obtain in E24 values, but
resistors are easily available in E48 and even E96 values, so we might select the
closest E12 values for the capacitors and then select the resistors from, say, E48.
Considering for example the first 2-pole stage (A2), we could use 18 pF for C21
(instead of 18.5); then C22 would be 210 pF (say, 180 and 30 pF in parallel), the
resistors R21 and R23 can be set to 226 Ω and R22 can be set to 1050 Ω.
The other two stages can be changed in a similar way. It is advisable not to
depart from the initial values too much, in order to keep the impedances close to the
driving capability of the amplifiers (remember that each amplifier has to drive both
the input impedance of the following stage and the impedance of its own feedback
network) and also to remain well above the influence of strays (low value capacitances
and the amplifier inverting inputs are the most sensitive in this respect).
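Whether a given set of standardized values is close enough can be checked by recomputing the two denominator coefficients of Eq. 7.2.41 and comparing them with the design targets. A small sketch (Python; using the substitute E-series values suggested above for stage A2):

```python
# Design targets for the A2 stage (4th/5th pole of p7a):
sum_target  = 2 * 2.3275e7                       # -(s1 + s2)   [rad/s]
prod_target = (2.3275**2 + 2.3850**2) * 1e14     # s1 * s2  [rad^2/s^2]

# Standardized (E-series) substitute values:
R1 = R3 = 226.0        # ohm
R2 = 1050.0            # ohm
C1 = 18e-12            # F  (feedback capacitor, was 18.5 pF)
C2 = 210e-12           # F  (grounded capacitor, was 216.3 pF)

# Denominator coefficients of Eq. 7.2.41:
sum_new  = (1.0/R1 + 1.0/R2 + 1.0/R3) / C2       # -(s1 + s2)
prod_new = 1.0 / (R1 * R2 * C1 * C2)             # s1 * s2

dev_sum  = abs(sum_new / sum_target - 1.0)
dev_prod = abs(prod_new / prod_target - 1.0)
print(round(100 * dev_sum, 2), round(100 * dev_prod, 2))   # deviations in %
# → 0.27 0.39
```

Both time-constant products stay within 0.4 % of the calculated targets, well inside ordinary component tolerances.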

7.2.11 Concluding Remarks

The initial design requirement for the procedure described is probably an


overkill, since we shall very rarely have the noise level or some other interfering
signal as high as the full ADC amplitude range at the Nyquist frequency limit or
above. If we are confident that the maximum disturbance level at the Nyquist
frequency will always be some 30 dB lower than the full range amplitude, we can
relax the filter specifications accordingly, either by having a less steep and less
complicated filter, or by raising the filter’s cut off frequency so that the attenuation at
the Nyquist frequency intersects the level 30 dB above the ADC resolution.
For the example above, the maximum input signal was 40 mV, so, in the case
of an interfering signal of, say, 1.3 mV (30 dB lower), our filter would need an attenuation
of about 42 dB at the Nyquist frequency. For the 7th-order analog part of the filter,
having an attenuation slope of 140 dB/decade, this would result in a nearly doubled
bandwidth (the digital part remains the same), but note that the alias spectrum can
now extend down to DC, since its source spectrum includes the sampling frequency.
Also, as we have seen at the beginning of this section, the sin(ωTs)/(ωTs) spectral
envelope allows us a further 12–13 dB at about 0.7 fs. In such a case an additional
filter stage, with zeros at fN and fs and at the third harmonic of fN (all on the imaginary
axis), such as an inverted (type II) Chebyshev filter, could be used to improve the
alias rejection at low frequencies without spoiling the upper roll off slope of the
frequency response by much, thus also preserving the smooth step response.
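The relaxed cut off placement follows from the asymptotic slope alone: 42 dB of attenuation is reached 42/140 = 0.3 decade above the cut off, so the cut off may sit at roughly half the Nyquist frequency. A one-line estimate (Python):

```python
# With a 140 dB/decade asymptotic roll off, the frequency ratio needed
# for a given stop-band attenuation is 10**(attenuation/slope):
slope = 140.0          # dB/decade (7th-order analog filter)
att = 42.0             # required attenuation at the Nyquist frequency, dB

ratio = 10.0 ** (att / slope)   # f_Nyquist / f_cutoff
print(round(ratio, 2))          # → 2.0 (cut off at about half of f_N)
```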


Résumé and Conclusion

We have shown only a small part of the vast possibilities of use offered by the
application of numerical routines, either for the purpose of system design or for
implementing them within the system’s digital signal processing.
Some additional information and a few examples can be found on the CD
included with the book, in the form of the ‘*.M’ files to be run within Matlab. Many of
those files were used to produce the various figures in the book and can be used as a
starting point for further routine development.
One of the problems of writing a book about a fast developing subject is that
by the time the writers have finished the editing, several years might have passed and
the book is no longer at the forefront of the technology’s development.
We have tried to prevent the book from becoming outdated too soon by
including all the necessary theory and covering the fundamental design principles
which are independent of technology, and thus of lasting value. We have also tried to
cover some of the most important steps in the development from a historical point of
view, to show how those theoretical concepts and design principles have been applied
in the past.
Although the importance of staying alert and following the new developments
and ideas is as high as ever, the knowledge of the theory and its past applications
helps us to identify more quickly the correct paths to follow and the aims to pursue.


References:
[7.1] J.N. Little and C.B. Moler, The MathWorks, Inc.:
PC-MATLAB For Students (containing disks with Matlab program)
Prentice-Hall, 1989
MATLAB-V For Students (containing CD with Matlab program)
Prentice-Hall, 1998
Web link : <http://www.mathworks.com/>
See also the books on electronics + Matlab at:
<http://www.mathworks.com/matlabcentral/link_exchange/MATLAB/Books/Electronics/>

[7.2] C.E. Shannon, Collected Papers,


IEEE Press, Cat. No.: PC 0331

See also:
<http://en.wikipedia.org/wiki/Nyquist-Shannon_interpolation_formula>

[7.3] A.V. Oppenheim and R. W. Schafer, Digital Signal Processing,


Prentice-Hall, 1975

[7.4] K. Azadet, Linear-phase, continuous-time video filters based on mixed A/D structure,
ECCTD 1993 - Circuit Theory and Design, pp. 73–78, Elsevier Scientific Publishing

[7.5] E. Margan, Anti-aliasing with mixed-mode filters,


Electronics World and Wireless World, June, 1995, pp. 513–518
<http://www.softcopy.co.uk/electronicsworld/>

Wideband Amplifiers Index

Index

Amplifiers negative feedback, voltage,


amplifier stages, basics, 3.9 5.95, 5.114
cascode non peaking, 3.37 negative feedback, current, 5.62–79
basic circuit analysis, 3.37–38 feedback (see error correction)
damping of U# emitter, 3.37–40 feedforward (see error correction)
input impedance, basic, 3.49 frequency response, definition,
compensation of U# , 3.46–47 2.14, 6.7–8, 6.21–26
step response and preshoot, 3.40 0T –Doubler, 3.75
thermal compensation of U" , 3.44 Gilbert multiplier, 5.123
thermal distortion of step signal, 3.43 four-quadrant, 5.127
thermal stability, gain control, continuous, 5.125–127
bias optimization, 3.44 introduction, 3.7
cascode differential, 3.70–71, 5.108 improving linearity of, 120
improved Darlington, 5.109–110 JFET source follower, 3.79
feedforward correction, 5.111 circuit analysis, 3.79–82
cascode emitter peaking, 3.49 capacitances, inter-electrode, 3.80
circuit analysis, basic, 3.49–52 envelope delay, 3.84–85
Bode plot of, 3.53 frequency response, 3.82–83
complex poles of, 3.53 magnitude, 3.83
input impedance irregularity, 3.50 considering input generator
input impedance compensation, resistance, 3.93
3.54–56 with inductive generator
poles, placement of, impedance, 3.90, 3.93
see complex poles input admittance, 3.92
thermal problems, 3.42–46 input impedance, 3.89–90
cascode folded, 3.68 input protection network of, 5.52
Cascomp, 5.112–114 against long term overdrive,
common base, 3.17 5.25–26
base emitter effective against static charges, 5.52–53
capacitance, 3.19 negative input conductance,
effective emitter resistance, 3.35 normalized, plot of, 3.91
input impedance, 3.18 compensation of, 3.92
transimpedance, 3.34 alternative compensation of, 3.94
input impedance, 3.34 overdrive recovery, 5.47
Miller capacitance, 3.33–34 phase response, 3.84
common emitter, 3.9 tendency to oscillate, 3.90, 3.93
circuit analysis, 3.9–15 similarity with Colpitts oscillator,
emitter resistance, 3.12 3.90
voltage gain, calculation of, 3.14–15 step response, 3.85–86
input impedance, 3.14 considering input generator
input pole, 3.15 resistance, 3.87
Miller capacitance, 3.13 MOSFET source follower, 5.48
CRT driver 3.10, 5.24 maximum amplitude range, 4.65
differential, 3.69 Miller capacitance, Miller effect, 3.13
circuit analysis of, 3.7–8 multistage, 4.9,
common mode operation, 3.70 two stage, inductively peaked, 5.10
differential mode operation, 3.70 optimum number of stages for
two stages example of, 5.9–10 minimum rise time, 4.17–19
drift correction of, 3.69, 5.106–107 negative feedback (see error correction)
simple, 5.43 non-peaking, multistage, DC coupled, 4.9
active, 5.45 decibels per octave, explanation of,
envelope delay/advance, 4.11
definition of, 2.20–21 envelope delay, 4.12–13
error correction, 5.94, 5.98, 5.104 frequency response, 4.9–10
feedforward, 5.96–100, 5.111–116 optimum single stage gain, 4.17–18
improved voltage feedback, 5.80 optimum number of stages, 4.17–19


phase response, 4.12 analysis of 7-pole (of 13-pole Bessel),


rise time calculation, 4.15 7.28
slew rate limit, 4.16–17 frequency response, 7.23, 7.30
step response, 4.13–15 step response, 7.24–25, 7.31
two-stages considering of 5.17–19 envelope delay improvement, 7.32
upper half power frequency, with zeros, 7.34
calculation of, 4.10 digital, using convolution 7.33
non-peaking multistage AC coupled, 4.21 equivalent 6-pole (of 13), 7.33
analysis of, 4.21 poles of, 6.29
frequency response, low frequency, impulse response, 7.33
4.22 mixed mode systems, 7.21
phase response, 4.23 bandwidth improvement, 7.30
step response, 4.24 13-pole Bessel system response in
operational amplifiers, 5.57 discrete samples, 7.33
clasical, voltage feedback, 5.57 poles of, compared with poles of the
high-speed analog only filter, 7.28
current feedback, 5.62 Fourier transform
voltage feedback, 5.80 Fourier series, 1.11–16
overdrive, 5.89–91 calculation of frequency components,
large signal non-linearity, 5.47 1.13–15
recovery, 5.89, 5.120 Fourier integral 1.18–20
phase delay/advance, definition of, 2.20 basic derivation of, 1.18–19
phase response, definition of, 2.21 introduction of complex
rise time, basic, definition of, 2.9 frequency =, 1.23
slew rate, limiting, maximum, limits of, 1.24
4.16–17, 5.61 Fast Fourier Transform (FFT) algorithm,
with separate LF and HF path, 5.45–46 see Numerical Methods
Attenuators Group advance, see Envelope advance
high resistance, 5.26 Group delay, see Envelope delay
capacitively compensated, 5.26–40 Inductive peaking circuits, 2.7
envelope delay, 5.30–31 introduction, 2.7–8
frequency response, 5.27–30 comparison of Butterworth (MFA)
hook effect, 5.41–42 frequency responses, 2.103
cause of, 5.41 diagrams, 2.104
compensation of, 5.42 comparison of Bessel (MFED) step
input generator resistance responses, 2.103
consideration 5.34 diagrams, 2.104
compensation of, 5.34–35 principle of , 2.9–11
loop inductances, 5.37–38 series 2-pole, 2.13
damping of, 5.39–41 bandwidth improvement, 2.26,
phase response, 5.30 Table 2.2.1
step response, 5.32–33 Bessel poles for MFED response,
low resistance, 5.55 2.16
driver stage for, 5.54 circuit analysis of, 2.13–14
electronic control, 5.122 critical damping, 2.17
Convolution, 1.37–39, 1.81–83, 7.7–13 envelope delay, principle of, 2.20
Coupling factor 5 , (see T-coil 2-pole) envelope delay, calculation of, 2.21–22
Current Mirrors, 3.72, 5.62–64, 5.69, 5.80, frequency response, 2.17
5.82, 5.83, 5.127 input impedance, 1.25–26
current source in the emitter circuit, 7–parameter, calculation of, 2.18
3.72–74 overshoot, 2.26, Table 2.2.1
current symmetry analysis, 3.72 rise time improvement ( r , 2.26,
improved current generator, analysis of, Table 2.2.1
3.74 phase response, calculation of, 2.19
Delay lines, 2.46–50, 5.24 rise time improvement,
Filters calculation of, 2.24
analog, 7.43 step response, calculation of, 1.76–77
analysis of MFB, 2-pole, 7.41 step response, double pole,
analysis of MFB, 3-pole, 7.37 calculation of, 2.23


comparison of parameters, 2.26 calculation of, 2.84


upper half power frequency =H , 8–parameter for MFED,
calculation of, 2.17–18 calculation of, 2,84
series 3-pole, 2.27 overshoot, 2.89, Table 2.8.1
circuit analysis of, 2.27–28 comparison of parameters, 2.89
bandwidth improvement ( b , 2.34, phase response, 2.86
Table 2.3.1 poles, calculation of, 2.85
comparison of parameters, 2.34, rise time improvement ( r , 2.89,
Table 2.3.1 Table 2.8.1
with Bessel poles (MFED), 2.29–30 step response, 2.88
with Butterworth poles (MFA), shunt–series, 2.91
2.28–29 bandwidth improvement ( b , 2.101,
envelope delay, calculation of , 2.32 Table 2.9.1
frequency response, Braude parameters, 2.96
calculation of, 2.28 circuit analysis of, 2.91–96
7–parameter for MFA, comparison of parameters, 2.101
calculation of, 2.29 envelope delay, 2.97
7–parameter for MFED, frequency response, 2.96
calculation of, 2.29–30 MFA, PS/EM parameters, 2.96
8–parameter for MFA, MFED, PS/EM parameters, 2.96
calculation of, 2.29 overshoot, 2.101, Table 2.9.1
8–parameter for MFED, phase response, 2.97
calculation of, 2.29 pole placement. 2.101
overshoot, 2.34 Table 2.3.1 rise time improvement ( r , 2.101,
phase response, 2.31 Table 2.9.1
pole placement, 2.34 Shea parameters, 2.96
rise time improvement ( r , 2.34, step response, 2.99–100
Table 2.3.1 T-coil 2-pole, 2.35
special parameters (SPEC), 2.31 all pass pole/zero placement, 2.40
table of main parameters, 2.26 bandwidth improvement ( b , 2.48,
step response, 2.32–33 Table 2.41
shunt 2-pole, 2.73 bridging capacitance Gb ,
bandwidth improvement ( b , 2.81, calculation of , 2.40
Table 2.7.1 circuit analysis of, 2.35–42
circuit analysis of, 2.73–74 coupling factor 5, calculation of, 2.41
comparison of parameters, 2.81 envelope delay, 2.43
envelope delay, 2.75 frequency response, 2.42
frequency response, 2.74 mutual inductance PM , analysis,
7–parameter for MFA, basic relations, 2.35–36
calculation of, 2.74 overshoot, 2.48, Table 2.4.1
7–parameter for MFED, phase response, 2.34
calculation of, 2.75 pole placement, 2.40
overshoot, 2.81, Table 2.7.1 rise time improvement ( r , 2.48,
phase response, 2.74–75 Table 2.41
pole placement, 2.81 step response, 2.45
rise time improvement ( r , 2.81, all pass, 2.46–47
Table 2.7.1 example of TV interconnections,
step response, 2.78–80 2.48–50
shunt 3-pole, 2.83 T-coil 3-pole, 2.51
bandwidth improvement ( b , 2.89, bandwidth improvement ( b , 2.59,
Table 2.8.1 Table 2.5.1
circuit analysis of, 2,83 capacitance ratio GÎGi
envelope delay, 2.86 for MFA, 2.54
frequency response, 2.86 for MFED, 2.53
7–parameter for MFA, for CD, 2.54
calculation of, 2.84 circuit analysis of, 2.51–54
7–parameter for MFED, comparison of parameters, 2.59
calculation of, 2.84 envelope delay, 2.56
8–parameter for MFA, frequency response, 2.54–55


low coupling cases, 2.58–59 consideration of base lead stray


coupling factor 5, 2.53 inductance, 3.66–67
8–parameter, calculation of, 2.52–53 consideration of collector to base
overshoot 2.59, Table 2.5.1 spread capacitance, 3.67
phase response, 2.55 consideration of U" input resistance,
rise time improvement ( r , 2.59, 3.65
Table 2.5.1 Table of, see Appendix 2.4 (on the CD)
step response, 2.57–58 Laplace transform, Introduction to, 1.5–6
trigonometric relations, 2.52 Laplace transform, direct, 1.23
variation of parameters, sinusoidal function, 1.7–8
frequency response, 2.59–61 three different ways of expression, 1.7
step response, 2.59–61 derivation of, 1.23
L+T peaking, 2.63 examples of, 1.25
bandwidth improvement ( b , 2.71, unit step function, 1.25
Table 2.6.1 exponential function, 1.26
bridging capacitance Gb , sinusoidal function, 1.26–27
calculation of, 2.65 cosine function, 1.27
capacitance ratio Gb ÎG , 2.71, damped oscillations, 1.27–28
Table 2.6.1 linear ramp, 1.28
capacitance ratio GÎGi , 2.71, function >8 , 1.28–29
Table 2.6.1 function >e5" > , 1.29–30
characteristic circle diameter ratio, function >8 e5" > , 1.29–30
2.63 Table of direct transforms, 1.30
comparison of main parameters, 2.71, properties of, 1.31
Table 2.6.1 convolution, calculation of, 1.37–39
coupling factor 5, calculation of, 2.65 convolution, example of, 1.81
envelope delay, 2.69 convolution, graphical
frequency response, 2.66–67 presentation of, 1.82
7–parameter, calculation of, 2.65 final value, 1.37–38
8–parameter, calculation of, 2.64 impulse $a>b, 1.35–36
overshoot, 2.71, Table 2.6.1 initial value, 1.37
phase response, 2.68 linearity (1), 1.31
rise time improvement ( b , 2.71, linearity (2), 1.31
Table 2.6.1 real differentiation, 1.31–32
step response, 2.69–71 real integration, 1.32–35
MFA, formula of, 2.7 change of scale, 1.34–35
MFED, formula of, 2.70 applications to
group A, formula of, 2.70 capacitance, 1.41–42
group C, formula of, 2.70 inductance, 1.41
Chebyshev with 0.05° phase parallel VG , 1.42–43
ripple, formula of, 2.71 resistance, 1.42
Gaussian to 12 dB, formula of, Complex line integrals, 1.45
2.71 definition, 1.45
double 2nd -order Bessel pole examples of, 1.49–51
pairs, formula of, 2.71 Table of complex vs. real integrals,
trigonometric relations, 2.63–65 1.48
misaligned T-coil parameters, Contour integral
step responses of, 2.105 around the pole at D œ !, 1.50–51
T-coil construction, 2.105 encircling a regular domain, 1.53
coupling factor 5 diagram of, 2.106 encircling an irregular domain, 1.53,
T-coil BJT inter stage coupling, 3.57 1.55, 1.65–72
bridging capacitance Gb , 3.60 encircling a single pole at D œ +, 1.53,
circuit analysis, 3.57–60 1.71–72
coupling factor 5, 3.60 Cauchy’s way of expressing analytic
envelope delay, 3.62 functions, 1.55–56
frequency response, 3.61 Residues of function with many poles,
mutual inductance QP , 3.59 1.57
phase response, 3.62 calculation of 1.58–59
step response, 3.64 Residues of function with multiple poles,


1.61–62 BUTTAP routine, 6.16


Laurent series, 1.61 optimized pole families, 6.13
examples of, 1.63 step response
Complex integration around many poles, as a time integral of the impulse
1.65, 1.69, response, 6.45–50
Cauchy–Goursat Theorem, 1.65–66 directly as the sum of residues, 6.57
Equality of the integrals ∫F(s)eˢᵗ ds
taken from −j∞ to +j∞ along different paths, 1.67
-4_ Butterworth (MFA), 4.27, 6.15
Laplace transform, inverse, 1.72 calculation routine (BUTTAP), 6.16
application of, 1.73–79 envelope delay, 4.33
VPG circuit impulse response, derivation of. 4.27–30
calculation of 1.73–76 frequency response, 4.31–32
VPG circuit step response, ideal MFA, frequency response,
calculation of, 1.76–78 4.36–37
VPG circuit step response, graphical Paley–Wiener Criterion, 4.37
presentation of, 1.79 phase response, 4.32
convolution, graphical Table of, 4.35
presentation of, 1.82 step response, 4.34
convolution, calculation of, Bessel (MFED), 4.39, 6.17
1.37–39, 7.12–13 calculation routine (BESTAP), 6.19
Mutual inductance, derivation of, 4.39–40
see Inductive peaking, T-coil 2-pole envelope delay, 4.45
Numerical analysis frequency response, 4.42
aliasing of sampled signals, 7.17 Gaussian freq. resp. vs. 5-pole
alias spectrum, 7.26, 7.30, 7.35 Bessel, comparison of, 4.49
convolution, 7.7 phase response, 4.43–44
examples of, 7.9–13 with linear frequency scale, 4.44
as digital filtering, 7.33 Table of, 4,48
VCON routine, 7.8 step response, 4.45–46
discrete signal representation in time Gaussian step resp. vs. 5-pole
and frequency domain, 6.37 Bessel, comparison of, 4.49–50
envelope delay, 6.31 staggered poles, 4.63–64
Fast Fourier Transform (FFT) algorithm, repeated complex pole pairs,
6.40–41 4.63–64
frequency response Bessel (MFED) normalized to equal
Bode plots, 6.26, 6.27 cut off frequency, 4.51
complex domain {J a=b}, 6.24 envelope delay response, 4.53
imaginary domain {J a4=b}, 6.25 frequency response, 4.51
magnitude Q a=b œ kJ a=bk, 6.22 phase response, 4.52
Nyquist diagram, 6.25 Table of, 4.54
impulse response calculation, 6.36 step response, 4.53
ATDR routine, 6.59 comparison of 5th -order Butterworth and
from complex frequency response, 5th -order Bessel, 6.61
using FFT, 6.36 interpolation between MFA and MFED
from residues, 6.57 poles, 4.55
normalization practical interpolation procedure,
amplitude, ideal, 6.44 4.56–57
amplitude, unity gain, 6.46 derivation of modified Bessel poles,
time scale, 6.46 4.55
TRESP routine, 6.49 envelope delay, example of. 4.61
Matlab command syntax and terminology, frequency response, example of, 4.59
6.11 interpolation procedure, 4.56
phase response, 6.28 practical example of, 4.59–62
poles step response, example of, 4.62
Bessel–Thomson, 6.17 Table of modified Bessel poles, 4.58
BESTAP routine, 6.19 MFA, Maximally Flat Amplitude,
Butterworth, 6.15 see Butterworth


MFED, Maximally Flat Envelope Delay, Transistor MOSFET, 5.50


see Bessel Spectrum
optimized system families, 6.13 discrete
Time advance, see Envelope advance complex, bilateral, 1.9–13
Time delay, see Envelope delay real, unilateral, 1.14
Transistor BJT of a square wave, 1.11–12, 1.14
as an impedance converter, 3.17 of timely spaced square waves,
from base to emitter, general 1.17–18
analysis, 3.20 sampled, 6.37
from emitter to base, general continuous
analysis, 3.17–19, 3.20–21 of a single square wave, 1.19–20
Table of, 3.25 from discrete to continuous, 1.18
common base small signal from Fourier integral to direct Laplace
model, 3.20 transform, 1.23
Early voltage of, 3.46 frequency limited, Gibbs’ phenomenon,
conversions of impedances, 3.25, 1.16
Table 3.2.1 spectral filtering
examples of impedance conversion, multiplication in frequency domain
3.21 equals convolution in time domain,
capacitor in the emitter circuit, 1.83, 7.13
3.22–23 anti-alias filters, 7.21
inductance in the base circuit, Signal
3.23–24 Gibbs’ phenomenon, 1.16, 4.36–37, 6.51
transform of combined impedances, 3.26 impulse $a>b (Dirac function), 1.35
as seen from base, 3.27–28 ramp, 1.28
example of, 3.28 saw-tooth, 1.13
as seen from emitter, 3.26 step 2a>b (Heaviside function), 1.25
example of, 3.27 sine wave, 1.7–8, 1.26–27
complete input impedance, 3.26, 3.28 sine wave, damped, 1.27–28
thermal stability, optimum bias, 3.44 square wave, 1.11–12
Transistor JFET (see JFET source follower)

