AC Theory
AC electrical theory
An introduction to phasors, impedance and admittance, with emphasis on radio frequencies. By David W. Knight* Version 0.09. 23rd October 2010. © D. W. Knight, 2005 - 2010.
* Ottery St Mary, Devon, England. The most recent version of this and associated documents can be obtained from the author's website: http://www.g3ynh.info/
Table of Contents
Preface
1. Field electricity
2. Circuit analysis overview
3. Basic electrical formulae
4. Resonance
5. Impedance, Resistance, Reactance
6. Vectors & Scalars
7. Balanced Vector Equations
8. Phasors
9. Voltage Magnification & Q
10. Power Factor & Scalar Product
11. Phasor dot product
12. Complex Numbers
13. Complex arithmetic
14. Impedances in Parallel
15. Parallel resonance
16. Dynamic Resistance
17. Double-slash notation
18. Parallel-to-Series transformation
19. Series-to-Parallel transformation
20. Parallel resonator in parallel form
21. Imaginary resonance
22. Phase analysis
23. Resistance tuned LC resonator?
24. Phasor theorems
25. Generalisation of Ohm's Law
26. General statement of Joule's Law
27. Bandwidth
28. deciBels & logarithms
29. Bandwidth of a series resonator
30. Logarithmic frequency
31. A proper definition for resonant Q
32. Bandwidth in terms of Q
33. Lorentzian line-shape function
34. Maximum power transfer
35. The potential divider
36. Output impedance of potential divider
37. Thévenin's Theorem
38. Measuring source resistance
39. Error analysis
40. Antenna system Q
41. Basic impedance transformer
42. Auto transformers
43. Prototype Z-matching network
44. Admittance, conductance, susceptance
45. Parallel resonator BPF
46. Unloaded Q of parallel resonator
47. Current magnification
48. Controlling loaded Q
Preface
This document provides an introduction to the subject of AC circuit analysis, with particular emphasis on radio-frequency applications. It is based on an article which accompanied a collection of writings on the subject of radio-frequency impedance matching and measurement; which were first made available via the Internet in 2005, and were grouped under the working title "From Transmitter to Antenna". Its purpose was (and still is) to widen the audience for the other articles by providing essential background material, but it can just as well be read by those who have a more general interest. The approach adopted is that of starting with the basic laws of DC electricity and expanding them to deal with AC. The modified laws are then used to derive and explore results which are normally accepted without proof, thereby explaining the origins of various standard formulae and demonstrating the general method by which linear circuit design equations are obtained. The level of treatment is one which does not demand a high level of mathematical skill at the outset; because the required techniques are introduced as the narrative progresses. Hence the discussion should be accessible to anyone who has some knowledge of basic algebra and is reasonably familiar with circuit diagrams and electrical terminology. Apart from providing a conventional introduction to AC theory however, there is also a subtext. This relates to the author's concerns as a scientist: one being that there appears to be an almost universal public misconception regarding the nature of electricity; and the other being a lack of mathematical rigour in the way in which phasor techniques are commonly used. Both of these issues can (and should) be addressed at this stage in the development of working knowledge, and so the accompanying discussion attempts to do that. As all experienced engineers and physicists know, our understanding of electricity comes from Maxwell's equations. 
The problem for those who wish to teach electrical subjects however, is that the electromagnetic field approach requires advanced mathematics and does not lead directly to the practicalities of circuit design. Hence it is sensible to hold back on the more abstract ideas until they become unavoidable; but that leaves the problem of how to dispel the notion that electricity is synonymous with electrons flowing through wires. The author's solution is to provide an extended preamble; which gives a purely qualitative explanation of electricity in terms of fields; and is intended to leave the reader with the same mental picture as will be held by those who are familiar with Maxwell's theory. On the matter of mathematical rigour; it is not the intention to wrap the subject in formalism, but merely to eliminate certain bad practices. To this end, we pay particular attention to the definitions and properties of the mathematical objects involved, and develop a way of working which identifies and preserves the algebraic signs of the circuit parameters. The result is an internally-consistent theory of circuits, which produces results which are correct in both magnitude and phase. In this way, we eliminate the need for the so-called 'physical considerations' traditionally used to resolve ambiguities; and we discover which of the two possible definitions of admittance is actually the correct one. David Knight. November 2009.
1. Field electricity
When AC theory is introduced, and especially when there is a bias towards radio frequencies, the very first new idea required (by many people at least) is a correct understanding of the word 'electricity'. The teaching of basic science often involves what are known as 'lies to children', and the one about electricity being "electrons flowing through wires" is an intellectual dead-end. Electricity is actually an invisible form of light. Specifically, it is electromagnetic energy of very long wavelength (in comparison to visible light); which is why we can build devices called 'radio transmitters' which cause electricity to propagate off into space. Hence we will never understand high-frequency electricity by counting electrons; and we must first refine our ideas of voltage and current by thinking about certain mysterious entities known as 'fields'. The term 'field' has a vernacular meaning: "region of influence", and this is the route by which it came into the language of physics. In scientific parlance however, a field is more rigorously defined as 'a quantity which can take on different values, and possibly also different directions of maximum action, at different points in space and time'. The geometric field idea can be used (say) to describe the 3D temperature gradient around a hot object, or the average velocities of molecules in a flowing liquid; but the fields which are the most perplexing, and which ultimately reveal the deepest secrets of the Universe, are those which appear to produce action at a distance. Of these so-called 'force fields'; the gravitational, electric and magnetic are the most familiar; and of course, it is the latter two which concern us here. The beginning of what is loosely called 'modern physics' can be traced to a single deduction made by James Clerk Maxwell in the latter part of the 19th Century. 
Maxwell collected the details of every known scientific result concerning electricity and magnetism, and lent his phenomenal mathematical skill to the problem of finding a single theory. This led him to discover inconsistencies in the laws of induction (i.e., those laws which govern the effects of time-varying electric and magnetic fields) which would nowadays be interpreted as violations of the principle of conservation of charge. Electrons had not been discovered at the time, and electric charge was thought to be some kind of fluid; but whatever it was, the physical ideas of the day did not permit it to disappear from one place and reappear in another. He fixed the problem by taking a bold and unprecedented step; which was to postulate the existence of a new kind of electric current, not associated with flowing charges, which he called 'displacement current'. Speculation is one thing; but Maxwell had a test for his theory. With the inclusion of displacement current, the modified laws still allow the existence of electricity when all of the terms relating to physical matter are deleted. This 'free electricity' has to be in the form of an oscillating electric field combined with an oscillating magnetic field, with the directions of action of the fields disposed at right-angles. It had turned out that the fields do not represent mysterious action at a distance after all (although it took some years before that point was fully accepted). They are instead stores of and agents for the transfer of pure energy. The liberation of electricity is however subject to a strict condition; which is that the energy exists by virtue of continuous transfer between the two fields according to the laws of induction; i.e., a decaying electric field gives rise to a magnetic field, and a decaying magnetic field gives rise to an electric field; and so the energy swaps backwards and forwards. 
The mechanism only works if the energy is propagating through space in a direction at right-angles to the crossed fields with a velocity given by the expression: v = 1/√(με) where μ (Greek lower case "mu") is the magnetic permeability and ε ("epsilon") is the electric permittivity of the surrounding medium. Permeability is a constant of proportionality obtained from the force of magnetic attraction or repulsion which occurs between wires carrying an electric current. Permittivity is a constant obtained from the relationship between the physical dimensions of a capacitor and its capacitance. Maxwell found that the best available measurements for the permeability and permittivity of vacuum, μ₀ and ε₀ ("mu nought" and "epsilon nought"), gave a propagation velocity for free energy,
c = 1/√(μ₀ε₀), which turned out to be the same as the speed of light. Thus he was able to confirm a suggestion put forward by Michael Faraday some years before, which is that light is composed of electromagnetic waves. Maxwell had also shown, of course, that electrical energy is a form of light; and that older ideas derived from DC experiments were no longer tenable. Maxwell died in 1879 at the age of 46, only six years after the publication of his great treatise on electricity and magnetism. Thus it was left to others to explore the ramifications of his work. In the latter part of the 19th Century, there were two great interpreters of Maxwell's electromagnetic theory: Oliver Heaviside and Heinrich Hertz; both of whom were brilliant mathematicians in their own right. These two scientists independently cleared-up Maxwell's notation and reduced a nest of algebraic clutter to a set of four equations which describe the fields. The four 'Maxwell's equations' which we know today are actually a variant of the form preferred by Heaviside (extra terms, which are zero for the Universe in its present state, are nowadays usually deleted). The climax of Hertz's work was the creation and detection of Maxwellian waves under laboratory conditions1; which means that Hertz is the father of radio telecommunications, and also the inventor of the first radio antennas. His clarification of Maxwell's theory was also the basis of the work of one Albert Einstein, a Bern patent examiner with a habit of daydreaming about objects in relative motion. Einstein realised that Maxwell's separation of light and matter implies that the speed of light is constant regardless of any motion on the part of the observer. This led to the Special and General Theories of Relativity, which overturned all 19th Century notions of space and time. He also gave us the explicit unification of electricity and magnetism, by showing that electromagnetic induction is a relativistic phenomenon. 
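Maxwell's numerical check can be reproduced in a few lines. The sketch below evaluates c = 1/√(μ₀ε₀) using modern reference values for the vacuum constants (standing in for the best measurements available in Maxwell's day):

```python
import math

# Reference values for the constants of free space (modern measurements).
mu0 = 4 * math.pi * 1e-7       # permeability of vacuum, henries per metre
eps0 = 8.8541878128e-12        # permittivity of vacuum, farads per metre

# Propagation velocity of free electromagnetic energy: c = 1/sqrt(mu0 * eps0)
c = 1 / math.sqrt(mu0 * eps0)

print(f"c = {c:.4e} m/s")      # close to 2.9979e8 m/s, the speed of light
```

The agreement with the measured speed of light was Maxwell's evidence that light itself is electromagnetic.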
Most readers will be aware that an electro-mechanical generator works by moving a coil of wire relative to a strong magnetic field. The changing magnetic field (as seen from the coil's viewpoint) gives rise to an electric field, which manifests itself as a voltage across the ends of the coil. Einstein tells us that the magnetic field does not so much create an electric field; it is an electric field when seen from a moving frame of reference. Likewise, an electric field is a magnetic field when viewed by a moving observer. This means that generators (and by a converse principle, electric motors) make use of relativistic effects when they convert energy between its electrical and mechanical forms. Heaviside's extended version of Maxwell's equations was background to the work of Paul Dirac, who later went on to predict the existence of anti-matter. Heaviside's most important work however was carried out before the advent of radio as a technology, and was primarily related to the problems of long-distance electrical communication (telegraphs and telephones). It is Heaviside who gave us the correct picture of electricity, by way of another corollary of Maxwell's theory called 'the principle of continuity of energy' (not to be confused with the principle of conservation of energy). The principle of continuity dictates that energy cannot simply disappear from one location and reappear in another, it must, in some sense, make a journey. This, incidentally, is not the same as imagining that energy follows a specific route; because we can only explain phenomena such as optical diffraction (and remain consistent with quantum theory) if we allow that even the very smallest quantity of energy can follow a multiplicity of paths during flight. Nevertheless, it retains a form of integrity (it is conserved), which means that electric and magnetic fields from different energy sources cannot combine to make electromagnetic radiation. 
It is intriguing to note that, were such combination possible, every stray field would interact and the Universe would explode, perhaps to expand to a state in which such interaction can no longer occur. The continuity principle allows us to break reality down into separate energy transfer processes; and so without it, we would not be able to understand the Universe. On a more immediate level however, it tells us exactly how the energy flows in an electrical circuit, and indeed, how it flows into space from a radio antenna.
1 Hertz, the Discoverer of Electric Waves. Julian Blanchard, Bell System Technical Journal, July 1938, Vol. 17, No. 3, p326 - 337. [Available from http://bstj.bell-labs.com/ ]
The explanation for the principle of continuity comes, once again, from the work of Einstein, this time from his investigation of the photoelectric effect. It also follows logically from Maxwell's equations; the issue being that there must be a reason for the rate (frequency) at which energy swaps between the fields in a propagating electromagnetic wave. On the assumption that energy is the simplest thing in the Universe, the only possible governing factor is the amount of energy being transported. Hence it was bound to be discovered that light is made up of 'particles' (i.e., discrete units, not solid objects), each oscillating at a frequency dictated by the amount of energy it contains. The particles, of course, are nowadays called 'photons', and the relationship between energy and frequency is known to be a direct proportionality: E = hf where h is Planck's constant, and has a value of 6.62606896×10⁻³⁴ Joule seconds. This introduces another strange concept, known as 'wave-particle duality'; which is sometimes claimed to be a paradox but is actually nothing of the sort. For a glib explanation, we can say that it would only be paradoxical if a batch of propagating energy was not made up of discrete units, because then the energy would have no way of knowing its own frequency and so would not be able to form a wave. For a more formal way of thinking about this issue however; note that an electromagnetic wave is defined in relation to a route through a field. The wave-like nature of the energy flow is detected by inserting probes (measuring devices) into the field and building up a picture by intercepting photons. From this we infer that photons travel as waves, even though we can only discern that by adding together the small packets of energy delivered by them. This, incidentally, raises a general point in relation to scientific observation; which is that there are no paradoxes in nature. 
Paradoxes exist only in the mind of the observer, and result from attempts to interpret information using faulty starting assumptions. When we detect waves, we do so on the assumption that our probes measure field strength. This is very convenient, because it allows us to solve problems using field theory; but when the meaning of an observation is in doubt, or when discrepancies begin to accrue, it is important to remember that all measurements are ultimately dependent on what can be inferred from the absorption and emission of energy. We rarely need to think about the granularity of light when working at electrical frequencies, because the amount of energy in each photon is exceedingly small. The wave property is instead the dominant feature, and is often reinforced by a behaviour called 'coherence'; which is the ability of identical photons traversing the same set of paths to synchronise their fields. In this way, we see smooth waves in the collective behaviour of many particles. Heaviside knew nothing of this, but the photon theory explains his continuity principle by identifying the energy carrier. The fields extend throughout space, because they represent the propensity to exchange energy, and the photons can turn up anywhere that the fields have finite intensity; but the photons themselves undergo no additional exchange processes during transit. Our ideas on the flow of electromagnetic energy are nowadays associated with John Henry Poynting, who formalised the continuity principle in a theorem which bears his name. Heaviside however, had already been using the principle for some time, and had a far more elegant derivation lodged with his publisher at the time of Poynting's first public presentation. Poynting, moreover, did not interpret his findings correctly, whereas Heaviside had no such trouble; and so it was the latter who first described the underlying mechanism2. 
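The claim that photon granularity is negligible at electrical frequencies is easy to quantify using E = hf. The sketch below is illustrative only: the 1 MHz, 540 THz and 100 W figures are assumptions chosen for the example, not values from the text.

```python
h = 6.62606896e-34   # Planck's constant, joule seconds (value given in the text)

def photon_energy(frequency_hz):
    """E = h * f : energy carried by a single photon of frequency f."""
    return h * frequency_hz

e_radio = photon_energy(1e6)      # a 1 MHz radio-frequency photon (illustrative)
e_green = photon_energy(5.4e14)   # a visible-light photon, ~540 THz (illustrative)

# Even a modest 100 W transmitter at 1 MHz emits on the order of 1.5e29
# photons per second, which is why the granularity is undetectable and
# only the smooth collective wave is observed.
photons_per_second = 100.0 / e_radio
```

The radio photon carries roughly eight orders of magnitude less energy than the visible-light photon, and the photon flux at any practical power level is astronomically large.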
What follows may come as a shock to those who have been taught the 'lies-to-children' version. It turns out that the inside of a good conductor is the one part of a circuit where the transmission of electrical energy does not take place. We can understand Heaviside's explanation by using Faraday's "lines of force", which provide a way of visualising the electric and magnetic fields. Some field patterns relevant to the workings of circuits are shown below:
2 Oliver Heaviside, Paul J Nahin. 2nd edition (paperback). Johns Hopkins University Press 2002. ISBN 0-8018-6909-9. Ch. 7, Tech note 3, p129-131.
The left-hand diagram represents an electric field as it might exist between two charged spheres, or between two electrical conductors seen in cross-section. Recall that like charges repel, and opposite charges attract; and so a positively charged particle will be repelled by the (+) electrode and attracted to the (-) electrode. Hence, depending on the starting point, the arrows show the direction in which a positive charge will be accelerated, and the lines show the path which will be followed. There are, of course, an infinite number of possible starting positions, and so the field has an infinite number of lines; but a sparse representation is sufficient to give the general idea. The curvature of the lines arises because the mutual force between pairs of charged bodies is governed by an inverse-square law, i.e., the attraction or repulsion is strong when the bodies are close, but falls off rapidly with distance. Hence a particle close to one electrode will have a trajectory almost perpendicular to the surface, but the field line becomes curved further away because the particle is then influenced by both electrodes. The middle diagram shows the lines of the magnetic field surrounding a wire when a current of positive charges is flowing away from the observer. The wire is shown in cross section, and the cross within its boundary represents the flow direction as the tail fins of a receding dart. The right-hand diagram shows the field when the current is flowing towards the observer; the dot being interpreted as the point of an approaching dart. Notice that we use a convention established before the discovery of the electron (by J J Thomson in 1897); which is that current flows from (+) to (-). Electrons flow the other way; but the continued use of conventional current makes no difference to the theory and serves to preserve the intelligibility of past scientific literature. 
In the case of a magnetic field, the arrows drawn on the lines of force show the direction in which a compass will point when placed in proximity to the wire (presuming that the current is large enough to overcome the Earth's magnetic field). The clockwise 'rotation' of the field lines when the (conventional) current is flowing away is known as 'Maxwell's corkscrew rule'. This rule derives from the convention that the field lines around a bar magnet (or compass needle) emerge from the North-seeking pole and return to the South-seeking pole. Magnetic bodies repel when their force lines are in opposition (North pole to North pole), and attract when their force-lines point in the same direction (North pole to South pole). Hence a compass needle is repelled by the field lines coming towards it and attracted to the field lines going away from it. The Maxwellian fields have the same geometric properties as Faraday lines; and so we can now forget about forces on charged bodies and magnets and think about electromagnetic energy. Recall that Maxwell discovered that light travels with its electric field oscillating at right angles to its magnetic field, the direction of propagation being at right angles to both fields. Heaviside and Poynting now tell us that the transport of energy in electrical circuits occurs in exactly the same way. An arbitrarily chosen point in an electric or magnetic field has both intensity (i.e., magnitude, or strength) and a direction of action. Such points are known as 'vectors', and a region of space filled with vectors is called a 'vector field'. There is also a mathematical operation called 'vector multiplication', which can be applied at points where two fields cross to produce a new vector at right-angles to the original two. 
Notional directions were assigned to the electric and magnetic vectors in the discussion above; and the adopted convention is, of course, one which gives the correct direction of energy transport when the fields are combined using the vector product (or
"cross product", as it is also known). The electric field is usually given the symbol E, and the magnetic field the symbol H. The cross product is then written: P = E × H where P is known as the 'Poynting vector', and gives the intensity and average direction of the energy flow at some point in the combined electric and magnetic (i.e., electromagnetic) field. Now consider the electromagnetic field around a wire in a circuit (as shown on the right). Electric field lines emerge perpendicular to the surface, magnetic field lines encircle the conduction current; and there are an infinite number of points at which they cross at right angles. Thus, assuming that the E and H fields are related in accordance with the continuity principle, we can work out the direction in which energy is travelling (and also the rate of flow) at any location. The key to the right of the diagram gives the direction of the Poynting vector in relation to the E and H fields. It follows that if the electric field is strictly perpendicular to the wire, then the Poynting vector lies parallel to the wire; and the fields as they are depicted have it running away from the observer. Notice also, that all of the propagating energy is on the outside of the wire (albeit in greatest concentration close to the surface where the magnetic field is strongest); and it transpires that if the wire is a perfect conductor, there is none on the inside at all. It requires both an electric field and a magnetic field for the transportation of energy. A perfect conductor, however, is a material which, by definition, cannot sustain an electric field. This can be understood by noting that the electrical resistance between any two points within the body of a perfectly conducting object is zero, in which case there can be no voltage difference and so no electric field. Hence electrical energy cannot flow inside a good conductor. 
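The right-angle geometry of P = E × H can be verified with a toy calculation. The field magnitudes below are illustrative assumptions, not values from the text: an E field along x crossed with an H field along y gives an energy flow along z, and reversing both fields at once leaves that direction unchanged.

```python
def cross(a, b):
    """Vector (cross) product of two 3-vectors given as (x, y, z) tuples."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

E = (2.0, 0.0, 0.0)    # electric field along x (V/m, arbitrary magnitude)
H = (0.0, 3.0, 0.0)    # magnetic field along y (A/m, arbitrary magnitude)

P = cross(E, H)        # Poynting vector: (0, 0, 6) -> energy flows along +z

# Negating BOTH fields, as happens when the driving current reverses,
# leaves the Poynting vector unchanged: (-E) x (-H) = E x H.
P_reversed = cross((-E[0], -E[1], -E[2]), (-H[0], -H[1], -H[2]))
```

The second result is the mathematical root of an observation made later in the text: swapping the polarity of the supply does not reverse the direction of energy flow into a resistance.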
This understanding, incidentally, gives rise to a semantic difficulty regarding whether there is a difference between 'electricity' and 'electrical energy'. It is hard to justify the preservation of different meanings for the two terms, and yet people will persist in saying that electricity "flows through" conductors. We can sidestep the issue by saying that electricity flows along the wires, but that does little to rectify the basic misconception. The general consensus now seems to be that unqualified use of the word 'electricity' should be avoided altogether in any rigorous scientific context. The electricity for which the utility company demands payment however, is definitely of the Heaviside, rather than the electrons in wires, variety. Now that we have established the location of the electrical energy; it must be added that a small amount does flow into (but not through) practical conductors. This is because metal (presuming that the temperature is too high for it to be superconducting) always has some resistance. The inflowing energy is, of course, lost from the fields and converted into heat. The mechanism of energy delivery can be understood, once again, using the Poynting vector; and it explains not only unwanted losses, but also what happens in relation to devices which are deliberately made resistive so that they can absorb large amounts of energy. We start by imagining a small particle of resistance, such as an infinitesimal resistive region in an otherwise perfectly conducting wire. We know of course, that resistance is distributed throughout conducting materials; but the continuity principle allows us to break energy transfer processes down into separate components, which can later be combined to give the overall picture. When a current flows along a conductor, a resistive region gives rise to a voltage drop or 'potential difference'. Hence an electric field exists between a point just upstream and a point just downstream of the obstacle. 
The diagram on the right shows the interaction between the magnetic field encircling the wire and a single electric field line (other lines are left to the imagination).
Using the sense of the Poynting vector given earlier, we see that energy flows into the resistance from both the upstream and downstream sides. Now, mentally, rotate the diagram about the wire axis and it becomes a spherical wave-front converging onto the point resistance. A further important observation arises when we consider what happens when the direction of the current is reversed. In that case, the directions of the electric and the magnetic vectors are both reversed, and so energy continues to flow into the resistance. The fields explain the strange fact that the direction of energy flow cannot be reversed by swapping the polarity of the power supply. It can be reversed however, by replacing the resistance with a device which is a source of energy (a battery or generator) or by devices which can store energy (inductors and capacitors); this being a matter which we will explore in detail later. When we think of the Poynting vector in relation to a complete electrical system, we are really thinking of the average of a large number of microscopic energy transfer processes. In the case of a simple circuit consisting only of a generator and a resistive load; the Poynting vector is directed along the wires, from generator to load; and the direction is the same on both sides of the generator. Should we examine the average energy flow close to a wire however, we will see that the direction is tilted very slightly towards the wire, on account of the distributed resistive losses. So now we have the basic field picture of electricity, but there remain a few issues which need to be explained. In particular, we need to look again at electric current, and the matter of why it is defined in terms of moving charges. 
As it says in every school physics textbook, an Ampere is a current of one Coulomb per second; and since the charge of an electron is −1.6021892×10⁻¹⁹ Coulombs, an Amp flowing from (+) to (-) corresponds to 1/(1.6021892×10⁻¹⁹) = 6.241460122×10¹⁸ electrons per second flowing from (-) to (+). This is correct (assuming that the current is due to electrons), but only in the special limiting case when the frequency of the electromagnetic energy being transferred tends to zero. In other words; it is only strictly true for DC electricity. As the frequency increases, the correspondence between conduction current and effective current becomes progressively less accurate; which is why Maxwell invented displacement current. The role of the electrons in conduction is actually an optical one. They interact with the electromagnetic field in such a way as to increase the amount of energy which can be stored in the region of space immediately surrounding the conductor. This creates a duct through which the energy prefers to flow; in a manner analogous to the way in which mirages are sometimes seen in hot deserts, and over-the-horizon VHF radio communication becomes possible on hot days. Ducting occurs when the refractive index of the medium increases with distance from the surface, i.e., a light ray which tries to move away is bent towards the parallel direction. The existence of electrons was hypothesised, and indeed the name was coined, some time before J J Thomson identified them as the current-carriers in cathode-ray tubes (1897). It must then have seemed to many that the electrical fluid theory was confirmed; but the discovery was actually its nemesis. Upon estimating the number of free electrons in a given volume (say) of copper, it turns out that the average velocity of propagation of an electron current through a solid medium is of the order of a few millimetres per second. 
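Both figures in the paragraph above are easy to reproduce. The sketch below checks the electrons-per-second count for one ampere, and then makes a rough drift-velocity estimate; the copper electron density, current and wire cross-section used for the second calculation are assumed textbook values, not taken from this article.

```python
q_e = 1.6021892e-19   # electron charge magnitude, coulombs (value used in the text)

# One ampere is one coulomb per second, so a purely electronic DC current
# of 1 A corresponds to this many electrons passing a point per second:
electrons_per_second = 1.0 / q_e          # ~6.241e18, as quoted in the text

# Rough drift-velocity estimate, v = I / (n * q * A), for an assumed
# copper wire.  All three inputs below are illustrative assumptions.
n = 8.5e28            # free electrons per cubic metre in copper (assumed)
current = 10.0        # amperes (assumed)
area = 0.5e-6         # wire cross-section of 0.5 square millimetres (assumed)

drift_velocity = current / (n * q_e * area)   # metres per second
# Comes out at roughly 1.5e-3 m/s: millimetres per second, supporting the
# text's point that the electrons themselves crawl while energy moves at
# nearly the speed of light in the surrounding field.
```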
The mass of the electron is also so small that the amount of energy transferred by collisions with the atoms of the conductor comes nowhere near to the amount transferred by the electromagnetic field. In fact, the collision energy is merely the resistive loss which occurs in imperfect conductors. The electrons do not carry the electrical energy, but they do extract a small tax for the service they provide in guiding it along the outside surfaces. So how are we now to think of electric current? The current which conveys energy would seem to be best described as a displacement current, at least in the sense that it is not carried by electrons. It merely becomes correlated with the number of electrons passing a given point in a conductor per second when the generator frequency is low. For historical reasons however (i.e., because Maxwell modified the definition of current instead of replacing it) we think of current as the sum of
conduction and displacement currents. This turns out to be a reasonable approach, because there are many situations in which conduction current is important. Electronics, for example, is the art of controlling electricity by controlling the conduction current. Charged particles are also involved in the workings of chemical energy sources (batteries and fuel cells) and electrochemical processes in general (e.g., electro-plating). For the vast range of AC electrical problems however (including the design of electronic circuits) conduction current is a source of misconceptions, and needs to be distinguished from current in the general electromagnetic sense. We shall proceed by thinking in terms of a type of current called 'current'; which may or may not be correlated with the movements of charged particles, but for most of the time we don't care whether it is or not. This might seem to imply that we have turned current into an abstract idea, but actually we have simply unloaded some unnecessary baggage. Many readers will be aware that voltage is sometimes referred to as 'electromotive force' (EMF). Its unit of measurement is not a pure force in the Newtonian sense, but it is proportional to the force exerted on a charged particle; and so the habit of referring to it as a force is loosely (and widely) accepted. Likewise, current is proportional to the force exerted on a magnet in the field, and its unit, the Ampere, is the measure of magnetomotive force. The electric field in a circuit is everywhere proportional to the driving voltage. Likewise, the magnetic field is everywhere proportional to the effective current; and the two fields between them set the intensity of the Poynting vector. Also, it has to be said that AC ammeters are calibrated according to the amount of energy delivered, and so read the true, or magnetomotive, quantity. Before we move on, it is perhaps worth making a few observations on the use of the term 'displacement current'.
There are occasional (non peer-reviewed) publications which agonise about the supposedly deep philosophical implications of this strange quantity. That is unfortunate, because it doesn't actually exist. Maxwell coined the term because he initially imagined it as distortions of the æther, the latter being an elastic medium supposed to permeate all space and thereby 'explain' the paradoxical phenomenon of action at a distance. The æther has now gone the way of the Earth-centred Universe (and good riddance); and Science has come to explain it all in terms of fields and particles. It will do no harm to think of displacement current as a convenient fudge; which allows us to extend the laws of DC electricity to higher frequencies and thereby avoid having to subject every problem to the full electromagnetic treatment. Hence, 'displacement current' is that which has to be invoked because electromagnetic energy doesn't always follow the wires. It is not a physical current. It is just a quantity which corrects for the difference between the magnetomotive force and the conduction current. Magnetomotive force (or 'MMF') is incidentally, not completely synonymous with current. Were it so, we would gladly drop the misleading concept 'current' altogether; but unfortunately, we are stuck with it. The reason is that MMF and current are only identical in circuits formed of a single conducting loop. When the circuit is composed of overlapping loops, disposed in such a way that adjacent conductors carry current in the same direction, the MMF is increased due to a phenomenon called 'magnetic flux linkage'. Such overlapping structures are, of course, known as coils or inductors, and have the property that they allow the amount of magnetic energy which can be stored in a given volume of space to be magnified.
Still, for the greater (non-overlapping) part of an electrical circuit, current and MMF are practically the same; and to a good approximation, we can dispense with the details and treat coils as separate objects having a single magnetic concentrating property called 'inductance'. Certainly, it is very useful to know how to calculate inductance from the number of turns and the physical dimensions of a coil; but it is a matter which can be separated from the general business of designing electrical systems.
A circuit diagram is a purely topological representation (like the famous London Underground map). It was a curious and usually unremarked discovery of the early circuit experimenters, that it doesn't matter how the equipment is laid out, or whether a component is large or small, or how long the wires are (provided that they are a lot more conductive than any of the designated resistances). This, of course, ceases to be true as the frequency is increased; and this breakdown of DC theory is partly due to the finite speed of light. Consider a sine-wave generator connected by means of relatively long wires to a resistive load. If we measure the voltage difference between any two points in the circuit, we obtain a quantity which is proportional to the total electric field existing between those points. In this case, the field is associated with electromagnetic energy propagating from the generator to the load. Since it takes a finite time for the energy to make the journey, this means that the voltage measured across the generator will not be identical to the voltage measured across the load. If we presume that the resistive loss in the wires is negligible, the main difference will be in the relative phases of the two sine waves; i.e., if we take some reference point on the waveform, such as the zero-crossing-point on going from negative to positive, we will find that the load waveform is delayed relative to the generator waveform. This will not be noticeable if the measurements are made using an ordinary AC voltmeter, but the time difference can certainly be demonstrated using a dual-channel oscilloscope; and the same effect will give rise to performance deviations in more complicated (i.e., phase critical) circuits. There are ways of dealing with such problems (and in the example case, it is to represent the wires as inductances with some capacitance between them), but it is important to understand that the 'truth' of circuit diagrams is contingent upon unspecified factors.
Knowing the difference between representation and reality is the art (as opposed to the science) of circuit analysis. No one would want to apply the full electromagnetic theory to routine circuit problems; and indeed, success in the solving of Maxwell's equations for some particular class of problems is often regarded as a scientific event. Hence electricity is primarily associated with circuits, rather than fields and waves, and being 'good at it' requires a level of understanding which is difficult to formalise. Experience comes with time, but we can at least invite entry to the Guild by offering a straightforward rule of thumb. A light wave in vacuum completes one cycle of variation of its electric and magnetic fields upon travelling a distance given by the expression: λ = c/f, where f is the frequency; c = 299792458 metres/second is the speed of light; and λ (Greek "lambda") is the wavelength. Due to the essentially refractive nature of circuits, the apparent velocity of signal propagation through an electrical network is never exactly c, but it rarely deviates from c by more than a few %. Hence we can easily obtain an idea of the phase errors which will accumulate as a result of constructing a circuit on a particular physical scale. A given amount of phase error does not necessarily translate into the same error in some other quantity, but if we confine ourselves to thinking of the order of the error (i.e., its magnitude, thereabouts), it is a fairly good guide. On that basis, we can answer the question: "If I build the circuit as drawn, how well will its performance agree with my analysis?" The scale on which attention to physical detail (layout, wire lengths, component size, etc.) will be required, for a given level of agreement, is shown for various frequencies in the table below (using the mental-arithmetic approximation c = 3×10⁸ m/s).
Frequency f:               300 kHz   3 MHz    30 MHz   300 MHz   3 GHz
Wavelength λ = c/f:        1 km      100 m    10 m     1 m       10 cm
Scale for 10% accuracy:    100 m     10 m     1 m      10 cm     1 cm
Scale for 1% accuracy:     10 m      1 m      10 cm    1 cm      1 mm
Scale for 0.1% accuracy:   1 m       10 cm    1 cm     1 mm      0.1 mm
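For readers who like to check such figures, the round numbers in the table above can be reproduced in a few lines of Python. This is only a sketch of the rule of thumb: the construction scale for a given accuracy is simply that fraction of the wavelength, using the same approximation c = 3×10⁸ m/s as the table.

```python
# Rough construction-scale table: the physical scale at which a given
# fraction of a cycle of phase error accumulates is that fraction of
# the wavelength.
c = 3e8  # mental-arithmetic approximation for the speed of light, in m/s

for f in (300e3, 3e6, 30e6, 300e6, 3e9):
    wavelength = c / f
    print(f"f = {f:.0e} Hz:  wavelength = {wavelength:g} m,  "
          f"10% scale = {0.10 * wavelength:g} m,  "
          f"1% scale = {0.01 * wavelength:g} m,  "
          f"0.1% scale = {0.001 * wavelength:g} m")
```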
Assuming that most signal-processing circuits are constructed on a scale of about 10cm, we can see that there will be no serious discrepancies between analysis and practical results for frequencies up to about 3MHz (ignoring displacement currents for the time being). Beyond that, we are definitely into the realm of radio-frequency engineering, where layout is important; but note that this does not mean that analysis will fail. Rather, as was alluded-to earlier, it requires a modified approach, where some physical variables have to be turned into theoretical circuit components. As we approach ultra-high frequencies (UHF) however, the struggle to adapt the lumped component representation will become increasingly difficult; and the need to resort to Maxwell's equations (or at least, to standard solutions obtained from the scientific literature) will become more and more apparent. While on the subject of scale incidentally; note that the wavelength range of visible light runs from 0.7 to 0.4 microns (where 1 micron = 1 μm = 10⁻⁶ m). If we were to represent the interaction of visible light and matter using circuit diagrams, the circuits would have to be built on the molecular scale (around 1 nm = 10⁻⁹ m). Hence visible light has no measurable tendency to be guided by ordinary electrical circuitry; but it does have the convenient habit of reflecting from the components, so that we can see them. We can now draw together two threads from the preceding discussion. Representing a circuit diagrammatically begins with a basic assumption; that either the circuit is infinitesimally small, or that the speed of light is infinite. Either way, it means that every part of the circuit is assumed (initially) to be in instantaneous communication with every other part. It is also assumed that the electrical energy always follows the wires, whereas it is actually distributed in the fields surrounding the circuit.
In both cases, in the absence of corrective measures, this can result in disagreement between the behaviour calculated from circuit theory and the measured performance of the actual circuit. In resolving the potential for discrepancy, we must first recognise that circuit diagrams fall into two categories: those which are used for production engineering and end up in service manuals; and theoretical diagrams used by designers. As we will see in this and subsequent articles; a theoretical diagram is actually a type of mathematical statement; which can be extended to describe the behaviour of a physical circuit to an almost arbitrary degree of precision. A production diagram, on the other hand, is just a record of the interconnections in a set of manufactured sub-assemblies (resistors, transistors, coils, etc.). As has already been implied: for equipment operating at audio frequencies and below, there may be a great deal of similarity between the diagram used by the design engineer and the diagram in the service manual; but for well-designed radio equipment, this will not necessarily be the case. On the subject of circuit diagrams; it will be noted that the North American or Japanese preferred (zig-zag line) symbol for resistance is used here. This convention is adopted (or, in the author's case, was never un-adopted) because the rectangular box symbol was already in use by theoreticians long before European standards (apparently intended solely for the convenience of draughtsmen) were put forward. In this, and all of the other documents produced by this author, the box symbol is used strictly to represent a generalised electrical network. It is also, in particular (and in keeping with long-standing practice), used to represent a generalised two-terminal linear network called an
'impedance' (a mathematical construct with which we are about to become very familiar). In the following sections of this article, we will derive the basic AC theory, which deals with notionally discrete resistances, inductances and capacitances, these being known as ideal components. As will be shown, the behaviour of networks of these components can be determined by starting with a handful of empirical electrical formulae (i.e., formulae determined by experiment) and then using the properties of the Poynting vector to determine the rules of combination. We will not, incidentally, need to know how to carry out 3D vector multiplication explicitly; we simply need to know that, for multiplication operations of any type, when the two quantities being multiplied have the same algebraic sign, the answer is positive, and when they have opposite signs, the answer is negative. This will result in an extendible theory of linear networks; extension being a matter of incorporating circuit modules (black boxes) called 'component models' or 'equivalent circuits', which have ports defined as networks of ideal components. These modules were mentioned previously as a way of dealing with non-linear devices; but the approach can be applied to any electrical device which requires the use of a more sophisticated theory (such as electromagnetics), or which is not well described by treating it as a discrete ideal component. It will be noticed that we have talked of 'resistances, capacitances and inductances', rather than 'resistors, capacitors, and inductors'. There may be no great difference between the two conceptions from the point-of-view of the audio engineer; but for the radio engineer, ideal components and practical components are not the same. Whether a practical component can be regarded as an ideal component is a matter of internal dimensions and wavelength.
Some capacitors, for example, are made by rolling-up long lengths of metallised plastic film, giving an assembly which behaves quite like a pure capacitance at low frequencies, but turns into an inductance at radio frequencies (thus radio engineers have a certain fondness for capacitors which have small electrodes). Similarly, an inductor is made by coiling-up a length of wire, and its property of pure inductance is modified by the resistance of the wire and by the time it takes for an electromagnetic wave to propagate along it. Even the humble resistor is not perfect; and in general: every practical two-terminal device is replaced by an equivalent circuit, which may (sometimes) correspond to a discrete ideal linear component at low frequencies, but at high frequencies will always mutate into a network of resistances, capacitances and inductances. The subject of component models is developed in detail in later articles in this series; but for now, the important point is that circuits designed on the assumption of ideal behaviour may require extra work if they are to be realised in practice. Neglecting possible (but usually minor) non-linearities; the difference between ideal and practical components can always be attributed to a combination of three factors: time delays; displacement currents; and the finite resistance of conductors at ordinary temperatures. The same can also be said of wiring and layout. Hence, an accurate theoretical model for a practical electrical system may also require components to represent spurious effects in the circuit at large. In the case of extreme time delays (i.e., long connecting wires), we can introduce a two-port module called a 'transmission line', which is a solution of Maxwell's equations in a box. Minor effects can be accounted for by including 'stray' or 'parasitic' resistances, capacitances and inductances here and there. 
In the absence of constructional details and related corrections however; it is implicit in any theoretical circuit: that the components will be grouped on a scale which is small in comparison to wavelength; that interconnection resistance is negligible in comparison to specified resistance; and that there may be a need for shielding to prevent energy from turning-up in places where it is not wanted. Having cautioned against the pitfalls however; we should also caution against over-modelling. Even at radio frequencies, there need not always be a great deal of difference between the idealised and the practical representations. This is because, firstly: circuits will often work adequately when built according to the assumption that practical components are nearly ideal; and secondly, nominal component values are subject to manufacturing tolerances, which means that accurate performance can only be achieved by making some components adjustable. Adjustment not only serves to
correct the nominal component value to the required value, it can also absorb deficiencies in the original analysis. Hence, it is important to understand that a simple approach will often do the trick; and extreme attention to subtleties is usually only needed when attempting to achieve the most exacting standards of radio frequency (RF) performance.
Observe that only the entries in the left-hand column contain fundamental scientific information. The uppermost entry is a statement of Ohm's law; which is that the electrical current I is proportional to the voltage applied across the ends of a conductor, the constant of proportionality being known as the resistance. The entry below it is the power law; which represents the observation that a conductor (resistor) heats up as a consequence of an electric current (i.e., it dissipates energy) and the power consumed (i.e., the energy delivered per unit-of-time) is the product of the applied voltage and the current, i.e., P=VI [Watts]. Also, by using the substitutions "I=V/R" and "V=IR", we obtain two alternative power laws: "P=V²/R" and "P=I²R". The expression "P=I²R" is known as Joule's law, and is a statement of the fundamental relationship between electricity and thermodynamics. One important point to note about the standard power and resistance formulae however, is that they are all derived from experiments with DC electricity. They represent incomplete statements of Ohm's law and Joule's law, because they can only be applied to AC circuits when the load on the generator is a pure resistance. Later-on we will show how to state these laws in a completely general way, but some groundwork will be required before that can be done. Notice incidentally, the correspondence between the power law P=VI and the earlier-given definition of the Poynting vector P = E × H. The former is a dimensionally-reduced version of the latter, as can be seen by noting that the unit of electrical field strength is 'Volts per metre', and the unit of magnetic field strength is 'Amperes per metre'. Also, even though we do not need to know how to perform 3-dimensional vector multiplication; it is not difficult to understand that any type of multiplication also multiplies the units of measurement.
Hence the unit of the Poynting vector is 'Watts per metre-squared', which is a measure of illumination; i.e., the delivery of electrical power is a matter of illuminating the receiving object with electromagnetic energy. In the case of inductors and capacitors, the entries in the left-hand column tell us that they also obey Ohm's law when connected to a generator of alternating voltage but, insofar as we can construct them without inadvertently including resistance, they consume no power. The reason why the ideal versions of these components cannot dissipate energy is that they have no resistance by definition, i.e., they cannot convert energy into heat or work. Instead, over the course of a generator cycle, the amount of energy which flows into the component is exactly equal to the amount which flows out. This property forces a 90° phase difference between the voltage and current
waveforms; a quarter-cycle or 'quadrature' offset being that which causes the Poynting vector to reverse its direction four times per cycle. Hence, although the average or steady-state power consumption is zero, the instantaneous power-flow is alternating at twice the generator frequency. In the case of an inductor, the AC resistance or reactance XL (measured in Ohms) is directly proportional to the inductance (in Henrys) and to the frequency f (in Hertz, i.e., cycles per second) of the applied voltage. The quantity 2πf is known as the angular frequency (i.e., the frequency in radians per second, where 2π radians corresponds to 360°) and is often given the symbol ω (Greek lower-case "omega"). In the case of a capacitor, the reactance XC is inversely proportional to the angular frequency, and also inversely proportional to the capacitance (in Farads). Note also that capacitive reactance is shown as being negative; because it transpires that when capacitors and inductors are connected to form resonant circuits, the reactance of the inductor, in some sense, cancels the reactance of the capacitor. This means that one of the types of reactance has to be considered to be negative and, as will be explained later, we choose it to be the capacitive variety in order to be consistent with the conventions of trigonometry. The other entries in the table are derived from the formulae in the left-hand column, using only Ohm's law and a basic electrical rule known as Kirchhoff's first law (pronounced: "kir-khov"). Kirchhoff's law tells us that the sum of all the currents flowing into a given point in a circuit is equal to the sum of all the currents flowing out.
This law was originally regarded as proof of the principle of conservation of charge in DC circuits (what goes in must come out); but it also turns out to be true of current in the general (magnetomotive) sense, provided that we use the correct rules of addition (to be determined shortly) in circuits involving both resistance and reactance. The entries in the middle and right-hand columns are, of course, the well-known series and parallel combination formulae for passive electrical components. These expressions may all be regarded as examples of simple mathematical models (in this case, in the sense that a single component can serve to represent a combination of several components). Of these, the formula for resistances in series is the simplest of all, and tells us that whenever we encounter two resistors in series, we can treat them as a single resistor with a value equal to the sum of the two resistances. That this statement is derived from existing physical laws can be seen by applying some basic techniques of circuit analysis to the circuit shown below:

Resistors in Series

To analyse this circuit, we first observe that, as a requirement of Kirchhoff's first law, the current in the two resistors must be the same. Ohm's law then tells us that V1 = I R1 and V2 = I R2. Now, since voltage is analogous to pressure, common sense (otherwise known as Kirchhoff's second law) tells us that the total pressure-drop is equal to the sum of the pressure-drops across the two resistors, i.e.:

V = V1 + V2

Putting these ideas together we have:

V = I R1 + I R2 = I (R1 + R2)

Now, if we postulate a hypothetical resistance R which represents the series combination of R1 and R2, it must be possible to replace R1 and R2 with this resistance and obtain the same current for a given voltage, i.e.:

V = I R = I (R1 + R2)

Hence:

R = R1 + R2
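The derivation above is easily verified numerically. The following Python sketch uses arbitrary illustrative values (12 V across 4 Ohms and 8 Ohms in series; these are this example's own choices, not values from the text), and also confirms that the DC power laws agree with each other:

```python
# A numerical check of the series-resistance derivation.
V, R1, R2 = 12.0, 4.0, 8.0   # arbitrary illustrative values

R = R1 + R2              # the postulated equivalent resistance
I = V / R                # the same current flows in both resistors
V1, V2 = I * R1, I * R2  # Ohm's law applied to each resistor in turn

# Kirchhoff's second law: the two voltage drops sum to the applied voltage.
assert abs((V1 + V2) - V) < 1e-12

# The DC power laws are mutually consistent: P = VI = I*I*R = V*V/R.
P = V * I
assert abs(P - I**2 * R) < 1e-12
assert abs(P - V**2 / R) < 1e-12
print(I, V1, V2, P)  # → 1.0 4.0 8.0 12.0
```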
Resistors in Parallel

In the case of two resistors in parallel, the voltage across the two resistors is the same. Hence Ohm's law tells us that:

V = I1 R1 = I2 R2      (3.1)

and Kirchhoff's first law tells us that:

I = I1 + I2

Now, if we postulate a hypothetical resistance R which represents the parallel combination of R1 and R2, we have:

V = I R = (I1 + I2) R

We can eliminate I1 and I2 by using equation (3.1) above, i.e., I1 = V/R1 and I2 = V/R2, hence:

V = [ (V/R1) + (V/R2) ] R

The voltage can then be factored out and cancelled to give:

1 = [ (1/R1) + (1/R2) ] R

and dividing each side of the equation by R gives:

1/R = (1/R1) + (1/R2)

This is one form of the standard expression for resistors in parallel, and a little rearrangement will give us the other. Inverting the expression above gives:

R = 1 / [ (1/R1) + (1/R2) ]

We then arrange the terms inside the square brackets to have a common denominator (multiply the 1/R1 term by R2/R2 and multiply the 1/R2 term by R1/R1), i.e.:

R = 1 / [ {R2/(R1 R2)} + {R1/(R1 R2)} ]

hence:

R = 1 / [ (R1 + R2) / (R1 R2) ]

which, upon inversion, gives:

R = R1 R2 / (R1 + R2)

The formulae for inductors and capacitors in series and parallel may also be derived by using exactly the same approach as was used above; the only difference being that inductive reactance XL = 2πfL, or capacitive reactance XC = -1/(2πfC), is substituted in place of resistance. The 2πf factors and any minus signs disappear by cancellation, leaving formulae involving only inductance or capacitance. Note incidentally, that the inductors in the illustrations in the previous table are shown orientated at right-angles to each other, this being done as a reminder that the formulae are only true when there is no magnetic coupling between the coils. Note also, that the capacitor formulae take on the opposite forms of their resistance counterparts; this being due to the reciprocal (inverse) relationship between capacitance and capacitive reactance.
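Again, the algebra can be checked numerically. The Python sketch below also confirms the remark about substituting reactance for resistance: combining two capacitive reactances in parallel and converting back to capacitance gives the series-style sum C1 + C2. All component values are arbitrary illustrative choices:

```python
# Numerical check of the parallel-resistance formula and the analogous
# capacitor formula obtained by substituting reactance for resistance.
from math import pi, isclose

R1, R2 = 10.0, 15.0
R = R1 * R2 / (R1 + R2)            # parallel combination
assert isclose(1/R, 1/R1 + 1/R2)   # reciprocal form of the same formula

# Capacitors: XC = -1/(2*pi*f*C). Combining the reactances in parallel and
# solving back for C shows that parallel capacitances simply add.
f = 1e6                            # arbitrary test frequency, Hz
C1, C2 = 100e-12, 220e-12          # arbitrary capacitances, Farads
X1, X2 = -1/(2*pi*f*C1), -1/(2*pi*f*C2)
Xp = X1 * X2 / (X1 + X2)           # reactances in parallel
C_equiv = -1/(2*pi*f*Xp)
assert isclose(C_equiv, C1 + C2)   # the 2*pi*f factors and signs cancel
```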
4. Resonance
The combination rules discussed above allow us to deal with resistors, or capacitors, or inductors in series and parallel, but for reasons which will become clear in the following sections, they do not provide a method for dealing with combinations of resistance and reactance (if we try to add resistance to reactance directly, our calculations will not agree with our measurements). We can however deal with combinations of inductive and capacitive reactance, provided that we observe the convention that capacitive reactance is negative. We may therefore add to our repertoire of standard formulae by writing general expressions for pure reactances in series and parallel, i.e.:

Reactances in series:    X = X1 + X2
Reactances in parallel:  X = X1 X2 / (X1 + X2)
Now, since inductive reactance is positive and increases with frequency, and capacitive reactance is negative and decreases with frequency; if an inductance is placed in series or parallel with a capacitance, there will occur a frequency at which the two reactances cancel. That frequency, of course, is the resonant frequency of the combined reactances. A resonant frequency is usually denoted by the symbol 'f0' ("f nought"). In the case of an inductor and a capacitor in series, the reactance goes to zero, i.e., the combination behaves like a short-circuit (neglecting resistance), when XL + XC = 0. In the case of an inductor and a capacitor in parallel, since the term XL + XC is on the bottom of a fraction, it would appear that the reactance goes to infinity, i.e., the combination behaves like an open-circuit when XL + XC = 0. A complete open-circuit does not appear in practice however, because in the parallel case, it transpires that we are not at liberty to neglect the resistances of the coil and the capacitor. We therefore cannot calculate the exact resonant frequency of a practical parallel tuned circuit, nor the resistance which remains when the reactance has been cancelled, until we have developed a more comprehensive theory; and so that is another matter which we must leave until later. We can say however, that the exact resonant frequency of a series tuned circuit, and the approximate resonant frequency of a typical parallel tuned circuit, occurs when:

XL = -XC

i.e.:

2π f0 L = 1/(2π f0 C)

Now, if we rearrange this equation to put both instances of f0 on one side we have:

f0² = 1/(4π² L C)

and taking the square-root gives:

f0 = 1/[ 2π √(L C) ]      (4.1)

(pronounced: "f nought equals one over two pi root LC", or in rhyme: "One over two pi root LC gives the resonant frequency").
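As a numerical illustration (using an arbitrarily chosen 10 μH and 100 pF, values of this example's own rather than from the text), the following Python sketch computes f0 and confirms that the two reactances cancel there:

```python
# Series LC resonance: check that XL + XC = 0 at f0 = 1/(2*pi*sqrt(L*C)).
from math import pi, sqrt, isclose

L = 10e-6    # 10 microhenrys (arbitrary illustrative value)
C = 100e-12  # 100 picofarads (arbitrary illustrative value)

f0 = 1 / (2 * pi * sqrt(L * C))

XL = 2 * pi * f0 * L          # inductive reactance at f0
XC = -1 / (2 * pi * f0 * C)   # capacitive reactance at f0 (negative by convention)

assert isclose(XL, -XC)       # the reactances cancel at resonance
print(f"f0 = {f0/1e6:.2f} MHz")  # prints: f0 = 5.03 MHz
```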
Equation (4.1) is, of course, the standard resonance formula; but before accepting it we should note that, because it contains a square root, every combination of L and C has two resonant frequencies associated with it. Every equation involving a square root has two solutions because the square root of a number is, by definition: 'a quantity which, when multiplied by itself, gives the number in question'. When two negative numbers are multiplied, the result is a positive number. Hence, if x is a positive number, we must note not only that:

x = (+√x) × (+√x)

but also that:

x = (-√x) × (-√x)

hence the square root of x may be taken with either sign. So, the resonance formula stated explicitly becomes:

f0 = ±1/[ 2π √(L C) ]

and there are two solutions, numerically identical but of opposite sign. By convention, we usually assume the positive result; but since there were no restrictions on the validity of the arguments used in deriving the formula, the negative frequency solution must exist and must mean something. AC electrical theory, as we delve more deeply into it, will present us with various little conundrums (often involving square roots); and although negative frequency is one of the most trivial, we will be ill prepared for the others if we simply let it pass. The negative frequency solution arises because a sinusoidal waveform is derived from circular motion, and there are two possibilities for this: clockwise and anti-clockwise. This does not mean that the positive and negative frequencies are identical however, as the following argument will illustrate: Consider an alternator (mechanical AC generator) stopped at a position where it would give zero output voltage (or current) if it were spinning, and where it will give an initially positive output if its shaft is turned
anti-clockwise (see illustration below). Now, if the shaft is turned clockwise, the output will initially go negative. It follows that the difference between the positive and negative frequency outputs is that while the voltage (or current) associated with one is positive the voltage associated with the other is always negative, and vice versa. Hence changing the sign of a frequency has the effect of shifting the phase of the associated waveform by 180°. Incidentally, for anyone who might insist on taking the direction of rotation analogy too seriously; it is of course obvious that if the generator is an electronic oscillator, the concept of rotation is meaningless. In that case, the negative frequency solution can be obtained by swapping the connections (as it can with any generator or resonator).
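The 180° phase-shift interpretation is easy to confirm numerically; the Python sketch below checks, at a few arbitrarily chosen instants, that negating the frequency inverts the waveform:

```python
# Negative frequency: sin(-w*t) = -sin(w*t), i.e. negating the frequency
# inverts the waveform, which is the same as a 180-degree phase shift.
from math import pi, sin, isclose

w = 2 * pi * 50.0  # arbitrary angular frequency (50 Hz), for illustration
for t in (0.001, 0.0037, 0.009):
    assert isclose(sin(-w * t), -sin(w * t))      # sign inversion
    assert isclose(sin(-w * t), sin(w * t + pi))  # equivalent to a 180° shift
```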
Resistance is always positive (a statement which we will qualify later), but there are two opposing types of reactance, which of course we know as inductive reactance, XL = 2πfL, and capacitive reactance, XC = -1/(2πfC). Inductive reactance arises through the storage of energy in a magnetic field, and capacitive reactance through the storage of energy in an electric field. When inductance, capacitance, and resistance are combined within the same two-terminal 'black box', the opposing reactances will always tend to cancel-out to some degree, and so the two types of reactance make only a single contribution to the impedance at any particular frequency. There is however, no way in which resistance and reactance can be combined to form a single numerical quantity, because the physical processes they represent turn out to be mutually exclusive. A natural distinction arises between resistance and reactance because perfect energy dissipation implies that the Poynting vector never reverses; whereas perfect storage and return implies alternation, with the Poynting vector spending equal amounts of time in the two possible flow directions. Hence, for a resistance; when the instantaneous voltage is positive, the current is positive; and when the voltage is negative, the current is negative; i.e., the voltage and current waveforms are perfectly in phase. For the Poynting vector to alternate and give zero average power delivery however, there must be a quarter-cycle difference between the voltage and current waveforms. It is the 0° difference in the resistive case, and the 90° difference in the reactive case, which gives rise to a condition of mathematical independence, or orthogonality, which we can exploit to obtain a generalised form of Ohm's law. Once we have that generalisation, the rules of combination follow and give rise to a complete and internally-consistent AC theory. For DC circuits, we can write Ohm's law as "V=IR".
For AC circuits therefore, we must suspect that we can write something along the lines of "V=IZ", as long as we recognise that impedance, Z, the generalised attribute of objects which obey Ohm's law, must be represented by some composite quantity containing two distinct elements R and X. In circumstances such as this, it is traditional to see if anyone has developed a branch of mathematics which suits the problem, and the clue regarding where to look lies in the independence of R and X. If two quantities are completely independent, they must in some sense exist in different dimensions (i.e., they always move at right-angles to each other). This means that impedance cannot be represented by an ordinary number, i.e., a one-dimensional quantity lying on a line between -∞ and +∞; it must be represented by a point on a two-dimensional plane, which is another way of saying that Z can be plotted as a point on a graph of R against X. With regard then to solving problems involving impedance, it so happens that we are spoilt for choice, because there are no less than two appropriate branches of mathematics, namely vectors and complex numbers. The vector approach traditionally preferred by engineers is that of making sketches or graphs, and using trigonometry to work out the actual numbers; whereas the complex number approach is algebraic, in that it allows equations involving two-dimensional objects to be written down and rearranged. Both approaches are equivalent however, and sometimes one can clarify the other, and so we will adopt a notation and a way of thinking which enables us to switch freely between them.
which we mean that Z is characterised by an amount R in the resistance dimension and an amount X in the reactance dimension. In the context of vectors, ordinary numbers are known as scalars, because the effect of multiplying a vector by a scalar is to scale it (i.e., magnify or shrink it) without otherwise changing it. Thus, if s is a scalar, we can write:

sZ(R , X) = Z'(sR , sX)

Note also, a widely used mathematical notation, which is to use an apostrophe or "prime" ( ' ) to indicate that an object has been modified. We can immediately deduce a rule for adding vectors by observing that two quantities will only add together if they exist in the same dimension (you can't increase the length of an object by adding to its width). Thus, if we want to add two impedance vectors Z1(R1 , X1) and Z2(R2 , X2), i.e., find out what happens when the impedances are placed in series, all we have to do is add the R parts and the X parts separately to find the new impedance Z(R1+R2 , X1+X2). This operation is indicated by the '+' symbol, just as in ordinary arithmetic, i.e., if

Z(R , X) = Z1(R1 , X1) + Z2(R2 , X2)

then

R = R1+R2 and X = X1+X2

One point of treating impedances as vectors is that it enables us to draw diagrams in order to visualise what is going on. We can do this by representing an impedance as a line in a plane, with a particular length and orientation. In this sense, a vector diagram is like a navigation chart, with the distances, in this case, measured in Ohms. Mathematicians call such maps 'spaces', by analogy with ordinary space; and a space in which distance is measured in Ohms is called impedance space. Now observe that although the R and X parts of an impedance exist in different dimensions, they both exist in the same space because they are connected by the fact that they are measured using the same units (i.e., Ohms).
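The scaling and series-addition rules just stated can be sketched directly in code, treating an impedance as an ordered pair (R, X). The function names and example values are mine, chosen for illustration:

```python
def scale(s, Z):
    """Scalar multiplication: s * Z(R, X) = Z'(s*R, s*X)."""
    R, X = Z
    return (s * R, s * X)

def series(Z1, Z2):
    """Series addition: add the R parts and the X parts separately."""
    return (Z1[0] + Z2[0], Z1[1] + Z2[1])

Z1 = (50.0, 30.0)      # 50 Ohms resistance, +30 Ohms (inductive) reactance
Z2 = (25.0, -80.0)     # 25 Ohms resistance, -80 Ohms (capacitive) reactance
print(series(Z1, Z2))  # (75.0, -50.0)
print(scale(2, Z1))    # (100.0, 60.0)
```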
We may therefore deduce that the difference between a space and a graph is that all of the axes in a space must be labelled in the same units; whereas the axes of a graph can have different units (e.g., temperature vs. time). You may, of course, have heard of four-dimensional space-time, which appears to disobey the rule just stated, but in fact the unit of the fourth physical dimension is not time but the speed of light multiplied by time, i.e., ct. The units of ct are metres per second × seconds, i.e., metres, and so Einsteinian space has four dimensions with units of length. Working in impedance space; if we adopt the standard convention that resistance increases to the right and reactance increases upwards, we can obtain the line representing an impedance by plotting a point, then moving right by a distance R and upwards by a distance X (or downwards if X is negative), and plotting another point. The length of the line which joins the two points is called the magnitude or 'modulus' of Z, and is written |Z| (and pronounced "mod Z"). The magnitude is always positive by definition, and is obtained by using Pythagoras' theorem (the square on the hypotenuse of a right-angled triangle is equal to the sum of the squares of the other two sides). Hence:

|Z| = +√(R² + X²)      (6.1)

Tan φ = X / R        Cos φ = R / |Z|        Sin φ = X / |Z|

Notice also that the definition of magnitude has a meaning for ordinary numbers because they can be regarded as one-dimensional vectors. Hence, if s is a scalar:

|s| = +√(s²)

i.e., the effect of taking the magnitude of an ordinary number is simply to remove the sign (+ or -). The direction of Z is given by the angle φ (lower-case "phi") it makes with the horizontal (resistance) axis, which is the angle whose tangent is X/R, i.e.,
Tan φ = X / R

Hence:

φ = Arctan( X / R )      (6.2)

('Arctan' is sometimes written 'Tan⁻¹'). Note that φ can be positive or negative; and in particular, if we adopt the standard trigonometric convention that a positive angle is obtained by going anti-clockwise from zero (see diagram right), φ will be positive for an impedance with an inductive reactance and negative for an impedance with a capacitive reactance. Notice also that |Z| and φ, taken together, provide a complete characterisation of a two-dimensional vector and so give us an alternative way of recording its properties. The form introduced earlier: Z(R, X) is known as the rectangular form because it contains a list of values in dimensions chosen to be at right-angles to each other. The alternative: Z(|Z|, φ) is known as the polar form, because it uses polar co-ordinates (distance and bearing). The polar form uses different units in its two dimensions (Ohms; degrees or radians); whereas the rectangular form has the same units in both dimensions (Ohms, Ohms). There is no ambiguity between the rectangular and polar forms because the list in brackets is optional, and a vector has the same properties regardless of how it is defined. Also, if a specific vector quantity is to be noted by putting actual numbers into the brackets, a degrees (°) symbol next to the angle will indicate that the polar form is intended. We can now regard equations (6.1) and (6.2) as the transformations which take a two-dimensional vector from the rectangular to the polar form. The reverse transformations are obtained from the standard trigonometric relations: Cos φ = R/|Z| and Sin φ = X/|Z|, i.e.,

R = |Z| Cos φ   and   X = |Z| Sin φ

The full set of transformations is summarised in the following table:

Rectangular form:   Z( R , X )                      =  Z( |Z|Cos φ , |Z|Sin φ )
Polar form:         Z( √[R² + X²] , Arctan[X/R] )   =  Z( |Z| , φ )        (6.3)
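The rectangular-to-polar transformations of equations (6.1) and (6.2), and their reverses, translate directly into code. One assumption of mine: the two-argument arctangent (atan2) is used in place of Arctan(X/R) so that impedances with negative R land in the correct quadrant:

```python
import math

def to_polar(R, X):
    """Rectangular (R, X) -> polar (|Z|, phi), with phi in degrees."""
    magnitude = math.sqrt(R**2 + X**2)        # equation 6.1
    phi = math.degrees(math.atan2(X, R))      # equation 6.2, quadrant-safe
    return magnitude, phi

def to_rectangular(magnitude, phi):
    """Polar (|Z|, phi in degrees) -> rectangular (R, X)."""
    return (magnitude * math.cos(math.radians(phi)),
            magnitude * math.sin(math.radians(phi)))

mag, phi = to_polar(3.0, 4.0)      # |Z| = 5, phi ~ +53.13 deg (inductive)
R, X = to_rectangular(mag, phi)    # round trip recovers (3.0, 4.0)
print(mag, phi, round(R, 9), round(X, 9))
```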
If we want to add two impedances graphically, we simply place the beginning of the second against the end of the first, and draw a new line from the beginning of the first to the end of the second. Thus we get a new impedance, with a new magnitude and a new direction. This might all seem rather unnecessary in view of the simple addition rule given earlier, but the meaning of vector addition is (hopefully) obvious when it is visualised in this way. Whatever the method used in performing the arithmetic however, the point in doing it, as we shall see, is that it allows us to keep track of the relationship between the voltage applied across an impedance and the corresponding current.
Now note that since I and V are vectors, we can write them in polar or rectangular forms using the transformations (6.3) given earlier, i.e.,

I( |I| , φ ) = I( |I|Cos φ , |I|Sin φ )
V( |V| , φ ) = V( |V|Cos φ , |V|Sin φ )

In general, it is natural to think of currents and voltages in their polar forms; but the rectangular form is important for understanding what happens when the phase angle is either 0° or 180°. Taking a current vector as an example:

I( |I| , 0° ) = I( |I|Cos 0° , |I|Sin 0° ) = I( +|I| , 0 )

and

I( |I| , 180° ) = I( |I|Cos 180° , |I|Sin 180° ) = I( -|I| , 0 )

When a two-dimensional vector lies along the 0° direction, either pointing with it or in opposition, its extent in one of its spatial (i.e., rectangular form) dimensions is zero; and, as in our interpretation of negative frequency given in section 4, the minus (-) symbol is associated with a 180° phase shift (or phase reversal).
So, now that we know that both current and voltage are vectors, we must conclude that V=IZ is the general statement of Ohm's law. It transpires however, that we may admit the validity of the other possibilities I=V/Z and V=IZ under certain circumstances. The point is that, in AC theory, we are usually interested not in the absolute phases of the voltages and currents (i.e., the phases relative to some external reference), but in their phases relative to each other. This means that we are often at liberty to choose the direction of one of the vectors in order to learn the directions of the others relative to it. The direction chosen for this special reference vector is in principle arbitrary; but a simplification occurs if we choose it to be either 0° or 180° because Sin φ goes to zero in either case, and a vector which is zero in one of its spatial dimensions behaves, in this context, as though it has one less dimension. A two-dimensional vector which drops a dimension in this way, of course, becomes a one-dimensional vector, i.e., a scalar. Hence, whenever a voltage or current appearing in a mathematical expression is written as a scalar, the symbol can be (and, as we shall see later, must be) interpreted to mean that the corresponding vector is lying along the 0° axis. A vector which transforms as a scalar in some specific context is called a pseudoscalar. A pseudoscalar has the property that when its space co-ordinates are reflected with respect to the origin (0,0) it changes sign, whereas a true scalar remains unchanged⁴. Hence voltages and currents become pseudoscalars when we choose their directions to be 0° or 180°. Another electrical pseudoscalar is resistance; a special kind of impedance which can be treated as a scalar, but which becomes negative if the co-ordinates of impedance space are reversed.
If we choose the current in Ohm's law to be our reference vector, and set its phase angle to 0°, it becomes a pseudoscalar of value equal to its extent in the 0° direction; i.e., it is identifiable as the quantity |I|Cos 0°, or +|I|. Thus, in the relationship V=IZ, we can recognise I as the reference vector against which the phase of V will be determined:

I = I( |I| , 0° ) = +|I|

and

V = I Z = (+|I|) Z

The pseudoscalar current I is therefore equal to the current magnitude |I|, the latter being the quantity registered by an ordinary AC ammeter (a device ignorant of phase). It is however not identical to the magnitude because it can be negative in principle, even if not usually in practice; i.e., if, for some reason, the reference phase is chosen to be 180°, then:

I = I( |I| , 180° ) = -|I|

An AC ammeter must be considered to register magnitude |I| rather than pseudoscalar current I because swapping the connections makes no difference to the reading, i.e., the instrument can never give a negative indication (and putting a minus sign in front of each of the numbers on the scale won't help, because then it will never be able to give a positive indication). We can however equate the meter reading |I| with the reference vector I if we want to know the phase of the voltage relative to 0°. A similar logic applies in the case of the relationship I=V/Z, where, on the (correct) assumption that the reciprocal of a vector (i.e., 1/Z) is also a vector, we can identify the pseudoscalar voltage V as the reference vector against which the phase of I will be determined:

V = V( |V| , 0° ) = +|V|

|V| is, of course, the quantity registered by an ordinary AC voltmeter, and we can equate it to V if we want to know the phase of the current relative to 0°. So, what we have seen here is that if one of a set of voltage or current vectors is replaced by its magnitude, it becomes a reference vector pointing at 0°.
We may also deduce the converse, which is that if a vector should happen to be pointing at 0 by virtue of a choice made elsewhere, then it too can be replaced by its magnitude. It is however important to understand that there is a difference between a vector which has dropped a dimension and a magnitude, because there will be
4 Elementary Particles, Enrico Fermi, Silliman memorial lecture series, Yale University Press, 1951. Definition of pseudoscalar: p9.
many circumstances in which we will want to use the magnitudes of vectors which are free to point in any direction. In particular, we will need this distinction later in order to generalise Joule's law. It will however become apparent that adoption of the convention that vectors written as scalars (i.e., un-bold) are pointing at 0° (or 180°) preserves the meaning of most of the DC and pure-resistance-only formulae which appear in standard textbooks. The correspondence arises because, whenever a vector is written as a scalar, a statement is made to the effect that the phase of that vector (except for the sign) can be ignored. A DC formula works for AC when the circuit contains only pure resistance because, in that case, rotating one vector to point at 0° rotates all of the others to point at 0°, and so they can all drop a dimension. Hence V=IZ becomes V=IR (for example). One consequence of all of this is that, in formulae, we should avoid writing voltages and currents as scalars unless we really mean them to be pointing at 0° or 180°. We must however permit a common convention, without which the notation will appear very cumbersome; which is that whenever we refer to a current or a voltage without mentioning phase we mean magnitude, i.e., the observable quantity which can be measured with a two-terminal meter. In other words, a measurement taken from a voltmeter may be written in isolation (say) Vout=27V, but as soon as it is inserted into a formula with other vectors it acquires a phase, even if we don't need to know what that is, and must then be identified as |Vout|. So, mindful of the warning that reference vectors and magnitudes are not quite the same thing, the expression V=IZ now tells us that if we multiply an impedance by the magnitude of the current passing through it, we will obtain a vector representing the magnitude of the applied voltage and its phase relative to the phase of the current.
This is an extremely useful result, and stems from the fact that the vector representation has captured the physics of the situation exactly. In effect, having observed that resistance and reactance act independently on the current, and that inductive and capacitive reactances act in opposition; we have elected to represent pure inductive reactance as a vector pointing at +90°, pure resistance as a vector pointing at 0°, and pure capacitive reactance as a vector pointing at -90°. Thus we have satisfied the requirement that the Poynting vector must alternate for reactance, but not for resistance, and we have incorporated it into the definition of impedance itself. Now however, it follows, that when some mixture of resistance and reactance is connected across a generator, the angle for the voltage-current phase difference will lie at some intermediate value, and the use of vectors allows this angle, the phase angle φ, to be determined from simple geometry. More to the point, a phase angle of 0° implies that an impedance will absorb all of the power delivered to it, and a phase angle of ±90° implies that an impedance will not accept any power. Thus we can observe that the phase angle represents not only the relationship between voltage and current for an impedance, but also the effectiveness with which power can be delivered to it.
8. Phasors
As we have just shown; one of the interpretations of Ohm's law is that, if an impedance vector is scaled by a current magnitude, it is transformed into a voltage vector. Since the act of scaling a vector does not change its direction, it transpires that both the impedance vector and the voltage vector contain the same phase information, and that this information is conserved after multiplication by a scalar. Put in plain language, this means that, although the current through an impedance will change according to Ohm's law as the applied voltage is changed, the V-I phase relationship will not change provided that the frequency is held constant. It is for this reason that vectors used in impedance related applications are known as 'Phasors' (i.e., phase-vectors, or 'carriers of phase'), and diagrams involving them as 'Phasor Diagrams'. The special properties of phasors (as distinct from vectors in general) are as follows:
- The phase co-ordinate is defined in relation to other phasors rather than to an absolute time reference.
- Time is measured in degrees or radians relative to one cycle of the frequency at which the analysis is being carried out.
- A phasor deemed to be pointing at 0° may be replaced by its magnitude, and a phasor deemed to be pointing at 180° may be replaced by the negative of its magnitude.
- Phasors are strictly two-dimensional; i.e., the vector cross product (which produces a new vector at right angles to the original two) has no meaning for phasors.

Shown below is a phasor diagram illustrating what happens when an impedance consisting of a resistance, an inductance, and a capacitance in series is connected across a generator. We can easily deduce the total impedance by inspection in this case; but notionally, it is obtained by regarding the individual series elements as phasors: ZR(R,0), ZL(0,XL), and ZC(0,XC); and adding them together. Thus:

ZR(R, 0) + ZL(0, XL) + ZC(0, XC) = Z(R, XL+XC)

We can draw the resultant phasor Z by moving along by a distance R and moving up by a distance XL+XC (or down if XL+XC is negative), but notice that in the diagram, the resistances and reactances have all been scaled by a reference phasor I, which is equal to the magnitude of the current. By so doing, all of the quantities have been turned into voltages, and so the diagram has become a voltage phasor diagram.
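The series combination Z(R, XL+XC) can be evaluated numerically as a sketch. The component values here are arbitrary examples of mine, not figures from the diagram:

```python
import math

def series_rlc(R, L, C, f):
    """Return (|Z|, phi in degrees) for R, L and C in series at frequency f."""
    XL = 2 * math.pi * f * L
    XC = -1.0 / (2 * math.pi * f * C)
    X = XL + XC                          # net reactance: Z = Z(R, XL+XC)
    magnitude = math.sqrt(R**2 + X**2)   # Pythagoras
    phi = math.degrees(math.atan2(X, R)) # phase angle
    return magnitude, phi

# Example: 10 Ohms, 1 uH and 100 pF driven at 10 MHz.
mag, phi = series_rlc(10.0, 1e-6, 100e-12, 10e6)
print(mag, phi)   # |Z| ~ 96.8 Ohms, phi ~ -84 deg: net capacitive
```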
With regard to the physical phenomena represented here; observe that, since R, C, and L are in series, they must all carry the same current. We can deduce the magnitudes of the voltages across the three components using Ohm's law, i.e., |VL|=IXL, |VC|=IXC, and VR=IR (the latter being written as a scalar because it is in phase with I and therefore pointing at 0°). We also know the relative phases of these voltages because they are all linked to the phase of the common current; i.e., the voltage IR across the resistance is in phase with the current, the voltage IXL across the inductance is at +90° relative to the current, and the voltage IXC across the capacitance is at -90° relative to the current. We can therefore add these three voltages as vectors to obtain the magnitude of the generator voltage and its phase relative to the phase of the current; although in the diagram the voltages across the two reactances have been added first to produce the more diagrammatically convenient quantity IXL+IXC (this being the voltage across the total reactance in the system). Note that the voltages across the two reactances always tend to cancel because there is a fixed 180° phase difference between them, and so the magnitude of the voltage across the total reactance is always smaller than the magnitude of either IXL or IXC. The relationship between the phase-angle obtained from a phasor diagram and the waveforms which can be observed using a two-channel oscilloscope is shown below (where > means "greater than" and < means "less than"):
Here we have obtained a waveform which is exactly in phase with the current by measuring the voltage across the resistive component (bottom trace). When this is compared against the waveform of the total voltage V (using the upward zero-crossing as an arbitrary reference point), we find that V is advanced in time (i.e., leading) relative to I when the impedance is inductive (XL+XC>0), and V is retarded (lagging) relative to I when the impedance is capacitive (XL+XC<0). If we call the time difference observed on the oscilloscope Δt (where 'Δ' is upper-case "Delta", a symbol normally used to mean "the difference in"), then the ratio of Δt to the time of a complete cycle is the same as the ratio of the phase angle to a complete circle. The time-per-cycle (also known as the period of the waveform) is of course the reciprocal of the frequency (1/f), hence, if φ is measured in radians:

Δt / (1/f) = φ / (2π)

i.e.,

Δt = φ / (2πf)

Note incidentally, that it is impossible (neglecting the use of superconductors) to make a series LCR network from which all of the resistance can be isolated; because practical inductors and capacitors always have some internal resistance. A measurement made across any part of the total series resistance will however always produce a voltage which is in phase with I. A device which measures current by sampling the voltage across a resistance is, of course, an ammeter. It was stated earlier that capacitive reactance is defined as a negative quantity in order to make AC theory consistent with trigonometry. The convention we follow is, of course, that which says that the phase angle of a vector increases as it rotates in the anti-clockwise direction. Hence the choice of XC as the negative reactance stems from the fact that voltage lags (i.e., peaks 90° later
than) current for a capacitor, whereas voltage leads (peaks 90° ahead of) current for an inductor. This can be remembered by considering what happens when a capacitor in series with a resistor is connected to a battery: a large inrush of current precedes the build-up of the voltage across the capacitor terminals. If the capacitor is replaced by a coil, the opposite happens; the build-up of current is delayed by a back-voltage produced by the growing magnetic field. There is no need for convoluted reasoning in AC theory however; just remembering the sign of XC takes care of everything. Note however, that many technical articles follow the hallowed tradition of treating XC as negative in some statements and positive in others. This is done as an aid to comprehension, because it encourages the reader to re-derive all of the mathematics in order to find out what the writer was trying to say.
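The phase-to-time relationship derived earlier, Δt = φ/(2πf), can be checked with a short sketch; the numbers are illustrative:

```python
import math

def time_shift(phi_degrees, f):
    """Oscilloscope time difference: dt = phi / (2*pi*f), with phi in radians."""
    return math.radians(phi_degrees) / (2 * math.pi * f)

# A 45 degree phase difference at 1 MHz is 1/8 of the 1 us period:
dt = time_shift(45.0, 1e6)
print(dt)   # 1.25e-07 seconds, i.e. 125 ns
```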
In order to avoid misconceptions it is important to be aware that the vertical rod itself is not the antenna. To use the rod as a radiator, we must apply a voltage to the pair of terminals formed by it and the ground-plane, and so the antenna is the combination of the rod and the ground-plane. The input impedance of an electrically short (less than a quarter-wavelength long, i.e., < λ/4) vertical antenna looks predominantly like a very small capacitor, which is essentially the capacitance which exists between the vertical section and the ground. A small capacitor has a large negative reactance (recall XC = -1/(2πfC)), and so we need to place a large inductive reactance (XL = 2πfL) in series with the antenna to make the whole thing look like a resistor. If we now fill-in some of the details about the antenna and the loading coil, we are in a position to calculate the voltages across the antenna terminals and the loading coil for a given generator power, and also the overall efficiency of the complete antenna system (i.e., the proportion of the applied power which is actually radiated). Any mechanism which dissipates energy, i.e., consumes power, must look electrically like a resistance. The resistive part of the antenna input impedance is shown to contain two components, Ra and Rr. Ra represents the electrical losses of the antenna, due mainly to the RF resistance of the metal conductors used to make the rod and the ground plane and the dielectric losses (RF heating) of any insulating materials used. Rr is the radiation resistance of the antenna, i.e., a resistive component associated with energy radiated into space. Both Ra and Rr are in some sense distributed over the whole antenna, but they appear as a single resistive component (Ra+Rr) in the antenna input impedance. Take for example an antenna with a radiation resistance of 2Ω and an input reactance of -3000Ω.
These are the approximate values to be expected for vertical section and ground-plane radial lengths of about 7% of the wavelength at the frequency of operation, i.e., 0.07λ (graphs for estimation of radiation resistance for short antennas are given in the references listed below⁵ ⁶). If we are very careful about the materials used in the antenna system, we might keep the loss component Ra down to about 0.5Ω, so that the input impedance of the antenna will look like 2.5Ω of resistance and -3000Ω of reactance. To cancel the antenna reactance (Xa = -3000Ω), we need to place a coil having XL = +3000Ω in series with it. Such a 'loading coil' is normally placed out with the antenna, mainly because coils inside metal boxes have more losses than coils mounted in wide-open spaces, but even so, the coil will not be perfect and will have a distributed RF resistance which looks like another resistive component in the antenna input impedance. The amount of coil resistance is given by the Q of the coil, which is the ratio of reactance to loss resistance, i.e., QL = XL/RL. A well-made loading coil might have a Q of about 400, and so RL = XL/QL = 7.5Ω. With the reactance of the antenna now cancelled by the coil, the input impedance of the whole antenna system now looks like a pure resistance of 2.5 + 7.5 = 10Ω. Suppose we now decide to deliver 10 Watts (10W) from a generator (transmitter) into this 10Ω resistance. Knowledge of the power level enables us to calculate the antenna current and hence the voltages which appear between the various terminals, but a word of caution is in order before using the standard power formulae for this purpose. The expressions: "P=IV", "P=I²R", and "P=V²/R" are all deeply suspect because, if we try to convert them into vector expressions simply by changing the voltages and currents into vectors, then the equations which result will be nonsense because power (energy per unit-of-time) is a scalar (strictly, a pseudoscalar, as we will see later).
We need to develop some additional ideas on the subject of vectors before this matter can be fully resolved, but the standard expressions will balance if all of the vectors involved can drop a dimension. Hence we can concede that the expressions are true for voltage and current magnitudes, provided that the generator is driving a purely resistive load (a somewhat restrictive condition, but we happen to satisfy it here). Thus, using the standard formulae, we can work out that the current in the antenna will be
5 "Efficiency of Short Antennas", Stan Gibilisco W1GV, Ham Radio, Sept 1982, p18-21. Graphs of radiation resistance vs. electrical length for short verticals and dipoles. Efficiency calculations. 6 "How long is a piece of wire?" J J Wiseman, Electronics and Wireless World, April 1985, p24-25. Discussion of the efficiency (or lack thereof) of electrically short verticals. The effect of top loading.
|I| = I = √(P/R) = √(10/10) = 1A

and the voltage at the generator will be

|V| = V = √(PR) = √(10×10) = 10V.

Note however that the current will result in a voltage of 3000V (IX) across both reactances, and although these voltages are cancelled at the generator, we can certainly experience them as real by touching the junction between the rod and the loading coil (not recommended). In fact, the electric field-strength at the top of the loading coil is so great that a neon lamp or a small fluorescent tube held there will light without any wires connected to it (see photograph below). The actual voltage appearing across the coil, |VL|, can be obtained by using Pythagoras' Theorem, i.e.,

|VL| = I√(XL² + RL²) = 1×√(3000² + 7.5²) = 3000.01V

and the voltage across the antenna terminals (i.e., the voltage between the bottom of the rod and the ground plane) is

|Va| = I√(Xa² + [Ra+Rr]²) = 1×√(3000² + 2.5²) = 3000.001V.

These voltages are barely different from the voltages across the (theoretical) pure reactances, and reflect the fact that reactance dominates the impedances of both the coil and the antenna; but despite the reactive input impedance of the antenna we have nevertheless turned it into an effective radiator. One way to look at this is to say that by resonating the antenna with a loading reactance, we exploit the voltage magnification of the resulting tuned circuit in order to force power into a reactive load. We could, of course, do the same by sheer brute force; but that would involve using a generator with an output of just over 3000V to get a measly 1A into the antenna. One further point to note here is that the voltages calculated by the above methods are all RMS voltages. RMS, as mentioned earlier, stands for 'square-Root of the Mean of the Squares', this being a mathematical trick to find an equivalent constant (DC) voltage or current which gives the same heating effect as an alternating voltage or current.
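The antenna arithmetic can be replayed as a short script using the figures from the text (2Ω radiation resistance, 0.5Ω loss resistance, 3000Ω of loading-coil reactance with Q = 400, and 10W of drive):

```python
import math

P = 10.0           # generator power, watts
Rr = 2.0           # radiation resistance, Ohms
Ra = 0.5           # antenna loss resistance, Ohms
XL = 3000.0        # loading coil reactance, Ohms
QL = 400.0         # coil Q
RL = XL / QL       # coil loss resistance: 7.5 Ohms
R = Rr + Ra + RL   # total input resistance at resonance: 10 Ohms

I = math.sqrt(P / R)                       # antenna current: 1 A
V = math.sqrt(P * R)                       # generator voltage: 10 V
VL = I * math.sqrt(XL**2 + RL**2)          # coil voltage: ~3000.01 V
Va = I * math.sqrt(XL**2 + (Ra + Rr)**2)   # antenna terminal voltage: ~3000.001 V
efficiency = Rr / R                        # 0.2, i.e. 20%
print(I, V, VL, Va, efficiency)
```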
The voltages and currents referred to in the theory of impedance must be RMS values by definition, because that is the only way in which Ohm's law can be generalised to include both DC and AC (DC becomes a special case of AC with f = 0). The need for an RMS average arises because the ordinary average of a sinusoidal alternating voltage is zero (the voltage spends as much time being positive as it does negative). If however, we square the instantaneous voltage, we obtain a function which is proportional to the power it will deliver to a resistance. If we average that power function (i.e., take the mean of the squares) and then take the square root, we will obtain the equivalent direct voltage, i.e., the constant voltage which will deliver the same amount of power to a given resistance. The RMS average of a sine wave is the instantaneous peak value divided by the square-root of 2, i.e., VRMS = VPk/√2, and VPk = VRMS×√2. Thus if we calculate a voltage of 3000V RMS across the antenna terminals, the maximum instantaneous (peak) voltage will be 3000×1.4142 = 4243V, and it is this higher figure which must be used in calculating the voltage ratings of the components used. The final piece of information we can extract from the vertical antenna example under discussion is the efficiency of the system. In this case, all we have to do is note that the total input resistance was 10Ω, whereas the radiation resistance was 2Ω. This gives an efficiency of 2/10 or 20%, i.e., 2W radiated for 10W in, 20W radiated for 100W in. This, incidentally, is not a disaster, and represents a good figure for a short loaded-vertical; around 10W of SSB radiated from a reasonably high location being sufficient for worldwide short-wave communication in suitable atmospheric conditions.
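The RMS-to-peak conversion for a sine wave, used above for the component voltage ratings, is a one-liner; here it is as a checked sketch:

```python
import math

def peak_from_rms(v_rms):
    """For a sinusoid: V_pk = V_rms * sqrt(2)."""
    return v_rms * math.sqrt(2)

def rms_from_peak(v_pk):
    """For a sinusoid: V_rms = V_pk / sqrt(2)."""
    return v_pk / math.sqrt(2)

# 3000 V RMS across the antenna terminals peaks at about 4243 V:
print(round(peak_from_rms(3000.0)))   # 4243
```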
Voltage magnification in action
For the antenna in the illustration on the right, the frequency of operation is 1.84MHz and the physical length of the antenna assembly is 1.45m from the bottom of the rubber mounting base to the top of the neon lamp soldered to the tip. The section above the loading coil is 0.76m long (0.0047λ). The clamp holding the fluorescent tube is made from acrylic resin (Perspex), and there is no electrical connection between the tube and the whip. The glow from both lamps is visible at an input level of 1W, but since the photograph was taken on a bright summer's day (albeit in the shade), the power input to the antenna matching network (AMU) was turned up to 100W to overcome the daylight. The antenna is one of the author's old 160m mobile whips from the early 1970s. It is not an optimal design, but it gave useful service (a range of several miles using 1W of AM) despite having an efficiency of considerably less than 1%. The long thin shape of the coil does not give maximum Q, but it does cause the coil to radiate to some extent (some of its 'loss' resistance is actually radiation resistance). The 6W fluorescent tube was added for this demonstration, but the neon bulb at the tip was always used as a tuning aid. The generator in the photograph is a Kenwood TS430s HF transceiver with its mains power supply; and the AMU is an MFJ989C T-network. The input to the antenna is resistive when the length is adjusted correctly (about 25Ω, mainly due to the coil), and the AMU was used to transform this resistance to 50Ω, as required by the generator. Those wishing to reproduce this demonstration should note that, apart from the mains lead, there is no proper ground-plane for the set-up, and the author had to tune-up wearing rubber gloves in order to avoid getting burnt fingers. Mounting the antenna on a car is safer.
From this, we can determine a correction factor for the |I||V| power formula, by observing that the cosine (adjacent / hypotenuse) of the phase angle is P/|VI|, i.e.:
P = |V I| cosφ
or, after factoring out the pseudoscalar I:
P = |V| I cosφ
or, since I = |I|:
P = |V| |I| cosφ
Notice that cosφ is zero for φ = ±90° (no power is delivered to a pure reactance), and cosφ = 1 for φ = 0 (real and apparent power are the same for a pure resistance). Be aware also that the formula above appears in standard textbooks as:
" P = V I cosφ "
but unfortunately, there is nothing we can do to salvage this traditional version. We will prove later that the un-bold symbols V and I, when used in an equation, must be interpreted as phasors pointing at 0° or 180°, because that is the only way in which we can incorporate DC and AC into the same theory. V and I, however, can only point in the same direction when φ = 0. The standard formula is therefore internally inconsistent (a mathematical oxymoron). The best that can be said for it is that there is little choice but to assume V and I to be magnitudes in this instance, since the expression is nonsense otherwise. The quantity '|V||I|cosφ' is known as the scalar product or dot product of the two vectors, and is defined in the same way for all vectors (regardless of the number of dimensions):
A·B = |A| |B| cosφ
It is the component (shadow length) of B when projected onto the direction of A, multiplied by the length of A (and vice versa, i.e., A and B are interchangeable). Had we attacked the DC power formula "P=IV" with a foreknowledge of vector theory, we would have failed it on the grounds of dimensional inconsistency (P has too many dimensions) and deduced that the scalar product is required, i.e.:
P = V·I = |V| |I| cosφ      10.1
Instead, we attacked the problem backwards and discovered the definition of the scalar product. Note however, that there is a subtle difference between the general vector dot product and the phasor dot product, which will be discussed shortly. In the context of impedance, cosφ is known as the power-factor (PF), and is of particular interest to electricity generating companies, which prefer their customers to place pure resistances across the supply so that they do not have to run their generators into reactive loads.
Thus if a load such as an electric motor is inductive as well as resistive, a suitable capacitor placed across it or in series with it can be used to cancel the reactance and bring the power-factor to unity, i.e., φ = 0 and cosφ = 1. This brings the apparent power into coincidence with the actual power consumed, and has the effect of minimising the consumer's electricity bill as well as minimising the stress on the generators and power transmission equipment. Thus power-factor correction, in relation to electricity distribution, is equivalent to the business of bringing an antenna system into resonance. The reactance-cancelling step in antenna matching, and the insertion of a loading coil into a vertical antenna, can both perfectly well be regarded as forms of power-factor correction. Now, since power can only be delivered to the resistive part of an impedance, only that part of voltage-multiplied-by-current which corresponds to true power (i.e., the |I|²R component) can be measured in Watts. The reason is that power (the amount of energy delivered or work done in unit time) establishes the relationship between electricity and thermodynamics, and the connection is through energy dissipation. It is therefore the convention in electrical engineering to express apparent power in volt-amps (VA) and only true power in Watts. Many readers will already be aware that mains transformers and portable electric generators (for example) are rated in VA; the implication being that to get the full power output without over-stressing the device, it is necessary to make the apparent power in VA equal to the true power in Watts, i.e., to provide the transformer or generator with a resistive load. Since maximum power output will be associated with a particular value of load resistance; it transpires that all generators, not just radio transmitters, require impedance matching if the maximum allowable output is to be obtained.
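As a numerical illustration of power-factor correction, the following Python sketch uses an invented inductive load of 40 + j30 Ω at 50 Hz (illustrative values only, not taken from the text) and sizes a series capacitor to cancel the reactance:

```python
import math, cmath

f = 50.0                      # mains frequency, Hz (illustrative)
Z_load = complex(40, 30)      # an inductive motor load, 40 + j30 ohms
pf_before = math.cos(cmath.phase(Z_load))
print(f"Power factor before correction: {pf_before:.3f}")   # 0.800

# A series capacitor with X_C = -30 ohms cancels the inductive reactance:
C = 1 / (2 * math.pi * f * 30.0)
X_C = -1 / (2 * math.pi * f * C)
Z_corrected = Z_load + complex(0, X_C)
pf_after = math.cos(cmath.phase(Z_corrected))
print(f"C = {C*1e6:.1f} uF, power factor after: {pf_after:.3f}")  # 1.000
```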
One further point which should be noted on the subject of power, is that power flowing from a generator to a resistance is, by convention, positive. In DC circuits, this means that, if the voltage applied to a resistance is taken to be positive, then the current in the resistance must also be taken to be positive; and if the voltage is taken to be negative, then the current is negative. Also note that, in AC circuits, power is calculated from RMS voltages and currents. This means that it is already an average or steady-state quantity, and there is therefore never a need to compute the RMS value (it can be done, out of mathematical curiosity, but it is numerically not the same as
|VRMS||IRMS|cosφ; see ref 7). Thus the term "RMS Watts", commonly seen in the Hi-Fi literature, is nonsense and should be avoided.
7) "RMS watt, or not?", Lawrence Woolf, Electronics World, Dec 1998, p1043-1045. (Why VRMS×IRMS is not RMS power.)
is taken, there are two possibilities because q×q is the same as (-q)×(-q), i.e., (±q)² = q². Hence:
x + (b/2a) = ±√[ (b² - 4ac)/4a² ] = ±√(b² - 4ac) / 2a
finally, we subtract b/2a from both sides to obtain:
x = [ -b ± √(b² - 4ac) ] / 2a      12.3
which is, of course, the standard school formula for solving quadratic equations. The formula (12.3) looks innocuous enough, but what happens when 4ac is larger than b²? In that case, the solution for x has a term containing the square root of a negative number (i.e., a number which is negative when multiplied by itself) even though the basic rules of arithmetic demand that when a number is squared, the answer must always be positive. Take, for example, the seemingly innocent quadratic equation x² - x + 1 = 0. In this case: a=1, b=-1, and c=1, and the solution is:
x = (1/2) ± √(-3) / 2
The best simplification we can manage is to factor out the square root of -1, i.e.:
x = 0.5 ± 0.866√(-1)
Thus there are two solutions, x = 0.5 + 0.866√(-1) and x = 0.5 - 0.866√(-1), both of which contain a part which is a real number, and a part which is not a real number. That which is not real is imaginary, and so the oddball quantity '√(-1)' was given the symbol 'i' (by Leonhard Euler, 1707-1783), and this symbol is still used by mathematicians. When it became apparent to scientists researching into electricity that this branch of mathematics might be useful, however, the symbol 'i' had already been allocated to represent current, and so the next letter in the alphabet, 'j', was allocated for use in conjunction with electrical problems (here we will write the symbol in bold, to make it easier to spot). Thus we can write the un-simplifiable solution to the previous example as:
x = 0.5 ± 0.866j
That which is not simplifiable is complex, and so in this case, x is a complex number. 'j' is called the imaginary operator, because it operates on a number in such a way as to make it impossible to add it to a real number.
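The example can be checked directly with Python's complex arithmetic; a minimal sketch (not from the original text) solving x² - x + 1 = 0:

```python
import cmath

# Solve a*x^2 + b*x + c = 0 for the example x^2 - x + 1 = 0.
a, b, c = 1, -1, 1
disc = cmath.sqrt(b**2 - 4*a*c)   # square root of -3: an imaginary quantity
x1 = (-b + disc) / (2*a)
x2 = (-b - disc) / (2*a)
print(x1, x2)   # 0.5 +/- 0.866j, as in the text

# Both complex roots really do satisfy the original equation:
for x in (x1, x2):
    assert abs(x**2 - x + 1) < 1e-12
```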
Once 'j' (or 'i') was discovered, mathematicians went on to find general solutions for cubic equations and quartic equations (i.e., equations involving x³ and x⁴), and it was proved that no other type of imaginary operator was required. This means that all numbers can be reduced to the sum of a real part and an imaginary part, and expressed in the general form:
x = a + jb
with the proviso that sometimes b=0 and the number is purely real, and sometimes a=0 and the number is purely imaginary. Thus it is not so much that complex numbers are peculiar, but that real numbers are a special class of complex numbers which just happen to have the imaginary part equal to zero. Once it was understood that numbers are in general complex, the next step was to work out what that meant. The clue comes from our earlier discussion of vectors. Firstly, we may observe that all real numbers must lie on a line stretching between -∞ and +∞. Secondly, we may observe that j causes imaginary numbers to exist in a dimension separate from real numbers. Therefore the effect of j is to rotate the number-line through 90°. Thirdly, we may observe that the numbers 0 and 0+j0 are the same, so that the real and imaginary number-lines must cross at 0. The upshot is that complex numbers (i.e., all numbers) can be represented as points in a plane, which is the same as saying that the number a+jb can be plotted as a point on a graph of a vs. b. That graph is, of course, number space, and maps in this space are known as Argand diagrams.
We must observe, at this point, that complex numbers are so like impedances that, had they been discovered by electrical engineers, they might well have been named after impedances. Naturally, since complex numbers are the general class of numbers to which all numbers belong, they are essential for solving all kinds of mathematical problems; but nowhere is the association so direct and so profound as here, where all we have to do to convert an impedance into a complex number is to write:
Z = R + jX
This says that impedance is a quantity with a real part R and an imaginary part X. The original terms 'real' and 'imaginary' are also perfectly appropriate, because the power (P = I VR) dissipated in a resistance is indeed real, while the apparent power (P = I |VX|) dissipated in a pure reactance is entirely imaginary. Thus it is hard to make a logical distinction between the two statements: "impedances can be represented by complex numbers" and "impedances are complex numbers". It follows also, from the relationships implicit in Ohm's law, that if impedances can be treated as complex numbers, then so too can voltages and currents. This does not mean that these objects have somehow ceased to be vectors however, far from it. The complex number form is just another two-dimensional vector representation, which complements the rectangular and polar forms we have already met. In fact, it is merely a version of the rectangular form in which the 90° difference between the dimensions is imposed by the j operator; and a vector always behaves in the same way regardless of how it is defined. This minor change makes a huge difference however, because it allows a phasor to be written as an ordinary algebraic sum. An expression with j in it might not seem ordinary of course; but it is so in the sense that the existence of j is required by the rules of common arithmetic, and so j is by definition subject to those rules.
The complex form of a phasor makes the rectangular form effectively redundant. The transformations from the complex to the polar form are given below, and are very similar to the transformations given earlier in table 6.3.
Complex form:  Z = R + jX  ;  Z = |Z|( cosφ + j sinφ )
Polar form:  Z( √[R² + X²] , Arctan[X/R] )  ;  Z( |Z| , φ )      12.4
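These transformations map directly onto Python's `cmath` library; a short sketch (illustrative values, not from the text) converting a complex impedance to polar form and back:

```python
import cmath, math

Z = complex(3, 4)              # Z = R + jX = 3 + j4 ohms (illustrative)
mag, phi = cmath.polar(Z)      # polar form: (|Z|, phase in radians)
print(mag, math.degrees(phi))  # |Z| = 5.0, phase about 53.13 degrees

# Consistency with the transformations in (12.4):
assert abs(mag - math.sqrt(3**2 + 4**2)) < 1e-12
assert abs(phi - math.atan2(4, 3)) < 1e-12

# And back again: Z = |Z|(cos(phi) + j sin(phi))
Z_back = cmath.rect(mag, phi)
assert abs(Z - Z_back) < 1e-12
```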
Notice also that j can be regarded as a phasor operator, because its effect on an algebraic expression is to turn that expression into a phasor (another good reason for writing j in bold). Hence, in the matter of writing properly balanced vector equations, we may note that if a live phasor (i.e., one which has not been turned into a scalar by taking a magnitude or a scalar product) exists on one side of the '=' symbol, then there must be a live phasor or an expression with j in it (i.e., a live phasor) on the other side.
Euler's Formula: For those familiar with exponents, note that:
cosφ + j sinφ = e^(jφ)
This equation is known as Euler's formula, and defines the relationship between algebra and trigonometry; where 'e' is sometimes referred to as Euler's number and is, to more decimal places than you'll probably ever need:
2.718 281 828 459 045 235 360 287 471 352 662 497 757 247 093 699 959 574 966 967 627 724 076 630 353 547 594 571 382 178 525 166 427 427 466 391 932 003 059 921 817 413 596 629 043 572 900 334 295 260 595 630 738 132 328 627 943 490 763 . . . . etc., etc.
Z = [ ( R1R2 - X1X2 ) + j( R1X2 + X1R2 ) ] / [ ( R1 + R2 ) + j( X1 + X2 ) ]
Now multiply numerator and denominator by the complex conjugate of the denominator:
Z = [ ( R1R2 - X1X2 ) + j( R1X2 + X1R2 ) ][ ( R1 + R2 ) - j( X1 + X2 ) ] / { [ ( R1 + R2 ) + j( X1 + X2 ) ][ ( R1 + R2 ) - j( X1 + X2 ) ] }
and multiply out the terms in the denominator to show that it is now real:
Z = [ ( R1R2 - X1X2 ) + j( R1X2 + X1R2 ) ][ ( R1 + R2 ) - j( X1 + X2 ) ] / [ ( R1 + R2 )² + ( X1 + X2 )² ]
The terms in the numerator are now multiplied out and rearranged so as to separate the real and imaginary parts, i.e., the numerator is put into the form a+jb as follows:
Z = { ( R1R2 - X1X2 )( R1 + R2 ) + ( R1X2 + X1R2 )( X1 + X2 ) + j[ ( R1X2 + X1R2 )( R1 + R2 ) - ( R1R2 - X1X2 )( X1 + X2 ) ] } / [ ( R1 + R2 )² + ( X1 + X2 )² ]
Simplification of this expression involves multiplying out the brackets and crossing out any pairs of terms which are equal and opposite:
Z = { R1²R2 + R1R2² - X1X2R1 - X1X2R2 + X2X1R1 + R1X2² + X1²R2 + X1X2R2 + j[ R1²X2 + R1R2X2 + X1R1R2 + X1R2² - X1R1R2 - R1R2X2 + X1²X2 + X1X2² ] } / [ ( R1 + R2 )² + ( X1 + X2 )² ]
Which leaves us with:
Z = { [ R1²R2 + R2²R1 + R1X2² + R2X1² ] + j[ R1²X2 + R2²X1 + X1²X2 + X2²X1 ] } / [ ( R1 + R2 )² + ( X1 + X2 )² ]
This solution can be written in various ways, depending on preference; e.g.:
Z = { [ R1R2(R1+R2) + R1X2² + R2X1² ] + j[ X1X2(X1+X2) + X1R2² + X2R1² ] } / [ ( R1 + R2 )² + ( X1 + X2 )² ]      14.1
or:
Z = { [ R1(R2²+X2²) + R2(R1²+X1²) ] + j[ X1(R2²+X2²) + X2(R1²+X1²) ] } / [ ( R1 + R2 )² + ( X1 + X2 )² ]      14.1a
The real part of expression (14.1) is R, and the imaginary part is X, and so we can write:
R = [ R1R2(R1+R2) + R1X2² + R2X1² ] / [ (R1 + R2)² + (X1 + X2)² ]
or, alternatively, using expression (14.1a):
R = [ R1(R2²+X2²) + R2(R1²+X1²) ] / [ (R1 + R2)² + (X1 + X2)² ]
and:
X = [ X1X2(X1+X2) + X1R2² + X2R1² ] / [ (R1 + R2)² + (X1 + X2)² ]
or:
X = [ X1(R2²+X2²) + X2(R1²+X1²) ] / [ (R1 + R2)² + (X1 + X2)² ]
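The derivation can be spot-checked numerically: the expanded formula must agree with direct complex arithmetic for any pair of impedances. A minimal sketch (the component values are arbitrary illustrations):

```python
# Check the parallel-impedance formula (form 14.1a) against Z1*Z2/(Z1+Z2).
R1, X1 = 10.0, 25.0      # illustrative values, ohms
R2, X2 = 5.0, -40.0

Z1, Z2 = complex(R1, X1), complex(R2, X2)
Z_direct = Z1 * Z2 / (Z1 + Z2)

D = (R1 + R2)**2 + (X1 + X2)**2
R = (R1*(R2**2 + X2**2) + R2*(R1**2 + X1**2)) / D
X = (X1*(R2**2 + X2**2) + X2*(R1**2 + X1**2)) / D

assert abs(Z_direct - complex(R, X)) < 1e-9
print(R, X)
```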
The formula (and variants) given above for impedances in parallel, while not exactly memorable, has the advantage of being completely general. First note that if we put X1=0 and X2=0, then all of the reactive terms vanish and we are left with the formula for resistors in parallel, i.e., R=R1R2/(R1+R2). Similarly, if we put R1=R2=0, we end up with the parallel reactance formula X=X1X2/(X1+X2). More usefully however, we can put only X2=0 and find out what happens when a resistance is placed in parallel with an impedance, and we can put R2=0 and find out what happens when a pure reactance is placed in parallel with an impedance. The latter operation is of particular importance in the matter of devising and analysing antenna matching networks. Dimensional consistency The solution to the parallel impedance problem is our first example of what might be called a 'messy' mathematical derivation. As such, it is fairly typical of circuit analysis problems, which involve no difficult logical steps, but tend to expand into large numbers of terms, many pairs of which subsequently turn out to be equal and opposite and so cancel. Thus the problem expands alarmingly, and then contracts again into one or more relatively simple expressions. It can be difficult to keep track of the various parts of the equation when carrying out such manipulations, which means that mistakes are likely to occur. There is however a simple reality-check, which identifies invalid terms and gives an immediate indication of the likely correctness of the result. This is the test of dimensional consistency which, with a certain amount of practice, can be carried out at a glance. The rules are as follows: If two quantities are to be added together (or subtracted) it must be possible to express them in the same units. It would make no sense to add a distance in metres to a temperature in °C.
It would also make no sense to add a distance in metres to a distance in centimetres, but in that case the distance in centimetres can be divided by 100 to convert it into metres, and then the addition can be performed.
It follows that, if a '+' or a '-' symbol appears anywhere in an equation, the dimensions of the quantities on either side of that symbol must be the same. An equation, which supposedly represents a certain quantity, must have dimensions appropriate to that quantity. Take, for example, the expression for the real part of two impedances in parallel, as derived above:
R = [ R1R2(R1+R2) + R1X2² + R2X1² ] / [ (R1 + R2)² + (X1 + X2)² ]
The truth of this statement is not immediately obvious, but a check of dimensional consistency can very quickly tell us if it is capable of being true. In this case, the denominator (the bottom part) of the fraction has two brackets each containing quantities having the units of resistance (Ohms). Hence the terms (R1+R2)² and (X1+X2)² have dimensions of [Ω²], and the overall dimensions of the denominator are [Ω²]. In the case of the numerator, there are three terms to be added, each having the dimensions of [Ω³], and so the overall dimensions of the numerator are [Ω³]. Dividing the dimensions of the numerator by the dimensions of the denominator we obtain: [Ω³]/[Ω²] = [Ω]; and so the equation is dimensionally consistent and represents a quantity which can be expressed in Ohms. It is also possible to test the dimensional consistency of equations involving mixed units. The point here is that units have aliases, which are composites of other units; and so we can check any equation, provided that we know the relationships between the units used. In the context of circuit analysis, these relationships are easily obtained, because they are embedded in the basic formulae from which the mathematical argument is constructed. Ohm's law, V/I=Z, for example, tells us that Ohms are equivalent to Volts divided by Amperes, and so a quantity having the latter dimensions, i.e., a voltage divided by a current, may legitimately replace a quantity measured in Ohms.
Similarly, the reactance laws XL = 2πfL and XC = -1/(2πfC) tell us that Ohms can also be replaced with [Henrys × radians/second], or by [1/(Farads × radians/second)]. Thus we should not be confused by structures such as:
Z = R + j{ 2πfL - 1/(2πfC) }
The bracket after the j is internally consistent, and represents a quantity measured in Ohms.
components used in HF antenna matching applications, RC will be of the order of 0.1Ω, and RL typically a few Ohms. In the general electronic literature, several different definitions are used for the resonant frequency of a parallel tuned circuit; the alternatives being the frequency at which the impedance of the circuit has its largest magnitude, and the frequency at which XL = -XC. Here however, we will adopt the most straightforward definition, which is the frequency at which the impedance is purely resistive (also known as the 'unity power-factor frequency'). We can find this frequency by setting the imaginary part equal to zero in equation (14.1) above, i.e.:
X = [ XCXL(XC + XL) + XCRL² + XLRC² ] / [ (RC + RL)² + (XC + XL)² ] = 0
(where the subscripts 1 and 2 have been changed to C and L as befits the current problem). Now notice that, to make the reactance equal to zero, we only need to make the numerator of this expression equal to zero, i.e., we can ignore the denominator. Hence:
XCXL(XC + XL) + XCRL² + XLRC² = 0      15.1
We now need to make the frequency dependence of this expression explicit by using the substitutions: XC = -1/(2πf0C) and XL = 2πf0L, i.e.:
-( 2πf0L / 2πf0C )( 2πf0L - 1/2πf0C ) - ( RL² / 2πf0C ) + ( 2πf0L RC² ) = 0
The resonant frequency can now be found by re-arranging this expression to get f0 on its own. Also, since we know that the series-resonance formula is an approximation for the expression we are about to derive, we expect the result to look like the series-resonance formula with an additional correction term or factor. We can begin by multiplying out the first two brackets. Hence:
-( 2πf0L² / C ) + ( L / 2πf0C² ) - ( RL² / 2πf0C ) + ( 2πf0L RC² ) = 0
Now we will put all of the terms containing 2πf0 on one side, and the terms containing 1/(2πf0) on the other.
2πf0( LRC² - L²/C ) = ( 1/2πf0 )[ ( RL²/C ) - ( L/C² ) ]
Then multiply both sides by 2πf0, and divide both sides by (LRC² - L²/C):
(2πf0)² = [ ( RL²/C ) - ( L/C² ) ] / ( LRC² - L²/C )
and factor out 1/LC from the right-hand side:
(2πf0)² = ( 1/LC )( RL² - L/C ) / ( RC² - L/C )
Here we will also multiply top and bottom by -1 to put the L/C terms first, L/C generally being much larger than the resistance-squared terms, hence:
(2πf0)² = ( 1/LC )( [L/C] - RL² ) / ( [L/C] - RC² )
which rearranges to:
f0 = { 1 / [ 2π√(LC) ] } √[ ( L/C - RL² ) / ( L/C - RC² ) ]      15.2
Thus we find that the resonant frequency of a parallel tuned circuit is the same as that for a series tuned circuit except for a correction factor √[ (L/C - RL²)/(L/C - RC²) ], which is usually close to unity. Notice that this factor is equal to 1 if RL and RC are zero; and also that the factor is 1 when RL = RC.
Example: A 3μH coil is connected in parallel with a 42pF capacitor. The approximate resonant frequency is: 1/(2π√[LC]) = 1/(2π√[3×10⁻⁶ × 42×10⁻¹²]) = 14.178649MHz. In the region of 14MHz, the coil has a loss resistance of 2Ω and the capacitor has an equivalent series resistance (ESR) of 0.1Ω. Thus L/C = 71428.57Ω², RL² = 4Ω², and RC² = 0.01Ω². Hence the correction factor is: √[(71428.57 - 4)/(71428.57 - 0.01)] = √0.99994414 = 0.99997207. The precise resonant frequency (to the nearest 1Hz) is therefore 0.99997207 × 14.178649 = 14.178253MHz.
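The worked example can be reproduced in a few lines of Python (a numerical check only, using the component values given in the text):

```python
import math

# 3 uH coil, 42 pF capacitor, R_L = 2 ohms, R_C = 0.1 ohms ESR.
L, C = 3e-6, 42e-12
R_L, R_C = 2.0, 0.1

# Ideal-case (lossless) series-resonance formula:
f_ideal = 1 / (2 * math.pi * math.sqrt(L * C))
print(f"Ideal-case resonance: {f_ideal/1e6:.6f} MHz")   # about 14.1786 MHz

# Correction factor from equation (15.2):
LC_ratio = L / C                                         # 71428.57 ohms squared
factor = math.sqrt((LC_ratio - R_L**2) / (LC_ratio - R_C**2))
f0 = f_ideal * factor
print(f"Correction factor: {factor:.8f}")                # about 0.99997207
print(f"Corrected resonance: {f0/1e6:.6f} MHz")
```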
The quantity L/C is called the "L/C ratio" of the tuned circuit (and it has units of 'Ohms squared'). Note that:
L/C = -XLXC = |XLXC|
It will turn out that the L/C ratio is an important parameter of resonant circuits. Also, there is some precedent for referring to the square root of the L/C ratio as the characteristic resistance of the tuned circuit, by analogy with the characteristic impedance of a lossless transmission line, which is R0 = √(L/C), where L is inductance per unit of length and C is capacitance per unit of length; but the lengths cancel, and so the characteristic resistance of an ideal transmission line is the square root of its L/C ratio. In the example given above, the resonant frequency differed from the ideal case by only 0.0028% or 396Hz, the reason being that the L/C ratio was very large in comparison to the squares of the loss resistances. In HF radio applications, the L/C ratios of tuned circuits are generally in the order of several tens of thousands of Ω², whereas the value tolerances of radio components are seldom better than 1% and often considerably worse. In order to obtain an exact resonant frequency, it is necessary to make either the coil or the capacitor adjustable; and the required adjustment range will easily swallow any deviation caused by using the ideal-case formula f0 = 1/[2π√(LC)]. We may therefore conclude that, in normal circumstances, the assumption of zero losses may be perfectly acceptable when calculating the resonant frequency of a parallel-tuned circuit; but, as we shall see in the next section, it is not acceptable when calculating the impedance at resonance.
Rp0 = [ RLRC + (L/C) ] / (RL + RC)      16.2
Notice that this formula has lost all of its reactance terms, which is very convenient. If we apply it to our example data, where L/C = 71428.57Ω², we obtain:
Rp0 = (2×0.1)/2.1 + 71428.57/2.1 = 0.095 + 34013.60
Rp0 = 34.0137 KΩ
The approximation is almost exact for components of moderate Q. Also we may observe that the term RLRC/(RL+RC) is much smaller than the term (L/C)/(RL+RC), and given that we are unlikely to know the component resistances very accurately, we might as well drop the first term. Hence the appropriate formula for calculating the dynamic resistance is:
Rp0 = (L/C) / (RL + RC)      16.3
This equation is an excellent approximation for the dynamic resistance, but strangely, it is not the one offered in most textbooks. The usual approximation is that, in addition to XL + XC being zero, the ESR of the capacitor is assumed to be zero. This causes all of the terms containing RC in equation (16.1) to disappear, and gives rise to a considerable simplification, viz.:
Rp0 = ( RL² + XC² ) / RL
i.e.,
Rp0 = RL + XC²/RL
If we apply this formula to our example data we obtain:
Rp0 = 71432.56399 / 2 = 35.7163KΩ
In this case the deviation from the true value is 1702.6Ω, or 5%, which may be a reasonable approximation for many purposes, but needs to be treated with caution. Also, the failure to eliminate reactance from the formula makes computation more difficult.
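The comparison between the approximations is easily reproduced; a short Python sketch using the example values from the text:

```python
# Dynamic resistance by the formulae above, for the example component values:
# L/C = 71428.57 ohms squared, R_L = 2 ohms, R_C = 0.1 ohms.
LC_ratio = 3e-6 / 42e-12
R_L, R_C = 2.0, 0.1

Rp0_162 = (R_L * R_C + LC_ratio) / (R_L + R_C)   # equation (16.2)
Rp0_163 = LC_ratio / (R_L + R_C)                 # equation (16.3)
Rp0_text = 71432.56399 / R_L                     # textbook form, X_C^2 / R_L

print(f"(16.2):    {Rp0_162:.1f} ohms")          # about 34013.7
print(f"(16.3):    {Rp0_163:.1f} ohms")          # about 34013.6
print(f"textbook:  {Rp0_text:.1f} ohms, "
      f"error {Rp0_text - Rp0_162:.1f} ohms")    # about 5% high
```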
combination into a mathematical object; we must ensure that the quantities on either side of the // symbol are of the same type and that they are expressed in the same units. In this case we can fix the problem by noting that, if the report is to have any useful meaning, a test frequency must be stated somewhere. If that frequency is, say, 14MHz, then the reactance of the capacitor becomes -1/(2πfC) = -113.7Ω, and its impedance (assuming that losses are negligible) is 0 - j113.7Ω. Hence we can re-state the test load as (68 // -j114)Ω. This is the same as saying (68+j0 // 0-j114)Ω; and is, of course, a complete statement of the load impedance in the form Z1//Z2, which can be converted into the R+jX form if so desired.
A particular logic emerges from these observations and it is important to be aware of it:
17.1) A resistance is an impedance. Resistances and impedances are the same type of object. A resistance in parallel with an impedance is an impedance. A resistance is simply an impedance which happens to have its imaginary part equal to zero. This means, incidentally, that the preferred pseudoscalar symbol for Z is usually R, rather than Z.
17.2) A reactance is not an impedance. The statement: Z = 68 // 114 has a completely different meaning to the statement: Z = 68 // -j114 (the first is a resistance in parallel with a resistance, the second is a resistance in parallel with a reactance). Mathematically, a reactance cannot be combined directly with an impedance, but a reactance can be converted into an impedance by multiplying it by j. Looking at this another way: impedance and reactance have reference directions which are 90° apart. To make them compatible, it is necessary to rotate one of them through 90°.
17.3) Scalability is preserved.
When the double slash notation is used to create a mathematical object, i.e., the same type of phasor exists on both sides of the // symbol, it has the useful property that a common factor can be multiplied-in or divided-out of the parallel object, i.e.: sZ1 // sZ2 = s (Z1 // Z2) Proof: sZ1 // sZ2 = sZ1 sZ2 / (sZ1 + sZ2 ) = s Z1 Z2 / (Z1 + Z2 ) = s (Z1 // Z2) 17.4) The associative rule. The double slash notation can be extended to represent any number of impedances in parallel: Z1 // Z2 // Z3 //....// Zn = 1 / [ (1/Z1) + (1/Z2) + (1/Z3) + . . . . . + (1/Zn) ] and the associative rule of arithmetic (and linear electrical devices in parallel) is obeyed, i.e.: (Z1 // Z2) // Z3 = Z1 // Z2 // Z3 17.5) Double-slash product definition. The // notation implies a specialised kind of phasor multiplication, which we might call the double-slash product or the parallel product of a pair of phasors. Since its use in conjunction with parallel capacitors is pointless, we will adopt the following strict mathematical definition: a // b = ab/(a+b)
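The properties 17.3 to 17.5 are easy to verify numerically. A minimal Python sketch (the hypothetical helper `par` and the component values are inventions for illustration):

```python
def par(*zs):
    """Parallel (double-slash) product: Z1 // Z2 // ... = 1/(1/Z1 + 1/Z2 + ...)."""
    return 1 / sum(1 / z for z in zs)

Z1, Z2, Z3 = complex(68, 0), complex(0, -114), complex(50, 20)

# Two-element definition: a // b = ab/(a + b)
assert abs(par(Z1, Z2) - Z1 * Z2 / (Z1 + Z2)) < 1e-9

# Scalability (17.3): s*Z1 // s*Z2 = s*(Z1 // Z2)
s = 3.7
assert abs(par(s * Z1, s * Z2) - s * par(Z1, Z2)) < 1e-9

# Associativity (17.4): (Z1 // Z2) // Z3 = Z1 // Z2 // Z3
assert abs(par(par(Z1, Z2), Z3) - par(Z1, Z2, Z3)) < 1e-9

print(par(Z1, Z2))   # the (68 // -j114) ohm example load in R + jX form
```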
R = Rp Xp² / ( Rp² + Xp² )
X = Xp Rp² / ( Rp² + Xp² )      18.1
Further pieces of information which we can extract from the parallel-to-series transformation, and which will be useful later, are the phase angle, magnitude and Q of an impedance in its parallel form: Phase angle and Q of an impedance in parallel form: The phase angle for an impedance in its series form was given earlier as expression (6.2):
φ = Arctan(X / R)
By using expression (18.1) above we can substitute for X and R to obtain:
φ = Arctan( Xp Rp² / Rp Xp² )
i.e.,
φ = Arctan( Rp / Xp )
which also tells us that X/R = Rp/Xp, i.e., the ratio of resistance to reactance of an impedance in its series form is the inverse of the ratio for the impedance in its parallel form. Also, since we know that |X|/RLoss is an expression for the Q of an electrical component, we may further note that component Q can be expressed as:
Qcomp = RpLoss / |Xp|
(the higher the parallel loss resistance, the higher the Q). Magnitude of an impedance in parallel form: The magnitude of an impedance in its series form is given by (6.1): |Z| = √(R² + X²). Substituting for R and X using expression (18.1) we obtain:
|Z| = √[ { (Rp Xp²)² + (Xp Rp²)² } / ( Rp² + Xp² )² ] = √[ { Rp² Xp² ( Xp² + Rp² ) } / ( Rp² + Xp² )² ]
We can take the square root of the Rp²Xp² term and so factor it out of the square-root part of the expression, provided that we only use the positive result (magnitudes are always positive). Hence:
|Z| = | Rp Xp / √( Rp² + Xp² ) |      18.2
A convenient rearrangement of this expression can be obtained by forcibly factoring Xp² from the denominator:
|Z| = | Rp Xp / { Xp √[ (Rp/Xp)² + 1 ] } |
Now, since Rp and (Rp/Xp)² are always positive, we can drop the magnitude brackets to obtain:
|Z| = Rp / +√[ (Rp/Xp)² + 1 ]      18.3
This form is particularly useful for frequency response calculations, because it allows the reactance contribution to be treated as a correction factor:
1 / √[ (Rp/Xp)² + 1 ]
which goes to unity (1) when the reactance is large in comparison to the resistance.
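The transformation and the magnitude formula (18.3) can be checked numerically; a sketch with invented parallel-form values:

```python
import math

# Parallel form -> series form, per (18.1), plus the magnitude check (18.3).
Rp, Xp = 1000.0, -250.0      # illustrative parallel-form values, ohms

D = Rp**2 + Xp**2
R = Rp * Xp**2 / D           # series resistance
X = Xp * Rp**2 / D           # series reactance

# Magnitudes agree: sqrt(R^2 + X^2) equals Rp / sqrt((Rp/Xp)^2 + 1)
Z_mag_series = math.sqrt(R**2 + X**2)
Z_mag_1803 = Rp / math.sqrt((Rp / Xp)**2 + 1)
assert abs(Z_mag_series - Z_mag_1803) < 1e-9

# The resistance-to-reactance ratio inverts: X/R equals Rp/Xp
assert abs(X / R - Rp / Xp) < 1e-12
print(R, X, Z_mag_series)
```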
(The symbol "≡" means "is by definition equal to". The symbol "//" means "in parallel with".) As the diagram above illustrates, the parallel impedance representation allows us to visualise the circuit as an ideal parallel resonator with a resistance connected across it. This separates the reactive and the resistive parts of the problem and tells us immediately that unity power-factor resonance occurs when XLp = -XCp, and that the dynamic resistance is given by the value of RLp // RCp at f0. We can, of course, relate the parallel impedance form of the resonator to the series impedance form by using the transformations given in the previous section (equations 19.3), i.e.:
RCp = (RC² + XC²) / RC      20.1
XCp = (RC² + XC²) / XC      20.2
RLp = (RL² + XL²) / RL      20.3
XLp = (RL² + XL²) / XL      20.4
Using the appropriate transformations (20.2 and 20.4), the resonance condition XLp = -XCp becomes:
(RL² + XL²) / XL = -(RC² + XC²) / XC
which can be rearranged to:
XC(RL² + XL²) + XL(RC² + XC²) = 0
and then to:
XCXL(XC + XL) + XCRL² + XLRC² = 0
We have seen this expression before as equation (15.1), and so the derivation may continue as in section 15 to give the parallel resonance formula (15.2):
f0 = { 1/[ 2π√(LC) ] } √[ (L/C - RL²)/(L/C - RC²) ]
The dynamic resistance Rp0 is given by:
Rp0 = RLp // RCp
Hence using the transformations (20.1) and (20.3) we have:
Rp0 = { [ (RL² + XL²)/RL ][ (RC² + XC²)/RC ] } / { [ (RL² + XL²)/RL ] + [ (RC² + XC²)/RC ] }
which simplifies to:
Rp0 = (RL² + XL²)(RC² + XC²) / [ RC(RL² + XL²) + RL(RC² + XC²) ]      20.5
Thus we obtain another formula for the dynamic resistance of a parallel resonator, and it is interesting to compare it with equation (16.1), which was our original derivation (here we show it rearranged slightly):
Rp0 = [ RC(RL² + XL²) + RL(RC² + XC²) ] / [ (RL + RC)² + (XL + XC)² ]      20.6
The two formulae are radically different in appearance; but it is easy to verify, by plugging in the numbers from the example in section 16, that they both give exactly the same answer. This leaves the issue of which one of them is the best simplification; and the answer in this case is that it is equation (20.6). We can tell by looking at the power or degree of the numerator and denominator of each equation. Observe first that all of the quantities involved in the expressions are measured in Ohms. Hence the numerator of (20.5) has dimensions of Ω⁴ and the denominator has dimensions of Ω³. In equation (20.6) however, the numerator has dimensions of Ω³ and the denominator Ω². Hence the numerator of (20.6) is of lower degree than that of (20.5), and the denominators likewise. This means that (20.5) can be simplified further and ultimately transformed into (20.6); although for anyone who cares to try it, the manipulations required are laborious, and require the use of equation (15.1) as a substitution. Something more tractable happens however when we multiply equations (20.5) and (20.6) and take the square root to obtain a new expression for Rp0, i.e., we take the geometric mean of the two formulae. In this case the denominator of (20.5) cancels the numerator of (20.6) and we obtain:
Rp0 = √{ (RL² + XL²)(RC² + XC²) / [ (RL + RC)² + (XL + XC)² ] }
Here we can make the following simplifying assumptions:
1) Since XL is normally much greater than RL in radio circuits, RL² can be deleted from the numerator without making much difference.
2) Since XC is also usually much greater than RC, RC² can be deleted from the numerator without making much difference.
3) If the Qs of the resonator components are reasonably high, (XL + XC)² is very nearly zero at resonance and can therefore be deleted from the denominator without making much difference.
The result is:
Rp0 = √[ XL² XC² / (RL + RC)² ]      20.7
This expression can be simplified by observing that everything inside the square root bracket is squared, but in doing so we must be mindful of a common fallacy. The square root of the square of a number is not the number itself. A square root always has two solutions, one positive, one negative; and if only one of the solutions can be true, additional information is required for selection of the correct one. In this case, we know that Rp0 must be positive if the network is passive, and so we accept the positive square roots; but note that in section 6 we defined the positive square root of a square as a magnitude, i.e.; +(X) = |X| This rule must be strictly applied, because simply deleting the superscripts and the square root symbol would have given us a negative value for Rp0 because XC is negative. Hence: Rp0 = |XL||XC| / (RL + RC) We have noted before that: |XL||XC| = L/C hence:
X2 X2 L C
R LRC
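The equivalence of equations (20.5), (20.6) and (20.7) at resonance, and the quality of the high-Q approximation, can be checked numerically. The Python sketch below uses hypothetical branch resistances (not the text's section 16 example) with a 1μH / 100pF combination (L/C = 10⁴):

```python
import math

# Hypothetical branch resistances, for illustration only
RL, RC = 5.0, 2.0
L, C = 1e-6, 100e-12                    # L/C = 10^4 ohms squared

# Resonant angular frequency from the parallel resonance formula (15.2)
w0 = (1/math.sqrt(L*C)) * math.sqrt((L/C - RL**2)/(L/C - RC**2))
XL = w0*L                               # inductive reactance at resonance
XC = -1/(w0*C)                          # capacitive reactance at resonance

# Equation 20.5
r205 = (RL**2 + XL**2)*(RC**2 + XC**2) / (RC*(RL**2 + XL**2) + RL*(RC**2 + XC**2))
# Equation 20.6
r206 = (RC*(RL**2 + XL**2) + RL*(RC**2 + XC**2)) / ((RL + RC)**2 + (XL + XC)**2)
# Equation 20.7: the geometric mean of the two
r207 = math.sqrt(r205*r206)
# High-Q approximation: |XL||XC|/(RL+RC) = (L/C)/(RL+RC)
approx = (L/C)/(RL + RC)

assert abs(r205 - r206) < 1e-6 * r205   # identical at the resonant frequency
assert abs(r207 - approx) < 0.01 * approx
```

The exact agreement of (20.5) and (20.6) occurs only at the frequency given by (15.2), because the proof of their equality requires equation (15.1) as a substitution.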
Rp0 = (L/C) / (RL + RC)
which we have seen before as equation (16.3). While it is instructive to attack a derivation from several directions and verify that all approaches lead to the same conclusion, the point of the parallel impedance representation is that it often makes problems easier to solve. The parallel resonator is a good example because the parallel representation gives a direct separation of the resistive and reactive parts of the problem. A further and very important point however, is that we do not use the parallel representation with a view to converting it into the series form at the earliest opportunity. It is simply another way of expressing impedance; and it is no less authoritative than the series form. Hence if we have data for an inductor or capacitor in series form, we can transform it into the parallel form and use it like that. The parallel form may seem less authoritative than the series form because the expression for Rp (equation 19.3a) has reactance in it, and so explicitly varies with frequency. In reality however, the resistive component in the series form also varies with frequency, due to a variety of frequency dependent losses such as skin effect and dielectric absorption [see "Components and Materials"], capacitive and inductive coupling to resistive materials in the vicinity of the component, and of course our old friend radiation. Thus, when solving problems using simple circuit models, we need to be aware that resistances inserted to represent losses are expected to vary with frequency, regardless of representation. The practical problem of finding the dynamic resistance of a parallel resonator therefore becomes that of measuring the impedances of the components at a frequency reasonably close to the desired resonance, transforming the losses into parallel resistances, and taking the parallel combination of those.
(L/C - RL²) / (L/C - RC²)
will not resonate. The same argument applies, of course, to the capacitive branch. The parallel resonance formula can therefore be seen to tell us that true (i.e., real) resonance cannot occur if the resistance in either branch rises above a certain critical value, that value being the square-root of the L/C ratio (the characteristic resistance), i.e.:
R0 = √(L/C)
If the resistance in one branch rises above √(L/C), then the current in that branch will always be too feeble to bring the system into resonance. What happens instead is that the phase of the total current I can approach and move away from the phase of the generator voltage as the frequency is varied, but it is never able to reach it. The 'resonant frequency' is simply the imaginary frequency of closest approach (and it does not exist on the real frequency line). It is imaginary because the combined impedance of the two branches can never become real (i.e., resistive) by cancellation. Note however, by inspecting the circuit, that the combined impedance does become resistive at zero and infinite frequencies, but that this is not due to cancellation: At 0Hz, XL=0 and XC→-∞ ('→' means "approaches" or "tends towards"), so the impedance is simply RL; and at infinite frequency, XL→∞ and XC=0, so the impedance is RC. If a real resonant frequency does not exist therefore, what will be obtained is a network which has a voltage-current phase relationship always on one side or the other of zero degrees, only approaching 0° at zero or infinite frequency. It was mentioned earlier that resonant circuits used in HF radio applications tend to have large L/C ratios, often greater than 10000. In the case of parallel resonance, one reason for this policy should now be apparent; i.e., we need to obtain a high characteristic resistance (R0 = +√[L/C]) in order to ensure that the circuit will function properly with practically realisable inductors. 
A parallel resonator with an L/C ratio of 100, for example, will not work if the RF resistance of the inductive branch is greater than 10Ω at the expected resonant frequency, and it is by no means impossible for a practical inductor to exceed such a limit. We may conclude, from this discussion, that a parallel tuned circuit will only resonate usefully if √(L/C) is made larger than the resistance in either of the branches. The qualification 'usefully' must be applied however, because if the resistance in both branches is allowed to become larger than the critical value, then both the numerator and the denominator of the term inside the square-root bracket will become negative, and so the term itself will be positive. Thus there will be a real resonance, but the current in both of the branches will be feeble, and so the resonance will also be feeble and of little practical use. One final significance of the characteristic resistance which is worth remembering is that it is equal to the magnitudes of the reactances in the circuit at the 'ideal case' resonant frequency, i.e., the resonant frequency when the resistance in both branches is equal. This frequency, as was mentioned earlier, is given by the series resonance formula, i.e.:
f0s = 1/[2π√(LC)]
or in radians/sec:
2πf0s = 1/√(LC)
Now, if we call the inductive reactance at this frequency XL0s, then:
XL0s = 2πf0s L = L/√(LC)
and, since any number is the square of its own square root:
XL0s = +√(L/C)
Similarly, for the capacitive reactance:
XC0s = -1/[2πf0s C] = -√(LC)/C
XC0s = -√(L/C)
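As a quick numerical confirmation, the sketch below uses a 1μH / 100pF combination (L/C = 10⁴, as in the text's worked example) and verifies that the reactance magnitudes at the ideal-case resonant frequency both equal the characteristic resistance √(L/C):

```python
import math

L, C = 1e-6, 100e-12                      # 1 uH and 100 pF
R0 = math.sqrt(L/C)                       # characteristic resistance: 100 ohms
f0s = 1/(2*math.pi*math.sqrt(L*C))        # 'ideal case' resonant frequency
XL0s = 2*math.pi*f0s*L                    # inductive reactance at f0s
XC0s = -1/(2*math.pi*f0s*C)               # capacitive reactance at f0s

assert abs(XL0s - R0) < 1e-6              # XL0s = +sqrt(L/C)
assert abs(XC0s + R0) < 1e-6              # XC0s = -sqrt(L/C)
```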
multiply out the numerator, we will obtain two terms which contain XLXC, and this leads to an alternative expression; i.e.:

Tanφ = [ XLRC² + XLXC² + XCRL² + XCXL² ] / [ RL(RC² + XC²) + RC(RL² + XL²) ]

becomes:

Tanφ = [ XLRC² - (L/C)XC + XCRL² - (L/C)XL ] / [ RL(RC² + XC²) + RC(RL² + XL²) ]

which rearranges to:

Tanφ = [ XL(RC² - L/C) + XC(RL² - L/C) ] / [ RL(RC² + XC²) + RC(RL² + XL²) ]        22.1

This, since the L/C ratio is a fixed parameter for the resonant circuit, somewhat simplifies calculation. We will now use the expression above to evaluate the effect of resistance in a fairly representative parallel resonator. For this example we will use an inductance of 1μH and a capacitance of 100pF. This combination gives an L/C ratio of 10000 and hence a critical resistance R0 = √(L/C) = 100Ω. The 'ideal' resonant frequency, i.e., the resonant frequency when RL=RC, is:
f0s = 1/[2π√(LC)] = 15.91549431MHz
(i.e., 2πf0s = 100M radians/sec), and at this frequency, XL = -XC = √(L/C) = 100Ω. Shown below is a set of graphs of the I-V phase relationship for our example resonator with various values of RC and RL between 1Ω and √(L/C). These graphs were produced using the Open Office Calc spreadsheet program (available free from OpenOffice.org), the procedure being to create columns for frequency, XL, and XC, and use the calculated reactance values in the Arctangent (inverse tangent) of equation (22.1) given above. Note that spreadsheets often give the results of inverse trigonometric functions in radians, and so it is necessary to multiply the expression by 180/π = 57.29577951 to get the result in degrees (there are 2π radians in 360°), i.e.:
φ = -57.29577951 × Arctan{ [XL(RC² - L/C) + XC(RL² - L/C)] / [RL(RC² + XC²) + RC(RL² + XL²)] }
The plotted curves below were created using the spreadsheet "chart" tool [see accompanying spreadsheet file: par_res_ph.ods].
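The spreadsheet procedure described above can equally be reproduced in a few lines of Python. The sketch below implements equation (22.1) for the 1μH / 100pF example and confirms the general behaviour of the curves: zero phase difference at resonance when RL = RC = 1Ω, a strongly lagging phase below resonance, and a strongly leading phase above:

```python
import math

L, C = 1e-6, 100e-12        # the text's example: 1 uH and 100 pF
RL, RC = 1.0, 1.0           # 1 ohm in each branch

def phase_deg(f):
    """I-V phase difference from equation (22.1), in degrees."""
    XL = 2*math.pi*f*L
    XC = -1/(2*math.pi*f*C)
    num = XL*(RC**2 - L/C) + XC*(RL**2 - L/C)
    den = RL*(RC**2 + XC**2) + RC*(RL**2 + XL**2)
    return -math.degrees(math.atan(num/den))

f0s = 1/(2*math.pi*math.sqrt(L*C))       # about 15.9155 MHz
assert abs(phase_deg(f0s)) < 0.01        # zero crossing at resonance
assert phase_deg(15.0e6) < -60           # nearly pure inductive load below
assert phase_deg(17.0e6) > 60            # nearly pure capacitive load above
```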
Of the curves shown, only the example with RL=1Ω and RC=1Ω constitutes a good healthy resonance. The choice of 1Ω in each of the branches, incidentally, was made simply so that the resonant frequency would coincide with f0s. Any curve with a total resistance RL+RC=2Ω will have an almost identical appearance. The Q of the resonance (as will be explained later) is 50 in this case (i.e., Q0 = XL/[RL+RC]), which is fairly high; and so the phase of the current lags the voltage by nearly 90° at frequencies a few percent below the resonant frequency, and leads it by nearly 90° at frequencies a few percent above. Hence the circuit provides the generator with a nearly pure inductive load below resonance, and a nearly pure capacitive load above. In the case where RL=50Ω and RC=50Ω, the Q of the resonance is 1. A large resistive component is present in the impedance at all frequencies, and so the I-V phase difference never approaches 90° in either direction. The curves for RL=50Ω, RC=1Ω, and RL=1Ω, RC=50Ω, are included to show that the resonant frequency (the point where the curve crosses the zero phase-difference axis) moves to low frequency when RL exceeds RC, and vice versa. The curves for RL=100Ω, RC=1Ω, and RL=1Ω, RC=100Ω show that the 'resonant frequency' goes to zero when RL=√(L/C), and goes to infinity when RC=√(L/C). These results seem to indicate that the parallel resonator is infinitely tunable by means of a variable resistor, a proposition which warrants careful examination.
wide-range VFOs and variable capacitance tuning. For full short-wave coverage however, it was necessary to provide the receiver with a band-switch, one reason being that it was very difficult to obtain a tuning range of much greater than about an octave without changing coils. The larger the coil, the larger the self-capacitance, and so the band-switch selects progressively smaller coils with progressively shorter connecting wires as the frequency is increased. Winding a set of candidate coils and checking them for self-resonance will quickly indicate to the designer that a set of frequency ranges like 1-2, 2-4, 4-8, 8-16, and 16-32MHz is easily achievable, but trying to reduce the number of bands to four (e.g., 1-2.35, 2.35-5.52, 5.52-13, 13-30.6) requires careful construction, and reducing the number to three (1-3.16, 3.16-10, 10-31.6) is very difficult using a conventional rotary switch. This does not mean that a three-band solution cannot be obtained, but it falls close to the borderline at which it becomes preferable to use an elaborate low-capacitance technique such as a 'turret bandchanger', i.e., a rotating turret which carries the coils and enables them to be connected to the active circuit via very short leads (see below).
Motorised bandchanger turret from 1958 vintage Marconi AD307 aviation transmitter.
In the matter of making a resistance tuned parallel resonator therefore; our rough calculations, and observations of what others have been able to achieve in practice, seem to indicate an approximately 2:1 rule-of-thumb for the upper limit of the frequency range. What this means in this instance however, is that we should not try to push the resonator to much more than about twice its ideal-case resonant frequency; so if we consider our example 1μH in parallel with 100pF resonator, which resonates at about 16MHz when the resistance in both branches is equal, we might reasonably expect to be able to tune it from 0-32MHz. This, although not infinite, is nevertheless a phenomenal tuning range; but unfortunately, there is a catch. The problem is that if we use a variable resistor of value equal to √(L/C), the Q of the circuit will be approximately 1. We will investigate the relationship between Q and bandwidth shortly, but we can pre-empt those findings by stating that such a circuit will be completely useless as a band-pass filter such as might be used to provide a radio receiver with selectivity. We might therefore consider raising the circuit Q to 10, by reducing the value of the variable resistor in our example √(L/C)=100Ω resonator so that the total resistance in both branches adds up to 10Ω. In this way, as we shall see, we sacrifice 'some' of the tuning range in order to obtain a poor but possibly useful Q. To find the tuning range which results, we can use the full parallel resonance formula:
f0 = {1/[2π√(LC)]} √[(L/C - RL²)/(L/C - RC²)]
i.e.:
f0 = f0s √[(L/C - RL²)/(L/C - RC²)]
Now if, for the sake of simplicity, we assume that all of the resistance is in the inductive branch at the low frequency limit, and in the capacitive branch at the high frequency limit; the correction factor √[(L/C - RL²)/(L/C - RC²)] becomes 0.99499 when RL=10Ω, and 1.00504 when RC=10Ω. So, for our 1μH in parallel with 100pF resonator, with its ideal-case resonance of 15.915MHz, we obtain a tuning range of 15.836 to 15.996MHz, a spectacular ±0.5% with a Q of 10. Of course, if we dispense with the variable resistor and use a variable capacitor or inductor instead, we can easily obtain a tuning range of more than 2:1, while sustaining a Q of around 50. So much for the resistance-tuned resonator as a variable band-pass filter, but perhaps we can use it as the frequency-determining device in an oscillator? That was certainly the suggestion in the "circuit idea" article from whence it came. An oscillator is effectively an amplifier with some of its output fed back into its input via a frequency-selective network, and we can easily design an amplifier with sufficient gain to overcome the losses of the candidate circuit. The first problem however, is that the spurious self-resonances of the network are of higher Q than the desired resonance, and if nothing is done about them the circuit will probably oscillate at somewhere around the self-resonance frequency of the inductive branch. We might however decide to use an amplifier which is too slow to oscillate at VHF, or implement some kind of low-pass filter in order to ensure that the system has no gain in the troublesome self-resonance region. Thus by some artifice we might actually get the oscillator to submit to our will, at which point it will deliver its final insult by producing a signal which appears to be modulated by a hissing noise. This ill-mannered behaviour will occur because an oscillator is effectively a generator of filtered white noise. 
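The tuning-range arithmetic above is easy to reproduce. A short Python sketch, using the text's 1μH / 100pF example with a 10Ω total resistance:

```python
import math

L, C = 1e-6, 100e-12
f0s = 1/(2*math.pi*math.sqrt(L*C))       # ideal-case resonance, ~15.9155 MHz

def f0(RL, RC):
    """Parallel resonance formula (15.2), branch resistances in ohms."""
    return f0s * math.sqrt((L/C - RL**2)/(L/C - RC**2))

f_low  = f0(RL=10.0, RC=0.0)   # all of the resistance in the inductive branch
f_high = f0(RL=0.0, RC=10.0)   # all of the resistance in the capacitive branch

assert abs(f_low  - 15.836e6) < 5e3      # about 15.836 MHz
assert abs(f_high - 15.996e6) < 5e3      # about 15.996 MHz
```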
No oscillator produces a pure sine-wave. A practical RF oscillator always produces a band of noise centred on a selected resonance of the frequency-determining network, with a width determined by the network Q and the amplifier gain. The usual objective in oscillator design for radio applications is therefore to obtain as high a Q as possible, so that the desired output is a spike in the amplitude vs frequency domain sufficiently narrow to be regarded as a sine wave. Thus the resistively tuned parallel LC resonator is of little practical appeal in situations demanding spectral purity. Its significance to this discussion lies instead in the fact that the "circuit idea" is a plausible fallacy; and that its appearance in an electronics publication did not generate a flurry of letters pointing out its flaws. We might comment, at this point, that it is necessary to build a circuit and try it before recommending it to others; but that is no help in finding out what went wrong if the circuit should fail to work as expected. There is a great deal of difference between a resonance and a useful resonance; and a practical circuit component operated at radio frequencies does not bear description as a pure inductance, capacitance, or resistance. Wide-range resistance-tuned LC oscillators (operating at low frequencies) have nevertheless been built¹⁰, but the theory of operation depends on more than the simple observation that resistance appears in the parallel resonance formula. Such circuits were used many years ago for resistance-to-frequency converters in scientific instrumentation applications, but are nowadays rendered obsolete by simple (and spectrally noisy) RC oscillators such as can be implemented using CMOS logic inverters or the 555 timer IC.
10 "Theory and Application of Resistance Tuning", C. Brunetti and E. Weiss. Proc. IRE, June 1941, p333-344.
24.3 Magnitude product theorem: The magnitude of the product of two (complex) numbers is equal to the product of their magnitudes.
|N1 N2| = |N1| |N2|        24.3
Proof: Let N1 = a1 + jb1 and N2 = a2 + jb2
Then: N1 N2 = (a1 + jb1)(a2 + jb2) = a1a2 - b1b2 + j(a1b2 + a2b1)
|N1 N2| = √[(a1a2 - b1b2)² + (a1b2 + a2b1)²]
= √[(a1a2)² + (b1b2)² - 2a1a2b1b2 + (a1b2)² + (a2b1)² + 2a1a2b1b2]
= √[a1²(a2² + b2²) + b1²(a2² + b2²)]
= √[(a1² + b1²)(a2² + b2²)]
= √(a1² + b1²) × √(a2² + b2²)
= |N1| |N2|
24.4 Scaling theorem: The magnitude of the product of a (positive) scalar and a complex number is equal to the product of the scalar and the magnitude.
|sN| = s|N|        24.4
Proof: Let s be a positive scalar, and N = a + jb.
sN = sa + jsb
|sN| = √[(sa)² + (sb)²] = √[s²(a² + b²)] = s√(a² + b²) = s|N|
i.e., a scalar can be factored out of or multiplied into a magnitude bracket in the same way that it can be done with any other type of bracket.
24.5 Drop-dimension theorem: A phasor with a phase angle of 0° or 180° transforms as a scalar:
N(|N|, 0°) = +|N|        24.5
N(|N|, 180°) = -|N|
Proof: A phasor pointing at 0° can be represented as a complex number with a positive real part and a zero imaginary part. A phasor pointing at 180° can be represented as a complex number with a negative real part and a zero imaginary part. Hence if:
N = a + j0
then:
N′ = a
and
|N| = +√(a² + 0²) = |a|
Hence:
N = N′ = ±|N|
where N′ is a pseudoscalar equal in value and sign to the real part of N. This may appear trivial, but it shows that our assumption that a phasor which has dropped a dimension can be treated as a scalar is universal, rather than a special interpretation of a particular
phasor expression. A further implication however is that the pseudoscalar N′ is not identical to the magnitude of N, because magnitudes are always positive whereas N′ can be positive or negative. We can force N′ to become equal to |N| by stipulating that φ = 0°. We can also drop a dimension, i.e., set the imaginary part to zero, by choosing φ = 180°, but in that case we get N′ = -|N|. Thus the alleged scalar which results from dropping a dimension is not a magnitude, but it is a quantity which is equal in magnitude to a magnitude, and if φ = 0° it is positive. This may seem a pedantic distinction, but the point in making it is that if we restrict the scope of our phasor algebra through erroneous interpretation, we lose the ability to include DC electricity in our theory, and we lose the ability to explore exotic ideas such as negative resistance. The pseudoscalar we obtain by dropping a dimension can be negative, even if usually it isn't.
24.6 Square magnitude theorem: The product of a complex number and its complex conjugate is the square of the complex number's magnitude.
N N* = |N|²        24.6
Proof: Let N = a + jb and N* = a - jb
N N* = a² + b²
but |N| = √(a² + b²)
therefore N N* = |N|²
Hence the product of a complex number and its complex conjugate is a true scalar. It is also literally a scalar product. Recall that the definition of a scalar product is:
a·b = |a| |b| Cosφ
but if a and b are identical, then φ=0 and Cosφ=1. Hence:
N·N = |N|² = N N*
24.7 Conjugate product theorem: The complex conjugate of the product of two complex numbers is the product of the complex conjugates.
(N1 N2)* = N1* N2*        24.7
Proof: Let N1 = a1 + jb1 and N2 = a2 + jb2
Then: N1 N2 = (a1 + jb1)(a2 + jb2) = a1a2 - b1b2 + j(a1b2 + a2b1)
Therefore:
(N1 N2)* = a1a2 - b1b2 - j(a1b2 + a2b1)
= a1(a2 - jb2) - jb1(a2 - jb2)
= (a1 - jb1)(a2 - jb2)
= N1* N2*
24.8 In-phase quotient theorem: If two phasors are in phase, their ratio can be treated as a scalar.
N1(|N1|, φ) / N2(|N2|, φ) = |N1| / |N2|        24.8
Proof: Using the polar to complex transformation (12.4):
N1(|N1|, φ) = |N1|(Cosφ + jSinφ)
N2(|N2|, φ) = |N2|(Cosφ + jSinφ)
where the phase angle φ is the same in both cases. Therefore:
N1 / N2 = |N1|(Cosφ + jSinφ) / [|N2|(Cosφ + jSinφ)] = |N1| / |N2|
24.9 Magnitude Caveat: The mathematical operation of 'taking a magnitude' destroys information. Specifically, it is important to be aware that if |a| = |b|, then it is not necessarily true that a=b. The magnitude operation discards the directional information of a vector, and the sign information of a scalar. Note for example that although:
|a| = |a*| = |-a| = |-a*|
any one of the quantities inside magnitude brackets is definitely not identical to any of the others. The magnitude retains only the length of the object. In so doing however, it does retain the unit of measurement; i.e., a magnitude is a length in impedance space, or voltage space, or current space, etc., and so has the units of the space in which it exists.
24.10 Magnitude Equivalence: As noted above, for any complex number Z:
|Z| = |Z*| = |-Z| = |-Z*|
When designing electrical circuits, it is not unusual to meet situations in which the magnitude of a voltage or current needs to be determined, but the phase is unimportant. As the theorems above show, when only the magnitude is needed, all of the impedances involved in the calculation can be replaced by their magnitudes (provided that the impedances are factors, i.e., multipliers or divisors, not terms in a summation). What is less obvious however, is that an impedance Z enclosed between magnitude brackets can then be replaced by one of the alternatives having the same magnitude, namely Z*, -Z and -Z*. This principle of Magnitude Equivalence allows us to deduce alternative networks which will produce the same outcome. 
In particular, it allows us to identify situations in which inductance can be replaced by capacitance and vice versa.
24.11 About these Theorems: The theorems given above do not appear in standard engineering textbooks. Therefore it is legitimate to ask: 'Why have they been stated here when everyone else manages without them?' The answer to the question is this: By sticking to the mathematical rules, particularly by ensuring that we always use properly balanced vector equations, and by using any simplifications which can be proved in a general way, we eliminate the need for phasor diagrams. Essentially, we can let the algebra do all of the reasoning. We can still use phasor diagrams for the purpose of explaining what is going on, but they become merely illustrative and make no difference whatsoever to the outcome of a problem solving exercise. The traditional role of the phasor diagram has been to help in resolving the ambiguities caused by unrigorous mathematical definitions. But the mathematics is self-consistent. If the problem is defined correctly, the hand-waving becomes unnecessary.
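The theorems of this section can be spot-checked numerically with Python's built-in complex type (which uses j for the imaginary unit). The values below are arbitrary:

```python
# Spot checks of the section 24 theorems, using arbitrary complex values
N1 = 3 + 4j
N2 = -1 + 2j
s = 2.5

# 24.3 magnitude product theorem: |N1 N2| = |N1| |N2|
assert abs(abs(N1*N2) - abs(N1)*abs(N2)) < 1e-12
# 24.4 scaling theorem (s positive): |sN| = s|N|
assert abs(abs(s*N1) - s*abs(N1)) < 1e-12
# 24.6 square magnitude theorem: N N* = |N|^2, a real number
assert abs(N1*N1.conjugate() - abs(N1)**2) < 1e-12
# 24.7 conjugate product theorem: (N1 N2)* = N1* N2*
assert (N1*N2).conjugate() == N1.conjugate()*N2.conjugate()
# 24.10 magnitude equivalence: |Z| = |Z*| = |-Z| = |-Z*|
assert abs(N1) == abs(N1.conjugate()) == abs(-N1) == abs(-N1.conjugate())
```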
(-V) = (-I) R
R = (-V) / (-I)
(-I) = (-V) / R
Where: V, I and Z are phasors; V*, I* and Z* are complex conjugates; |V|, |I| and |Z| are magnitudes; un-bold V and I are phasors pointing at 0°; (-V) and (-I) are negative values of V and I (and thus are phasors pointing at 180°); and un-bold Z is not normally used, because an impedance pointing at 0° already has the symbol R.
11 The Art of Electronics, Paul Horowitz [W1HFA] and Winfield Hill, 2nd edition 1989, Cambridge University Press. ISBN 0-521-37095-7. Tunnel (Esaki) diode p14-15, & p1060. Back diode p891, 893. 12 Physical Electronics, C L Hemenway, R W Henry, M Caulton, Wiley & Sons, New York, 2nd edn. 1967. Library of Congress cat. card no. 67-23327. Section 14.6: The tunnel diode, p290-294.
Consider the system shown on the right. In this case, V and I are not necessarily in phase, but we can easily obtain an expression for I in terms of V; and since we now appear to have a version of Joule's law which allows I to point in any direction, there is no need to impose a restriction on any of the phasors involved. Thus we can write:
I = V / (R + jX)
Now, putting the reciprocal impedance into the a+jb form by multiplying numerator and denominator by the complex conjugate of the denominator, we obtain:
I = V (R - jX) / (R² + X²)
and, using the conjugate product theorem (24.7):
I* = V* (R + jX) / (R² + X²)
Now we can insert these definitions into equation (26.2): P = I*RI, thus:
P = V V* R (R - jX)(R + jX) / (R² + X²)²
and using the square magnitude theorem (24.6) we obtain:
P = |V|² R / (R² + X²)        26.3
Thus, without any convoluted discussion about phases or reference phasors, we have obtained in a few lines of algebra a general expression for the power dissipated in an impedance in terms of the applied voltage. Now let us check that this is consistent with the conventional approach: Here we use the power factor (V·I scalar product) rule (10.1):
P = V·I = |V| |I| Cosφ
Now, using the diagram on the right, we can see that Cosφ (adjacent / hypotenuse) is R/√(R² + X²). Hence:
P = |V| |I| R / √(R² + X²)        26.4
From Ohm's law we know that:
I = V / (R + jX)
and using the magnitude ratio theorem (24.1) we obtain:
|I| = |V| / |R + jX|
i.e.:
|I| = |V| / √(R² + X²)
Now, substituting this into expression (26.4) we have:
P = |V|² R / (R² + X²)
which is the same as equation (26.3), and so demonstrates that the power-factor rule is already embedded in Joule's law when we write the latter as a properly balanced and un-restricted vector equation. Notice also, that however we manipulate the power law, the average direction of power flow is always dictated by the sign of the resistance. We can, incidentally, also obtain equation (26.3) by using the series to parallel transformation discussed in section 19. 
If we write the impedance in parallel form, then the power is simply given by the square of the voltage magnitude divided by the equivalent parallel resistance. Thus, if:
Z = R + jX = R″ // jX″
Then:
P = |V|² / R″
where:
R″ = (R² + X²) / R
i.e.:
P = |V|² R / (R² + X²)
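These three routes to the dissipated power (equations 26.3 and 26.4, and the parallel-form expression) can be checked against one another numerically. A Python sketch with hypothetical load and drive values:

```python
import math

# Hypothetical applied voltage and load impedance, for illustration only
V = 10.0                 # voltage magnitude, volts RMS
R, X = 50.0, 120.0       # series impedance R + jX, ohms

I = V / complex(R, X)                        # Ohm's law in phasor form
P_joule = (I.conjugate() * R * I).real       # P = I*RI  (equation 26.2)
P_volt  = V**2 * R / (R**2 + X**2)           # equation 26.3
cos_phi = R / math.sqrt(R**2 + X**2)         # power factor
P_pf    = V * abs(I) * cos_phi               # equation 26.4
R_par   = (R**2 + X**2) / R                  # equivalent parallel resistance
P_par   = V**2 / R_par                       # parallel-form expression

assert abs(P_joule - P_volt) < 1e-12
assert abs(P_pf    - P_volt) < 1e-12
assert abs(P_par   - P_volt) < 1e-12
```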
The universal steady-state electrical power laws can be summarised as follows:
P = |I|² R
P = |V|² R / (R² + X²)
P = V·I = |V| |I| Cosφ
where |I| is the reading obtained from an ammeter, and |V| is the reading obtained from a voltmeter. If the impedance has no reactive component, or if the frequency approaches 0Hz (DC), the general formulae above revert to their standard textbook forms:
P = I² R
P = V² / R
P = V I
where V and I are the readings from AC or DC instruments and can be positive or negative; but if V is negative then I must be negative (presuming that the resistance to which the power is being delivered is positive).
27. Bandwidth
We are often interested in the way in which the gain or loss of a network or circuit varies over a particular band of frequencies. We will introduce this type of analysis shortly in connection with resonant networks, but before doing so it is necessary to define the term 'bandwidth'. Most readers will be aware that an amplifier will generally show a fall-off of gain at low frequencies, this often being due to the increasing magnitudes of the reactances of coupling capacitors in series with the signal path; and it will also show a fall-off of gain at high frequencies, this being due to a variety of factors including the falling magnitudes of the reactances of any stray capacitances in parallel with the signal path. Consequently amplifiers, and indeed many other types of circuit, usually show a hump-like frequency response; and will only pass signals usefully over a particular frequency range. The problem in defining bandwidth therefore lies in the definition of what we mean by 'useful' and, since this will vary according to the particular application, there is really no resolution to the issue. We must therefore eschew vague concepts like 'usefulness' in favour of a definition on which everyone can agree; and so it is universally accepted that bandwidth, unless stated otherwise, is defined in terms of what are known as the 'half-power points', i.e., the upper and lower frequency points at which the power delivered by the system (for a constant input) has fallen to half of that which is delivered at the frequency at which the maximum response occurs. The half-power points are chosen, as we shall see, because they have special mathematical significance; and for simple networks at least, knowledge of where they lie provides a complete definition of the frequency-response function of the system. 
Now, if we call the power delivered at the frequency of maximum response Pmax, then the power delivered at the half-power points is:
P = Pmax / 2
and
P / Pmax = ½
We can express this ratio in deciBels using the general definition:
Ratio in dB = 10 Log10(P / Pref)
(where Pref is the reference power level against which power P is being compared), i.e.:
10 Log10(½) = -3.010299957
Hence the half-power points are also known as the '-3dB' points, and the frequency interval between the lower point and the upper point is also often called the '-3dB bandwidth'. It is a good idea to be specific in this way, because when the term 'bandwidth' is used without qualification, there is always the fear that it may involve some non-standard definition.
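The -3dB figure can be confirmed directly, along with the corresponding voltage ratio (half power delivered into a fixed resistance corresponds to a voltage ratio of 1/√2):

```python
import math

power_dB = 10*math.log10(0.5)                 # half-power ratio in deciBels
voltage_dB = 20*math.log10(1/math.sqrt(2))    # the same point as a voltage ratio

assert abs(power_dB + 3.010299957) < 1e-8     # the '-3dB' point
assert abs(voltage_dB - power_dB) < 1e-9      # both conventions agree
```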
If n = Bᵃ then a = LogB(n)
("if n equals B to the power of a, then a is the log to the base B of n")
If a logarithm is written without the base subscript, then base 10 is usually implied, i.e., 'Log' means 'Log10' (although some older documents deviate from this convention). Naperian (natural) logarithms, which crop up frequently in physics, use Euler's number e as the base (e ≈ 2.71828), and can be written either "Loge" or "ln", the latter pronounced "line" and being short for 'log-Naperian'. Hence, working in base 10: if m = 10ᵃ, and n = 10ᵇ, then mn = 10ᵃ⁺ᵇ. All we have to do to perform the multiplication is look up the logarithms a and b, add them together, then look up the quantity a+b in a table of anti-logarithms to find the required mn. The anti-log of a number x is simply 10ˣ. As an alternative to using tables, the same operations can be achieved by using two identical engraved logarithmic scales and sliding one relative to the other, the device for so doing (invented by William Oughtred in 1622) being known as a slide rule¹⁴. We no longer need to use logarithms for everyday multiplication, but we do need to memorise some of their basic properties in order to use logarithmic units with confidence. The first and most fundamental point is that any number raised to the power of zero is one, i.e., B⁰ = 1, always, regardless of B. Hence:
Log(1) = 0, always, regardless of base.
Now, if we represent power gain in deciBels, i.e.:
N/dB = 10 Log10(P / Pref)
then if P = Pref:
N/dB = 10 Log10(1) = 0
i.e., a system which neither amplifies nor attenuates a signal has a gain of 0dB. Also we find that if P is greater than Pref, then N/dB is greater than zero, and vice versa. Hence a positive quantity in dB represents gain, and a negative quantity represents loss (i.e., negative gain).
Finally, the additive property of logarithms allows that if we subject a signal to a number of processes, and note the gains in dB (positive or negative) for each of those processes, we can find the overall gain of the system simply by adding all of the individual stage gains together. So much for the basics, but now we arrive at the point which causes greatest difficulty: A quantity in dB implies a logarithmic power ratio. It can however also be taken to represent a voltage ratio, or a current ratio, but the definition must be modified in that case. The reason why the definition can be extended to embrace current and voltage ratios is that power is a function of the voltage across, and also the current through, an impedance. Hence we can substitute for power using the general power laws derived earlier, i.e., P = |V|²R/(R² + X²) and P = |I|²R. To obtain the voltage ratio formula we write:
N/dB = 10 Log10{ [|V|²R/(R² + X²)] / [|Vref|²R/(R² + X²)] }
which reduces to:
N/dB = 10 Log10[ (|V| / |Vref|)² ]
and if we adopt the convention that the impedance against which the two voltages are compared is a resistance:
N/dB = 10 Log10[ (V / Vref)² ]
A similar argument applies for the current ratio formula:
N/dB = 10 Log10[ (I / Iref)² ]
Now everything would be fine if we left the quantity inside the logarithm brackets as the square of a voltage or current ratio, but everyone who teaches the subject will insist on performing a 'simplification', which is to note that a number can be squared by doubling its logarithm. Hence we get rid of the power of 2 by writing:
N/dB = 20 Log10(V / Vref)
and
[14] "When Slide Rules Ruled", Cliff Stoll, Scientific American, May 2006, p68-75.
N/dB = 20Log10(I / Iref )

Which is all very clever, but leaves people struggling to decide whether they should use 10Log() or 20Log(), and so leads to lots of mistakes. So remember: a ratio in Bels is the Log of a power ratio, and a deciBel is a tenth of a Bel (that's where the 10 comes from). By Joule's law, the square of a voltage or current magnitude ratio is also analogous to a power ratio, and the squaring can be obtained by doubling the logarithm (that's where the 20 comes from).

In using deciBels, the basic approach is to consider the power levels at two points in a circuit or power transmission system and thereby define the gain. It is also useful however, to express power in relation to some external reference or standard, and this leads to an extension of the notation, some commonly encountered variants being as follows:

Unit    Definition                     Reference voltage = √(PR)    Reference current = √(P/R)
dBm     dB relative to 1mW in 50Ω*     223.6mV                      4.472mA
dBu     dB relative to 1mW in 600Ω     774.6mV                      1.291mA
dBW     dB relative to 1W              -                            -
dBV     dB relative to 1V              1V                           -
* in old audio publications and service manuals, 'dBm' may be used to mean 'dB relative to 1mW in 600Ω'.
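As a numerical sanity check on the 10Log()/20Log() distinction, the following Python fragment (our own illustration; the values are arbitrary) confirms that 20Log10 of a voltage ratio equals 10Log10 of the corresponding power ratio when both voltages appear across the same resistance:

```python
import math

V, V_ref, R = 2.0, 1.0, 50.0          # two voltages across the same resistance
P, P_ref = V**2 / R, V_ref**2 / R     # corresponding powers, by Joule's law

n_power = 10 * math.log10(P / P_ref)  # power-ratio definition
n_volts = 20 * math.log10(V / V_ref)  # voltage-ratio 'simplification'
```

Doubling the voltage gives about 6.02 dB by either route.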
By extending the definition in this way, the dB notation may be used to express an absolute power (rather than a relative power); and if a reference resistance is specified, an absolute voltage or current as well. For example, if the line output level from an audio recorder is specified as -10dBu, then the output voltage is obtained by rearranging the expression:

-10 = 20Log(Vout / Vref )    where Vref = √(0.001 × 600) = 774.6mV

Hence:

Vout = Vref × 10^(-1/2) = Vref / √10 = 244.9mV RMS.

The dBW notation was brought into European Amateur Radio documents some years ago, this being the preference in the field of broadcast and professional radio engineering. Thus a 400W transmitter (for example) becomes a 10Log(400) = 26dBW transmitter; and it is possible to determine the effective radiated power (ERP) of a radio installation by adding the transmitter power in dBW to the (negative) gain in dB of the antenna feeder and the gain in dB of the antenna. This is all very well of course, but it does beg the question: 'why, for a group of spectrum users generally only equipped to measure voltage and resistance to a reasonable accuracy, is it necessary to state power restrictions in a way which requires a knowledge of exponential functions in order to work out what they mean?' It would seem equally logical to state road speed restrictions in dBmph or dBkm/h, and so for the sake of any bureaucrats who might read this, we will also address the question: 'do speed ratios require the 10Log() or the 20Log() formula?'

This question, perhaps surprisingly, is not meaningless, and can be answered by noting that power is equivalent to energy delivered per unit-of-time. A power ratio is thus an energy per [unit-of-time] compared to a reference energy per [unit-of-time], and since the '[unit-of-time]s' will cancel (provided that they are the same - seconds are very popular), a power ratio is also an energy ratio.
Hence:

N/dB = 10Log10(E / Eref )

Newton's laws of motion tell us that the kinetic energy of a moving body is given by E = mv²/2 (where m is the mass and v is the velocity), so energy is proportional to velocity squared as well as to voltage squared and current squared. Hence, a speed in dBmph is given by 20Log(v), so 30mph becomes 29.5dBmph and 70mph becomes 36.9dBmph.
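Both worked examples above (the -10dBu line level and the speeds in dBmph) can be reproduced in a few lines of Python:

```python
import math

# -10 dBu back to volts: Vref = sqrt(0.001 * 600), i.e. 1 mW in 600 ohms
v_ref = math.sqrt(0.001 * 600)        # 774.6 mV
v_out = v_ref * 10 ** (-10 / 20)      # = v_ref / sqrt(10), about 244.9 mV RMS

# Speeds take the 20Log() form, since kinetic energy goes as v squared
v30_db = 20 * math.log10(30)          # about 29.5 dBmph
v70_db = 20 * math.log10(70)          # about 36.9 dBmph
```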
Now, having upgraded all of our road signs to be in keeping with the preferred notation for Government standards documents, we are only left with the problem of how to measure money in deciBels. Here we may note that currency names are often derived from weights (of silver, but there has been some devaluation since Roman times), and that Newton's and Einstein's laws tell us that mass is proportional to energy. Thus we can deduce that the 10Log() formula is the correct one in this case. We might have solved this conundrum without recourse to physics however, by recalling the famous old saying: "money is power".
So the load resistance, having served to allow us to define the bandwidth, has promptly vanished; and the bandwidth becomes the interval between the points where the current has fallen to 1/√2 of its value at resonance. Furthermore, we can observe that we will always obtain this result regardless of which resistance we define as the load. RL , RC , and RLoad are only symbols, and since the corresponding resistances are connected in series, we can swap their designations at will. We can also consider any combination of these resistances to be the load, including the total resistance R, and this will always cancel and tell us that the half-power points occur when I = I0/√2. Thus to define the bandwidth of a series resonant circuit, we do not need to designate any resistance as a load, we need only to consider the current.

So it transpires that we can choose any resistance in a series network and analyse the power dissipated in it to determine the bandwidth; and since we are interested here in the relationship between bandwidth and Q, the obvious resistance to choose is R, the total resistance. We can always isolate a portion of R to determine the power delivered to it or the voltage across it if we so wish, that is a trivial matter of proportions; but for a general analysis, the problem simplifies to that of understanding the behaviour of the simple series LCR network shown below.

The first part of the analysis is to determine the frequency response function for this circuit and plot it as a graph to see what it looks like. A good function to plot for this purpose is the ratio P/P0 vs. frequency, because this ratio has a value of 1 at f0 and is also in the correct form for conversion into deciBels. The power ratio is equal to the square of the current ratio: P/P0 = I²/I0² = (I/I0)², because P = I²R and P0 = I0²R. Hence we will start by obtaining an expression for the current ratio.
The general expression for the current is:

I = |I| = |V| / |Z|    where Z = R + j(XL + XC)

At the resonant frequency however, the impedance is purely resistive, so:

I0 = |V| / R

Hence:

I / I0 = ( |V| / |Z| ) / ( |V| / R ) = R / |Z| = R / √{ R² + [XL + XC]² }

which, by writing the reactances explicitly, gives:

I / I0 = R / √{ R² + [2πfL - 1/(2πfC)]² }      29.1

and since P/P0 = (I/I0)²:

P / P0 = R² / ( R² + [2πfL - 1/(2πfC)]² )      29.2

Graphs of both of these functions are shown below, the procedure used for generating them being to choose a value for f0 and an L/C ratio, and then calculate (using the Open Office Calc spreadsheet program) a set of points at closely spaced intervals for various different values of Q0 (see accompanying file ser_res.ods). The initial choices are arbitrary, since (as we are about to show) the shape of the curve obtained depends entirely on Q0. In this case, the author chose f0 = 10MHz and L/C = 10⁴, i.e., C = L/10⁴. Values for L and C were then obtained by solving the resonance formula for L, i.e.,

10⁷ = 1/[2π√(LC)] = 1/[2π√(L²/10⁴)] = 100/[2πL]

L = 100/[2π × 10⁷] = 1.59154943μH

C = L/10⁴ = 159.154943pF

Since √(L/C) = 100Ω is also the value of XL and -XC at resonance, resonant Q values of 100, 10 and 1 correspond to total resistances (R) of 1Ω, 10Ω and 100Ω respectively. Notice in the graphs below how the squaring pushes the curve of P/P0 downwards in comparison to I/I0. Notice also that the half power level is 1/√2 = 0.7071 for I/I0 and 1/2 for P/P0, and that the deciBel scales on the right differ accordingly.
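The spreadsheet calculation described above is easy to reproduce. The sketch below (Python rather than Calc; the function name is our own) evaluates equations (29.1) and (29.2) with the same component values, and confirms that both ratios are unity at resonance:

```python
import math

f0 = 10e6                        # chosen resonant frequency, 10 MHz
L = 100 / (2 * math.pi * f0)     # 1.59154943 uH, from f0 = 100/(2*pi*L)
C = L / 1e4                      # 159.154943 pF, so sqrt(L/C) = 100 ohms

def response(f, R):
    """Return (I/I0, P/P0) for the series LCR circuit, eqs. 29.1 and 29.2."""
    X = 2 * math.pi * f * L - 1 / (2 * math.pi * f * C)
    i_ratio = R / math.sqrt(R**2 + X**2)
    return i_ratio, i_ratio**2

# At f0 the reactances cancel and both ratios are 1, whatever the Q
# (R = 1 ohm corresponds to Q0 = 100 here)
i_res, p_res = response(f0, 1.0)
```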
So, having shown how the shape of the frequency response function varies with resonant Q, we will now derive an expression for the relationship between Q and bandwidth, the bandwidth being defined as the interval between the upper and lower half-power points. The procedure is to write a general expression for the current and solve it for the frequencies at which I = I0/√2. Notice that the word "frequencies" is plural: the expression will be a quadratic equation.

As we have already determined, I = |V|/|Z|, and I0 = |V|/R. Hence, at the -3dB bandwidth limits:

I = |V|/|Z| = I0/√2 = |V|/(R√2)

Thus the bandwidth limits occur at the frequencies where:

|Z| = R√2

i.e.,

√(R² + X²) = R√2

R² + X² = 2R²

X² = R²

which, taking the square root of both sides and noting that there are two possibilities from so doing, gives:
X = ±R

Now, writing X explicitly we obtain the expression:

±R = 2πfL - 1/(2πfC)

which we must solve for f. We may proceed by putting the right hand side onto a common denominator (i.e., by multiplying top and bottom of the 2πfL term by 2πfC):

±R = [ (2πf)²LC - 1 ]/(2πfC)

i.e.,

±2πfCR = (2πf)²LC - 1

This rearranges to:

(2πf)²LC ∓ 2πfCR - 1 = 0

Which is a quadratic equation in the form af² + bf + c = 0, with a = 4π²LC, b = ∓2πCR and c = -1. Notice however, that this particular equation will have four solutions, rather than the usual two, because the b term has a '∓' symbol attached to it. The reason for that is that there are both positive and negative frequency solutions for each of the band-edges. To obtain all four of these frequencies we apply the general solution for quadratic equations (12.3):

f = [ -b ± √(b² - 4ac) ] / 2a

f = { ±2πCR ± √[ (2πCR)² + 4×4π²LC ] }/(2×4π²LC)

and using the substitution C = √C × √C to obtain a cancellation of √C from all but one term:

f = { ±CR ± √[ (CR)² + 4LC ] }/(4πLC)

f = [ ±R ± √(R² + 4L/C) ]/(4πL)

In order to determine which are the positive frequency solutions among these four possibilities, observe that +√(R² + 4L/C) is always larger than R. Hence the upper (positive) bandwidth limit is:

f+ = { [ +√(R² + 4L/C) ] + R }/(4πL)

and the lower (positive) bandwidth limit is:

f- = { [ +√(R² + 4L/C) ] - R }/(4πL)

and the bandwidth is:

fw = f+ - f- = { [ √(R² + 4L/C) ] + R - [ √(R² + 4L/C) ] + R }/(4πL)

i.e.,

fw = R/(2πL)      29.3

Now recall that the resonant Q can be defined as Q0 = XL/R = 2πf0L/R. Hence:

Q0/f0 = 2πL/R

Hence:

fw = f0/Q0      29.4

This is the classic expression for the bandwidth of an LC resonator, and is exact when the resistance in the circuit remains constant with frequency. We have, of course, already observed that the loss resistances of inductors and capacitors vary with frequency, but it transpires that this will make practically no difference to the accuracy of the expression under most circumstances.
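Equations (29.3) and (29.4) can be checked numerically against the explicit band-edge solutions. A short Python sketch, using the same example values as before (f0 = 10MHz, L/C = 10⁴, Q0 = 100):

```python
import math

f0, Q0 = 10e6, 100.0
L = 100 / (2 * math.pi * f0)
C = L / 1e4
R = 2 * math.pi * f0 * L / Q0        # from Q0 = 2*pi*f0*L/R, giving R = 1 ohm

s = math.sqrt(R**2 + 4 * L / C)      # the square-root term in the solutions
f_hi = (s + R) / (4 * math.pi * L)   # upper half-power frequency
f_lo = (s - R) / (4 * math.pi * L)   # lower half-power frequency
fw = f_hi - f_lo                     # bandwidth: should be R/(2*pi*L) = f0/Q0
```

With these values fw comes out at 100 kHz, i.e. 10 MHz / 100.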
Resistance makes only a small contribution to the overall shape of the bandwidth function because it only makes a significant contribution to the magnitude of the impedance (and hence to the current) when the frequency is close to resonance. Far from resonance, the impedance magnitude is dominated by the reactive component unless the Q of the resonator is very low. The physical laws governing the various processes which contribute to the loss resistance moreover are smoothly varying functions of frequency (unless some additional system resonance is encountered in the region of interest), and so the loss resistance component will not normally vary significantly over a small frequency interval. Consequently, for a reasonably high Q, the relationship: " Bandwidth=f0/Q " is sufficiently accurate to be presumed exact for all normal engineering purposes.
There is no theoretical minimum frequency on the logarithmic scale; although the lowest electromagnetic frequency which can be encountered in practice is the reciprocal of the age of the universe, about 1/(13.73×10⁹ × 365.2421992 × 24 × 60 × 60) = 2.3×10⁻¹⁸ Hz by current reckoning. Practical DC electrical systems come nowhere close to that because even the best stop working after a few years. Hence zero frequency is impossible; we employ the concept merely as a mathematical convenience for the purpose of circuit analysis.

Note incidentally, that some writers have been moved to claim that there is a flaw in Maxwell's equations because electrical formulae tend to produce infinities when frequency is set to zero. This is a fallacy of course; the infinities being merely a reflection of the fact that zero frequency is not a property of the Universe. A practical consequence however, is that annoying "divide-by-zero" errors occur when putting zero frequency into (for example) frequency-response calculations. The solution, when calculating a frequency response which needs to appear as though it starts from 0Hz, is to input a very low frequency instead of zero. For radio-frequency calculations, starting from 1Hz, instead of 0Hz, will usually do the trick.
but, from the rules of logarithms discussed in section 28:

e^x / e^x0 = e^(x-x0)

and

e^x0 / e^x = e^(x0-x) = e^-(x-x0)

Hence:

P/P0 = 1 / ( 1 + [ Q0 ( e^(x-x0) - e^-(x-x0) ) ]² )
The quantity e^(x-x0) - e^-(x-x0) is related to a function known as the hyperbolic sine (Sinh, pronounced "shine"), which is defined as:

Sinh(x) = (e^x - e^-x)/2

Hence:

P/P0 = 1 / ( 1 + [ 2 Q0 Sinh(x - x0) ]² )      33.3
The exponential function can be expanded as a power series:

e^x = 1 + x + x²/2! + x³/3! + x⁴/4! + x⁵/5! + .............

where an exclamation mark indicates a factorial number, the factorials being defined as:

Factorial    Value
0!           1
1!           1
2!           2×1
3!           3×2×1
4!           4×3×2×1
n!           n(n-1)(n-2) . . . . . . 1
(n+1)!       (n+1)n!

Note that the corresponding series for e^-x is:

e^-x = 1 - x + x²/2! - x³/3! + x⁴/4! - x⁵/5! + .............

and that by subtracting one series from the other we can obtain a series for e^x - e^-x = 2Sinh(x):

2Sinh(x) = 2x + 2x³/3! + 2x⁵/5! + 2x⁷/7! + 2x⁹/9! + .........

or

Sinh(x) = x + x³/3! + x⁵/5! + x⁷/7! + x⁹/9! + .........
Now notice that when the magnitude of x is somewhat less than 1, the magnitudes of the terms in which x is raised to a high power become very small, and so we can make the approximation:

Sinh(x) ≈ x when |x| < 1, the approximation becoming extremely good when |x| << 1

( '≈' means 'approximately equal to'; '<<' means 'much less than' ).

Substituting this into equation (33.3) we get:

P/P0 ≈ 1 / ( 1 + [ 2 Q0 (x - x0) ]² )

which is a Lorentzian with w = 1/(2Q0). Hence the electrical resonance curve is Lorentzian when |x - x0| << 1. The electrical resonance curve is, of course, an electromagnetic resonance curve; and like any spectral line, is Lorentzian when the Q of the resonance is reasonably large.
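The closeness of the Lorentzian approximation near resonance is easy to demonstrate numerically. In this Python sketch (our own check, with x0 taken as zero for convenience), math.sinh plays the role of Sinh:

```python
import math

Q0 = 100.0

def exact(x):
    """Eq. 33.3 with x0 = 0: the Sinh-based line-shape."""
    return 1 / (1 + (2 * Q0 * math.sinh(x))**2)

def lorentzian(x):
    """The small-|x| approximation Sinh(x) ~ x, half-width w = 1/(2*Q0)."""
    return 1 / (1 + (2 * Q0 * x)**2)

# Near resonance (|x| << 1) the two curves agree very closely
max_err = max(abs(exact(x) - lorentzian(x))
              for x in (0.0, 0.001, 0.002, 0.005))
```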
This result is, of course, well known, but it is by no means the whole story, and its interpretation is subject to various common misconceptions. We can settle all of these issues by deriving the complete maximum power transfer condition (see box below). This requires the use of calculus,
which will not be explained here, but those unfamiliar with the technique may still avail themselves of the result.

The maximum power transfer theorem:

In the circuit shown on the right, the power P delivered to the load Z is:

P = |I|² R    where |I| = |V| / |Z + Zg|

Hence:

P = |V|² R / |Z + Zg|² = |V|² R / |R + Rg + j[X + Xg]|²

P = |V|² R / [ (R + Rg)² + (X + Xg)² ]

There are two maximum power transfer conditions to be obtained here, one being the value of load reactance, and the other being the value of load resistance. For changes in either of these variables, there will be a peak in the graph of power versus the variable, and the peak will of course occur at the point where the gradient of the curve is zero. Hence, for the reactance condition, maximum power transfer occurs when ∂P/∂X = 0, and for the resistance condition, maximum power transfer occurs where ∂P/∂R = 0 (where ∂ is known as "partial d" or "curly d" and indicates a partial differential; i.e., differentiation of one variable with respect to another is carried out with all other variables held constant).

In order to carry out these differentiations on the expression above, we can use the quotient rule:

If y = N/D then dy/dx = ( D dN/dx - N dD/dx )/D²

Hence if we let

N = |V|² R

and

D = (R + Rg)² + (X + Xg)² = R² + Rg² + 2RRg + X² + Xg² + 2XXg

then:

∂N/∂X = 0 ,  ∂D/∂X = 2X + 2Xg ,  ∂N/∂R = |V|² ,  and  ∂D/∂R = 2R + 2Rg .

Hence:

∂P/∂X = [ 0 - |V|²R(2X + 2Xg) ] / D² = -2|V|²R(X + Xg) / D²

therefore:

∂P/∂X = 0 when X = -Xg

(maximum power transfer occurs when the power factor is 1)

and

∂P/∂R = { |V|²[ (R + Rg)² + (X + Xg)² ] - |V|²R(2R + 2Rg) }/ D²

= |V|²[ (X + Xg)² + R² + Rg² + 2RRg - 2R² - 2RRg ] / D²

= |V|²[ Rg² + (X + Xg)² - R² ] / D²

therefore:

∂P/∂R = 0 when Rg² + (X + Xg)² - R² = 0 ,

i.e.,

∂P/∂R = 0 when R = √[ Rg² + (X + Xg)² ]

Notice that this latter maximum power transfer condition is a magnitude: it is the same as:

R = | Rg + j(X + Xg) |

i.e., maximum power transfer occurs when the load resistance is equal to the magnitude of the impedance formed by the source resistance and the total reactance.
This also means that if the source impedance is purely resistive, then maximum power transfer occurs when the magnitude of the load impedance is equal to the source resistance. Observe also that when the unity power-factor condition X = -Xg is satisfied, the (X + Xg)² term disappears and maximum power transfer occurs when R = Rg . Thus the overall maximum power transfer condition occurs when R = Rg and X = -Xg , i.e., when:

Z = Zg*

The condition obtained when the load impedance is the complex conjugate of the source impedance is known as a conjugate match.
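The two conditions derived in the box can be verified numerically by perturbing a trial load about the predicted optimum. A Python sketch (the source impedance values are arbitrary illustrations):

```python
import math

Rg, Xg, V = 50.0, 30.0, 1.0      # arbitrary source impedance and EMF

def load_power(R, X):
    """P = |V|^2 R / [(R+Rg)^2 + (X+Xg)^2]."""
    return V**2 * R / ((R + Rg)**2 + (X + Xg)**2)

# Conjugate match (R = Rg, X = -Xg) beats all nearby loads
p_match = load_power(Rg, -Xg)
match_is_best = all(load_power(Rg + dR, -Xg + dX) <= p_match
                    for dR in (-1, 0, 1) for dX in (-1, 0, 1))

# With the reactance left uncorrected at X = 10, the best R is |Rg + j(X+Xg)|
X = 10.0
R_opt = math.hypot(Rg, X + Xg)
r_condition_holds = all(load_power(R_opt + dR, X) <= load_power(R_opt, X)
                        for dR in (-1, 0, 1))
```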
We can address the most common misconception regarding impedance matching by stating that, although unity power-factor (X = -Xg) is always desirable, it is not necessary and not always desirable that the load resistance should be equal to the source resistance. The reason can be understood by considering the poor generator, which must dissipate power in its internal resistance, and will therefore get hot. If we assume that power-factor correction will normally be carried out, then there is no need to consider the reactances in the system, and we can analyse the power dissipated in the generator using the case where both the source impedance and the load are purely resistive. Thus:

Pg = I² Rg

where the current is, as defined earlier: I = V/(R + Rg). Hence:

Pg = V² Rg / (R + Rg)²      34.2

We will plot this function shortly; but when doing so it will be interesting to use the comparison between the load power and the power wasted in the generator as a measure of the power transmission efficiency. We can define efficiency as:

Transmission efficiency = Power delivered / Total power generated

and here we will give it the symbol η (Greek lower case 'eta'). Thus:

η = P / (P + Pg)

Now, substituting the definitions of P (34.1) and Pg (34.2) into this expression we get:

η = [ V²R/(R + Rg)² ] / { [ V²R/(R + Rg)² ] + [ V²Rg/(R + Rg)² ] }

i.e.,

η = R / (R + Rg)

Shown plotted below for comparison are: P, the power delivered to the load; Pg , the power dissipated as heat in the generator; P + Pg , the total power generated; and η, the ratio of power delivered to power generated.
The tabulated results below show the various power levels as a proportion of the maximum deliverable power Pmax , and are applicable to any power-factor corrected generator-load system.
Load      Total Power      Power Loss    Load Power    Load power    Efficiency
R / Rg    (P+Pg) / Pmax    Pg / Pmax     P / Pmax      / dB          R / (R+Rg)
0         4.00             4.00          0.00          -∞            0.00
1/16      3.76             3.54          0.22          -6.55         0.06
1/8       3.56             3.16          0.40          -4.03         0.11
1/4       3.20             2.56          0.64          -1.94         0.20
1/2       2.67             1.78          0.89          -0.51         0.33
1         2.00             1.00          1.00          0.00          0.50
2         1.33             0.44          0.89          -0.51         0.67
4         0.80             0.16          0.64          -1.94         0.80
8         0.44             0.05          0.40          -4.03         0.89
16        0.24             0.01          0.22          -6.55         0.94
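The tabulated values follow from the expressions already derived: with r = R/Rg and Pmax = V²/(4Rg), we have P/Pmax = 4r/(1+r)², Pg/Pmax = 4/(1+r)² and η = r/(1+r). A brief Python check of a few rows:

```python
def table_row(r):
    """r = R/Rg; all powers normalised to Pmax = V^2/(4*Rg)."""
    total = 4 / (1 + r)            # (P+Pg)/Pmax
    loss = 4 / (1 + r)**2          # Pg/Pmax
    load = 4 * r / (1 + r)**2      # P/Pmax
    eff = r / (1 + r)              # efficiency, R/(R+Rg)
    return total, loss, load, eff

row_1 = [round(v, 2) for v in table_row(1)]   # conjugate match
row_2 = [round(v, 2) for v in table_row(2)]   # light loading, R = 2*Rg
row_4 = [round(v, 2) for v in table_row(4)]
```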
Notice in the graph above that as the load resistance R is increased and becomes greater than the source resistance Rg , the power delivered to the load tails off gently. The reason for this behaviour is that, as the current drawn from the generator reduces, the output voltage increases; and so the system possesses a self-regulating property when lightly loaded. When the load is twice the source resistance, the power delivered is still 89% of the maximum possible, a droop in output of only 0.51dB. The major advantage of light loading however is seen in the transfer efficiency. When a conjugate match is achieved, the efficiency is only 50%, but it rises to 67% (2/3) when R = 2Rg , and 80% (4/5) when R = 4Rg . This means that light loading, when compared to conjugate matching, gives a reduction in generator dissipation and power input for a given power output.

In radio practice, of course, the generator is a radio transmitter; and if the transmitter is designed for light loading it can have smaller heat-sinks and reduced battery or mains power consumption in comparison to a transmitter designed for conjugate matching. Consequently, the figure often referred to as the "output impedance" of a radio transmitter (often 50Ω) is usually nothing of the sort; it is instead (and should be called) the preferred load-resistance, or alternatively the design load resistance. The preferred load resistance of a broadband transistor power amplifier is usually higher than the output impedance, and attempting to provide such an amplifier with a conjugate match will result in excessive internal dissipation, overheating, and possibly catastrophic failure. Fortunately, most modern amplifiers are provided with protection circuitry to prevent over-dissipation, and this circuitry gives the transmitter a loading characteristic which makes it appear that the source resistance is higher than it really is.
This loading characteristic will be different from the power transfer curve derived above because it is caused by the action of non-linear circuit elements (level detectors etc.), and so the load resistance which corresponds to the middle of the permitted operation window is known as the pseudo output-impedance (or, if you like, the pseudo source-resistance). The diagram below shows what the power transfer curve might look like with the operation window centred on twice the source resistance (unprotected transfer-function shown dotted).
Notice that the protection circuitry also operates when the load resistance is higher than the preferred value. This is not usually necessary for the protection of push-pull transistor power amplifiers (the most common type of output stage in modern practice); but it helps to ensure that any harmonic suppression filter after the amplifier will function correctly, and it occurs because the load impedance is traditionally detected using a bridge circuit (often called a reflectometer or SWR bridge, but really an impedance bridge) balanced for a particular value of resistance. An interesting discussion of the conditions which provoke transistor failure is given by Bob Pearson [15]. If the protection circuitry is correctly designed and adjusted, the pseudo output-impedance should be the same as the preferred load-resistance. When determining the effect of source impedance on the Q of antenna systems and bandpass filters however, it is the true output-impedance, not the preferred load-resistance, which must be used. Unfortunately, this quantity is often impossible to obtain from the manufacturer's data; but, as we shall see shortly, it can be measured with the aid of two dummy load resistors of different value.
If the impedances are pure resistances, the formula above reverts to:

Vout = Vin R1 / (R1 + R2)      35.3

where R1 is the resistance across which Vout is said to appear. Alternatively, multiplying by R2/R2:

Vout = Vin (R1 // R2) / R2      35.4
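Equations (35.3) and (35.4) are algebraically identical, which a one-line numerical check confirms (the resistor values here are arbitrary):

```python
R1, R2, Vin = 29.6, 75.1, 1.0

parallel = R1 * R2 / (R1 + R2)     # R1 // R2

v_353 = Vin * R1 / (R1 + R2)       # eq. 35.3: plain potential divider
v_354 = Vin * parallel / R2        # eq. 35.4: same thing via R1 // R2
```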
Rg = R1R2(V2 - V1) / (R2V1 - R1V2)

If (say) V1 is factored out of the numerator and denominator, a form is obtained which makes it clear that only the voltage ratio is needed:

Rg = R1R2( [V2/V1] - 1 ) / (R2 - R1[V2/V1])      38.1

A respectable difference between the two load resistors is necessary in order to minimise the effect of measurement errors, but too large a deviation from the preferred load resistance is likely to provoke a transistor power-amplifier's protection circuitry. For a transmitter designed to operate into a 50Ω load; 25Ω and 100Ω dummy-load resistors correspond to the lower and upper 2:1 SWR points and should give an easily discernible output voltage difference. A 25Ω resistor can be had by connecting two 50Ω dummy load resistors in parallel with a coaxial T-piece. 100Ω coaxial resistors are less readily available, but an old-fashioned 75Ω load will do instead. In the calculation, the actual resistance of the load measured with an accurate resistance meter should be used (rather than the nominal value stamped on the resistor).

Example: The output voltage of a Kenwood TS430S 100W HF transmitter was measured with two different dummy loads. The measurement frequency was 1.9MHz, and the test power level was very approximately 1W. One load was a 75Ω nominal coaxial resistor measuring 75.1 ±0.7Ω, the other was the combination of this resistor and a 50Ω nominal coaxial resistor in parallel with it, the combination measuring 29.6 ±0.3Ω. The voltage ratio was measured using an oscilloscope with a 10MΩ ×10 probe. The resistors and the probe were attached directly to the antenna socket using coaxial T-pieces (no cables). The measurement was made by attaching and removing the 50Ω resistor from the T-piece with the transmitter running and noting the change in the peak to peak excursion of the output waveform. Using the following designations: R2 = 75.1Ω, R1 = 29.6Ω, the voltage ratio V2/V1 was 1.364 ±0.04. Using equation (38.1), the source resistance Rg was calculated to be 23.3Ω.
An error analysis (see next section) gave an estimated standard deviation of 3.4Ω; i.e., Rg = 23.3 ±3.4Ω. Note incidentally, that this determination assumes that the output impedance does not change with power output level. Given that power transistors are non-linear devices, this may not be the case.
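The arithmetic of the TS430S example is easily checked. A Python sketch of equation (38.1):

```python
def source_resistance(r1, r2, nv):
    """Eq. 38.1: Rg = R1*R2*(NV - 1) / (R2 - R1*NV), where NV = V2/V1."""
    return r1 * r2 * (nv - 1) / (r2 - r1 * nv)

# Measured values from the example: R1 = 29.6, R2 = 75.1, V2/V1 = 1.364
rg = source_resistance(29.6, 75.1, 1.364)   # about 23.3 ohms
```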
and x + 3σ [ref.16]. The use of standard deviations rather than 'brick wall' tolerances reflects the reality that there is always a finite probability that the true result will lie outside the stated error range. We can only ever have absolute confidence that the magnitude of the true answer lies somewhere between zero and infinity, but we expect only 3 measurements in every 1000 to fall outside x ±3σ.

It is always advisable to try to write down an ESD for every measurement made. This is a reasonably straightforward matter where direct measurements are involved, but a difficulty arises in situations where several measurements are made and then put into a formula in order to obtain the required result. The problem is that of working out how much influence the deviation of a particular variable has on the overall result, and how to add the various deviations together in order to arrive at the overall ESD. It is therefore fortuitous that we have been engaged in the study of vectors, because it turns out that this is a problem of vector addition and magnitudes.

If two or more measurements are made in such a way that the outcome of one has no influence on the outcome of any of the others, the measurement errors are said to be uncorrelated. An example of uncorrelated errors is that of readings taken from two separate instruments, where an error or inaccuracy in the reading of one instrument is not related to any error or inaccuracy in the reading of the other. On the other hand, the errors in two measurements made using the same instrument may be correlated, in the sense that if the instrument always reads too high or too low, it will introduce errors in the same direction in both cases.
If measurement errors are correlated, then it means that there is some systematic (design, interpretation, or calibration) defect in the measuring process; but if we believe that the measurements have been made to the best of our abilities with the equipment available, then it is usually sensible to assume that any measurement errors are uncorrelated.

Now, if the errors in two or more measurements are uncorrelated, this means that a deviation from the true value in one measured quantity can occur without influencing the deviations in any of the other quantities. If we determine a quantity by applying a formula to a set of measurements, each measurement will contribute a random error to the result, but there is just as much chance that the error due to one measurement will partly cancel the error due to another as there is that a pair of error contributions will both increase or decrease the result. Therefore it will be unduly pessimistic to add the uncertainty contributions of the individual measurements directly.

Instead, we should allow for the independence of the uncertainty contributions by regarding each one as a vector pointing in a direction which is at right angles (orthogonal) to all of the others. In effect, by virtue of its randomness, each uncertainty contribution exists in its own dimension, and we may identify its magnitude as its length in that dimension. It follows that the overall uncertainty is the length (i.e., the magnitude) of the vector which results from the addition of a set of orthogonal uncertainty vectors. This situation is represented in the diagram below, where U1, U2, and U3 are the uncertainty contributions to the determined value of an unknown, and U is the overall uncertainty in the result.
We can easily find U by successive application of Pythagoras' theorem, as follows. Let the magnitude of the vector sum of U1 and U2 be U12:

U12 = √( U1² + U2² )

Then U is the magnitude of the vector sum of U12 and U3:

U = √( U12² + U3² )

but

U12² = U1² + U2²

Hence:

U = √( U1² + U2² + U3² )
[16] "Data Reduction and Error Analysis for the Physical Sciences", Philip R Bevington, McGraw-Hill, 1969. Library of Congress cat. card # 69-16942. 2-2: Sample mean and standard deviation. Area under the Gaussian distribution: Table C-2, p308.
This process can be extended to find the magnitude of a vector in an arbitrary number of dimensions (we can't make perspective drawings in more than three dimensions, but there is no restriction on the number of dimensions that a vector can have). Hence:

U = √( U1² + U2² + U3² + . . . . + Un² )

Now note that this formula says: "to find the overall uncertainty; calculate the sum of the squares of the uncertainty contributions and take the square root." The uncertainty contributions are not the same as the uncertainties in the measurements made. Imagine that an unknown quantity x is given by a formula f, which is a mathematical function involving measurable quantities (variables) m1, m2, m3, etc. We can express this situation by writing:

x = f(m1, m2, m3, ...)

and we can determine x by plugging m1, m2, m3, etc. into the formula. We can also determine the uncertainty contribution due to any one of the variables by changing it and noting the change which occurs in x. The obvious amount by which to change the variable is its standard deviation, hence:

x + δx1 = f(m1+σ1, m2, m3, ...)

Here we have assumed that a positive change in m1 will cause a positive change in x. This might not be the case, but since we intend to add the contributions from changes in each of the variables as orthogonal vectors, it makes no difference either way. Now, restoring m1 to its original value we determine the uncertainty contribution due to m2:

x + δx2 = f(m1, m2+σ2, m3, ...)

and so on. If we work through all of the variables in this way and determine their error contributions, we can obtain an estimate of the standard deviation of x by summing the squares of the contributions and taking the square root:

σ = √( δx1² + δx2² + δx3² + . . . . + δxn² )

Note that there are a number of assumptions inherent in this procedure: firstly, as discussed before, that the uncertainties are uncorrelated; and secondly that we have assumed that the function f is linear for changes in any of the variables.
The latter condition is normally true to a good approximation for small changes, and the effect of any non-linearity is mitigated by the fact that the object of the exercise is to obtain an estimate.

Example: The output resistance Rg of an RF amplifier was determined by loading the output with two different resistances and noting the change in the output voltage with all other conditions held constant. The applicable formula is equation (38.1):

Rg = R1R2(NV - 1) / (R2 - R1NV)

where NV is the ratio of the output voltages: NV = V2/V1.

The voltage measurements were made using an oscilloscope, and it was considered that each measurement had an uncertainty of about 2%. It was also considered that these uncertainties were uncorrelated because they were incurred by different operations; one operation being to set the transmitter carrier level and oscilloscope Y-shift until the waveform just touched the top and bottom of the measuring graticule with the higher value resistor connected, the other being to read the height on the graticule with the lower value resistor connected. The overall uncertainty of the voltage ratio measurement was therefore taken to be the square root of the sum of the squares of the two voltage measurement uncertainties; i.e., √(2² + 2²) = 2.8%, which was rounded to 3% in view of the approximate nature of the estimate. The actual voltage ratio was 1.364, and 3% of 1.364 is 0.04. Hence:

NV = 1.364 ±0.04

The resistances were measured using a multimeter known to read correctly within 0.1Ω against a standard resistance of 100.0Ω. The stated accuracy of the instrument was ±0.8% ±1 digit. The
measured resistances were R1 = 29.6Ω and R2 = 75.1Ω. Hence:
R1 = 29.6 ± 0.34 Ω
R2 = 75.1 ± 0.7 Ω
The output impedance Rg was calculated from the formula (38.1) using a spreadsheet program and determined to be 23.3Ω. The output impedance was also calculated with each of the measured values individually incremented and decremented by an amount equal to its estimated standard deviation, and the resulting deviation in Rg was noted. The spreadsheet (Rg_meas.ods) is shown below:
Note that the formula is somewhat non-linear in its behaviour, because the deviations caused by incrementing and decrementing a variable are not exactly equal and opposite. The correct way to allow for this effect is to take the RMS average of the two deviation magnitudes in each case. Therefore, the estimated standard deviation in Rg is:
σ = √( 0.579² + 0.253² + 3.359² ) = 3.418
Hence:
Rg = 23.3 ± 3.4 Ω
Notice that the major contributor to the uncertainty in Rg in this case is the uncertainty in NV. We cannot ignore the effect of the resistance uncertainties however, because if we repeat the experiment with more closely spaced values for R1 and R2, we will find that their contributions to the uncertainty increase dramatically.
Analytical approach to error analysis: While the error analysis technique just described is perfectly respectable, those who write computer programs will generally prefer an analytical approach. The derivation of an error function from a formula requires the use of calculus. Those who are unfamiliar with calculus may proceed to the next section without losing track of the narrative.
The analytical form of an error function is obtained from the observation that an error in a variable is transmitted through a formula according to the rate of change of the formula with respect to the variable. Thus the error contribution from a variable is the partial derivative of the formula with respect to the variable, multiplied by the deviation in the variable. Hence if:
x = f(m1, m2, m3, ...)
and the ESDs of the measured quantities are σ1, σ2, σ3, etc., the contribution which the variable m1 makes to the ESD of x is given by:
δx1 = (∂f/∂m1) σ1
and so on (strictly, we should take the modulus of the derivative, because standard deviations are by definition positive; but it does not matter in this case, because orthogonal addition involves squaring
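The incremental procedure can be automated in a few lines. The following Python sketch (the variable and function names are my own invention, not taken from the Rg_meas.ods spreadsheet) reproduces the contributions and the overall ESD quoted above:

```python
import math

def rg(r1, r2, nv):
    # Equation (38.1): generator output resistance from two load tests.
    return r1 * r2 * (nv - 1.0) / (r2 - r1 * nv)

values = {'r1': 29.6, 'r2': 75.1, 'nv': 1.364}   # measured values
sigmas = {'r1': 0.34, 'r2': 0.7, 'nv': 0.04}     # estimated standard deviations

rg0 = rg(**values)   # central value, about 23.3 ohms

# Increment and decrement each variable by its ESD in turn, take the RMS
# of the two deviation magnitudes, and sum the contributions in quadrature.
sum_sq = 0.0
for name in values:
    up = dict(values, **{name: values[name] + sigmas[name]})
    dn = dict(values, **{name: values[name] - sigmas[name]})
    d_up = rg(**up) - rg0
    d_dn = rg(**dn) - rg0
    contrib = math.sqrt((d_up ** 2 + d_dn ** 2) / 2.0)
    sum_sq += contrib ** 2

sigma_rg = math.sqrt(sum_sq)   # about 3.4
```

Running this gives contributions of roughly 0.58, 0.25 and 3.36, and the same overall result, Rg = 23.3 ± 3.4 Ω.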
of the error contributions). Hence the analytical form of the error function is:
σ = √{ [(∂f/∂m1)σ1]² + [(∂f/∂m2)σ2]² + [(∂f/∂m3)σ3]² + .... }
Example: The output impedance of a generator is obtained from the formula:
Rg = R1 R2 (NV - 1) / (R2 - R1 NV)
Differentiation of this function requires the use of the quotient rule:
If y = N/D then dy/dx = (D dN/dx - N dD/dx)/D²
where, in this case, the numerator is:
N = R1R2(NV - 1) = R1R2NV - R1R2
and the denominator is:
D = R2 - R1NV
Differentiating the numerator with respect to each of the variables gives:
∂N/∂R1 = R2(NV - 1) , ∂N/∂R2 = R1(NV - 1) , ∂N/∂NV = R1R2
and differentiating the denominator with respect to each of the variables gives:
∂D/∂R1 = -NV , ∂D/∂R2 = 1 , ∂D/∂NV = -R1
Using these results we obtain:
∂Rg/∂R1 = [ D(∂N/∂R1) - N(∂D/∂R1) ] / D²
= [ (R2 - R1NV) R2(NV - 1) - R1R2(NV - 1)(-NV) ] / (R2 - R1NV)²
= [ (R2 - R1NV) R2(NV - 1) + R1R2(NV - 1)NV ] / (R2 - R1NV)²
= [ R2²NV - R2² - R1R2NV² + R1R2NV + R1R2NV² - R1R2NV ] / (R2 - R1NV)²
∂Rg/∂R1 = R2²(NV - 1) / (R2 - R1NV)²
∂Rg/∂R2 = [ D(∂N/∂R2) - N(∂D/∂R2) ] / D²
= [ (R2 - R1NV)R1(NV - 1) - R1R2(NV - 1) ] / (R2 - R1NV)²
= [ R1R2NV - R1R2 - R1²NV² + R1²NV - R1R2NV + R1R2 ] / (R2 - R1NV)²
∂Rg/∂R2 = R1²NV(1 - NV) / (R2 - R1NV)²
∂Rg/∂NV = [ D(∂N/∂NV) - N(∂D/∂NV) ] / D²
= [ (R2 - R1NV)R1R2 - R1R2(NV - 1)(-R1) ] / (R2 - R1NV)²
= [ R1R2² - R1²R2NV + R1²R2NV - R1²R2 ] / (R2 - R1NV)²
∂Rg/∂NV = R1R2(R2 - R1) / (R2 - R1NV)²
The error function in this case is:
σ = √{ [(∂Rg/∂R1)σR1]² + [(∂Rg/∂R2)σR2]² + [(∂Rg/∂NV)σNV]² }
The derivatives all share a common denominator D², and so on writing the expression in full, a factor (1/D²) can be removed from the square root bracket.
Hence:
σ = [1/(R2 - R1NV)²] √{ [R2²(NV-1)σR1]² + [R1²NV(1-NV)σR2]² + [R1R2(R2-R1)σNV]² }
In the previous section, we determined Rg = 23.3Ω from the following measurements:
R1 = 29.6 ± 0.34 Ω , R2 = 75.1 ± 0.7 Ω , NV = 1.364 ± 0.04
These give:
D² = (R2 - R1NV)² = 1205.8673
∂Rg/∂R1 = R2²(NV - 1) / D² = 2052.9636 / 1205.8673 = 1.7025
∂Rg/∂R2 = R1²NV(1 - NV) / D² = -435.0100 / 1205.8673 = -0.3607
∂Rg/∂NV = R1R2(R2 - R1) / D² = 101144.68 / 1205.8673 = 83.8771
σ = √{ [(∂Rg/∂R1) σR1]² + [(∂Rg/∂R2) σR2]² + [(∂Rg/∂NV) σNV]² }
= √{ [1.7025 × 0.34]² + [0.3607 × 0.7]² + [83.8771 × 0.04]² }
= √{ 0.5789² + 0.2525² + 3.3551² }
Note that the error contributions in the expression above are very close to the averages of the deviations calculated by the incremental (spreadsheet) method used previously. Finally we have:
σ = 3.414
Rg = 23.3 ± 3.4 Ω
A spreadsheet version of this calculation (which can be used as a template) is given on sheet 2 of the accompanying file: Rg_meas.ods .
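For comparison, the analytical error function can be coded directly from the partial derivatives derived above. A minimal Python sketch (the function name is illustrative):

```python
import math

def rg_esd(r1, r2, nv, s_r1, s_r2, s_nv):
    # ESD of Rg = R1*R2*(NV - 1)/(R2 - R1*NV); all three partial
    # derivatives share the common denominator D**2.
    d2 = (r2 - r1 * nv) ** 2
    drg_dr1 = r2 ** 2 * (nv - 1.0) / d2
    drg_dr2 = r1 ** 2 * nv * (1.0 - nv) / d2
    drg_dnv = r1 * r2 * (r2 - r1) / d2
    return math.sqrt((drg_dr1 * s_r1) ** 2 +
                     (drg_dr2 * s_r2) ** 2 +
                     (drg_dnv * s_nv) ** 2)

sigma_rg = rg_esd(29.6, 75.1, 1.364, 0.34, 0.7, 0.04)   # about 3.414
```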
be shortened; and this will reduce the antenna capacitance (and hence increase the reactance), and, sadly for efficiency, will cause the radiation resistance to fall. The larger antenna reactance will necessitate a larger loading reactance, and although this will bring more resistance with it, the increase in reactance will be greater than the increase in total resistance and the Q will rise. A point can be reached where serious curtailment of the modulation bandwidth occurs; although, for this system, it is not predictable using lumped-component theory. The coil can be regarded as a lumped component provided that the whip is long enough to ensure that most of the radiation occurs from the whip rather than from the coil. If that condition applies, then the Q of the antenna system can never be larger than the Q of the loading coil, because the total series resistance will always be that of the coil plus a little extra. The maximum tolerable Q (causing some, but not serious, audio degradation) occurs when the antenna system bandwidth is the same as the audio bandwidth, and for SSB on 1.9MHz this figure is 1900/2.7 = 704. It is extremely difficult to make a lumped inductor with a Q of greater than about 400, so on 160m the Q limit can be avoided by controlling the length of the coil. If, on the other hand, the whip is of length comparable to or shorter than the coil, then most of the radiation occurs from the coil. In that case, the lumped-component description fails completely, and the system is best described as a quarter-wave transmission-line resonator. In the transmission-line regime, the Q and hence the voltage magnification can become enormous, and the usable input power is limited by the tendency for the air around the top of the coil to ionise and become electrically conductive.
Coils operated at or slightly below the quarter-wave transmission line resonance frequency are used for artificial lightning experiments, in which context they are known as 'Tesla coils'. In particular, the voltage-magnifier coil connected in series with the output from a step-up transformer is known as the 'Extra Coil'. The transmission-line properties of coils will be discussed in a separate article. While on the subject of MF and HF mobile antennas; when forced to use a very short whip, it is possible to increase the antenna capacitance artificially (and hence reduce the reactance) by adding a capacitance hat to the antenna (some prongs sticking out sideways symmetrically; or, if there's a risk that you might poke someone's eye out, an aluminium disk). Reducing the antenna reactance in this way reduces the amount of loading inductance required, and hence allows the coil to be wound with thicker wire for a given size (less resistance). Placing the hat at the top of the antenna, moreover, increases the current in the vertical section, and actually increases the radiation resistance slightly (every little helps).
we will assume that a tightly-coupled transformer is ideal when operating within its pass-band, on the understanding that it requires a more advanced analysis to determine what the pass-band is. Such a transformer is also approximately perfect when used as part of a low-impedance electrical network. A transformer loaded with an impedance Z is represented on the right. Here NP is the number of turns in the primary (generator side) winding, and NS is the number of turns in the secondary (load side) winding. The dots next to the windings indicate either the start or the finish (it doesn't matter how this is designated, as long as it is done consistently), and it is assumed that both coils are wound in the same sense (clockwise or anticlockwise when looking at a particular end of the coil). The dotted line between the coils indicates that the transformer is wound on a magnetic core, the purpose of which (in this instance) is to produce a very tight magnetic coupling between the windings. If all of the magnetic field from the primary winding is captured by the core and linked to the secondary winding (i.e., if there is no magnetic leakage), and if the coils and the core have no heating losses, then all of the power delivered by the generator is transferred to the load. Also, if the inductive reactance of the windings is much larger than the magnitudes of the impedances seen on either side, and the capacitance of both windings is very small, then the secondary voltage will be in phase with the primary voltage, and the secondary current will be in anti-phase with the primary current (i.e., as a current appears to flow into the primary, a current appears to flow out of the secondary). If the number of turns in the secondary winding is greater than the number of turns in the primary, then VS will be larger than VP, and vice versa; and the voltage transformation will be in proportion to the turns ratio, i.e., VS = VP NS / NP . . . . 
(41.1)
It follows that if the power produced by the generator is transferred to the load without loss, then the VI product will be conserved; which means that if the voltage is stepped up, then the current will be stepped down to keep VI very nearly constant (and vice versa). This implies that the transformer performs on the current the inverse of the transformation it performs on the voltage, i.e. (interpreting the currents in the sense of the arrows in the diagram above):
IS = IP NP / NS . . . . (41.2)
Now, by definition, the impedance looking into the transformer primary is:
Z' = VP / IP
which gives, using (41.1) and (41.2) as substitutions:
Z' = VS (NP/NS) / [ IS (NS/NP) ]
and since Z = VS/IS :
Z' = Z (NP/NS)² . . . . (41.3)
Thus, to a reasonably good approximation, a tightly-coupled transformer having relatively large winding reactances scales an impedance according to the square of the turns ratio.
Now let us consider the problem in reverse, and see what a transformer does to the output impedance of a generator. Here we will call the apparent source impedance as seen from the secondary side of the transformer Zg', with Zg as the actual generator output impedance. The relationship between Zg' and Zg is perhaps guessable; but to derive it mathematically requires a trick, which is that of defining an equivalent circuit with all of the source resistance moved to the secondary side of the transformer. A suitable approach to the derivation is then to write expressions for the voltage V across the load using both the original and the equivalent circuits, and then equate the two expressions.
For the left-hand circuit, let us define Z' as the load impedance seen by the generator, its relationship to the load Z being given by equation (41.3) above:
Z' = Z (NP/NS)²
The voltage V' is then the output of a potential divider formed by Zg in series with Z', i.e.:
V' = Vg Z' / (Zg + Z')
and V' is related to the load voltage V by the turns ratio, i.e.:
V = V' NS / NP
Hence:
V = (NS/NP) Vg Z' / (Zg + Z')
V = (NS/NP) Vg / [ 1 + (Zg / Z') ] . . . . (41.4)
For the right-hand circuit, V is the output of a potential divider formed by Zg' and Z:
V = Vg' Z / (Zg' + Z)
V = Vg' / [ 1 + (Zg' / Z) ]
where:
Vg' = (NS/NP) Vg
Hence:
V = (NS/NP) Vg / [ 1 + (Zg' / Z) ]
Equating this to expression (41.4) gives:
1 + (Zg / Z') = 1 + (Zg' / Z)
i.e.,
Zg' = Zg Z / Z'
but, by rearrangement of equation (41.3), Z / Z' = (NS/NP)², hence:
Zg' = Zg (NS/NP)² . . . . (41.5)
Thus a tightly-coupled output transformer scales the source impedance according to the square of the turns ratio; a generator with a low output impedance is converted into a generator with a high output impedance by means of a step-up (NS > NP) transformer, and vice versa.
The broadband output transformer of a fairly typical 100W short-wave radio transmitter (Kenwood TS430s) is shown on the right. The transformer core is a block of ferrite with two hollow channels passing through it (known colloquially as a "pig nose"). The primary winding consists of two short lengths of copper or brass tubing passing through the core and connected together at one end by a strip of copper-laminate board. The secondary winding is a length of PTFE-coated multi-strand silver-plated copper wire threaded through the copper tubes (the reason for the choice of materials is explained in another article18). To make a complete turn around the core, a conductor must pass through one hole and back through the other. As shown below diagrammatically, the copper tubes form a centre-tapped single turn, with the DC power supply (B+) connected to the centre tap, and the other ends connected to the collectors of the RF power transistors (a matched pair of 2SC2290s). The transformer in the photograph has four turns and so increases the amplifier output impedance by a factor of 16.
There is something more to the use of an output transformer than impedance transformation however, the principal issue being that the transmitter discussed above uses a 13.8V power supply and yet must deliver 100W into a 50Ω load. The required output power and target load impedance define the output voltage as V = √(PR) = √(100 × 50) = 70.7V RMS, i.e., 70.7 × 2√2 = 200V peak-to-peak (p-p). A simplified version of the power amplifier circuit is shown below, and we can deduce the minimum allowable transformer step-up ratio by examining it.
18 Components and Materials. [www.g3ynh.info/]
This is a so-called push-pull amplifier circuit, in which one transistor provides the positive half-cycle of the output waveform, and the other transistor provides the negative half-cycle. When a bipolar transistor is turned hard on, its collector voltage does not go to zero, but stops at some saturation voltage, which is usually around 1V. Also, it is not a good idea to drive the transistors close to saturation, because this will lead to considerable distortion of the output waveform. Therefore we must assume that the output stage can produce positive and negative half-cycles of no more than about 12.5V across half of the primary winding, i.e., 25V per transistor across the whole winding, hence 50V p-p. To obtain 200V p-p (70.7V RMS) therefore, a voltage step-up ratio of 1:4 is required. The fact that this transformation increases the source impedance by a factor of 16 is a secondary consideration, and is of no great concern unless the source impedance begins to approach the design load resistance, the latter situation being associated with low transfer efficiency and poor load regulation as discussed earlier. It follows that, to keep the output impedance as low as possible, a step-up ratio just sufficient to provide the required output voltage is optimal. The actual output impedance (Rg') of the TS430s transmitter (measured at the antenna socket, see the example at the end of section 38) is about 23Ω (measured 23.3 ± 3.4Ω at 1.9MHz) for a design load resistance of 50Ω. The output impedance of the power amplifier (Rg) is therefore approximately 23/16 = 1.4Ω. Dye and Granberg19 give an approximate formula for calculating the output impedance of a transistor power amplifier below 100MHz as:
Rg = (Vcc - Vsat)² / Pout(max)
where Vcc is the supply voltage, and Pout(max) is the maximum power available from the amplifier. If we assume a saturation voltage Vsat of about 1V, this gives:
Rg = 12.8² / 100 = 1.64Ω.
Multiplying this by 16 gives Rg' = 26.2Ω, which is within one standard deviation of the measurement, without taking any of the circuitry between the power amplifier and the antenna socket into account.
19 Radio Frequency Transistors, Norm Dye and Helge Granberg. Motorola inc. / Butterworth Heinemann, Newton MA. 1993. ISBN 0-7506-9059-3. Output impedance of a power amplifier: p118.
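The impedance-scaling rules (41.3) and (41.5), and the Dye and Granberg estimate, are easily checked numerically. A small Python sketch using the values quoted above (the function names are my own):

```python
def z_seen_at_primary(z_load, n_p, n_s):
    # Eq. (41.3): a load on the secondary appears at the primary
    # scaled by the turns ratio squared, Z' = Z * (NP/NS)**2.
    return z_load * (n_p / n_s) ** 2

def zg_seen_from_secondary(z_gen, n_p, n_s):
    # Eq. (41.5): a source impedance is scaled the other way,
    # Zg' = Zg * (NS/NP)**2.
    return z_gen * (n_s / n_p) ** 2

def pa_output_impedance(vcc, vsat, p_max):
    # Dye & Granberg's approximation for a transistor PA below 100MHz:
    # Rg = (Vcc - Vsat)**2 / Pout(max).
    return (vcc - vsat) ** 2 / p_max

rg = pa_output_impedance(13.8, 1.0, 100.0)   # about 1.64 ohms
rg_ant = zg_seen_from_secondary(rg, 1, 4)    # through the 1:4 output transformer
```

With the 1:4 transformer, the 1.64Ω estimate appears at the antenna socket as about 26.2Ω, in line with the measured 23.3 ± 3.4Ω.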
Notice incidentally, that the power amplifier is shown as feeding into a low-pass filter (LPF) before connection to the antenna system. Such a filter is always necessary with a broad-band transistor power amplifier, because such amplifiers produce relatively high levels of harmonics. The push-pull configuration actually cancels even harmonics, but there are still high levels of odd harmonics (3rd, 5th, 7th etc.) which must be removed (in engineering, the first harmonic is the same as the fundamental). In section 34 we noted that the power amplifier protection circuitry operates when the load impedance is too high, as well as when it is too low. This is not usually necessary for the protection of the amplifier, but the LPF may not provide the required degree of harmonic attenuation when incorrectly terminated; and so the protection circuitry helps to keep spurious emissions within acceptable limits if the load impedance is too high.
It should be obvious by inspection of the '1:1' auto-transformer circuit that the voltage-current relationship for the load seen by the generator is given by:
V/I = jXL // Z
If the coil has losses moreover, we can represent these as a resistance (RL say) in series with the coil:
V/I = ( RL + jXL ) // Z
We can also transform the impedance of the coil into its parallel form (see section 19), in which case the load on the generator becomes:
V/I = RLp // jXLp // Z
The implication is that, unless the magnitude of the inductive reactance is very much larger than the magnitude of the load impedance, the transformer will not preserve the load phase relationship. If the load is reactive, the parallel loss component will also alter the load phase relationship slightly.
In the previous section, we introduced the idea that an impedance located on one side of a transformer can be transferred to the other side in an equivalent circuit by the act of multiplying it by the turns-ratio squared. So we might represent the inductance of a transformer as a separate inductance L in parallel with the primary side of an ideal transformer (of otherwise infinite inductance), or we might represent it as an inductance L' in parallel with the secondary side. The
transformation rule (41.3) tells us that:
j2πfL = j(NP/NS)² 2πfL'
i.e.,
L = (NP/NS)² L'
This is a remarkable result because, not only does it give us the basis for constructing equivalent circuits to serve as models for real transformers, it also tells us something about inductors. The expression can only be true if the inductance of a coil is proportional to the square of the number of turns in it. We can see why by considering the two 1:N auto-transformer equivalent circuits shown below:
In the left-hand circuit, the inductance of the transformer is referred to the primary side, and for reasons of convention is given the symbol AL. In the right-hand circuit, the inductance is referred to the secondary side and is given the symbol L. From the foregoing discussion, we can immediately write the relationship between L and AL:
L = N² AL
We can also interpret L as the inductance of the whole coil, and AL as the inductance of one turn of the coil. AL is known as the inductance factor, and depends on the physical dimensions of the coil and the nature of any magnetic core material. It may be interpreted either as the inductance of a one-turn coil, or as the inductance of an auto-transformer referred across a one-turn tap. AL has the units of inductance (Henrys), but is more informatively given units of inductance / turn² ("Henrys per turn-squared").
Continuously variable auto-transformer: One of the drawbacks of ferrite or iron-cored transformers as impedance matching devices is that the transformation ratio can only be altered in a stepwise fashion, by changing windings or tappings one turn at a time (or half a turn if the core has two holes). If the turns in the coils are few, as tends to be the case in radio-frequency applications, then the steps available can be very coarse indeed.
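The relation L = N²AL can be exercised with a short sketch; the AL value below is a made-up illustrative figure, not one taken from the text:

```python
def inductance(n_turns, a_l):
    # L = N**2 * AL, where AL is the inductance factor of the core.
    return n_turns ** 2 * a_l

def turns_required(l_target, a_l):
    # Invert L = N**2 * AL to find the number of turns needed.
    return (l_target / a_l) ** 0.5

# Hypothetical core with AL = 40 nH/turn**2: 25 turns give 25 uH,
# and conversely 25 uH requires 25 turns.
l_coil = inductance(25, 40e-9)
n_turns = turns_required(25e-6, 40e-9)
```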
It is however possible to make a continuously variable inductor or auto-transformer by rotating a coil about its axis and tapping into it with a rolling contact, the coil end-connections being made by slipping contacts (known, for historical reasons, as "brushes"). Such a device is known colloquially as a "roller coaster", and an example is shown in the photograph on the right. This is the motor-driven variable impedance transformer from a 1957 vintage Collins 180L-3A automatic HF antenna tuner. The tuner is designed to match end-fed wire (Marconi) antennas of 14 to 40 metres in length over a frequency range of 2 to 25MHz, and is for use with transmitters with an output of up to 150W and a preferred load impedance of 52Ω. An interesting feature of the transformer is that it achieves a continuous transition from step-down to step-up by having an overwind (see diagram right), i.e., the brush contact at one end of the coil goes to a centre-tap, and the end of the coil is left unconnected. The coil has 28 turns, and the input tap is at 14 turns, so a maximum impedance step-up of approximately 4:1 is obtainable.
The disadvantage of the Collins transformer is that the coil does not have a magnetic core. The stray magnetic fields will therefore induce currents (eddy currents) in the surrounding metalwork and give rise to resistive losses. The open magnetic circuit also implies that the impedance transformation obtained will not be exactly proportional to the square of the turns ratio; and, due to the absence of a magnetic core, the inductance might appear on first consideration to be rather low. The inductance for the whole coil, estimated using Wheeler's formula20, is about 20μH, giving only about 5μH when referred to the primary side. This will give rise to significant phase shift at lower frequencies, the inductive reactance seen by the transmitter at 2MHz being something around 2π × 2×10⁶ × 5×10⁻⁶ = 63Ω. It transpires however, that the choice of a primary reactance about equal to the target input impedance at the lowest operating frequency is sensible, because in addition to the impedance transformer, the antenna tuner also has power-factor correction components. In the process of adjusting a reactance in series with the antenna to achieve a resistive input impedance, any phase shift due to the transformer is automatically taken into account. Consequently, it is possible to keep the inductance small, which helps in the avoidance of self-resonance problems at the high end of the operating frequency range.
This is the prototype of all antenna tuners in the sense that it approaches the impedance matching problem in the simplest possible way. The object of the exercise in every case is to transform the impedance in its two dimensions: magnitude and phase; and the most direct approach is to do so using one device which only affects the magnitude and one device which only affects the phase. The magnitude-correcting engine is the variable auto-transformer, and the phase-correcting engine is a series reactance; relays being provided to insert a series coil in the event that the antenna is capacitive, or a series capacitor in the event that the antenna is inductive. Such a matching unit can, of course, be controlled manually, by the expedient of providing it with control knobs and switches instead of motors and relays. This approach replaces the automatic control system with a human being, but makes no allowance for the fact that humans in general have little aptitude for the task. Here we monitor the load magnitude and phase using bridge circuits, which are the subject of a separate article. The bridges produce error signals, which tell their respective control systems which way to go in the event that the error exceeds a certain preset
20 Solenoids. D W Knight [www.g3ynh.info/]
threshold. Not shown on the diagram, but necessary to make the system work, are limit switches, two for each variable device. These tell the control system when a motorised device has hit one of its end-stops: so that the change-over relay can be switched and the motor direction reversed in the case of the impedance transformer; so that the switch-over from coil to capacitor or vice versa can be made in the case of the series reactance network; and as protection against motor burn-out in the event that the load is outside the matching range. The control systems for magnitude and phase are shown as being completely separate; which they are, except in respect of common signals, such as the request for a tuning carrier or an instruction to reduce power, which they might send to the transmitter on detecting a matching error. The independence of the two systems is possible because the two chosen matching criteria are independent, i.e., the two matching processes can proceed simultaneously without altering the outcome. The system can even adjust itself when presented with a speech SSB signal, but will reach a solution fastest when the error signals are continuously available. One desirable property of this matching system, and of any properly designed matching system, is that it corrects for the defects of its own components. In this case, when the phase control system adds series inductance (for example), the increasing resistance of the coil will increase the impedance magnitude seen at the input, but the magnitude control system will simply back-off to compensate. Similarly, the inductance of the impedance transformer will cause a positive phase shift, but the phase control system will back-off in the capacitive direction to compensate. While the simple magnitude-phase matching system is entirely practical however, it has never been particularly popular. The reason is that it is difficult to design an efficient and resonance-free variable broadband transformer.
The required transformations can just as well be obtained using only variable capacitors and inductors, and this subject is examined in detail in a separate article21.
Converting an impedance into an admittance is simply a matter of taking the reciprocal. Admittance is usually given the symbol Y (and here we put it in bold because it is complex), hence: Y = 1/Z. Now, if Z = R + jX, this gives:
Y = 1/( R + jX )
which can be put into the a+jb form by multiplying the numerator and denominator by the complex conjugate of the denominator, i.e.:
Y = (R - jX) / [ (R + jX)(R - jX) ]
hence:
Y = (R - jX) / (R² + X²)
This expression can be written:
Y = G + jB
where the real part of the admittance, G, is called the conductance, and the imaginary part, B, is called the susceptance (of the network under consideration). From the above, we can extract definitions for conductance and susceptance, which are:
Conductance: G = R / (R² + X²)
Susceptance: B = -X / (R² + X²)
Now observe that when the impedance of a network is purely resistive, the conductance is 1/R; and G = 1/R is the definition of conductance in DC electrical theory. When an impedance is purely reactive, the susceptance is B = -1/X (susceptance has no DC counterpart). Admittance, conductance, and susceptance, of course, have units; and the modern unit in this case is the Siemens, which is given the dimension symbol capital S (as opposed to the second, which has a small s). In old textbooks and papers, the unit of admittance is often given as the 'Mho' (Ohm spelt backwards), but in either case the actual dimensions are reciprocal Ohms, i.e., 1/Ω, or Ω⁻¹. A pure resistance of 50Ω therefore corresponds to a conductance of 1/50 Siemens, i.e., 20 milli-Siemens or 20mS. A pure reactance X = 100Ω corresponds to a susceptance B = -10mS, and so on. "Siemens", incidentally, is a name, like "Jones". The singular of Jones is not Jone, and so Siemens keeps its final s in both singular and plural forms (one Siemens, several Siemens). The plural "Siemenses" is not recommended (but is a lot less embarrassing than the quasi-singular "Siemen").
The double-slash product was previously defined as (17.5):
a // b = ab/(a+b)
We can demonstrate that addition is the reciprocal-space counterpart of the double-slash operator by transforming the parallel impedance formula; i.e., if:
Z = Z1 Z2 / (Z1 + Z2)
then
Y = (Z1 + Z2) / Z1 Z2
If we let Y1 = 1/Z1 and Y2 = 1/Z2, then:
Y = Y1 Y2 [ (1/Y1) + (1/Y2) ]
which rearranges to:
Y = Y1 + Y2
i.e., when two networks are placed in parallel, their admittances are added. Recall that the formula for resistances in parallel, R = R1R2/(R1+R2), is a rearrangement of the expression:
1/R = (1/R1) + (1/R2)
It should now be apparent that what the formula really says is:
G = G1 + G2
The formula for impedances in parallel is of course a rearrangement of:
1/Z = (1/Z1) + (1/Z2)
and this expression can be extended to cover any number of impedances in parallel by adding more terms, i.e.:
1/Z = (1/Z1) + (1/Z2) + (1/Z3) + . . . . . + (1/Zn)
This is a sum of admittances, and may be re-written as:
Y = Y1 + Y2 + Y3 + . . . . . + Yn
We can express this result using the double-slash notation:
1/( Z1 // Z2 // Z3 // . . . // Zn ) = Y1 + Y2 + Y3 + . . . . . + Yn
where Yk = 1/Zk (k being any subscript).
The admittance representation of an electrical circuit is no less authoritative than the impedance representation, and is no more difficult to use. Admittances are phasors, and all of the phasor techniques we have developed in this chapter will work on them. It is, however, helpful to remember that a (numerically) large admittance corresponds to a small impedance, and vice versa.
Reciprocal-space counterparts:
Impedance space                          Admittance space
Impedance    Z = R + jX = 1/Y            Admittance    Y = G + jB = 1/Z
Resistance   R = G/(G² + B²)             Conductance   G = R/(R² + X²)
Reactance    X = -B/(G² + B²)            Susceptance   B = -X/(R² + X²)
Pure resistance   R = 1/G                Pure conductance   G = 1/R
Pure reactance    X = -1/B               Pure susceptance   B = -1/X
Inductive reactance   XL = 2πfL          Inductive susceptance   BL = -1/(2πfL)
Capacitive reactance  XC = -1/(2πfC)     Capacitive susceptance  BC = 2πfC
// operator                              + operator
+ operator                               // operator
Straight line                            Circle
Circle                                   Straight line
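Python's built-in complex type uses the same j notation, so the conversions in the table can be checked directly. A small sketch with illustrative values:

```python
def admittance(z):
    # Y = 1/Z, giving G + jB with G = R/(R**2 + X**2), B = -X/(R**2 + X**2).
    return 1.0 / z

y_res = admittance(50 + 0j)   # pure 50 ohm resistance: G = 0.02 S (20 mS)
y_ind = admittance(100j)      # pure reactance X = +100 ohm: B = -0.01 S (-10 mS)

# Admittances of parallel branches simply add; the reciprocal of the sum
# is the equivalent impedance of the combination.
y_total = admittance(50 + 25j) + admittance(-100j)
z_equiv = 1.0 / y_total
```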
One further issue of which the reader will need to be aware is that two different definitions of admittance appear in the electrical and electronic literature. Some authors (e.g., Hartshorn22) use Y = G + jB, as is done here; and others (e.g., Langford-Smith23) use Y = G - jB. The 'alternative' definition gives B = X/(R² + X²), and thus BL = 1/(2πfL), and BC = -2πfC. In the next section, we analyse the parallel resonator bandpass filter and determine the relationship between resonant frequency, bandwidth and Q. In the author's first attempt at the derivation, the definition Y = G - jB was used, and the formula which resulted had it that either Q is negative or f0 is negative. The change to Y = G + jB fixed the problem; and so, since Q is by definition positive when loss resistance is positive, the other definition is wrong according to the convention that frequency is positive. We may also note a reflection symmetry in the correct choice, in that we have XL = 2πfL in impedance space, and BC = 2πfC in admittance space, etc.; i.e., inductive reactance and capacitive susceptance are positive, capacitive reactance and inductive susceptance are negative.
V0 = Vg RP / ( RS + RP )
We will also avail ourselves of a useful property of the potential divider formula (35.4), which is that if we multiply it by a unit quantity consisting of the source resistance divided by itself (i.e., RS/RS), it becomes a double-slash product:
V0 = Vg ( RP // RS ) / RS
Similarly, for the output voltage in general:
V = Vg ( RP // jXCp // jXLp ) / [ RS + ( RP // jXCp // jXLp ) ]
and using the associative rule (17.4):
V = Vg ( RS // RP // jXCp // jXLp ) / RS
So we can write the ratio V/V0 as:
V/V0 = ( RS // RP // jXCp // jXLp ) / ( RP // RS )
The bandwidth function is the magnitude of this expression; but with all of the components represented as impedances, anyone attempting to expand and simplify it, or isolate part of it as the load, will have a hard time keeping track of all of the intermediate terms. We will therefore convert it into an admittance problem, using the relationship:
1/( Z1 // Z2 // Z3 // . . . // Zn ) = Y1 + Y2 + Y3 + . . . . . + Yn
Hence:
V/V0 = [ 1 / ( GS + GP + jBCp + jBLp ) ] / [ 1 / ( GP + GS ) ]
where G stands for conductance and B for susceptance, and GS = 1/RS , GP = 1/RP , BCp = -1/XCp and BLp = -1/XLp. The expression above can be re-written:
V/V0 = ( GS + GP ) / ( GS + GP + jBCp + jBLp )
and the magnitude is:
|V| / V0 = √[ (GS + GP)² ] / √[ (GS + GP)² + (BCp + BLp)² ]
i.e.,
|V| / V0 = ( GS + GP ) / √[ (GS + GP)² + (BCp + BLp)² ]
This can be plotted against frequency by substituting BCp = 2πfCp and BLp = -1/(2πfLp), but we will not bother to do so here because it is identical in appearance to the graph of |I|/I0 for a series resonator given in section 29. We will instead go on to determine the half-power points by noting that, whatever proportion of the parallel resistance RP is designated as the load, power will always be delivered to it in proportion to |V|², so the half-power points occur when |V| = V0/√2. Hence, at the half-power points we have:
( GS + GP ) / √[ (GS + GP)² + (BCp + BLp)² ] = 1/√2

which, upon squaring, gives:

(GS + GP)² / [ (GS + GP)² + (BCp + BLp)² ] = 1/2

and upon inversion gives:

[ (BCp + BLp)² / (GS + GP)² ] + 1 = 2

i.e.:

(BCp + BLp)² / (GS + GP)² = 1

and taking the square root:

(BCp + BLp) / (GS + GP) = ±1

Thus:

BLp + BCp = ±( GS + GP )

and if we define the sum GS+GP as GQ (i.e., the conductance which determines the Q):

BLp + BCp = ±GQ

Now, using the substitutions BCp=2πfCp and BLp=-1/(2πfLp), we obtain:

[ -1/(2πfLp) ] + 2πfCp = ±GQ

and by factoring out 1/(2πfLp) from the left-hand side and re-arranging:

±2πfLpGQ = -1 + (2πf)²LpCp

i.e.,

[4π²LpCp]f² ∓ [2πLpGQ]f - 1 = 0

This is a quadratic equation in f with a = 4π²LpCp, b = ∓2πLpGQ, and c = -1. It has four solutions, as was the case for the series resonator (section 29), these being the upper and lower bandwidth limits for positive and negative frequencies. To solve it we apply the standard formula:

f = [ -b ± √( b² - 4ac ) ] / 2a

Hence:

f = { ±2πLpGQ ± √[ (2πLpGQ)² + 4×4π²LpCp ] } / ( 2×4π²LpCp )

and, dividing throughout by 2π and using the substitution Lp = √(Lp²) to obtain cancellation of Lp from all but one term:

f = { ±LpGQ ± √[ (LpGQ)² + 4(Lp²/Lp)Cp ] } / ( 4πLpCp )

f = { ±GQ ± √( GQ² + 4Cp/Lp ) } / ( 4πCp )

Now, since √(GQ² + 4Cp/Lp) will always be larger than GQ, we can identify the positive-frequency upper bandwidth limit as:

f+ = { √( GQ² + 4Cp/Lp ) + GQ } / ( 4πCp )

and the positive-frequency lower bandwidth limit as:

f- = { √( GQ² + 4Cp/Lp ) - GQ } / ( 4πCp )

and the bandwidth is:

fw = f+ - f- = GQ/(2πCp)

This is the admittance counterpart of the result obtained at this stage in the derivation of the Q of a series resonator (equation 29.3), and so we will deduce that the bandwidth of the parallel resonator BPF is f0/Q0, and use this deduction to find a definition for Q.
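The whole derivation can be checked numerically. The following sketch (with assumed component values, not from the text) builds V/V0 in both the double-slash impedance form and the admittance form, then confirms that f+ and f- are half-power points and that the bandwidth is GQ/(2πCp):

```python
# Numerical check of the bandwidth derivation (assumed values).
import math

Rs, Rp = 2.0e3, 10.0e3        # assumed source and parallel resistances
Lp, Cp = 10e-6, 100e-12       # assumed resonator values

Gq = 1 / Rs + 1 / Rp          # GQ = GS + GP

def par(*zs):
    """Double-slash product: Z1 // Z2 // ... // Zn."""
    return 1 / sum(1 / z for z in zs)

def ratio(f):
    """V/V0 in the impedance (double-slash) form."""
    w = 2 * math.pi * f
    return par(Rs, Rp, -1j / (w * Cp), 1j * w * Lp) / par(Rp, Rs)

def ratio_y(f):
    """V/V0 in the admittance form: GQ / (GQ + j(BCp + BLp))."""
    w = 2 * math.pi * f
    return Gq / complex(Gq, w * Cp - 1 / (w * Lp))

# The two forms agree at an arbitrary spot frequency:
assert abs(ratio(3.7e6) - ratio_y(3.7e6)) < 1e-9

# Half-power points from the solution of the quadratic:
root = math.sqrt(Gq**2 + 4 * Cp / Lp)
f_hi = (root + Gq) / (4 * math.pi * Cp)
f_lo = (root - Gq) / (4 * math.pi * Cp)

# |V|/V0 = 1/sqrt(2) at both limits, and fw = GQ/(2*pi*Cp):
assert abs(abs(ratio(f_hi)) - 1 / math.sqrt(2)) < 1e-9
assert abs(abs(ratio(f_lo)) - 1 / math.sqrt(2)) < 1e-9
assert abs((f_hi - f_lo) - Gq / (2 * math.pi * Cp)) < 1.0
```

The assertions pass because, at f+ and f-, the total susceptance BCp + BLp is exactly ±GQ by construction.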
f0/Q0 = GQ/(2πCp)

2πf0Cp = BCp0 = Q0 GQ

Q0 = BCp0 / GQ

Now let RQ = 1/GQ, where RQ is "the resistance which determines the Q". Also observe that BCp0 = -1/XCp0, and at resonance -XCp0 = XLp0. Hence:

Q0 = -RQ / XCp0 = RQ / XLp0

or, in keeping with the definition of resonant Q given in section 31:

Q0 = RQ / √(Lp/Cp)      45.1

RQ is simply the parallel combination of the source resistance, the load resistance, and the dynamic resistance of the resonator, i.e.:

RQ = RS // Rp0 // RLoad

This result gives us the theoretical information we need in order to be able to design parallel resonant bandpass filters. Firstly, we may observe that the source and load impedances are effectively in parallel with the resonator, which is why any minor source and load reactances can be lumped with the resonator reactances and cause only a detuning effect (if such reactances are very large, however, they will cause a significant change in the dynamic resistance, and the problem is best re-analysed from scratch). The source, load, and dynamic resistances, however, are critical in determining the Q, and we need to obtain high values for all of them in order to obtain a high Q. We can of course adjust the source and load resistances using transformers; and as we shall see shortly, we can replace the resonator coil with a transformer, so that the inductor and the transformer become one and the same. Before we look at such coupling schemes, however, we must draw attention to a particularly misleading inference of the formula, which is that high Q can be obtained by making the ratio Lp/Cp as small as possible. This suggestion has appeared in at least one amateur radio publication, but it is a fallacy. If the reactive components are of reasonable quality, the parallel-form L/C ratio (Lp/Cp) is only slightly different from the series-form L/C ratio; and as we showed in section 21, imaginary resonance can occur if the L/C ratio becomes too low.
The imaginary resonance condition is entirely a function of the series (loss) resistances of the coil and the capacitor. It is nothing to do with the source and load resistances, because avoidance of imaginary resonance is a matter of ensuring that the +90° component of the coil current at resonance is sufficiently large to cancel the -90° component of the capacitor current (or vice versa, but in practice coils are more lossy than capacitors). Consequently, the design procedure for a parallel resonator BPF is to make the L/C ratio large enough to obtain a good strong resonance (without making the inductance so large that stray capacitance and coil self-capacitance prevent the target maximum frequency from being reached), and then to make Rp0 even larger (by minimising loss resistances) in order to obtain a useful working Q.
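A short worked example of the loaded-Q formula (45.1) may help. All of the resistance and component values below are assumed for illustration:

```python
# Illustrative application of Q0 = RQ / sqrt(Lp/Cp), equation (45.1),
# where RQ = RS // Rp0 // RLoad. All values are assumed.
import math

def par(*rs):
    """Parallel (double-slash) combination."""
    return 1 / sum(1 / r for r in rs)

Lp, Cp = 10e-6, 100e-12
RS, Rp0, RLoad = 10e3, 50e3, 10e3    # source, dynamic, load resistances

RQ = par(RS, Rp0, RLoad)             # the resistance which determines the Q
Q0 = RQ / math.sqrt(Lp / Cp)         # equation (45.1)

f0 = 1 / (2 * math.pi * math.sqrt(Lp * Cp))
fw = f0 / Q0                         # bandwidth f0/Q0
print(RQ, Q0, fw)
```

Note how RQ (about 4.5 kΩ here) is dominated by whichever of the three resistances is smallest, which is why all three must be kept high for a narrow filter.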
But it would be a lot more useful intuitively if we could express it in terms of the coil and capacitor impedances in their series (R+jX) forms. We can do so by using the series-to-parallel transformation (section 19); and using the definition of Q from section 31 as precedent, we expect a result in the form:

Q0u = √(L/C) / R . . . (46.1)

the point being to find out what is meant by R in this case. The translation from parallel to series form is indicated in the set of equivalent circuits shown below:
Here we identify Rp0 as RCp//RLp, i.e.:

Rp0 = RCp RLp / ( RCp + RLp )

and an expansion in terms of the series forms of the impedances has already been given as equation (20.5):

Rp0 = (RC² + XC²)(RL² + XL²) / [ RL(RC² + XC²) + RC(RL² + XL²) ]

The unloaded Q is defined as:

Q0u = Rp0 / √( -XCp XLp )

and, from the series-to-parallel transformation (equation 19.3b), we have:

XCp = (RC² + XC²) / XC . . . (46.2)

and

XLp = (RL² + XL²) / XL . . . (46.3)

Putting all of this together we have:

Q0u = { (RC² + XC²)(RL² + XL²) / [ RL(RC² + XC²) + RC(RL² + XL²) ] } / √[ -(RC² + XC²)(RL² + XL²) / (XC XL) ]

and noting that (-XC XL) = (L/C), this rearranges to:

Q0u = √(L/C) √[ (RC² + XC²)(RL² + XL²) ] / [ RL(RC² + XC²) + RC(RL² + XL²) ]

So, at this point we have extracted √(L/C) as required by equation (46.1), and the resistance by which √(L/C) must be divided in order to obtain Q0u is:
R = [ RL(RC² + XC²) + RC(RL² + XL²) ] / √[ (RC² + XC²)(RL² + XL²) ] . . . (46.4)
Which, upon squaring and expanding the numerator, gives:

R² = { RL²(RC² + XC²)² + RC²(RL² + XL²)² + 2RLRC(RC² + XC²)(RL² + XL²) } / [ (RC² + XC²)(RL² + XL²) ]

i.e.:

R² = RL²(RC² + XC²)/(RL² + XL²) + RC²(RL² + XL²)/(RC² + XC²) + 2RLRC
The simplification we require here comes from noting that the terms (RC² + XC²) and (RL² + XL²) occur in the expressions for XCp and XLp given above (equations 46.2 and 46.3), and that at resonance -XCp = XLp. Hence:

(RC² + XC²) / (-XC) = (RL² + XL²) / XL

i.e.:

(RC² + XC²) / (RL² + XL²) = -XC / XL

and

(RL² + XL²) / (RC² + XC²) = XL / (-XC)

Hence:

R² = RL²(-XC/XL) + RC²(XL/-XC) + 2RLRC

which can be factorised:

R² = { RL√(-XC/XL) + RC√(XL/-XC) }²

Hence:

R = RL√(-XC/XL) + RC√(XL/-XC) . . . (46.5)

(strictly ±R, but resistance is positive, allowing us to ignore the negative solution) and so:

Q0u = √(L/C) / { RL√(-XC/XL) + RC√(XL/-XC) }

This is an exact solution provided that RL and RC do not vary; and once again it may be assumed exact for normal engineering purposes, because RL and RC will not vary significantly in the vicinity of the resonant frequency. Note however that XL = -XC to an extremely good approximation when the L/C ratio is reasonably large, and this relationship is exact when RL = RC. Hence, for most practical purposes:

Q0u = √(L/C) / ( RL + RC )      46.6

This means that the unloaded Q of the parallel resonator is the same as that of the series resonator: it is the square root of the L/C ratio divided by the total series resistance. In other words, we can estimate the unloaded Q of the parallel resonator by considering it to be a series resonator connected as a loop.
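The closeness of the approximation (46.6) to the exact expression (46.5) can be checked numerically for an assumed lossy resonator. In this sketch the phase-resonance frequency is located by bisection on the imaginary part of the total admittance (the component and loss values are assumptions for the example):

```python
# Compare the exact unloaded-Q expression (46.5) with the
# approximation (46.6) for an assumed lossy parallel resonator.
import math

L, C = 10e-6, 100e-12
RL, RC = 5.0, 0.5              # assumed series loss resistances

def im_admittance(f):
    """Imaginary part of the total admittance of the two branches."""
    w = 2 * math.pi * f
    ZL = complex(RL, w * L)
    ZC = complex(RC, -1 / (w * C))
    return (1 / ZL + 1 / ZC).imag

# Locate phase resonance (Im(Y) = 0) by bisection, bracketing the
# nominal resonance 1/(2*pi*sqrt(LC)) ~ 5.03 MHz.
lo, hi = 4e6, 6e6
for _ in range(100):
    mid = (lo + hi) / 2
    if im_admittance(lo) * im_admittance(mid) <= 0:
        hi = mid
    else:
        lo = mid
f0 = (lo + hi) / 2

w0 = 2 * math.pi * f0
XL, XC = w0 * L, -1 / (w0 * C)
R_exact = RL * math.sqrt(-XC / XL) + RC * math.sqrt(XL / -XC)

Q_exact = math.sqrt(L / C) / R_exact        # equation (46.5)
Q_approx = math.sqrt(L / C) / (RL + RC)     # equation (46.6)
print(f0, Q_exact, Q_approx)
```

For component values of reasonable quality like these, the two Q figures differ by well under 1%, which is the point of equation (46.6).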
I0 = V [ RC/(RC² + XC²) + RL/(RL² + XL²) ]

(V and I0 are now in phase and will be treated as real). Putting the expression onto a common denominator yields:

I0 = V [ RC(RL² + XL²) + RL(RC² + XC²) ] / [ (RC² + XC²)(RL² + XL²) ]      47.1

where the term inside the square brackets is the reciprocal of the dynamic resistance (see equation 20.5), i.e., I0 = V/Rp0. Now, the current circulating in the resonator can be determined from either branch as the total current flowing in the branch, less the current drawn from the generator. Hence the circulating current is simply the imaginary part of the current in the branch. The resonant condition (and the concept of circulation) also implies that the circulating current is of the same magnitude for both branches, but of opposite sign. Hence, if we call the circulating current IQ, then (taking the imaginary parts of the expressions for IC and IL above) we have:

IQ = jVXL/(RL² + XL²) = -jVXC/(RC² + XC²)

and the magnitudes are:

|IQ| = VXL/(RL² + XL²) = V(-XC)/(RC² + XC²)

We can also create a definition involving both branches by taking the geometric mean:

|IQ| = V √{ (-XC XL) / [ (RC² + XC²)(RL² + XL²) ] }

which allows us to extract the L/C ratio:

|IQ| = V √{ (L/C) / [ (RC² + XC²)(RL² + XL²) ] } . . . . (47.2)

Now, let us define the unloaded Q of the resonator as the ratio of the circulating current to the through current:

Q0u = |IQ| / I0

which can be expanded using equations (47.1) and (47.2):

Q0u = √{ (L/C) / [ (RC² + XC²)(RL² + XL²) ] } / { [ RC(RL² + XL²) + RL(RC² + XC²) ] / [ (RC² + XC²)(RL² + XL²) ] }

and rearranged:

Q0u = √(L/C) √[ (RC² + XC²)(RL² + XL²) ] / [ RC(RL² + XL²) + RL(RC² + XC²) ]

The factor following √(L/C) is simply the reciprocal of R as defined in equation (46.4); hence:

Q0u = √(L/C) / R

and we have proved that the current-magnification definition for unloaded Q is identical to that obtained on the assumption that Q is the magnitude of the resonant frequency divided by the bandwidth of the resonator.
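The identity |IQ|/I0 = √(L/C)/R holds term-by-term, and a short numerical sketch confirms it (component and loss values assumed; the evaluation frequency is the nominal 1/(2π√(LC)), close enough to resonance for the check):

```python
# Check (assumed values) that the current-magnification definition
# Q0u = |IQ|/I0 reproduces sqrt(L/C)/R, with R as in equation (46.4).
import math

L, C = 10e-6, 100e-12
RL, RC = 5.0, 0.5
V = 1.0                                    # generator voltage, taken as real

f = 1 / (2 * math.pi * math.sqrt(L * C))   # close to the resonant frequency
w = 2 * math.pi * f
XL, XC = w * L, -1 / (w * C)

ZL2 = RL**2 + XL**2                        # (RL^2 + XL^2)
ZC2 = RC**2 + XC**2                        # (RC^2 + XC^2)

I0 = V * (RC * ZL2 + RL * ZC2) / (ZC2 * ZL2)       # equation (47.1)
IQ = V * math.sqrt((L / C) / (ZC2 * ZL2))          # equation (47.2)

Q_current = IQ / I0                                # current magnification
R = (RL * ZC2 + RC * ZL2) / math.sqrt(ZC2 * ZL2)   # equation (46.4)
Q_formula = math.sqrt(L / C) / R

print(Q_current, Q_formula)
```

The two figures agree to machine precision at any frequency, because the algebraic cancellation in the derivation above is exact.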
The only residual issue is that of why the exact expression for R is (as given by equation 46.5):

R = RL√(-XC/XL) + RC√(XL/-XC)

rather than simply R = RL + RC. This however can be understood by noting that the (real) current flowing through the resonator will be very slightly biased in favour of the branch with the lowest
resistance. This difference is very small for practical resonators of moderate unloaded Q, and may normally be ignored.
To be continued . . . . .