Peter Atkins - Four Laws That Drive The Universe (2007)
1. If the gases in these two containers are at different pressures, when the
pins holding the pistons are released, the pistons move one way or the
other until the two pressures are the same. The two systems are then in
mechanical equilibrium. If the pressures are the same to begin with, there
is no movement of the pistons when the pins are withdrawn, for the two
systems are already in mechanical equilibrium.
the two pressures. If the piston on the right won the battle, then
we would infer that the pressure on the right was higher than
that on the left. If nothing had happened when we released the
pins, we would infer that the pressures of the two systems were
the same, whatever they might be. The technical expression for
the condition arising from the equality of pressures is mechanical
equilibrium. Thermodynamicists get very excited, or at least get
very interested, when nothing happens, and this condition of
equilibrium will grow in importance as we go through the laws.
We need one more aspect of mechanical equilibrium: it will
seem trivial at this point, but establishes the analogy that will
enable us to introduce the concept of temperature. Suppose the
two systems, which we shall call A and B, are in mechanical equi-
librium when they are brought together and the pins are released.
That is, they have the same pressure. Now suppose we break the
link between them and establish a link between system A and a
third system, C, equipped with a piston. Suppose we observe no
change: we infer that the systems A and C are in mechanical equi-
librium and we can go on to say that they have the same pressure.
Now suppose we break that link and put system C in mechanical
contact with system B. Even without doing the experiment, we
know what will happen: nothing. Because systems A and B have
the same pressure, and A and C have the same pressure, we can
be confident that systems C and B have the same pressure, and
that pressure is a universal indicator of mechanical equilibrium.
Now we move from mechanics to thermodynamics and the
world of the zeroth law. Suppose that system A has rigid walls
made of metal and system B likewise. When we put the two
systems in contact, they might undergo some kind of physical
2. A representation of the zeroth law involving (top left) three systems that
can be brought into thermal contact. If A is found to be in thermal equilib-
rium with B (top right), and B is in thermal equilibrium with C (bottom
left), then we can be confident that C will be in thermal equilibrium with
A if they are brought into contact (bottom right).
We are not yet claiming that we know what temperature is; all we
are doing is recognizing that the zeroth law implies the existence
of a criterion of thermal equilibrium: if the temperatures of two
systems are the same, then they will be in thermal equilibrium
when put in contact through conducting walls and an observer
of the two systems will have the excitement of noting that
nothing changes.
We can now introduce two more contributions to the vocabu-
lary of thermodynamics. Rigid walls that permit changes of state
when closed systems are brought into contact—that is, in the
language of Chapter 2, permit the conduction of heat—are called
diathermic (from the Greek words for ‘through’ and ‘warm’).
Typically, diathermic walls are made of metal, but any conduct-
ing material would do. Saucepans are diathermic vessels. If no
change occurs, then either the temperatures are the same or—
if we know that they are different—the walls are classified as
adiabatic (‘impassable’). We can anticipate that walls are adia-
batic if they are thermally insulated, such as in a vacuum flask or
if the system is embedded in foamed polystyrene.
The zeroth law is the basis of the existence of a thermometer,
a device for measuring temperature. A thermometer is just a
special case of the system B that we talked about earlier. It is
a system with a property that might change when put in con-
tact with a system with diathermic walls. A typical thermometer
makes use of the thermal expansion of mercury or the change
in the electrical properties of a material. Thus, if we have a sys-
tem B (‘the thermometer’) and put it in thermal contact with
A, and find that the thermometer does not change, and then
we put the thermometer in contact with C and find that it still
doesn’t change, then we can report that A and C are at the same
temperature.
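The logic can be made concrete with a minimal sketch in Python; the single number standing in for each system's thermometric property, the values, and the tolerance are all illustrative assumptions, not anything from the text.

```python
def no_change_on_contact(x: float, y: float, tol: float = 1e-9) -> bool:
    """True when thermal contact would produce no change, i.e. the two
    systems are already in thermal equilibrium."""
    return abs(x - y) <= tol

A, B, C = 293.15, 293.15, 293.15  # assumed thermometric readings

# B plays the role of the thermometer.
if no_change_on_contact(A, B) and no_change_on_contact(B, C):
    # The zeroth law licenses this conclusion without doing the experiment:
    assert no_change_on_contact(A, C)
    print("A and C are at the same temperature")
```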
There are several scales of temperature, and how they are
established is fundamentally the domain of the second law (see
Chapter 3). However, it would be too cumbersome to avoid
referring to these scales until then, though formally that could
be done, and everyone is aware of the Celsius (centigrade)
and Fahrenheit scales. The Swedish astronomer Anders Celsius
(1701–1744) after whom the former is named devised a scale
on which water froze at 100° and boiled at 0°, the opposite of
the current version of his scale (0°C and 100°C, respectively).
The German instrument maker Daniel Fahrenheit (1686–1736)
was the first to use mercury in a thermometer: he set 0° at the
lowest temperature he could reach with a mixture of salt, ice,
and water, and for 100° he chose his body temperature, a readily
transportable but unreliable standard. On this scale water freezes
at 32°F and boils at 212°F (Figure 3).

3. The Celsius, Fahrenheit, and Kelvin temperature scales compared.
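For reference, the standard conversions between these scales, as a short Python sketch; the function names are my own, and the relations are the usual defining formulas.

```python
def celsius_to_fahrenheit(c: float) -> float:
    """F = (9/5)C + 32: water freezes at 32 F, boils at 212 F."""
    return 9.0 * c / 5.0 + 32.0

def celsius_to_kelvin(c: float) -> float:
    """Thermodynamic (Kelvin) temperature; 0 K is the absolute zero
    discussed below."""
    return c + 273.15

print(celsius_to_fahrenheit(0.0))    # 32.0
print(celsius_to_fahrenheit(100.0))  # 212.0
print(celsius_to_kelvin(0.0))        # 273.15
```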
The temporary advantage of Fahrenheit’s scale was that with
the primitive technology of the time, negative values were rarely
needed. As we shall see, however, there is an absolute zero of tem-
perature, a zero that cannot be passed and where negative tem-
peratures have no meaning except in a certain formal sense, not
one that depends on the technology of the time (see Chapter 5).
It is therefore natural to measure temperatures by setting 0 at
this lowest attainable zero and to refer to such absolute tem-
peratures as the thermodynamic temperature. Thermodynamic
temperatures are denoted T , and whenever that symbol is used
in this book, it means the absolute temperature with T = 0
corresponding to the lowest possible temperature. The most
common scale of thermodynamic temperatures is the Kelvin
energy decreases and the balls sink down on to the lower shelves.
They retain their exponential distribution, with progressively
fewer balls in the upper levels, but the populations die away more
quickly with increasing energy.
When the Boltzmann distribution is used to calculate the
properties of a collection of molecules, such as the pressure of
a gaseous sample, it turns out that the parameter β appearing in
it can be identified with the reciprocal of the (absolute) temperature.
Specifically, β = 1/kT, where k is a fundamental constant called
Boltzmann’s constant. To bring β into line with the Kelvin
temperature scale, k has the value 1.38 × 10⁻²³ joules per kelvin.
The point to remember is that, because β is proportional to 1/T,
as the temperature goes up, β goes down, and vice versa.
There are several points worth making here. First, the huge
importance of the Boltzmann distribution is that it reveals the
molecular significance of temperature: temperature is the para-
meter that tells us the most probable distribution of populations
of molecules over the available states of a system at equilibrium.
When the temperature is high (β low), many states have signif-
icant populations; when the temperature is low (β high), only
the states close to the lowest state have significant populations
(Figure 4). Regardless of the actual values of the populations,
they invariably follow an exponential distribution of the kind
given by the Boltzmann expression. In terms of our balls-on-
shelves analogy, low temperatures (high β) correspond to our
throwing the balls weakly at the shelves so that only the lowest
the tongue. Nor are the values of β that typify a cool day (10°C,
corresponding to 2.56 × 10²⁰ J⁻¹) and a warmer one (20°C,
corresponding to 2.47 × 10²⁰ J⁻¹).
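These values follow directly from β = 1/kT; a few lines of Python reproduce them, the only input being the value of Boltzmann's constant quoted above.

```python
k = 1.38e-23  # Boltzmann's constant, joules per kelvin (value quoted above)

def beta(T: float) -> float:
    """beta = 1/kT, in inverse joules, for an absolute temperature T."""
    return 1.0 / (k * T)

for t_celsius in (10.0, 20.0):
    T = t_celsius + 273.15  # convert to the thermodynamic temperature
    print(f"{t_celsius:.0f} C: beta = {beta(T):.3g} J^-1")
# 10 C: beta = 2.56e+20 J^-1  (the 'cool day' value)
# 20 C: beta = 2.47e+20 J^-1  (the 'warmer day' value)
```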
The third point is that the existence and value of the fun-
damental constant k is simply a consequence of our insist-
ing on using a conventional scale of temperature rather than
the truly fundamental scale based on β. The Fahrenheit, Cel-
sius, and Kelvin scales are misguided: the reciprocal of tem-
perature, essentially β, is more meaningful, more natural, as
a measure of temperature. There is no hope, though, that it
will ever be accepted, for history and the potency of simple
numbers, like 0 and 100, and even 32 and 212, are too deeply
embedded in our culture, and just too convenient for everyday
use.
Although Boltzmann’s constant k is commonly listed as a
fundamental constant, it is actually only a recovery from a
historical mistake. If Ludwig Boltzmann had done his work
before Fahrenheit and Celsius had done theirs, then it would
have been seen that β was the natural measure of temperature,
and we might have become used to expressing temperatures in
the units of inverse joules with warmer systems at low values
of β and cooler systems at high values. However, conventions
had become established, with warmer systems at higher tem-
peratures than cooler systems, and k was introduced, through
kβ = 1/T, to align the natural scale of temperature based on
β to the conventional and deeply ingrained one based on T.
Thus, Boltzmann’s constant is nothing but a conversion factor
between a well-established conventional scale and the one that,
with hindsight, society might have adopted. Had it adopted β
We have not yet arrived at the first law: this will take a little
more work, both literally and figuratively. To move forward,
let’s continue with the same system but strip away the thermal
insulation so that it is no longer adiabatic. Suppose we do our
churning business again, starting from the same initial state and
continuing until the system is in the same final state as before.
We find that a different amount of work is needed to reach the
final state.
Typically, we find that more work has to be done than in the
adiabatic case. We are driven to conclude that the internal energy
can change by an agency other than by doing work. One way of
regarding this additional change is to interpret it as arising from
the transfer of energy from the system into the surroundings due
to the difference in temperature caused by the work that we do
as we churn the contents. This transfer of energy as a result of a
temperature difference is called heat.
The amount of energy that is transferred as heat into or out
of the system can be measured very simply: we measure the
work required to bring about a given change in the adiabatic
system, and then the work required to bring about the same
change of state in the diathermic system (the one with thermal
insulation removed), and take the difference of the two values.
That difference is the energy transferred as heat. A point to note
is that the measurement of the rather elusive concept of ‘heat’ has
been put on a purely mechanical foundation as the difference in
the heights through which a weight falls to bring about a given
change of state under two different conditions (Figure 7).
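In outline, that operational definition is just a subtraction; here it is as a sketch, with the two work values invented purely for illustration.

```python
def heat_released(work_adiabatic: float, work_diathermic: float) -> float:
    """Energy that left the system as heat: the extra work needed when the
    insulation is removed, compared with the adiabatic case (in joules)."""
    return work_diathermic - work_adiabatic

w_adiabatic = 350.0   # work for the change with insulation in place (assumed)
w_diathermic = 500.0  # work for the same change, insulation removed (assumed)
print(heat_released(w_adiabatic, w_diathermic), "J transferred as heat")  # 150.0
```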
We are within a whisper of arriving at the first law. Suppose
we have a closed system and use it to do some work or allow a
the steam engine, the internal combustion engine, and the jet
engine.
continues until the piston has moved out a desired amount and,
through its coupling to a weight, has done a certain amount
of work. No greater work can be done, because if at any stage
the external pressure is increased even infinitesimally, then the
piston will move in rather than out. That is, by ensuring that
at every stage the expansion is reversible in the thermodynamic
sense, the system does maximum work. This conclusion is
general: reversible changes achieve maximum work. We shall draw
on this generalization in the following chapters.
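The standard quantitative illustration of this generalization (not derived in this excerpt) is the isothermal expansion of a perfect gas, for which the reversible work is nRT ln(Vfinal/Vinitial). The sketch below, with illustrative numbers, compares it with a one-step expansion against a fixed external pressure.

```python
import math

# Illustrative numbers only: 1 mol of perfect gas at 298 K, expanding
# isothermally from 1 litre to 2 litres.
n, R, T = 1.0, 8.314, 298.0
Vi, Vf = 1.0e-3, 2.0e-3            # volumes in cubic metres

# Reversible (quasi-static) expansion: w = nRT ln(Vf/Vi).
w_reversible = n * R * T * math.log(Vf / Vi)

# Irreversible expansion against a constant external pressure chosen to
# match the final state.
p_ext = n * R * T / Vf
w_irreversible = p_ext * (Vf - Vi)

print(f"reversible:   {w_reversible:.0f} J")    # about 1718 J
print(f"irreversible: {w_irreversible:.0f} J")  # about 1239 J, always less
```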
that is used to make room for the carbon dioxide and water
vapour and subtract that from the total change in energy. This
is true even if there is no physical piston—if the fuel burns in a
dish—because, although we cannot see it so readily, the gaseous
products must still make room for themselves.
Thermodynamicists have developed a clever way of taking
into account the energy used to do work when any change, and
particularly the combustion of a fuel, occurs, without having to
calculate the work explicitly in each case. To do so, they switch
attention from the internal energy of a system, its total energy
content, to a closely related quantity, the enthalpy (symbol H).
The name comes from the Greek words for ‘heat inside’, and
although, as we have stressed, there is no such thing as ‘heat’ (it is
a process of transfer, not a thing), for the circumspect the name
is well chosen, as we shall see. The formal relation of enthalpy,
H, to internal energy, U , is easily written down as H = U + pV,
where p is the pressure of the system and V is its volume. From
this relation it follows that the enthalpy of a litre of water open to
the atmosphere is only 100 J greater than its internal energy, but
it is much more important to understand its significance than to
note small differences in numerical values.
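The litre-of-water figure is a one-line calculation from H = U + pV: the difference H − U is just pV, with p taken as atmospheric pressure.

```python
p = 101_325  # standard atmospheric pressure, in pascals (assumed)
V = 1.0e-3   # one litre, in cubic metres

print(f"H - U = pV = {p * V:.0f} J")  # about 101 J: the 'only 100 J' above
```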
It turns out that the energy released as heat by a system free
to expand or contract as a process occurs, as distinct from the
total energy released in the same process, is exactly equal to the
change in enthalpy of the system. That is, as if by magic—but
actually by mathematics—the leakage of energy from a system
as work is automatically taken into account by focusing on the
change in enthalpy. In other words, the enthalpy is the basis of a
kind of accounting trick, which keeps track invisibly of the work
that is done by the system, and reveals the amount of energy that
is released only as heat, provided the system is free to expand in
an atmosphere that exerts a constant pressure on the system.
It follows that if we are interested in the heat that can be
generated by the combustion of a fuel in an open container,
such as a furnace, then we use tables of enthalpies to calculate
the change in enthalpy that accompanies the combustion. This
change is written ΔH, where the Greek uppercase delta is used
throughout thermodynamics to denote a change in a quantity.
Then we identify that change with the heat generated by the sys-
tem. As an actual example, the change of enthalpy that accompa-
nies the combustion of a litre of gasoline is about 33 megajoules
(1 megajoule, written 1 MJ, is 1 million joules). Therefore we
know without any further calculation that burning a litre of gaso-
line in an open container will provide 33 MJ of heat. A deeper
analysis of the process shows that in the same combustion, the
system has to do about 130 kJ (where 1 kilojoule, written 1 kJ, is
one thousand joules) of work to make room for the gases that are
generated, but that energy is not available to us as heat.
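To see what 130 kJ amounts to, the next paragraph notes that it would heat about half a litre of water to its boiling point; the arithmetic is sketched below, assuming a room temperature of 20 °C and the usual specific heat capacity of water.

```python
q = 130e3               # the 130 kJ of expansion work, in joules
c_water = 4184.0        # specific heat capacity of water, J per kg per K (assumed)
delta_T = 100.0 - 20.0  # from an assumed room temperature of 20 C up to boiling

mass = q / (c_water * delta_T)
print(f"{mass:.2f} kg of water")  # about 0.39 kg: roughly half a litre
```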
We could extract that extra 130 kJ, which is enough to heat
about half a litre of water from room temperature to its boiling
point, if we prevent the gases from expanding so that all the
energy released in the combustion is liberated as heat. One way
to achieve that, and to obtain all the energy as heat, would be to
arrange for the combustion to take place in a closed container
with rigid walls, in which case it would be unable to expand and
hence would be unable to lose any energy as work. In practice, it
is technologically much simpler to use furnaces that are open to
the atmosphere, and in practice the difference between the two
heat does not pass from a body at low temperature to one at high
temperature without an accompanying change elsewhere.
10. The equivalence of the Kelvin and Clausius statements. The diagram
on the left depicts the fact that the failure of the Kelvin statement implies
the failure of the Clausius statement. The diagram on the right depicts the
fact that the failure of the Clausius statement implies the failure of the
Kelvin statement.
or q(1 − Tsink/Tsource). The efficiency is this work divided by the heat supplied
(q), which gives efficiency = 1 − Tsink/Tsource, which is Carnot’s formula.
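Carnot's formula is easily evaluated; a sketch with illustrative temperatures:

```python
def carnot_efficiency(T_sink: float, T_source: float) -> float:
    """Maximum fraction of the heat q supplied at T_source that can be
    converted into work, the remainder being discarded at T_sink."""
    return 1.0 - T_sink / T_source

# Illustrative values: surroundings at 300 K, a hot source at 500 K.
print(carnot_efficiency(300.0, 500.0))  # 0.4: at best 40% of q becomes work
```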
spring from? Why was there not an exact, perfectly and eternally
judged amount of the God-given stuff?
To resolve these matters and to deepen our understanding of
the concept, we need to turn to the molecular interpretation
of entropy and its interpretation as a measure, in some sense,
of disorder.
15. On the left a process occurs in a system that causes a change in internal
energy ΔU and a decrease in entropy. Energy must be lost as heat to the
surroundings in order to generate a compensating entropy there, so less
than ΔU can be released as work. On the right, a process occurs with an
increase in entropy, and heat can flow into the system yet still correspond
to an increase in total entropy; as a result, more than ΔU can be released
as work.
There are three applications that I shall discuss here. One is the
thermodynamic description of phase transitions (freezing and
boiling, for instance; a ‘phase’ is a form of a given substance, such
as the solid, liquid, and vapour phases of water), another is the
ability of one reaction to drive another in its non-spontaneous
direction (as when we metabolize food in our bodies and then
walk or think), and the third is the attainment of chemical equi-
librium (as when an electric battery becomes exhausted).
The Gibbs energy of a pure substance decreases as the
temperature is raised. We can see how to draw that conclusion
from the definition G = H − TS, by noting that the entropy of a
pure substance is invariably positive. Therefore, as T increases,
TS becomes larger and subtracts more and more from H, and
G consequently falls. The Gibbs energy of 100 g of liquid water,
for instance, behaves as shown in Figure 16 by the line labelled
‘liquid’. The Gibbs energy of ice behaves similarly. However,
because the entropy of 100 g of ice is lower than that of 100 g of
16. The decrease in Gibbs energy with increasing temperature for three
phases of a substance. The most stable phase corresponds to the lowest
Gibbs energy; thus the solid is most stable at low temperatures, then the
liquid, and finally the gas (vapour). If the gas line falls more steeply, it
might intersect the solid line before the liquid line does, in which case
the liquid is never the stable phase and the solid sublimes directly to a
vapour.
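The argument from G = H − TS can be made numerical; in the sketch below every value is invented purely for illustration, chosen so that the solid and liquid lines cross at 273 K and reproduce the qualitative picture in Figure 16.

```python
def gibbs(H: float, T: float, S: float) -> float:
    """G = H - TS; all quantities in SI units (J, K, J/K)."""
    return H - T * S

# Hypothetical enthalpies and entropies for a 'solid' and a 'liquid' phase,
# the liquid given the larger entropy so that its G falls more steeply.
H_solid, S_solid = 0.0, 38.0
H_liquid, S_liquid = 6006.0, 60.0  # chosen so the lines cross at 273 K

for T in (250.0, 273.0, 300.0):
    print(T, gibbs(H_solid, T, S_solid), gibbs(H_liquid, T, S_liquid))
# Below 273 K the solid has the lower (more stable) Gibbs energy; above
# 273 K the liquid does: the crossing temperature is the melting point.
```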
and oxygen groups of atoms (hence the ‘tri’ and the ‘phosphate’
in its name). When a terminal phosphate group is snipped off by
reaction with water (Figure 18), to form adenosine diphosphate
(ADP), there is a substantial decrease in Gibbs energy, arising in
part from the increase in entropy when the group is liberated
from the chain. Enzymes in the body make use of this change
in Gibbs energy—this falling heavy weight—to bring about the
linking of amino acids, and gradually build a protein molecule.
It takes the effort of about three ATP molecules to link two
amino acids together, so the construction of a typical protein of
about 150 amino acid groups needs the energy released by about
450 ATP molecules.
The ADP molecules, the husks of dead ATP molecules, are
too valuable just to discard. They are converted back into ATP
molecules by coupling to reactions that release even more Gibbs
energy—act as even heavier weights—and which reattach a phos-
phate group to each one. These heavy-weight reactions are the
19. The Gibbs energy of a reaction mixture as it varies with the progress
of the reaction, from pure reactants to pure products; equilibrium
corresponds to the minimum of the curve.
lies far to the left, very close to pure reactants, and the Gibbs
function reaches its minimum value after only a few molecules
of products are formed (as for gold dissolving in water). In
other cases, the minimum lies far to the right, and almost all the
reactants must be consumed before the minimum is reached (as
for the reaction between hydrogen and oxygen).
One everyday experience of a chemical reaction reaching equi-
librium is an exhausted electric battery. In a battery, a chem-
ical reaction drives electrons through an external circuit by
depositing electrons in one electrode and extracting them from
another electrode. This process is spontaneous in the thermo-
dynamic sense, and we can imagine it taking place as the
reactants sealed into the battery convert to products, and the
composition migrates from left to right in Figure 19. The Gibbs
energy of the system falls, and in due course reaches its minimum
value. The chemical reaction has reached equilibrium. It has no
further tendency to change into products, and therefore no fur-
ther tendency to drive electrons through the external circuit. The
reaction has reached the minimum of its Gibbs energy and the
battery—but not the reactions still continuing inside—is dead.
5. THE THIRD LAW
The unattainability of zero
I have introduced the temperature, the internal energy, and
the entropy. Essentially the whole of thermodynamics can be
expressed in terms of these three quantities. I have also intro-
duced the enthalpy, the Helmholtz energy, and the Gibbs energy;
but they are just convenient accounting quantities, not new fun-
damental concepts. The third law of thermodynamics is not
really in the same league as the first three, and some have argued
that it is not a law of thermodynamics at all. For one thing, it does
not inspire the introduction of a new thermodynamic function.
However, it does make possible their application.
Hints of the third law are already present in the consequences
of the second law, where we considered its implications for refrig-
eration. We saw that the coefficient of performance of a refriger-
ator depends on the temperature of the body we are seeking to
cool and that of the surroundings. We see from the expression
given in Chapter 3 1 that the coefficient of performance falls to
zero as the temperature of the cooled body approaches zero.
That is, we need to do an ever increasing, and ultimately infinite,
amount of work to remove energy from the body as heat as its
temperature approaches absolute zero.
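Putting numbers into the footnoted formula c = 1/(Tsurroundings/Tcold − 1) shows how quickly the task becomes hopeless; the temperatures below are illustrative.

```python
def coefficient_of_performance(T_cold: float, T_surroundings: float) -> float:
    """Heat extracted per joule of work done: c = 1/(T_surr/T_cold - 1)."""
    return 1.0 / (T_surroundings / T_cold - 1.0)

T_surr = 300.0  # assumed surroundings
for T_cold in (100.0, 10.0, 1.0, 0.1):
    c = coefficient_of_performance(T_cold, T_surr)
    print(f"T_cold = {T_cold:5g} K: c = {c:.2e}, work per joule of heat = {1/c:.0f} J")
# As T_cold -> 0, c -> 0: the work needed per joule of heat extracted diverges.
```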
There is another hint about the nature of the third law in
our discussion of the second. We have seen that there are two
approaches to the definition of entropy, the thermodynamic, as
expressed in Clausius’s definition, and the statistical, as expressed
by Boltzmann’s formula. They are not quite the same: the ther-
modynamic definition is for changes in entropy; the statisti-
cal definition is an absolute entropy. The latter tells us that a
1. To save you the trouble of looking back: c = 1/(Tsurroundings/Tcold − 1), and
Tsurroundings/Tcold → ∞ (and therefore c → 0) as Tcold → 0.
This is a negative statement; but we have seen that the first and
second laws can also be expressed as denials (no change in inter-
nal energy occurs in an isolated system, no heat engine operates
without a cold sink, and so on), so that is not a weakening of its
implications. Note that it refers to a cyclic process: there might be
20. The entropy of a substance as a function of temperature, in a low
magnetic field and in a high magnetic field.
Note that the experimental evidence and the third law do not
tell us the absolute value of the entropy of a substance at
T = 0. All the law implies is that all substances have the same
entropy at T = 0 provided they have nondegenerate ground
states—no residual entropy arising from positional disorder of the
type characteristic of ice. However, it is expedient and sensible
At first sight, the third law is important only to that very tiny sec-
tion of humanity struggling to beat the low-temperature record
(which, incidentally, currently stands at 0.000 000 000 1 K for
solids and at about 0.000 000 000 5 K for gases—when molecules
travel so slowly that it takes 30 s for them to travel an inch). The
law would seem to be irrelevant to the everyday world, unlike the
other three laws of thermodynamics, which govern our daily lives
with such fearsome relevance.
There are indeed no pressing consequences of the third law
for the everyday world, but there are serious consequences for
those who inhabit laboratories. First, it eliminates one of science’s
most cherished idealizations, that of a perfect gas. A perfect gas—
a fluid that can be regarded as a chaotic swarm of independent
molecules in vigorous random motion—is taken to be the start-
ing point for many discussions and theoretical formulations in
2. The third law implies that the thermal expansion coefficient—a measure
of how the volume of a substance responds to a change of temperature—must
vanish as T → 0, but the thermodynamic properties of a perfect gas imply that
the thermal expansion coefficient becomes infinite as T → 0!
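The clash in the footnote can be seen numerically: for a perfect gas the expansion coefficient works out to α = 1/T (the standard result, quoted here without derivation), which diverges rather than vanishes as T → 0.

```python
def alpha_perfect_gas(T: float) -> float:
    """Thermal expansion coefficient (1/V)(dV/dT) at constant p for a
    perfect gas, pV = nRT, which works out to 1/T."""
    return 1.0 / T

for T in (300.0, 10.0, 0.1, 0.001):
    print(f"T = {T:7g} K: alpha = {alpha_perfect_gas(T):.3g} per K")
# alpha grows without bound as T -> 0, whereas the third law requires it
# to vanish: a perfect gas cannot survive in that limit.
```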
21. The variation of (left) the internal energy and (right) the entropy for
a two-level system. The expressions for these two properties can be
calculated for negative temperatures, as shown on the left of the
illustration. Just above T = 0 all the molecules are in the ground state;
just below T = 0 they are all in the upper state. As the temperature
becomes infinite in either direction, the populations become equal.
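The caption's limits follow from the Boltzmann populations of the two levels; a sketch, with the level spacing ε/k set to an arbitrary 1 K.

```python
import math

def upper_state_population(T: float, eps_over_k: float = 1.0) -> float:
    """Fraction of molecules in the upper level of a two-level system:
    exp(-eps/kT) / (1 + exp(-eps/kT)). Negative T is allowed formally."""
    x = math.exp(-eps_over_k / T)
    return x / (1.0 + x)

for T in (0.01, 1.0, 1000.0, -1000.0, -1.0, -0.01):
    print(f"T = {T:8g} K: upper-state population = {upper_state_population(T):.4f}")
# Just above T = 0: ~0 (all in the ground state). Just below T = 0: ~1
# (all in the upper state). As T -> +/- infinity, both approach 1/2.
```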
FURTHER READING

If you would like to take any of these matters further, then here are some
suggestions. I wrote about the conservation of energy and the concept
of entropy in my Galileo’s Finger: The Ten Great Ideas of Science (Oxford
University Press, 2003), at about this level but slightly less quantitatively.
In The Second Law (W. H. Freeman & Co., 1997) I attempted to demon-
strate that law’s concepts and implications largely pictorially, inventing
a tiny universe where we could see every atom. More serious accounts
will be found in my various textbooks. In order of complexity, these
are Chemical Principles: The Quest for Insight (with Loretta Jones, W.
H. Freeman & Co., 2008), Elements of Physical Chemistry (with Julio de
Paula, Oxford University Press and W. H. Freeman & Co., 2006), and
Physical Chemistry (with Julio de Paula, Oxford University Press and
W. H. Freeman & Co., 2006).
Others, of course, have written wonderfully about the laws. I can
direct you to that most authoritative account, Thermodynamics, by
G. N. Lewis and M. Randall (McGraw-Hill, 1923; revised by K. S.
Pitzer and L. Brewer, 1961). Other useful and reasonably accessible texts
on my shelves are The Theory of Thermodynamics, by J. R. Waldram
(Cambridge University Press, 1985), Applications of Thermodynamics,
by B. D. Wood (Addison-Wesley, 1982), Entropy Analysis, by N. C. Craig
(VCH, 1992), Entropy in Relation to Incomplete Knowledge, by K. G.
Denbigh and J. S. Denbigh (Cambridge University Press, 1985), and
Statistical Mechanics: A Concise Introduction for Chemists, by B. Widom
(Cambridge University Press, 2002).
INDEX

Fahrenheit, D. 9
Fahrenheit scale 9
feasible change 65
ice
  residual entropy 71
  structure 72

Symbols
c coefficient of performance 75
C heat capacity 41
D degeneracy 70
ΔX, ΔX = Xfinal − Xinitial 39
H enthalpy 38
J joule 14
m mass 24
m metre 14
p pressure 38
q energy transferred as heat 64
w energy transferred as work 75
W watt 62
W weight of arrangement 69