ORIGINAL PAPER

Tafel slopes from first principles


Stephen Fletcher
Received: 15 July 2008 / Accepted: 6 August 2008 / Published online: 1 October 2008
© Springer-Verlag 2008
Abstract Tafel slopes for multistep electrochemical reactions are derived from first principles. The derivation takes place in two stages. First, Dirac's perturbation theory is used to solve the Schrödinger equation. Second, current–voltage curves are obtained by integrating the single-state results over the full density of states in electrolyte solutions. Thermal equilibrium is assumed throughout. Somewhat surprisingly, it is found that the symmetry factor that appears in the Butler–Volmer equation is different from the symmetry factor that appears in electron transfer theory, and a conversion formula is given. Finally, the Tafel slopes are compiled in a convenient look-up table.
Keywords Schrödinger equation · Golden rule · Butler–Volmer equation · Tafel slopes · Electron transfer
Introduction
To help celebrate the 80th birthday of my long-time friend and colleague Keith B. Oldham, I thought it might be fun to present him with a table of Tafel slopes derived from first principles (i.e. from Schrödinger's equation). A total proof of this kind has been technically feasible for a number of years but, so far as I know, it has never been attempted before. This seems an auspicious moment to undertake this task.
The wavefunction of an electron
The amount of theoretical ground one has to cover
before being able to solve problems of real practical
value is rather large...
P.A.M. Dirac, in The Principles of Quantum Mechanics, Clarendon Press, Oxford, 1930.
Electrochemists want to understand how electrons
interact with matter. But, before they can even begin to
construct a model, they must first specify the positions of
the electrons. This is not as easy as it sounds, however,
because the positions of electrons are not determined by
the laws of Newtonian mechanics. They are determined by the probabilistic laws of quantum mechanics. In particular, the location of any given electron is governed by its wavefunction ψ. This is a complex-valued function that describes the probability amplitude of finding the electron at any point in space or time. Now, it is a well-known postulate of quantum mechanics that the maximum amount of information about an electron is contained in its wavefunction. If we accept this postulate
as true (and we currently have no alternative), then we
are forced to conclude that the wavefunction is the best
available parameter for characterizing the behaviour of an
electron in spacetime.
It is natural to enquire how well wavefunctions do
characterize electron behaviour. In general, the answer is
very well indeed. For example, wavefunctions permit the
calculation of the most probable values of all the known
properties of electrons or systems of electrons to very high
accuracy. One problem remains, however. Due to the
probabilistic character of wavefunctions, they fail to
describe the individual behaviour of any system at very
short times. In such cases, the best they can do is
describe the average behaviour of a large number of
systems having the same preparation. Despite this limita-
tion, the analysis of wavefunctions nevertheless provides
measures of the probabilities of occurrence of various
states and the rates of change of those probabilities. Here,
following Dirac, we are happy to interpret the latter as
reaction rate constants.
The uncertainty principle
This principle was first enunciated by Werner Heisenberg in
1927 [1]. The principle asserts that one cannot simulta-
neously measure the values of a pair of conjugate quantum
state properties to better than a certain limit of accuracy.
There is a minimum for the product of the uncertainties.
Key features of pairs of conjugate quantum state properties
are that they are uncorrelated, and, when multiplied together, have dimensions of energy × time. Examples are (1) momentum and location, and (2) energy and lifetime. Thus
$$\Delta p \,\Delta x \ge \hbar/2 \qquad (1)$$

$$\Delta U \,\Delta t \ge \hbar/2 \qquad (2)$$

Here, p is the momentum of a particle (in one dimension), x is the location of a particle (in one dimension), U is the energy of a quantum state, t is the lifetime of a quantum state, and ħ is the reduced Planck constant,

$$\hbar = \frac{h}{2\pi} = 0.6582 \ \mathrm{eV\,fs} \qquad (3)$$
The formal and general proof of the above inequalities
was first given by Howard Percy Robertson in 1929 [2]. He
also showed that the uncertainty principle was a deduction
from quantum mechanics, not an independent hypothesis.
As a result of the blurring effect of the uncertainty
principle, quantum mechanics is unable to predict the
precise behaviour of a single molecule at short times. But,
it can still predict the average behaviour of a large number
of molecules at short times, and it can also predict the time-
averaged behaviour of a single molecule over long times.
For example, the energy of an electron measured over a finite time interval Δt has an uncertainty

$$\Delta U \ge \frac{\hbar}{2\,\Delta t} \qquad (4)$$

and therefore, to decrease the energy uncertainty in a single electron transfer step to practical insignificance (<1 meV, say, which is equivalent to about 1.602 × 10⁻²² J/electron), it is necessary to observe the electron for Δt > 330 fs.
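As a quick numerical cross-check of Eq. 4, the minimal sketch below (an added illustration, not part of the original paper) evaluates the minimum observation time for a chosen energy uncertainty; the 1 meV target reproduces the roughly 330 fs figure quoted above.

```python
# Minimal numerical check of the energy-time uncertainty relation (Eq. 4).
# Assumption: hbar is expressed in eV*fs, as in Eq. 3.

HBAR_EV_FS = 0.6582  # reduced Planck constant, eV*fs

def min_observation_time_fs(delta_U_eV: float) -> float:
    """Minimum observation time (fs) needed for an energy uncertainty delta_U_eV (eV)."""
    return HBAR_EV_FS / (2.0 * delta_U_eV)

if __name__ == "__main__":
    delta_U = 1e-3  # 1 meV, the "practical insignificance" threshold quoted in the text
    print(f"Delta t > {min_observation_time_fs(delta_U):.0f} fs")  # prints ~329 fs
```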
The quantum mechanics of electron transfer
As shown by Erwin Schrödinger [3], the wavefunction ψ of a (non-relativistic) electron may be derived by solving the time-dependent equation

$$i\hbar \frac{\partial \psi}{\partial t} = H\psi \qquad (5)$$

Here, H is a linear operator known as the Hamiltonian, and ħ is the reduced Planck constant (= h/2π). The Hamiltonian is a differential operator of total energy. It combines the kinetic energy and the electric potential energy of the electron into one composite term:

$$i\hbar \frac{\partial \psi}{\partial t} = \left(-\frac{\hbar^2}{2m}\nabla^2 - eV\right)\psi \qquad (6)$$
where m is the electron mass, e is the electron charge, and
V is the electric potential of the electric field. Note that the
electric potential at a particular point in space (x, y, z),
created by a system of charges, is simply equal to the
change in potential energy that would occur if a test charge
of +1 were introduced at that point. So −eV is the potential energy in the electric field. The Laplacian ∇², which also appears in the Schrödinger equation, is the square of the vector operator ∇ (del), defined in Cartesian co-ordinates by

$$\nabla(x, y, z) = \frac{\partial}{\partial x}\hat{x} + \frac{\partial}{\partial y}\hat{y} + \frac{\partial}{\partial z}\hat{z} \qquad (7)$$
Every solution of Schrdingers equation represents a
possible state of the system. There is, however, always some
uncertainty associated with the manifestation of each state.
Due to the uncertainty, the square of the modulus of the wavefunction |ψ|² may be interpreted in two ways: firstly and most abstractly, as the probability that an electron might be found at a given point and, secondly and more concretely, as the electric charge density at a given point (averaged over a large number of identically prepared systems for a short time or averaged over one system for a long time).
Transition probabilities
Almost all kinetic experiments in physics and chemistry lead
to statements about the relative frequencies of events,
expressed either as deterministic rates or as statistical
transition probabilities. In the limit of large systems, these
formulations are, of course, equivalent. By definition, a
transition probability is just the probability that one quantum
state will convert into another quantum state in a single step.
The theory of transition probabilities was developed
independently by Dirac with great success. It can be
said that the whole of atomic and nuclear physics
works with this system of concepts, particularly in the
very elegant form given to them by Dirac.
Max Born, The Statistical Interpretation of Quantum
Mechanics, Nobel Lecture, 11th December 1954.
Time-dependent perturbation theory
It is an unfortunate fact of quantum mechanics that exact
mathematical solutions of the time-dependent Schrdinger
equation are possible only at the very lowest levels of
system complexity. Even at modest levels of complexity,
mathematical solutions in terms of the commonplace
functions of applied physics are impossible. The recogni-
tion of this fact caused great consternation in the early days
of quantum mechanics. To overcome the difficulty, Paul
Dirac developed an extension of quantum mechanics called
perturbation theory, which yields good approximate
solutions to many practical problems [4]. The only
limitation on Dirac's method is that the coupling (orbital overlap) between states should be weak.
The key step in perturbation theory is to split the total
Hamiltonian into two parts, one of which is simple and the
other of which is small. The simple part consists of the
Hamiltonian of the unperturbed fraction of the system,
which can be solved exactly, while the small part consists
of the Hamiltonian of the perturbed fraction of the system,
which, though complex, can often be solved as a power
series. If the latter converges, solutions of various problems
can be obtained to any desired accuracy simply by
evaluating more and more terms of the power series.
Although the solutions produced by Dirac's method are not exact, they can nevertheless be extremely accurate.
In the case of electron transfer, we may imagine a transition between two well-defined electronic states (an occupied state |D⟩ inside an electron donor D, and an unoccupied state |A⟩ inside an electron acceptor A), whose mutual interaction is weak. Dirac showed that, provided the interaction between the states is weak, the transition probability P_DA for an electron to transfer from the donor state to the acceptor state increases linearly with time. Let us see how Dirac arrived at this conclusion.
Electron transfer from one single state to another single
state
If classical physics prevailed, the transfer of an electron
from one single state to another single state would be
governed by the conservation of energy and would occur
only when both states had exactly the same energy. But in the quantum world, the uncertainty principle (in its time-energy form) briefly intervenes and allows electron transfer between states whose energies are mismatched by a small amount ΔU = ħ/2t (although energy conservation still applies on average). As a result of this complication, the transition probability of electrons between two states exhibits a complex behaviour. Roughly speaking, the probability for electron transfer between two precise energies increases as t², while the width of the band of allowed energies decreases as t⁻¹. The net result is an overall transition probability that is proportional to t.
To make these ideas precise, consider a perturbation
which is switched on at time t =0 and which remains
constant thereafter. In electrochemistry, this corresponds to
the arrival of the system at the transition state. The time-
dependent Schrödinger equation may now be written

$$i\hbar \frac{\partial \psi}{\partial t} = (H_0 + H_1)\,\psi \qquad (8)$$

where ψ(x,t) is the electron wavefunction, H₀ is the unperturbed Hamiltonian operator, and H₁ is the perturbed Hamiltonian operator:

$$H_1(t) = 0 \quad \text{for } t < 0 \qquad (9)$$

$$H_1(t) = H_1 \quad \text{for } t \ge 0 \qquad (10)$$

This is a step function with H₁ being a constant independent of time at t ≥ 0. Solving Eq. 8, one finds that the probability of electron transfer between two precise energies U_D and U_A is

$$P_{DA}(U,t) \approx \frac{2\,|M_{DA}|^2}{|U_A - U_D|^2}\left[1 - \cos\!\left(\frac{|U_A - U_D|\,t}{\hbar}\right)\right] \qquad (11)$$
where the modulus symbol denotes the (always positive)
magnitude of any complex number. This result is valid
provided the matrix element M_DA is small. The matrix element M_DA is defined as

$$M_{DA} = \int \psi_D^{*}\, V\, \psi_A\, dv \qquad (12)$$

where ψ_D and ψ_A are the wavefunctions of the donor and acceptor states, V is their interaction energy, and the integral is taken over the volume v of all space. M_DA is, therefore, a function of energy E through the overlap of the wavefunctions ψ_D and ψ_A and accordingly has units of energy.
In an alternative representation, we exploit the identity

$$1 - \cos x = 2 \sin^2(x/2) \qquad (13)$$

so that

$$P_{DA}(U,t) \approx \frac{4\,|M_{DA}|^2}{|U_A - U_D|^2} \sin^2\!\left(\frac{|U_A - U_D|\,t}{2\hbar}\right) \qquad (14)$$

If we now recall the cardinal sine function

$$\operatorname{sinc}(x) = \frac{\sin x}{x} \qquad (15)$$

and define

$$x = \frac{|U_A - U_D|\,t}{2\hbar} \qquad (16)$$

then we can substitute these formulas into the equation for the transition probability to yield

$$P_{DA}(U,t) \approx \frac{|M_{DA}|^2\, t^2}{\hbar^2}\, \operatorname{sinc}^2(x) \qquad (17)$$
This result is wonderfully compact, but unfortunately, it
is not very useful to electrochemists because it fails to
describe electron transfer into multitudes of acceptor states
at electrode surfaces, supplied by the 10⁸–10¹⁴ reactant molecules per square centimetre that are typically found
there. These states have energies distributed over several
hundred meV, and all of them interact simultaneously with
all the electrons in the electrode. They also fluctuate
randomly in electrostatic potential due to interactions with
the thermally agitated solvent and supporting electrolyte
(dissolved salt ions). Accordingly, Eq. 17 must be modified
to deal with this more complex case.
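Before moving on, the t² growth of the peak and the 1/t narrowing of the allowed energy band can be checked numerically. The sketch below is an added illustration (the coupling |M_DA| = 0.01 eV and the chosen times are assumed, arbitrary values); it integrates Eq. 17 over energy and shows that the total transition probability grows roughly linearly with t, anticipating the delta-function limit invoked later in the paper (Eq. 21).

```python
# Numerical illustration of Eq. 17: P_DA(U,t) ~ (|M_DA|^2 t^2 / hbar^2) sinc^2(x),
# with x = |U - U_D| t / (2 hbar). The peak grows as t^2, the width shrinks as 1/t,
# and the energy-integrated probability grows (approximately) linearly with t.
import numpy as np

HBAR = 0.6582   # eV*fs
M_DA = 0.01     # eV, assumed coupling matrix element
U_D = 0.0       # donor energy, eV

U = np.linspace(-2.0, 2.0, 80001)   # acceptor energy grid, eV
dU = U[1] - U[0]

def p_da(t_fs: float) -> np.ndarray:
    x = np.abs(U - U_D) * t_fs / (2.0 * HBAR)
    return (M_DA**2 * t_fs**2 / HBAR**2) * np.sinc(x / np.pi)**2  # np.sinc(y) = sin(pi*y)/(pi*y)

p10 = p_da(10.0).sum() * dU
for t in (10.0, 20.0, 40.0):
    total = p_da(t).sum() * dU
    print(f"t = {t:5.1f} fs   integrated P = {total:.5f}   ratio to t = 10 fs: {total / p10:.2f}")
```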
Electron transfer into a multitude of acceptor states
To deal with this more complex case, it is necessary to define a probability density of acceptor state energies ρ_A(U). Accordingly, we define ρ_A(U) as the number of states per unit of energy and note that it has units of joule⁻¹. If we further assume that there is such a high density of states that they can be treated as a continuum, then the transition probability between the single donor state |D⟩ and the multitude of acceptor states |A⟩ becomes

$$P_{DA}(t) \approx \int \frac{|M_{DA}|^2\, t^2}{\hbar^2}\, \operatorname{sinc}^2\!\left(\frac{|U - U_D|\,t}{2\hbar}\right) \rho_A(U)\, dU \qquad (18)$$
Although this equation appears impossible to solve, Dirac, in a tour de force [5], showed that an asymptotic result could be obtained by exploiting the properties of a delta function such that

$$\int \delta(x - x_0)\, F(x)\, dx = F(x_0) \qquad (19)$$

and

$$\delta(ax) = \frac{1}{|a|}\,\delta(x) \qquad (20)$$

By noting the identity

$$\lim_{t \to \infty} \operatorname{sinc}^2\!\left(\frac{|U - U_D|\,t}{2\hbar}\right) = \frac{2\pi\hbar}{t}\,\delta(U - U_D) \qquad (21)$$

and then extracting the limit t → ∞, Dirac found that (!)

$$\lim_{t \to \infty} P_{DA}(t) \approx \frac{2\pi t}{\hbar}\, |M_{DA}|^2\, \rho_A(U_D) \qquad (22)$$
where U_D, the single energy of the donor state, is a constant. As we gaze in amazement at Eq. 22, we remark only that ρ_A(U_D) is not the full density of states function ρ_A(U), which it is sometimes mistakenly stated to be in the literature. It is, in fact, the particular value of the density of states function at the energy U_D.
Upon superficial observation, it may appear that the above formula for P_DA(t) is applicable only in the limit of infinite time. But actually, it is valid after a very brief interval of time

$$t > \frac{\hbar}{2\,\Delta U} \qquad (23)$$

This time is sometimes called the Heisenberg time. At later times, Dirac's theory of the transition probability can be applied with great accuracy. Finally, in the ultimate simplification of electron transfer theory, it is possible to derive the rate constant for electron transfer k_et by differentiating the transition probability. This leads to Dirac's final result

$$k_{et} = \frac{2\pi}{\hbar}\, |M_{DA}|^2\, \rho_A(U_D) \qquad (24)$$
A remarkable feature of this equation is the absence of any time variable. It was Enrico Fermi who first referred to this equation as a Golden Rule (in 1949, in a university lecture!), and the name has stuck [6]. He esteemed the equation so highly because it had by then been applied with great success to many non-electrochemical problems (particularly the intensity of spectroscopic lines) in which the coupling between states (overlap between orbitals) was small. Because the equation is often referred to as Fermi's Golden Rule, the ignorant often attribute the equation to Fermi. This is a very bad mistake.
Despite its successful application to many diverse
problems, it is nevertheless important to remember that
the Golden Rule applies only to cases where electrons
transfer from a single donor state into a multitude of
acceptor states. If electrons originate from a multitude of
donor states, as they do during redox reactions in electrolyte solutions, then the transition probabilities from all the donor states must be added together, yielding

$$k_{et} = \int \frac{2\pi}{\hbar}\, |M_{DA}|^2\, \rho_A(U_D)\, \rho_D(U_D)\, dU_D \qquad (25)$$
There is, alas, nothing golden about this formula. To
evaluate it, one must first develop models of each of the
probability densities and then evaluate the integral by brute
force.
The density of states functions ρ_A(U_A) and ρ_D(U_D) are
dominated by fluctuations of electrostatic potential inside
electrolyte solutions even at thermodynamic equilibrium.
According to Fletcher [7], a major source of these
fluctuations is the random thermal motion (Brownian
motion) of electrolyte ions. The associated bombardment
of reactant species causes their electrostatic potentials to
vary billions of times every second. This, in turn, makes the
tunnelling of electrons possible because it ensures that any
given acceptor state will, sooner or later, have the same
energy as a nearby donor state.
Electrostatic fluctuations at equilibrium
The study of fluctuations inside equilibrium systems was
brought to a high state of development by Ludwig
Boltzmann in the nineteenth century [8]. Indeed, his
methods are so general that they may be applied to any
small system in thermal equilibrium with a large reservoir
of heat. In our case, they permit us to calculate the
probability that a randomly selected electrostatic fluctuation
has a work of formation G.
A system is in thermal equilibrium if the requirements of
detailed balance are satisfied, namely, that every process
taking place in the system is exactly balanced by its reverse
process, so there is no net change over time. This implies
that the rate of formation of fluctuations matches their rate
of dissipation. In other words, the fluctuations must have a
distribution that is stationary. As a matter of fact, the
formation of fluctuations at thermodynamic equilibrium is
what statisticians call strict-sense stationary. It means that
the statistical properties of the fluctuations are independent
of the time at which they are measured. As a result, at
thermodynamic equilibrium, we know in advance that the probability density function of fluctuations ρ_A(U) must be independent of time.
Boltzmann discovered a remarkable property of fluctuations that occur inside systems at thermal equilibrium: they always contain the Boltzmann factor,

$$\exp\!\left(\frac{-W}{k_B T}\right) \qquad (26)$$
where W is an appropriate thermodynamic potential, k_B is the Boltzmann constant, and T is the thermodynamic (absolute) temperature. At constant temperature and pressure, W is the Gibbs energy of formation of the fluctuation, ΔG. Given this knowledge, it follows that the probability density function ρ_A(V) of electric potentials (V) must have the stationary form

$$\rho_A(V) = A\, \exp\!\left(\frac{-\Delta G}{k_B T}\right) \qquad (27)$$
where A is a time-independent constant. In the case of charge fluctuations that trigger electron transfer, we have

$$\Delta G = \frac{1}{2}\, C\,(\Delta V)^2 = \frac{(\Delta V)^2}{2S} \qquad (28)$$

where C is the capacitance between the reactant species (including its ionic atmosphere) and infinity, and S is the elastance (reciprocal capacitance) between the reactant species and infinity. Identifying e²S/2 as the reorganization energy λ, we immediately obtain

$$\rho_A(V) = A\, \exp\!\left[\frac{-(eV - eV_A)^2}{4\lambda k_B T}\right] \qquad (29)$$
which means we now have to solve only for A. Perhaps the most elegant method of solving for A is based on the observation that ρ_A(V) must be a properly normalized probability density function, meaning that its integral must equal one:

$$\int_{-\infty}^{\infty} A\, \exp\!\left[\frac{-(eV - eV_A)^2}{4\lambda k_B T}\right] dV = 1 \qquad (30)$$

This suggests the following four-step approach. First, we recall from tables of integrals that

$$\frac{1}{\sqrt{\pi}} \int_{-\infty}^{\infty} \exp\left(-x^2\right) dx = 1 \qquad (31)$$

Second, we make the substitution

$$x = \frac{eV - eV_A}{\sqrt{4\lambda k_B T}} \qquad (32)$$

so that

$$\frac{1}{\sqrt{\pi}} \int_{-\infty}^{\infty} \frac{e}{\sqrt{4\lambda k_B T}}\, \exp\!\left[\frac{-(eV - eV_A)^2}{4\lambda k_B T}\right] dV = 1 \qquad (33)$$

Third, we compare the constant in this equation with the constant in the integral containing A, yielding

$$A = \sqrt{\frac{e^2}{4\pi\lambda k_B T}} \qquad (34)$$
Fourth, we substitute for A in the original expression to obtain

$$\rho_A(V) = \frac{e}{\sqrt{4\pi\lambda k_B T}}\, \exp\!\left[\frac{-(eV - eV_A)^2}{4\lambda k_B T}\right] \qquad (35)$$

This, at last, gives us the probability density of electrostatic potentials. We are now just one step from our goal, which is the probability density of the energies of the unoccupied electron states (acceptor states). We merely need to introduce the additional fact that, if an electron is transferred into an acceptor state whose electric potential is V, then the electron's energy must be −eV because the charge on the electron is −e. Thus,

$$\rho_A(-eV) = \frac{1}{\sqrt{4\pi\lambda k_B T}}\, \exp\!\left[\frac{-(eV - eV_A)^2}{4\lambda k_B T}\right] \qquad (36)$$
or, writing U = −eV,

$$\rho_A(U) = \frac{1}{\sqrt{4\pi\lambda k_B T}}\, \exp\!\left[\frac{-(U - U_A)^2}{4\lambda k_B T}\right] \qquad (37)$$

where U is the electron energy. This equation gives the stationary, normalized, probability density of acceptor states for a reactant species in an electrolyte solution. It is a Gaussian density. We can also get the un-normalized result simply by multiplying ρ_A(U) by the surface concentration of acceptor species. Finally, we note that the corresponding formula for ρ_D(U) is also Gaussian,

$$\rho_D(U) = \frac{1}{\sqrt{4\pi\lambda k_B T}}\, \exp\!\left[\frac{-(U - U_D)^2}{4\lambda k_B T}\right] \qquad (38)$$

where we have assumed that λ_A = λ_D = λ.
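The Gaussian density in Eq. 37 is normalized by construction, and its variance works out to 2λk_BT. The short sketch below is an added numerical check (λ = 1 eV and T = 298 K are assumed, illustrative values).

```python
# Numerical check of the acceptor density of states, Eq. 37:
#   rho_A(U) = (4 pi lambda kB T)^(-1/2) exp[-(U - U_A)^2 / (4 lambda kB T)]
# It should integrate to 1 and have variance 2 lambda kB T.
import numpy as np

KB_T = 0.02569   # k_B * T at 298 K, eV
LAM = 1.0        # assumed reorganization energy, eV
U_A = 0.0        # most probable acceptor energy, eV

U = np.linspace(-4.0, 4.0, 400001)
dU = U[1] - U[0]
rho = np.exp(-(U - U_A)**2 / (4 * LAM * KB_T)) / np.sqrt(4 * np.pi * LAM * KB_T)

print(f"normalization = {(rho.sum() * dU):.6f}   (expect 1)")
print(f"variance      = {(((U - U_A)**2 * rho).sum() * dU):.5f} eV^2   "
      f"(expect 2*lambda*kB*T = {2 * LAM * KB_T:.5f})")
```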
Homogeneous electron transfer
As mentioned above, Dirac's perturbation theory may be applied to any system that is undergoing a transition from one electronic state to another, in which the energies of the states are briefly equalized by fluctuations in the environment. If we assume that the relative probability of observing a fluctuation from energy i to energy j at temperature T is given by the Boltzmann factor exp(−ΔG_ij/k_BT), then

$$k_{et} = \frac{2\pi}{\hbar}\, H_{DA}^2\, \frac{1}{\sqrt{4\pi\lambda k_B T}}\, \exp\!\left(\frac{-\Delta G^{*}}{k_B T}\right) \qquad (39)$$
where k_et is the rate constant for electron transfer, H_DA is the electronic coupling matrix element between the electron donor and acceptor species, k_B is the Boltzmann constant, λ is the sum of the reorganization energies of the donor and acceptor species, and ΔG* is the Gibbs energy of activation for the reaction. Incidentally, the fact that the reorganization energies of the donor and acceptor species are additive is a consequence of the statistical independence of ρ_A(U_A) and ρ_D(U_D). This insight follows directly from the old adage that, for independent Gaussian random variables, the variances add. The same insight also collapses Eq. 25 back to the Golden Rule, except that the density of states functions must be replaced by a joint density of states function that describes the coincidence of the donor and acceptor energies.
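The remark that the variances add can be made concrete: the distribution of the mismatch between two statistically independent Gaussian energies (Eqs. 37 and 38) is itself Gaussian, with an effective reorganization energy λ_A + λ_D. The sketch below is an added numerical check (λ_D = 0.4 eV and λ_A = 0.6 eV are assumed values, and both mean energies are set to zero for simplicity).

```python
# Check that convolving two independent Gaussian densities of states gives a Gaussian
# whose variance corresponds to the sum of the reorganization energies.
import numpy as np

KB_T = 0.02569              # eV at 298 K
LAM_D, LAM_A = 0.4, 0.6     # assumed reorganization energies, eV

def density(U, lam):
    return np.exp(-U**2 / (4 * lam * KB_T)) / np.sqrt(4 * np.pi * lam * KB_T)

U = np.linspace(-3.0, 3.0, 60001)
dU = U[1] - U[0]

joint = np.convolve(density(U, LAM_D), density(U, LAM_A), mode="same") * dU
var_joint = (U**2 * joint).sum() * dU

print(f"variance of joint density = {var_joint:.5f} eV^2")
print(f"2*(lam_D + lam_A)*kB*T    = {2 * (LAM_D + LAM_A) * KB_T:.5f} eV^2")
```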
Referring to Fig. 1, it is clear that ΔG* is the total Gibbs energy that must be transferred from the surroundings to the reactants in order to bring them to their mutual transition states. This is simply

$$\Delta G^{*} = \frac{\left(\lambda + \Delta G^{0}\right)^2}{4\lambda} \qquad (40)$$

which implies that

$$k_{et} = \frac{2\pi}{\hbar}\, H_{DA}^2\, \frac{1}{\sqrt{4\pi\lambda k_B T}}\, \exp\!\left[\frac{-\left(\lambda + \Delta G^{0}\right)^2}{4\lambda k_B T}\right] \qquad (41)$$
We can also define a symmetry factor β such that

$$\Delta G^{*} = \beta^2 \lambda \qquad (42)$$

and

$$\beta = \frac{d\,\Delta G^{*}}{d\,\Delta G^{0}} = \frac{1}{2}\left(1 + \frac{\Delta G^{0}}{\lambda}\right) \qquad (43)$$
Evidently, β ≈ 1/2 if ΔG⁰ is sufficiently small (i.e. the electron transfer reaction is neither strongly exergonic nor strongly endergonic), and β = 1/2 exactly for a self-exchange reaction (ΔG⁰ = 0). Finally, from the theory of tunnelling through an electrostatic barrier, we may write

$$H_{DA} = H_{DA}^{0}\, \exp(-\gamma x) \qquad (44)$$

where γ is a constant proportional to the square root of the barrier height, and x is the distance of closest approach of the donor and acceptor.

Fig. 1 Gibbs energy diagram for homogeneous electron transfer between two species in solution. At the moment of electron transfer, energy is conserved, so the reactants and the products have the same Gibbs energy at that point. The symmetry factor β corresponds to the fractional charge of the fluctuation on the ionic atmosphere of the acceptor at the moment of electron transfer. After Fletcher [7]
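To see how Eqs. 40–43 behave numerically, the sketch below (an added illustration; λ = 1 eV, T = 298 K and the listed driving forces are assumed values) tabulates the activation energy, the symmetry factor and the Boltzmann factor of Eq. 41 for a few standard Gibbs energies of reaction.

```python
# Activation energy (Eq. 40), symmetry factor (Eq. 43) and the exponential part of
# Eq. 41 for homogeneous electron transfer, at a few assumed driving forces.
import math

KB_T = 0.02569   # eV at 298 K
LAM = 1.0        # assumed reorganization energy, eV

def activation_energy(dG0: float) -> float:
    return (LAM + dG0)**2 / (4 * LAM)        # Eq. 40

def symmetry_factor(dG0: float) -> float:
    return 0.5 * (1 + dG0 / LAM)             # Eq. 43

for dG0 in (0.0, -0.2, -0.5, 0.2):           # eV
    dG_act = activation_energy(dG0)
    print(f"dG0 = {dG0:+.1f} eV   dG* = {dG_act:.3f} eV   "
          f"beta = {symmetry_factor(dG0):.2f}   exp(-dG*/kBT) = {math.exp(-dG_act / KB_T):.2e}")
```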
Heterogeneous electron transfer
In the case of electron transfer across a phase boundary
(e.g. electron transfer from an electrode into a solution), the
law of conservation of energy dictates that the energy of the
transferring electron must be added into that of the acceptor
species, such that the sum equals the energy of the product
species. At constant temperature and pressure the energy of
the transferring electron is just its Gibbs energy.
Let us denote by superscript bar the Gibbs energies of
species in solution after the energy of the transferring
electron has been added to them (see Fig. 2). We have
$$\bar{G}_{\mathrm{reactant}} = G_{\mathrm{reactant}} + qE \qquad (45)$$
$$= G_{\mathrm{reactant}} - eE \qquad (46)$$
where e is the unit charge, and E is the electrode potential of the injected electron. For the conversion of reactant to product, the overall change in Gibbs energy is

$$\Delta \bar{G}^{0} = G_{\mathrm{product}} - \bar{G}_{\mathrm{reactant}} \qquad (47)$$
$$= G_{\mathrm{product}} - \left(G_{\mathrm{reactant}} - eE\right) \qquad (48)$$
$$= \left(G_{\mathrm{product}} - G_{\mathrm{reactant}}\right) + eE \qquad (49)$$
$$= \Delta G^{0} + eE \qquad (50)$$

In the normal region of electron transfer, for a metal electrode, it is generally assumed that the electron tunnels from an energy level near the Fermi energy, implying eE ≈ eE_F. Thus, for a heterogeneous electron transfer process to an acceptor species in solution, we can use the Golden Rule directly
$$k_{et} = \frac{2\pi}{\hbar}\, H_{DA}^2\, \frac{1}{\sqrt{4\pi\lambda k_B T}}\, \exp\!\left[\frac{-\left(\lambda + \Delta G^{0} + eE_F\right)^2}{4\lambda k_B T}\right] \qquad (51)$$

where λ is the reorganization energy of the acceptor species in solution, and eE_F is the Fermi energy of the electrons inside the metal electrode. Or, converting to molar quantities,

$$k_{et} = \frac{2\pi}{\hbar}\, H_{DA}^2\, \frac{N_A}{\sqrt{4\pi\lambda_m R T}}\, \exp\!\left[\frac{-\left(\lambda_m + \Delta G_m^{0} + FE_F\right)^2}{4\lambda_m R T}\right] \qquad (52)$$

where k_et is the rate constant for electron transfer, ħ is the reduced Planck constant, H_DA is the electronic coupling matrix element between a single electron donor and a single electron acceptor, N_A is the Avogadro constant, λ_m is the reorganization energy per mole, ΔG⁰_m is the difference in molar Gibbs energy between the acceptor and the product, and (−FE_F) is the molar Gibbs energy of the electron that tunnels from the Fermi level of the metal electrode into the acceptor.

Equation 52 behaves exactly as we would expect. The more negative the Fermi potential E_F inside the metal electrode (i.e. the more negative the electrode potential), the greater the rate constant for electron transfer from the electrode into the acceptor species in solution.
Some simplification is achieved by introducing the notation

$$\eta = \frac{-\Delta G_m^{0}}{F} - E_F \qquad (53)$$

where η is called the overpotential. Although the negative sign in this equation is not recommended by the International Union of Pure and Applied Chemistry, it is nevertheless sanctioned by long usage, and we shall use it here. With this definition, increasing overpotential η corresponds to increasing rate of reaction. In other words, with this definition, the overpotential is a measure of the driving force for the reaction. The same inference may be drawn from the equation

$$\eta = \frac{-\Delta \bar{G}_m^{0}}{F} \qquad (54)$$
An immediate corollary is that the condition η = 0 corresponds to zero driving force (thermodynamic equilibrium) between the reactant, the product and the electrode (ΔḠ⁰_m = 0).

Fig. 2 Gibbs energy diagram for heterogeneous electron transfer from an electrode to an acceptor species in solution. The superscript bar indicates that the Gibbs energy of the injected electron has been added to that of the reactant. After Fletcher [7]
By defining a molar Gibbs energy of activation,

$$\Delta G_m^{*} = \frac{\left(\lambda_m + \Delta G_m^{0} + FE_F\right)^2}{4\lambda_m} \qquad (55)$$
$$= \frac{\left(\lambda_m - F\eta\right)^2}{4\lambda_m} \qquad (56)$$

we can conveniently put Eq. 52 into the standard Arrhenius form

$$k_{et} = \frac{2\pi}{\hbar}\, H_{DA}^2\, \frac{N_A}{\sqrt{4\pi\lambda_m R T}}\, \exp\!\left(\frac{-\Delta G_m^{*}}{R T}\right) \qquad (57)$$
We can further simplify the analysis by defining the partial derivative ∂ΔG*_m/∂(−Fη) at constant ΔG⁰_m as the symmetry factor β, so that

$$\Delta G_m^{*} = \beta^2 \lambda_m \qquad (58)$$

where

$$\beta = \frac{\partial\, \Delta G_m^{*}}{\partial\,(-F\eta)} = \frac{1}{2}\left(1 - \frac{F\eta}{\lambda_m}\right) \qquad (59)$$

This latter equation highlights the remarkable fact that electron transfer reactions require less thermal activation energy (ΔG*_m) as the overpotential (η) is increased. Furthermore, the parameter β quantifies the relationship between these parameters.
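As a numerical illustration of Eqs. 56 and 59 (added here; λ_m = 100 kJ mol⁻¹ is an assumed, representative value), the sketch below shows how the molar activation energy falls, and the symmetry factor drifts below 1/2, as the overpotential is raised.

```python
# Molar activation energy (Eq. 56) and symmetry factor (Eq. 59) versus overpotential.
F = 96485.0      # C/mol
LAM_M = 100e3    # J/mol, assumed reorganization energy per mole

def dG_act(eta: float) -> float:
    return (LAM_M - F * eta)**2 / (4 * LAM_M)   # Eq. 56, J/mol

def beta(eta: float) -> float:
    return 0.5 * (1 - F * eta / LAM_M)          # Eq. 59

for eta in (0.0, 0.1, 0.3, 0.5):                # volts
    print(f"eta = {eta:.1f} V   dG*_m = {dG_act(eta) / 1000:6.1f} kJ/mol   beta = {beta(eta):.3f}")
```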
Expanding Eq. 56 yields

$$\Delta G_m^{*} = \frac{\lambda_m^2 - 2\lambda_m F\eta + F^2\eta^2}{4\lambda_m} \qquad (60)$$

which rearranges into the form

$$\Delta G_m^{*} = \frac{\lambda_m}{4} - \left(\frac{2\beta + 1}{4}\right) F\eta \qquad (61)$$

Now substituting back into Eq. 57 yields

$$k_{et} = \frac{2\pi}{\hbar}\, H_{DA}^2\, \frac{N_A}{\sqrt{4\pi\lambda_m R T}}\, \exp\!\left(\frac{-\lambda_m}{4RT}\right) \exp\!\left[\frac{(2\beta + 1)F\eta}{4RT}\right] \qquad (62)$$
$$= k^{0}\, \exp\!\left[\frac{(2\beta + 1)F\eta}{4RT}\right] \qquad (63)$$

At thermal equilibrium, an analogous equation applies to the back reaction, except that β is replaced by (1 − β). Thus, for the overall current–voltage curve, we obtain

$$I = I_0 \left\{ \exp\!\left[\frac{(2\beta + 1)F\eta}{4RT}\right] - \exp\!\left[\frac{-(3 - 2\beta)F\eta}{4RT}\right] \right\} \qquad (64)$$

where

$$\beta = \frac{1}{2}\left(1 - \frac{F\eta}{\lambda_m}\right) \qquad (65)$$
Equation 64 is the current–voltage curve for a reversible, one-electron transfer reaction at thermal equilibrium. It differs from the textbook Butler–Volmer equation [9, 10], namely

$$I = I_0 \left\{ \exp\!\left(\frac{\beta_f F\eta}{RT}\right) - \exp\!\left(\frac{-\beta_b F\eta}{RT}\right) \right\} \qquad (66)$$

because the latter was derived on the (incorrect) assumption of linear Gibbs energy curves. The Butler–Volmer equation is therefore in error. However, its outward form can be rescued by defining the following modified symmetry factors

$$\beta_f = \frac{2\beta + 1}{4} \qquad (67)$$

and

$$\beta_b = \frac{3 - 2\beta}{4} \qquad (68)$$

so that

$$\beta_f = \frac{1}{2}\left(1 - \frac{F\eta}{2\lambda_m}\right) \qquad (69)$$

and

$$\beta_b = \frac{1}{2}\left(1 + \frac{F\eta}{2\lambda_m}\right) \qquad (70)$$

Using these revised definitions, we can continue to use the traditional form of the Butler–Volmer equation, provided we do not forget that we have re-interpreted β_f and β_b in this new way!
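The practical difference between Eq. 64 and the textbook Butler–Volmer expression only becomes apparent at high overpotentials, where Fη is no longer small compared with λ_m. The sketch below is an added comparison (I₀ = 1 in arbitrary units and λ_m = 100 kJ mol⁻¹ are assumed values); the textbook curve is evaluated with constant β_f = β_b = 1/2.

```python
# Compare the current-voltage curve of Eq. 64 (with the potential-dependent beta of
# Eq. 65) against the textbook Butler-Volmer form (Eq. 66 with beta_f = beta_b = 1/2).
import math

F, R, T = 96485.0, 8.314, 298.15
LAM_M = 100e3    # J/mol, assumed
I0 = 1.0         # arbitrary units

def current_eq64(eta: float) -> float:
    beta = 0.5 * (1 - F * eta / LAM_M)                        # Eq. 65
    forward = math.exp((2 * beta + 1) * F * eta / (4 * R * T))
    backward = math.exp(-(3 - 2 * beta) * F * eta / (4 * R * T))
    return I0 * (forward - backward)                          # Eq. 64

def current_bv(eta: float, bf: float = 0.5, bb: float = 0.5) -> float:
    return I0 * (math.exp(bf * F * eta / (R * T)) - math.exp(-bb * F * eta / (R * T)))

for eta in (0.05, 0.2, 0.5):   # volts
    print(f"eta = {eta:.2f} V   Eq. 64: {current_eq64(eta):10.3e}   Butler-Volmer: {current_bv(eta):10.3e}")
```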
Tafel slopes for multi-step reactions
As shown above, the current–voltage curve for a reversible, one-electron transfer reaction at thermal equilibrium may be written in the form

$$I = FACk^{0} \left\{ \exp\!\left(\frac{\beta_f F\eta}{RT}\right) - \exp\!\left(\frac{-\beta_b F\eta}{RT}\right) \right\} \qquad (71)$$

which corresponds to the reaction

$$\mathrm{A} + e^{-} \rightarrow \mathrm{B} \qquad (72)$$

In what follows, we seek to derive the current–voltage curves corresponding to the reaction

$$\mathrm{A} + n\,e^{-} \rightarrow \mathrm{Z} \qquad (73)$$
In order to keep the equations manageable, we consider
the forward and backward parts of the rate-determining step
independently. This makes the rate-determining step appear
irreversible in both directions. For the most part, we also
restrict attention to reaction schemes containing uni-molec-
ular steps (so there are no dimerization steps or higher-order
steps). The general approach is due to Roger Parsons [11].
We begin by writing down all the electron transfer reaction steps separately:

A + e⁻ → B   [pre-step 1]
B + e⁻ → C   [pre-step 2]
⋯
Q + e⁻ → R   [pre-step n_p]
R + n_q e⁻ → S   [rds]
S + e⁻ → T   [post-step 1]
T + e⁻ → U   [post-step 2]
⋯
Y + e⁻ → Z   [post-step n_r]        (74)
Next, we adopt some simplifying notation. First, we define n_p to be the number of electrons transferred prior to the rate-determining step. Then we define n_r to be the number of electrons transferred after the rate-determining step. In between, we define n_q to be the number of electrons transferred during one elementary act of the rate-determining step (this is a ploy to ensure that n_q can take only the values zero or one, depending on whether the rate-determining step is a chemical reaction or an electron transfer. This will be convenient later).

Restricting attention to the above system of uni-molecular steps, the total number of electrons transferred is

$$n = n_p + n_q + n_r \qquad (75)$$
We now make the following further assumptions. (1)
The exchange current of the rate-determining step is at least
100 times less than that of any other step, (2) the rate-
determining step of the forward reaction is also the rate-
determining step of the backward reaction, (3) no steps are
concerted, (4) there is no electrode blockage by adsorbed
species, and (5) the reaction is in a steady state. Given these
assumptions, the rate of the overall reaction is
$$I_{\mathrm{total}} = I_0 \left\{ \exp\!\left[\left(n_p + n_q\beta_f\right)\frac{F}{RT}\,\eta\right] - \exp\!\left[-\left(n_r + n_q\beta_b\right)\frac{F}{RT}\,\eta\right] \right\}$$
$$= I_0\left[\exp\!\left(\alpha_f F\eta/RT\right) - \exp\!\left(-\alpha_b F\eta/RT\right)\right] \qquad (76)$$
In the above expression, α_f should properly be called the transfer coefficient of the overall forward reaction, and correspondingly, α_b should properly be called the transfer coefficient of the overall backward reaction. But in the literature, they are often simply called transfer coefficients.
It may be observed that n_r does not appear inside the first exponential in Eq. 76. This is because electrons that are transferred after the rate-determining step serve only to multiply the height of the current/overpotential relation and do not have any effect on the shape of the current/overpotential relation. For the same reason, n_p does not appear inside the second exponential in Eq. 76.
Although Eq. 76 has the same outward form as the Butler–Volmer equation (Eq. 66), actually the transfer coefficients α_f and α_b are very different from the modified symmetry factors β_f and β_b and should never be confused with them. Basically, α_f and α_b are composite terms describing the overall kinetics of multi-step many-electron reactions, whereas β_f and β_b are fundamental terms describing the rate-determining step of a single electron transfer reaction. Under the assumptions listed above, they are related by the equations

$$\alpha_f = n_p + n_q \beta_f \qquad (77)$$

and

$$\alpha_b = n_r + n_q \beta_b \qquad (78)$$
A century of electrochemical research is condensed into these equations. And the key result is this: if the rate-determining step is a purely chemical step (i.e. does not involve electron transfer), then n_q = 0, and the modified symmetry factors β_f and β_b disappear from the equations for α_f and α_b. Conversely, if the rate-determining step is an electrochemical step (i.e. does involve electron transfer), then n_q = 1, and the modified symmetry factors β_f and β_b enter the equations for α_f and α_b. Also, in passing, we remark that α_f and α_b differ from β_f and β_b in another important respect. The sum of β_f and β_b is

$$\beta_f + \beta_b = 1 \qquad (79)$$

whereas the sum of α_f and α_b is

$$\alpha_f + \alpha_b = n \qquad (80)$$

That is, the sum of the transfer coefficients of the forward and backward reactions is not necessarily unity. This stands in marked contrast to the classic case of a single-step one-electron transfer reaction, for which the sum is always unity. Furthermore, in systems where the rate-determining steps of the forward and backward reactions are not the same (a common occurrence), the sums of α_f and α_b have no particular diagnostic value.
Regarding experimental measurements, the analysis of Tafel slopes [12] is generally performed by evaluating the expression

$$\alpha_f \ \text{or}\ \alpha_b = \frac{2.303\,RT}{F}\left(\frac{\partial \log_{10}|I|}{\partial \eta}\right)_{|I| \gg I_0} \qquad (81)$$

Such an analysis should be treated with great caution, however, since both precision and accuracy require the collection of data over more than two orders of magnitude of current, with no ohmic distortion, no diffusion control and no contributions from background currents. The kinetics should also be in a steady state. Accordingly, no experimental Tafel slope should be believed that has been derived from less than two orders of magnitude of current.
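In practice, Eq. 81 amounts to a straight-line fit of log₁₀|I| against η in the region where the back reaction is negligible. The sketch below is an added illustration: it generates synthetic data from the forward branch of Eq. 76 with an assumed α_f = 1.5 and I₀ = 10⁻⁶ A, then recovers α_f by least squares and converts it to a Tafel slope.

```python
# Recover the transfer coefficient alpha_f (Eq. 81) from a synthetic Tafel plot
# spanning more than two orders of magnitude of current.
import numpy as np

F, R, T = 96485.0, 8.314, 298.15
ALPHA_TRUE = 1.5     # assumed transfer coefficient of the forward reaction
I0 = 1e-6            # assumed exchange current, A

eta = np.linspace(0.10, 0.25, 30)                    # overpotentials, V (|I| >> I0 here)
I = I0 * np.exp(ALPHA_TRUE * F * eta / (R * T))      # forward branch of Eq. 76 only

slope, _ = np.polyfit(eta, np.log10(np.abs(I)), 1)   # d(log10 |I|)/d(eta), decades per volt
alpha_fit = 2.303 * R * T / F * slope                # Eq. 81
print(f"fitted alpha_f = {alpha_fit:.3f}   (true value {ALPHA_TRUE})")
print(f"Tafel slope    = {1000.0 / slope:.1f} mV/decade")
```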
The theoretical analysis of multi-step reactions is also difficult. On one hand, the number of possible mechanisms increases rapidly with the number of electrons transferred, which makes the algebra complex. On the other hand, the assumption that the exchange current of the rate-determining step is 100 times less than that of all other steps is not necessarily true, and hence, there is always a danger of over-simplification. To steer a course between the Scylla of complexity and the Charybdis of over-simplification, we here restrict our attention to quasi-equilibrated reduction reactions for which the number of mechanistic options is small. To simplify our analysis further, we write β_f in the form

$$\beta_f = \frac{1}{2}\left(1 - \frac{F\eta}{2\lambda_m}\right) = \frac{1}{2}(1 - \theta) \qquad (82)$$

We also write 2.303 RT/F ≈ 60 mV at 25 °C (actually, the precise value is 59.2 mV).
In what follows, the rate-determining step is indicated by
the abbreviation rds. Steps that are not rate-determining
are labelled fast (though of course in the steady state all
steps proceed at the same rate). As a shorthand method of
uniquely identifying component steps of reaction schemes,
we also adopt the following notation: E indicates an
electrochemical step, C indicates a chemical step, D
indicates a dimerization step, and a circumflex accent (^)
indicates a rate-determining step.
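The bookkeeping in Eqs. 77 and 82 is easy to automate. The helper below is an added illustration (not from the original paper): it computes α_f and the corresponding forward Tafel slope from n_p, n_q and β_f, and with β_f = 1/2 (i.e. θ = 0) it reproduces the limiting slopes quoted in the examples that follow and collected in Table 1.

```python
# Forward Tafel slope from Eq. 77 and Eq. 81: alpha_f = n_p + n_q * beta_f,
# slope = 2.303*R*T/(alpha_f*F). A rate-determining chemical step preceding any
# electron transfer (alpha_f = 0) gives a potential-independent "kinetic current".
import math

R, T, F = 8.314, 298.15, 96485.0

def tafel_slope_mV(n_p: int, n_q: int, beta_f: float = 0.5) -> float:
    """Forward Tafel slope in mV/decade; math.inf signals a kinetic current."""
    alpha_f = n_p + n_q * beta_f
    return math.inf if alpha_f == 0 else 1000.0 * 2.303 * R * T / (alpha_f * F)

# (n_p, n_q) for some of the mechanisms treated below (beta_f = 1/2 assumed)
cases = {
    "E^    (Example 1)":  (0, 1),
    "EC^   (Example 4)":  (1, 0),
    "C^E   (Example 5)":  (0, 0),
    "EE^   (Example 7)":  (1, 1),
    "EEC^  (Example 8)":  (2, 0),
    "EEEC^ (Example 11)": (3, 0),
    "EEE^  (Example 12)": (2, 1),
}
for name, (n_p, n_q) in cases.items():
    print(f"{name:20s} -> {tafel_slope_mV(n_p, n_q):6.1f} mV/decade")
```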
Example 1 (Ê)

O + e⁻ → R   rds

In this case, n_p = 0, n_q = 1, n_r = 0, so that α_f = n_p + n_qβ_f ≈ (1/2)(1 − θ), and

$$\frac{\partial \eta}{\partial \log_{10}|I|} = \frac{2.303\,RT}{\alpha_f F} \approx \frac{120}{1 - \theta}\ \mathrm{mV\ decade^{-1}} \qquad (83)$$

This is the classical result for a single-step one-electron transfer process. Note that fast chemical equilibria before or after the rate-determining step have no effect on the Tafel slope, as the next two examples confirm.
Example 2 (CÊ)

O → I (rearranges)   fast
I + e⁻ → R   rds

In this case, n_p = 0, n_q = 1, n_r = 0, so that α_f = n_p + n_qβ_f ≈ (1/2)(1 − θ), and

$$\frac{\partial \eta}{\partial \log_{10}|I|} = \frac{2.303\,RT}{\alpha_f F} \approx \frac{120}{1 - \theta}\ \mathrm{mV\ decade^{-1}} \qquad (84)$$
Example 3 (ÊC)

O + e⁻ → I   rds
I → R (rearranges)   fast

In this case, n_p = 0, n_q = 1, n_r = 0, so that α_f = n_p + n_qβ_f ≈ (1/2)(1 − θ), and

$$\frac{\partial \eta}{\partial \log_{10}|I|} = \frac{2.303\,RT}{\alpha_f F} \approx \frac{120}{1 - \theta}\ \mathrm{mV\ decade^{-1}} \qquad (85)$$
Example 4 (EĈ)

O + e⁻ → I   fast
I → R (rearranges)   rds

In this case, n_p = 1, n_q = 0, n_r = 0, so that α_f = n_p + n_qβ_f = 1, and

$$\frac{\partial \eta}{\partial \log_{10}|I|} = \frac{2.303\,RT}{\alpha_f F} \approx 60\ \mathrm{mV\ decade^{-1}},\ \text{independent of } \beta_f \qquad (86)$$
Example 5 (ĈE)

O → I (rearranges)   rds
I + e⁻ → R   fast

In this case, n_p = 0, n_q = 0, n_r = 1, so that α_f = n_p + n_qβ_f = 0, and

$$\frac{\partial \eta}{\partial \log_{10}|I|} = \frac{2.303\,RT}{\alpha_f F} \to \infty\ \mathrm{mV\ decade^{-1}},\ \text{independent of } \beta_f \qquad (87)$$

Note: the current is independent of potential and is known as a kinetic current.
Example 6 (ÊE)

O + e⁻ → I   rds
I + e⁻ → R   fast

In this case, n_p = 0, n_q = 1, n_r = 1, so that α_f = n_p + n_qβ_f ≈ (1/2)(1 − θ), and

$$\frac{\partial \eta}{\partial \log_{10}|I|} = \frac{2.303\,RT}{\alpha_f F} \approx \frac{120}{1 - \theta}\ \mathrm{mV\ decade^{-1}} \qquad (88)$$
Example 7 (EÊ)

O + e⁻ → I   fast
I + e⁻ → R   rds

In this case, n_p = 1, n_q = 1, n_r = 0, so that α_f = n_p + n_qβ_f ≈ 1 + (1/2)(1 − θ), and

$$\frac{\partial \eta}{\partial \log_{10}|I|} = \frac{2.303\,RT}{\alpha_f F} \approx \frac{40}{1 - \theta/3}\ \mathrm{mV\ decade^{-1}} \qquad (89)$$
Example 8 (EEĈ)

O + e⁻ → I   fast
I + e⁻ → I′   fast
I′ → R (rearranges)   rds

In this case, n_p = 2, n_q = 0, n_r = 0, so that α_f = n_p + n_qβ_f = 2, and

$$\frac{\partial \eta}{\partial \log_{10}|I|} = \frac{2.303\,RT}{\alpha_f F} = 30\ \mathrm{mV\ decade^{-1}},\ \text{independent of } \beta_f \qquad (90)$$
Example 9 (EĈE)

O + e⁻ → I   fast
I → I′ (rearranges)   rds
I′ + e⁻ → R   fast

In this case, n_p = 1, n_q = 0, n_r = 1, so that α_f = n_p + n_qβ_f = 1, and

$$\frac{\partial \eta}{\partial \log_{10}|I|} = \frac{2.303\,RT}{\alpha_f F} = 60\ \mathrm{mV\ decade^{-1}},\ \text{independent of } \beta_f \qquad (91)$$

Note: 60 mV decade⁻¹ Tafel slopes are very common for the reduction reactions of organic molecules containing double bonds because, as soon as the first electron is on board, there are many opportunities for structural rearrangement compared with inorganic molecules. This rearrangement is usually rate determining.
Example 10 (ECÊ)

O + e⁻ → I   fast
I → I′ (rearranges)   fast
I′ + e⁻ → R   rds

In this case, n_p = 1, n_q = 1, n_r = 0, so that α_f = n_p + n_qβ_f ≈ 1 + (1/2)(1 − θ), and

$$\frac{\partial \eta}{\partial \log_{10}|I|} = \frac{2.303\,RT}{\alpha_f F} \approx \frac{40}{1 - \theta/3}\ \mathrm{mV\ decade^{-1}} \qquad (92)$$
Example 11 (EEEĈ)

O + e⁻ → I   fast
I + e⁻ → I′   fast
I′ + e⁻ → I″   fast
I″ → R (rearranges)   rds

In this case, n_p = 3, n_q = 0, n_r = 0, so that α_f = n_p + n_qβ_f = 3, and

$$\frac{\partial \eta}{\partial \log_{10}|I|} = \frac{2.303\,RT}{\alpha_f F} = 20\ \mathrm{mV\ decade^{-1}},\ \text{independent of } \beta_f \qquad (93)$$
Example 12 (EEÊ)

O + e⁻ → I   fast
I + e⁻ → I′   fast
I′ + e⁻ → R   rds

In this case, n_p = 2, n_q = 1, n_r = 0, so that α_f = n_p + n_qβ_f ≈ 2 + (1/2)(1 − θ), and

$$\frac{\partial \eta}{\partial \log_{10}|I|} = \frac{2.303\,RT}{\alpha_f F} \approx \frac{24}{1 - \theta/5}\ \mathrm{mV\ decade^{-1}} \qquad (94)$$
Example 13 (ĈED)

H⁺ → (H⁺)_ads   fast
(H⁺)_ads + e⁻ → (H)_ads   rds
2(H)_ads → H₂   fast

In this case, n_p = 0, n_q = 1, n_r = 0, but the presence of the follow-up dimerization step means that the total number of electrons per molecule of product is n = 2(n_p + n_q) + n_r = 2. However, the dimerization step has no effect on the rate of the reaction, so that α_f = n_p + n_qβ_f ≈ (1/2)(1 − θ), and

$$\frac{\partial \eta}{\partial \log_{10}|I|} = \frac{2.303\,RT}{\alpha_f F} \approx \frac{120}{1 - \theta}\ \mathrm{mV\ decade^{-1}} \qquad (95)$$

Notes:
(1) This is a candidate model for hydrogen evolution on mercury.
(2) The formation of (H)_ads is slow, and the destruction of (H)_ads is fast. Hence, the electrode surface has a low coverage of adsorbed hydrogen radicals.
(3) For simplicity, we have written the hydrogen ion H⁺ instead of the hydronium ion H₃O⁺.
(4) In the last stage of the reaction, we have assumed that (H)_ads is mobile on the electrode surface, so the mutual encounter rate of (H)_ads species is fast.
(5) At low rates of reaction, the H₂ produced is present in solution as H₂(aq). At high rates of reaction, the H₂ nucleates as bubbles and evolves as a gas.
(6) This mechanism is not one of the textbook mechanisms. The closest textbook mechanism is the Volmer mechanism, which assumes a concerted electron transfer and proton transfer:

$$\mathrm{H^{+}} + e^{-} \rightarrow (\mathrm{H})_{\mathrm{ads}} \qquad (96)$$

Recall that two reactions are said to be concerted if the overall rate of reaction through their merged transition state is faster than the rate through their separate transition states. Because the Volmer mechanism posits simultaneous electron and nuclear motions, it violates the Franck–Condon principle. However, this is not to say that it does not occur in reality, because H⁺ has a low rest mass compared with all other chemical species.
Example 14 (CED̂)

H⁺ → (H⁺)_ads   fast
(H⁺)_ads + e⁻ → (H)_ads   fast
2(H)_ads → H₂   rds

In this case, n_p = 1, n_q = 0, n_r = 0, but the presence of the rate-determining dimerization step means that the total number of electrons per molecule of product is n = 2(n_p) + n_q + n_r = 2. The overall rate of reaction now depends on the square of the concentration of (H)_ads, so that α_f = 2(n_p) + n_qβ_f = 2 and

$$\frac{\partial \eta}{\partial \log_{10}|I|} = \frac{2.303\,RT}{\alpha_f F} = 30\ \mathrm{mV\ decade^{-1}},\ \text{independent of } \beta_f \qquad (97)$$

Notes:
(1) This is a candidate model for hydrogen evolution on palladium hydride.
(2) This mechanism is known in the literature as The Tafel Mechanism.
(3) A low coverage of the electrode is assumed again. However, on this occasion, such an assumption possibly conflicts with the fact that the formation of (H)_ads may be fast and the destruction of (H)_ads may be slow. If that occurs, a more complex reaction scheme has to be considered to take into account the coverage by intermediates.
(4) The hydrogen evolution reaction exemplifies the metal electrode material effect. This effect occurs when an electrode surface stabilizes an intermediate that is unstable in solution and thus enhances the overall rate (i.e. decreases the overpotential). In the present case, the palladium surface strongly stabilizes H, and so its hydrogen overpotential is very low. By contrast, the mercury surface only weakly stabilizes H, and so its hydrogen overpotential is very high. [The instability of H(aq) is evident from the standard potential of its formation from H⁺, about −2.09 V vs SHE, so free H(aq) never appears at normal potentials between 0 and −2.0 V vs SHE.]
(5) An alternative formulation of the metal electrode material effect is the following: If the same overall reaction occurs faster at one electrode material than another, then the faster reaction necessarily involves an adsorbed intermediate. This is, in fact, a very clever way of observing short-lived intermediates without using fancy apparatus! However, to be certain that a reaction genuinely involves an adsorbed intermediate, the overpotential of the faster case should be at least kT/e (25.7 mV) less than that of the slower case, to ensure that the difference is not due to minor differences in the density of states at the Fermi energy of the electrodes.
(6) At low rates of reaction, the H₂ produced is present in solution as H₂(aq).
Summary
Conclusions
Tafel slopes for multistep electrochemical reactions have
been derived from first principles (Table 1). Whilst no
claim is made that individual results are original (indeed
most of them are known), their derivation en masse has
allowed us to identify the assumptions that they all have in
common. Thus, the four standard assumptions of electro-
chemical theory that emerge are: (1) there is weak orbital
overlap between reactant species and electrodes, (2) the
ambient solution never departs from thermodynamic equi-
librium, (3) the fluctuations that trigger electron transfer are
drawn from a Gaussian distribution, and (4) there is quasi-
equilibrium of all reaction steps other than the rate-
determining step.
Finally, we reiterate that the Butler–Volmer equation
fails at high overpotentials. The rigorous replacement is Eq.
64, although traditionalists may prefer to retain the old
formula by applying the corrections given by Eqs. 67 and
68.
References
1. Heisenberg W (1927) Z Phys 43:172. doi:10.1007/BF01397280
2. Robertson HP (1929) Phys Rev 34:163. doi:10.1103/PhysRev.34.163
3. Schrödinger E (1926) Ann Phys 79:734. doi:10.1002/andp.19263840804
4. Dirac PAM (1930) The principles of quantum mechanics. Clarendon, Oxford
5. Dirac PAM (1927) Proc R Soc Lond 113:621. doi:10.1098/rspa.1927.0012
6. Orear J, Rosenfeld AH, Schluter RA (1950) Nuclear physics. A course given by Enrico Fermi at the University of Chicago. U Chicago Press, Chicago
7. Fletcher S (2007) J Solid State Electrochem 11:965. doi:10.1007/s10008-007-0313-5
8. Boltzmann L (1909) Wissenschaftliche Abhandlungen. Barth, Leipzig
9. Butler JAV (1924) Trans Faraday Soc 19:729. doi:10.1039/tf9241900729
10. Erdey-Grúz T, Volmer M (1930) Z Phys Chem 150:203
11. Parsons R (1951) Trans Faraday Soc 47:1332. doi:10.1039/tf9514701332
12. Tafel J (1905) Z Phys Chem 50:641
Table 1 Tafel slopes for multistep electrochemical reactions

Reaction scheme     Tafel slope b (mV decade⁻¹)
ĈE                  ∞
ĈED                 ∞
Ê                   120/(1 − θ)
ÊE                  120/(1 − θ)
ÊEE                 120/(1 − θ)
ÊC                  120/(1 − θ)
ÊCE                 120/(1 − θ)
CÊ                  120/(1 − θ)
CÊD                 120/(1 − θ)
EĈ                  60 exactly
EĈE                 60 exactly
EÊ                  40/(1 − θ/3)
EÊE                 40/(1 − θ/3)
ECÊ                 40/(1 − θ/3)
EEĈ                 30 exactly
CED̂                 30 exactly
EEÊ                 24/(1 − θ/5)
EEEĈ                20 exactly

E indicates an electrochemical step, C indicates a chemical step, D indicates a dimerization step, and a circumflex accent (^) indicates a rate-determining step. The word exactly is intended to signify a result independent of θ.