

Oldham Festschrift Special Issue
Ms. No. JSEL-D-08-00234 S Fletcher, Journal of Solid State Electrochemistry, 13, 537–549 (2009). Submitted 15 July 2008. Accepted 06 Aug 2008. Published online 03 Oct 2008.

Published online first at http://dx.doi.org/10.1007/s10008-008-0670-8 The institutional repository is http://hdl.handle.net/2134/3716 The original publication at http://www.springerlink.com/content/101568/

Tafel Slopes from First Principles

Stephen Fletcher

Department of Chemistry, Loughborough University, Ashby Road, Loughborough, Leicestershire LE11 3TU, UK Tel. 01509 22 2561 Fax 01509 22 3925 Email Stephen.Fletcher@Lboro.ac.uk

Keywords

Schrödinger Equation, Golden Rule, Butler-Volmer Equation, Tafel Slopes, Electron Transfer.



Abstract

Tafel slopes for multistep electrochemical reactions are derived from first principles. The derivation takes place in two stages. First, Dirac's perturbation theory is used to solve the Schrödinger equation. Second, current-voltage curves are obtained by integrating the single-state results over the full density of states in electrolyte solutions. Thermal equilibrium is assumed throughout. Somewhat surprisingly, it is found that the symmetry factor that appears in the Butler-Volmer equation is different from the symmetry factor that appears in electron transfer theory, and a conversion formula is given. Finally, the Tafel slopes are compiled in a convenient look-up table.

Dedication

This article is dedicated to Professor Keith B. Oldham on the occasion of his eightieth birthday.

Introduction

To help celebrate the eightieth birthday of my long-time friend and colleague Keith B. Oldham, I thought it might be fun to present him with a table of Tafel slopes derived from first principles (i.e. from the Schrödinger equation). A total proof of this kind has been technically feasible for a number of years but --so far as I know-- it has never been attempted before. This seems an auspicious moment to undertake this task.

The Wavefunction of an Electron

"The amount of theoretical ground one has to cover before being able to solve problems of real practical value is rather large..." P.A.M. Dirac, in "The Principles of Quantum Mechanics", Clarendon Press, Oxford, 1930.

Electrochemists want to understand how electrons interact with matter. But before they can even begin to construct a model, they must first specify the positions of the electrons. This is not as easy as it sounds, however, because the positions of electrons are not determined by the laws of Newtonian mechanics. They are determined by the probabilistic laws of quantum mechanics. In particular, the location of any given electron is governed by its wavefunction ψ. This is a complex-valued function that describes the probability amplitude of finding the electron at any point in space or time. Now, it is a well-known postulate of quantum mechanics that the maximum amount of information about an electron is contained in its wavefunction. If we accept this postulate as true (and we currently have no alternative) then we are forced to



conclude that the wavefunction is the best available parameter for characterizing the behaviour of an electron in space-time. It is natural to enquire how well wavefunctions do characterize electron behaviour. In general, the answer is "very well indeed". For example, wavefunctions permit the calculation of the most probable values of all the known properties of electrons or systems of electrons to very high accuracy. One problem remains, however. Due to the probabilistic character of wavefunctions, they fail to describe the individual behaviour of any system at very short times. In such cases, the best they can do is describe the average behaviour of a large number of systems having the same preparation. Despite this limitation, the analysis of wavefunctions nevertheless provides measures of the probabilities of occurrence of various states and the rates of change of those probabilities. Here, following Dirac, we are happy to interpret the latter as reaction rate constants.

The Uncertainty Principle

This principle was first enunciated by Werner Heisenberg in 1927 [1]. The principle asserts that one cannot simultaneously measure the values of a pair of conjugate quantum state properties to better than a certain limit of accuracy. There is a minimum for the product of the uncertainties. Key features of pairs of conjugate quantum state properties are that they are uncorrelated, and, when multiplied together, have dimensions of energy × time. Examples are (i) momentum-and-location, and (ii) energy-and-lifetime. Thus

Δp Δx ≥ ħ/2    (1)

ΔU Δt ≥ ħ/2    (2)

Here p is the momentum of a particle (in one dimension), x is the location of a particle (in one dimension), U is the energy of a quantum state, t is the lifetime of a quantum state, Δ denotes the uncertainty in each of these quantities, and ħ is the Reduced Planck Constant,

ħ = h/2π = 0.6582 (eV × fs)    (3)

The formal and general proof of the above inequalities was first given by Howard Percy Robertson in 1929 [2]. He also showed that the Uncertainty Principle was a deduction from quantum mechanics, not an independent hypothesis. As a result of the "blurring" effect of the uncertainty principle, quantum mechanics is unable to predict the precise behaviour of a single molecule at short times. But it can still predict the average behaviour of a large number of molecules at short times, and it can also predict the time-averaged behaviour of a single molecule over long times. For an electron, the energy measured over a finite time interval Δt has an uncertainty



ΔU ≥ ħ / 2Δt    (4)

and therefore to decrease the energy uncertainty in a single electron transfer step to practical insignificance (< 1 meV, say, which is equivalent to about 1.602 × 10⁻²² J/electron) it is necessary to observe the electron for Δt > 330 fs.
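As a quick numerical check of Eq. (4), the minimal Python sketch below evaluates the observation time needed to bring the energy uncertainty below 1 meV, using ħ = 0.6582 eV fs from Eq. (3); the variable names are mine, not the paper's.

# Minimal check of Eq. (4): dU = hbar/(2 dt), solved for dt at dU = 1 meV
hbar_eV_fs = 0.6582                    # reduced Planck constant, eV*fs (Eq. 3)
dU_eV = 1e-3                           # target energy uncertainty, 1 meV
dt_fs = hbar_eV_fs / (2 * dU_eV)
print(f"observation time needed: {dt_fs:.0f} fs")   # ~329 fs, i.e. about 330 fs as stated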

The Quantum Mechanics of Electron Transfer

As shown by Erwin Schrödinger [3], the wavefunction of a (non-relativistic) electron may be derived by solving the time-dependent equation

iħ ∂Ψ/∂t = HΨ    (5)

Here, H is a linear operator known as the Hamiltonian, and ħ is the Reduced Planck Constant (ħ = h/2π). The Hamiltonian is a differential operator of total energy. It combines the kinetic energy and the electric potential energy of the electron into one composite term:

iħ ∂Ψ/∂t = −(ħ²/2m) ∇²Ψ − eVΨ    (6)

where m is the electron mass, −e is the electron charge, and V is the electric potential of the electric field. Note that the electric potential at a particular point in space (x, y, z), created by a system of charges, is simply equal to the change in potential energy that would occur if a test charge of +1 were introduced at that point. So −eV is the potential energy in the electric field. The Laplacian ∇², which also appears in the Schrödinger equation, is the square of the vector operator ∇ ("del"), defined in Cartesian co-ordinates by

∇(x, y, z) = x̂ ∂/∂x + ŷ ∂/∂y + ẑ ∂/∂z    (7)

Every solution of the Schrödinger equation represents a possible state of the system. There is, however, always some uncertainty associated with the manifestation of each state. Due to the uncertainty, the square of the modulus of the wavefunction |Ψ|² may be interpreted in two ways. Firstly, and most abstractly, as the probability that an electron might be found at a given point. Secondly, and more concretely, as the electric charge density at a given point (averaged over a large number of identically prepared systems for a short time, or averaged over one system for a long time).



Transition Probabilities

Almost all kinetic experiments in physics and chemistry lead to statements about the relative frequencies of events, expressed either as deterministic rates or as statistical transition probabilities. In the limit of large systems these formulations are, of course, equivalent. By definition, a transition probability is just the probability that one quantum state will convert into another quantum state in a single step. "The theory of transition probabilities was developed independently by Dirac with great success. It can be said that the whole of atomic and nuclear physics works with this system of concepts, particularly in the very elegant form given to them by Dirac." Max Born, "The Statistical Interpretation of Quantum Mechanics", Nobel Lecture, 11th December 1954.
Time Dependent Perturbation Theory

It is an unfortunate fact of quantum mechanics that exact mathematical solutions of the time-dependent SchrÆdinger equation are possible only at the very lowest levels of system complexity. Even at modest levels of complexity, mathematical solutions in terms of the commonplace functions of applied physics are impossible. The recognition of this fact caused great consternation in the early days of quantum mechanics. To overcome the difficulty, Paul Dirac developed an extension of quantum mechanics called "perturbation theory", which yields good approximate solutions to many practical problems [4]. The only limitation on Dirac's method is that the coupling (orbital overlap) between states should be weak. The key step in perturbation theory is to split the total Hamiltonian into two parts, one of which is simple and the other of which is small. The simple part consists of the Hamiltonian of the unperturbed fraction of the system, which can be solved exactly, while the small part consists of the Hamiltonian of the perturbed fraction of the system, which, though complex, can often be solved as a power series. If the latter converges, solutions of various problems can be obtained to any desired accuracy simply by evaluating more and more terms of the power series. Although the solutions produced by Dirac's method are not exact, they can nevertheless be extremely accurate. In the case of electron transfer, we may imagine a transition between two well-defined electronic states (an occupied state D inside an electron donor D, and an unoccupied state A inside an electron acceptor A), whose mutual interaction is weak. Dirac showed that, provided the interaction between the states is weak, the transition probability PDA for an electron to transfer from the donor state to the acceptor state increases linearly with time. Let's see how Dirac arrived at this conclusion.



Electron Transfer From One Single State to Another Single State

If classical physics prevailed, the transfer of an electron from one single state to another single state would be governed by the conservation of energy, and would occur only when both states had exactly the same energy. But in the quantum world, the uncertainty principle (in its time-energy form) briefly intervenes, and allows electron transfer between states even when their energies are mismatched by a small amount ΔU ≈ ħ/2Δt (although energy conservation still applies on average). As a result of this complication, the transition probability of electrons between two states exhibits a complex behaviour. Roughly speaking, the probability of electron transfer between two precise energies inside two specified states increases as t², while the energy uncertainty decreases as t⁻¹. The net result is that the overall state-to-state transition probability increases proportional to t. To make these ideas precise, consider a perturbation which is "switched on" at time t = 0, and which remains constant thereafter. In electrochemistry this corresponds to the arrival of the system at the transition state. The time-dependent Schrödinger equation may now be written

iħ ∂Ψ/∂t = (H₀ + H₁)Ψ    (8)

where Ψ(x, t) is the electron wavefunction, H₀ is the unperturbed Hamiltonian operator, and H₁ is the perturbed Hamiltonian operator:

H₁(t) = 0     for t < 0    (9)
H₁(t) = H₁    for t ≥ 0    (10)

This is a step function with H₁ being a constant independent of time at t ≥ 0. Solving Eq. (8), one finds that the probability of electron transfer between two precise energies U_D and U_A is

P_DA(U, t) ≈ [2|M_DA|² / (U_A − U_D)²] {1 − cos[(U_A − U_D)t/ħ]}    (11)

where the modulus symbol denotes the (always positive) magnitude of any complex number. This result is valid provided the "matrix element" M_DA is small. The matrix element M_DA is defined as

M_DA = ∫ ψ_D V ψ_A dv    (12)

where ψ_D and ψ_A are the wavefunctions of the donor and acceptor states, V is their interaction energy, and the integral is taken over the volume v of all space. M_DA is,



therefore, a function of energy through the overlap of the wavefunctions ψ_D and ψ_A, and accordingly has units of energy. In an alternative representation, we exploit the identity

1 − cos x = 2 sin²(x/2)    (13)

so that

P_DA(U, t) ≈ [4|M_DA|² / (U_A − U_D)²] sin²[(U_A − U_D)t/2ħ]    (14)

If we now recall the cardinal sine function

sinc(x) = (sin x)/x    (15)

then we obtain

P_DA(U, t) ≈ (|M_DA|² t²/ħ²) sinc²[(U_A − U_D)t/2ħ]    (16)

To derive the asymptotic (long-time) state-to-state transition probability from this energy-to-energy probability we must integrate over the entire band of energies allowed by the uncertainty principle. This yields

P_DA(t) = (2πt/ħ) |M_DA|² δ(U_A − U_D)    (17)

This result is wonderfully compact, but unfortunately it is not very useful to electrochemists because it fails to describe electron transfer into multitudes of acceptor states at electrode surfaces, supplied by the 10⁸–10¹⁴ reactant molecules per cm² that are typically found there. These states have energies distributed over several hundred meV, and all of them interact simultaneously with all the electrons in the electrode. They also fluctuate randomly in electrostatic potential due to interactions with the thermally agitated solvent and supporting electrolyte (dissolved salt ions). Accordingly, Eq. (17) must be modified to deal with this more complex case.
Electron Transfer into a Multitude of Acceptor States

To deal with this more complex case it is necessary to define a probability density of acceptor state energies ρ_A(U). Accordingly, we define ρ_A(U) as the number of states per unit of energy, and note that it has units of joule⁻¹. If we further assume that



there is such a high density of states that they can be treated as a continuum, then the transition probability between the single donor state D and the multitude of acceptor states A becomes

P_DA(t) ≈ ∫₋∞⁺∞ (|M_DA|² t²/ħ²) sinc²[(U − U_D)t/2ħ] ρ_A(U) dU    (18)

Although this equation appears impossible to solve, Dirac, in a tour de force [5], showed that an asymptotic result could be obtained by exploiting the properties of a "delta function" such that

∫₋∞⁺∞ δ(x − x₀) F(x) dx = F(x₀)    (19)

and

δ(ax) = (1/|a|) δ(x)    (20)

By noting the identity

lim(t→∞) sinc²[(U − U_D)t/2ħ] = (2πħ/t) δ(U − U_D)    (21)

and then extracting the limit t → ∞, Dirac found that (!)

lim(t→∞) P_DA(t) ≈ (2πt/ħ) |M_DA|² ρ_A(U_D)    (22)

where U_D, the single energy of the donor state, is a constant. As we gaze in amazement at Eq. (22), we remark only that ρ_A(U_D) is not the full density of states function ρ_A(U), which it is sometimes mistakenly stated to be in the literature. It is, in fact, the particular value of the density of states function at the energy U_D.
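Equation (22) can be checked by brute-force numerical integration of Eq. (18). The Python sketch below does this for a Gaussian density of acceptor states of the form derived later in Eq. (37); the parameter values are arbitrary illustrative choices of mine, not values taken from the paper.

import numpy as np

# Brute-force check of Eq. (22): evaluate Eq. (18) numerically and compare with
# (2*pi*t/hbar) * |M_DA|^2 * rho_A(U_D).  All parameter values are illustrative.
hbar = 0.6582                        # reduced Planck constant, eV fs (Eq. 3)
kT = 0.025                           # thermal energy, eV (room temperature)
lam = 0.5                            # reorganization energy, eV (assumed)
U_A, U_D = 0.0, -0.2                 # acceptor and donor state energies, eV (assumed)
M2 = 1.0                             # |M_DA|^2; it cancels in the ratio below
t = 1000.0                           # elapsed time, fs (long after the Heisenberg time)

def rho_A(U):                        # Gaussian density of acceptor states, Eq. (37)
    return np.exp(-(U - U_A)**2 / (4*lam*kT)) / np.sqrt(4*np.pi*lam*kT)

U = np.linspace(-4.0, 4.0, 2_000_001)          # fine grid: resolves the sinc^2 oscillations
x = (U - U_D) * t / (2*hbar)
sinc2 = np.sinc(x/np.pi)**2                    # np.sinc is normalized, so sinc(x/pi) = sin(x)/x
P_18 = np.sum(M2 * t**2/hbar**2 * sinc2 * rho_A(U)) * (U[1] - U[0])   # Eq. (18)
P_22 = 2*np.pi*t/hbar * M2 * rho_A(U_D)                               # Eq. (22)
print(P_18 / P_22)                             # -> ~1.00, confirming the asymptotic result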

Upon superficial observation, it may appear that the above formula for P_DA(t) is applicable only in the limit of infinite time. But actually it is valid after a very brief interval of time

t > ħ / 2ΔU    (23)

This time is sometimes called the Heisenberg Time. At later times, Dirac's theory of the transition probability can be applied with great accuracy. Finally, in the ultimate



simplification of electron transfer theory, it is possible to derive the rate constant for electron transfer k_et by differentiating the transition probability. This leads to Dirac's final result

k_et = (2π/ħ) |M_DA|² ρ_A(U_D)    (24)

A remarkable feature of this equation is the absence of any time variable. It was Enrico Fermi who first referred to this equation as a "Golden Rule" (in 1949 -- in a university lecture!) and the name has stuck [6]. He esteemed the equation so highly because it had by then been applied with great success to many non-electrochemical problems (particularly the intensity of spectroscopic lines) in which the coupling between states (overlap between orbitals) was small. Because the equation is often referred to as "Fermi's Golden Rule", the ignorant often attribute the equation to Fermi. This is a very bad mistake. Despite its successful application to many diverse problems, it is nevertheless important to remember that the Golden Rule applies only to cases where electrons transfer from a single donor state into a multitude of acceptor states. If electrons originate from a multitude of donor states --as they do during redox reactions in electrolyte solutions-- then the transition probabilities from all the donor states must be added together, yielding

k_et = ∫₋∞⁺∞ (2π/ħ) |M_DA|² ρ_A(U_D) ρ_D(U_D) dU_D    (25)

There is, alas, nothing golden about this formula. To evaluate it, one must first develop models of each of the probability densities, and then evaluate the integral by brute force. The density of states functions ρ_A(U) and ρ_D(U) are dominated by fluctuations of electrostatic potential inside electrolyte solutions, even at thermodynamic equilibrium. According to Fletcher [7], a major source of these fluctuations is the random thermal motion (Brownian motion) of electrolyte ions. The associated bombardment of reactant species causes their electrostatic potentials to vary billions of times every second. This, in turn, makes the tunnelling of electrons possible, because it ensures that any given acceptor state will sooner-or-later have the same energy as a nearby donor state.
Electrostatic Fluctuations at Equilibrium

The study of fluctuations inside equilibrium systems was brought to a high state of development by Ludwig Boltzmann in the nineteenth century [8]. Indeed, his methods are so general that they may be applied to any small system in thermal equilibrium with a large reservoir of heat. In our case, they permit us to calculate the probability that a randomly selected electrostatic fluctuation has a work of formation ΔG.



A system is in thermal equilibrium if the requirements of detailed balance are satisfied, namely, that every process taking place in the system is exactly balanced by its reverse process, so there is no net change over time. This implies that the rate of formation of fluctuations matches their rate of dissipation. In other words, the fluctuations must have a distribution that is stationary. As a matter of fact, the formation of fluctuations at thermodynamic equilibrium is what statisticians call strict-sense stationary. It means that the statistical properties of the fluctuations are independent of the time at which they are measured. As a result, at thermodynamic equilibrium, we know in advance that the probability density function of fluctuations ρ_A(U) must be independent of time. Boltzmann discovered a remarkable property of fluctuations that occur inside systems at thermal equilibrium: they always contain the "Boltzmann factor",

exp(−W / k_BT)    (26)

where W is an appropriate thermodynamic potential, k_B is the Boltzmann constant, and T is the thermodynamic (absolute) temperature. At constant temperature and pressure, W is the Gibbs energy of formation of the fluctuation ΔG. Given this knowledge, it follows that the probability density function ρ_A(V) of electric potentials (V) must have the stationary form

ρ_A(V) = A exp(−ΔG / k_BT)    (27)

where A is a time-independent constant. In the case of charge fluctuations that trigger electron transfer, we have

ΔG = ½ C (ΔV)² = (ΔV)² / 2σ    (28)

where C is the capacitance between the reactant species (including its ionic atmosphere) and infinity, and σ is the elastance (reciprocal capacitance) between the reactant species and infinity. Identifying e²σ/2 as the reorganization energy λ we immediately obtain

ρ_A(V) = A exp[−(eV − eV_A)² / 4λk_BT]    (29)

which means we now have to solve only for A. Perhaps the most elegant method of solving for A is based on the observation that ρ_A(V) must be a properly normalized probability density function, meaning that its integral must equal one:





∫₋∞⁺∞ A exp[−(eV − eV_A)² / 4λk_BT] dV = 1    (30)

This suggests the following four-step approach. First, we recall from tables of integrals that

(1/√π) ∫₋∞⁺∞ exp(−x²) dx = 1    (31)

Second, we make the substitution

x = (eV − eV_A) / √(4λk_BT)    (32)

so that

(1/√π) ∫₋∞⁺∞ exp[−(eV − eV_A)² / 4λk_BT] √(e²/4λk_BT) dV = 1    (33)

Third, we compare the constant in the equation with the constant in the integral containing A, yielding

A = √(e² / 4πλk_BT)    (34)

Fourth, we substitute for A in the original expression to obtain

ρ_A(V) = √(e² / 4πλk_BT) exp[−(eV − eV_A)² / 4λk_BT]    (35)

This, at last, gives us the probability density of electrostatic potentials. We are now just one step from our goal, which is the probability density of the energies of the unoccupied electron states (acceptor states). We merely need to introduce the additional fact that, if an electron is transferred into an acceptor state whose electric potential is V, then the electron's energy must be −eV because the charge on the electron is −e. Thus,

ρ_A(−eV) = √(1 / 4πλk_BT) exp[−(eV − eV_A)² / 4λk_BT]    (36)



or, writing U = −eV,

ρ_A(U) = √(1 / 4πλk_BT) exp[−(U − U_A)² / 4λk_BT]    (37)

where U is the electron energy. This equation gives the stationary, normalized, probability density of acceptor states for a reactant species in an electrolyte solution. It is a Gaussian density. We can also get the un-normalized result simply by multiplying ρ_A(U) by the surface concentration of acceptor species. Finally, we note that the corresponding formula for ρ_D(U) is also Gaussian

ρ_D(U) = √(1 / 4πλk_BT) exp[−(U − U_D)² / 4λk_BT]    (38)

where we have assumed that λ_A = λ_D = λ.
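As a quick sanity check, the short sketch below confirms numerically that the Gaussian density of Eq. (37) has unit area, and that its spread is √(2λk_BT), i.e. a few hundred meV for a typical reorganization energy, consistent with the remark made after Eq. (17). The parameter values are illustrative assumptions of mine.

import numpy as np

# Sanity checks on the Gaussian density of states, Eq. (37)
lam, kT, U_A = 0.5, 0.025, 0.0          # eV (illustrative values)
U = np.linspace(-5.0, 5.0, 100001)
rho_A = np.exp(-(U - U_A)**2 / (4*lam*kT)) / np.sqrt(4*np.pi*lam*kT)
dU = U[1] - U[0]
print(np.sum(rho_A) * dU)               # -> 1.000 (normalized)
print(np.sqrt(2*lam*kT))                # -> 0.158 eV standard deviation of state energies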
Homogeneous Electron Transfer

As mentioned above, Dirac's perturbation theory may be applied to any system that is undergoing a transition from one electronic state to another, in which the energies of the states are briefly equalized by fluctuations in the environment. If we assume that the relative probability of observing a fluctuation from energy i to energy j at temperature T is given by the Boltzmann factor exp(−ΔG_ij/k_BT), then

k_et = (2π/ħ) |H_DA|² √(1 / 4πλk_BT) exp(−ΔG* / k_BT)    (39)

where k_et is the rate constant for electron transfer, H_DA is the electronic coupling matrix element between the electron donor and acceptor species, k_B is the Boltzmann constant, λ is the sum of the reorganization energies of the donor and acceptor species, and ΔG* is the "Gibbs energy of activation" for the reaction. Incidentally, the fact that the reorganization energies of the donor and acceptor species are additive is a consequence of the statistical independence of ρ_A(U) and ρ_D(U). This insight follows directly from the old adage that "for independent Gaussian random variables, the variances add". The same insight also collapses Eq. (25) back to the Golden Rule, except that the separate density of states functions must be replaced by a joint density of states function that describes the coincidence of the donor and acceptor energies.
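The "variances add" remark can be made concrete: for the Gaussian densities of Eqs. (37) and (38), the overlap integral that appears in Eq. (25) is itself a Gaussian in (U_A − U_D) with reorganization energy λ_A + λ_D. A minimal numerical check follows; the parameter values (deliberately unequal λ_A and λ_D) are my own illustrative choices.

import numpy as np

# Overlap of the donor and acceptor Gaussians (Eqs. 37 and 38):
# integral of rho_A(U)*rho_D(U) dU equals a Gaussian in (U_A - U_D) built from lambda_A + lambda_D
kT = 0.025                               # eV
lam_A, lam_D = 0.4, 0.6                  # eV (assumed, deliberately unequal)
U_A, U_D = 0.3, -0.2                     # eV (assumed)

U = np.linspace(-6.0, 6.0, 200001)
g = lambda U0, lam: np.exp(-(U - U0)**2/(4*lam*kT)) / np.sqrt(4*np.pi*lam*kT)
overlap = np.sum(g(U_A, lam_A) * g(U_D, lam_D)) * (U[1] - U[0])

lam_sum = lam_A + lam_D                  # "the variances add"
joint = np.exp(-(U_A - U_D)**2/(4*lam_sum*kT)) / np.sqrt(4*np.pi*lam_sum*kT)
print(overlap, joint)                    # the two numbers agree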



Fig. 1. Gibbs energy diagram for homogeneous electron transfer between two non-interacting species in solution. At the moment of electron transfer, energy is conserved, so the reactants and the products have the same Gibbs energy at that point. The symmetry factor corresponds to the fractional charge of the fluctuation on the ionic atmosphere of the acceptor at the moment of electron transfer. After Fletcher [7].

Referring to Fig. (1) it is clear that ΔG* is the total Gibbs energy that must be transferred from the surroundings to the reactants in order to bring them to their mutual transition states. This is simply

ΔG* = (λ + ΔG⁰)² / 4λ    (40)

which implies that

k_et = (2π/ħ) |H_DA|² √(1 / 4πλk_BT) exp[−(λ + ΔG⁰)² / 4λk_BT]    (41)

We can also define a symmetry factor β such that

ΔG* = β²λ    (42)

and

β = dΔG*/dΔG⁰ = ½ (1 + ΔG⁰/λ)    (43)

Evidently β = 1/2 approximately if ΔG⁰ is sufficiently small (i.e. the electron transfer reaction is neither strongly exergonic nor strongly endergonic), and β = 1/2 exactly for a self-exchange reaction (ΔG⁰ = 0). From the theory of tunnelling through an electrostatic barrier, we may also write

H_DA = H_DA⁰ exp(−κx)    (44)

where κ is a constant proportional to the square root of the barrier height, and x is the distance of closest approach of the donor and acceptor.
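To make Eqs. (40), (42) and (43) concrete, here is a small numerical check; the values λ = 1.0 eV and ΔG⁰ = −0.2 eV are illustrative assumptions of mine, not values from the paper.

# Consistency check of Eqs. (40), (42) and (43)
lam = 1.0                                # reorganization energy, eV (assumed)
dG0 = -0.2                               # standard Gibbs energy change, eV (assumed)

dG_act = (lam + dG0)**2 / (4*lam)        # Eq. (40): 0.16 eV
beta = 0.5 * (1 + dG0/lam)               # Eq. (43): 0.4
print(dG_act, beta**2 * lam)             # Eq. (42): both print 0.16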
Heterogeneous Electron Transfer

In the case of electron transfer across a phase boundary (e.g. electron transfer from an electrode into a solution), the law of conservation of energy dictates that the energy of the transferring electron must be added into that of the acceptor species, such that the sum equals the energy of all the product species. At constant temperature and pressure the energy of the transferring electron is just its Gibbs energy. Let us denote by superscript bar the Gibbs energies of species in solution after the energy of the transferring electron has been added to them (see Fig. 2). We have

Ḡ_reactant = G_reactant + qE    (45)
           = G_reactant − eE    (46)

where e is the unit charge and E is the electrode potential of the injected electron. For the conversion of reactant to product, the overall change in Gibbs energy is

ΔḠ⁰ = G_product − Ḡ_reactant    (47)
    = G_product − (G_reactant − eE)    (48)
    = (G_product − G_reactant) + eE    (49)
    = ΔG⁰ + eE    (50)

In the "normal" region of electron transfer, for a metal electrode, it is generally assumed that the electron tunnels from an energy level near the Fermi energy, implying eE eEF . Thus, for a heterogeneous electron transfer process to an acceptor



species in solution, we can use the Golden Rule directly

k_et = (2π/ħ) |H_DA|² √(1 / 4πλk_BT) exp[−(λ + ΔG⁰ + eE_F)² / 4λk_BT]    (51)

where λ is the reorganization energy of the acceptor species in solution, and eE_F is the Fermi energy of the electrons inside the metal electrode. Or, converting to molar quantities

k_et = (2π/ħ) |H_DA|² N_A √(1 / 4πλ_m RT) exp[−(λ_m + ΔG⁰_m + FE_F)² / 4λ_m RT]    (52)

where k_et is the rate constant for electron transfer, ħ is the reduced Planck constant, H_DA is the electronic coupling matrix element between a single electron donor and a single electron acceptor, N_A is the Avogadro constant, λ_m is the reorganization energy per mole, ΔG⁰_m is the difference in molar Gibbs energy between the acceptor and the product, and (−FE_F) is the molar Gibbs energy of the electron that tunnels from the Fermi level of the metal electrode into the acceptor.

Equation (52) behaves exactly as we would expect. The more negative the Fermi potential E F inside the metal electrode (i.e. the more negative the electrode potential), the greater the rate constant for electron transfer from the electrode into the acceptor species in solution.

Fig. 2. Gibbs energy diagram for heterogeneous electron transfer from an electrode to an acceptor species in solution. The superscript bar indicates that the Gibbs energy of the injected electron has been added to that of the reactant. After Fletcher [7].



Some notational simplification is achieved by introducing the definition

η ≡ −(ΔG⁰_m/F + E_F)    (53)

where η is called the "overpotential". Although the negative sign in this equation is not recommended by IUPAC, it is nevertheless sanctioned by long usage, and we shall use it here. With this definition, increasing overpotential corresponds to increasing rate of reaction. In other words, with this definition, the overpotential is a measure of the "driving force for the reaction". The same inference may be drawn from the equation

ΔḠ⁰_m = −Fη    (54)

An immediate corollary is that the condition η = 0 corresponds to zero driving force (thermodynamic equilibrium) between the reactant, the product, and the electrode (ΔḠ⁰_m = 0).

By defining a molar Gibbs energy of activation,

ΔG*_m = (λ_m + ΔG⁰_m + FE_F)² / 4λ_m    (55)

      = (λ_m − Fη)² / 4λ_m    (56)

we can conveniently put Eq. (52) into the standard Arrhenius form

k_et = (2π/ħ) |H_DA|² N_A √(1 / 4πλ_m RT) exp(−ΔG*_m / RT)    (57)

We can further simplify the analysis by defining the partial derivative ∂ΔG*_m/∂(−Fη) at constant ΔG⁰_m as the symmetry factor β, so that

ΔG*_m = β²λ_m    (58)

where

β = ∂ΔG*_m/∂(−Fη) = ½ (1 − Fη/λ_m)    (59)

This latter equation highlights the remarkable fact that electron transfer reactions require less thermal activation energy (ΔG*_m) as the overpotential (η) is increased. Furthermore, the parameter β quantifies the relationship between these parameters. Expanding Eq. (56) yields

ΔG*_m = (λ_m² − 2λ_m Fη + F²η²) / 4λ_m    (60)

which rearranges into the form

ΔG*_m = λ_m/4 − [(2β + 1)/4] Fη    (61)
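The rearrangement from Eq. (56) to Eq. (61) uses Eq. (59) to eliminate one factor of Fη. A short symbolic check, using the sympy library (my own choice of tool), confirms the algebra:

import sympy as sp

# Verify that (lam_m - F*eta)^2 / (4*lam_m) equals lam_m/4 - (2*beta + 1)/4 * F*eta
# when beta = (1 - F*eta/lam_m)/2, i.e. Eq. (56) -> Eq. (61) via Eq. (59)
lam_m, F, eta = sp.symbols('lambda_m F eta', positive=True)
beta = sp.Rational(1, 2) * (1 - F*eta/lam_m)          # Eq. (59)
lhs = (lam_m - F*eta)**2 / (4*lam_m)                  # Eq. (56)
rhs = lam_m/4 - (2*beta + 1)/4 * F*eta                # Eq. (61)
print(sp.simplify(lhs - rhs))                         # -> 0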

Now substituting back into Eq. (57) yields

k_et = (2π/ħ) |H_DA|² N_A √(1 / 4πλ_m RT) exp(−λ_m / 4RT) exp[(2β + 1)Fη / 4RT]    (62)

     = k⁰ exp[(2β + 1)Fη / 4RT]    (63)

At thermal equilibrium an analogous equation applies to the back reaction, except that β is replaced by (1 − β). Thus for the overall current-voltage curve we obtain

I = I_0 {exp[(2β + 1)Fη / 4RT] − exp[−(3 − 2β)Fη / 4RT]}    (64)

where

β = ½ (1 − Fη/λ_m)    (65)

Eq. (64) is the current-voltage curve for a reversible, one-electron transfer reaction at thermal equilibrium. It differs from the "textbook" Butler-Volmer equation [9,10] namely

I = I_0 [exp(β_f Fη / RT) − exp(−β_b Fη / RT)]    (66)

because the latter was derived on the (incorrect) assumption of linear Gibbs energy curves. The Butler-Volmer equation is therefore in error. However, its outward form can be "rescued" by defining the following modified symmetry factors

β_f = (2β + 1) / 4    (67)

and

β_b = (3 − 2β) / 4    (68)

so that

β_f = ½ (1 − Fη/2λ_m)    (69)

and

β_b = ½ (1 + Fη/2λ_m)    (70)

Using these revised definitions we can continue to use the traditional form of the Butler-Volmer equation -- provided we don't forget that we have re-interpreted β_f and β_b in this new way!
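To see how much the potential-dependent symmetry factors matter, the sketch below evaluates Eq. (64) (with β from Eq. (65)) alongside the textbook Butler-Volmer curve of Eq. (66) with fixed β_f = β_b = ½. The exchange current I_0 = 1 (arbitrary units) and λ_m = 100 kJ mol⁻¹ are illustrative assumptions of mine, not values taken from the paper.

import numpy as np

F = 96485.0         # C/mol
R = 8.314           # J/(mol K)
T = 298.15          # K
lam_m = 1.0e5       # molar reorganization energy, J/mol (assumed)
I0 = 1.0            # exchange current, arbitrary units (assumed)

eta = np.linspace(-0.5, 0.5, 11)                        # overpotential, V
beta = 0.5 * (1 - F*eta/lam_m)                          # Eq. (65)
I_exact = I0*(np.exp((2*beta + 1)*F*eta/(4*R*T))        # Eq. (64)
              - np.exp(-(3 - 2*beta)*F*eta/(4*R*T)))
I_bv = I0*(np.exp(0.5*F*eta/(R*T)) - np.exp(-0.5*F*eta/(R*T)))   # Eq. (66), beta_f = beta_b = 1/2

for e, a, b in zip(eta, I_exact, I_bv):
    print(f"eta = {e:+.2f} V   Eq.(64): {a:11.3e}   Butler-Volmer: {b:11.3e}")
# The two curves agree near eta = 0 and diverge at high overpotential, where the quadratic
# Gibbs energy surfaces (Eq. 56) make the effective symmetry factor depend on potential.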
Tafel Slopes for Multi-Step Reactions

As shown above, the current-voltage curve for a reversible, one-electron transfer reaction at thermal equilibrium may be written in the form

I = FACk⁰ [exp(β_f Fη / RT) − exp(−β_b Fη / RT)]    (71)

which corresponds to the reaction

A + e⁻ ⇌ B    (72)



In what follows, we seek to derive the current-voltage curves corresponding to the reaction

A + ne⁻ ⇌ Z    (73)

In order to keep the equations manageable, we consider the forward and backward parts of the rate-determining step independently. This makes the rate-determining step appear irreversible in both directions. For the most part, we also restrict attention to reaction schemes containing uni-molecular steps (so there are no dimerization steps or higher-order steps). The general approach is due to Roger Parsons [11]. We begin by writing down all the electron transfer reaction steps separately:

A + e⁻ → B          [pre-step 1]
B + e⁻ → C          [pre-step 2]
  :                   :
Q + e⁻ → R          [pre-step np]
R + nq e⁻ → S       [rds]
S + e⁻ → T          [post-step 1]
T + e⁻ → U          [post-step 2]
  :                   :
Y + e⁻ → Z          [post-step nr]    (74)

Next, we adopt some simplifying notation. First, we define np to be the number of electrons transferred prior to the rate-determining step. Then we define nr to be the number of electrons transferred after the rate-determining step. In between, we define nq to be the number of electrons transferred during one elementary act of the rate-determining step. (This is a ploy to ensure that nq can take only the values zero or one, depending on whether the rate-determining step is a chemical reaction or an electron transfer. This will be convenient later.) Restricting attention to the above system of uni-molecular steps, the total number of electrons transferred is

n = np + nq + nr    (75)

We now make the following further assumptions. (i) The exchange current of the rate-determining step is at least one hundred times less than that of any other step, (ii) the rate-determining step of the forward reaction is also the rate-determining step of the backward reaction, (iii) no steps are concerted, (iv) there is no electrode blockage by adsorbed species, and (v) the reaction is in a steady state. Given these assumptions, the rate of the overall reaction is

I_total = I_0 {exp[(np + nq β_f) Fη/RT] − exp[−(nr + nq β_b) Fη/RT]}
        = I_0 [exp(α_f Fη/RT) − exp(−α_b Fη/RT)]    (76)

In the above expression α_f should properly be called The Transfer Coefficient of the Overall Forward Reaction, and correspondingly α_b should properly be called The Transfer Coefficient of the Overall Backward Reaction. But in the literature they are often simply called Transfer Coefficients. It may be observed that nr does not appear inside the first exponential in Eq. (76). This is because electrons that are transferred after the rate-determining step serve only to multiply the height of the current/overpotential relation and do not have any effect on the shape of the current/overpotential relation. For the same reason, np does not appear inside the second exponential in Eq. (76). Although Eq. (76) has the same outward form as the Butler-Volmer Equation (Eq. (66)), actually the transfer coefficients α_f and α_b are very different to the modified symmetry factors β_f and β_b, and should never be confused with them. Basically, α_f and α_b are composite terms describing the overall kinetics of multi-step many-electron reactions, whereas β_f and β_b are fundamental terms describing the rate-determining step of a single electron transfer reaction. Under the assumptions listed above, they are related by the equations

α_f = np + nq β_f    (77)

and

α_b = nr + nq β_b    (78)

A century of electrochemical research is condensed into these equations. And the key result is this: if the rate-determining step is a purely chemical step (i.e. does not involve electron transfer) then nq = 0 and the modified symmetry factors β_f and β_b disappear from the equations for α_f and α_b. Conversely, if the rate-determining step is an electrochemical step (i.e. does involve electron transfer), then nq = 1 and the modified symmetry factors β_f and β_b enter the equations for α_f and α_b. Also, in passing, we remark that α_f and α_b differ from β_f and β_b in another important respect. The sum of β_f and β_b is

β_f + β_b = 1    (79)

whereas the sum of α_f and α_b is

α_f + α_b = n    (80)

That is, the sum of the transfer coefficients of the forward and backward reactions is not necessarily unity. This stands in marked contrast to the classic case of a single-step one-electron transfer reaction, for which the sum is always unity. Furthermore, in systems where the rate-determining steps of the forward and backward reactions are not the same --a common occurrence-- the sums of α_f and α_b have no particular diagnostic value. Regarding experimental measurements, the analysis of Tafel slopes [12] is generally performed by evaluating the expression

α_f or α_b = (2.303RT/F) (∂ log I / ∂η)    for I > I_0    (81)

Such an analysis should be treated with great caution, however, since both precision and accuracy require the collection of data over more than two orders of magnitude of current, with no ohmic distortion, no diffusion control, and no contributions from background currents. The kinetics should also be in a steady state. Accordingly, no experimental "Tafel slope" should be believed that has been derived from less than two orders of magnitude of current.
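In practice Eq. (81) is evaluated by fitting a straight line to η versus log₁₀I over the widest reliable current range. The sketch below does this for synthetic data generated from the forward branch of Eq. (76); the mechanism, noise level and parameter values are invented for illustration only.

import numpy as np

# Synthetic Tafel analysis: recover alpha_f from eta vs log10(I) data (Eq. 81)
rng = np.random.default_rng(0)
F, R, T = 96485.0, 8.314, 298.15
I0 = 1e-6                                    # exchange current, A (assumed)
alpha_f_true = 1.5                           # e.g. an EE mechanism with the second step rds, gamma ~ 0
eta = np.linspace(0.10, 0.30, 25)            # overpotentials well above RT/F, V (~5 decades of current)
I = I0 * np.exp(alpha_f_true * F * eta / (R * T))
I *= 1 + 0.02 * rng.standard_normal(I.size)  # 2 % measurement noise

slope, intercept = np.polyfit(np.log10(I), eta, 1)      # eta = b*log10(I) + const
print(f"Tafel slope b = {1000*slope:.1f} mV/decade")    # ~39 mV/decade
print(f"alpha_f = {2.303*R*T/(F*slope):.2f}")           # ~1.5, recovering the assumed value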

The theoretical analysis of multi-step reactions is also difficult. On one hand, the number of possible mechanisms increases rapidly with the number of electrons transferred, which makes the algebra complex. On the other hand, the assumption that the exchange current of the rate-determining step is one hundred times less than that of all other steps is not necessarily true, and hence there is always a danger of oversimplification. To steer a course between the Scylla of complexity and the Charybdis of over-simplification we here restrict our attention to quasi-equilibrated reduction reactions for which the number of mechanistic options is small. To simplify our analysis further we write β_f in the form

β_f = ½ (1 − Fη/2λ_m) ≡ ½ (1 − γ)    (82)

We also write 2.303RT/F ≈ 60 mV at 25 °C. (Actually the precise value is 59.2 mV.) In what follows the rate-determining step is indicated by the abbreviation "rds". Steps that aren't rate-determining are labelled "fast" (though of course in the steady state all steps proceed at the same rate). As a shorthand method of uniquely identifying



component steps of reaction schemes, we also adopt the following notation: E indicates an electrochemical step, C indicates a chemical step, D indicates a dimerization step, and a circumflex accent (^) indicates a rate-determining step.
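As a quick check of the numerical constants used in the examples below, 2.303RT/F at 25 °C and the generic Tafel slope 2.303RT/(α_f F) can be evaluated in a couple of lines; the α_f value is just an example of mine.

# 2.303*R*T/F at 25 degrees C, and the generic Tafel slope of Eq. (81)
R, T, F = 8.314, 298.15, 96485.0
print(1000 * 2.303*R*T/F)               # -> 59.2 mV, as quoted in the text
alpha_f = 0.5                           # e.g. a rate-determining first electron transfer with gamma ~ 0
print(1000 * 2.303*R*T/(alpha_f*F))     # -> 118 mV/decade (the "120 mV" slope of Example 1 uses the rounded 60 mV)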
Example 1 (Ê)

O + e⁻ → R        rds

In this case np = 0, nq = 1, nr = 0, so that α_f = np + nq β_f ≈ ½ (1 − γ) and

∂η/∂log I = 2.303RT/(α_f F) ≈ 120/(1 − γ) mV decade⁻¹    (83)

This is the classical result for a single step one-electron transfer process. Note that fast chemical equilibria before or after the rate-determining step have no effect on the Tafel slope, as the next two examples confirm.
Example 2 (CÊ)

O → I         (rearranges)    fast
I + e⁻ → R                    rds

In this case np = 0, nq = 1, nr = 0, so that α_f = np + nq β_f ≈ ½ (1 − γ) and

∂η/∂log I = 2.303RT/(α_f F) ≈ 120/(1 − γ) mV decade⁻¹    (84)

Example 3 (ÊC)

O + e⁻ → I                    rds
I → R         (rearranges)    fast

In this case np = 0, nq = 1, nr = 0, so that α_f = np + nq β_f ≈ ½ (1 − γ) and

∂η/∂log I = 2.303RT/(α_f F) ≈ 120/(1 − γ) mV decade⁻¹    (85)

Example 4 (EĈ)

O + e⁻ → I                    fast
I → R         (rearranges)    rds

In this case np = 1, nq = 0, nr = 0, so that α_f = np + nq β_f = 1 and

∂η/∂log I = 2.303RT/(α_f F) = 60 mV decade⁻¹, independent of β_f.    (86)

Example 5 (ĈE)

O → I         (rearranges)    rds
I + e⁻ → R                    fast

In this case np = 0, nq = 0, nr = 1, so that α_f = np + nq β_f = 0 and

∂η/∂log I = 2.303RT/(α_f F) → ∞ mV decade⁻¹, independent of β_f.    (87)

Note: the current is independent of potential, and is known as a kinetic current.
Example 6 (ÊE)

O + e⁻ → I        rds
I + e⁻ → R        fast

In this case np = 0, nq = 1, nr = 1, so that α_f = np + nq β_f ≈ ½ (1 − γ) and

∂η/∂log I = 2.303RT/(α_f F) ≈ 120/(1 − γ) mV decade⁻¹    (88)

Example 7 (EÊ)

O + e⁻ → I        fast
I + e⁻ → R        rds

In this case np = 1, nq = 1, nr = 0, so that α_f = np + nq β_f ≈ 1 + ½ (1 − γ) and

∂η/∂log I = 2.303RT/(α_f F) ≈ 40/(1 − γ/3) mV decade⁻¹    (89)

Example 8 (EEĈ)

O + e⁻ → I                     fast
I + e⁻ → I′                    fast
I′ → R         (rearranges)    rds

In this case np = 2, nq = 0, nr = 0, so that α_f = np + nq β_f = 2 and

∂η/∂log I = 2.303RT/(α_f F) = 30 mV decade⁻¹, independent of β_f.    (90)

Example 9 (EĈE)

O + e⁻ → I                     fast
I → I′         (rearranges)    rds
I′ + e⁻ → R                    fast

In this case np = 1, nq = 0, nr = 1, so that α_f = np + nq β_f = 1 and

∂η/∂log I = 2.303RT/(α_f F) = 60 mV decade⁻¹, independent of β_f.    (91)

Note: 60 mV decade⁻¹ Tafel slopes are very common for the reduction reactions of organic molecules containing double bonds, because as soon as the first electron is "on board" there are many opportunities for structural rearrangement compared with inorganic molecules. This rearrangement is usually rate determining.
Example 10 (ECÊ)

O + e⁻ → I                     fast
I → I′         (rearranges)    fast
I′ + e⁻ → R                    rds

In this case np = 1, nq = 1, nr = 0, so that α_f = np + nq β_f ≈ 1 + ½ (1 − γ) and

∂η/∂log I = 2.303RT/(α_f F) ≈ 40/(1 − γ/3) mV decade⁻¹    (92)

Example 11 (EEEĈ)

O + e⁻ → I                      fast
I + e⁻ → I′                     fast
I′ + e⁻ → I″                    fast
I″ → R          (rearranges)    rds

In this case np = 3, nq = 0, nr = 0, so that α_f = np + nq β_f = 3 and

∂η/∂log I = 2.303RT/(α_f F) = 20 mV decade⁻¹, independent of β_f.    (93)

Example 12 (EEÊ)

O + e⁻ → I         fast
I + e⁻ → I′        fast
I′ + e⁻ → R        rds

In this case np = 2, nq = 1, nr = 0, so that α_f = np + nq β_f ≈ 2 + ½ (1 − γ) and

∂η/∂log I = 2.303RT/(α_f F) ≈ 24/(1 − γ/5) mV decade⁻¹    (94)



Example 13 (CÊD)

H⁺ → (H⁺)ads                   fast
(H⁺)ads + e⁻ → (H·)ads         rds
2(H·)ads → H₂                  fast

In this case, np = 0, nq = 1, nr = 0, but the presence of the follow-up dimerization step means that the total number of electrons per molecule of product is n = 2(np + nq) + nr = 2. However, the dimerization step has no effect on the rate of the reaction, so that α_f = np + nq β_f ≈ ½ (1 − γ) and

∂η/∂log I = 2.303RT/(α_f F) ≈ 120/(1 − γ) mV decade⁻¹    (95)

Notes: (i) This is a candidate model for hydrogen evolution on mercury. (ii) The formation of (H·)ads is slow and the destruction of (H·)ads is fast. Hence the electrode surface has a low coverage of adsorbed hydrogen radicals. (iii) For simplicity we have written the hydrogen ion H⁺ instead of the hydronium ion H₃O⁺. (iv) In the last stage of the reaction we have assumed that (H·)ads is mobile on the electrode surface, so the mutual encounter rate of (H·)ads species is fast. (v) At low rates of reaction the H₂ produced is present in solution as H₂(aq). At high rates of reaction the H₂ nucleates as bubbles and evolves as a gas. (vi) This mechanism is not one of the textbook mechanisms. The closest textbook mechanism is the "Volmer mechanism", which assumes a concerted electron transfer and proton transfer:

H⁺ + e⁻ → (H·)ads    (96)

Recall that two reactions are said to be concerted if the overall rate of reaction through their merged transition state is faster than the rate through their separate transition states. Because the Volmer mechanism posits simultaneous electron and



nuclear motions, it violates the Franck-Condon principle. However, this is not to say that it doesn't occur in reality, because H⁺ has a low rest mass compared with all other chemical species.
Example 14 (CED̂)

H⁺ → (H⁺)ads                   fast
(H⁺)ads + e⁻ → (H·)ads         fast
2(H·)ads → H₂                  rds

In this case np = 1, nq = 0, nr = 0, but the presence of the rate-determining dimerization step means that the total number of electrons per molecule of product is n = 2(np) + nq + nr = 2. The overall rate of reaction now depends on the square of the concentration of (H·)ads, so that α_f = 2(np) + nq β_f = 2 and

∂η/∂log I = 2.303RT/(α_f F) = 30 mV decade⁻¹, independent of β_f.    (97)

Notes: (i) This is a candidate model for hydrogen evolution on palladium hydride. (ii) This mechanism is known in the literature as "The Tafel Mechanism". (iii) A low coverage of the electrode is assumed again. However, on this occasion, such an assumption possibly conflicts with the fact that the formation of (H·)ads may be fast and the destruction of (H·)ads may be slow. If that occurs, a more complex reaction scheme has to be considered to take into account the coverage by intermediates.

(iv) The hydrogen evolution reaction exemplifies the Metal Electrode Material Effect. This effect occurs when an electrode surface stabilizes an intermediate that is unstable in solution, and thus enhances the overall rate (i.e. decreases the overpotential). In the present case, the palladium surface strongly stabilizes H· and so its hydrogen overpotential is very low. By contrast, the mercury surface only weakly stabilizes H· and so its hydrogen overpotential is very high. [The instability of H·(aq) is evident from the standard potential of its formation from H⁺, about −2.09 V vs SHE, so free H·(aq) never appears at "normal" potentials between 0 and −2.0 V vs SHE.] (v) An alternative formulation of the metal electrode material effect is the following: If the same overall reaction occurs faster at one electrode material than another, then



the faster reaction necessarily involves an adsorbed intermediate. This is, in fact, a very clever way of "observing" short-lived intermediates without using fancy apparatus! However, to be certain that a reaction genuinely involves an adsorbed intermediate, the overpotential of the faster case should be at least kT/e (25.7 mV) less than that of the slower case, to ensure that the difference is not due to minor differences in the density-of-states at the Fermi energy of the electrodes.
(vi) At low rates of reaction the H2 produced is present in solution as H2(aq).
Summary

Reaction scheme    Tafel slope b (mV decade⁻¹)
CÊ                 120/(1 − γ)
CÊD                120/(1 − γ)
Ê                  120/(1 − γ)
ÊE                 120/(1 − γ)
ÊEE                120/(1 − γ)
ÊC                 120/(1 − γ)
ÊCE                120/(1 − γ)
ĈE                 ∞
ĈED                ∞
EĈ                 60 exactly
EĈE                60 exactly
EÊ                 40/(1 − γ/3)
EÊE                40/(1 − γ/3)
ECÊ                40/(1 − γ/3)
EEĈ                30 exactly
CED̂                30 exactly
EEÊ                24/(1 − γ/5)
EEEĈ               20 exactly

Table 1. Tafel slopes for multistep electrochemical reactions. Notation: E indicates an electrochemical step, C indicates a chemical step, D indicates a dimerization step, and a circumflex accent (^) indicates a rate-determining step. The word "exactly" is intended to signify "a result independent of γ".
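Every entry in Table 1 follows mechanically from α_f = np + nq β_f (Eqs. 77 and 82), with the single exception of the rate-determining dimerization of Example 14, where the pre-steps count double. The short sketch below regenerates the table under those rules; the (np, nq) assignments are read off the examples above, a trailing ASCII '^' stands in for the circumflex on the rate-determining step, and the use of the sympy library is my own choice, not something from the paper.

import sympy as sp

# Regenerate Table 1.  alpha_f = np + nq*beta_f, except that a rate-determining dimerization
# doubles the pre-steps (Example 14).  beta_f = (1 - gamma)/2 as in Eq. (82), and the slope is
# b = 2.303*R*T/(alpha_f*F), written here with the rounded value 2.303*R*T/F ~ 60 mV.
gamma = sp.symbols('gamma')
beta_f = (1 - gamma) / 2

# scheme: (np, nq, pre-steps doubled by a rds dimerization?)
schemes = {
    'CE^': (0, 1, False), 'CE^D': (0, 1, False), 'E^': (0, 1, False), 'E^E': (0, 1, False),
    'E^EE': (0, 1, False), 'E^C': (0, 1, False), 'E^CE': (0, 1, False),
    'C^E': (0, 0, False), 'C^ED': (0, 0, False),
    'EC^': (1, 0, False), 'EC^E': (1, 0, False),
    'EE^': (1, 1, False), 'EE^E': (1, 1, False), 'ECE^': (1, 1, False),
    'EEC^': (2, 0, False), 'CED^': (1, 0, True),
    'EEE^': (2, 1, False), 'EEEC^': (3, 0, False),
}
for name, (n_p, n_q, doubled) in schemes.items():
    alpha_f = (2*n_p if doubled else n_p) + n_q*beta_f
    slope = sp.oo if alpha_f == 0 else sp.simplify(sp.Integer(120) / (2*alpha_f))
    print(f"{name:7s} {slope}")
# Note: sympy may print algebraically equivalent forms, e.g. 120/(3 - gamma)
# instead of 40/(1 - gamma/3) for the EE^ entry.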

Conclusions

Tafel slopes for multistep electrochemical reactions have been derived from first principles (Table 1). Whilst no claim is made that individual results are original



(indeed most of them are known), their derivation en masse has allowed us to identify the assumptions that they all have in common. Thus the four assumptions of standard electrochemical theory that emerge are: (1) there is weak orbital overlap between reactant species and electrodes, (2) the ambient solution never departs from thermodynamic equilibrium, (3) the fluctuations that trigger electron transfer are drawn from a Gaussian distribution, and (4) there is quasi-equilibrium of all reaction steps other than the rate-determining step. Finally, we reiterate that the Butler-Volmer equation fails at high overpotentials. The rigorous replacement is Eq. (64), although traditionalists may prefer to retain the old formula by applying the corrections given by Eqs. (67) and (68).
References

[1] Heisenberg W (1927) Z Physik 43:172
[2] Robertson HP (1929) Phys Rev 34:163
[3] Schrödinger E (1926) Ann Physik 79:734
[4] Dirac PAM (1930) The principles of quantum mechanics. Clarendon, Oxford
[5] Dirac PAM (1927) Proc Roy Soc (Lond) 113:621
[6] Orear J, Rosenfeld AH, Schluter RA (1950) Nuclear physics. A course given by Enrico Fermi at the University of Chicago. U Chicago Press
[7] Fletcher S (2007) J Solid State Electrochem 11:965
[8] Boltzmann L (1909) Wissenschaftliche Abhandlungen. Barth, Leipzig
[9] Butler JAV (1924) Trans Faraday Soc 19:729
[10] Erdey-Grúz T, Volmer M (1930) Z Physik Chem 150:203
[11] Parsons R (1951) Trans Faraday Soc 47:1332
[12] Tafel J (1905) Z Physik Chem 50:641
Figure Captions

Fig. 1. Gibbs energy diagram for homogeneous electron transfer between two species in solution. At the moment of electron transfer, energy is conserved, so the reactants and the products have the same Gibbs energy at that point. The symmetry factor corresponds to the fractional charge of the fluctuation on the ionic atmosphere of the acceptor at the moment of electron transfer. After Fletcher [7]. Fig. 2. Gibbs energy diagram for heterogeneous electron transfer from an electrode to an acceptor species in solution. The superscript bar indicates that the Gibbs energy of the injected electron has been added to that of the reactant. After Fletcher [7].



Fig. 1



Fig. 2
