This item was submitted to Loughborough's Institutional Repository (https://dspace.lboro.ac.uk/) by the author and is made available under the following Creative Commons Licence conditions.

For the full text of this licence, please go to: http://creativecommons.org/licenses/by-nc-nd/2.5/


Oldham Festschrift Special Issue

Ms. No. JSEL-D-08-00234. S. Fletcher, Journal of Solid State Electrochemistry, 13, 537-549 (2009). Submitted 15 July 2008; accepted 06 Aug 2008; published online 03 Oct 2008.

Published online first at http://dx.doi.org/10.1007/s10008-008-0670-8. The institutional repository is at http://hdl.handle.net/2134/3716. The original publication is available at http://www.springerlink.com/content/101568/

Tafel Slopes from First Principles

Stephen Fletcher

Department of Chemistry, Loughborough University, Ashby Road, Loughborough, Leicestershire LE11 3TU, UK Tel. 01509 22 2561 Fax 01509 22 3925 Email Stephen.Fletcher@Lboro.ac.uk

Keywords

Schrödinger Equation, Golden Rule, Butler-Volmer Equation, Tafel Slopes, Electron Transfer.



Abstract

Tafel slopes for multistep electrochemical reactions are derived from first principles. The derivation takes place in two stages. First, Dirac's perturbation theory is used to solve the Schrödinger equation. Second, current-voltage curves are obtained by integrating the single-state results over the full density of states in electrolyte solutions. Thermal equilibrium is assumed throughout. Somewhat surprisingly, it is found that the symmetry factor that appears in the Butler-Volmer equation is different from the symmetry factor that appears in electron transfer theory, and a conversion formula is given. Finally, the Tafel slopes are compiled in a convenient look-up table.

Dedication

This article is dedicated to Professor Keith B. Oldham on the occasion of his eightieth birthday.

Introduction

To help celebrate the eightieth birthday of my long-time friend and colleague Keith B. Oldham, I thought it might be fun to present him with a table of Tafel slopes derived from first principles (i.e. from the Schrödinger equation). A total proof of this kind has been technically feasible for a number of years but --so far as I know-- it has never been attempted before. This seems an auspicious moment to undertake this task.

The Wavefunction of an Electron

"The amount of theoretical ground one has to cover before being able to solve problems of real practical value is rather large..." P.A.M. Dirac, in "The Principles of Quantum Mechanics", Clarendon Press, Oxford, 1930.

Electrochemists want to understand how electrons interact with matter. But before they can even begin to construct a model, they must first specify the positions of the electrons. This is not as easy as it sounds, however, because the positions of electrons are not determined by the laws of Newtonian mechanics. They are determined by the probabilistic laws of quantum mechanics. In particular, the location of any given electron is governed by its wavefunction Ψ. This is a complex-valued function that describes the probability amplitude of finding the electron at any given point in space and time. Now, it is a well-known postulate of quantum mechanics that the maximum amount of information about an electron is contained in its wavefunction. If we accept this postulate as true (and we currently have no alternative) then we are forced to conclude that the wavefunction is the best available parameter for characterizing the behaviour of an electron in space-time.

It is natural to enquire how well wavefunctions do characterize electron behaviour. In general, the answer is "very well indeed". For example, wavefunctions permit the calculation of the most probable values of all the known properties of electrons, or systems of electrons, to very high accuracy. One problem remains, however. Due to the probabilistic character of wavefunctions, they fail to describe the individual behaviour of any system at very short times. In such cases, the best they can do is describe the average behaviour of a large number of systems having the same preparation. Despite this limitation, the analysis of wavefunctions nevertheless provides measures of the probabilities of occurrence of various states, and the rates of change of those probabilities. Here, following Dirac, we are happy to interpret the latter as reaction rate constants.

The Uncertainty Principle

This principle was first enunciated by Werner Heisenberg in 1927 [1]. The principle asserts that one cannot simultaneously measure the values of a pair of conjugate quantum state properties to better than a certain limit of accuracy: there is a minimum for the product of the uncertainties. Key features of pairs of conjugate quantum state properties are that they are uncorrelated, and, when multiplied together, have dimensions of energy × time. Examples are (i) momentum-and-location, and (ii) energy-and-lifetime. Thus

\[ \Delta p \,\Delta x \;\ge\; \hbar/2 \tag{1} \]

\[ \Delta U \,\Delta t \;\ge\; \hbar/2 \tag{2} \]

Here p is momentum of a particle (in one dimension), x is location of a particle (in one dimension), U is energy of a quantum state, t is lifetime of a quantum state, and h is the Reduced Planck Constant,

h=

h = 0.6582 (eV â fs) 2

(3)

The formal and general proof of the above inequalities was first given by Howard Percy Robertson in 1929 [2]. He also showed that the Uncertainty Principle was a deduction from quantum mechanics, not an independent hypothesis.

As a result of the "blurring" effect of the uncertainty principle, quantum mechanics is unable to predict the precise behaviour of a single molecule at short times. But it can still predict the average behaviour of a large number of molecules at short times, and it can also predict the time-averaged behaviour of a single molecule over long times. For an electron, the energy measured over a finite time interval Δt has an uncertainty

\[ \Delta U \;\ge\; \frac{\hbar}{2\,\Delta t} \tag{4} \]

and therefore, to decrease the energy uncertainty in a single electron transfer step to practical insignificance (< 1 meV, say, which is equivalent to about 1.602 × 10⁻²² J/electron), it is necessary to observe the electron for Δt > 330 fs.
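As a quick arithmetic check on the 330 fs figure, here is a minimal Python sketch of Eq. (4); the 1 meV target is the value quoted above:

```python
# Minimal numerical check of Eq. (4): Delta_t > hbar / (2 * Delta_U).
HBAR_EV_FS = 0.6582       # reduced Planck constant, eV*fs (Eq. 3)
EV_TO_J = 1.602e-19       # conversion factor, J per eV

delta_U_eV = 1e-3         # target energy uncertainty: 1 meV
delta_t_fs = HBAR_EV_FS / (2 * delta_U_eV)

print(f"1 meV = {delta_U_eV * EV_TO_J:.3e} J per electron")  # ~1.602e-22 J
print(f"required observation time > {delta_t_fs:.0f} fs")    # ~329 fs, i.e. ~330 fs
```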

The Quantum Mechanics of Electron Transfer

As shown by Erwin Schrödinger [3], the wavefunction of a (non-relativistic) electron may be derived by solving the time-dependent equation

\[ i\hbar \,\frac{\partial \Psi}{\partial t} \;=\; H\Psi \tag{5} \]

Here, H is a linear operator known as the Hamiltonian, and ħ is the Reduced Planck Constant (= h/2π). The Hamiltonian is a differential operator of total energy. It combines the kinetic energy and the electric potential energy of the electron into one composite term:

\[ i\hbar \,\frac{\partial \Psi}{\partial t} \;=\; -\frac{\hbar^2}{2m}\,\nabla^2\Psi \;-\; eV\Psi \tag{6} \]

where m is the electron mass, -e is the electron charge, and V is the electric potential of the electric field. Note that the electric potential at a particular point in space (x, y, z), created by a system of charges, is simply equal to the change in potential energy that would occur if a test charge of +1 were introduced at that point. So -eV is the potential energy of the electron in the electric field. The Laplacian ∇², which also appears in the Schrödinger equation, is the square of the vector operator ∇ ("del"), defined in Cartesian co-ordinates by

\[ \nabla(x,y,z) \;=\; \hat{x}\,\frac{\partial}{\partial x} \;+\; \hat{y}\,\frac{\partial}{\partial y} \;+\; \hat{z}\,\frac{\partial}{\partial z} \tag{7} \]

Every solution of the Schrödinger equation represents a possible state of the system. There is, however, always some uncertainty associated with the manifestation of each state. Due to the uncertainty, the square of the modulus of the wavefunction, |Ψ|², may be interpreted in two ways. Firstly, and most abstractly, as the probability that an electron might be found at a given point. Secondly, and more concretely, as the electric charge density at a given point (averaged over a large number of identically prepared systems for a short time, or averaged over one system for a long time).



Transition Probabilities

Almost all kinetic experiments in physics and chemistry lead to statements about the relative frequencies of events, expressed either as deterministic rates or as statistical transition probabilities. In the limit of large systems these formulations are, of course, equivalent. By definition, a transition probability is just the probability that one quantum state will convert into another quantum state in a single step.

"The theory of transition probabilities was developed independently by Dirac with great success. It can be said that the whole of atomic and nuclear physics works with this system of concepts, particularly in the very elegant form given to them by Dirac." Max Born, "The Statistical Interpretation of Quantum Mechanics", Nobel Lecture, 11th December 1954.
Time Dependent Perturbation Theory

It is an unfortunate fact of quantum mechanics that exact mathematical solutions of the time-dependent Schrödinger equation are possible only at the very lowest levels of system complexity. Even at modest levels of complexity, mathematical solutions in terms of the commonplace functions of applied physics are impossible. The recognition of this fact caused great consternation in the early days of quantum mechanics. To overcome the difficulty, Paul Dirac developed an extension of quantum mechanics called "perturbation theory", which yields good approximate solutions to many practical problems [4]. The only limitation on Dirac's method is that the coupling (orbital overlap) between states should be weak.

The key step in perturbation theory is to split the total Hamiltonian into two parts, one of which is simple and the other of which is small. The simple part consists of the Hamiltonian of the unperturbed fraction of the system, which can be solved exactly, while the small part consists of the Hamiltonian of the perturbed fraction of the system, which, though complex, can often be solved as a power series. If the latter converges, solutions of various problems can be obtained to any desired accuracy simply by evaluating more and more terms of the power series. Although the solutions produced by Dirac's method are not exact, they can nevertheless be extremely accurate.

In the case of electron transfer, we may imagine a transition between two well-defined electronic states (an occupied state ψ_D inside an electron donor D, and an unoccupied state ψ_A inside an electron acceptor A), whose mutual interaction is weak. Dirac showed that, provided the interaction between the states is weak, the transition probability P_DA for an electron to transfer from the donor state to the acceptor state increases linearly with time. Let's see how Dirac arrived at this conclusion.



Electron Transfer From One Single State to Another Single State

If classical physics prevailed, the transfer of an electron from one single state to another single state would be governed by the conservation of energy, and would occur only when both states had exactly the same energy. But in the quantum world, the uncertainty principle (in its time-energy form) briefly intervenes, and allows electron transfer between states even when their energies are mismatched by a small amount ΔU = ħ/2Δt (although energy conservation still applies on average). As a result of this complication, the transition probability of electrons between two states exhibits a complex behaviour. Roughly speaking, the probability of electron transfer between two precise energies inside two specified states increases as t², while the energy uncertainty decreases as t⁻¹. The net result is that the overall state-to-state transition probability increases proportional to t.

To make these ideas precise, consider a perturbation which is "switched on" at time t = 0, and which remains constant thereafter. In electrochemistry this corresponds to the arrival of the system at the transition state. The time-dependent Schrödinger equation may now be written

\[ i\hbar \,\frac{\partial \Psi}{\partial t} \;=\; (H_0 + H_1)\Psi \tag{8} \]

where Ψ(x, t) is the electron wavefunction, H₀ is the unperturbed Hamiltonian operator, and H₁ is the perturbed Hamiltonian operator:

\[ H_1(t) \;=\; 0 \quad \text{for } t < 0 \tag{9} \]

\[ H_1(t) \;=\; H_1 \quad \text{for } t \ge 0 \tag{10} \]

This is a step function, with H₁ being a constant independent of time at t ≥ 0. Solving Eq. (8), one finds that the probability of electron transfer between two precise energies U_D and U_A is

\[ P_{DA}(U,t) \;\approx\; \frac{2\,|M_{DA}|^2}{(U_A - U_D)^2}\left[1 - \cos\!\left(\frac{(U_A - U_D)\,t}{\hbar}\right)\right] \tag{11} \]

where the modulus symbol denotes the (always positive) magnitude of any complex number. This result is valid provided the "matrix element" M_DA is small. The matrix element M_DA is defined as

\[ M_{DA} \;=\; \int \psi_D^{\,*}\, V\, \psi_A \; dv \tag{12} \]

where ψ_D and ψ_A are the wavefunctions of the donor and acceptor states, V is their interaction energy, and the integral is taken over the volume v of all space. M_DA is, therefore, a function of energy, through the overlap of the wavefunctions ψ_D and ψ_A, and accordingly has units of energy. In an alternative representation, we exploit the identity

\[ 1 - \cos x \;=\; 2\sin^2(x/2) \tag{13} \]

so that

\[ P_{DA}(U,t) \;\approx\; \frac{4\,|M_{DA}|^2}{(U_A - U_D)^2}\,\sin^2\!\left(\frac{(U_A - U_D)\,t}{2\hbar}\right) \tag{14} \]

If we now recall the cardinal sine function

\[ \operatorname{sinc}(x) \;=\; \frac{\sin x}{x} \tag{15} \]

then we obtain

\[ P_{DA}(U,t) \;\approx\; \frac{|M_{DA}|^2\, t^2}{\hbar^2}\,\operatorname{sinc}^2\!\left(\frac{(U_A - U_D)\,t}{2\hbar}\right) \tag{16} \]

To derive the asymptotic (long-time) state-to-state transition probability from this energy-to-energy probability, we must integrate over the entire band of energies allowed by the uncertainty principle. This yields

\[ P_{DA}(t) \;=\; \frac{2\pi t}{\hbar}\,|M_{DA}|^2\,\delta(U_A - U_D) \tag{17} \]

This result is wonderfully compact, but unfortunately it is not very useful to electrochemists, because it fails to describe electron transfer into multitudes of acceptor states at electrode surfaces, supplied by the 10⁸-10¹⁴ reactant molecules per cm² that are typically found there. These states have energies distributed over several hundred meV, and all of them interact simultaneously with all the electrons in the electrode. They also fluctuate randomly in electrostatic potential due to interactions with the thermally agitated solvent and supporting electrolyte (dissolved salt ions). Accordingly, Eq. (17) must be modified to deal with this more complex case.
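The collapse of Eq. (16) into the linear-in-time law of Eq. (17) can be checked numerically: integrating the sinc² factor of Eq. (16) over energy reproduces 2πt|M_DA|²/ħ. A minimal sketch, in which the coupling value is an arbitrary assumption:

```python
import numpy as np

HBAR = 0.6582        # reduced Planck constant, eV*fs
M_DA = 1e-3          # illustrative coupling matrix element, eV (assumed)

def p_energy_to_energy(U, U_D, t):
    """Eq. (16): energy-to-energy transition probability at time t (fs)."""
    x = (U - U_D) * t / (2 * HBAR)
    return (M_DA**2 * t**2 / HBAR**2) * np.sinc(x / np.pi)**2  # np.sinc(y) = sin(pi*y)/(pi*y)

U = np.linspace(-0.5, 0.5, 200001)           # energy grid, eV
dU = U[1] - U[0]
for t in (10.0, 100.0, 1000.0):              # elapsed time, fs
    integral = np.sum(p_energy_to_energy(U, 0.0, t)) * dU
    golden = 2 * np.pi * t * M_DA**2 / HBAR  # Eq. (17) with the delta integrated out
    print(f"t = {t:6.0f} fs:  integral = {integral:.3e}   2*pi*t*|M|^2/hbar = {golden:.3e}")
```

The agreement improves as t grows, because the sinc² peak narrows toward the delta-function limit exploited below.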
Electron Transfer into a Multitude of Acceptor States

To deal with this more complex case it is necessary to define a probability density of acceptor state energies, ρ_A(U). Accordingly, we define ρ_A(U) as the number of states per unit of energy, and note that it has units of joule⁻¹. If we further assume that there is such a high density of states that they can be treated as a continuum, then the transition probability between the single donor state D and the multitude of acceptor states A becomes

\[ P_{DA}(t) \;\approx\; \int_{-\infty}^{+\infty} \frac{|M_{DA}|^2\, t^2}{\hbar^2}\,\operatorname{sinc}^2\!\left(\frac{(U - U_D)\,t}{2\hbar}\right)\rho_A(U)\; dU \tag{18} \]

Although this equation appears impossible to solve, Dirac, in a tour de force [5], showed that an asymptotic result could be obtained by exploiting the properties of a "delta function" such that

\[ \int_{-\infty}^{+\infty} \delta(x - x_0)\, F(x)\; dx \;=\; F(x_0) \tag{19} \]

and

\[ \delta(ax) \;=\; \frac{1}{|a|}\,\delta(x) \tag{20} \]

By noting the identity

\[ \lim_{t \to \infty} \operatorname{sinc}^2\!\left(\frac{(U - U_D)\,t}{2\hbar}\right) \;=\; \frac{2\pi\hbar}{t}\,\delta(U - U_D) \tag{21} \]

and then extracting the limit t → ∞, Dirac found that (!)

\[ \lim_{t \to \infty} P_{DA}(t) \;\approx\; \frac{2\pi t}{\hbar}\,|M_{DA}|^2\,\rho_A(U_D) \tag{22} \]

where U_D, the single energy of the donor state, is a constant. As we gaze in amazement at Eq. (22), we remark only that ρ_A(U_D) is not the full density of states function ρ_A(U), which it is sometimes mistakenly stated to be in the literature. It is, in fact, the particular value of the density of states function at the energy U_D.

Upon superficial observation, it may appear that the above formula for P_DA(t) is applicable only in the limit of infinite time. But actually it is valid after a very brief interval of time

\[ t \;>\; \frac{\hbar}{2\,\Delta U} \tag{23} \]

This time is sometimes called the Heisenberg Time. At later times, Dirac's theory of the transition probability can be applied with great accuracy. Finally, in the ultimate simplification of electron transfer theory, it is possible to derive the rate constant for electron transfer k_et by differentiating the transition probability. This leads to Dirac's final result

\[ k_{et} \;=\; \frac{2\pi}{\hbar}\,|M_{DA}|^2\,\rho_A(U_D) \tag{24} \]

A remarkable feature of this equation is the absence of any time variable. It was Enrico Fermi who first referred to this equation as a "Golden Rule" (in 1949 -- in a university lecture!) and the name has stuck [6]. He esteemed the equation so highly because it had by then been applied with great success to many non-electrochemical problems (particularly the intensity of spectroscopic lines) in which the coupling between states (overlap between orbitals) was small. Because the equation is often referred to as "Fermi's Golden Rule", the ignorant often attribute the equation to Fermi. This is a very bad mistake.

Despite its successful application to many diverse problems, it is nevertheless important to remember that the Golden Rule applies only to cases where electrons transfer from a single donor state into a multitude of acceptor states. If electrons originate from a multitude of donor states --as they do during redox reactions in electrolyte solutions-- then the transition probabilities from all the donor states must be added together, yielding

\[ k_{et} \;=\; \int_{-\infty}^{+\infty} \frac{2\pi}{\hbar}\,|M_{DA}|^2\,\rho_A(U_D)\,\rho_D(U_D)\; dU_D \tag{25} \]

There is, alas, nothing golden about this formula. To evaluate it, one must first develop models of each of the probability densities, and then evaluate the integral by brute force. The density of states functions ρ_A(U) and ρ_D(U) are dominated by fluctuations of electrostatic potential inside electrolyte solutions, even at thermodynamic equilibrium. According to Fletcher [7], a major source of these fluctuations is the random thermal motion (Brownian motion) of electrolyte ions. The associated bombardment of reactant species causes their electrostatic potentials to vary billions of times every second. This, in turn, makes the tunnelling of electrons possible, because it ensures that any given acceptor state will sooner or later have the same energy as a nearby donor state.
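Before constructing those models, it is worth pausing to gauge the magnitudes involved. The sketch below evaluates the Golden Rule, Eq. (24), for purely illustrative inputs; the coupling and density-of-states values are assumptions, not values from this paper:

```python
import math

HBAR_EV_S = 6.582e-16   # reduced Planck constant, eV*s

# Illustrative (assumed) inputs:
M_DA = 1e-3             # coupling matrix element, eV (weak coupling)
rho_A = 10.0            # density of acceptor states at U_D, states per eV

k_et = (2 * math.pi / HBAR_EV_S) * M_DA**2 * rho_A   # Eq. (24), units: s^-1
print(f"k_et ~ {k_et:.2e} per second")               # ~1e11 s^-1 for these inputs
```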
Electrostatic Fluctuations at Equilibrium

The study of fluctuations inside equilibrium systems was brought to a high state of development by Ludwig Boltzmann in the nineteenth century [8]. Indeed, his methods are so general that they may be applied to any small system in thermal equilibrium with a large reservoir of heat. In our case, they permit us to calculate the probability that a randomly selected electrostatic fluctuation has a work of formation ΔG.



A system is in thermal equilibrium if the requirements of detailed balance are satisfied, namely, that every process taking place in the system is exactly balanced by its reverse process, so there is no net change over time. This implies that the rate of formation of fluctuations matches their rate of dissipation. In other words, the fluctuations must have a distribution that is stationary. As a matter of fact, the formation of fluctuations at thermodynamic equilibrium is what statisticians call strict-sense stationary: the statistical properties of the fluctuations are independent of the time at which they are measured. As a result, at thermodynamic equilibrium, we know in advance that the probability density function of fluctuations ρ_A(U) must be independent of time.

Boltzmann discovered a remarkable property of fluctuations that occur inside systems at thermal equilibrium: they always contain the "Boltzmann factor",

\[ \exp\!\left(\frac{-W}{k_B T}\right) \tag{26} \]

where W is an appropriate thermodynamic potential, k_B is the Boltzmann constant, and T is the thermodynamic (absolute) temperature. At constant temperature and pressure, W is the Gibbs energy of formation of the fluctuation, ΔG. Given this knowledge, it follows that the probability density function ρ_A(V) of electric potentials V must have the stationary form

\[ \rho_A(V) \;=\; A \exp\!\left(\frac{-\Delta G}{k_B T}\right) \tag{27} \]

where A is a time-independent constant. In the case of charge fluctuations that trigger electron transfer, we have

\[ \Delta G \;=\; \frac{1}{2}\,C\,(\Delta V)^2 \;=\; \frac{(\Delta V)^2}{2S} \tag{28} \]

where C is the capacitance between the reactant species (including its ionic atmosphere) and infinity, and S is the elastance (reciprocal capacitance) between the reactant species and infinity. Identifying e²S/2 as the reorganization energy λ, we immediately obtain

\[ \rho_A(V) \;=\; A \exp\!\left(\frac{-(eV - eV_A)^2}{4\lambda k_B T}\right) \tag{29} \]

which means we now have to solve only for A. Perhaps the most elegant method of solving for A is based on the observation that ρ_A(V) must be a properly normalized probability density function, meaning that its integral must equal one:





\[ \int_{-\infty}^{+\infty} A \exp\!\left(\frac{-(eV - eV_A)^2}{4\lambda k_B T}\right) dV \;=\; 1 \tag{30} \]

This suggests the following four-step approach. First, we recall from tables of integrals that

\[ \frac{1}{\sqrt{\pi}} \int_{-\infty}^{+\infty} \exp(-x^2)\; dx \;=\; 1 \tag{31} \]

Second, we make the substitution

\[ x \;=\; \frac{eV - eV_A}{\sqrt{4\lambda k_B T}} \tag{32} \]

so that

\[ \frac{1}{\sqrt{\pi}} \int_{-\infty}^{+\infty} \sqrt{\frac{e^2}{4\lambda k_B T}}\; \exp\!\left(\frac{-(eV - eV_A)^2}{4\lambda k_B T}\right) dV \;=\; 1 \tag{33} \]

Third, we compare the constant in this equation with the constant in the integral containing A, yielding

\[ A \;=\; \sqrt{\frac{e^2}{4\pi\lambda k_B T}} \tag{34} \]

Fourth, we substitute for A in the original expression to obtain

\[ \rho_A(V) \;=\; \sqrt{\frac{e^2}{4\pi\lambda k_B T}}\; \exp\!\left(\frac{-(eV - eV_A)^2}{4\lambda k_B T}\right) \tag{35} \]

This, at last, gives us the probability density of electrostatic potentials. We are now just one step from our goal, which is the probability density of the energies of the unoccupied electron states (acceptor states). We merely need to introduce the additional fact that, if an electron is transferred into an acceptor state whose electric potential is V, then the electron's energy must be -eV, because the charge on the electron is -e. Thus,

\[ \rho_A(-eV) \;=\; \sqrt{\frac{1}{4\pi\lambda k_B T}}\; \exp\!\left(\frac{-(eV - eV_A)^2}{4\lambda k_B T}\right) \tag{36} \]



or, writing U = -eV,

\[ \rho_A(U) \;=\; \sqrt{\frac{1}{4\pi\lambda k_B T}}\; \exp\!\left(\frac{-(U - U_A)^2}{4\lambda k_B T}\right) \tag{37} \]

where U is the electron energy. This equation gives the stationary, normalized, probability density of acceptor states for a reactant species in an electrolyte solution. It is a Gaussian density. We can also get the un-normalized result simply by multiplying ρ_A(U) by the surface concentration of acceptor species. Finally, we note that the corresponding formula for ρ_D(U) is also Gaussian,

\[ \rho_D(U) \;=\; \sqrt{\frac{1}{4\pi\lambda k_B T}}\; \exp\!\left(\frac{-(U - U_D)^2}{4\lambda k_B T}\right) \tag{38} \]

where we have assumed that λ_A = λ_D = λ.
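A numerical sanity check of Eq. (37) is straightforward: the Gaussian density should integrate to unity, with standard deviation √(2λk_BT). In the sketch below, the reorganization energy and acceptor energy are assumed values chosen only for illustration:

```python
import numpy as np

K_B_T = 0.0257        # thermal energy at ~298 K, eV
LAMBDA = 0.5          # illustrative reorganization energy, eV (assumed)
U_A = -4.5            # illustrative acceptor energy, eV (assumed)

def rho_A(U):
    """Eq. (37): Gaussian probability density of acceptor state energies, eV^-1."""
    return np.exp(-(U - U_A)**2 / (4 * LAMBDA * K_B_T)) / np.sqrt(4 * np.pi * LAMBDA * K_B_T)

U = np.linspace(U_A - 3.0, U_A + 3.0, 20001)
dU = U[1] - U[0]
print(f"integral of rho_A over U = {np.sum(rho_A(U)) * dU:.6f}  (should be ~1)")
print(f"standard deviation = {np.sqrt(2 * LAMBDA * K_B_T):.3f} eV")
```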
Homogeneous Electron Transfer

As mentioned above, Dirac's perturbation theory may be applied to any system that is undergoing a transition from one electronic state to another, in which the energies of the states are briefly equalized by fluctuations in the environment. If we assume that the relative probability of observing a fluctuation from energy i to energy j at temperature T is given by the Boltzmann factor exp(-ΔG_ij/k_BT), then

\[ k_{et} \;=\; \frac{2\pi}{\hbar}\,|H_{DA}|^2\, \sqrt{\frac{1}{4\pi\lambda k_B T}}\; \exp\!\left(\frac{-\Delta G^{*}}{k_B T}\right) \tag{39} \]

where k_et is the rate constant for electron transfer, H_DA is the electronic coupling matrix element between the electron donor and acceptor species, k_B is the Boltzmann constant, λ is the sum of the reorganization energies of the donor and acceptor species, and ΔG* is the "Gibbs energy of activation" for the reaction. Incidentally, the fact that the reorganization energies of the donor and acceptor species are additive is a consequence of the statistical independence of ρ_A(U) and ρ_D(U). This insight follows directly from the old adage that "for independent Gaussian random variables, the variances add". The same insight also collapses Eq. (25) back to the Golden Rule, except that the separate density of states functions must be replaced by a joint density of states function that describes the coincidence of the donor and acceptor energies.



Fig. 1. Gibbs energy diagram for homogeneous electron transfer between two non-interacting species in solution. At the moment of electron transfer, energy is conserved, so the reactants and the products have the same Gibbs energy at that point. The symmetry factor β corresponds to the fractional charge of the fluctuation on the ionic atmosphere of the acceptor at the moment of electron transfer. After Fletcher [7].

Referring to Fig. (1), it is clear that ΔG* is the total Gibbs energy that must be transferred from the surroundings to the reactants in order to bring them to their mutual transition states. This is simply

\[ \Delta G^{*} \;=\; \frac{(\lambda + \Delta G^{0})^2}{4\lambda} \tag{40} \]

which implies that

\[ k_{et} \;=\; \frac{2\pi}{\hbar}\,|H_{DA}|^2\, \sqrt{\frac{1}{4\pi\lambda k_B T}}\; \exp\!\left(\frac{-(\lambda + \Delta G^{0})^2}{4\lambda k_B T}\right) \tag{41} \]
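For orientation, the sketch below evaluates Eq. (41) at room temperature; the coupling and reorganization energies are assumed values, not values from the paper:

```python
import math

HBAR = 6.582e-16      # reduced Planck constant, eV*s
K_B_T = 0.0257        # thermal energy at ~298 K, eV

H_DA = 1e-3           # assumed electronic coupling, eV
LAMBDA = 0.8          # assumed total reorganization energy, eV

def k_et(dG0):
    """Homogeneous electron transfer rate constant, Eq. (41), in s^-1."""
    prefactor = (2 * math.pi / HBAR) * H_DA**2 / math.sqrt(4 * math.pi * LAMBDA * K_B_T)
    return prefactor * math.exp(-(LAMBDA + dG0)**2 / (4 * LAMBDA * K_B_T))

for dG0 in (0.0, -0.4, -0.8):     # standard Gibbs energy of reaction, eV
    print(f"dG0 = {dG0:+.1f} eV:  k_et ~ {k_et(dG0):.2e} s^-1")
```

Note that the rate is maximal when -ΔG⁰ = λ, where the activation barrier of Eq. (40) vanishes.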

We can also define a symmetry factor β such that

\[ \Delta G^{*} \;=\; \beta^2 \lambda \tag{42} \]

and



\[ \beta \;=\; \frac{d\,\Delta G^{*}}{d\,\Delta G^{0}} \;=\; \frac{1}{2}\left(1 + \frac{\Delta G^{0}}{\lambda}\right) \tag{43} \]

Evidently β ≈ 1/2 if ΔG⁰ is sufficiently small (i.e. the electron transfer reaction is neither strongly exergonic nor strongly endergonic), and β = 1/2 exactly for a self-exchange reaction (ΔG⁰ = 0). From the theory of tunnelling through an electrostatic barrier, we may also write

\[ H_{DA} \;=\; H_{DA}^{0} \exp(-\kappa x) \tag{44} \]

where κ is a constant proportional to the square root of the barrier height, and x is the distance of closest approach of the donor and acceptor.
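To make Eqs. (43) and (44) concrete, the following sketch evaluates the symmetry factor β for a few driving forces, and the exponential attenuation of the coupling with distance; all inputs are illustrative assumptions:

```python
import math

LAMBDA = 0.5                      # reorganization energy, eV (assumed)
for dG0 in (-0.2, 0.0, 0.2):      # standard Gibbs energy of reaction, eV
    beta = 0.5 * (1 + dG0 / LAMBDA)            # Eq. (43)
    dG_act = (LAMBDA + dG0)**2 / (4 * LAMBDA)  # Eq. (40)
    print(f"dG0 = {dG0:+.1f} eV:  beta = {beta:.2f},  dG* = {dG_act:.3f} eV")

KAPPA = 1.0                       # barrier constant, per angstrom (assumed)
for x in (5.0, 10.0):             # donor-acceptor distance, angstrom
    attenuation = math.exp(-KAPPA * x)         # Eq. (44): H_DA / H_DA0
    print(f"x = {x:4.1f} A:  H_DA/H_DA0 = {attenuation:.1e}")
```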
Heterogeneous Electron Transfer

In the case of electron transfer across a phase boundary (e.g. electron transfer from an electrode into a solution), the law of conservation of energy dictates that the energy of the transferring electron must be added to that of the acceptor species, such that the sum equals the energy of all the product species. At constant temperature and pressure, the energy of the transferring electron is just its Gibbs energy. Let us denote by a superscript bar the Gibbs energies of species in solution after the energy of the transferring electron has been added to them (see Fig. 2). We have

\[ \bar{G}_{\text{reactant}} \;=\; G_{\text{reactant}} + qE \tag{45} \]

\[ \bar{G}_{\text{reactant}} \;=\; G_{\text{reactant}} - eE \tag{46} \]

where e is the unit charge and E is the electrode potential of the injected electron. For the conversion of reactant to product, the overall change in Gibbs energy is

\[ \Delta\bar{G}^{0} \;=\; G_{\text{product}} - \bar{G}_{\text{reactant}} \tag{47} \]

\[ \Delta\bar{G}^{0} \;=\; G_{\text{product}} - (G_{\text{reactant}} - eE) \tag{48} \]

\[ \Delta\bar{G}^{0} \;=\; (G_{\text{product}} - G_{\text{reactant}}) + eE \tag{49} \]

\[ \Delta\bar{G}^{0} \;=\; \Delta G^{0} + eE \tag{50} \]

In the "normal" region of electron transfer, for a metal electrode, it is generally assumed that the electron tunnels from an energy level near the Fermi energy, implying eE ≈ eE_F. Thus, for a heterogeneous electron transfer process to an acceptor species in solution, we can use the Golden Rule directly:

\[ k_{et} \;=\; \frac{2\pi}{\hbar}\,|H_{DA}|^2\, \sqrt{\frac{1}{4\pi\lambda k_B T}}\; \exp\!\left(\frac{-(\lambda + \Delta G^{0} + eE_F)^2}{4\lambda k_B T}\right) \tag{51} \]

where λ is the reorganization energy of the acceptor species in solution, and eE_F is the Fermi energy of the electrons inside the metal electrode. Or, converting to molar quantities,

\[ k_{et} \;=\; \frac{2\pi}{\hbar}\,|H_{DA}|^2\, \sqrt{\frac{N_A^2}{4\pi\lambda_m R T}}\; \exp\!\left(\frac{-(\lambda_m + \Delta G_m^{0} + FE_F)^2}{4\lambda_m R T}\right) \tag{52} \]

where k_et is the rate constant for electron transfer, ħ is the reduced Planck constant, H_DA is the electronic coupling matrix element between a single electron donor and a single electron acceptor, N_A is the Avogadro constant, λ_m is the reorganization energy per mole, ΔG⁰_m is the difference in molar Gibbs energy between the acceptor and the product, and (-FE_F) is the molar Gibbs energy of the electron that tunnels from the Fermi level of the metal electrode into the acceptor.

Equation (52) behaves exactly as we would expect. The more negative the Fermi potential E_F inside the metal electrode (i.e. the more negative the electrode potential), the greater the rate constant for electron transfer from the electrode into the acceptor species in solution.
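This behaviour is easy to confirm with the activation term of Eqs. (52) and (55); in the sketch below, the molar reorganization energy and Gibbs energy difference are assumed values:

```python
import math

R_T = 2478.0          # R*T at 298 K, J/mol
F = 96485.0           # Faraday constant, C/mol
LAMBDA_M = 80000.0    # assumed molar reorganization energy, J/mol (~0.83 eV)
DG0_M = 20000.0       # assumed molar Gibbs energy difference, J/mol

def activation_term(E_F):
    """Exponential factor of Eq. (52) for a given Fermi potential E_F (volts)."""
    dG_act = (LAMBDA_M + DG0_M + F * E_F)**2 / (4 * LAMBDA_M)
    return math.exp(-dG_act / R_T)

for E_F in (0.0, -0.1, -0.2):     # progressively more negative electrode potential
    print(f"E_F = {E_F:+.1f} V:  exp(-dG*/RT) = {activation_term(E_F):.2e}")
```

The printed factor grows by roughly an order of magnitude for each 0.1 V shift of E_F in the negative direction, for these inputs.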

Fig. 2. Gibbs energy diagram for heterogeneous electron transfer from an electrode to an acceptor species in solution. The superscript bar indicates that the Gibbs energy of the injected electron has been added to that of the reactant. After Fletcher [7].


Some notational simplification is achieved by introducing the definition

\[ \eta \;\equiv\; -\left(\frac{\Delta G_m^{0}}{F} + E_F\right) \tag{53} \]

where η is called the "overpotential". Although the negative sign in this equation is not recommended by IUPAC, it is nevertheless sanctioned by long usage, and we shall use it here. With this definition, increasing overpotential corresponds to increasing rate of reaction. In other words, with this definition, the overpotential is a measure of the "driving force for the reaction". The same inference may be drawn from the equation

\[ \Delta\bar{G}_m^{0} \;=\; -F\eta \tag{54} \]

An immediate corollary is that the condition η = 0 corresponds to zero driving force (thermodynamic equilibrium) between the reactant, the product, and the electrode (ΔḠ⁰_m = 0).
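A one-line numerical example of the sign conventions in Eqs. (53) and (54), with assumed inputs:

```python
F = 96485.0                     # Faraday constant, C/mol

dG0_m = 20000.0                 # assumed molar Gibbs energy difference, J/mol
E_F = -0.3                      # assumed Fermi potential, V

eta = -(dG0_m / F + E_F)        # Eq. (53): overpotential, V
dG0_bar = -F * eta              # Eq. (54): driving force, J/mol
print(f"eta = {eta:+.3f} V,  dG0_bar = {dG0_bar:+.0f} J/mol")
```

A positive overpotential thus corresponds to a negative ΔḠ⁰_m, i.e. to a spontaneous forward reaction.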

By defining a molar Gibbs energy of activation,

\[ \Delta G_m^{*} \;=\; \frac{(\lambda_m + \Delta G_m^{0} + FE_F)^2}{4\lambda_m} \tag{55} \]

\[ \Delta G_m^{*} \;=\; \frac{(\lambda_m - F\eta)^2}{4\lambda_m} \tag{56} \]

we can conveniently put Eq. (52) into the standard Arrhenius form

\[ k_{et} \;=\; \frac{2\pi}{\hbar}\,|H_{DA}|^2\, \sqrt{\frac{N_A^2}{4\pi\lambda_m R T}}\; \exp\!\left(\frac{-\Delta G_m^{*}}{RT}\right) \tag{57} \]

We can further simplify the analysis by defining the partial derivative ∂ΔG*_m/∂(-Fη) at constant ΔG⁰_m as the symmetry factor α, so that

\[ \Delta G_m^{*} \;=\; \alpha^2 \lambda_m \tag{58} \]

where

\[ \alpha \;=\; \frac{\partial\,\Delta G_m^{*}}{\partial(-F\eta)} \;=\; \frac{1}{2}\left(1 - \frac{F\eta}{\lambda_m}\right) \tag{59} \]

This latter equation highlights the remarkable fact that electron transfer reactions require less thermal activation energy (ΔG*_m) as the overpotential (η) is increased. Furthermore, the parameter α quantifies the relationship between these parameters. Expanding Eq. (56) yields

\[ \Delta G_m^{*} \;=\; \frac{\lambda_m^2 - 2\lambda_m F\eta + F^2\eta^2}{4\lambda_m} \tag{60} \]

which rearranges into the form

\[ \Delta G_m^{*} \;=\; \frac{\lambda_m}{4} \;-\; \left(\frac{2\alpha + 1}{4}\right) F\eta \tag{61} \]

Now substituting back into Eq. (57) yields

\[ k_{et} \;=\; \frac{2\pi}{\hbar}\,|H_{DA}|^2\, \sqrt{\frac{N_A^2}{4\pi\lambda_m R T}}\; \exp\!\left(\frac{-\lambda_m}{4RT}\right) \exp\!\left(\frac{(2\alpha + 1)F\eta}{4RT}\right) \tag{62} \]

\[ k_{et} \;=\; k_0 \exp\!\left(\frac{(2\alpha + 1)F\eta}{4RT}\right) \tag{63} \]
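The overpotential dependence of the forward rate constant in Eq. (63), with α given by Eq. (59), can be sketched as follows (k₀ and λ_m are assumed values):

```python
import math

R_T = 2478.0          # R*T at 298 K, J/mol
F = 96485.0           # Faraday constant, C/mol
LAMBDA_M = 80000.0    # assumed molar reorganization energy, J/mol
K0 = 1.0              # assumed equilibrium rate constant (arbitrary units)

for eta in (0.0, 0.1, 0.2):                                   # overpotential, V
    alpha = 0.5 * (1 - F * eta / LAMBDA_M)                    # Eq. (59)
    k = K0 * math.exp((2 * alpha + 1) * F * eta / (4 * R_T))  # Eq. (63)
    print(f"eta = {eta:.1f} V:  alpha = {alpha:.3f},  k_et/k0 = {k:.2e}")
```

Note that α itself decreases as the overpotential is raised, so the rate grows slightly more slowly than a fixed-α exponential would suggest.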

At thermal equilibrium an analogous equation applies to the back reaction, except that α is replaced by (1 - α). Thus for the overall current-voltage curve we obtain

\[ I \;=\; I_0 \left[ \exp\!\left(\frac{(2\alpha + 1)F\eta}{4RT}\right) - \exp\!\left(\frac{-(3 - 2\alpha)F\eta}{4RT}\right) \right] \tag{64} \]

where

\[ \alpha \;=\; \frac{1}{2}\left(1 - \frac{F\eta}{\lambda_m}\right) \tag{65} \]

Eq. (64) is the current-voltage curve for a r
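As a closing numerical illustration, the sketch below evaluates the current-voltage curve of Eq. (64), with α given by Eq. (65), and extracts the local anodic Tafel slope dη/dlog₁₀I by finite differences. The exchange current and molar reorganization energy are assumed values; in the limit Fη ≪ λ_m, α ≈ 1/2 and the slope approaches 2 × (2.303RT/F) ≈ 118 mV per decade at 25 °C:

```python
import math

R_T = 2478.0           # R*T at 298 K, J/mol
F = 96485.0            # Faraday constant, C/mol
LAMBDA_M = 100000.0    # assumed molar reorganization energy, J/mol
I0 = 1e-6              # assumed exchange current, A

def current(eta):
    """Eq. (64): net current at overpotential eta (volts)."""
    alpha = 0.5 * (1 - F * eta / LAMBDA_M)                # Eq. (65)
    fwd = math.exp((2 * alpha + 1) * F * eta / (4 * R_T))
    back = math.exp(-(3 - 2 * alpha) * F * eta / (4 * R_T))
    return I0 * (fwd - back)

d_eta = 0.001                                             # finite-difference step, V
for eta in (0.10, 0.20, 0.30):
    dlogI = math.log10(current(eta + d_eta)) - math.log10(current(eta - d_eta))
    slope_mV = 2 * d_eta / dlogI * 1000.0                 # local Tafel slope, mV/decade
    print(f"eta = {eta:.2f} V:  I = {current(eta):.2e} A,  slope ~ {slope_mV:.0f} mV/decade")
```

Because α drifts downward with overpotential, the computed Tafel slope steepens as η increases, rather than remaining a single constant as in the classical Butler-Volmer picture.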